Moh Hasbi Assidiqi

History of AI


What is AI?

Artificial Intelligence (AI) is a field of computer science that focuses on creating intelligent machines capable of performing tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation². Some key aspects of AI include:

AI has numerous applications in various industries, such as healthcare, finance, and education, where it can assist in tasks like dermatological diagnosis, financial forecasting, and language translation³. However, it is essential to consider the limitations and potential ethical issues associated with AI, such as privacy concerns, job displacement, and the need for transparency and accountability².

History of AI

The history of AI can be divided into several periods: the inception period, the early period, the expert system period, the probabilistic reasoning period, and the deep learning period.

Inception Period (1943 - 1956)

The roots of AI reach back to the 1930s, with Nicolas Rashevsky’s work on mathematical modeling of the nervous system. The first work now generally recognized as AI came in 1943, when Warren McCulloch and Walter Pitts proposed a model of artificial neurons.

Here are some moments in this period:

Early Period (1956 - 1969)

The 1950s were a pivotal decade for the development of artificial intelligence (AI). Despite widespread skepticism about what machines could do, AI researchers built programs that performed tasks indicative of human intelligence, such as playing checkers, proving theorems, and learning from data. This era was humorously referred to as the “Look, Ma, no hands!” period by John McCarthy, one of the pioneers of AI.

McCarthy also defined Lisp, the leading AI programming language for 30 years, and proposed AI systems based on knowledge and reasoning. He envisioned the Advice Taker, a program with world knowledge used to formulate action plans. Another influential figure in AI was Marvin Minsky, who moved to MIT in 1958 and focused on representation and reasoning. He initially collaborated with McCarthy, but they soon parted ways.

At IBM, Nathaniel Rochester and his colleagues produced some of the first AI programs, such as Herbert Gelernter’s Geometry Theorem Prover, which could prove theorems in geometry. Herbert Simon, another prominent AI researcher, made famously optimistic predictions about AI’s near-term achievements in chess playing and theorem proving. However, not all AI endeavors were successful. Richard Friedberg pursued machine evolution (now called genetic programming), based on random mutations and selection, but failed to demonstrate much progress.

On the other hand, Bernie Widrow and Frank Rosenblatt made significant contributions to the field of neural networks, a type of AI that could learn from data. Widrow developed adalines, and Rosenblatt invented perceptrons and proved the perceptron convergence theorem. These were the early steps towards the emergence of machine learning, a subfield of AI that would later dominate the field.
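
To make Rosenblatt’s idea concrete, here is a minimal sketch of the perceptron learning rule in Python (the data set, learning rate, and epoch count are illustrative choices, not taken from the historical work):

    # Minimal perceptron sketch: learn the logical AND function.
    import numpy as np

    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]])  # inputs
    y = np.array([0, 0, 0, 1])                      # AND labels

    w = np.zeros(2)   # weights
    b = 0.0           # bias
    lr = 0.1          # learning rate

    for epoch in range(20):
        for xi, target in zip(X, y):
            pred = 1 if xi @ w + b > 0 else 0
            error = target - pred
            # Perceptron update rule: nudge the weights toward misclassified examples.
            w += lr * error * xi
            b += lr * error

    print(w, b)

Because AND is linearly separable, the convergence theorem guarantees that this loop eventually stops making updates.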

Here are some moments in this period:

Expert System Period (1970s - 1980s)

The late 1960s and early 1970s marked the rise of knowledge-intensive systems in AI. The DENDRAL program, developed at Stanford in 1969, was the first successful example of such a system. It could infer molecular structure from mass spectrometry data using a large set of rules and heuristics. The Heuristic Programming Project (HPP) was launched at Stanford in 1971 to explore the potential of expert systems in other domains, such as medicine, engineering, and law.

However, AI faced a major setback in 1973, when the Lighthill report criticized the field for its lack of progress and its inability to deal with the combinatorial explosion of search spaces. The report led to deep cuts in funding for AI research in Britain and influenced other countries to reduce their support as well. This period is known as the first AI winter.

Despite the challenges, AI researchers continued to develop new methods and techniques for knowledge representation and reasoning. Marvin Minsky proposed frames in 1975, a structured way of organizing and manipulating knowledge about object and event types. Frames allowed for inheritance, default values, and exceptions, and were widely adopted by AI systems. Another influential system of this era was MYCIN, an expert system for diagnosing blood infections developed at Stanford in the mid-1970s. MYCIN used certainty factors to handle uncertainty and provide recommendations to doctors.
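
As a rough illustration of how certainty factors can be combined, here is a small Python sketch; the numbers are invented, and the formula follows the commonly cited EMYCIN-style combination function rather than MYCIN’s exact implementation:

    def combine_cf(cf1, cf2):
        # Combine two certainty factors (each in [-1, 1]) for the same hypothesis.
        if cf1 >= 0 and cf2 >= 0:
            return cf1 + cf2 * (1 - cf1)
        if cf1 < 0 and cf2 < 0:
            return cf1 + cf2 * (1 + cf1)
        return (cf1 + cf2) / (1 - min(abs(cf1), abs(cf2)))

    # Two rules each lend moderate support to the same diagnosis.
    print(combine_cf(0.6, 0.4))  # 0.76: combined evidence is stronger than either alone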

The early 1980s also witnessed the emergence of the first successful commercial expert system, the R1 system, which began operation at Digital Equipment Corporation in 1982. R1 could configure orders for new computer systems, and was estimated to save the company millions of dollars per year. R1 inspired many other companies to invest in expert systems, and the AI industry boomed to billions of dollars by the mid-1980s.

However, the AI boom was short-lived, as the limitations and difficulties of expert systems became apparent. Expert systems were brittle, hard to maintain, and domain-specific. They also required a lot of human expertise and knowledge engineering, which was costly and time-consuming. Moreover, they could not handle common sense, learning, or natural language. These factors contributed to the decline of the AI industry and the onset of the second AI winter in the late 1980s.

Meanwhile, a new paradigm of AI was emerging, based on the idea of learning from data rather than relying on predefined rules and heuristics. This paradigm was inspired by the study of neural networks, which are computational models of the brain. Neural networks had been around since the 1950s, but they gained popularity in the mid-1980s, when the back-propagation learning algorithm was popularized by the Parallel Distributed Processing book. Back-propagation allowed neural networks to learn complex functions and patterns from large amounts of data, and opened new possibilities for AI applications.
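
For a concrete sense of what back-propagation computes, here is a minimal Python sketch of a one-hidden-layer network learning XOR; the architecture, learning rate, and iteration count are arbitrary choices for the example:

    # Tiny back-propagation sketch: one hidden layer learning XOR.
    import numpy as np

    rng = np.random.default_rng(0)
    X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
    y = np.array([[0], [1], [1], [0]], dtype=float)

    W1 = rng.normal(size=(2, 4)); b1 = np.zeros(4)   # input -> hidden
    W2 = rng.normal(size=(4, 1)); b2 = np.zeros(1)   # hidden -> output
    sigmoid = lambda z: 1 / (1 + np.exp(-z))

    for step in range(5000):
        # Forward pass
        h = sigmoid(X @ W1 + b1)
        out = sigmoid(h @ W2 + b2)
        # Backward pass: propagate the output error back through the layers
        d_out = (out - y) * out * (1 - out)
        d_h = (d_out @ W2.T) * h * (1 - h)
        # Gradient-descent updates
        W2 -= 0.5 * h.T @ d_out;  b2 -= 0.5 * d_out.sum(axis=0)
        W1 -= 0.5 * X.T @ d_h;    b1 -= 0.5 * d_h.sum(axis=0)

    print(out.round(2))  # typically approaches [0, 1, 1, 0]

The backward pass is the heart of the algorithm: the error at the output is pushed back through the hidden layer to obtain gradients for every weight, which is what lets the network learn functions that a single perceptron cannot.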

Here are some moments in this period:

Probabilistic Reasoning Period (1990s - 2000s)

The second half of the 1980s and the early 1990s witnessed a paradigm shift in AI, from symbolic and logic-based methods to data-driven and probabilistic methods. Neural networks, which are computational models inspired by the brain, made a comeback in 1986, when the back-propagation learning algorithm was reinvented and applied to many learning problems. Connectionist models, based on neural networks, challenged the traditional approaches of AI that relied on rules and heuristics.

Probabilistic reasoning and reinforcement learning also emerged as powerful techniques for dealing with uncertainty and learning from experience. Judea Pearl introduced Bayesian networks for uncertain reasoning in 1988, and Rich Sutton connected reinforcement learning to Markov decision processes in the same year. AI embraced the results of other fields such as statistics, operations research, and control theory, and became more interdisciplinary and empirical.
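
To illustrate the style of uncertain reasoning that Bayesian networks support, here is a tiny Python example with a single Rain -> WetGrass dependency; the probabilities are invented for the example:

    # Two-node Bayesian network: P(Rain) and P(WetGrass | Rain).
    p_rain = 0.2
    p_wet_given_rain = {True: 0.9, False: 0.1}  # keyed by whether it rained

    # Bayes' rule: P(Rain | WetGrass) = P(WetGrass | Rain) P(Rain) / P(WetGrass)
    p_wet = (p_wet_given_rain[True] * p_rain +
             p_wet_given_rain[False] * (1 - p_rain))
    p_rain_given_wet = p_wet_given_rain[True] * p_rain / p_wet

    print(round(p_rain_given_wet, 3))  # 0.692: observing wet grass raises belief in rain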

Machine learning and big data became the dominant themes of AI from the mid-1990s onward, and researchers such as David McAllester advocated for more rigorous, scientific methods in the field. Shared benchmark problem sets became the norm for demonstrating progress and comparing different approaches. Large data sets enabled learning algorithms to achieve high accuracy on tasks such as word-sense disambiguation and image completion, and paved the way for the emergence of deep learning in the next decade.

Here are some moments in this period:

Deep Learning Period (2010s - Present)

The 1990s marked the beginning of the machine learning and big data era in AI. The MNIST dataset of handwritten digits became a widely used benchmark for image-processing systems. Around 2001, researchers showed that very large data sets let even simple learning algorithms achieve high accuracy on tasks such as word-sense disambiguation, and similar data-driven gains followed in image inpainting and speech recognition.

In 2009, the ImageNet dataset was created; containing millions of images labeled with thousands of categories, it has been instrumental in the advancement of computer vision and deep learning research. Around 2011, deep neural networks achieved dramatic improvements over previous systems in speech recognition, and deep learning methods soon excelled in other domains such as natural language processing, medical diagnosis, and game playing.

In 2012, a deep learning system won the ImageNet competition by a large margin over previous approaches; within a few years, deep networks matched or surpassed human-level performance on some vision benchmarks. Since then, deep learning systems have continued to improve and to dominate the field of computer vision, as well as other areas of AI.

In 2016, the AI100 report was published, providing an overview of the state of the art in AI and its societal implications. The report highlights the potential benefits and challenges of AI applications in various domains such as transportation, healthcare, education, and security. The report also provides recommendations for ensuring the ethical and responsible use of AI.

Here are some moments in this period:

References

Artificial Intelligence: A Modern Approach by Stuart Russell and Peter Norvig

What is AI by Perplexity

¹What is AI Literacy? Competencies and Design Considerations

²What is AI?

³What is AI? Applications of artificial intelligence to dermatology

⁴What Is AI?

⁵What Is AI?