Artificial Intelligence and Generative AI Models

Milestones in AI Development


Learning Objectives

  • You know of the early history of AI research and the development of symbolic AI.
  • You know of the shift towards machine learning and the development of neural networks.

Artificial Intelligence as a research field

Artificial Intelligence as a research field can be traced to the Dartmouth Summer Research Project on Artificial Intelligence, a 1956 workshop proposed by John McCarthy, Marvin Minsky, Nathaniel Rochester, and Claude Shannon. The workshop brought together leading researchers to explore the possibility of creating machines that could exhibit intelligent behavior. The term “artificial intelligence” was coined in the proposal for this event, marking the birth of AI as a field.

Even before the term “artificial intelligence” was coined, the idea of intelligent machines had been explored in various forms throughout history. The concept of automata, mechanical devices capable of performing tasks without human intervention, dates back to ancient times. Similarly, Alan Turing’s work on the Turing machine showed that machines could, in principle, perform arbitrary computations given enough time and resources. The Turing Test, proposed in 1950, was also one of the first attempts to define and measure machine intelligence.

For additional details on the Turing test, see the article “Computing Machinery and Intelligence” by Alan Turing.


Symbolic artificial intelligence

Early AI research focused on creating machines that could reason and solve problems. Foundational work from the 1950s included programs like the Logic Theorist, which could prove mathematical theorems, and the General Problem Solver, which aimed to solve a wide range of problems.

Both of these systems were based on the idea of searching through a space of possible solutions using symbolic AI, where the problem was represented using symbols and logical rules. Symbolic AI, also known as “good old-fashioned artificial intelligence” (GOFAI), relied on handcrafted rules and logical inference to model human intelligence.
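
To make the idea concrete, the sketch below shows a tiny forward-chaining rule engine in Python: facts are represented as symbols, and rules derive new facts from existing ones. The facts and rules are hypothetical examples invented for illustration, not taken from any historical system.

```python
# A minimal sketch of the rule-and-inference idea behind symbolic AI (GOFAI).
# The facts and rules below are hypothetical, purely for illustration.

facts = {"has_fever", "has_cough"}

# Each rule pairs a set of premises with a conclusion:
# "if all premises hold, infer the conclusion".
rules = [
    ({"has_fever", "has_cough"}, "possible_flu"),
    ({"possible_flu"}, "recommend_rest"),
]

# Forward chaining: keep applying rules until no new facts can be derived.
changed = True
while changed:
    changed = False
    for premises, conclusion in rules:
        if premises <= facts and conclusion not in facts:
            facts.add(conclusion)
            changed = True

print(facts)  # {'has_fever', 'has_cough', 'possible_flu', 'recommend_rest'}
```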

Research in symbolic AI also led to the development of expert systems, rule-based systems designed to mimic human expertise in specific domains. Examples of expert systems include DENDRAL, which helped in chemical analysis, and MYCIN, which was designed to assist in medical diagnosis. Similarly, the research led to systems and architectures for simulating human-like reasoning and learning, including Soar and ACT-R, which have both been used to model cognitive processes and have fueled work on Intelligent Tutoring Systems.

Despite the successes of symbolic AI, the field faced challenges, including difficulties in scaling systems to handle real-world complexity, limitations in dealing with uncertainty, and the need for extensive manual work to build and maintain knowledge bases. These challenges also led to a decrease in funding and interest in the field, contributing to the first AI winter in the 1970s and 1980s.


Machine learning

As scaling symbolic AI systems proved difficult, the development of machine learning methods marked a shift in AI research towards statistical methods and algorithms that could learn and generalize from data. Machine learning aimed to develop systems that could improve their performance on tasks through data, rather than relying on predefined rules.

Machine learning methods, or “learning machines”, were also explored in the early days of AI, but they were overshadowed by symbolic AI approaches.

Especially in the late 1990s and early 2000s, advancements in computational power, the increased availability of datasets, and the evolution of machine learning algorithms led to renewed interest in AI research. These developments demonstrated that machine learning could handle tasks like classification, regression, and clustering, paving the way for modern AI applications.

One of the key milestones in machine learning was the development of neural networks, inspired by the structure of the human brain. The first model of neural networks was proposed by Nicolas Rashevsky in the 1930s, and later work by Frank Rosenblatt led to the development of the Perceptron, a simple neural network model that could learn to classify inputs into two categories. While early limitations in computing power and training algorithms did not allow for powerful neural networks, the introduction of backpropagation algorithms in the 1980s enabled the training of multi-layer networks, reviving interest in the approach.
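
As an illustration of how a perceptron learns to separate inputs into two categories, the sketch below implements the perceptron learning rule on a toy, linearly separable dataset (logical AND). The dataset, learning rate, and number of epochs are illustrative choices, not Rosenblatt’s original setup.

```python
# A minimal sketch of the perceptron learning rule on a toy dataset (logical AND).
inputs = [(0, 0), (0, 1), (1, 0), (1, 1)]
labels = [0, 0, 0, 1]          # AND of the two inputs

weights = [0.0, 0.0]
bias = 0.0
learning_rate = 0.1

def predict(x):
    # Threshold activation: output 1 if the weighted sum exceeds zero.
    activation = weights[0] * x[0] + weights[1] * x[1] + bias
    return 1 if activation > 0 else 0

# Perceptron rule: nudge the weights whenever an example is misclassified.
for epoch in range(10):
    for x, y in zip(inputs, labels):
        error = y - predict(x)
        weights[0] += learning_rate * error * x[0]
        weights[1] += learning_rate * error * x[1]
        bias += learning_rate * error

print([predict(x) for x in inputs])  # expected: [0, 0, 0, 1]
```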


Deep learning

Advances in computational power, especially the development of graphics processing units (GPUs), together with the availability of large datasets, led to the emergence of deep learning. Deep learning is a subfield of machine learning that uses multi-layered neural networks to learn representations of data at multiple levels of abstraction. Today, deep learning is a cornerstone of AI research and development, powering the recent advances in large language models.
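
As a small illustration of the idea, the sketch below trains a network with one hidden layer on the XOR problem using backpropagation, written with NumPy. The hidden-layer size, learning rate, and number of training steps are illustrative choices; real deep learning systems use many more layers and specialized frameworks.

```python
# A minimal sketch of a multi-layer network trained with backpropagation on XOR.
# Layer sizes, learning rate, and step count are illustrative choices.
import numpy as np

rng = np.random.default_rng(0)

X = np.array([[0, 0], [0, 1], [1, 0], [1, 1]], dtype=float)
y = np.array([[0], [1], [1], [0]], dtype=float)   # XOR is not linearly separable

# One hidden layer of 8 units gives the network the non-linearity XOR needs.
W1 = rng.normal(size=(2, 8)); b1 = np.zeros((1, 8))
W2 = rng.normal(size=(8, 1)); b2 = np.zeros((1, 1))

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

learning_rate = 1.0
for step in range(10000):
    # Forward pass: each layer transforms the previous layer's representation.
    h = sigmoid(X @ W1 + b1)
    out = sigmoid(h @ W2 + b2)

    # Backward pass: propagate the squared-error gradient from output to hidden layer.
    d_out = (out - y) * out * (1 - out)
    d_h = (d_out @ W2.T) * h * (1 - h)

    W2 -= learning_rate * h.T @ d_out
    b2 -= learning_rate * d_out.sum(axis=0, keepdims=True)
    W1 -= learning_rate * X.T @ d_h
    b1 -= learning_rate * d_h.sum(axis=0, keepdims=True)

print(out.round(2).ravel())  # typically approaches [0, 1, 1, 0]
```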
