2. A brief history of AI

Introduction

The quest to create artificial intelligence with machines dates back to the 1940s and has been a persistent topic of interest for scientists, technologists, writers, and philosophers ever since. Alan Turing, the legendary British mathematician, was one of the first to write about the subject in a formal way, and his Turing test served as a useful yardstick for decades. In the test, a human judge holds a text-based conversation with an unseen partner. If the judge cannot reliably tell whether they are speaking to a real person or a computer program, the program has demonstrated at least a certain level of intelligence: it has passed the Turing test.

The Voight-Kampff Test, a fictional exam for signs of humanity and empathy used in the Blade Runner film series.

While an AI system that can hold a conversation with humans may seem like a recent revelation, examples of this technology date back to the mid-20th century. Joseph Weizenbaum's ELIZA, a chatbot built in the 1960s to mimic a psychotherapist, worked by matching simple patterns in a user's input and reflecting statements back as questions. Despite its simplicity, some users attributed genuine understanding to it. Until recently, though, it was usually easy enough to trip an AI up and reveal its true nature. The latest generation of chatbots, powered by large language models (LLMs), can arguably pass this test with ease.
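To make the idea concrete, here is a minimal sketch of an ELIZA-style exchange in Python. The patterns and pronoun swaps below are illustrative inventions, not Weizenbaum's original script, which used a much richer set of pattern-matching rules.

```python
import re

# A tiny, illustrative subset of ELIZA-style behavior: match a pattern,
# swap first-person words for second-person ones, and reflect the
# statement back as a question.
REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you"}

def reflect(fragment):
    # "my job" -> "your job", "i am" -> "you are", and so on.
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.split())

def respond(statement):
    s = statement.lower().rstrip(".!?")
    if (m := re.match(r"i am (.*)", s)):
        return f"Why are you {reflect(m.group(1))}?"
    if (m := re.match(r"i (feel|want|need) (.*)", s)):
        return f"Why do you {m.group(1)} {reflect(m.group(2))}?"
    # When nothing matches, deflect with a generic prompt.
    return "Can you tell me more about that?"

print(respond("I feel anxious about my work"))
# -> Why do you feel anxious about your work?
```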

The dominant approach to AI from the 1970s onward involved encoding rules and facts into systems that could apply them within a closed domain. Prominent examples include Deep Blue, the program that beat Garry Kasparov at chess, and IBM's Watson, which bested human champions at Jeopardy!. These AIs could become quite skilled at mastering the rules of certain games, retaining more information and exploring more moves in advance than any human brain. They were limited, however, to their particular domain. Deep Blue couldn't play checkers, and Watson would flop at Twenty Questions unless it was reengineered for an entirely new rule set. They lacked one key aspect of animal intelligence: the ability to take knowledge from one area and generalize it to another.

The birth of neural networks

By this point, AIs had become very good at spotting patterns and were useful in specific domains, like chess, stock trading, or medicine, where they might spot a cancer that a radiologist would miss. But their usefulness ended at the boundaries of those domains.

While AI built on a system of rules dominated for many decades, a few academics pursued another path. They believed that the best way to mimic the intelligence of the human brain was to build a digital version of it. To do this, they created artificial neurons that interact in roughly the same way as the biological neurons in our own heads. The approach drew on a principle of neuroscience: by changing the strength of the connections between nodes in a network, you can teach it to encode knowledge. These systems became known as artificial neural networks.
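As a rough illustration of that principle, here is a single artificial neuron in Python that learns the logical OR function by repeatedly nudging its connection weights. This is a toy version of the learning rule behind early perceptrons, not any specific historical system.

```python
import random

random.seed(0)  # reproducible illustration

def step(x):
    # Threshold activation: the neuron either fires (1) or doesn't (0).
    return 1 if x > 0 else 0

def train_neuron(samples, epochs=50, lr=0.1):
    n_inputs = len(samples[0][0])
    weights = [random.uniform(-1, 1) for _ in range(n_inputs)]
    bias = 0.0
    for _ in range(epochs):
        for inputs, target in samples:
            output = step(sum(w * x for w, x in zip(weights, inputs)) + bias)
            error = target - output
            # "Changing the strength of the connections": nudge each weight
            # in proportion to its input and the size of the error.
            weights = [w + lr * error * x for w, x in zip(weights, inputs)]
            bias += lr * error
    return weights, bias

# Teach the neuron logical OR from examples rather than explicit rules.
or_samples = [([0, 0], 0), ([0, 1], 1), ([1, 0], 1), ([1, 1], 1)]
weights, bias = train_neuron(or_samples)
for inputs, _ in or_samples:
    print(inputs, step(sum(w * x for w, x in zip(weights, inputs)) + bias))
```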

This field did not bear much fruit during the 20th century, and many of the most famous and well-respected names in AI today spent decades toiling in relative academic obscurity. Those who maintained their conviction in the neural network approach, however, were validated in the 2000s, and especially the 2010s, when growing amounts of data, combined with massive amounts of compute, allowed these networks to scale and produce astounding results. As the internet grew, AI systems powered by what came to be called machine learning were put to use for prediction and recommendation. They now guide our shopping, news consumption, and social media feeds, along with many other activities, including tens of billions of dollars per day in automated trades executed by high-frequency bots.

Error rate in the ImageNet Large Scale Visual Recognition Challenge.

Machine learning was followed by deep learning, named for the growing number of layers in each neural network. The 2012 ImageNet Challenge is seen as a watershed moment: a neural network-based entry vastly outperformed the rest of the field, and within a few years such systems surpassed human performance on the task. In short order, this approach became standard in the field and began to drive incredible advances in natural language processing, image recognition, and several other domains, including, most recently, generative models that can create text, images, video, or sound based on a user's prompt.
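The "deep" in deep learning refers simply to stacking layers, with each layer's outputs feeding the next layer's inputs. Here is a minimal sketch of a forward pass in Python, with arbitrary made-up weights for illustration:

```python
import math

def layer(inputs, weights, biases):
    # One layer: a weighted sum per unit, followed by a nonlinearity (tanh).
    return [math.tanh(sum(w * x for w, x in zip(row, inputs)) + b)
            for row, b in zip(weights, biases)]

def forward(x, layers):
    # A "deep" network is just layers applied in sequence.
    for weights, biases in layers:
        x = layer(x, weights, biases)
    return x

# Two stacked layers with arbitrary illustrative weights.
net = [
    ([[0.5, -0.2], [0.1, 0.8]], [0.0, 0.1]),  # layer 1: 2 inputs -> 2 units
    ([[1.0, -1.0]], [0.05]),                  # layer 2: 2 units -> 1 output
]
print(forward([0.3, 0.7], net))
```

In real systems, of course, the weights are not set by hand but learned from data via backpropagation.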

Continue reading to dive into the details of GenAI and why it’s shaking up the world.
