Of the myriad technological advances of the 20th and 21st centuries, one of the most influential is undoubtedly artificial intelligence (AI). From search engine algorithms reinventing how we look for information to Amazon’s Alexa in the consumer sector, AI has become a major technology driving the entire tech industry forward.
Whether you’re a burgeoning start-up or an industry titan like Microsoft, there’s probably at least one part of your company working with AI or machine learning. According to a study from Grand View Research, the global AI industry was valued at $93.5 billion in 2021.
AI as a force in the tech industry exploded in prominence in the 2000s and 2010s, but it has existed in some form since at least 1950, and its roots arguably stretch back even further.
The broad strokes of AI’s history, such as the Turing Test and chess computers, are ingrained in the popular consciousness, but a rich, dense history lives beneath the surface of common knowledge. This article will distill that history and show you AI’s path from mythical idea to world-altering reality.
From Folklore to Fact
While AI is often considered a cutting-edge concept, humans have been imagining artificial intelligences for millenniums, and those imaginings have had a tangible impact on the advancements made in the field today.
Prominent mythological examples include Talos, the bronze automaton of Greek myth who guarded the island of Crete, and the alchemical homunculi of the Renaissance period. Characters like Frankenstein’s Monster, HAL 9000 of 2001: A Space Odyssey, and Skynet from the Terminator franchise are just some of the ways we’ve depicted artificial intelligence in modern fiction.
One of the fictional concepts with the most influence on the history of AI is Isaac Asimov’s Three Laws of Robotics. These laws are frequently referenced when real-world researchers and organizations create their own laws of robotics.
In fact, when the U.K.’s Engineering and Physical Sciences Research Council (EPSRC) and Arts and Humanities Research Council (AHRC) published their 5 principles for designers, builders and users of robots, they explicitly cited Asimov as a reference point, while noting that Asimov’s Laws “simply don’t work in practice.”
Microsoft CEO Satya Nadella also referenced Asimov’s Laws when presenting his own laws for AI, calling them “a good, though ultimately inadequate, start.”
Computers, Games, and Alan Turing
As Asimov was writing his Three Laws in the 1940s, researcher William Grey Walter was developing a rudimentary, analog forerunner of artificial intelligence. Called tortoises or turtles, these tiny robots could detect and react to light and to contact with their plastic shells, and they operated without the use of computers.
Later, in the 1960s, Johns Hopkins University built its Beast, another computer-less automaton, which could navigate the university’s halls via sonar and recharge itself at special wall outlets when its battery ran low.
However, artificial intelligence as we know it today would find its progress inextricably linked to that of computer science. Alan Turing’s 1950 paper Computing Machinery and Intelligence, which introduced the famous Turing Test, is still influential today. Many early AI programs were developed to play games, such as Christopher Strachey’s checkers-playing program written for the Ferranti Mark I computer.
The term “artificial intelligence” itself wasn’t codified until 1956’s Dartmouth Workshop, organized by Marvin Minsky, John McCarthy, Claude Shannon, and Nathaniel Rochester, where McCarthy coined the name for the burgeoning field.
The Workshop was also where Allen Newell and Herbert A. Simon debuted their Logic Theorist computer program, which was developed with the help of computer programmer Cliff Shaw. Designed to prove mathematical theorems the same way a human mathematician would, Logic Theorist would go on to prove 38 of the first 52 theorems found in the Principia Mathematica. Despite this achievement, the other researchers at the conference “didn’t pay much attention to it,” according to Simon.
Games and mathematics were focal points of early AI because the “reasoning as search” principle was easy to apply to them. Reasoning as search, exemplified by means-ends analysis (MEA), is a problem-solving method that follows three basic steps:
- Determine the current state of the problem you’re facing (you’re feeling hungry).
- Identify the end goal (you no longer feel hungry).
- Decide the actions you need to take to solve the problem (you make a sandwich and eat it).
If those actions don’t solve the problem, this early forerunner of AI finds a new set of actions to take and repeats the process until the problem is solved, as in the sketch below.
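To make that loop concrete, here’s a minimal Python sketch of reasoning as search, assuming a toy world where the only available actions are making and eating a sandwich; the state names and actions are invented for illustration and aren’t drawn from any historical program.

```python
# A minimal sketch of "reasoning as search": compare the current state to the
# goal, apply an action that changes the state, and repeat until they match.
# The states and actions below are illustrative stand-ins only.

# Each action maps the state it applies to onto the state it produces.
ACTIONS = {
    "make sandwich": {"hungry": "has sandwich"},
    "eat sandwich": {"has sandwich": "not hungry"},
}

def reasoning_as_search(current_state, goal_state):
    """Repeatedly pick actions that move the current state toward the goal."""
    plan = []
    visited = {current_state}
    while current_state != goal_state:            # Steps 1 & 2: compare state to goal.
        for action, transitions in ACTIONS.items():
            next_state = transitions.get(current_state)
            if next_state and next_state not in visited:
                plan.append(action)               # Step 3: take an action that applies.
                visited.add(next_state)
                current_state = next_state
                break
        else:
            return None                           # No untried action helps: search fails.
    return plan

print(reasoning_as_search("hungry", "not hungry"))
# ['make sandwich', 'eat sandwich']
```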
Neural Nets and Natural Languages
With Cold-War-era governments willing to throw money at anything that might give them an advantage over the other side, AI research experienced a burst of funding from organizations like DARPA throughout the ’50s and ’60s.
This research spawned a number of advances in machine learning. For example, Simon and Newell’s General Problem Solver, while using MEA, generated heuristics: mental shortcuts that ruled out problem-solving paths unlikely to arrive at the desired outcome, as illustrated below.
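Here is a small, purely illustrative Python sketch of that idea (not GPS’s actual algorithm or code): a toy heuristic scores each candidate move against the goal, and moves that make no progress are pruned before they are ever explored.

```python
# An illustrative example of heuristic pruning in a search: candidate moves
# whose predicted results get no closer to the goal are discarded outright.
# The states, moves, and scoring function are invented for this sketch.

def distance_to_goal(state, goal):
    """Toy heuristic: count how many goal attributes the state still fails to satisfy."""
    return sum(1 for key, value in goal.items() if state.get(key) != value)

def promising_moves(state, goal, candidate_moves):
    """Keep only the moves whose predicted result reduces the distance to the goal."""
    current_gap = distance_to_goal(state, goal)
    return [
        move for move, predicted_state in candidate_moves.items()
        if distance_to_goal(predicted_state, goal) < current_gap
    ]

state = {"location": "home", "fed": False}
goal = {"fed": True}
candidate_moves = {
    "watch tv": {"location": "home", "fed": False},    # pruned: no progress toward the goal
    "cook dinner": {"location": "home", "fed": True},  # kept: satisfies the goal
}
print(promising_moves(state, goal, candidate_moves))   # ['cook dinner']
```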
First proposed in the 1940s, artificial neural networks saw their first working implementation in 1958, thanks to funding from the United States Office of Naval Research.
A major focus of researchers in this period was trying to get AI to understand human language. Daniel Bobrow helped pioneer natural language processing with his STUDENT program, which was designed to solve algebra word problems.
In 1966, Joseph Weizenbaum introduced the first chatbot, ELIZA, an act for which internet users the world over remain grateful. Roger Schank’s conceptual dependency theory, which attempted to reduce sentences to a small set of basic underlying concepts, was another of the most influential early developments in AI research.
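As a rough sketch of what conceptual dependency aims for, the toy Python function below maps two different surface sentences onto the same underlying structure built around ATRANS, Schank’s primitive for a transfer of possession. The hard-coded parsing is purely illustrative; a real conceptual dependency parser is far more involved.

```python
# A toy illustration of conceptual dependency: different sentences that express
# the same underlying event reduce to one structure built from a primitive act.
# The hard-coded lookup stands in for real parsing, which is much harder.

def to_conceptual_dependency(sentence):
    """Map a couple of surface sentences onto a shared primitive-act structure."""
    if sentence in ("John gave Mary a book", "Mary received a book from John"):
        return {
            "act": "ATRANS",     # Schank's primitive for transfer of possession
            "actor": "John",
            "object": "book",
            "from": "John",
            "to": "Mary",
        }
    return None

# Two different surface forms collapse into the same underlying concept.
print(to_conceptual_dependency("John gave Mary a book") ==
      to_conceptual_dependency("Mary received a book from John"))  # True
```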
The First AI Winter
In the 1970s, the pervasive optimism in AI research from the ’50s and ’60s began to fade. Funding dried up as sky-high promises were dragged down to earth by myriad real-world issues facing AI research, chief among them a limitation in computational power.
As Bruce G. Buchanan explained in an article for AI Magazine: “Early programs were necessarily limited in scope by the size and speed of memory and processors and by the relative clumsiness of the early operating systems and languages.” This period, as funding disappeared and optimism waned, became known as the AI Winter.
The period was marked by setbacks and interdisciplinary disagreements among AI researchers. Marvin Minsky and Seymour Papert’s 1969 book Perceptrons discouraged research into neural networks so thoroughly that very little work was done in the field until the 1980s.
Then there was the divide between the so-called “neats” and the “scruffies.” The neats favored the use of logic and symbolic reasoning to train and educate their AI. They wanted AI to solve logical problems, like proving mathematical theorems.
John McCarthy introduced the idea of using logic in AI with his 1959 Advice Taker proposal. In addition, the Prolog programming language, developed in 1972 by Alain Colmerauer and Philippe Roussel, was designed specifically as a logic programming language and still finds use in AI today.
Meanwhile, the scruffies were attempting to get AI to solve problems that required it to think like a person. In a 1975 paper, Marvin Minsky outlined a common approach used by scruffy researchers, called “frames.”
Frames are a way that both humans and AI can make sense of the world. When you encounter a new person or event, you can draw on memories of similar people and events to give you a rough idea of how to proceed, such as when you order food at a new restaurant. You might not know the menu or the people serving you, but you have a general idea of how to place an order based on past experiences in other restaurants.
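A frame can be thought of as a bundle of “slots” holding default expectations that get overridden as specifics become known. The toy Python sketch below illustrates the idea using the restaurant scenario; the slot names and defaults are invented for this example rather than drawn from Minsky’s paper.

```python
# A toy frame: default expectations for a situation, stored as named slots,
# with observed specifics overriding the defaults. Slot names are illustrative.

RESTAURANT_FRAME = {
    "greeted_by": "host",
    "ordering_method": "ask a server",
    "payment": "after the meal",
}

def instantiate(frame, **observed):
    """Start from the frame's defaults, then fill slots with what is actually observed."""
    instance = dict(frame)   # copy the defaults
    instance.update(observed)
    return instance

# A fast-food visit overrides a couple of defaults but keeps the rest.
visit = instantiate(RESTAURANT_FRAME,
                    ordering_method="order at the counter",
                    payment="before the meal")
print(visit)
# {'greeted_by': 'host', 'ordering_method': 'order at the counter', 'payment': 'before the meal'}
```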
From Academia to Industry
The 1980s marked a return to enthusiasm for AI. R1, an expert system implemented by the Digital Equipment Corporation in 1982, was saving the company a reported $40 million a year by 1986. The success of R1 proved AI’s viability as a commercial tool and sparked interest from other major companies like DuPont.
On top of that, Japan’s Fifth Generation project, an attempt to create intelligent computers that ran Prolog the way conventional computers run machine code, sparked further American corporate interest. Not wanting to be outdone, American companies poured funds into AI research.
Taken altogether, this increase in interest and shift toward industrial research resulted in the AI industry ballooning to $2 billion in value by 1988. Adjusted for inflation, that’s nearly $5 billion in 2022 dollars.
The Second AI Winter
In the 1990s, however, interest began receding in much the same way it had in the ’70s. In 1987, Jack Schwartz, the then-new head of DARPA’s Information Science and Technology Office, effectively eradicated AI funding from the organization, though already-earmarked funds didn’t fully dry up until 1993.
The Fifth Generation Project had failed to meet many of its goals after 10 years of development. Meanwhile, businesses found it cheaper and easier to buy mass-produced, general-purpose chips and program AI applications into software, so the market for specialized AI hardware, such as LISP machines, collapsed and dragged the overall market down with it.
Additionally, the expert systems that had proven AI’s viability at the beginning of the decade began showing a fatal flaw. The longer a system stayed in use, the more rules it accumulated and the larger the knowledge base it needed to manage them (see the sketch below). Eventually, the human staff required to maintain and update a system’s knowledge base grew until keeping it running became financially untenable. The combination of these factors and others resulted in the Second AI Winter.
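To see where that maintenance burden came from, here is a minimal, invented Python sketch of the rule-based core of an expert system, loosely in the spirit of a configuration assistant like R1 but not based on any real system’s rules: the knowledge base is simply a list of if-then rules, and every new product, exception, or policy meant humans writing and revising more of them by hand.

```python
# A minimal sketch of a rule-based expert system: the knowledge base is a list
# of (conditions, conclusion) rules, and inference fires every rule whose
# conditions match the known facts. The rules below are invented for illustration.

knowledge_base = [
    ({"customer_needs": "server", "budget": "high"}, "recommend high-end configuration"),
    ({"customer_needs": "server", "budget": "low"}, "recommend entry-level configuration"),
]

def infer(facts):
    """Return the conclusions of every rule fully satisfied by the known facts."""
    return [conclusion for conditions, conclusion in knowledge_base
            if all(facts.get(key) == value for key, value in conditions.items())]

print(infer({"customer_needs": "server", "budget": "low"}))
# ['recommend entry-level configuration']

# Keeping the system useful means appending more rules, by hand, indefinitely.
knowledge_base.append(
    ({"customer_needs": "workstation"}, "recommend graphics workstation"))
```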
Into the New Millennium and the Modern World of AI
The late 1990s and early 2000s showed signs of the coming AI springtime. Some of AI’s oldest goals were finally realized, such as Deep Blue’s 1997 victory over then-reigning chess world champion Garry Kasparov, a landmark moment for AI.
More sophisticated mathematical tools and collaboration with fields like electrical engineering transformed AI into a more logic-oriented scientific discipline, allowing the aforementioned neats to claim victory over their scruffy counterparts. Marvin Minsky, for his part, declared in 2003 that the field of AI was and had been “brain dead” for the past 30 years.
Meanwhile, AI found use in a variety of new areas of industry: Google’s search engine algorithm, data mining, and speech recognition, just to name a few. New supercomputers and programs would find themselves competing with and even winning against top-tier human opponents, such as IBM’s Watson winning Jeopardy! in 2011 over Ken Jennings, who’d once won 74 episodes of the game show in a row.
One of the most impactful pieces of AI in recent years has been Facebook’s algorithms, which determine what posts you see and when, in an attempt to curate an online experience for the platform’s users. Algorithms with similar functions can be found on sites like YouTube and Netflix, where they predict what content viewers will want to watch next based on their viewing history.
The benefits of these algorithms to anyone but the companies’ bottom lines are up for debate, as even former employees have testified before Congress about the dangers they can pose to users.
Sometimes, these innovations weren’t even recognized as AI. As Nick Bostrom put it in a 2006 CNN interview: “A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it’s not labelled AI anymore.”
The trend of not calling useful artificial intelligence AI did not last into the 2010s. Now, start-ups and tech mainstays alike scramble to claim their latest product is fueled by AI or machine learning. In some cases, this desire has been so powerful that companies will declare their product is AI-powered, even when the AI’s functionality is questionable.
AI has found its way into many people’s homes, whether via the aforementioned social media algorithms or virtual assistants like Amazon’s Alexa. Through winters and burst bubbles, the field of artificial intelligence has persevered and become a hugely significant part of modern life, and it is only likely to keep growing in the years ahead.