A Brief History of Artificial Intelligence

Any discussion about artificial intelligence (AI), or even the history of artificial intelligence, tends to end up centered on what the term actually means. With most words, you could simply look up a definition, but it’s not so simple with AI – even experts in the field don’t agree on a single meaning.

Put simply, AI is the general idea of an object or device having the ability to perceive its environment and act on what it observes. 

Depending on the context, it can also refer to the specific methods – for example, natural language processing – that are used to accomplish this, or, more comprehensively, to the field of AI research itself. 
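To make that definition concrete, here is a minimal sketch of the perceive-and-act loop it describes. The thermostat “agent,” its environment, and its decision rule are all invented for illustration; they aren’t drawn from any real AI system.

```python
# A toy agent: perceive the environment, then act on the observation.
# Everything here (Thermostat, the environment dict) is hypothetical.

class Thermostat:
    """A trivial agent that perceives a temperature and acts on it."""

    def __init__(self, target: float):
        self.target = target

    def perceive(self, environment: dict) -> float:
        # Observation: read the current temperature from the environment.
        return environment["temperature"]

    def act(self, observation: float) -> str:
        # Decision: choose an action based on what was observed.
        return "heat on" if observation < self.target else "heat off"


env = {"temperature": 18.0}
agent = Thermostat(target=21.0)
print(agent.act(agent.perceive(env)))  # prints "heat on"
```

Even this toy example has the two ingredients in the definition above: it senses its surroundings and chooses an action based on what it senses.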

It’s frequently argued that no “true” AI actually exists, but just as we know there to be different types of human intelligence, there are numerous types of artificial intelligence, too.
In fact, there are many uses of AI that we are now so accustomed to that we barely think of them as AI at all, from the algorithms that are responsible for personalized ads and recommendations, to the neural networks that make it possible to predict everything from currency exchange rates to the weather. 

With all the hype around AI, and an exact definition proving elusive, it might be helpful to take a brief look backward. How did we get here – or more precisely, how did AI get here? 

AI in Antiquity

It’s impossible to pinpoint when human beings first imagined artificial entities or objects being capable of thought. What we do know is that the idea of “golems” – human-like beings created from lifeless materials – has been a part of storytelling traditions for centuries.

Perhaps the most famous golem of all is the monster created by Victor Frankenstein in Mary Shelley’s 1818 novel Frankenstein; or, The Modern Prometheus. In the novel, Frankenstein assembles his creature from spare parts and, using a previously undiscovered technology, gives it the ability to do much of what we associate with AI today: perceive, act on, and learn from its environment.

But while there’s evidence of AI in fiction dating back hundreds of years, the technological advancements themselves can be traced back to the conjecturing of philosophers and mathematicians in antiquity. 

These ancient explorations of formal logic and reasoning formed the foundation from which Alan Turing and Alonzo Church, in the 1930s, concluded that digital devices would be capable of simulating such processes.

Turing’s work, in turn, paved the way for a mathematical model that is now widely regarded as the first work in the field of AI – the McCulloch-Pitts model for an “artificial neuron.” It would still be another decade before AI started to come into its own. 
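For readers who want to see just how simple that first model was: a McCulloch-Pitts neuron (1943) outputs 1 when enough of its binary inputs are active, and 0 otherwise. Below is a minimal sketch, assuming the common textbook simplification of equally weighted excitatory inputs and a fixed firing threshold; the function name and values are illustrative, not the original paper’s notation.

```python
def mp_neuron(inputs: list[int], threshold: int) -> int:
    """Fire (output 1) when at least `threshold` binary inputs are active."""
    return 1 if sum(inputs) >= threshold else 0

# With the right threshold, a single neuron computes basic logic gates:
print(mp_neuron([1, 1], threshold=2))  # AND(1, 1) -> 1
print(mp_neuron([1, 0], threshold=2))  # AND(1, 0) -> 0
print(mp_neuron([0, 1], threshold=1))  # OR(0, 1)  -> 1
```

Chaining such threshold units together lets networks of them express more complex logical functions – the insight that foreshadowed today’s neural networks.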

A History of Artificial Intelligence: The Last Half-Century

Historians point to 1955–56 as the period in which AI was officially recognized as a distinct academic field. Computer scientist John McCarthy coined the term “artificial intelligence,” and AI quickly assumed a large role in an era of widespread digital transformation.

1956 – A computer program called Logic Theorist uses automated reasoning to prove mathematical theorems.

1959 – A study published in the IBM Journal of Research and Development reports that a computer could be programmed to play “a better game of checkers” than the person who wrote the program, and that it could learn to do so in only 8–10 hours of playtime.

1962 – The Advanced Research Projects Agency (later DARPA) establishes its Information Processing Techniques Office (IPTO) and begins funding AI development with millions of dollars. 

1966 – Development begins on Shakey the Robot, which would become the first robot able to analyze commands and incorporate logic into its physical movements.

1973 – The world’s first full-scale android, WABOT-1, debuts in Japan. It can measure distances and directions to surrounding objects and use that data to walk around independently and interact with its environment.

1974 – The first so-called “AI winter” begins: the hype and heavy optimism of the ’60s give way to negative public perception, disagreements among researchers, and decreased funding.

1980 – The era of AI as a business tool begins with XCON, an expert system developed for the American computer company Digital Equipment Corporation (DEC). By processing orders automatically and speeding up production, XCON quickly saves DEC millions of dollars a year.

1981 – Japan’s government earmarks $850 million to fund the Fifth Generation computer project, with the goal of creating programs that could have conversations, translate languages, “read” pictures, and use human-like logic. The Fifth Generation project inspires an uptick in AI funding around the world. 

1987 – A collapse in the market for specialized AI hardware triggers the second AI winter.

1989 – Two programs, HiTech and Deep Thought, win at chess against masters of the game. 

1997 – Deep Thought’s successor, Deep Blue, beats the reigning world champion, Garry Kasparov, at chess. 

2003 – DARPA sets aside $7 million to fund the Radar project at Carnegie Mellon University, an early attempt at building an AI personal assistant. That same year, DARPA starts its own project, CALO (an acronym for Cognitive Assistant that Learns and Organizes), in collaboration with the research institute SRI International.

2005 – A robotic vehicle named “Stanley” wins the DARPA Grand Challenge by autonomously navigating 131 miles of desert trail.

2007 – A different driverless vehicle, “Boss,” completes a similar challenge in an urban environment, navigating 55 miles of roadway while negotiating traffic and obeying all traffic regulations. 

2011 – IBM-designed system “Watson” hands a significant defeat to the two winningest Jeopardy! champions, Ken Jennings and Brad Rutter.

2011 – The iPhone 4S launches with Siri, a spinoff of the personal assistant that SRI International developed for DARPA.

2012 – Google debuts “Google Now,” which uses predictive algorithms to create personalized news and information feeds.

2014 – Amazon introduces Alexa and the Echo family of personal assistant devices.

2016 – AlphaGo defeats world champion Lee Sedol at the game of Go, the first time a computer beats a world-class Go player (a game regarded as significantly more complex than chess) with no handicaps.

2018 – Waymo launches its driverless taxi service in Phoenix, Arizona.

2019 – Facial recognition AI becomes a topic of debate for politicians and the general public. Deepfakes become a serious threat, marking a new chapter in the history of artificial intelligence.

The Future of AI: The Next Half-Century

Just as the timeline above could be extended to include dozens of other major events and milestones, so will the coming decades undoubtedly be filled with more advancements and applications for AI than anyone will be able to keep up with. 

Futurists – and AI experts in particular – are notorious for making overly optimistic predictions, and nearly always getting them wrong. As a result, it’s difficult for anyone to agree on the best ways to prepare for the changes that may be coming. 

What we do know is that the era in which AI was frequently written off as “all hype” is a bygone one, and that as it continues to become a bigger part of our everyday lives, it’s up to us to make the best of it. As we move forward, the history of artificial intelligence will continue to grow more exciting and unpredictable.
