What is AI?
Artificial intelligence (AI) is a field of computer science that focuses on creating machines that can perform tasks that typically require human intelligence, such as visual perception, speech recognition, decision-making, and language translation.
Timeline
1935
Alan Turing, a British logician and computer pioneer, did the earliest substantial work in the field of artificial intelligence, describing an abstract computing machine with unlimited memory, the concept now known as the universal Turing machine.
1940
Edward Condon displayed Nimatron, a digital machine that played the game of Nim perfectly. Konrad Zuse built the first working program-controlled computers.
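Nim has a known perfect strategy based on the bitwise XOR (the "nim-sum") of the pile sizes, which is what playing "perfectly" amounts to. The Python sketch below illustrates that strategy in modern terms; it says nothing about the Nimatron's actual relay design.

```python
from functools import reduce
from operator import xor

def perfect_nim_move(piles):
    """Return (pile_index, new_size) for a winning Nim move, or None when
    every move loses under normal play (the player taking the last object wins)."""
    nim_sum = reduce(xor, piles, 0)
    if nim_sum == 0:
        return None  # losing position: no move hands the opponent a zero nim-sum
    for i, pile in enumerate(piles):
        target = pile ^ nim_sum
        if target < pile:
            return i, target  # reduce pile i to 'target' objects, making the nim-sum zero
    return None

# Example: from piles (3, 4, 5) the winning move reduces the first pile to 1 object.
print(perfect_nim_move([3, 4, 5]))  # -> (0, 1)
```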
1943
Warren Sturgis McCulloch and Walter Pitts published "A Logical Calculus of the Ideas Immanent in Nervous Activity," laying foundations for artificial neural networks.
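McCulloch and Pitts modeled the neuron as a binary threshold unit: it "fires" when the weighted sum of its binary inputs reaches a threshold, which is enough to implement Boolean logic. A minimal Python sketch, with illustrative weights and thresholds rather than anything taken from the 1943 paper:

```python
def mp_neuron(inputs, weights, threshold):
    """McCulloch-Pitts style threshold unit: output 1 when the weighted sum
    of binary inputs reaches the threshold, otherwise 0."""
    return 1 if sum(w * x for w, x in zip(weights, inputs)) >= threshold else 0

# Boolean gates expressed as threshold units (weights/thresholds chosen for illustration):
AND = lambda a, b: mp_neuron([a, b], [1, 1], threshold=2)
OR = lambda a, b: mp_neuron([a, b], [1, 1], threshold=1)

print(AND(1, 1), AND(1, 0))  # -> 1 0
print(OR(0, 1), OR(0, 0))    # -> 1 0
```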
1950
Alan Turing proposed the Turing test as a measure of machine intelligence. Claude Shannon published a detailed analysis of chess playing as search. Isaac Asimov published his Three Laws of Robotics.
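Shannon's analysis treated chess as a search over a game tree, backing up scores from an evaluation function, the idea now called minimax. A minimal, game-agnostic sketch, assuming the caller supplies hypothetical `moves`, `apply_move`, and `evaluate` functions for the game at hand:

```python
def minimax(state, depth, maximizing, moves, apply_move, evaluate):
    """Depth-limited minimax search: explore the game tree and back up
    evaluation scores. 'moves', 'apply_move', and 'evaluate' are
    game-specific callables provided by the caller (placeholders, not
    anything defined in this article)."""
    legal = moves(state)
    if depth == 0 or not legal:
        return evaluate(state)
    scores = (minimax(apply_move(state, m), depth - 1, not maximizing,
                      moves, apply_move, evaluate) for m in legal)
    return max(scores) if maximizing else min(scores)
```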
1955
John McCarthy, often called the father of AI, coined the term "artificial intelligence" in the proposal for the Dartmouth summer research project; a few years later, in 1958, he developed the programming language LISP.
1956
The Dartmouth College summer AI conference was organized by John McCarthy, Marvin Minsky, Nathaniel Rochester of IBM, and Claude Shannon. The conference is considered the formal founding of the field of AI.
1957-1974
AI flourished as computers became faster, cheaper, and more accessible. Machine learning algorithms improved, and researchers became better at matching algorithms to problems. Early demonstrations such as Newell and Simon's General Problem Solver and John McCarthy's Advice Taker showed the promise of AI.
1980s
AI was reignited by two sources: an expansion of the algorithmic toolkit and a boost of funds. John Hopfield and David Rumelhart popularized neural network techniques that allowed computers to learn from experience. Edward Feigenbaum introduced expert systems, which used a knowledge base of if-then rules to mimic the decision-making of a human expert.
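An expert system separates its knowledge base of if-then rules from an inference engine that applies them to known facts. The toy forward-chaining sketch below illustrates the mechanism with made-up facts and rules; it is not modeled on any particular historical system.

```python
def forward_chain(facts, rules):
    """Toy forward-chaining inference: keep firing rules whose premises are
    all known facts until no new conclusions can be drawn."""
    facts = set(facts)
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            if conclusion not in facts and all(p in facts for p in premises):
                facts.add(conclusion)
                changed = True
    return facts

# Hypothetical rules and facts, purely for illustration:
rules = [
    ({"fever", "cough"}, "flu_suspected"),
    ({"flu_suspected", "high_risk_patient"}, "recommend_test"),
]
print(forward_chain({"fever", "cough", "high_risk_patient"}, rules))
# -> {'fever', 'cough', 'high_risk_patient', 'flu_suspected', 'recommend_test'}
```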
1990s
AI research shifted toward practical applications, such as speech recognition, computer vision, and robotics. The development of the World Wide Web and the explosion of digital data created new opportunities for AI.
2000s
AI experienced a resurgence, thanks to advances in deep learning, big data, and cloud computing. Companies such as Google, Facebook, and Microsoft invested heavily in AI research and development, leading to breakthroughs in natural language processing, image recognition, and game playing.