Expectations that computers are on the verge of matching or surpassing humans' abilities may be rampant, but they aren't new.
Mitchell, a professor of computer science at Portland State University and external professor at SFI, introduced the audience to AI's history by noting that as far back as 1958 (and this was not even the beginning, she added), researcher and engineer Frank Rosenblatt debuted a device called the Perceptron, developed with funding from the Office of Naval Research. The Perceptron had been taught to recognize handwritten letters, and Rosenblatt's unveiling of it prompted The New York Times to declare that the Navy had "revealed the embryo of an electronic computer today that it expects will be able to walk, talk, see, write, reproduce itself and be conscious of its existence."
Many other predictions followed.
In 1961, Claude Shannon, pioneer of information theory, proclaimed, "I confidently expect that within a matter of 10 or 15 years, something will emerge from the laboratory which is not too far from the robot of science fiction fame."
Nobel Laureate Herbert Simon followed, in 1965, saying: "Machines will be capable, within 20 years, of doing any work that a man can do."
And MIT AI lab founder Marvin Minsky, in 1967, predicted that within a generation, the problem of creating artificial intelligence would be "substantially solved."
None of these predictions has come to pass. "We are still at a stage where machines are very much less intelligent than we are," Mitchell said, "but people are still making big predictions about them."
For example, computer scientist Andrew Ng, a Coursera co-founder and machine learning expert, just two years ago compared AI to electricity a century earlier, saying, "I actually have a hard time thinking of an industry that I don't think AI will transform in the next several years." Elon Musk, more forebodingly, has described AI as "our biggest existential threat."
In fact, Mitchell's new book, Artificial Intelligence: A Guide for Thinking Humans, begins with the story of a 2014 visit to Google for an AI meeting, along with her mentor, AI legend Douglas Hofstadter, author of the Pulitzer Prize-winning Gödel, Escher, Bach: An Eternal Golden Braid. At that meeting, Hofstadter expressed to the Google engineers his fears about AI encroaching on qualities he had assumed to be distinctly human and impossible to replicate. Mitchell writes: "He fears that AI might show us that the human qualities we most value are disappointingly simple to mechanize."
The field, Mitchell soon discovered, was divided on this point. "What I found is that the field of AI is in turmoil," she writes, with one side confident of machines' limitations, and the other predicting it's just a matter of time before they wipe out humanity. She embarked on the book to "understand the true state of affairs" in AI, the basic tenets of which she presented in Santa Fe.
Her lecture first discussed what artificial intelligence actually is, given that the term, she said, is surprisingly difficult to define. Examples are easier to find: chess-playing machines, virtual assistants, GPS, self-driving cars. Mitchell said her favorite definition is "an anarchy of methods," as this helps characterize the way in which AI is an umbrella term for all types of ways machines perform actions people might describe as intelligent.
From there, Mitchell described how machine learning has become an increasingly significant part of the field since the 1980s. Before then, so-called intelligent machines were manually programmed by people with explicit rules. In machine learning, which spread through the field over the 1990s and 2000s, machines instead learn from data rather than from human-written rules. By a decade ago, machine learning dominated AI, and "deep learning," a type of machine learning loosely inspired by the way the human brain works, had come to dominate machine learning in turn. Today, she noted, "all of the things you use" that fall under AI—speech recognition, Google search, facial recognition—"are powered by deep learning."
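To make the contrast concrete, here is a minimal sketch (my own illustration, not code from the lecture) of learning from data in the spirit of Rosenblatt's perceptron: rather than a person hand-coding the rule for logical OR, the program infers a decision boundary from labeled examples alone.

```python
def train_perceptron(examples, epochs=20, lr=0.1):
    """Learn weights for 2-input binary examples [(x1, x2, label), ...]."""
    w = [0.0, 0.0]
    b = 0.0
    for _ in range(epochs):
        for x1, x2, label in examples:
            pred = 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0
            err = label - pred           # -1, 0, or +1
            w[0] += lr * err * x1        # classic perceptron update rule
            w[1] += lr * err * x2
            b += lr * err
    return w, b

def predict(w, b, x1, x2):
    return 1 if w[0] * x1 + w[1] * x2 + b > 0 else 0

# Learn logical OR purely from labeled data -- no rule is ever written down.
data = [(0, 0, 0), (0, 1, 1), (1, 0, 1), (1, 1, 1)]
w, b = train_perceptron(data)
```

Because OR is linearly separable, the perceptron convergence theorem guarantees this simple update rule eventually classifies all four examples correctly.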
All are far from flawless. Machines can be hacked, and she provided several examples of so-called "adversarial machine learning," including facial recognition software stymied by specially designed glasses and self-driving cars led astray by subtly manipulated inputs. Moreover, machines can't extrapolate larger concepts and meaning in the same way that humans can.
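The idea behind adversarial inputs can be shown with a toy linear classifier (a hypothetical sketch of mine, not one of the systems Mitchell discussed): a small perturbation aligned with the model's weight vector pushes an input just across the decision boundary, flipping the predicted label even though the input barely changes.

```python
def classify(w, b, x):
    """Linear classifier: label 1 if w . x + b > 0, else 0."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    return 1 if score > 0 else 0

def adversarial_nudge(w, b, x, eps=1e-3):
    """Shift x along w just far enough to land on the other side of the boundary."""
    score = sum(wi * xi for wi, xi in zip(w, x)) + b
    norm_sq = sum(wi * wi for wi in w)
    target = -eps if score > 0 else eps    # a score just past the boundary
    t = (target - score) / norm_sq         # step size along w to reach it
    return [xi + t * wi for xi, wi in zip(x, w)]

# A confidently classified point, nudged across the boundary.
w, b = [1.0, 2.0], -3.0
x = [1.0, 1.5]                 # score = 1.0, so label 1
x_adv = adversarial_nudge(w, b, x)
```

In two dimensions the required nudge is visible to the eye, but in the high-dimensional input spaces of image classifiers the analogous perturbation can be imperceptibly small, which is what makes such attacks unsettling.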
"I think what's going wrong is these systems are not capturing the meaning that is in the data they're processing," she said. "They don't understand in the way that we humans understand. It's hard to say exactly what it means to understand; it's one of the problems we have with this kind of language." Words such as "intelligence," "understanding," and "comprehension," she said, "don't have very good definitions" in terms of how brains work. "I can't say whether a machine is understanding or not, but I know it's different than the way I understand. And if machines are going to work with us in our world, we need to make sure they understand the way we do."
The US Department of Defense, she said, is building a machine with the goal of giving it the common sense of an 18-month-old baby.
Mitchell's own interest in AI, she said, lies in understanding how humans form concepts, abstractions, and analogies, all of which intrinsically inform our understanding of the world. She then offered a quote from Hofstadter and Emmanuel Sander's book Surfaces and Essences, saying it should be "one of the mantras" of artificial intelligence:
"Without concepts there can be no thought, and without analogies there can be no concepts."
She concluded the lecture with her own quote on the matter:
"How to form and fluidly use concepts is the most important open problem in AI."