

What Does “Artificial Intelligence” Really Mean?


Years ago, Marvin Minsky coined the phrase “suitcase words” to refer to terms that have a multitude of different meanings packed into them. As examples, he offered words like consciousness, morality and creativity.

“Artificial intelligence” is a suitcase word. Commentators today use the phrase to mean many different things in many different contexts. As AI becomes more important technologically, economically and geopolitically, the phrase’s use—and misuse—will only grow.

Like all suitcase words, “artificial intelligence” is notoriously difficult to define precisely. To make headway here, it is helpful to start by trimming one word and considering how to define the term “intelligence”.

There is no one dimension, metric or capability that encapsulates human intelligence. On the contrary, what we reductively call “intelligence” is in fact a constellation of capabilities spanning perception, memory, language skills, quantitative skills, planning, abstract reasoning, decision-making, creativity and emotional depth, among others.

Given that human intelligence is vastly multi-dimensional, it stands to reason that artificial intelligence likewise cannot be reduced to any one specific function or technology. After all, AI is ultimately nothing more than humanity's effort to replicate its own cognitive capabilities in machines. This leads us to perhaps the best one-sentence definition we can hope for: AI is a field of study oriented around developing computing systems capable of performing tasks that would otherwise require human intelligence.

Over the years and to the present, initiatives falling under the broad auspices of artificial intelligence have included computer vision, speech recognition, natural language processing, language translation, manipulation of physical objects, navigation through physical environments, logical reasoning, game-playing, prediction, long-term planning and continuous learning, among many others.

An important related point is that the definition of artificial intelligence is a moving target. Practitioners refer to this phenomenon (sometimes with frustration) as the “AI effect”. In general, society finds it most natural to label a given capability as “AI” only so long as machines cannot yet achieve it. Once researchers prove that a machine can accomplish a given feat, that feat begins to seem too pedestrian to be “real AI”. This has played out over and over again in recent years, with activities including language translation, chess, driving and Go.

As roboticist Rodney Brooks put it, “Every time we figure out a piece of [AI], it stops being magical; we say, ‘Oh, that's just computation.’” Douglas Hofstadter summed it up even more succinctly: “AI is whatever hasn't been done yet.”

The term “artificial intelligence”, then, defies straightforward definition by its very nature. No clear boundaries exist between AI and less exalted pursuits like statistics or computation.

This definitional complexity inevitably invites overuse and abuse of the term—from entrepreneurs looking to attract venture capital, from reporters looking to attract clicks, from politicians looking to attract attention.

Yet in spite of all the hype, the term “artificial intelligence” remains useful, even essential (like all suitcase words). Even if its boundaries are blurry, there is value in having an overarching term to encompass and unify this family of conceptually related efforts. It facilitates communication and keeps us from getting bogged down every time we seek to discuss the topic.

Perhaps the most noteworthy difference between artificial intelligence and human intelligence is that there is no discernible upper bound to AI. Rather, its boundaries continue to expand with every passing year. No one knows with certainty where this will lead. Alan Turing, one of the first and greatest AI thinkers, had a thought-provoking point of view. “It is customary to offer a grain of comfort, in the form of a statement that some peculiarly human characteristic could never be imitated by a machine,” Turing said in 1951. “I cannot offer any such comfort, for I believe that no such bounds can be set.”
