Intelligence vs. Artificial Intelligence

Time for a break from the hard technical information. Time for some background information and philosophy. This will be the first of many such breaks.

The Birth of Artificial Intelligence

In the 1950s, when electronic digital computers were shiny and new, people were amazed by their capabilities. They could do things that had previously required humans. The term "computer" had, in fact, previously referred to people, who often sat in a room full of other human computers, mechanical desk calculators at hand, churning through thousands of calculations. The Manhattan Project made extensive use of them to perform the millions of calculations it required.

Perhaps it was natural to think, at that point, that it would be a small step to add even more "intelligence" to computers. A small program could be written to solve nasty equations easily enough, and soon computers were being used to process non-numeric information. Language translation was a popular target. Researchers began applying the computer to more and more tasks that seemed to require human intelligence. A lot of progress was made, and the future seemed bright. Just around the corner lay the "electronic brain" that could do anything a human could do. The programs were written to handle "symbolic" data, since much of the premise was that the brain handles thoughts and computations symbolically. The programming language LISP was an outgrowth of, and a strong building block for, this effort.

At around the same time, researchers were discovering how the brain actually functions at a low level. Neurons were being studied, and a few people actually built artificial neurons. A few simple "neural networks" were created from a handful of these neurons. They were very difficult to build with vacuum tube technology; they were big, complicated, and took lots of power. These "perceptrons" showed some promise, but perhaps it didn't seem practical or reasonable to try to build an intelligent machine with them, especially when we had the computer. A sketch of what a single perceptron actually computes follows below.
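For the curious, here is a minimal sketch, in Python, of what one of those artificial neurons computes: a weighted sum of its inputs passed through a hard threshold. The weights and bias here are made-up values purely for illustration, not anything from the original hardware.

# A single perceptron: fire (output 1) if the weighted sum of the
# inputs plus a bias crosses zero, otherwise stay quiet (output 0).
def perceptron(inputs, weights, bias):
    activation = sum(x * w for x, w in zip(inputs, weights)) + bias
    return 1 if activation > 0 else 0

# Example: two inputs weighted so the neuron behaves like an AND gate.
weights = [1.0, 1.0]
bias = -1.5
for a in (0, 1):
    for b in (0, 1):
        print(a, b, "->", perceptron([a, b], weights, bias))

A handful of these units wired together was a "neural network" of the era; the hard part, then as now, was finding the right weights.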

So, the people trying to build intelligent machines split into two camps: Connectionists (neural networks) and Computationalists (symbolic programming). That split is still very much alive today.

The connectionists had some early success, but their approach was difficult to scale up. Plus, their neural nets couldn't be "programmed" the way those shiny new computers could, and computers were the "in" thing. The computationalists kept building on their successes, helped by bigger and faster computers every year. Those successes made good reading in the popular press. The research dollars followed. And where the research dollars go, so goes the interest of the researchers.

By the mid to late 60s the computationalists had pretty much won. The term "artificial intelligence" was synonymous with computationalism and its Golden Child, the programming language LISP. Then progress stalled, and people started to murmur that the emperor had no clothes. The programs did things that at first glance appeared "intelligent" but turned out to be a sham. Like a parrot mimicking someone's words, the form was there but there was no substance behind it.

Perhaps the real turning point was Eliza. Eliza was a program written by artificial intelligence (AI) researcher Joseph Weizenbaum in the mid-1960s. It was written to demonstrate the superficial nature of human-machine communication. It succeeded very well, probably far beyond what its creator intended.

The Eliza program would interpret a script that defined its behavior. One script in particular, "DOCTOR," became very famous. Eliza would take a sentence of typed input, look for keywords, and apply rules to create or choose some semi-appropriate output, based on the particular script. The DOCTOR script was designed to imitate a Rogerian psychotherapist. In essence, it would modify and repeat back what the "patient" said in order to keep the patient talking. Weizenbaum was shocked and amazed at the reactions people had to DOCTOR. People would become emotionally attached and actually believe the computer understood them. Reportedly, Weizenbaum's own secretary even asked him to leave the room so that she and Eliza could "talk." Some researchers even suggested that Eliza and DOCTOR could provide therapy to actual patients. But in reality, the computer had absolutely no idea what the "patient" was talking about. It was all a ruse. But it was "artificial intelligence."
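To make the trick concrete, here is a rough sketch in Python of the keyword-and-rule idea. This is purely my own illustration, not Weizenbaum's code or the real DOCTOR script; the keywords, responses, and pronoun swaps are made-up stand-ins for what an actual script would define.

import random
import re

# Each rule pairs a keyword pattern with canned response templates;
# (.*) captures the rest of the sentence so it can be echoed back.
RULES = [
    (re.compile(r"\bI need (.*)", re.I),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bI am (.*)", re.I),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
    (re.compile(r"\b(?:mother|father|family)\b", re.I),
     ["Tell me more about your family."]),
]

DEFAULTS = ["Please go on.", "I see.", "Can you elaborate on that?"]

# Swap pronouns so the echo reads naturally ("my job" -> "your job").
REFLECTIONS = {"i": "you", "me": "you", "my": "your", "am": "are",
               "your": "my", "you": "I", "mine": "yours"}

def reflect(fragment):
    fragment = fragment.strip().rstrip(".!?")
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def reply(sentence):
    # Scan for a keyword; apply its rule, else fall back to a stock line.
    for pattern, responses in RULES:
        match = pattern.search(sentence)
        if match:
            args = [reflect(g) for g in match.groups()]
            return random.choice(responses).format(*args)
    return random.choice(DEFAULTS)

print(reply("I am unhappy with my job."))
# e.g. "Why do you think you are unhappy with your job?"

That is the whole machine: pattern match, pronoun swap, fill in a template. There is no model of the patient, the conversation, or the world anywhere in it.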

Artificial intelligence is just that: artificial. It's a fake. It's a parrot. It mimics some type of intelligent behavior, with no substance. Like a Hollywood movie set, it appears real until you look behind the curtain.

The AI Winter is Coming

At around the same time Eliza showed up, progress in AI had stalled. The early successes weren't scaling up, even though a lot more computer power (and a lot more research dollars) had been thrown at the problem. The controversy surrounding Eliza and DOCTOR didn't help. Places like MIT built bigger and faster computers, machines designed to run LISP efficiently with lots of memory. Some neat demonstrations were written, but the progress didn't match the effort. Interest went away, and with it the research dollars. The AI Winter had set in.

Winter lasted through the 70s and into the 80s. But in the 80s, things started to change. The connectionists had been quietly plugging away the whole time; the minimal research budgets didn't bother them much, because they were used to not having any. Meanwhile, personal computers became more powerful than the mainframes of the mid 60s. Some people used those fast PCs with all that memory to run LISP, but the connectionists saw them as a chance to build reasonable neural networks. The groundhog didn't see his shadow. Spring was coming.

Today you can't turn on CNN or Fox News, or the Disney Channel, or even the Home Shopping Network without someone telling you about the latest "AI" device that's going to change your world. But, by and large, it's a fake (there are a few exceptions). And neural networks, machine learning, and whatever the buzzword of the week is dominate all the computer science conferences and websites. Now don't get me wrong: some of this stuff is pretty impressive. But it is NOT intelligent. Mostly, it's a bunch of really fancy (and computationally expensive) pattern matching.

So over the last seventy years or so, a lot of "artificial intelligence" has been tossed around, and mostly it has been parroting intelligence: lots of flash, very little substance. Artificial intelligence earned a justifiably bad reputation, but it did prove it was named correctly.

When we do manage to build an intelligent machine, will it be artificial? No, it will be truly intelligent: not a fake, a ruse, or a parrot. We wouldn't want to label it "artificial" when it is real, and we wouldn't want to saddle it with the reputation that comes with that name. We could call it "machine intelligence," which is better, but aren't we machines, too? Organic machines rather than silicon and steel, but machines nonetheless. Why should we distinguish it at all? I will call it simply "intelligence" or "intelligent." If I need to be more specific, I will call it an "intelligent machine." When our intelligent machine grows up and goes to school, we don't want the other kids calling it "artificial," do we?


© 2019 William R Cooke