The Mind of a New Machine
April, 1984
A group of university scientists spent years working on the ultimate computer, a machine with so much knowledge and so much calculating power that it could answer the questions that had vexed mankind for centuries. Finally, the day came when they were ready to plug in their creation. One of them pushed the ON button and, standing nervously at a terminal, typed in the first question.
"Is there a God?" he asked.
The machine thundered back, without hesitation, "There is now."
Jokes like that were current in the Fifties and Sixties, when computers were vaguely sinister, if invisible, machines that hid out in the basements of huge Government agencies or powerful corporations. It was a time of technological progress and a sense of predestination about that progress. Cars were getting bigger, planes faster; why wouldn't computers just keep getting smarter, until the day--probably really soon--when they would be smarter than any person? That faith presented new ethical questions. (Do computers have souls?) It also led quickly to the familiar science-fiction territory where a computer calculates that it knows more than the fools who tend it and decides to seize control.
That fear of runaway computers has eased in recent years as the machines have become commonplace in the office and the home. But the notions that scientists will one day turn out truly intelligent machines and that intelligence can, indeed, be mechanized remain truisms of the age in which we live. Everything about our experience with computers--in fact, our very idea of progress--leads us to those conclusions. Nowadays, it seems, we are bombarded by stories about computers' outperforming people in areas long regarded as uniquely human. The Japanese have earned considerable attention with their Fifth Generation computer project, an enormous research effort aimed at producing an intelligent machine by 1990. Computers--unerring, tireless, astonishingly swift--seem to have a terrific head start on the road to superintelligence, and we appear to be proceeding at a steady clip toward that goal. Consider the bright tomorrow when we can fill some electronic Einstein with the contents of the Library of Congress and then have the machine sift through it all and tell us things about ourselves we had yet to figure out. Imagine how the world will be turned upside down with the advent of an intelligence of our creation--yet greater than our own.
It would be unbelievable.
Getting computers to do more than routine data-processing tasks, such as printing out payroll checks, is the province of an area of computer science called artificial intelligence, or A.I. The phrase was coined during the Fifties, A.I.'s early years, and has troubled some in the field ever since. "It sounds like something ersatz," says one researcher, "like artificial flowers, or something unpleasant to contemplate, like artificial insemination."
But a more significant problem came from the use of the word intelligence. It seemed to assume what had yet to be demonstrated--that a machine could, in fact, work like the human brain. As a result, expectations were raised early on about how much A.I. would be able to deliver. That trend was generally encouraged by A.I. workers, who were caught up, along with most Americans, in the can-do, all-American optimism of the postwar years. One researcher predicted in 1957 that within ten years, computers would be writing critically acclaimed music. A lot of time has since been spent living down the excesses of the young science.
The initials A.I. also presented definitional problems. What would constitute an intelligent computer? Just what is intelligence, anyway? The most famous way of answering that question came from a British mathematician named Alan Turing, in what has come to be called the Turing test. In one version of the test, a person is placed in a room with two terminals, one of them connected to a computer, the other to a person in the next room. After "conversing" via the keyboards with both entities, the test taker must decide which was the computer and which was the fellow human. If he can't decide, or if he guesses wrong, then the computer is declared to be, for all intents and purposes, intelligent. (The existence of the Turing test has given rise to the coffee house game of trying to come up with the most "revealing" question to put forward in a test if you had only one question to ask. Douglas R. Hofstadter, the computer scientist and writer, suggested it be "If you had only one question to ask a computer in a Turing test....")
A.I. scientists don't really spend any time trying to build a machine that can pass the Turing test, and they tend to view discussions of it as the domain of laymen; they have their "semantic networks" and "heuristically guided searches" to keep them busy. Still, inquiring about the test is convenient shorthand for getting A.I. researchers' opinions about how smart they think computers can eventually become. Lately, they seem to be growing increasingly noncommittal about when, if ever, a machine will actually succeed at the task. Answers tend to be along the lines of "Sometime between the next ten and 1000 years." While there may be disagreement on when a computer will be able to take on the Turing test, there's no disputing that, so far, no machine anywhere has even come close to passing.
Or has one?
Consider the following: Some years ago, the vice-president of a Boston computer company went into his office on a Saturday morning to catch up on work. He wanted to use the computer but thought that an employee in another room was logged on. So he sat down at a terminal and, thinking that he was talking to one of his workers, had the following exchange:
Do you think I can use the computer this morning?
Why Do You Ask?
I might be able to make some additional sales.
Why Aren't You Sure?
My customers have never seen a demonstration of the system.
Does This Mean Very Much To You?
Of course it does.
You Seem Very Positive About That.
This is ridiculous.
What Do You Mean This Is Ridiculous?
Please dial me up at 491.
No one called, of course, since the vice-president wasn't communicating with anyone at all but with a computer program called ELIZA that had been left running by mistake. ELIZA was written by an MIT computer scientist named Joseph Weizenbaum and was designed to conduct itself much as a psychiatrist does in the initial interview with a patient. The program spent most of its time asking questions about the operator's previous response, trying to draw the "patient" out of himself. That process, in a therapist's hands, involves training, insight and skill. Lacking those, ELIZA operated through a series of clever ruses supplied by Weizenbaum. For example, it would look for certain stock phrases, such as "I am," and then spit back some automatic reply, such as How Do You Feel About Being...? If ELIZA didn't recognize any pattern in the words it was given, it replied with canned shrink talk, such as I See or What Does That Suggest to You? So saying "Help, I've set my clothes on fire" might prompt the computer to reply, Please Continue. To an unwitting observer, like the Boston executive, ELIZA did a more than convincing job of imitating a human being. But underneath, it was all a con.
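For the curious, the whole ruse can be sketched in a few lines of Python, a language that did not exist when Weizenbaum wrote ELIZA. The patterns and replies below are invented and far cruder than his, but the principle--match a stock phrase, echo part of it back, otherwise fall back on canned talk--is the same.

import random
import re

# Invented, drastically simplified rules in the spirit of ELIZA's keyword matching.
RULES = [
    (re.compile(r"\bi am (.+)", re.IGNORECASE), "How Do You Feel About Being {0}?"),
    (re.compile(r"\bi want (.+)", re.IGNORECASE), "Why Do You Want {0}?"),
]

# Canned shrink talk for sentences that match no pattern at all.
FALLBACKS = ["I See.", "Please Continue.", "What Does That Suggest to You?"]

def reply(sentence: str) -> str:
    for pattern, template in RULES:
        match = pattern.search(sentence)
        if match:
            return template.format(match.group(1).rstrip(".!?"))
    return random.choice(FALLBACKS)

print(reply("I am sure my customers would buy it."))
# How Do You Feel About Being sure my customers would buy it?
print(reply("Help, I've set my clothes on fire"))
# one of the canned replies, e.g. Please Continue.

The sketch, like the original, understands exactly nothing; it merely shuffles the patient's own words back at him.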
No one knew that better than Weizenbaum. Like many A.I. researchers, he was interested in language and was curious to see how well he could get the appearance of a conversation going between a computer program and a person. Since the two needed something to talk about, he structured the program around the kinds of exchanges that routinely come up in a psychiatrist's office. But the computer/human dialog could just as easily have been about cooking eggs, he said. Weizenbaum expressly denied that any psychological, or even linguistic, "understanding" was taking place in his program. As far as he was concerned, ELIZA was an interesting bit of A.I. research that could be used in the real world as, say, a parlor game.
But to his great surprise, a lot of the real world saw it differently. Weizenbaum's secretary asked to be alone in the computer room so she could talk to ELIZA. He got phone calls from desperate people eager for a little time with the program so they could work out their problems. A psychologist wrote that while "further work must be done before the program will be ready for clinical use," ELIZA would soon take its rightful place in psychiatric work. If drugs and electric shock couldn't empty out the psycho wards, perhaps time-sharing would.
Of course, ELIZA was in such demand partly for the same reason we so often talk to ourselves--humans tend to be more willing to speak freely if they think no one is listening. But the episode also demonstrated a common fallacy: that computers and people go about their business the same way. It looked as though ELIZA were working as a skilled and understanding therapist. But in reality, the program was merely searching through strings of letters for key words and then replying with stored phrases that had a high probability of being appreciated. Those who believed in ELIZA were making one of the biggest mistakes people make when they get around computers. They figured that if a machine acts like a person on the outside, then it acts like one on the inside--that it has all the complexity and, well, intelligence that people have.
That's one of the subplots to the A.I. story, and lately, it seems to have been forgotten.
•
In five or six years, portable, quasi-human brains will be commonplace. They will be an intelligent electronic race, working as partners with the human race. We will carry these small creatures around with us everywhere--little electronic friends that can solve all your problems.
--Robert Jastrow, 1982
Around 1970, a few years after the ELIZA stir, a young MIT graduate student named Terry Winograd was finishing a program he called SHRDLU. Winograd shared Weizenbaum's interest in language, but he wanted to write a program that actually "understood" English sentences rather than just acted as though it did. Like Weizenbaum, Winograd needed to have something to talk with the computer about, so he chose a tabletop full of toy wooden blocks. After three years of work, he had prepared SHRDLU to have conversations such as this one:
Pick up a big red block.
OK.
Find a block that is taller than the one you are holding and put it into the box.
By "It" I Assume You Mean the Block that is Taller than the One I am Holding. Ok.
How many things are on top of green cubes?
Three.
While ELIZA used a few tricks to mimic understanding, SHRDLU was far more complex. It attacked each sentence like a ransacking thief, ripping it apart, searching for patterns, comparing what it found against what it knew and then figuring out the sequence of actions needed to do what it had been told. There was no obvious ELIZA-like rigidity here; if it didn't know what its human user had on his mind, SHRDLU didn't hesitate to say so. Despite the fact that SHRDLU's "universe" was only a bunch of blocks, A.I. enthusiasts, eager for some demonstrable advance to silence a growing number of skeptics, hailed the program as a breakthrough that pointed the way to even better things to come. Today, more than a decade after its completion, SHRDLU is still frequently cited as one of A.I.'s outstanding achievements.
It was especially admirable considering the lack of success that earlier A.I. projects had had with language. Back in the Fifties, one of the first problems taken on by researchers was computerized translation. That would be, it was reasoned at the time, a simple task: Just put all English words into a computer, along with the corresponding words in, say, Russian. The mechanical dictionary, as it was called, would then go about its work like a giant decoder ring and would be able to spit out an English version of Pravda in a minute. Of course, it wasn't quite that simple. The Russian translation of "Time flies like an arrow" might come out "Clock insects enjoy a weapon." Who knows what "By and large, it's pretty eye-catching" would look like in Russian? The mechanical dictionary ignored most of the important aspects of language; and after millions were spent on it by the U.S. Government, the project was scrapped, with the A.I. community a little wiser as a result.
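The failure is easy to reproduce. Here, purely as an illustration, is a toy mechanical dictionary in Python; the one-gloss-per-word table is invented, and each entry happens to pick a legitimate but wrong sense of its word--exactly the choice a context-blind lookup has no way to avoid.

# A toy "mechanical dictionary": one fixed gloss per word, no grammar, no context.
# The glosses are invented; each is a real sense of the word, just not the sense
# the sentence intends.
GLOSS = {
    "time": "clock",      # the noun, not the thing that flies
    "flies": "insects",   # the noun, not the verb
    "like": "enjoy",      # the verb, not the comparison
    "an": "a",
    "arrow": "weapon",
}

def translate(sentence: str) -> str:
    words = sentence.lower().rstrip(".!?").split()
    return " ".join(GLOSS.get(word, word) for word in words)

print(translate("Time flies like an arrow"))
# clock insects enjoy a weapon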
SHRDLU produced much more than a word-by-word translation of what a human operator sent it, and the program appeared to be a major step toward computers that actually "understood" people. Winograd had made an enviable start at an academic career, and the A.I. community watched as he set out on the next logical step in his research--expanding his program. There was quite a gap between what SHRDLU could talk about and what human beings can discuss. But by the conventional wisdom of the day, that gap should have methodically narrowed as Winograd went about his work.
But it didn't, and soon, something odd happened. "I started bumping up against a problem," Winograd said. "After working for a while, I wasn't very comfortable that I was getting where I wanted to go. I didn't see that gap closing." The problem he encountered can be illustrated by this sentence: "John was depressed, so he jumped off the Golden Gate Bridge." It's not explicitly stated, but any person listening knows that John ended the day by meeting his Maker; he doesn't need to be told that depressed people sometimes try to kill themselves or that the Golden Gate Bridge is high enough to get the job done most of the time. It turns out there are huge--arguably, infinite--reservoirs of similarly unstated information we draw from every day, nearly every time we use language. But unless you specifically tell it, a computer has none of that knowledge. Even if it "understands" the dictionary meaning of all the words in a sentence, it doesn't know that a person who asks, "Can you spare a dime?" wants cash, not a financial statement. And those "hidden" aspects of language are essential.
Winograd realized that a sentence doesn't transport its meaning the way a truck carries its cargo; instead, it is more like a blueprint that lets the listener, using his own knowledge, reconstruct what is being said. How, then, could computers use language the way people do if they lacked the sorts of information about the way the world works that every human has? Winograd, one of the field's most promising researchers, concluded that they couldn't.
He had run into one form of what A.I. researchers now call the "common-sense problem." It turns out that the aspect of human intelligence most difficult to duplicate is not our ability to solve complicated problems in physics or mathematics but the way we can talk with a neighbor or read a newspaper or cross the street. The astounding feats of memory and manipulation that computers dazzle us with turn out to be almost trivial when compared with the simplest acts every human performs every day. Yet those commonplaces are the cornerstones of every bit of "advanced" thought. In trying to expand his work, Winograd had helped kick out one of A.I.'s intellectual foundations. The simplest use of language turned out to have a complexity that had been unimagined even by linguists, much less by A.I. workers eager to turn their machines loose on the world. How would intelligence be possible without language? What about the many other hurdles--such as pattern recognition or inductive reasoning--that researchers would need to clear on the way to their "intelligent" machine? There was now every reason to believe they would turn out to be equally troublesome. Clearly, A.I. was going to be a lot harder than anyone had ever imagined.
•
[In the next two decades] there remains the real chance that computers will be seen as deities, and if they evolve into Ultra-intelligent Machines, there may even be an element of truth in the belief.
--Christopher Evans, The Micro Millennium
So A.I. researchers learned that they couldn't have the whole world just yet; they would have to be content with a microworld--a carefully limited, clearly defined slice of real life. Winograd's blocks were a microworld, which is why SHRDLU performed as well as it did. A.I. workers realized that microworlds could be considerably larger than a table full of blocks; that, in fact, one could comprise a fairly elaborate body of knowledge. With expectations scaled down to more realistic levels, an impressive collection of A.I. work has resulted.
Logic students go through exercises in which they have to translate statements such as "If it rains, then there's no game" into symbolic equations, such as "If R, then not G." A computer doesn't need to know anything about wet playing fields to be able to store that proposition or to link it up with any number of others. Give a computer enough of those rules joined into a network and it will likely do well at predicting when the opening pitch will or will not be thrown. Of course, the rules can be about a topic of more commercial import than sporting events. That's the basis for an increasingly significant A.I. field called expert systems.
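A minimal sketch of that rule chaining, again in Python and again with invented rules (a real expert system carries hundreds or thousands of them, usually with probabilities attached), might look like this:

# Each rule: if every fact on the left is known, conclude the fact on the right.
# The rules themselves are illustrative only.
RULES = [
    ({"rain"}, "wet field"),
    ({"wet field"}, "no game"),
    ({"no game"}, "refunds at the box office"),
]

def forward_chain(facts):
    """Apply the rules over and over until nothing new can be concluded."""
    known = set(facts)
    changed = True
    while changed:
        changed = False
        for conditions, conclusion in RULES:
            if conditions <= known and conclusion not in known:
                known.add(conclusion)
                changed = True
    return known

print(sorted(forward_chain({"rain"})))
# ['no game', 'rain', 'refunds at the box office', 'wet field']

The program needs to know nothing about wet playing fields; it simply follows the links. Give it rules about drill bits or blood infections instead of ball games and you have the skeleton of an expert system.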
In 1976, a group of A.I. researchers from SRI International in Menlo Park, California, began talking with geologists. They wanted to extract from the scientists all they knew about minerals and mining and then write that knowledge up into a system of rules that a computer program could understand. In a series of long, technical sessions, they forced the geologists to think explicitly about the way they put facts and theories together to reach conclusions. If you spot some potassic alteration, for instance, what's the chance of then finding porphyry mineralization? Over several years, much of the knowledge that a group of geologists had about a few minerals in a few areas was deposited into a computer program. It was a kind of mental mining, part of what A.I. workers call knowledge engineering.
Eventually, a system of rules for geological expertise was put together into a program called PROSPECTOR. Then the Menlo Park workers got hold of some survey information from around Mount Tolman, in the eastern part of Washington state. PROSPECTOR chewed over the data and reported back that there was molybdenum in them there hills. While not exactly a vital cog in the wheel of American industry, molybdenum has its uses--sparkplug points and furnaces among them. A mining company drilled where PROSPECTOR told it to and found deposits worth about $100,000,000.
That's a lot of sparkplugs, certainly enough to impress those concerned with matters of profit and loss. "I've got stacks of calling cards from people who have visited here--Chevron, Standard, Shell, even some that I've never heard of," says René Reboh, who headed up the PROSPECTOR project. Usually, the large companies have heard enough about A.I. to know they somehow want to be involved in the field, perhaps just by buying a new program to run on their big main-frame IBM. But after looking around SRI, Reboh says, they're usually sufficiently impressed that "the decision they make is to start their own A.I. group."
A number of major companies are now heavily involved in A.I., many of them working on expert systems. Among those that have made the biggest commitments is Schlumberger, Ltd., a multinational oil-services company that regularly interprets the results of test drillings for oil and gas. Since there are more down-hole samples than human experts to examine them, Schlumberger hopes to speed up the work by developing a computerized expert system for the job. Elf Aquitaine, a company exploring for oil in the North Sea, has commissioned an expert system to help explain why its drill bits get stuck--a piece of information of no small value when down time on offshore rigs costs $100,000 a day. In Boston, the Digital Equipment Corporation uses an expert system to help configure its VAX-11 line of computers. The machines have so many components that company salesmen frequently made mistakes in putting through orders. The Luddites of this world may take pleasure in the fact that among the first persons to have their jobs threatened by a computerized expert system were computer salesmen.
A.I. researchers have not yet come up with a way of using computers to put together expert systems. That task, which must be done largely by hand, is both time-consuming and expensive. One estimate has a typical expert system taking between two and five man-years and between $300,000 and $1,500,000 to prepare. Since not every enterprise that could use an in-house expert system can afford a new research-and-development wing for the job, there are expert-system companies that will program one for you. Most were started by A.I. researchers attached to major universities and have grown sufficiently large to make A.I. one of the financial community's newest high-tech glamor industries. The phenomenon is reminiscent of the recent flight of biologists out of the academy and into genetic-engineering firms, and it leaves some observers worrying what will happen to basic A.I. research if the best minds in the field are out chasing venture capital instead of visions.
Employers who contemplate a double whammy of expert systems and robots to wipe out their work force should know that the systems have rather severe limitations. Without extensive human reprogramming, for example, they can't learn from their mistakes. They will work only in areas where all the assumptions that go into making a decision can be spelled out in advance, in detailed "if ... then" rules. That keeps the programs confined to essentially technical areas and away from problems that involve human behavior in all its wonderful unpredictability. No one yet knows how to build an expert system that will close a sale or deal with an irate customer or, for that matter, do what ELIZA was supposed to have been doing.
But still, their fields of expertise can be large. Already, the world's best analyst of mass spectrograph patterns, which reveal a sample's chemical composition, is not a chemist but an expert system developed by A.I. workers at Stanford University. In a few narrow areas of medicine, such as diagnosing blood infections, computerized expert systems have become so adept that attending physicians are often reduced to giving the machine's print-out a quick once-over before approving its diagnosis as their own. A much more ambitious medical expert system, covering the entire field of internal medicine, is now being developed at the University of Pittsburgh. Initial tests show that while the program holds its own against human doctors in a number of areas, it is still very far from the point where you and I would want to entrust ourselves to its care.
Stories about expert systems even run to the poignant. At a seminar conducted recently by Teknowledge, a Palo Alto company that is a leader in the knowledge-engineering field, one executive in attendance, the head of a paper-milling concern, told how he was looking into an expert system to monitor a special machine used to make a fancy and rare grade of paper. He said he had only one employee left who was able to run the necessary equipment, and that man was about to retire, leaving no one--save, possibly, a computer--with the skill needed to keep the craft alive.
•
There are now in the world machines that think, that learn and that create. Moreover, their ability to do these things is going to increase rapidly until the range of problems they can handle will be coextensive with the range to which the human mind has been applied.
--Herbert Simon, 1957
The discovery of A.I. by business in the past few years has prompted a new round of press attention to the field. Many of the articles accept without question most popularly held notions of the inevitability, if not the actual existence, of an intelligent machine. Stories routinely assume that the thinking computer is already here, and some go on as though there are now machines spending their days in piercing moments of insight and wondrous fits of creation. Magazines and newspapers have recently described computers almost as if we'll soon be having them over for dinner, brandy and cigars. But there's usually quite a difference between how these programs are described in press accounts and what they actually do. Frequently, it's a case of the ELIZA error revisited, of thinking that a computer is doing more than it actually is.
MIT researchers have developed a program that is an experiment in duplicating the human process of reasoning by analogy. It is routinely described as being able to read, understand and then compare Shakespeare's plays; one writer was impressed by its ability "to extract general insights about human behavior." But the program is hardly doing the same thing a college student does when he sweats through an essay exam in English lit. Instead, it is fed highly schematized descriptions of Shakespearean plots. By churning through all the possible combinations of those statements, it can come up with matches between two plays. Rather than bring anything resembling human intelligence to bear on the problem, its only real ability is to do algebralike manipulations of internally stored equations. If you asked it about the role that guilt or madness plays in Macbeth or Hamlet, you'd get a blank screen, since it was told nothing of them. The program doesn't know Shakespeare but merely a list of statement/equations with a thin Shakespearean veneer--statements that could as easily have been about nursery rhymes or molybdenum deposits.
Yale researchers have created a program called BORIS that is often said to be able to understand stories about everyday human relationships and to answer questions about them, as well. BORIS can be told, When Paul Walked into the Bedroom and Found Sarah with Another Man, He Almost Had a Heart Attack. If you then ask it, How Did Paul Feel? it will answer, Paul Was Surprised. Maybe even speechless. Actually, the program can understand (process?) only two stories, and only because its programmers have prepped it by feeding in specific rules that anticipate what happens in the stories. One rule says that if a man and a woman are in bed together, it should be inferred they are having sex; another says that if a person sees others in an act that ordinarily requires secrecy, then that person is "surprised." Equipped with that thicket of rules, BORIS will be able to remark on Paul's surprise at his wife's infidelity. But change the story ever so slightly to introduce a situation not covered by the rules--say, by putting Paul in bed with another man--and it will be BORIS, not Sarah, that will be surprised into speechlessness.
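Those two rules can be rendered, in spirit, as a short Python sketch. The data structure and the tests here are invented for illustration and bear no resemblance to BORIS's actual machinery; the point is how literal the "understanding" is, and how completely it fails the moment the story steps outside the rules.

# Hypothetical stand-ins for the hand-written inference rules described above.
def infer(event):
    conclusions = []
    # Rule 1: a man and a woman in bed together -> infer they are having sex.
    having_sex = (event.get("place") == "bedroom"
                  and sorted(event.get("occupants", [])) == ["man", "woman"])
    if having_sex:
        conclusions.append("the people in the bedroom are having sex")
    # Rule 2: someone who walks in on an act requiring secrecy is surprised
    # (here, only the sex inferred by Rule 1 counts as such an act).
    if having_sex and event.get("witness"):
        conclusions.append(event["witness"] + " is surprised")
    return conclusions

print(infer({"place": "bedroom", "occupants": ["man", "woman"], "witness": "Paul"}))
# ['the people in the bedroom are having sex', 'Paul is surprised']

print(infer({"place": "bedroom", "occupants": ["man", "man"], "witness": "Sarah"}))
# [] -- the altered story matches no rule, and the program has nothing to say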
Few people outside A.I. pay attention to a computer program that was able to talk about blocks on a table. Yet programs that seem to deal with human behavior command widespread attention. But in an important sense, there's little difference between SHRDLU's cubes and BORIS' husband and wife. Both are microworlds, and both have rigid rules that determine in advance how much the program will be able to understand. The key difference is that in the case of programs such as BORIS, we are tempted, as people were with ELIZA, to think that the computer responds as it does because it shares our understanding of all of the complexities of the real world. Actually, only a handful of those complexities exist in the program, and in rigid, reductionist form. We're projecting our own intelligence onto a collection of silicon, wires and flashing lights.
For human beings, words--be they molybdenum or murder--have their meaning only because they invoke an almost endless network of internal associations. The word itself is only a kind of shorthand. But for a computer, a word is just a stored token; there is nothing "behind" it. You can no more say that a computer "knows" what it is talking about than you can say that an adding machine "knows" about mathematics or that a slot machine "knows" about fruit because it can spin out combinations of oranges, cherries and plums. When printed by a computer, a word like surprised is to human language what a Hollywood facade is to a building.
None of this is to suggest any guile on the part of A.I. workers, who know the current limitations of their systems better than anyone. Programs like BORIS are attempts at developing computerized models of the way people think and act, and they may turn out to be first steps toward computers that actually do that. But it's also important to remember, as the direction of A.I. work shifts away from blocks and toward human beings, that computers are still very far from that goal. Hubert L. Dreyfus, a Berkeley philosophy professor who has been A.I.'s most persistent critic over the years, has written that the history of A.I. is strewn with examples of the "first-step fallacy," in which an initial bit of success at solving a problem is confused with the problem's full-fledged solution. The medieval alchemists, Dreyfus noted, thought they had found a way of using chemistry to change dirt into quicksilver; they labored unsuccessfully for centuries to do the same with lead and gold.
•
Wetware is the brain, and I don't see anything in wetware that can't be duplicated in hardware with the right software running on it.
--An A.I. Programmer
But if it bears remembering what computers haven't done, the opposite is also true. A.I. workers have been able to come up with ways to formalize, codify and then stuff into a machine a number of endeavors once thought to be uniquely human. Dreyfus once doubted that a computer would ever be able to beat an average human player at chess. Although computer programs today can beat all but the world's very best players, he was, in a sense, correct, for they are playing almost two different games. A computer doesn't go about chess the same way a human does; instead, it usually looks through as many moves as possible, assigns a value to each one and then picks the move with the highest score. If you left your queen wide-open in the middle of the board, the computer wouldn't immediately "see" the opportunity the way a person would; instead, it would stumble across your blunder only in the course of investigating millions of other possibilities. But it ends up playing chess, and with a vengeance.
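A toy version of that brute-force approach, sketched in Python with an invented position and a scoring rule that counts nothing but captured material, might look like this:

# Toy move selection: score every legal move, take the highest. Not a real chess
# program, which would search millions of positions several moves deep.
PIECE_VALUES = {"pawn": 1, "knight": 3, "bishop": 3, "rook": 5, "queen": 9}

def score(move):
    """Credit a move with the value of whatever it captures, if anything."""
    return PIECE_VALUES.get(move["captures"], 0)

def best_move(legal_moves):
    # No flash of insight here; the hanging queen is found only because every
    # candidate gets a number and this one's number happens to be biggest.
    return max(legal_moves, key=score)

moves = [
    {"name": "Nf3",  "captures": None},
    {"name": "Bxb7", "captures": "pawn"},
    {"name": "Qxd8", "captures": "queen"},
]
print(best_move(moves)["name"])
# Qxd8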
Computers can't always do the job the same way people do, but they can get the job done, nonetheless. At IBM, for example, researchers have discovered that because most business letters are written in the same style, a computer program has a surprising degree of success in "understanding" them. The company hopes one day to create a system that will help out in the office by reading and summarizing incoming mail, possibly even flagging what it considers important. The system would need constant human oversight, would fail some of the time and would be useless with anything not written in businessese. But what busy manager, buried in memos, wouldn't want one around just the same?
Along with expert systems, these stabs at solving small parts of the "natural language problem" have been among A.I.'s recent success stories. While remaining every bit as complex as Winograd found it to be with SHRDLU, human language has yielded part of itself to the computer. The most striking example is in the area of query systems, those bizarre codes we must use in place of conversational English to get information out of a machine.
At a Sunnyvale, California, company called Symantec, where they are not immune to Silicon Valley's propensity for high-tech punning, a group of A.I. workers is building a program that lets people use everyday English to talk to their computers. This is not state-of-the-art A.I. work; such systems already run on mainframes for large companies that pay $50,000 and up for the service. But at Symantec, they are putting together a system that will sell for around $750 and will run on personal computers. A.I. for the masses.
Gary Hendrix, the head of the company, makes it clear that the system can't handle all of English. For starters, aside from a few special commands such as List or Print, the program understands only two verbs: be and have.
"That sounds at first to be a terrible limitation, but it turns out to be not so bad," Hendrix says. "You can say who is the Owner of Fido? instead of Who Owns Fido? It takes a few minutes to get used to that, but then you're off and running."
Hendrix showed me a prototype of the system on an IBM Personal Computer. When we started, the program knew only about the employees of an imaginary company--their names, sexes, salaries and the like. I suggested that the best test of its abilities would be for me to use my own natural English to ask the questions. "How many female employees are there?" I wanted to know. Hendrix typed that sentence into the machine. There was a pause of a few seconds, and then a list of women's names appeared on the screen. At the bottom, the computer typed the words Female Employees--11.
Then Hendrix showed how information can be added to the program. The Length of the Lafayette Is 500 Feet, he keyed in. I wanted to know what the Lafayette was. "I'm glad you asked that question," he said, "because look at what the system asked." On the screen were the words What Sort of Thing Is the Lafayette? "For both of you," Hendrix said, "the Lafayette is a ship," and he typed in those words.
The program now knew that the Lafayette was a ship and that it had 500 "feet" worth of something called "length." For future reference, it wanted to know what the adjectival form of the word length was, as well as the word's opposite. To help it appear more conversational later on, it also wanted to know which was correct--longer or more long. After being told that "Mary is the captain of the Fox" (another ship, this one shorter than the Lafayette), the program was able to deal with the request Please List the Department, Salary, Location and Sex of the Longest Ship's Captain's Manager.
The computer still didn't "know" anything about salaries or ships or sex. In fact, it has no knowledge of the world at all and will quite happily store the proposition that "Mount Everest is three feet tall" or "The semcloe is a beregi." In one sense, it's nothing but a very complex exercise in filling in the blanks. But the program can manipulate huge numbers of those blanks in a way that will usually look very much like what we call English. And that, in turn, lets people take advantage of the enormous powers of computers with an ease that was once impossible.
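Reduced to a bare-bones Python sketch, with invented storage and function names (Symantec's actual program parses far richer English than this), the bookkeeping might look like so:

# An illustrative blank-filling store: (entity, attribute) -> value. Nothing here
# "knows" what a ship or a salary is; false propositions go in just as happily.
facts = {}

def tell(entity, attribute, value):
    facts[(entity.lower(), attribute.lower())] = value

def ask(entity, attribute):
    return facts.get((entity.lower(), attribute.lower()), "I don't know.")

tell("the Lafayette", "kind", "ship")
tell("the Lafayette", "length", "500 feet")
tell("Mount Everest", "height", "three feet")   # stored without complaint

print(ask("the lafayette", "length"))    # 500 feet
print(ask("Mount Everest", "height"))    # three feet
print(ask("the Fox", "captain"))         # I don't know.

Everything else--the question parsing, the longer/more long bookkeeping--is more machinery of the same blank-filling kind.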
This sort of applied artificial intelligence may seem pedestrian to most of us, nursed as we have been on science-fiction dystopias full of malevolent computers escaping human control. But if A.I. affects our lives in the coming years, it won't be by depositing some full-blown sentient being on our doorstep and leaving us the problem of contending with it. Instead, it will likely be by delivering tools designed to make it easier to deal with the simple tasks we face each day.
While it's easy to show what A.I. hasn't done and how the wild claims of A.I. boosters, such as those that begin each section of this article, have had little basis in reality, it would be less than wise to try to predict how far the field can ultimately go. A.I. workers have a knack for finding structures for human activities once thought to be spontaneous and unchartable. Even if their goal of capturing the mind forever eludes them, it's likely we'll be impressed at how much ground they can claim in the effort.
"The Russian translation of 'Time flies like an arrow' might come out 'Clock insects enjoy a weapon.' "