Mind Games

article by tom athanasiou


The world of artificial intelligence can be divided up a lot of different ways, but the most obvious split is between researchers interested in being god and researchers interested in being rich. The members of the first group, the AI "scientists,'' lend the discipline its special charm. They want to study intelligence, both human and "pure,'' by simulating it on machines. But it's the ethos of the second group, the "engineers,'' that dominates today's AI establishment. It's their accomplishments that have allowed AI to shed its reputation as a "scientific con game'' (Business Week) and to become, as it was recently described in Fortune magazine, the "biggest technology craze since genetic engineering.''

The engineers like to bask in the reflected glory of the AI scientists, but they tend to be practical men, well-schooled in the priorities of economic society. They too worship at the church of machine intelligence, but only on Sundays. During the week, they work the rich lodes of "expert systems'' technology, building systems without claims to consciousness, but able to simulate human skills in economically significant, knowledge-based occupations. (The AI market is now expected to reach $2.8 billion by 1990; AI stocks are growing at an annual rate of 30%.)

"Expert Systems''

Occupying the attention of both AI engineers and profit-minded entrepreneurs are the so-called "expert systems.'' An expert is a person with a mature, practiced knowledge of some limited aspect of the world. Expert systems, computer programs with no social experience, cannot really be expert at anything; they can have no mature, practiced knowledge. But in the anthropomorphized language of AI, where words like "expert,'' "understanding,'' and "intelligence'' are used with astounding--and self-serving--naivete, accuracy will not do. Mystification is good for business.

Expert systems typically consist of two parts: the "knowledge base'' or "rule base,'' which describes some little corner of the world--some "domain'' or "microworld''; and the "inference engine,'' which climbs around in the knowledge base looking for connections and correspondences. "The primary source of power. . .is informal reasoning based on extensive knowledge painstakingly culled from human experts,'' explained Doug Lenat in an article that appeared in Scientific American in September 1984. "In most of the programs the knowledge is encoded in the forms of hundreds of if-then rules of thumb, or heuristics. The rules constrain search by guiding the program's attention towards the most likely solutions. Moreover. . .expert systems are able to explain all their inferences in terms a human will accept. The explanation can be provided because decisions are based on rules taught by human experts rather than the abstract rules of formal logic.''
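To make the two-part architecture concrete, here is a rough sketch--in present-day Python, with rules invented purely for illustration and owing nothing to any actual commercial system--of a tiny "knowledge base'' of if-then heuristics and the "inference engine'' that climbs around in it, replaying the rules it fired by way of "explanation'':

    # A rough sketch of the two-part architecture: a "knowledge base" of
    # if-then heuristics and a forward-chaining "inference engine" that
    # replays the rules it fired by way of explanation. The rules are
    # invented for illustration and come from no actual commercial system.

    RULES = [
        # (name, if these facts hold, then conclude this)
        ("R1", {"porphyry present", "quartz veins"}, "possible copper deposit"),
        ("R2", {"possible copper deposit", "molybdenite traces"}, "drilling recommended"),
    ]

    def infer(initial_facts):
        """Forward-chain over the rule base, recording which rule produced what."""
        facts = set(initial_facts)
        trace = []
        changed = True
        while changed:
            changed = False
            for name, conditions, conclusion in RULES:
                if conditions <= facts and conclusion not in facts:
                    facts.add(conclusion)
                    trace.append((name, conditions, conclusion))
                    changed = True
        return facts, trace

    def explain(trace):
        """The 'explanation' is just a replay of the rules that fired."""
        for name, conditions, conclusion in trace:
            print(f"{conclusion} -- because rule {name}: IF {sorted(conditions)}")

    _, trace = infer({"porphyry present", "quartz veins", "molybdenite traces"})
    explain(trace)

Everything such a system "knows'' is in those hand-entered rules; the engine itself is a few dozen lines of bookkeeping.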

The excitement about expert systems (and the venture capital) is rooted in the economic significance of such "structural selection problems.'' Expert systems are creatures of microworlds, and the hope is that they'll soon negotiate these microworlds well enough to effectively replace human beings.

Some recent expert systems, and their areas of expertise, are CADUCEUS II (medical diagnosis), PROSPECTOR (geological analysis), CATS-1 (locomotive troubleshooting), DIPMETER adviser (sample oil well analysis), and R1/XCON-XSEL (computer system sales support and configuration). Note that the kinds of things they do are all highly technical, involve lots of facts, and are clearly isolated from the ambiguities of the social world.

Such isolation is the key. If our sloppy social universe can be "rationalized'' into piles of predictable little microworlds, then it will be amenable to knowledge-based computerization. Like automated teller machines, expert systems may soon be everywhere:

• In financial services like personal financial planning, insurance underwriting, and investment portfolio analysis. (This is an area where yuppie jobs may soon be under direct threat.)

• In medicine, as doctors get used to using systems like HELP and CADUCEUS II as interactive encyclopedias and diagnostic aids. These systems will also be a great boon to lawyers specializing in malpractice suits.

• In equipment maintenance and diagnosis. "Expert [systems] are great at diagnosis,'' said one GE engineer. In addition to locomotives, susceptible systems include printed circuit boards, telephone cables, jet engines, and cars.

• In manufacturing. "Expert systems can help plan, schedule, and control the production process, monitor and replenish inventories. . ., diagnose malfunctions and alert proper parties about the problem.'' (Infosystems, Aug. '83).

• In military and counterintelligence, especially as aids for harried technicians trying to cope with information overload.

But Do They Work?

If these systems work, or if they can be made to work, then we might be willing to agree with the AI hype that the "second computer revolution'' may indeed be the "important one.'' But do they work, and, if so, in what sense?

Many expert systems have turned out to be quite fallible. "The majority of AI programs existing today don't work,'' a Silicon Valley hacker told me flatly, "and the majority of people engaged in AI research are hucksters. They're not serious people. They've got a nice wagon and they're gonna ride it. They're not even seriously interested in the programs anymore.''

Fortune magazine is generally more supportive, though it troubles itself, in its latest AI article, published last August, to backpedal on some of its own inflated claims of several years ago. Referring to PROSPECTOR, one of the six or so expert systems always cited as evidence that human expertise can be successfully codified in sets of rules, Fortune asserted that PROSPECTOR's achievements aren't all they've been cracked up to be: "In fact, the initial discovery of molybdenum [touted as PROSPECTOR's greatest feat] was made by humans, though PROSPECTOR later found more ore.''

Still, despite scattered discouraging words from expert critics, the AI engineers are steaming full speed ahead. Human Edge Software in Palo Alto is already marketing "life-strategy'' aids for insecure moderns: NEGOTIATION EDGE to help you psyche out your opponent on the corporate battlefield, SALES EDGE to help you close that big deal, MANAGEMENT EDGE to help you manipulate your employees. All are based on something called "human factors analysis.''

And beyond the horizon, there's the blue sky. Listen to Ronald J. Brachman, head of knowledge representation and reasoning research at Fairchild Camera and Instrument Corporation: "Wouldn't it be nice if. . . instead of writing ideas down I spoke into my little tape recorder. . .It thinks for a few minutes, then it realizes that I've had the same thought a couple of times in the past few months. It says, 'Maybe you're on to something.' '' One wonders what the head of knowledge engineering at one of the biggest military contractors in Silicon Valley might be on to. But I suppose that's beside the point, which is to show the dreams of AI "engineers'' fading off into the myths of the AI "scientists''--those who would be rich regarding those who would be god. Mr. Brachman's little assistant is no mere expert system; it not only speaks natural English, it understands that English well enough to recognize two utterances as being about the same thing even when spoken in different contexts. And it can classify and cross-classify new thoughts, thoughts which it can itself recognize as interesting and original. Perhaps, unlike Mr. Brachman, it'll someday wonder what it's doing at Fairchild.

Machines Can't Talk

The Artificial Intelligence program at UC Berkeley is trying to teach computers to do things like recognizing a face in a crowd, or carrying on a coherent conversation in a "natural'' language like English or Japanese. Without such everyday abilities--abilities so basic we take them completely for granted--how could we be said to be intelligent at all? Likewise machines?

The culture of AI encourages a firm, even snide, conviction that it's just a matter of time. It thrives on exaggeration, and refuses to examine its own failures. Yet there are plenty. Take the understanding of "natural languages'' (as opposed to formal languages like FORTRAN or PASCAL). Humans do it effortlessly, but AI programs still can't--even after thirty years of hacking. Overconfident pronouncements that "natural language understanding is just around the corner'' were common in the '50s, but repeated failure led to declines in funding, accusations of fraud, and widespread disillusionment.

Machine translation floundered because natural language is essentially--not incidentally--ambiguous; meaning always depends on context. My favorite example is the classic, "I like her cooking,'' a statement likely to be understood differently if the speaker is a cannibal rather than a middle American. Everyday language is pervaded by unconscious metaphor, as when one says, "I lost two hours trying to get my meaning across.'' Virtually every word has an open-ended field of meanings that shade gradually from those that seem utterly literal to those that are clearly metaphorical. In order to translate a text, the computer must first "understand'' it.
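The point can be made with a toy--nothing below is a real natural-language system, and the "contexts'' are caricatures--showing that the string itself decides nothing; only something outside it does:

    # A toy, not a natural-language system: the same string maps to different
    # readings, and only the (caricatured) speaker context selects one.

    READINGS = {
        "I like her cooking": {
            "middle American": "the speaker likes the food she prepares",
            "cannibal": "the speaker likes how she tastes when cooked",
        },
    }

    def interpret(utterance, speaker):
        return READINGS.get(utterance, {}).get(speaker, "ambiguous: context needed")

    print(interpret("I like her cooking", "middle American"))
    print(interpret("I like her cooking", "cannibal"))
    print(interpret("I like her cooking", "Martian"))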

TA for Computers

Obviously AI scientists have a long way to go, but most see no intrinsic limits to machine understanding. UCB proceeds by giving programs "knowledge'' about situations which they can then use to "understand'' texts of various kinds.

Yale students have built a number of "story understanding systems,'' the most striking of which is "IPP,'' a system which uses knowledge of terrorism to read news stories, learn from them, and answer questions about them. It can even make generalizations: Italian terrorists tend to kidnap businessmen; IRA terrorists are more likely to send letter bombs.
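The flavor of that "generalization'' can be suggested by a crude tallying sketch. The stories, fields, and threshold below are invented for illustration; the real program built far richer memory structures than a frequency table:

    # A crude tallying sketch of the kind of generalization attributed to IPP.
    # The stories and the 0.6 threshold are invented; the real system built
    # much richer memory structures than a frequency table.

    from collections import Counter, defaultdict

    STORIES = [
        {"group": "Italian terrorists", "method": "kidnapping", "target": "businessman"},
        {"group": "Italian terrorists", "method": "kidnapping", "target": "businessman"},
        {"group": "IRA terrorists", "method": "letter bomb", "target": "official"},
        {"group": "IRA terrorists", "method": "letter bomb", "target": "soldier"},
    ]

    def generalize(stories, threshold=0.6):
        """Yield 'group tends to X' claims when one method dominates the tally."""
        by_group = defaultdict(Counter)
        for story in stories:
            by_group[story["group"]][story["method"]] += 1
        for group, methods in by_group.items():
            method, count = methods.most_common(1)[0]
            if count / sum(methods.values()) >= threshold:
                yield f"{group} tend to use {method}s"

    for claim in generalize(STORIES):
        print(claim)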

How much can we expect a program like IPP to learn? How long will it be before its "understanding'' can be "generalized'' from the microworld of terrorism to human life as a whole? In what sense can it be said to understand terrorism at all, if it cannot also understand misery, violence, and frustration? If it isn't really understanding anything, then what exactly is it doing, and what would it mean for it to do it better? Difficult questions these.

The foundation stone of this "IPP'' school of AI is the "script.'' Remember the script? Remember that particularly mechanistic pop psychology called "Transactional Analysis''? It too was based upon the notion of scripts, and the similarity is more than metaphorical.

In TA, a "script'' is a series of habitual stereotyped responses that we unconsciously "run'' like tapes as we stumble through life. Thus if someone we know acts helpless and hurt, we might want to "rescue'' them because we have been "programmed'' by our life experience to do so.

In the AI universe the word "script'' is used in virtually the same way, to denote a standard set of expectations about a stereotyped situation that we use to guide our perceptions and responses. When we enter a restaurant we unconsciously refer to a restaurant script, which tells us what to do--sit down and wait for a waiter, order, eat, pay before leaving, etc. The restaurant is treated as a microworld, and the script guides the interpretation of events within it. Once a script has been locked in, the context is known and the ambiguity tamed.
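What such a script amounts to in code is modest. A hypothetical sketch, with slot names invented for illustration:

    # A hypothetical sketch of a "script" in the AI sense: a stereotyped event
    # sequence for a microworld. Once the restaurant script is locked in, an
    # ambiguous report like "pay" is matched against an expected slot rather
    # than against the open-ended social world. Slot names are invented.

    RESTAURANT_SCRIPT = [
        "enter and sit down",
        "wait for a waiter",
        "order",
        "eat",
        "pay before leaving",
        "leave",
    ]

    def interpret_event(event, script, position):
        """Read an event against what the script expects from here on."""
        for step in script[position:]:
            if event in step:
                return f"'{event}' understood as the step '{step}'"
        return f"'{event}' is not in the script -- the ambiguity returns"

    print(interpret_event("pay", RESTAURANT_SCRIPT, position=2))
    print(interpret_event("tip the violinist", RESTAURANT_SCRIPT, position=2))

Anything the script doesn't anticipate simply falls outside the microworld--which is exactly the limitation the Berkeley work described below is trying to get past.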

But while behavior in a restaurant may be more or less a matter of routine, what about deciding which restaurant to go to? Or whether to go to a restaurant at all? Or recognizing a restaurant when you see one? These problems aren't always easy for humans, and their solution requires more than the use of scripts. In fact, the research going on at Berkeley is specifically aimed at going beyond script-bound systems, by constructing programs that have "goals'' and make "plans'' to achieve those goals. Grad students even torture their programs by giving them multiple conflicting goals, and hacking at them until they can satisfy them all.

Anti-AI

The academic zone of AI is called "cognitive studies.'' At UC Berkeley, however, cognitive studies is not just AI; the program is interdisciplinary and includes philosophers, anthropologists, psychologists, and linguists. (The neurophysiologists, I was told, have their own problems.) Specifically, it includes Herbert Dreyfus and John Searle, two of the most persistent critics of the whole AI enterprise. If Cal hasn't yet made it onto the AI map (and it hasn't), it's probably fair to say that it's still the capital of the anti-AI forces, a status it first earned in 1972 with the publication of Dreyfus' What Computers Can't Do.

Dreyfus thinks he's winning. In the revised edition of his book, published in 1979, he claimed that "there is now general agreement that. . . intelligence requires understanding, and understanding requires giving the computer the background of common sense that adult human beings have by virtue of having bodies, interacting skillfully in the material world, and being trained into a culture.''

In the real world of AI, Dreyfus's notion of being "trained into a culture'' is so far beyond the horizon as to be inconceivable. Far from having societies, and thus learning from each other, today's AI programs rarely even learn from themselves.

Few AI scientists would accept Dreyfus' claim that real machine intelligence requires not only learning, but bodies and culture as well. Most of them agree, in principle if not in prose, with their high priest, MIT's Marvin Minsky. Minsky believes that the body is "a tele-operator for the brain,'' and the brain, in turn, a "meat machine.''

The Dark Side of AI

"Technical people rely upon their ties with power because it is access to that power, with its huge resources, that allows them to dream, the assumption of that power that encourages them to dream in an expansive fashion, and the reality of that power that brings their dreams to life.''

--David Noble, The Forces of Production

As fascinating as the debates within AI have become in recent years, one can't help but notice the small role they allocate to social considerations. Formal methods have come under attack, but generally in an abstract fashion. That the prestige of these methods might exemplify some imbalance in our relationship to science, some dark side of science itself, or even some large social malevolence--these are thoughts rarely heard even among the critics of scientific arrogance.

For that reason, we must now drop from the atmospherics of AI research to the charred fields of earth. The abruptness of the transition can't be avoided: science cloaks itself in wonder, indeed it provides its own mythology, yet behind that mythology are always the prosaic realities of social life.

When the first industrial revolution was still picking up steam, Frederick Taylor invented "time/motion'' study, a discipline predicated on the realization that skill-based manufacturing could be redesigned to eliminate the skill--and with it the autonomy--of the worker. The insight behind today's expert systems--that much of human skill can be extracted by knowledge engineers, codified into rules and heuristics, and immortalized on magnetic disks--is essentially the same.

Once manufacturing could be "rationalized,'' automation became not only possible, but in the eyes of the faithful, necessary. It also turned out to be terrifically difficult, for reality was more complex than the visions of the engineers. Workers, it turned out, had lots of "implicit skills'' that the time/motion men hadn't taken into account. Think of these skills as the ones managers and engineers can't see. They're not in the formal job description, yet without them the wheels would grind to a halt. And they've constituted an important barrier to total automation: there must be a human machinist around to ease the pressure on the lathe when an anomalous cast comes down the line, to "work around'' the unevenness of nature; bosses must have secretaries, to correct their English, if for no other reason.

Today's latest automation craze, "adaptive control,'' is intended to continue the quest for the engineer's grail--the total elimination of human labor. To that end the designers of factory automation systems are trying to substitute delicate feedback mechanisms, sophisticated sensors, and even AI for the human skills that remain in the work process.

Looking back on industrial automation, David Noble remarked that "men behaving like machines paved the way for machines without men.'' By that measure, we must assume ourselves well on the way to a highly automated society. By and large, work will resist total automation--in spite of the theological ideal of a totally automated factory, some humans will remain--but there's no good reason to doubt that the trend towards mechanization will continue. Among the professions, automation will sometimes be hard to see, hidden within the increasing sophistication of tools still nominally wielded by men and women. But paradoxically, the automation of mental labor may, in many cases, turn out to be easier than the automation of manual labor. Computers are, after all, ideally suited to the manipulation of symbols, far more suited than one of today's primitive robots to the manipulation of things. The top tier of our emerging two-tier society may eventually turn out to be a lot smaller than many imagine.

As AI comes to be the basis of a new wave of automation, a wave that will sweep the professionals up with the manual workers, we're likely to see new kinds of resistance developing. We know that there's already been some, for DEC (Digital Equipment Corporation), a company with an active program of internal AI-based automation, has been strangely public about the problems it has encountered. Arnold Kraft, head of corporate AI marketing at DEC: "I fought resistance to our VAX-configuration project tooth and nail every day. Other individuals in the company will look at AI and be scared of it. They say, 'AI is going to take my job. Where am I? I am not going to use this. Go away!' Literally, they say 'Go away!' '' [Computer Decisions, August 1984]

Professionals rarely have such foresight, though we may hope to see this change in the years ahead. Frederick Hayes-Roth, chief scientist at Teknowledge, a Palo Alto-based firm with a reputation for preaching the true gospel of AI, put it this way: "The first sign of machine displacement of human professionals is standardization of the professional's methodology. Professional work generally resists standardization and integration. Over time, however, standard methods of adequate efficiency often emerge.'' More specifically: "Design, diagnosis, process control, and flying are tasks that seem most susceptible to the current capabilities of knowledge systems. They are composed largely of sensor interpretation (excepting design), of symbolic reasoning, and of heuristic planning--all within the purview of knowledge systems. The major obstacles to automation involving these jobs will probably be the lack of standardized notations and instrumentation, and, particularly, in the case of pilots, professional resistance.'' Hayes-Roth is, of course, paid to be optimistic, but still, he predicts "fully automated air-traffic control'' by 1990-2000. Too bad about PATCO.

Automating the Military

On October 28, 1983, the Defense Advanced Research Projects Agency (DARPA) announced the Strategic Computing Initiative (SCI), launching a five-year, $600 million program to harness AI to military purposes. The immediate goals of the program are "autonomous tanks'' (killer robots) for the Army, a "pilot's associate'' for the Air Force, and "intelligent battle management systems'' for the Navy. If things go according to plan, all will be built with the new gallium arsenide technology, which, unlike silicon, is radiation resistant. The better to fight a protracted nuclear war with, my dear.

And these are just three tips of an expanding iceberg. Machine intelligence, were it ever to work, would allow the military to switch over to autonomous and semi-autonomous systems capable of managing the ever-increasing speed and complexity of "modern'' warfare. Defense Electronics recently quoted Robert Kahn, director of information processing technology at DARPA, as saying that "within five years, we will see the services start clamoring for AI.''

High on the list of military programs slated to benefit from the SCI is Reagan's proposed "Star Wars'' system, a ballistic missile "defense'' apparatus which would require highly automated, virtually autonomous military satellites able to act quickly enough to knock out Soviet missiles in their "boost'' phase, before they release their warheads. Such a system would be equivalent to automated launch-on-warning; its use would be an act of war.

Would the military boys be dumb enough to hand over control to a computer? Well, consider this excerpt from a congressional hearing on Star Wars, as quoted in the LA Times on April 26, 1984:

"Has anyone told the President that he's out of the decision- making process?'' Senator Paul Tsongas demanded.

"I certainly haven't,'' Kenworth (Reagan science advisor) said.

At that, Tsongas exploded: "Perhaps we should run R2-D2 for President in the 1990s. At least he'd be on line all the time.''

Senator Joseph Biden pressed the issue of whether an error might provoke the Soviets to launch a real attack. "Let's assume the President himself were to make a mistake. . .,'' he said.

"Why?'' interrupted Cooper (head of DARPA). "We might have the technology so he couldn't make a mistake.''

"OK,'' said Biden. "You've convinced me. You've convinced me that I don't want you running this program.''

But his replacement, were Cooper to lose his job, would more than likely worship at the same church. His faith in the perfectibility of machine intelligence is a common canon of AI. This is not the hard-headed realism of sober military men, compelled by harsh reality to extreme measures. It is rather the dangerous fantasy of powerful men overcome by their own mythologies, mythologies which flourish in the super-heated rhetoric of the AI culture.

The military is a bureaucracy like any other, so it's not surprising to find that its top-level planners suffer the same engineer's ideology of technical perfectibility as do their civilian counterparts. Likewise, we can expect resistance to AI-based automation from military middle management. Already there are signs of it. Gary Martins, a military AI specialist, from an interview in Defense Electronics (Jan. '83): "Machines that appear to threaten the autonomy and integrity of commanders cannot expect easy acceptance; it would be disastrous to introduce them by fiat. We should be studying how to design military management systems that reinforce, rather than undermine, the status and functionality of their middle-level users.''

One noteworthy thing about some "user interfaces'': each time the system refers to its knowledge base it uses the idiom "you taught me'' to alert the operator. This device was developed for the MYCIN system, an expert on infectious diseases, in order to overcome resistance from doctors. It reappears, unchanged, in a system designed for tank warfare management in Europe. A fine example of what political scientist Harold Laski had in mind when he noted that "in the new warfare the engineering factory is a unit of the Army, and the worker may be in uniform without being aware of it.''
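The device is easy to reproduce, which is part of what makes it unsettling. A minimal sketch--the rule text here is invented for illustration, not taken from MYCIN's actual rule base:

    # A minimal sketch of the "you taught me" idiom described above. The rule
    # text is invented for illustration; it is not MYCIN's actual knowledge base.

    KNOWLEDGE_BASE = {
        "R7": "if the organism is gram-negative and rod-shaped, consider E. coli",
    }

    def justify(rule_id):
        """Cite a rule back to the operator in the deferential idiom."""
        rule = KNOWLEDGE_BASE.get(rule_id)
        if rule is None:
            return "I have no rule by that name."
        return f"You taught me that {rule}."

    print(justify("R7"))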

Overdesigned and unreliable technologies, when used for manufacturing, can lead to serious social and economic problems. But such "baroque'' technologies, integrated into nuclear war fighting systems, would be absurdly dangerous. For this reason, Computer Professionals for Social Responsibility has stressed the "inherent limits of computer reliability'' in its attacks on the SCI. The authors of Strategic Computing, an Assessment, assert, "In terms of their fundamental limitations, AI systems are no different than other computer systems. . . The hope that AI could cope with uncertainty is understandable, since there is no doubt that they are more flexible than traditional computer systems. It is understandable, but it is wrong.''

Unfortunately, all indications are that, given the narrowing time-frames of modern warfare, the interplay between technological and bureaucratic competition, and the penetration of the engineers' ideology into the military ranks, we can expect the Pentagon to increasingly rely on high technology, including AI, as a "force and intelligence multiplier.'' The TERCOM guidance system in cruise missiles, for example, is based directly on AI pattern matching techniques. The end result will likely be an incredibly complex, poorly tested, hair-trigger amalgamation of over-advertised computer technology and overkill nuclear arsenals. Unfortunately, the warheads themselves, unlike the systems within which they will be embedded, can be counted upon to work.

And the whole military AI program is only a subset of a truly massive thrust for military computation of all sorts: a study by the Congressional Office of Technology Assessment found that in 1983 the Defense Department accounted for 69% of the basic research in electrical engineering and 54.8% of research in computer science. The DOD's dominance was even greater in applied research, in which it paid for 90.5% of research in electrical engineering and 86.7% of research in computer sciences.

Defensive Rationalizations

There are many liberals, even left-liberals, in the AI community, but few of them have rebelled against the SCI. Why? To some degree because of the Big Lie of "national defense,'' but there are other reasons given as well:

• Many of them don't really think this stuff will work anyway.

• Some of them will only do basic research, which "will be useful to civilians as well.''

• Most of them believe that the military will get whatever it wants anyway.

• All of them need jobs.

The first reason seems peculiar to AI, but perhaps I'm naive. Consider, though, the second. Bob Wilensky, a professor at UC Berkeley: "DOD money comes in different flavors. I have 6.1 money. . . it's really pure research. It goes all the way up to 6.13, which is like, procurement for bombs. Now Strategic Computing is technically listed as a 6.2 activity [applied research], but what'll happen is, there'll be people in the business world that'll say 'OK, killer robots, we don't care,' and there'll be people in industry that say, 'OK, I want to make a LISP machine that's 100 times faster than the ones we have today. I'm not gonna make one special for tanks or anything.' So the work tends to get divided up.''

Actually, it sounds more like a cooperative effort. The liberal scientists draw the line at basic research; they won't work on tanks, but they're willing to help provide what the anti-military physicist Bruno Vitale calls a "rich technological menu,'' a menu immediately scanned by the iron men of the Pentagon.

Anti-military scientists have few choices. They can restrict themselves to basic research, and even indulge the illusion that they no longer contribute to the war machine. Or they can grasp for the straws of socially useful applications: AI-assisted medicine, space research, etc. Whatever they choose, they have not escaped the web that binds science to the military. The military fate of the space shuttle program demonstrates this well enough. In a time when the military has come to control so much of the resources of civil society, the only way for a scientist to opt out is by quitting the priesthood altogether, and this is no easy decision.

But let's assume, for the sake of conversation, that we don't have to worry about militarism, or unemployment, or industrial automation. Are we then free to return to our technological delirium?

Unfortunately, there's another problem for which AI itself is almost the best metaphor. Think of the images it invokes, of the blurring of the line between humanity and machinery from which the idea of AI derives its evocative power. Think of yourself as a machine. Or better, think of society as a machine--fixed, programmed, rigid. The problem is bureaucracy, the programmed society, the computer state, 1984.

Of course, not everyone's worried. The dystopia of 1984 is balanced, in the popular mind, by the utopia of flexible, decentralized, and now intelligent computers. The unexamined view that microcomputers will automatically lead to "electronic democracy'' is so common that it's hard to cross the street without stepping in it. And most computer scientists tend to agree, at least in principle. Bob Wilensky, for example, believes that the old nightmare of the computer state is rooted in an archaic technology, and that "as computers get more intelligent we'll be able to have a more flexible bureaucracy as opposed to a more rigid bureaucracy. . .''

"Utopian'' may not be the right word for such attitudes. The utopians were well meaning and generally powerless; the spokesmen of progress are neither. Scientists like Wilinsky are well funded and often quoted, and if the Information Age has a dark side, they have a special responsibility to bring it out. It is through them that we encounter these new machines, and the stories they choose to tell us will deeply color our images of the future. Their optimism is too convenient; we have the right to ask for a deeper examination.

Machine Society

Imagine yourself at a bank, frustrated, up against some arbitrary rule or procedure. Told that "the computer can't do it,'' you will likely give up. "What's happened here is a shifting of the sense of who is responsible for policy, who is responsible for decisions, away from some person or group of people who actually are responsible in the social sense, to some inanimate object in which their decisions have been embodied.'' Or as Emerson put it, "things are in the saddle, and ride mankind.''

Now consider the bureaucracy of the future, where regulation books have been replaced by an integrated information system, a system that has been given language. Terry Winograd, an AI researcher, quotes from a letter he received:

"From my point of view natural language processing is unethical, for one main reason. It plays on the central position which language holds in human behavior. I suggest that the deep involvement Wiezenbaum found some people have with ELIZA [a program which imitates a Rogerian therapist] is due to the intensity with which most people react to language in any form. When a person receives a linguistic utterance in any form, the person reacts much as a dog reacts to an odor. We are creatures of language. Since this is so, it is my feeling that baiting people with strings of characters, clearly intended by someone to be interpreted as symbols, is as much a misrepresentation as would be your attempt to sell me property for which you had a false deed. In both cases an attempt is being made to encourage someone to believe that something is a thing other than what it is, and only one party in the interaction is aware of the deception. I will put it a lot stronger: from my point of view, encouraging people to regard machine-generated strings of tokens as linguistic utterances, is criminal, and should be treated as criminal activity.''

The threat of the computer state is usually seen as a threat to the liberty of the individual. Seen in this way, the threat is real enough, but it remains manageable. But Winograd's letter describes a deeper image of the threat. Think of it not as the vulnerability of individuals, but rather as a decisive shift in social power from individuals to institutions. The shift began long ago, with the rise of hierarchy and class. It was formalized with the establishment of the bureaucratic capitalist state, and now we can imagine its apotheosis. Bureaucracy has always been seen as machine society; soon the machine may find its voice.

We are fascinated by Artificial Intelligence because, like genetic engineering, it is a truly Promethean science. As such, it reveals the mythic side of science. And the myth, in being made explicit, reveals the dismal condition of the institution of science itself. Shamelessly displaying its pretensions, the artificial intelligentsia reveals as well a self-serving naivete, and an embarrassing entanglement with power.

On the surface, the myth of AI is about the joy of creation, but a deeper reading forces joy to the margins. The myth finally emerges as a myth of domination, in which we wake to find that our magnificent tools have built us an "iron cage,'' and that we are trapped.

Science is a flawed enterprise. It has brought us immense powers over the physical world, but is itself servile in the face of power. Wanting no limits on its freedom to dream, it shrouds itself in myth and ideology, and counsels us to use its powers unconsciously. It has not brought us wisdom.

Or perhaps the condition of science merely reflects the condition of humanity. Narrow-mindedness, arrogance, servility in the face of power--these are attributes of human beings, not of tools. And science is, after all, only a tool.

Many people, when confronted with Artificial Intelligence, are offended. They see its goal as an insult to their human dignity, a dignity they see as bound up with human uniqueness. In fact, intelligence can be found throughout nature, and is not unique to us at all. And perhaps someday, if we're around, we'll find it can emerge from semiconductors as well as from amino acids. In the meantime we'd best seek dignity elsewhere. Getting control of our tools, and the institutions which shape them, is a good place to start.

--Tom Athanasiou
