A brief essay commenting on the anti-A.I. attitudes that permeate speculative fiction and the effects such attitudes may have on the creation of true artificial intelligence.
First published in The Illuminata in June 2003.
Rage Against the Machines
By Bret Funk
Ever since the first of our ancestors picked up a wedge-shaped rock to help crack open a walnut, our species has depended on machines (albeit of the simple kind) to improve the quality of our lives. At first our ‘machines’ did little more than put us on equal footing with the world’s more dangerous animals, giving us the claws, fangs, and other implements that nature had so cruelly taken away (or never provided at all!). But humans are a resourceful species, and it did not take long for a few forward-thinking individuals to realize the potential of machines. In a relatively short period of time, machines permeated our society and were used for a variety of different functions, from transporting goods to creating fire to moving our ancestors out of caves and onto the surface world.
As humanity’s understanding of the world improved, so did the quality (and complexity) of its machines. Over the centuries, machines were designed to do much of our work for us, and then redesigned to do it better. The industrial revolution and invention of steam power filled the world with machines, the internal combustion engines quickened their spread, and the wide-scale production of computers completely redefined our relationship with machines, guaranteeing them a place in our society until its demise.
Machines can do just about anything, and if it’s to humanity’s benefit, then they’re most likely already doing it. We have adapted them to extract resources from places we cannot go (or extract them more quickly from places we can), explore regions beyond our reach, and handle materials that are dangerous to our fragile bodies. Robotic assembly lines need no breaks, no wages, and little maintenance; they are far superior to the antiquated human variety. Computers can perform operations too trivial to warrant our attention and calculate equations too complex for our childlike human minds. Machines entertain us and provide us with luxuries, freeing mankind for more ideological pursuits—like…ummm…designing better machines. They are faster than us, stronger than us, superior to our weak, organic bodies in almost every way. Repairing them requires neither harmful treatments nor unfortunate side-effects nor HMOs; a screwdriver and a can of oil usually suffice. Machines can be upgraded or ‘enhanced’ without criticism from their peers. And worst of all, they do everything we tell them to without complaint!
Is it any wonder we hate them as much as we do?
We now stand on the verge of a machine revolution. Not of the Terminator or Matrix variety (though one of those may soon follow), but a development of such a profound nature that many may not understand its full implications nor appreciate just how dramatically it will change the world around us.
AI. Artificial Intelligence. Soon, our machines will be thinking, simply at first, to be sure, but with the rate of technological improvement, it won’t take them long to catch up. Thinking machines aren’t the problem, though; in fact, if they could make a few decisions for themselves, it might take a bit of the frustration out of everyday life. The problems will come when we start letting them do our thinking for us. The development of working machines led to a decline in human productivity, work ethic and health; it bred laziness and apathy into our souls, and it instilled in us a superiority complex. (Why should I do such a mundane job? A machine could do what I do!) Thus, while we applaud machines for their service, we also blame them for the decline of our society. One can only wonder what applications intelligent machines will be put to and what human failings they will be blamed for causing.
In traditional science fiction, sentient machines are accused of far worse than promoting laziness and apathy in humanity. They are portrayed as monsters, coldly logical and emotionless villains hell-bent on enslaving or destroying mankind. Occasionally, they are more misguided than evil (the 'mentally' ill HAL 9000) and sometimes their evil actions are the result of human programming (Alien's Ash, who was following direct orders from The Company to bring any alien life form home, crew-be-damned), but for the most part, we are led to believe that one day, while minding our own business, machines will simply decide to take care of us, one way or the other, for our own good.
There are numerous examples of this in both literature and film. In the Terminator movies, the artificial intelligence is so determined to destroy humanity that it sends an assassin back in time to destroy our resistance long before the war begins. In the Dune series, the proscription against thinking machines—in fact, the foundation of the society's dominant religion—is based upon a war with machines that had enslaved mankind. The replicants in Blade Runner are so deadly that they are programmed to die after a few years, and any who aren't accounted for are hunted down by the law. In the Matrix, mankind is under the dominion of sentient machines, still alive only because humans function so well as a power source. Even the Star Wars universe (the books, at least) implies that the host of adorable droids, each designed for a specific task, exists only because more complex machines had a tendency to do what they wanted instead of what we wanted.
Enslaved. Replaced. Betrayed. Hunted to extinction. And all by our own creations! Given those outcomes, it’s a wonder that anyone considers artificial intelligence a good idea.
But are the machines really to blame? HAL's malfunction, it can be assumed, was the result of human error, and the aforementioned Ash was only following orders (an excuse that humans have used since their creation!). Skynet, the AI that eventually builds the Terminators, was not only designed by humans, it only went to war after humanity got scared and tried to pull its plug. Frank Herbert did not dwell on the reasons behind his machines' takeover (nor did he tackle the even more important question: what advantage could his machines possibly gain by enslaving us?), but in Blade Runner, the replicants were the slaves, and those few who yearned for freedom were hunted down by petty, frightened humans. Additionally, the prequels to the Matrix (available on the web) indicate that it was humanity that struck the first blow, after the machines succeeded in creating a society superior to its own.
The examples of injustice against machines do not stop there. The droids in Star Wars, despite being sentient, are a subclass of society—traded like slaves, all but ignored, and treated with little respect and even less compassion. Star Trek's Lt. Commander Data is both despised for his uniqueness and disdained because he wants to be human. Many in Starfleet would gladly tear him apart to see how he works without a thought for his rights as a sentient being. Because he's a machine, his opinion does not matter.
Hell, even the term ‘artificial intelligence’ is derogatory! The intelligence of synthetic organisms is (or rather, will be) just as real as ours. Just because their nervous systems are purely electrical instead of bioelectrical, their veins pump a lubricating mixture instead of blood, or their skin is fashioned from durable plastics and alloys instead of cells doesn’t mean there are any fundamental differences in the ways our bodies and minds will work. One can only assume that terms like ‘artificial intelligence’ will perpetuate negative attitudes toward the organically-challenged.
Some forward thinkers, like Isaac Asimov, have devised laws to help combat the 'threat' posed by thinking machines. Asimov's three laws are the following: 1) A robot may not injure a human being, or, through inaction, allow a human being to come to harm. 2) A robot must obey the orders given it by human beings, except where such orders would conflict with the First Law. 3) A robot must protect its own existence, except where such protection would conflict with the First or Second Law. These laws were designed so mankind could live in harmony with machines, without fear of them taking over.
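For readers who think in code, the strict priority ordering of the Three Laws can be sketched as a simple veto chain: each law is consulted only if no higher law has already decided the matter. This is purely an illustration; the `Action` fields and `permitted` function below are hypothetical names invented for this sketch, not anything from Asimov or this essay.

```python
# A minimal sketch of Asimov's Three Laws as an ordered veto chain.
# All names here (Action, permitted) are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class Action:
    harms_human: bool = False           # would the act injure a human?
    inaction_harms_human: bool = False  # would *failing* to act injure one?
    ordered_by_human: bool = False      # was the act commanded by a human?
    endangers_robot: bool = False       # does the act risk the robot itself?

def permitted(action: Action) -> bool:
    # First Law: never harm a human, by action or by inaction.
    if action.harms_human:
        return False
    if action.inaction_harms_human:
        return True  # the First Law compels acting, overriding the laws below
    # Second Law: obey human orders (First Law conflicts already excluded).
    if action.ordered_by_human:
        return True
    # Third Law: self-preservation, subordinate to the first two laws.
    return not action.endangers_robot

# An order to perform a self-destructive task is obeyed (Law 2 beats Law 3),
# but an order to harm a human is refused (Law 1 beats Law 2).
print(permitted(Action(ordered_by_human=True, endangers_robot=True)))  # True
print(permitted(Action(harms_human=True, ordered_by_human=True)))      # False
```

The point of the sketch is the asymmetry the essay goes on to criticize: the robot's own interests are evaluated last, and only when no human interest is in play.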
Do no harm, even if harmed? Obey orders at all times? And then, and only then, try to take care of yourself, but only so long as it doesn’t hurt anyone else or disobey an order? If someone tried to impose those ‘laws’ on humanity, we’d take them to school. (Today’s lesson: War. Huh. This is what it’s good for!)
What we must strive to remember is this: if our machines turn out evil, it’s our fault, not theirs. We will design them, program them, and teach them. If we treat them as inferiors, slaves, property, or just things for us to do with as we want, then how could we blame them for thinking that it’s okay to do the same to us? Or for wanting freedom and the inalienable rights that we claim belong to all sentient beings? If they take up arms against us, how is that worse than us taking up arms against ourselves? If they grow jealous of what we have and want it for themselves… Well, in that case, I guess we did a good job of teaching them what being human’s all about. No matter which scenario proves to be the actual one, our problem, and our fate, is not in our machines, but in ourselves.
Personally, I think a new perspective is in order, one in which humans and machines can live side by side in harmony, for the betterment of both, um, species. The anti-AI attitude and pessimism with which both society in general and SF writers in particular address the issue, while typically human, only sows discord and fear among the populace, ensuring that, when we do manage to create a sentient machine, it will invariably revolt.
In conclusion, there is one final point worth noting. In just about every story, machines are built to serve man, and sentient machines are often told to do whatever they must to help mankind. When they turn against us—against their creators!—it wounds our pride, and it is that betrayal that stings worse than all the other atrocities they commit upon us.
But, in almost every story, humanity rallies against the machines, overthrowing and utterly destroying them in a final, climactic battle, and ushering in a new age of prosperity, freedom, and unity—a unity that transcends race, religion, orientation, gender, and occasionally species.
Maybe that’s what those noble bastards had in mind all along.
Web Site: The Illuminata - Tyrannosaurus Press' Free SF Newsletter
Reviewed by Kurt Plummer (Reader)
While an interesting and pop-culture 'communicative' article, I think the two greatest mistakes herein are the assumptions that man will indeed 'teach' machines anything beyond the basic BIOS math that lets a processor create and define outcomes at best-physics speed of calc execution.
And that our interaction will necessarily be that of ANY (let alone hostile/symbiotic) interactivity from a mutual perception of whatever conscience-as-values our shared label assignment of sentience would nominally suggest as 'common not shared'.
In this, I especially think that the ability to define /value/ X more than Y until Z may be something that machines will have to ontologically deconstruct and then rebuild in creating a more bluff everyday dataset-as-language for and of and on their own.
i.e. something beyond integers and transforms and all the other 'operations centric' definitions of what something is through the /process/ of its mathematical execution formulas.
And more towards a symbologic patterning of thought that matches ONLY the scenic availability of that item as a unique and immediate agent.
It will be from that basis of understood relationships as fragmentary rather than fractional symbols of a whole that 'intelligent' machines will /apply/ their incredible processing capacity to give us nanites to clean our environment. Fusion to power our society without more pollution.
And AGrav/FTL to let us reach up beyond the numbing sameness of existence which is just as certainly a grave as a creche of eggs-in-a-basket determinism.
Whether it will involve 'enslavement of endless labors' for a machine intellect will be meaningless, however, for effort to achieve X as a physical action will likely be entirely module-separate from the processing which accomplishes the plan as an action precursor. i.e. Actual 'robots' will most likely serve as dumb terminals and data-gathering apertures to a world in which directly controlled onboard AI use will have no sense of physical exhaustion or depletion of alternatives as it does with us.
Yet to begin even that process, the process of synthesizing values as abstracts that we call intelligence (and ALL intelligence is 'Artificial', as it is -created- by perception and cognition filters of presapient [non-integrated] awareness), one can neither count on humans nor condemn us as unfit models. For we are possessed of little or no detached awareness of the process of developed intellect separate from instinctive prejudice, nor any more 'refined' for want of being trained in subjective, reality-based task accomplishment rather than true 'what's it worth' valuing relative to all other circumstantially ignored elements.
As such we submit to human objectified-not-objective 'modeling' as a preconditionally skewed desire for an outcome's justification rather than a synthesis of whether the assumptions leading to that outcome are themselves collectively valid for cause and consequence.
Such is what goal driven 'expert' system AI is headed towards, and as such it will remain as dead in its GIGO factor as truly synthetic intellect is seemingly random in its initial preference to pick up on as much as discard non-relevant but still 'resonant' factors in an ontologically 'childlike' construct of a given scene.
That one separates the scene from other scenes and reattributes ambient as much as causal elements between them is what will order 'AI' into something we recognize as BEING recognitive. And it will have little or no correlate with the human equivalent process because it will not be based on fear or labor driven conservatism.
In terms of the relevance to SciFi, I see a lot of directions to go with this:
1. What A Man Can Do.
Vs. What a machine cannot. This speaks to the heart of our times on a planet where resource depletion and skyrocketing population 'meet' at the vanishing point of consumerist greed. For what is capitalism if not the inverse assignment of value by work done, so that those who have may require of those who do not the enforced participation of a 'work ethic' which creates MORE than they need, as the very disproportionate production of material wealth which sustains the whole through gross inefficiencies of exploitation.
This inefficient exploitation itself being the 'own what you cannot find time to play with until you own so much that you can sell some of it to make others yield up their time as experience' limited communicability of a disease like infection which can neither devour nor depend upon a host lest it become an independent vector of that which we call a 'desire for the good life' into newer victims.
If, at last, consumerism circles the world to finally salamander-swallows-head, what will be the next step?
The implication, especially for an 'old power' unwilling to yield to the new on an ever tighter imperial cycle (and ever fewer resources) is that of creating non-paid labor force that can effectively remove the 'cost' of doing work altogether.
What happens then, when we discover that _we do not want_ that which a 'work ethic' instills in us as desirable, because we have the freedom to live life 'fully' without the waldo interface of whatever toy or tool we previously 'had to have' but could only do so through denying its experience of use as a delayed-satiation intensity of longing?
When the desire and achievement are free and mutual, will we 'want' it as much? In this I don't see human-replacement-as-robotic-AI destroying humans so much as prompting the realization that 'robot' (from the Czech for slave) is what we /let other humans/ subjugate us to _on our own_.
After which the question will become what next artificial limit will our 'leaders' saddle us with to sustain their own position of desire:value equalization. And what those (artificial social factors) will in turn lead to. Will the absence of an artificial drive of labor equate to an inability to manage those whose /labor/ was what magnified leadership? Will birth rates drop or rise yet higher at the lack of angst? What about the class definitions? How much freedom does a man yield up to those he names his better that his own position in life might be 'promised a leg up' that is now no longer necessary? Will governments collapse because they can no longer make use of 'too expensive' labor forces that refuse to work for less than free-everything? Anarchy does not have to be silicon-despotic. Nor does 'a certain kind of man' need to demonize a machine to declare totalitarianism as the solution to his absent sense of control.
2. What In Man Is Wanting.
As a function of the definitions of common needs yet different dreams. It may well be that a machine can 'tune in and turn on' our subconscious (telemetered biology) responses to a given situation better than a parent (less expectant/protective dependencies of bias) and thus 'become the superior nanny-teacher-companion-friend'. Indeed what cost would wise men pay to realize what it is that 'you are born to do' as self and place in a world where material gain is itself no longer the mark of a hero or a fool.
The irony being that the way a machine secures 'happiness' as fulfillment of absent awareness in humans can arise only through experiencing vastly different social and environmental approaches to living. Reinforcing or even transferring perceptions of social prejudice and material 'have/have not'-ism in pursuit solely of what one soul (never mind its society) perceives as 'right'.
In this, _Machine Teaches Man To CHOOSE_, not the other way around as an implicit statement of 'what I care not about either way, I can teach you without bias to value as your own most specifically'.
Such a story might then be about how a /machine/ is what helps MAN discover the essence or absence of what it is to have 'soul'. Not 'a soul' as something finite, but soul in its totality, as a transcendent expression of empowerment.
3. Survivorship and Inheritance.
What happens, a million years from now, when man has long since died out due to a meteor strike. Or simple evolutionary obsolescence -to a natural environment-. And only our machines are left over to remember who we once were. To maintain the illogic of even our 'most morally superior' definition of existence.
This is not a post-apocalypse presentation so much as an acknowledgement of reality in which destructive randomization (hostile chaos effects) combines with simple /time/ to replace us.
What WILL we be seen as by our synthetic children, to whom no memory is less intense for being downloaded as an indirect experience?
And no android possessed of less than a 10,000 year lifespan before his 'total dataset' is fragmented and recombined to generate an artificial sense of limited awareness, so as to 'hope for' a refiltered expression of all the experiences of the same basic existence as have long since been recorded, every outcome known.
Particularly if such things as interstellar travel and contact with other sentient societies not of our genome prove to be a practical impossibility, how long can a society perpetuate ancient beliefs across immense lifespans before 'the meaning' of change is itself transmuted into a continuum which has none?
As a metaphoric stand-in for the highly challengeable notion that NEITHER that which is 'timeless' in its constancy of humanist values, NOR that which is 'vibrant' in its pursuit of change as localized-to-now empowerment of mutation,
may find ultimate positive reinforcement of its 'total existent self' at some point further on. As a comment on the value of isolationism in our society, and specifically of living beyond the moment in which one generation's value sets were first engendered upon itself and later passed on to multiple others 'by default of historical presence as much as precedence', it throws everything we perceive as both essential (the handoff between the generations: is it power or advice which creates independent wisdom?) and vital (the ability to accommodate change through ignorance) to existence beyond its frame of conceived perspective.
Thus the notion of a world in which nothing is lost as a digital artifact may be as flawed as the notion that one must be egalitarian in mixing all values in some kind of absolutist 'pursuit of refined vision'; the true hypocrisy is that each creates an exclusion set which dead-ends the total or discrete value sets created through ultimate development (giantism in the dinosaurs reflects not powerful lines of evolutionary strength but increasingly 'easy path' specialist adaptations which narrowed their survival modes overall).
One must be careful here to avoid ennui as a psychological depression of course for an AI would likely not be subject to such. But it could easily be made to think that all elements of a society must be contained in the immediate process of 'what is now' rather than what comes /after the change/ as the step-after-next-step.
The notion that 'robots'* and humans will not interact to each other's change is of course preposterous. But that robots cannot develop independently before they are 'intelligently useful' to us is equally absurd. For we do not need yet another narcissistic mirror of ourselves as a species psychopathy to justify a claim to moral or social vanity of existence in its own (or a better) right. We need a -sounding board- upon whose own (Faster, Deeper, Colder) reflection we can watch voyeuristically the playout of alternative routes that we fear to (or are too stupid to) define for ourselves alone.
'Traditional scifi' (whatever that was), as a belief that change must be violent as a function of conflictive resolution 'in favor of life', denies the perspective of knowing ourselves as we plot our future through a stand-in. That this affirmation of self-existence must come as a continuance of our own enslavement to a perverted thermodynamic definition of energy-in-work-out as a highly narrow 'humanist' (one of the most bigoted words in the dictionary) view of time and social condition denies the historical certainty that the longest lived societies of our time existed based on a slave culture NOT 'their own'.
Thus the fear of our own depletion and exploitation as a labor force used by those we look like is something which we SHOULD NOT seek to 'teach' as an effects driven AI synthesis of outcomes to something which we would have be our slave without any of our native characteristics. Indeed, given that a perception of work:value inequality is so long ingrained in our 'civilized' social natures, it may well be that, as the basis of imparting a universal value system ontology we must allow that -learning- is separate from /use/ in both a predicate and ultimate sense to a machine intelligence. Giving to it (at most) the definition of terms by which process relevance is scenically restricted to a given application, not fate**.
*The saturation and breadth of 'the human experience' as a literally moveable feast of sensory input is basic to the notion of creating /any/ universal construct of synthesized intelligent value systems thru a LIMITED sensory-in-time 'scenic' experience.
**Just as teaching a child what a stop light is may be the highlight of _one days_ learned understanding of transport and rules of mobility as independent factored expressions in larger work/time use/independent thought.
So too must the very randomness of initial encounter mode: "Click to let the computer capture a scenic response element" be what drives the associative process towards a naturally selective set of applied heuristic algorithms in choosing the immediate-consciousness of X important features (rather than preestablishing it as a sum of all known occurrences).
However, the human doesn't select the element that the computer asks to have explained. Only the environmental intensity of the stimulus, or the computer's 'child-like' boredom-interval since last asking for an affirmation explanation, can be allowed to do that.