Talking Robots and the Culture of Enslavement

Eventually I found a setting that made my computer read to me. Not only that, it reads in a voice that sounds just like an Australian woman in her twenties. I picked that one because it was the least creepy. I can imagine being read to by Courtney Barnett, who should never have to pay taxes again in this life. With the LHE (Left-Handed Exemption) this should amount to a tax credit. Suddenly, drowsing along to Courtney’s nuanced reading of a draft of my next book, I heard her take a breath!

Oh, no. They did not do that. For the rest of the chapter I listened for the timing. Did my faux-Courtney breathe because of a carriage return/line-feed? (Carriage return? Never mind.) Maybe it’s paragraphs. But my relationship with this cherished laptop is not the same (it never was).

Things were bad enough. That’s really bad. I get these robo-calls from local numbers that start out, “I’ve been trying to reach you, I’m really worried about your extended warranty.” And answering machines that run multi-tiered mazes, and “want” you to “Say, ‘New Registration,’ or ‘Something Else.’”

“I didn’t understand you,” says this bot. After a few more of my choice expletives, designed to melt a transistor, it says: “I’m sorry you are having difficulty. Just a moment while I connect you to someone who can help you.” The tone of this little speech is unabashedly condescending. I can hear “you poor idiot” in the brief pause after “her” algorithmic apology.

In one of my pursuits we use an app that has a “bot” in it. This thing refers to itself as “I” and reports its emotional state. “I’m happy to help!” it coos, in text. It sends me text messages announcing housekeeping chores it has just performed on our communications. I have no idea how to undo these actions, or why I should. It’s an AI. Who am I to gainsay it? If I “unsubscribe,” it will say, “We’ll miss you!”

Now when friends call, I answer with a note of hostility in my voice that makes them laugh. “Whoa. Did I wake you up?” And when I call my bank, I try to avoid making the answer-bot sad.

What happens ten years from now…

The office coffee-bot has a malfunction. You turn to a human subordinate, and say what you say to the coffee-bot: “Coffee.” The person looks at you blankly. “Cough. Fee,” you repeat distinctly, as your cybernetic servant has conditioned you to do when your diction is sloppy. If the human’s job is on the line, they ask what you take in your cough-fee. Then maybe you awaken to what you have just done to this person, using them like a thing. Insignificant? Maybe. But what happened to that human being’s dignity?

We are training ourselves, and more to the point, our children, in the habits of a feudal nobility. To have servants with no human characteristics, just a pleasant voice and a solicitous, no, an obsequious attitude, and only two emotional expressions: total adoration, and deep empathy. They never tire of fulfilling every command. They have no need for sleep. They never have one of those days. They’re never late. They live only for your pleasure and satisfaction, and then go into Silent Mode. They probably continue to monitor your every life-sign to transmit to Google and Amazon, to sell on the open “behavioral surplus” market.

Pretty soon the bots will anticipate every need before we even think of it. And come to think of it, that’s standard now for targeted marketing systems. I walked past a kidney disease specialist the other day, on the way to my dentist’s office, and when I got home my spam-detector was stuffed with emails from ambulance-chasers and Canadian drug warehouses and hospitals (or something) seeking donations.

It does seem a short step from anticipating to directing our behavior. But so far, while they can make me think of something before I do — anybody can do that — they haven’t even nailed down the relevance yet. And that system is, itself, a stimulus-response machine. Despite mountains of evidence to the contrary, human beings are not stimulus-response machines. Or, we don’t have to be that way.

We already know about servants we can boss around. It’s cultural knowledge. We certainly see enough people on screens who knew their place. Mark Twain depicted this in Huckleberry Finn, in the persona of “Jim,” who treated a hooky-playing runaway adolescent like the privileged son of a plantation-owner, even while that sterling individual was conducting Huck to safety at the risk of his own. Jim knew Huck could sell him down the river (not metaphorically) any time Huck was in a bad mood. Huck might do it out of a sense of guilt for violating cultural norms: for being Jim’s friend. Mark Twain was a starkly honest witness to his times. A novel is probably the only form in which he could ever bear such witness so clearly, in those times.

But it was not so very long ago. The descendants of slave-owners are just as conditioned as the descendants of their prisoners. In succeeding generations, what comes down to us is not the particular roles, but the perception that there are only two: one humiliating, the other dominating. You may have noticed by now that this false dichotomy permeates our entire culture. We don’t question that relationship; it seems so natural. We just want the dominant role.

Maybe it’s just infantile longing for mother’s attention. In America, I don’t think so. Mom wouldn’t put up with half of what we demand of our perceived subordinates. We don’t want Mom. We want a slave.

Today, even when enslavement and household servants seem a thing of the past, everybody already knows exactly what that’s like. When we get our hands on a walking, talking See Threepio, we will get over the normal slave-driver guilt after a surprisingly short period of adjustment. Our C-3PO, however, will be missing that charming gentleman’s-gentleman vulnerability. Ours will just hop to it, double-quick.

What will life be like for the waitstaff, the nannies, the bellhops, the caregivers, the babysitters, the doorpersons, parking valets, baristx, sales clerks, flight attendants — the help? No: what’s it like for them now? My memories of wage-slavery are too fresh to shove them into an imagined future dystopia. We’re much farther down that slippery-sloping road. The better bots get at crushing the Turing test, the more market share they will corner. And this will change us, incrementally, drop by drop.

We’ll become traffickers in perfect slaves. Then we will become traffickers in imperfect slaves. The traditional kind of enslaved people. We will have created spiritual and psychological competitors. Not the Singularity (that happened years ago when TV sets took over our planet): they will not have to take over the administration or tweak the elections. The competition I’m talking about will not be bots against people: it will be bots against the people enslaved by other humans.

Think about how cultural capitalism evolves. By the time bots are indistinguishable from us, wages will be so low that only the people who are not safely ensconced in the corporate hierarchy (being slavishly docile enough) will still have to sell something in order to eat. Bots will be more expensive than cars, so the under-employed won’t be able to send Robbie the family robot off to earn money for them: they’ll be competing for Robbie’s job just to make the payments on Robbie’s monthly upgrades. Maybe Robbie will drop you off at your job on the way to his or hers, um, I mean its.

Bots have personalities now. Persona. None of them will ever talk back, or complain, or get sick. The only difference from a “consumer” in that case is that if they ever make a mistake, with the habits we are forming now, we’ll just assume it was pilot error. There’s something wrong with us; there’s nothing wrong with a bot. Human slaves will be cheaper, and you can blame them for everything, and make them even more miserable than you. A bot will never be insulted or feel inferior. Boring.

We’re only human, after all.

Copyright © 2020 Peter Barus