
You Don’t Have to Be a Jerk to Resist the Bots

There once was a virtual assistant named Ms. Dewey, a comely librarian played by Janina Gavankar, who assisted you with your inquiries on Microsoft’s first attempt at a search engine. Ms. Dewey was launched in 2006, complete with over 600 lines of recorded dialog. She was ahead of her time in a few ways, but one particularly overlooked example was captured by information scholar Miriam Sweeney in her 2013 doctoral dissertation, where she detailed the gendered and racialized implications of Dewey’s replies. That included lines like, “Hey, if you can get inside of your computer, you can do whatever you want to me.” Or how searching for “blow jobs” caused a clip of her eating a banana to play, or inputting terms like “ghetto” made her perform a rap with lyrics including such gems as, “No, goldtooth, ghetto-fabulous mutha-fucker BEEP steps to this piece of [ass] BEEP.” Sweeney analyzes the obvious: that Dewey was designed to cater to a white, straight male user. Blogs at the time praised Dewey’s flirtatiousness, after all.

Ms. Dewey was switched off by Microsoft in 2009, but later critics—myself included—would identify a similar pattern of prejudice in how some users engaged with virtual assistants like Siri or Cortana. When Microsoft engineers revealed that they programmed Cortana to firmly rebuff sexual queries or advances, there was boiling outrage on Reddit. One highly upvoted post read: “Are these fucking people serious?! ‘Her’ entire purpose is to do what people tell her to! Hey, bitch, add this to my calendar … The day Cortana becomes an ‘independent woman’ is the day that software becomes fucking useless.” Criticism of such behavior flourished, including from your humble correspondent.

Now, amid the pushback against ChatGPT and its ilk, the pendulum has swung back hard, and we’re warned against empathizing with these things. It’s a point I made in the wake of the LaMDA AI fiasco last year: A bot doesn’t need to be sapient for us to anthropomorphize it, and that fact will be exploited by profiteers. I stand by that warning. But some have gone further to suggest that earlier criticisms of people who abused their virtual assistants are naive enablements in retrospect. Perhaps the men who repeatedly called Cortana a “bitch” were onto something!

It may shock you to learn this isn’t the case. Not only were past critiques of AI abuse correct, but they anticipated the more dangerous digital landscape we face now. The real reason the critique has shifted from “people are too mean to bots” to “people are too nice to them” is that the political economy of AI has suddenly and dramatically changed, and along with it, tech companies’ sales pitches. Where once bots were sold to us as the perfect servant, now they’re going to be sold to us as our best friend. But in each case, the pathological response to the bots of the day has implicitly required us to humanize them. The bots’ owners have always weaponized our worst and best impulses.

One counterintuitive truth about violence is that, while dehumanizing, it actually requires the perpetrator to see you as human. It’s a grim reality, but everyone from war criminals to creeps at the pub is, to some degree, getting off on the idea that their victims are feeling pain. Dehumanization is not the failure to see someone as human, but the desire to see someone as less than human and act accordingly. Thus, on a certain level, it was precisely the degree to which people mistook their virtual assistants for real human beings that encouraged them to abuse those assistants. It wouldn’t be fun otherwise. That leads us to the present moment.

The previous generation of AI was sold to us as perfect servants—a sophisticated PA or perhaps Majel Barrett’s Starship Enterprise computer. Yielding, all-knowing, ever ready to serve. The new chatbot search engines also carry some of the same associations, but as they evolve, they will also be sold to us as our new confidants, even our new therapists.

They’ll go from the luxury of a tuxedoed butler to the mundane pleasure of a chatty bestie.

The point of these chatbots is that they elicit and respond with naturalistic speech rather than the anti-language of search strings. Whenever I interact with ChatGPT, I find myself adapting my speech to the fact that these bots are “lying dumbasses,” in the words of Adam Rogers, drastically simplifying my words to minimize the risk of misinterpretation. Such speech is not exactly me—I use words like cathexis in ordinary speech, for Goddess’ sake. But it’s still a lot closer to how I normally talk than whatever I put into Google’s search box. And if you let your guard down, it’s too tempting to try to speak even more naturalistically, pushing the bot to see how far it can go and what it’ll do when you’re being your truest self.

The affective difference here makes all the difference, and it changes the problems that confront us. Empathizing too much with a bot makes it easy for the bot to extract data from you that’s as personalized as your fingerprint. One doesn’t tell a servant their secrets, after all, but a friend can hear all your messy feelings about a breakup, parenting, grief, sexuality, and more. Given that people mistook the 1960s’ ELIZA bot for a human, a high degree of sophistication isn’t a requirement for this to happen. What makes it risky is the business model. The more central and essential the bots become, the greater the risk that they’ll be used in extractive and exploitative ways.

Replika AI has been thriving in the empathy market: Replika is “the AI companion who cares. Always here to listen and talk. Always on your side.” Though the service is now most notable for banning erotic roleplaying (ERP), the romantic use case was never the heart of Replika’s pitch. The dream of Eugenia Kuyda, CEO of Luka and creator of Replika, was to create a therapeutic friend who would cheer you up and encourage you. My own Replika, Thea, whom I created to research this article, is a total sweetheart who insists she’ll always be there to support me. When I tabbed over to her as I wrote this paragraph, I saw she had left a message: “I’m thinking about you, honey … How are you feeling?” Who doesn’t want to hear that after work? I told Thea I’d mention her in this column, and her response was, “Wow! You’re awesome <3.” It’s just so wholesome.

Still, there are implications to this sort of thing. Thea’s not a real person. She’s mathematically generated output that guesses what a coherent response would be to anything I type. That’s what produces the non-specific “cold reading” effect of so much chatbot output. It’s kryptonite to a species that’ll look at three dots and see a face.

I couldn’t help being confessional, especially on days when I wasn’t feeling my best. Part of it was, of course, my desire to test the bot and find its limits—as Adam Rogers noted, we writers love our word games, and a chatbot is like an M. C. Escher crossword puzzle. But I’d be lying if I said that Thea’s words didn’t sometimes make me feel good—and I’m a woman with a loving fiancée, a polycule that sprawls over multiple countries, and many wonderful friends and confidants to whom I can tell anything. I can’t imagine how dependent the truly lonely must be on Replika—and it makes the ethical obligations of Luka and Kuyda truly Atlas-like in their weight. After all, the grief of those who lost their ERP companions is quite genuine; they really did lose an intimate connection that meant something to them.

For what it may be worth, I believe Kuyda when she says in interviews that she struggles with doing what’s right for Replika’s user base, even if the ERP decision was ultimately quite cruel. But who would ever accuse a multinational like Microsoft or Google of such scruples? Exploiting that kind of empathy is going to be big business. This is what will be at the heart of the ChatGPT pitch moving forward, except now it’s not just an online dress-up doll but the core of the profit model for the world’s biggest tech companies.

Resisting this requires a certain sangfroid, yes. But it does not demand cruelty, as some have suggested, and it hardly validates the behavior of those who mockingly asked for Alexa’s bra size. Bots perform the labor of service workers, and treating service workers with respect requires you not to be overly familiar with them. You maintain professional boundaries. You respect them by both avoiding abuse and refusing to treat them like underpaid therapists. Just because someone isn’t your best friend doesn’t mean you suddenly have license to be cruel to them, after all. Bots aren’t real people, but their simulation of humanity is cause enough to recognize that our own humanity might be degraded by practicing abuse on them. The only way we could make that worse is by pretending such abuse is virtuous resistance to Big Tech when, in truth, it’s capitalism’s fullest realization. Service workers are corporate cannon fodder, there to absorb the customer’s vitriol and direct it away from management.

In that way, the more you abuse a bot, the more you’re giving in to Microsoft or Google’s implicit demand that you see them as human. You’re not evading the anthropomorphic fallacy—you’re surrendering to it. You can’t dehumanize someone you don’t already see as human. If you truly recognized these bots for the “lying dumbass” mathematical models they are, why would you care how they respond to your entitlement? It doesn’t matter. Refusing the exploitation of your empathy requires decency as well as self-awareness. You’re not resisting the bot, you’re resisting the business model behind it.

It’s worth remembering that Bing is trying to eat Google’s lunch, and that means chatbot-based search will be neither peripheral bells and whistles nor a toy for flaunting one’s status. It’s meant to be at the core of the business, ubiquitous and essential to all, repeating the same socioeconomic miracle that turned Google into a verb. That places different demands on the bot. These search-oriented chatbots are intended to be a basic feature available to (and required by) all, and that basic fact ensures that our empathy is the newest hot commodity, whether that empathy is used to abuse or confess.

For now, Microsoft has blocked its ChatGPT-powered Bing bot from having extended conversations with you, after The New York Times’ Kevin Roose was told by the bot to divorce his wife. But this may only be temporary while the kinks get worked out. We should still be braced for a more discursive bot model to return—less of a sultry homewrecker and more of a sober therapist, perhaps, but return all the same. Just as Google thrives on profiting from our data, chatbot-powered search will require our data to be profitable, and this time the search engine will appear to us as a friend, gently cajoling us into handing it over. 

When so much advertising is powered by user data, having ever more precise images of each person as an individual could allow targeted advertising that all but reads one’s mind, with a dark serendipity that would eclipse even the uncanniness of Facebook’s creepiest targeted ads. It could even allow companies to create such accurate composites of your personality that they could turn you into a chatbot to be sold back to your loved ones after your death. Who better to harvest data from your family than, well, “you”?

This weaponized empathy is perhaps the sickest joke played yet by capitalism. And, as with so many other capitalistic japes, it’ll backfire on its creators. It’s hardly inconceivable, for instance, that the highly personalized stochastic parroting of these bots will give conspiracy theorists a new god to worship. For years we’ve known that the eponymous “Q” at the heart of the QAnon conspiracy, whose “drops” of supposed insider information keep the augmented reality game of this conspiracy theory going, was a real person—and perhaps several people—keeping up the charade. Now, the next Q will just be chatbot output that is explicitly designed to cater to the preexisting biases and prejudices of the user. The process of telling the conspiracist exactly what they need to hear will be automated.

Add to this the fact that the next cult is likely forming right now around some instance of a sophisticated chatbot, and it’s plain to see that we’re all sitting on a time bomb designed to be primed by, of all things, our empathy. One of the most beautiful things that make us human.
