University of Washington linguist Emily Bender and science-fiction writer Ted Chiang share the stage at Town Hall Seattle. (GeekWire Photo / Alan Boyle)

What do you get when you put two of Time magazine’s 100 most influential people on artificial intelligence together in the same lecture hall? If the two influencers happen to be science-fiction writer Ted Chiang and Emily Bender, a linguistics professor at the University of Washington, you get a lot of skepticism about the future of generative AI tools such as ChatGPT.

“I don’t use it, and I won’t use it, and I don’t want to read what other people do using it,” Bender said Friday night at a Town Hall Seattle forum presented by Clarion West.

Chiang, who writes essays about AI and works intelligent machines into some of his fictional tales, said it’s becoming too easy to think that AI agents are thinking.

“I feel confident that they’re not thinking,” he said. “They’re not understanding anything, but we need another way to make sense of what they’re doing.”

What’s the harm? One of Chiang’s foremost fears is that the thinking, breathing humans who wield AI will use it as a means to control other humans. In a recent Vanity Fair interview, he compared our increasingly AI-driven economy to “a giant treadmill that we can’t get off” — and during Friday’s forum, Chiang worried that the seeming humanness of AI assistants could play a role in keeping us on the treadmill.

“If people start thinking that Alexa, or something like that, deserves any kind of respect, that works to Amazon’s advantage,” he said. “That’s something that Amazon would try and amplify. Any corporation, they’re going to try and make you think that a product is a person, because you are going to interact with a person in a certain way, and they benefit from that. So, this is a vulnerability in human psychology which corporations are really trying to exploit.”

AI tools including ChatGPT and DALL-E typically produce text or imagery by learning statistical patterns from huge collections of existing works, then generating output that looks as if it were created by humans. That artificial genuineness is the biggest reason why Bender stays as far away from generative AI as she can.

“The papier-mâché language that comes out of these systems isn’t representing the experience of any entity, any person. And so I don’t think it can be creative writing,” she said. “I do think there’s a risk that it is going to be harder to make a living as a writer, as corporations try to say, ‘Well, we can get the copy…’ or similarly in art, ‘We can get the illustrations done much cheaper by taking the output of the system that was built with stolen art, visual or linguistic, and just repurposing that.'”

The captions displayed above the heads of the panelists at Town Hall Seattle’s AI forum occasionally generated laughter from the audience, followed by laughter from the panelists. “We’re already in AI hell,” moderator Tom Nissley joked. (GeekWire Photo / Alan Boyle)

Bender gave a thumbs-up to Hollywood writers and actors for winning protections from AI encroachment during this year’s contract negotiations with the studios. But she gave a thumbs-down to journalists who slip AI-produced content into their own work on the sly. (Full disclosure: No AI tools were used in the writing of this report.)

“I have Google Alerts set on the phrase ‘computational linguistics,’ for example,” she said. “I set that years ago as a way to find job opportunities for my students. Starting in November 2022, it kept sending me these news articles where people were writing about ChatGPT, and a remarkable number of them started with some ChatGPT-generated paragraph, unflagged, and then below the fold, ‘Oh, haha, that was written by a machine.’ And I thought, what kind of journalist would sacrifice their integrity like this? And also, how dare you trick me into reading fake text?”

You might argue that Bender has been assimilated into the AI ecosystem merely by using Google Alerts, but she draws a distinction between generative AI and special-purpose technologies such as machine translation and automatic speech-to-text transcription.

“I really appreciate having a spell-checker,” she said. “So, language technology can certainly be valuable.”

Even generative AI might have its place, Chiang said.

“A lot of times, the world calls upon us to generate a lot of bullshit text, and if you had a tool that would handle that, that’d be great,” he said. “Or, I mean, it’s not great. The problem is that the world insists that we generate all this sort of bullshit text. So having a tool that does that for you … that is arguably of some utility.”

Chiang and Bender agreed that generative AI will need regulatory guardrails.

“The guardrails that I’d like to see are things around transparency,” Bender said. “I think that we should all know whenever we’ve encountered synthetic media, it should be immediately apparent to the human eye. It should also be mechanistically encoded so that you could filter it out and not see it at all. I think we need transparency about training data. I think we need transparency about energy use. And on top of that, I would love to see accountability. I would love to live in a world where OpenAI is actually responsible for everything that ChatGPT outputs.”

“I don’t have anything to add to that,” Chiang said.

Other human-generated gems from the Town Hall Seattle chat:

  • Chiang said AI-generated text will be a boon for internet scammers: “That is, I think, an example of this broader problem, of valuable human-generated text being drowned out in a sea of AI-generated nonsense.”
  • AI programs have mastered complex games such as chess and Go, but Chiang noted that it took them millions of trials to gain that mastery. He then pointed to an experiment in which rats learned to drive miniature cars after just 24 trials. By that measure, AI programs are “not as good at skill acquisition as rats are,” Chiang said. “It’s going to be a long time before they’re as good at skill acquisition as humans are.”
  • Chiang acknowledged that AI will make it harder to distinguish between student-written essays and machine-generated text. “It is a gigantic problem that might be insoluble,” he said. “It might be that essay writing has lost its usefulness as a pedagogical tool.”
  • Will AI put authors like Chiang out of business? “It is not at all clear to me that AI-generated text is a game-changer for the prose fiction market,” he said. “In terms of the cost of publishing a book, the amount that you pay the author is only a tiny fraction of that. So you’re not actually saving all that much.” He said generative AI might be useful as a brainstorming tool — and noted that science-fiction author Philip K. Dick used I Ching divination coins for a similar purpose when he wrote “The Man in the High Castle.”
  • Chiang is arguably best-known as the author of the short story that inspired the 2016 movie “Arrival,” which features a linguistics professor as the main character. “Arrival” also reflects a controversial concept in linguistics known as the Sapir-Whorf Hypothesis. So what do linguists think of Chiang’s story? “All my linguist friends are jealous that I get to meet Ted,” Bender said. “I’m going to be speaking at the Linguistic Society of America in January, and I’m already arranging my talk so that I can brag about this.”