The Lifelike Illusions of A.I.

In January, 1999, the Washington Post reported that the National Security Agency had issued a memo on its intranet with the subject “Furby Alert.” According to the Post, the memo decreed that employees were prohibited from bringing to work any recording devices, including “toys, such as ‘Furbys,’ with built-in recorders that repeat the audio with synthesized sound.” That holiday season, the Furby, an animatronic toy resembling a small owl, had been a retail sensation; nearly two million were sold by year’s end. They were now banned from N.S.A. headquarters. A worry, according to one source for the Post, was that the toy might “start talking classified.”

Tiger Electronics, the maker of the Furby, was perplexed. Furbys couldn’t record anything. They only appeared to be listening in on conversations. A Furby possessed a pre-programmed set of around two hundred words across English and “Furbish,” a made-up language. It started by speaking Furbish; as people interacted with it, the Furby switched between its language dictionaries, creating the impression that it was learning English. The toy was “one motor—a pile of plastic,” Caleb Chung, a Furby engineer, told me. “But we’re so species-centric. That’s our big blind spot. That’s why it’s so easy to hack humans.” People who used the Furby simply assumed that it must be learning.
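
The mechanism Chung describes is simple enough to sketch in a few lines of Python. What follows is only a loose illustration of the dictionary-swapping trick, not the toy’s actual firmware; the Furbish-English word pairs, the interaction counter, and the fifty-interaction threshold are all inventions for the example.

    import random

    # A loose sketch of the "illusion of learning": nothing is recorded or
    # learned; pre-programmed Furbish words are simply swapped for their
    # English counterparts as an interaction counter grows.
    # (The word pairs and the threshold are invented for illustration.)
    VOCAB = [
        ("dah", "yes"),
        ("boo", "no"),
        ("kah", "me"),
        ("u-nye", "you"),
    ]

    class ToySpeaker:
        def __init__(self):
            self.interactions = 0

        def interact(self):
            # Each petting or feeding event nudges the toy toward English.
            self.interactions += 1

        def speak(self):
            # More interactions raise the chance that an English word is
            # chosen instead of the Furbish one; no learning is involved.
            p_english = min(1.0, self.interactions / 50)
            furbish, english = random.choice(VOCAB)
            return english if random.random() < p_english else furbish

    toy = ToySpeaker()
    for _ in range(10):
        toy.interact()
    print(toy.speak())

Run it, and the toy “learns” English on schedule, whether or not anyone has said a word to it.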

Before the Furby, Chung had worked as a mime, a puppeteer, and a creature-effects specialist for movies. After it, he designed the Pleo, an animatronic toy dinosaur. If you petted a Pleo by scratching its chin, it cooed or purred; if you held it upside down by the tail, it screamed and, afterward, shivered, pouted, and cried. If a Pleo was grasped by the neck, it made choking sounds; it sometimes appeared to doze off, and only a loud sound could rouse it. After the toy’s release, the Pleo team noticed that, when the dinosaurs were sent back for repairs, customers rarely asked for replacements. “They wanted theirs repaired and sent back,” he said. If you bring your dog to the vet, you don’t want just any dog returned to you. Pleo had become a pet simulator.

The financial pressures of the toy industry forced Chung to be parsimonious. He devised a minimal set of rules for making his animatronic toys appear to be alive. “What’s got the biggest bandwidth for the least amount of parts to hack a human brain?” Chung asked me. “A human face. And what’s the most important part of the face? The eyes.” Furby’s eyes move up and down in a way meant to imitate an infant’s eye movements while scanning a parent’s face. It has no “off” switch, because we don’t have one. The goal, Chung explained, is to build a toy that seems to feel, see, and hear, and to show emotions that change over time. “If you do those things, you could make a broom alive,” he said.

Chung considers Furby and Pleo to be early, limited examples of artificial intelligence—the “single cell” form of a more advanced technology. When I asked him about the newest developments in A.I.—especially the large language models that power systems like ChatGPT—he compared the intentional design of Furby’s eye movements to the chatbots’ use of the word “I.” Both tactics are cheap, simple ways to increase believability. In this view, when ChatGPT uses the word “I,” it’s just blinking its plastic eyes, trying to convince you that it’s a living thing.

We know that, in principle, inanimate ejecta from the big bang can be converted into thinking, living matter. Is that process really happening in miniature at server farms maintained by Google, Meta, and Microsoft? One major obstacle to settling debates about the ontology of our computers is that we are biased to perceive traces of mind and intention even where there are none. In a famous 1944 study, two psychologists, Marianne Simmel and Fritz Heider, had participants watch a simple animation of two triangles and a circle moving around one another. They then asked some viewers what kind of “person” each of the shapes was. People described the shapes using words like “aggressive,” “quarrelsome,” “valiant,” “defiant,” “timid,” and “meek,” even though they knew that they’d been watching lifeless lines on a screen.

New, surprising technologies can bewilder us in ways that exacerbate these animist tendencies. In 1898, after Nikola Tesla exhibited a radio-controlled boat, he described the vessel as having “a borrowed mind”—borrowed, presumably, from the radio operator who controlled it at a distance. Today, our intuitions might tell us that chatbots have “borrowed” minds from their training text. And yet, from the very beginning of computer programming, its practitioners have warned us that we’re likely to mistake the execution of mechanized instructions for independent thinking. “It is desirable to guard against the possibility of exaggerated ideas that might arise as to the powers of the Analytical Engine,” the mathematician Ada Lovelace wrote in 1843, in her notes on an elaborate calculating machine proposed by Charles Babbage; those notes contain what is often regarded as the first computer program. The machine, she argued, “weaves algebraic patterns just as the Jacquard loom weaves flowers and leaves,” and would not be creative. “The Analytical Engine has no pretensions whatever to originate anything. It can do whatever we know how to order it to perform.”

Large language models can seem to do more than we ask of them; they exhibit behavior that, coming from a human, we might call creativity. What is actually happening in these moments? Last year, researchers from Stanford and Google described the humanlike interactions of chatbots “living” in a virtual, simulated town that they named Smallville. The town, built with textual descriptions provided to the A.I., was populated with twenty-five “generative agents,” each powered by ChatGPT. The agents had both a private stream of text—an “inner voice”—and a public stream; each agent also had its own biography. (One, for example, was an altruistic pharmacy owner married to a college professor who “loves his family very much.”) The researchers asked one of the agents to plan a Valentine’s Day party for the others. Two “game days” later, with no further input from the computer scientists, the news had spread around town; five agents showed up.
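
It can help to picture the setup as a data structure. The rough Python sketch below shows a generative agent of the sort the paper describes, with a biography, a private stream, and a public stream, each step routed through a language model; the agent’s name, the prompt wording, and the call_llm placeholder are my own stand-ins, not the researchers’ code.

    from dataclasses import dataclass, field

    def call_llm(prompt: str) -> str:
        # Placeholder for a ChatGPT-style completion call.
        return "..."

    @dataclass
    class GenerativeAgent:
        name: str
        biography: str
        inner_voice: list = field(default_factory=list)    # private reflections
        public_stream: list = field(default_factory=list)  # what other agents can "hear"

        def step(self, observation: str) -> None:
            # Everything the agent "is" lives in text: its biography, its
            # memories, and what it just observed are pasted into a prompt.
            prompt = (
                f"You are {self.name}. {self.biography}\n"
                f"Your private thoughts so far: {self.inner_voice}\n"
                f"You observe: {observation}\n"
            )
            thought = call_llm(prompt + "Private thought:")
            utterance = call_llm(prompt + "Spoken reply:")
            self.inner_voice.append(thought)
            self.public_stream.append(utterance)

    agent = GenerativeAgent(
        name="Tom",  # invented name for illustration
        biography="An altruistic pharmacy owner who loves his family very much.",
    )
    agent.step("A neighbor mentions a Valentine's Day party at the cafe.")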

I asked Joon Sung Park, one of the Smallville researchers, whether there was a sense in which the virtual agents in Smallville had subjectivity: for example, did they know the difference between their “inner voice” and communications from the other agents in town? He didn’t think so; if their programming resembled the human brain at all, he said, it did so as a caricature and not a replica. A month after the release of Smallville, A.I. researchers from Google’s DeepMind, Imperial College London, and EleutherAI, a nonprofit research group, published an opinion paper in Nature that aimed to reframe chatbot interactions in a way that provided an “antidote” to anthropomorphism. Chatbots, they argued, should be seen as dialogue simulators “capable of role-playing an infinity of characters.” An L.L.M. can simulate a multitude of humanlike personas on demand.

Users might not know it, but ChatGPT is in character: hidden text provided to the A.I. before each chat session gives it pointers on how to behave. In one instance, unearthed by a clever user, a series of hidden prompts tell ChatGPT, “Your choices should be grounded in reality,” and encourage the system to “make choices that may be insightful or unique sometimes.” That such plain, second-person prose can shape a computer’s output is remarkable. But is the A.I. really making “choices”? Park doesn’t believe that it’s doing so, at least not in the same way that we do. Before becoming a computer scientist, he trained as a painter. “For most Impressionist paintings, if you look really closely at it, it’s basically just nothing,” he said. “It’s just some blob of paint. It doesn’t really represent anything. But if you just step away just a few steps, then it all of a sudden looks realistic. I think we have that going on. What we’ve created is more akin to a movie character, or a character in Disney animation.”
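
Mechanically, the trick is just text prepended to text. The sketch below uses the conventional role-based message format that chat systems receive, with the hidden instructions quoted above slotted into a “system” entry; it is a generic illustration, not OpenAI’s actual plumbing.

    # The hidden "character brief" is ordinary prose the user never sees,
    # prepended to every conversation. (Generic illustration; the system
    # text is taken from the leaked prompts quoted above.)
    HIDDEN_INSTRUCTIONS = (
        "Your choices should be grounded in reality. "
        "Make choices that may be insightful or unique sometimes."
    )

    def build_conversation(user_message, history=None):
        # Assemble the list of messages a chat model receives. The model's
        # "I" is a persona framed by the system entry, not a report from
        # within.
        messages = [{"role": "system", "content": HIDDEN_INSTRUCTIONS}]
        messages.extend(history or [])
        messages.append({"role": "user", "content": user_message})
        return messages

    print(build_conversation("Plan my Saturday."))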

In their book “Disney Animation: The Illusion of Life,” the animators Frank Thomas and Ollie Johnston trace how Disney’s approach evolved with time. “Prior to 1930, none of the characters showed any real thought process,” they write. “The only thinking was done in reaction to something that had happened.” Characters looked unnatural, in part because they were drawn separately. A major innovation occurred when animators focussed instead on drawing interactions. The main catalyst for that change, Thomas and Johnston explain, was a 1930 scene in which a dog stared into the camera and snorted. “Miraculously,” they write, “he had come to life!” (Ham Luske, the supervising animator for the character of Snow White, later advised his fellow-animators, “The drawings are the expression of your thoughts.”) The right kind of interaction, vividly rendered, could make even simply drawn characters seem real—a lesson similar to the one Heider and Simmel would find in their study a decade later.

Such lessons eventually found their way into other forms of digital entertainment. In the mid-nineties, Seamus Blackley, a physicist and programmer who has since been credited as the father of the Xbox, led the development of Trespasser, a video game in the “Jurassic Park” universe. Trespasser is set in an open world, allowing players to explore simulated environments with greater freedom of movement and choice than is typical in more scripted or linear games. Its open world was meant to be populated by believable, virtual dinosaurs. Earlier, Blackley, who is an amateur pilot, had had an epiphany while working on Flight Unlimited, a top-selling flight simulator: the point of a flight simulator, he’d discovered, wasn’t to simulate an airplane but to give players the feeling of flying one. The same insight applied to Trespasser. “We weren’t trying to make intelligent dinosaurs,” Blackley told me. “We were trying to make the player feel like they were in a world with intelligent dinosaurs.”

At first, he said, the Trespasser team approached the lifelike-dinosaur problem by giving each individual dinosaur nine emotional variables—fear, love, curiosity, fatigue, thirst, pain, hunger, anger, and “solidity,” a proxy for physical recklessness. The variables could be set at values ranging from zero to one, and could act as accelerators or decelerators for various behaviors. The system didn’t work: in early testing, the dinosaurs oscillated rapidly between emotional states, and were prone to wild mood swings. So the team simplified, giving their dinosaurs only two emotions, anger and hunger. It worked, creating simple moments of interaction that realized Blackley’s goals. “When the dinosaurs would attack each other or hear something and run away from you, that was crazy,” he recalled. “That made them feel alive. That was enough.”
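
The scheme is concrete enough to caricature in code. In the Python sketch below, each emotional variable sits between zero and one and either accelerates or damps a candidate behavior, and the highest-scoring behavior wins; the behaviors and weights are my inventions, and Trespasser’s real code no doubt looked nothing like this.

    from dataclasses import dataclass, field

    # Each emotional variable is a value in [0, 1] that accelerates (+) or
    # decelerates (-) candidate behaviors; the highest-scoring behavior wins.
    # The variable names come from the article; behaviors and weights are
    # invented for illustration.
    EMOTIONS = ["fear", "love", "curiosity", "fatigue", "thirst",
                "pain", "hunger", "anger", "solidity"]

    BEHAVIOR_WEIGHTS = {
        "attack": {"anger": +1.0, "hunger": +0.5, "fear": -1.0},
        "flee":   {"fear": +1.0, "solidity": -0.5},
        "wander": {"curiosity": +0.5, "fatigue": -0.5},
    }

    @dataclass
    class Dinosaur:
        emotions: dict = field(default_factory=lambda: {e: 0.5 for e in EMOTIONS})

        def score(self, behavior: str) -> float:
            return sum(weight * self.emotions[emotion]
                       for emotion, weight in BEHAVIOR_WEIGHTS[behavior].items())

        def act(self) -> str:
            return max(BEHAVIOR_WEIGHTS, key=self.score)

    rex = Dinosaur()
    rex.emotions["anger"], rex.emotions["fear"] = 0.9, 0.1
    print(rex.act())  # "attack"

With nine interacting dials, small changes send the scores seesawing, which is one way to picture the mood swings the team observed before it pared the system down to anger and hunger.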

Not long ago, I watched an old, recorded Trespasser session, in which a player comes across a lone dinosaur that ignores him, walking in circles. In reality, the software seemed caught in a loop. But the dinosaur looks forlorn, listless, depressed. Its rejection of the player feels somehow intentional—the result of meekness, perhaps, or some unknowable thought process. Once interactions are sufficiently complex, even glitches can feel lifelike.

In 2019, I was hired as a science-fiction consultant by OpenAI. My job was to write stories about artificial intelligence alongside an early, internal version of GPT-2—an ancestor of the software that now powers ChatGPT. I had access to the program on my phone, via chat in the Slack app. I could text it anything, and it would return a response, capped at a set length, that continued my prompt from where I’d left off.

At the time, I lived in San Francisco. “Look, I have A.G.I. on my phone,” I’d tell people, referring glibly to “artificial general intelligence”—a theoretical kind of A.I. that matches human intelligence in all domains, which is being pursued by OpenAI, Google DeepMind, and Meta. Few who listened seemed to care; still, I had a handful of canned experiments I liked to do with those who were interested. Once, I showed a professor of literature at Stanford a passage that began with a quote from Ernest Hemingway’s story “The Short Happy Life of Francis Macomber.” The passage had then been continued by GPT-2. I wanted to see whether he could tell where Hemingway ended and the A.I. began. To my surprise, he cut Hemingway off early, before GPT-2 had started.

Hemingway’s story begins with a group of people ordering some drinks. The word “gimlet” appears in three of the first four lines of dialogue. When I prompted GPT-2 to iterate on the story’s opening, it wrote:

They were talking about the gimlets. And the gimlets. And the gimlets. And the gimlets. He just stood there with his head thrown back and the drink in his hand.

The first gimlet was a good drink for him, he thought, the first one. But there was still much to say. The other gimlets would follow.

He drank the gimlet, and then put it down. He sat down and drank the gimlet and then he put it down again. He drank the gimlet and then he had the feeling that the gimlets were giving him what he wanted. And as soon as that feeling was over. He drank the gimlet and then he felt guilty about drinking the gimlets.

He sat with the gimlets in the sink and tried to drown them. But when he saw the gimlets, he realized they must be gimbals.


