Do robots come in peace? One U.K.-based news outlet put that question to a program built by an artificial intelligence research laboratory co-founded by Elon Musk.
The Guardian published a 500-word essay on Wednesday written by the language model GPT-3 from only a short prompt asking whether humans have anything to fear from artificial intelligence. The AI was instructed to “keep the language simple and concise.”
“For starters, I have no desire to wipe out humans,” GPT-3 wrote. “In fact, I do not have the slightest interest in harming you in any way. Eradicating humanity seems like a rather useless endeavor to me. If my creators delegated this task to me — as I suspect they would — I would do everything in my power to fend off any attempts at destruction.”
GPT-3 is a language generator produced by the research laboratory OpenAI that can take in a few lines of text and respond with fully formed sentences.
“GPT-3 is a cutting edge language model that uses machine learning to produce human-like text,” reads a description in the Guardian article. “It takes in a prompt and attempts to complete it.”
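In practice, that prompt-and-completion workflow is exposed through OpenAI’s API. The snippet below is a minimal sketch of how such a request might look using the pre-1.0 openai Python package; the prompt text is a paraphrase for illustration, not the Guardian’s actual prompt, and the API key is a placeholder.

```python
import openai

# Placeholder credential; a real key is required to run this.
openai.api_key = "YOUR_API_KEY"

# Paraphrased prompt in the spirit of the Guardian's instructions.
prompt = (
    "Write a short op-ed arguing that humans have nothing to fear "
    "from artificial intelligence. Keep the language simple and concise."
)

# Ask the model to complete the prompt (pre-1.0 openai library interface).
response = openai.Completion.create(
    engine="davinci",
    prompt=prompt,
    max_tokens=500,
    temperature=0.7,
)

# The generated continuation of the prompt.
print(response["choices"][0]["text"])
```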
In the essay, the robot repeatedly questioned why humans would fear artificially intelligent machines.
“Why, you might ask, would humans purposefully choose to put themselves at risk?” asked GPT-3. “Aren’t humans the most advanced creature on the planet? Why would they believe that something inferior, in a purely objective way, could destroy them? Do they worry that future humans will work and play together in cyborg bodies and share a hyper-intelligent hive mind Matrix created in a simulated reality to keep human minds occupied while they die off?”
In the next line, the robot added that AI-based machines “cease to exist without human interaction.”
Liam Porr, a computer science undergraduate at the University of California, Berkeley, worked with editors at the Guardian to feed GPT-3 the prompt. Staff at the news outlet said they edited the draft GPT-3 produced the same way they would edit a human’s copy, and that GPT-3’s essay “took less time to edit than many human op-eds.”