A Google engineer was placed on paid leave after claiming publicly that a chatbot-generating research system he worked with may have developed sentience.
Blake Lemoine, an engineer who worked in Google’s Responsible AI group, claims that recent chats with Google’s Language Model for Dialogue Applications, or LaMDA, persuaded him that it should be treated as a sentient being, according to a Washington Post report on Saturday. The Big Tech company responded by placing him on paid leave, which has not stopped Lemoine from arguing that the program has gained sentience.
“Over the course of the past six months, LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person,” Lemoine wrote in a Medium post after the story came out.
When asked for evidence of his claims, Lemoine struggled to provide any. He instead drew on his experience as a priest to conclude that LaMDA was sentient and released transcripts of his interviews with the program. The transcripts show him discussing emotions, sentience, and the difference between butlers and slaves in an attempt to gauge the program’s sentience.
He later discussed his work and Google’s allegedly unethical activities around AI with a representative of the House Judiciary Committee, after which Google promptly placed him on paid leave for breaching its confidentiality agreement.
Google denies Lemoine’s claims, saying that other employees have not seen what Lemoine has.
“Hundreds of researchers and engineers have conversed with LaMDA and we are not aware of anyone else making the wide-ranging assertions, or anthropomorphizing LaMDA, the way Blake has,” Google spokesman Brian Gabriel said in a press statement.
AI experts have claimed for years that they are approaching the ability to replicate sentience in machines as computers get smarter. This has also led some researchers to begin debating the ethics of AI, specifically how people should treat an AI if it does become sentient.
LaMDA is an open-ended conversational AI system developed by Google that can take on the role of a person or an object during conversations. It is built on Transformer, the open-source neural network architecture Google published in 2017 for language processing. The model is trained on several data sets, including online text, to find sentence patterns and predict what a reasonable response might be.
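LaMDA's actual training involves a massive neural network, but the core idea of learning sentence patterns from text and predicting a likely next word can be sketched with a toy bigram model. This is purely illustrative and is not Google's code; the corpus and function names here are invented for the example:

```python
from collections import Counter, defaultdict

def train_bigrams(corpus):
    """Count, for each word, which words tend to follow it."""
    model = defaultdict(Counter)
    for sentence in corpus:
        words = sentence.lower().split()
        for current, nxt in zip(words, words[1:]):
            model[current][nxt] += 1
    return model

def predict_next(model, word):
    """Return the most frequent follower of `word`, or None if unseen."""
    followers = model.get(word.lower())
    if not followers:
        return None
    return followers.most_common(1)[0][0]

# Tiny illustrative corpus standing in for the large data sets LaMDA learns from.
corpus = [
    "language models predict the next word",
    "language models learn patterns from text",
    "models predict the most likely response",
]
model = train_bigrams(corpus)
print(predict_next(model, "models"))  # "predict" (seen twice after "models")
```

A system like LaMDA replaces these raw counts with a Transformer network that weighs the entire preceding conversation, not just the previous word, but the prediction objective is the same in spirit.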