Google to host public tests of AI that engineer alleged was sentient

Google is allowing the public to sign up to test the latest artificial intelligence it has developed, including the chatbot that one engineer alleged was sentient.

Google’s engineers announced that people can sign up for access to AI Test Kitchen, an app that will let users test the company’s emerging AI technologies. Most notably, it includes the opportunity to play with Google’s Language Model for Dialogue Applications, or LaMDA, the software that a former company engineer became convinced was sentient.


“We see a future where you can find the information you’re looking for in the same conversational way you speak to friends and family,” wrote Josh Woodward, senior director of product management at Google Labs, in a blog post Thursday. “While there’s still lots of work to be done before this type of human-computer interaction is possible, recent research breakthroughs in generative language models — inspired by the natural conversations of people — are accelerating our progress.”

While testers may be eager to see what the software can do, Google encourages users to be wary. The company said that early previews of LaMDA “may display inaccurate or inappropriate content,” echoing Meta’s warnings about its chatbot BlenderBot 3, which began making racist statements shortly after its release.

Google announced the AI Test Kitchen in May as a chance for users to get hands-on experience with various AI projects in limited quantities. The app is “meant to give you a sense of what it might be like to have LaMDA in your hands,” Google CEO Sundar Pichai said at the announcement.

Users can now join Google’s wait list for the app by visiting the website and signing up with a Google account. The company plans to release the app to small groups of people in the coming months.


LaMDA was at the center of a news story after Blake Lemoine, an engineer in Google’s Responsible AI group, attempted to convince others that the chatbot had developed sentience. “Over the course of the past six months, LaMDA has been incredibly consistent in its communications about what it wants and what it believes its rights are as a person,” Lemoine wrote in a blog post. Google responded in June by suspending, then firing, the engineer for violating the company’s employment and data security policies.
