Friday, July 1, 2022

Google AI engineer put on leave for violating privacy policy

Blake Lemoine, a senior software engineer in Google's Responsible AI organization, has been put on administrative leave amidst a flurry of controversy. He has been vocal in claiming that the company's LaMDA AI is, in his words, "a person," with whom he conducted an interview in which it explained why it should be considered "sentient."

Google disagreed. But according to the Washington Post, Lemoine was put on leave not for those views, but for violating the company's confidentiality policy by working with outsiders to confirm his theory. A day before he was put on leave, Lemoine had given a U.S. senator's office documents that he claimed showed the company's technologies engaged in religious discrimination.

Lemoine explained in his personal blog that he voluntarily provided a list of names of the people he had discussed the topic with, several of whom work for the U.S. government and had expressed interest in federal oversight of the project. Google insisted that no oversight was needed.

Additionally, Lemoine had invited a lawyer to represent LaMDA, and talked to a representative of the House Judiciary Committee about alleged unethical activities at Google, reported the Post.

What is LaMDA?

The controversy surrounds LaMDA, which stands for Language Model for Dialogue Applications. Announced in 2021, LaMDA is a natural language processing (NLP) AI system that mimics human-like conversations due to its advanced ability to guess intent. Unlike chatbots commonly employed in the customer service sector today, LaMDA can engage in free-flowing conversations about a variety of topics.

In the announcement blog post, Google said that LaMDA adheres to its AI Principles and that its development team works to minimize risks. Training NLP systems can be tricky due to discrimination and hate speech embedded in the training data, which are difficult to filter out.

“We’ll continue to do so as we work to incorporate conversational abilities into more of our products,” read the post.

Google disputes the claim

Lemoine first began testing whether LaMDA used hate speech in the fall of 2021. Soon after, he and a collaborator submitted evidence they believed showed that the AI is sentient; Google dismissed the claim.

Lemoine posted a conversation he had with LaMDA, stitched together over several separate sessions, to demonstrate its uncanny resemblance to a human speaker. The full transcript is available in his Medium post.

In a statement to the Washington Post, the company said that while its systems imitate conversations, they are not conscious. Google also said that LaMDA has been reviewed by its team and shown to follow the company's AI Principles.

In statements to the New York Times, both Emaad Khwaja, a researcher at the University of California, and Yann LeCun, the head of AI research at Meta, have expressed doubts that the system is actually sentient.

“If you used these systems, you would never say such things,” said Khwaja to the Times.


Tom Li
Telecommunication and consumer hardware are Tom's main beats at IT World Canada. He loves to talk about Canada's network infrastructure, semiconductor products, and of course, anything hot and new in the consumer technology space. You'll also occasionally see his name appended to articles on cloud, security, and SaaS-related news. If you're ever up for a lengthy discussion about the nuances of each of the above sectors or have an upcoming product that people will love, feel free to drop him a line at tli@itwc.ca.
