
Google AI engineer put on leave for violating confidentiality policy


Blake Lemoine, a senior software engineer in Google’s Responsible AI organization, has been placed on administrative leave amid a flurry of controversy. He has publicly argued that the company’s LaMDA AI is, in his words, “a person”, and published an interview in which the system explained why it should be considered “sentient”.

Google disagreed. But according to the Washington Post, Lemoine was put on leave not for those views, but for violating the company’s confidentiality policy by working with outsiders to confirm his theory. A day before he was put on leave, Lemoine had given a U.S. senator’s office documents that, he claimed, showed the company’s technologies engaged in religious discrimination.

Lemoine explained on his personal blog that he voluntarily provided a list of the people he had discussed the topic with, several of whom work for the U.S. government and had expressed interest in federal oversight of the project. Google insisted that no oversight was needed.

Additionally, Lemoine had invited a lawyer to represent LaMDA, and talked to a representative of the House Judiciary Committee about alleged unethical activities at Google, reported the Post.

What is LaMDA?

The controversy centres on LaMDA, which stands for Language Model for Dialogue Applications. Announced in 2021, LaMDA is a natural language processing (NLP) system trained specifically on dialogue, which allows it to pick up on conversational nuance and intent. Unlike the chatbots commonly deployed in customer service today, LaMDA can engage in free-flowing conversations about a wide variety of topics.

In the announcement blog post, Google said that LaMDA adheres to its AI Principles and that its development team works to minimize risks. Training NLP systems can be tricky because discriminatory language and hate speech embedded in the training data are difficult to filter out.

“We’ll continue to do so as we work to incorporate conversational abilities into more of our products,” read the post.

Google disputes the claim

Lemoine first began testing whether LaMDA used hate speech in the fall of 2021. Soon after, he and a collaborator submitted evidence that they believed showed the AI is sentient, which Google dismissed.

Lemoine posted a conversation he had with LaMDA, stitched together from several separate sessions, to demonstrate its uncanny resemblance to a human speaker. The full transcript is available in his Medium post.

In a statement to the Washington Post, the company said that while its systems can imitate conversation, they are not conscious. Google also said that LaMDA has been reviewed by its team and shown to follow the company’s AI Principles.

In statements to the New York Times, both Emaad Khwaja, a researcher at the University of California, and Yann LeCun, the head of AI research at Meta, have expressed doubts that the system is actually sentient.

“If you used these systems, you would never say such things,” said Khwaja to the Times.
