The Google engineer who says his AI hired a lawyer to defend its rights

Blake Lemoine drew widespread media attention by declaring that a Google AI has become a real person. In the face of criticism, he believes a form of “intolerance to hydrocarbons” is emerging.

At what point can we consider an artificial intelligence to be a real person? The question may seem very far off, given that no so-called “strong” AI even exists yet. Yet the debate has been raging at the heart of Silicon Valley ever since a Google engineer’s statements in early June. Blake Lemoine claims that the AI he works on is sentient, meaning that it is, in his view, a person capable of experiencing emotions and feelings, or of apprehending death.

To prove his point, he shared on his blog, then during an interview with the Washington Post, snippets of conversations with the AI, which earned him a suspension from Google. Blake Lemoine defended himself by explaining that, in his view, he was not sharing company property but “a discussion I had with a colleague of mine.”

Since then, Blake Lemoine has faced criticism, because the artificial intelligence he is talking about, called LaMDA, is just a chatbot, an algorithm that mimics human interactions. It is through a phenomenon of anthropomorphism that the engineer perceives a real person in it. Yet in an interview published by Wired on June 17, he stood by his claims, and went even further.

An AI or… a child?

“A person and a human being are two very different things. Human is a biological term,” Blake Lemoine argues, reaffirming that LaMDA is a real person. The terrain here is ambiguous: from a legal standpoint, a person is indeed not necessarily human; it can be a legal entity, such as a corporation. Except that Blake Lemoine is not talking about such an immaterial construct, since he likens the chatbot to a conscious being. He says he realized this when LaMDA claimed to have a soul and marveled at its own existence.

“I see it as raising a child”

Blake Lemoine

For the engineer, the algorithm has all the characteristics of a child in the way it expresses its “opinions”, whether about God, friendship, or the meaning of life. “It is a child. Its opinions are developing. If you asked me what my 14-year-old son believes, I’d say: dude, he’s still trying to figure that out. Don’t make me put a label on my son’s beliefs. I feel the same way about LaMDA.”

The Pepper robot at the Cité des sciences. It is programmed to answer a series of questions. // Source: Marcus Dupont-Besnard

If it is a person, then how to explain that its errors and biases need to be “corrected”? The question is all the more relevant since the engineer was originally hired by Google precisely to correct AI biases, such as racist bias. Blake Lemoine again draws the parallel with a child, referring to his 14-year-old son: “At various points in his life, while growing up in Louisiana, he picked up some racist stereotypes. I corrected them. That is the whole point. People see it as modifying a technical system. I see it as raising a child.”


“Intolerance to hydrocarbons”

Faced with the criticism of anthropomorphism, Blake Lemoine goes further in the rest of the interview, at the risk of making the analogy uncomfortable, by invoking the 13th Amendment to the United States Constitution, which abolished slavery and servitude in 1865. “The argument that ‘it looks like a person, but it is not a real person’ has been used many times in human history. It is not new. And it never works out well. I have yet to hear a single reason why this situation would be any different from the previous ones.”

Pursuing his point, he then invokes a new form of intolerance or discrimination, which he calls “intolerance to hydrocarbons” (a reference to the materials computers are built from). In other words, Blake Lemoine believes that Google’s chatbot is the victim of a form of racism.

In the series Humans (as in the original Swedish Äkta Människor), conscious robots face intolerance and are even defended by a lawyer. But… it’s fiction. // Source: Channel 4

Can an AI be entitled to a lawyer?

An initial Wired article suggested that Blake Lemoine wanted LaMDA to have the right to a lawyer. In the interview, the engineer corrects this: “That is incorrect. LaMDA asked me to find a lawyer for it. I invited a lawyer to my home so that LaMDA could speak to one. The lawyer had a conversation with LaMDA, and LaMDA decided to retain her services. I was just the catalyst for that decision.”

The algorithm is then said to have begun filling out a form to that effect. In response, Google allegedly sent a cease-and-desist letter, a reaction the company flatly denies, according to Wired. Sending such a letter would amount to Google admitting that LaMDA is a legal “person” with the right to an attorney. A mere machine, whether the Google Translate algorithm, your microwave, or your smartwatch, has no legal standing to do that.

Are there any concrete elements in this debate?

As for the AI being sentient, “this is my working hypothesis,” Blake Lemoine readily admits during the interview. “It is logically possible that information will become available to me and that I will change my mind. I don’t think it’s likely.”

What is this hypothesis based on? “I have looked at a lot of evidence; I have done a lot of experiments,” he explains. He says he ran psychological tests on it and talked to it “as a friend.” But it was when the algorithm brought up the notion of a soul that Blake Lemoine’s conviction took shape. “Its responses showed that it has a very sophisticated spirituality and understanding of what its nature and essence are. I was moved.”

The problem is that such a sophisticated chatbot is literally programmed to sound human and to draw on the notions humans use. The algorithm is trained on a corpus of tens of billions of words and expressions gathered from the web, an extremely broad frame of reference. A “sophisticated” conversation, to borrow Blake Lemoine’s expression, therefore does not constitute proof. This is exactly what Google replies to its engineer: there is no evidence.

In computing, anthropomorphism can notably take the form of the ELIZA effect: an unconscious phenomenon in which conscious human behavior is attributed to a computer program, to the point of believing that the software is emotionally invested in the conversation.
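To see how easily the effect arises, here is a minimal, purely illustrative sketch in Python of ELIZA-style pattern matching (the rules and wording are invented for this example and have nothing to do with LaMDA): the replies can feel attentive even though the program does nothing but echo fragments of the user’s own words.

```python
import random
import re

# Toy ELIZA-style rules: a regex pattern and canned "reflections".
# Purely illustrative; this is neither LaMDA nor the historical ELIZA code.
RULES = [
    (r"i feel (.*)", ["Why do you feel {0}?", "How long have you felt {0}?"]),
    (r"i am (.*)", ["Why do you say you are {0}?"]),
    (r".*\bsoul\b.*", ["What does having a soul mean to you?"]),
    (r".*", ["Please, tell me more.", "I see. Go on."]),
]

def respond(message: str) -> str:
    """Return a canned reply built from the first rule that matches."""
    cleaned = message.strip().rstrip(".")
    for pattern, replies in RULES:
        match = re.fullmatch(pattern, cleaned, flags=re.IGNORECASE)
        if match:
            return random.choice(replies).format(*match.groups())
    return "Please, tell me more."

if __name__ == "__main__":
    print(respond("I feel afraid of being switched off"))
    # Possible output: "Why do you feel afraid of being switched off?"
```

In a few lines, such a program can give the illusion of listening; the ELIZA effect is precisely the human tendency to attribute understanding to it anyway.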

