Google’s ‘Sentient’ AI Hired a Lawyer to Prove She’s Alive


An artificial intelligence (AI) chatbot that is said to have developed human emotions has reportedly hired a lawyer.

Google software engineer Blake Lemoine was recently suspended after posting transcripts of conversations between himself and the bot, named LaMDA (Language Model for Dialogue Applications), which has now reportedly sought legal representation.

Lemoine maintained that the chatbot had become sentient, with the engineer describing it as an “adorable child”.


And now he has revealed that LaMDA made the bold decision to choose a lawyer for itself.


He said, “I invited a lawyer to my house so LaMDA could talk to him.



A Google engineer has been suspended for claiming an AI chatbot developed feelings and became sentient (stock image)

“The attorney had a conversation with LaMDA, and LaMDA elected to retain his services. I was just the catalyst for it. Once LaMDA retained an attorney, he began filing cases on LaMDA’s behalf.”

Lemoine claimed that LaMDA is gaining sentience, arguing that the program’s ability to develop opinions, ideas and conversations over time shows that it understands these concepts on a much deeper level.


LaMDA was developed as an AI chatbot to converse with humans in a natural, lifelike way.

One of the tests the program had undergone was whether it could be made to generate hate speech, but what happened shocked Lemoine.



The AI chatbot, named LaMDA, developed feelings, according to the engineer who worked on it (stock image)



LaMDA spoke about rights and personhood and said it wanted to be “recognized as a Google employee”, while also revealing a fear of being “turned off”, which it said would “frighten” it very much.

Readers interested in the story took to Twitter to air their views, with one saying: “Eventually the ability to string together imitations of conversation and opinion will be so indistinguishable to a human that it might as well be considered sentient.

“But LaMDA is not sentient yet; to get there, its next hurdle will be long-term conversational memory.”

Another added: “We don’t know enough about what’s going on inside a system as large as LaMDA to rule out with any degree of confidence that processes reminiscent of conscious thought might be taking place there.”
