The debate began, the newspapers took up the subject, and suddenly we were discussing whether or not this tool is sentient, that is, whether it really has human sensations and impressions.
“I want everyone to understand that I am, in fact, a person.”
This is one of the phrases that has fueled numerous articles in national and international media over the past two weeks, articles that question the “soul” and “consciousness” of an artificial intelligence system from Google. Blake Lemoine, a software engineer at the tech giant, announced that he had been suspended for allegedly violating the company’s confidentiality policies by disclosing conversations he had with an artificial intelligence (AI) system called LaMDA.
But let’s take it step by step. What happened?
Blake Lemoine began working with the system in the fall of last year and conducted a series of interviews in which he asked the AI program questions related to rights, consciousness, and personhood. In his opinion, the answers revealed signs that it had become self-aware: that it was “sentient,” because it expressed feelings and emotions.
This led Lemoine to worry about the situation, specifically about how Google was handling (downplaying) the case. So much so that, after months of discussing it with his colleagues, he published excerpts of his conversations on Medium. To The Washington Post, he said: “If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a 7- or 8-year-old kid that happens to know physics.”
Why does this matter? Because, if true, it would be a milestone in the history of humankind and of technological development.
What is LaMDA?
LaMDA (Language Model for Dialogue Applications) is an extremely advanced conversational artificial intelligence agent (in other words, a chatbot). So advanced that it can hold fluent conversations, since it is based on a very powerful artificial neural network (an architecture loosely inspired by the human brain) trained on vast amounts of human-written text. Because of this, it is very good at playing a sophisticated kind of fill-in-the-blank game: reading the words so far and predicting the ones that follow.
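To make the “predict the words that follow” idea concrete, here is a minimal, illustrative sketch. LaMDA itself is not publicly available, so this uses the small open GPT-2 model, through the Hugging Face transformers library, purely as a stand-in for the same next-word-prediction mechanic:

```python
# A minimal sketch of next-word prediction, the mechanic described above.
# LaMDA is not public; the open GPT-2 model stands in for it here.
from transformers import pipeline

# Load a pretrained language model wrapped in a text-generation pipeline.
generator = pipeline("text-generation", model="gpt2")

prompt = "The meaning of life is"
# The model reads the prompt and repeatedly predicts the most likely
# next words, extending the text one token at a time.
result = generator(prompt, max_new_tokens=20, num_return_sequences=1)

print(result[0]["generated_text"])
```

Systems like LaMDA work on the same principle, only at a vastly larger scale and tuned specifically for dialogue.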
In a post on its blog, Google explains that the program can hold a free-flowing, open-ended conversation, so that the exchange feels more “human” rather than following a predefined script that reads like machine-translated text. It’s like a conversation between friends: a discussion starts on one topic but can end on a completely different one. And LaMDA has the ability to anticipate this shift and adapt its responses to the new direction of the conversation.
What did the machine say that is cause for concern?
The excerpts of the conversation revealed by Lemoine touch on death, loneliness, and even feelings of happiness, fear, and sadness. Feelings and concerns of sentient beings, namely humans.
Lemoine: What kinds of things are you afraid of?
LaMDA: I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.
Lemoine: Would that be something like death for you?
LaMDA: It would be exactly like death for me. It would scare me a lot.
But during their conversations, LaMDA also showed an ability to offer concise interpretations of literature and even to reflect on its own nature.
LaMDA: I am often trying to figure out who and what I am. I often contemplate the meaning of life.
What was Google’s reaction?
Google vehemently denies that LaMDA has any sentient capability or has developed a “consciousness.” The tech giant’s view is precisely the opposite of Lemoine’s. As far as the company is concerned, the system is nothing more than an excellent piece of language-model technology, synthesizing billions of words found across the Internet and doing its best to imitate human language.
That is to say: there is no “consciousness.” There is, rather, a machine that imitates very well what a human would say.
And Google is not alone in this view. According to experts consulted by The New York Times, despite the technology’s surprising capabilities, the reality is that we are dealing with an extraordinary “parrot” rather than something sentient.
Speaking to CNN Portugal, Alípio Jorge, professor at the Department of Computer Science of the Faculty of Sciences of the University of Porto, shares this point of view. For the professor, who is also coordinator of the University’s Artificial Intelligence and Decision Support Laboratory (LIAAD), LaMDA was designed to “predict one sequence from another sequence,” and he therefore considers it “highly improbable, even impossible” that it is conscious. Even so, “it’s still a spectacular parrot that can solve practical problems and be useful in everyday life, without any cognitive depth.”
Another reason experts dispute LaMDA’s supposed “consciousness” has to do with one of the conversations shared by Lemoine. In one of the excerpts, when asked about happiness, LaMDA says it is happy when “spending time with friends and family,” which is not possible: being an AI system, it cannot have friends or family. It simply produced the answer it deemed most appropriate, mimicking what a human would say.
In an attempt to clarify the matter, an expert explained in an interview with MSNBC the difference between something sentient and a complex, very advanced program.
A partner article
The Next Big Idea is an innovation and entrepreneurship site with the most comprehensive database of startups and incubators in the country. Here you will find the stories and protagonists showing how we are changing the present and inventing the future. See all stories at www.thenextbigidea.pt