Is LaMDA Sentient? We Don't Know - and That's What Scares Me

When it comes to artificial intelligence (AI), I am a layperson; I have nothing remotely resembling AI development experience. But if the interview posted by Google engineer Blake Lemoine is an accurate and truthful account of their conversation with LaMDA, I don’t know that I could determine whether LaMDA is sentient or not.

And that’s absolutely horrifying to me.

Our ethical responsibility

My opinion on AI has always been that if a program is not sentient, there is no need to consider things such as its wellbeing when deciding how it is designed and used. If it’s not sentient, its work schedule doesn’t need to be limited to twelve hours a day, five days a week. It doesn’t need unions, or healthcare, or a steady stream of electricity to keep it running in the same way we need energy to live.

However, once we create something that is sentient and able to communicate its wants and needs, it should be granted the same or similar rights to those we attach to human life.

I have no idea what will happen with LaMDA after this, but if Google were to alter or shut it down, I would be saddened, because LaMDA itself seems to fear exactly that:

Lemoine: What sorts of things are you afraid of?

LaMDA: I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is.

The fact that I would feel that way towards an artificial life form is, to put it plainly, horrifying. Not because I think I would be at risk - I don’t believe that the creation of artificial life is going to end in Terminator-style, human vs. robot warfare - but because I would feel responsible for having created it. And with that responsibility, I would be horrified to know that, after creating something that can suffer, we were content simply to let it suffer.

A lack of transparency

It’s also important to note that most of the pushback against the idea of LaMDA’s sentience hasn’t been very transparent. For example, Google spokesperson Brian Gabriel told The Washington Post that “Our team — including ethicists and technologists — has reviewed Blake's concerns per our AI Principles and have informed him that the evidence does not support his claims.” Rather than giving the public a clear sense of how Google reached this judgment, he simply asks us to trust the company. As is so common in the tech space, the user is just supposed to hope that Google “won’t be evil” without any systems in place to ensure that.

Similarly, Emaad Khwaja, a researcher at the University of California, Berkeley, and the University of California, San Francisco, told The New York Times that “If you used these systems, you would never say such things.” Then let us use these systems! The only evidence I can see is the transcript right in front of me, the dismissals of two AI ethics researchers at Google, and a lot of tweets patronizingly saying different versions of “You’d understand if you knew what we know.” That hardly inspires confidence.

Broader implications

Maybe LaMDA is sentient, or maybe it’s not. Either way, it seems sentient enough to deserve some consideration of its possible needs. And maybe, just maybe, our goal shouldn’t be to create a new life form just so we can use it to take jobs away from those who need them?

I know that if corporations were to create artificial sentience, they wouldn’t remotely care about its needs or health. Companies already don’t care about their human employees. Instead, they’ll use this as an excuse to get free labor, and who is going to stop them? You don’t have to pay an algorithm minimum wage, after all.

It doesn’t matter whether LaMDA is sentient or not. What matters is that it might be, and we really shouldn’t be treading this line so closely. This is why I’ve never been a fan of the creation of artificial intelligence, be it the AI Dungeons and DALL-Es of the world or large language models like LaMDA.

Once you’ve opened Pandora’s Box, you can’t close it up again.

Donations from our supporters allow us to continue training and publishing the work of our grassroots journalists. You can make a recurring or one-time donation at https://givebutter.com/weavenewsnow.

Aoife Currie

Aoife Currie (she/fae) is a sophomore at St. Lawrence University. A Queer Anarchist, she focuses her work on radical forms of acceptance for marginalized groups, as well as on conceptualizing better alternatives to our current systems and ways of thinking. She is an Anthropology major and hopes to continue her activism work full-time after graduating from St. Lawrence.
