LaMDA and the Sentient AI Trap

GOOGLE AI RESEARCHER Blake Lemoine was recently placed on administrative leave after going public with claims that LaMDA, a large language model designed to converse with people, was sentient. At one point, according to reporting by The Washington Post, Lemoine went so far as to demand legal representation for LaMDA; he has said his beliefs about LaMDA’s personhood are based on his faith as a Christian and on the model telling him it had a soul.

Discussions of whether language models can be sentient date back to ELIZA, a relatively primitive chatbot built in the 1960s. But with the rise of deep learning and ever-increasing amounts of training data, language models have become more convincing at generating text that appears as if it were written by a person.

Recent progress has led to claims that language models are foundational to artificial general intelligence, the point at which software will display humanlike abilities across a range of environments and tasks and be able to transfer knowledge between them.

Former Google Ethical AI team co-lead Timnit Gebru says Blake Lemoine is a victim of an insatiable hype cycle; he didn’t arrive at his belief in sentient AI in a vacuum. The press, researchers, and venture capitalists all traffic in hyped-up claims about superintelligence or humanlike cognition in machines.

Source: LaMDA and the Sentient AI Trap | WIRED