Last week, a story claiming that one of Google's AI projects had become sentient went viral. Shortly after, the company released a statement denying the allegations made by one of its engineers.
In May 2021, Google presented the Language Model for Dialogue Applications (LaMDA) — a large-scale AI system that can chat, answer questions, and write text in a seemingly human manner. The system has been trained on large collections of text available on the Internet.
At its core, the system recognizes patterns and makes predictions: given the words it has seen so far, it predicts which word is most likely to come next. Owing to the vast amount of text it has been trained on, LaMDA can carry on a flowing conversation on almost any topic.
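To make the idea of "predicting which word comes next" concrete, here is a minimal sketch using simple bigram counts over a toy corpus. This is a drastic simplification for illustration only: LaMDA is a large neural network, not a lookup table, but the underlying task — choosing a likely continuation based on patterns in training text — is the same.

```python
from collections import Counter, defaultdict

# Toy corpus standing in for the web-scale text a real model trains on.
corpus = "the cat sat on the mat and the cat slept on the mat".split()

# Count which word follows each word (bigram statistics).
following = defaultdict(Counter)
for current, nxt in zip(corpus, corpus[1:]):
    following[current][nxt] += 1

def predict_next(word):
    """Return the most frequently observed continuation of `word`."""
    candidates = following.get(word)
    return candidates.most_common(1)[0][0] if candidates else None

print(predict_next("the"))  # a word that often followed "the" in training
```

A real language model replaces the count table with learned neural-network weights and predicts over an entire vocabulary, but the output is still a statistically likely next word, not an expression of understanding.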
Blake Lemoine, the Google engineer who made the claims, was placed on administrative leave after taking the story public. The company says this was because Lemoine violated its confidentiality policy in connection with his concerns.
While Lemoine claims he provided evidence of LaMDA's sentience, Google published a statement denying this. According to the statement, the company's team of experts reviewed the evidence and concluded that it did not support his claims.
The broader AI community and independent experts likewise agree that LaMDA is nowhere near consciousness. Nevertheless, experts warn that society is entering a new phase of technological development in which neural networks will be so well trained that more and more people will believe they are conscious.