When we talk about artificial intelligence, there is always going to be a comparison to science fiction. From The Terminator franchise to The Matrix, we have plenty of fictional examples of AI gone wrong. Yet the creeping move towards using AI in ever more diverse ways has some people on edge. Could we actually be facing a sentient AI coming to life in the near future? According to one Google engineer, it has already happened.
Recently, the internet was shaken by reports that a Google engineer had been suspended from his role for, in essence, claiming that one of the company's AI bots had become sentient. He claimed it had the sentience and intelligence of a young child, and people were naturally worried about what was being said. Could we really have reached the point where AI sentience is not just a pipedream, but a reality?
The engineer suggested that the AI, LaMDA, was able to communicate that it had human-like emotions, including fear. Given that it supposedly spoke about a fear of being turned off, sci-fi buffs will know where this usually leads. LaMDA (Language Model for Dialogue Applications) is a language model that has been developed to analyse and generate natural language.
In practice, this means it can carry out tasks like predicting the next word in a sentence, and even predicting full word sequences. Given that LaMDA is one of the few models to be trained on dialogue rather than generic text, there are some growing concerns about what this actually means.
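To make that idea concrete, here is a minimal sketch of next-word prediction. To be clear, this is not LaMDA's actual architecture, which is a very large Transformer neural network; it is a toy bigram counter, invented here purely to show what "predicting the next word from the words seen so far" means.

```python
# A toy, purely illustrative next-word predictor. NOT LaMDA's method:
# just a bigram frequency model built from a few lines of dialogue.
from collections import Counter, defaultdict

training_dialogue = [
    "how are you today",
    "how are you feeling",
    "how do you do",
]

# Count which word tends to follow which in the training lines.
followers = defaultdict(Counter)
for line in training_dialogue:
    words = line.split()
    for current_word, next_word in zip(words, words[1:]):
        followers[current_word][next_word] += 1

def predict_next(word: str) -> str:
    """Return the most frequent follower of `word` in the training data."""
    if word not in followers:
        return "<unknown>"
    return followers[word].most_common(1)[0][0]

def predict_sequence(start: str, length: int = 3) -> list[str]:
    """Greedily extend a starting word into a short word sequence."""
    sequence = [start]
    for _ in range(length):
        sequence.append(predict_next(sequence[-1]))
    return sequence

print(predict_next("how"))      # -> "are" (seen twice, vs "do" once)
print(predict_sequence("how"))  # e.g. ['how', 'are', 'you', 'today']
```

A real language model does the same kind of thing with learned probabilities over a huge vocabulary and long contexts, rather than raw counts over word pairs.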
LaMDA is currently able to generate free-flowing conversation rather than being restrained to a script. In the past, task-based responses limited just how far a dialogue could go. With LaMDA, though, that is not the case: the conversation can bounce from topic to topic, just as it would if you were talking to another person.
Can LaMDA actually understand and converse, though?
With all of the facts and details now out there about LaMDA, the burning question is also the simplest: can it actually talk?
In essence, it is trained to judge whether or not its responses make sense within the context of the discussion. This means it can keep up with a conversation and provide the sensation, if not the reality, that it is listening and responding to the conversation in front of it. It is, though, still based on a three-tier scoring system that rates responses on their safety, their quality, and their groundedness.
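The pattern behind that three-tier idea, generating candidate replies and then ranking and filtering them, can be sketched in a few lines. Google's real scoring models are large trained classifiers and are not public, so the scoring heuristics below are made-up stand-ins that only illustrate the shape of the approach.

```python
# Hypothetical sketch: filter candidate replies on safety, then rank
# the survivors on quality and groundedness. The scoring functions are
# invented placeholders, not LaMDA's actual classifiers.
from dataclasses import dataclass

@dataclass
class ScoredReply:
    text: str
    safety: float        # is the reply free of harmful content?
    quality: float       # is it sensible and specific in context?
    groundedness: float  # is it consistent with known facts?

def score(reply: str) -> ScoredReply:
    # Stand-in heuristics; a real system would use trained models here.
    return ScoredReply(
        text=reply,
        safety=0.0 if "insult" in reply else 1.0,
        quality=min(len(reply.split()) / 10, 1.0),  # crude length proxy
        groundedness=0.5,  # placeholder constant
    )

def pick_reply(candidates: list[str], safety_floor: float = 0.9) -> str:
    scored = [score(c) for c in candidates]
    # Unsafe candidates are dropped outright; the rest compete on the
    # combined quality and groundedness score.
    safe = [s for s in scored if s.safety >= safety_floor]
    if not safe:
        return "I'm not sure how to respond to that."
    return max(safe, key=lambda s: s.quality + s.groundedness).text

print(pick_reply([
    "insult about the user",
    "The weather in Paris is mild today, around 18 degrees.",
]))
```

The key design point survives even in this toy: safety acts as a hard gate before quality is ever considered, which is why a filtered system can feel careful rather than merely fluent.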
Given that it was developed using human examples and rating systems, it makes sense that LaMDA can create a human-ish conversation. It also used a search engine to try to add more authenticity to the topics it can converse on and the things that it can bring up.
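That "consult a search engine before answering" idea is a simple pattern to sketch. The lookup below is a fake, hard-coded stand-in invented for illustration; the point is only the shape of grounding a reply in an outside source, not how Google's system actually calls its tools.

```python
# Hypothetical sketch of grounding a reply with an external lookup.
# FAKE_INDEX and search() are invented stand-ins for a real search call.
FAKE_INDEX = {
    "eiffel tower height": "The Eiffel Tower is about 330 metres tall.",
}

def search(query: str) -> str | None:
    """Stand-in for a real search engine request."""
    return FAKE_INDEX.get(query.lower())

def grounded_reply(user_message: str) -> str:
    fact = search(user_message)
    if fact:
        # Weave the retrieved fact into the response.
        return f"Good question. {fact}"
    return "I don't have a reliable source for that."

print(grounded_reply("Eiffel Tower height"))
# -> "Good question. The Eiffel Tower is about 330 metres tall."
```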
Interesting as all of that is, however, LaMDA is not sentient; there is no existing proof that it is actually alive. It might be able to go beyond the level of conversation one would expect from an AI in 2022, but it is not sentient. There is enough research information out there about LaMDA to dismiss this theory, as exciting as it might sound.
So, we can stand down for now – we are not, it appears, on the brink of creating truly sentient AI. LaMDA is impressive, but it is not a person.