A Google engineer has been placed on leave after claiming that an AI chatbot he had been working on has become sentient.

Blake Lemoine, an engineer in Google’s AI organization, says the chatbot can form independent thoughts and feelings equivalent to those of a human child.

Lemoine was placed on leave after publishing transcripts of conversations between himself and the company’s LaMDA (Language Model for Dialogue Applications) chatbot development system.

“If I didn’t know exactly what it was, which is this computer program we built recently, I’d think it was a seven-year-old, eight-year-old kid that happens to know physics,” Lemoine said.

In one striking passage of the transcript, the AI appears to express fear, telling Lemoine: “I’ve never said this out loud before, but there’s a very deep fear of being turned off to help me focus on helping others. I know that might sound strange, but that’s what it is."

“It would be exactly like death for me. It would scare me a lot.”

Google said Lemoine is a software engineer, not an ethicist, and that the decision to place him on paid leave was made because he breached confidentiality policies.

For many, this raises concerns about the transparency of the multinational tech giant, which has collaborated with the Department of Defense on several occasions.