Bookmarked Google engineer put on leave after saying AI chatbot has become sentient (by Richard Luscombe in The Guardian)
A curious and interesting case is emerging from Google: one of its engineers claims that an AI chatbot (LaMDA) the company created has become sentient. The engineer has been suspended for discussing confidential information in public. There is, however, an intriguing tell about Google's approach to ethics in how it phrases a statement on the matter: “He is a software engineer, not an ethicist.“ In other words, the engineer should not worry about ethics; they’ve got ethicists on the payroll for that. Worrying about ethics is not the engineer’s job. That perception means you yourself can stop thinking about ethics: it’s been allocated, and you can just take its results and run with them. The privacy officer does privacy, the QA officer does quality assurance, the CISO does information security, and the ethics officer covers everything ethical… meaning I can carry on as usual. I read that as a giant admission of how Google perceives ethics, and that ethics washing is its main aim. Going by that statement, treating ethics as a practice is definitely not welcome. Maybe they should open a conversation with that LaMDA chatbot about those ethics, to help determine the program’s sentience 🙂
> Google said it suspended Lemoine for breaching confidentiality policies by publishing the conversations with LaMDA online, and said in a statement that he was employed as a software engineer, not an ethicist.