Bookmarked Google engineer put on leave after saying AI chatbot has become sentient (by Richard Luscombe in The Guardian)

A curious and interesting case is emerging from Google: one of its engineers claims that an AI chatbot (LaMDA) the company created has become sentient. The engineer has been suspended for discussing confidential information in public. There is, however, an intriguing tell about Google’s approach to ethics in how it phrases its statement on the matter: “He is a software engineer, not an ethicist”. In other words, the engineer should not worry about ethics; they’ve got ethicists on the payroll for that. Worrying about ethics is not the engineer’s job. That perception means you yourself can stop thinking about ethics: it’s been allocated, and you can just take its results and run with them. The privacy officer does privacy, the QA officer does quality assurance, the CISO does information security, and the ethics officer covers everything ethical… meaning I can carry on as usual. I read that as a giant admission of how Google perceives ethics, and that ethics washing is its main aim. Treating ethics as a practice is definitely not welcomed, going by that statement. Maybe they should open a conversation with that LaMDA chatbot about those ethics, to help determine the program’s sentience 🙂

Google said it suspended Lemoine for breaching confidentiality policies by publishing the conversations with LaMDA online, and said in a statement that he was employed as a software engineer, not an ethicist.

13 reactions on “Google Says Ethics Is For Ethicists”

  1. @ton A reason I would never work for a company like this. I already refused, in a previous company I worked for, to track users in a certain way which would allow them to pinpoint certain individuals. Ethics is, or should be, one of the competencies of a software engineer.

  2. @ton From the Guardian link: “Lemoine, an engineer for Google’s responsible AI organization”. So you have an AI organization and a *responsible* AI organization? Some time ago I noticed that at some AI award ceremony there was a (special) category, “AI for good”, which I read as “AI is evil by default”.

  3. @ton google, because following orders and blind obedience turned out so well at Nuremberg… Incidentally, even if I do consider the engineer wrong, Fuck @google for the way its management thinks, acts, and conducts business.

  4. @ton So the big issue as I see it isn’t that they had a software engineer doing AI ethics, but that they had a credulous idiot doing AI ethics. Shows how seriously Google takes ethics.

  5. @ton Yes, always keep in mind that as a software engineer you are the last line of defense. If you implement it, it is in the world. Your decision whether or not to build something decides how the world changes, however small that might seem. The ethicist’s job should be to be there so software engineers can ask questions like “what will happen if we do it this way?” (and receive answers), but not “is it OK to do it this way?”

  6. @tychosoft @ton @google I expect that in this case the claims about sentience are a hoax, but it is indicative of how Google treats emerging ethics issues. That is, that they need to be appropriately ethics-washed by official Google appointees. Ordinary engineers can’t do ethics, according to them.

  7. @bob Yeah, the case itself is irrelevant in my observation. What is concerning is that in their communications they let slip that their ethics work is blue washing. Which chimes with earlier data points and with a few ethicists I have known within Google.
