The news: Google placed an engineer on leave for violating company confidentiality policy after he claimed an AI system had consciousness.
The trouble with AI: Google seems to have a particularly fraught relationship with its AI team. Former Google ethicists Timnit Gebru and Margaret Mitchell, who were both fired after voicing concerns about AI, warn that although LaMDA isn't sentient, Google's building of systems that can impersonate humans is itself harmful, per the Post.
An AI’s convincing demonstration of human-like awareness that’s difficult to refute can prompt a strong emotional reaction in people who may want to forge relationships with it or fight for its rights.
Why it's worth watching: AI has been advancing at a rapid pace, including in the subfield of natural language processing (NLP), which grants systems like LaMDA human-like conversational qualities that some believe are pushing the technology closer to self-awareness.
The bigger picture: AI's many issues—such as bias, cybersecurity vulnerabilities, and gray areas around sentience—mean Big Tech has a social responsibility to be transparent about the technology and accountable for its adverse consequences.
Further reading: Take a look at our Conversational AI report.