
Google Drops Pledge To Not Use AI To Create Weapons

By Julia Conley

Weeks into U.S. President Donald Trump's second term, Google has removed from its Responsible AI Principles a commitment to not use artificial intelligence to develop technologies that could cause "overall harm," including weapons and surveillance — walking back a pledge that employees pushed for seven years ago as they reminded the company of its motto at the time: "Don't be evil."

That maxim was deleted from the company's code of conduct shortly after thousands of employees demanded Google end its collaboration with the Pentagon on potential drone technology in 2018, and this week officials at the Silicon Valley giant announced they can no longer promise to refrain from AI weapons development.
James Manyika, senior vice president for research, technology, and society, and Demis Hassabis, CEO of the company's AI research lab DeepMind, wrote in a blog post on progress in "Responsible AI" that in "an increasingly complex geopolitical landscape ... democracies should lead in AI development, guided by core values like freedom, equality, and respect for human rights."
"And we believe that companies, governments, and organizations sharing these values should work together to create AI that protects people, promotes global growth, and supports national security," they said.
Until February 4th, Google pledged that "applications we will not pursue" with AI included weapons, surveillance, technologies that "cause or are likely to cause overall harm," and uses that violate international law and human rights.