Google pledges not to use artificial intelligence for weapons or surveillance
09 June 2018, 09:49 | Austin Hogan
Google's principles say it will not pursue AI applications intended to cause physical injury, that enable surveillance "violating internationally accepted norms of human rights", or that present a greater "material risk of harm" than countervailing benefit.
This commitment follows protests from staff over the United States military's research into using Google's vision recognition systems to help guide drones.
After recent backlash from employees and internet users, Google announced that its contract with the Pentagon to develop machine learning (ML) algorithms that can be used to identify drone targets will not be renewed when it expires next year. However, the company says it will continue working with the U.S. government and military on other technologies.
It's interesting that Google invoked international human rights norms here: the United Nations' Special Rapporteur recently called on technology companies to build international human rights standards into their products and services by default, rather than applying their own filtering and censorship rules, or even the censorship rules of certain local governments.
In April, more than 3,000 Google employees signed a letter to chief executive Sundar Pichai protesting the company's work on Project Maven, the New York Times reported.
"These are not theoretical concepts; they are concrete standards that will actively govern our research and product development and will impact our business decisions", Google CEO Sundar Pichai wrote in a blog post. The company said on Thursday that if the principles had existed earlier, it would not have bid for Project Maven, and that its AI technologies should be made available only for purposes that fall in line with them.
Peter Asaro, vice chairman of the International Committee for Robot Arms Control, said this week that Google's withdrawal from the project was good news because it slows a potential arms race over autonomous weapons systems. He praised the principles' commitment to building socially beneficial AI, avoiding bias, and building in privacy and accountability, but added that to improve upon them, Google should commit to independent and transparent review to ensure the rules are properly applied. Google is widely seen as a potential contender for a massive contract to move Defense Department systems to cloud servers. But AI is also key to the company's future ambitions, many of which involve ethical minefields of their own, including its self-driving Waymo division and Google Duplex, a system that mimics a human voice over the phone to make dinner reservations.
Over 4,000 Google employees ended up protesting Google's involvement with the Pentagon, saying in an open letter that Google should not be in the "business of war".