Autonomous killer robots were on the agenda at RightsCon this week, at the Beanfield Centre in Toronto. But this wasn’t a sci-fi convention.
Delegates to the global digital human rights conference were told that all the technology needed to build swarms of autonomous lethal machines already exists today. Without inventing any new technology, someone could build self-piloting drones equipped with explosives and artificial intelligence object-recognition systems to identify and destroy specific targets.
The situation is serious enough that Stanford University researcher Todd Davies said during a panel discussion that governments should be working towards international treaties to outlaw autonomous weapons systems, similar to the international ban on landmines, or the nuclear non-proliferation treaty.
And it’s not just killer robots: at RightsCon, there were more than 20 panels dealing with the technical, social and human rights implications of artificial intelligence.
As AI begins to creep into every aspect of our lives, human rights advocates worry that it will be deployed in ways that undermine human rights.
Several people who spoke to the Financial Post said that if companies don’t take this seriously, they risk becoming the next Cambridge Analytica, the firm at the centre of a scandal over its perceived misuse of personal data.
When AI systems are deployed to moderate online comments, as Facebook Inc. currently does, poor implementation has the potential to undermine free speech.
Machine learning systems rely on huge data sets to learn how to do a specific task, and if the data is biased based on race or gender, it can lead to automated systems that perpetuate discrimination.
Moreover, because of the way machine learning systems work, technical challenges and policy choices make it difficult for activists to assess whether the systems are discriminatory.
“Artificial intelligence is coming, and if we don’t do something now, then there could be terrible results,” said Drew Mitnick, policy counsel for Access Now, the non-profit that organizes RightsCon.
“I think we’re freaked out by the possibility of what could happen, if we’re not careful.”
Mitnick was involved in drafting the Toronto Declaration, an 11-page document published as part of RightsCon, aimed at setting standards for non-discrimination in AI systems.
All this is food for thought for Gawain Morrison, CEO and co-founder of Sensum Co., a small Irish company that uses AI systems to measure human emotions and biometrics.
Morrison said it’s tricky to navigate the moral issues, and he’s made the conscious decision not to do business with casinos or surveillance companies.
Morrison sees it as a potential commercial advantage to avoid dabbling in the creepiest, most morally fraught applications of AI systems.
“We have made judgement calls as a business of industries that we won’t work in, and we won’t do business in, but we’ve also had to survive as a company,” he said. “It’s moral in the first place, and then you have to justify a decision from a financial point of view.”
These issues are also front-of-mind for Matthias Spielkamp, executive director of Algorithm Watch, a non-profit that studies automated decision-making systems and advocates for more transparency.
Spielkamp said that he sees the human rights issues associated with AI systems as something that society will be grappling with for decades.
“We had the same challenge when we were looking at the development of the car,” he said.
“It took decades before we had a grip on that — how to regulate that, including putting up street signs and traffic lights and stuff like that.”
• Email: firstname.lastname@example.org | Twitter: jamespmcleod