On a summer night in Dallas in 2016, a bomb-handling robot made technological history. Police officers had attached C-4 explosive to it, steered it near an active shooter and detonated it. Micah Xavier Johnson became the first person in the U.S. to be killed by a police robot.
Then-Dallas Police Chief David Brown called the decision sound. Johnson had fatally shot five officers and wounded nine others, as well as two civilians.
But some robotics researchers were troubled. "Bomb squad" robots are marketed as tools for safely disposing of bombs, not for delivering them to targets. Their profession had supplied the police with a new form of lethal weapon, and in its first use as such, it had killed a Black man.
Like most police robots in use today, the Dallas device was a straightforward remote-control platform. But more sophisticated robots are being developed with algorithms for, say, facial recognition, or for deciding on their own to fire projectiles. Many of today's algorithms are biased against people of color and others who are unlike the white, male, affluent and able-bodied designers of most robot systems. In the future, critical decisions might be made by a robot, one created by humans, with their flaws in judgment baked in.
"It is disconcerting that robot peacekeepers, including police and military robots, will, at some point, be given increased freedom to decide whether to take a human life," wrote Ayanna Howard, a robotics researcher at Georgia Tech, and her colleague Jason Borenstein.
During the past decade, evidence has accumulated that "bias is the original sin of AI," Howard noted in her 2020 audiobook, "Sex, Race and Robots." Facial-recognition systems have been shown to be more accurate in identifying white faces than those of other people. (In January, one system told the Detroit police that it had matched photos of a suspect with the driver's license of a Black man with no connection to the crime.)
Joy Buolamwini, the founder of the Algorithmic Justice League and a graduate researcher at the MIT Media Lab, has encountered interactive robots at two laboratories that failed to detect her. (At MIT, she wore a white mask in order to be seen.)
The long-term solution is "having more folks that look like the United States population at the table when technology is designed," said Chris S. Crawford, a professor at the University of Alabama. Algorithms trained mostly on white male faces (by mostly white male developers) are better at recognizing white males.
"I personally was in Silicon Valley when some of these technologies were being developed," he said. More than once, he added, "I would sit down and they would test it on me, and it wouldn't work. And I was like, 'You know why it's not working, right?' "
Robot researchers are typically educated to solve technical problems, not to consider societal questions. So it was striking that many signed statements declaring themselves responsible for addressing injustices in the lab and outside it. They committed to actions aimed at making the creation and use of robots less unjust.
"I think the protests in the street have really made an impact," said Odest Chadwicke Jenkins, a roboticist and AI researcher at the University of Michigan.
Jenkins was one of the lead organizers and writers of a manifesto signed by nearly 200 Black scientists in computing and more than 400 allies. It describes Black scholars' personal experience of "the structural and institutional racism and bias that is integrated into society, professional networks, expert communities and industries."
The open letter is linked to action items: not placing all the work of "diversity" on minority researchers, directing at least 13% of the funds spent by organizations and universities to Black-owned businesses, and tying metrics of racial equity to promotions. It also asks readers to support organizations dedicated to advancing people of color in computing and AI, including Black in Engineering, Data for Black Lives, Black Girls Code, Black Boys Code and Black in AI.