In a news conference this week, Attorney General Eric Holder was asked what he planned to do to increase the Obama administration's transparency with regard to the drones program. "We are in the process of speaking to that," Holder said. "We have a rollout that will be happening relatively soon."

Due to the program's excessive secrecy, few solid details are available to the public. Yet, as new technologies come online — on Tuesday, the Navy launched an unmanned stealth jet from an aircraft carrier — new concerns are emerging about how the U.S. government may use drones.

The X-47B, which can fly without human input, is a harbinger of what's to come. A growing number of international human-rights organizations are concerned about the development of lethal autonomy — that is, drones that can select and fire on people without human intervention. But as the outcry over this still-hypothetical technology grows, it's worth asking: Might the opposite be true? Could autonomous drones actually better safeguard human rights?

Last month, Christof Heyns, the U.N. special rapporteur on extrajudicial, summary or arbitrary executions, released a major report calling for a pause in developing autonomous weapons and for the creation of a new international legal regime governing future development and use. Heyns asked whether this technology can comply with human-rights law and whether it introduces unacceptable risk into combat.

The U.N. report is joined by a similar one issued last year by Human Rights Watch. HRW argues that autonomous weapons take humanity out of conflict, creating a future of immoral killing and increased hardship for civilians. The organization calls for a categorical ban on all development of lethal autonomy in robotics and is spearheading a new global campaign to that end.

That is not as simple as it sounds. "Completely banning autonomous weapons would be extremely difficult," Armin Krishnan, a political scientist at the University of Texas at El Paso who studies technology and warfare, told me. "Autonomy exists on a spectrum."

If it's unclear where to draw the line, then maybe intent is a better way to think about such systems. Lethally autonomous defensive weapons, such as the Phalanx missile-defense gun, already decide on their own when to fire. Dodaam Systems, a South Korean company, even manufactures a machine gun that can automatically track and kill a person from two miles away. These stationary, defensive systems have not sparked the outcry autonomous drones have. "Offensive systems, which actively seek out targets to kill, are a different moral category," Krishnan explains.

Yet many experts are uncertain whether autonomous attack weapons are necessarily a bad thing, either. "Can we program drones well? I'm not sure if we can trust the software or not," Samuel Liles, a Purdue professor specializing in transnational cyberthreats and cyberforensics, wrote in an e-mail. "We trust software with less rigor to fly airliners all the time."

The judgment and morality of individual humans certainly isn't perfect. Human decision-making is responsible for some of the worst atrocities of recent conflicts. On the American side alone, massacres — Marines killing 24 unarmed civilians in Haditha, Marine special forces shooting 19 unarmed civilians in the back near Jalalabad — speak to the fragility of human judgment about the use of force.

Yet, machines are not given the same leeway: Rights groups want either perfect performance from machines or a total ban on them.

Humans get tired, they miss important information, or they just have a bad day. Even without machines making any decisions to fire weapons, humans are already shooting missiles into crowds of people they cannot identify, in so-called signature strikes. When a drone is used in such a strike, it means an operator has identified some combination of traits — a "signature" — that makes a target acceptable to engage. These strikes are arguably the most problematic use of drones: the U.S. government keeps the criteria tightly classified and has announced that it will consider all "military-aged males" who die to be combatants unless proven otherwise.

A machine could, conceivably, result in fewer casualties and less harm to civilians. That doesn't mean machines will always behave so reliably. Machine learning, a branch of artificial intelligence in which computers adapt to new data, poses a challenge if applied to drones. "I'm concerned with the development of self-programming," Krishnan says. "As a self-programming machine learns, it can become unpredictable."

Such a system doesn't exist now and won't for the foreseeable future. Moreover, the U.S. government isn't looking to develop complex behaviors in drones. A Pentagon directive published last year says, "Autonomous and semiautonomous weapon systems shall be designed to allow commanders and operators to exercise appropriate levels of human judgment over the use of force."

So, if problematic development of these types of weapons is already off the table, what is driving the outcry over lethal autonomy?

It's difficult to escape the science-fiction aspect of this debate. James Cameron's "Terminator" franchise is a favorite image critics conjure up to illustrate their fears. Moreover, the concern seems rooted in a moral objection to the use of machines per se: that when a machine uses force, it is somehow more horrible, less legitimate and less ethical than when a human uses force. It isn't a complaint fully grounded in how machines, computers and robots actually function.

In many cases, human rights would actually benefit from more autonomy. If something goes wrong, culpability can be more easily established. From a legal standpoint, countries remain bound by international human-rights law and the laws of armed conflict whether a drone has a human operator or not. But unlike the lengthy investigations, inquests and trials required to unravel why a human made a bad decision, making that determination for a machine can be as simple as plugging in a black box. If an autonomous drone does something catastrophic or criminal, liability for those responsible should be firmly established.

The issue of blame is the trickiest one in the autonomy debate. Rather than throwing one's hands in the air and demanding a ban, as rights groups have done, why not simply point blame at those who employ the technology? If an autonomous Reaper fires at a group of civilians, then the blame should start with the policymaker who ordered it deployed and end with the programmer who encoded the rules of engagement.

Making programmers, engineers and policymakers legally liable for the autonomous weapons they deploy would break new ground in how accountability works in warfare. But it would also create incentives that make firing weapons less likely — surely the result so many rights groups want to achieve.

Joshua Foust is a national-security columnist for PBS and the editor of the Central Asia blog Registan.net. His website is joshuafoust.com. He wrote this article for Foreign Policy magazine.