Would you trust John Yoo?

Noting that the U.S. military was developing armed, autonomous robots to serve as battleground soldiers in our wars, a 2015 New York Times article asked an important question: “Can they learn to make moral choices?”

But wait, what about us humans — when it comes to war, can we learn to make moral choices? Remember that President George W. Bush chose to torture al Qaeda suspects (including innocent civilians) by waterboarding them. Where was the morality in that? It was, of course, immoral. But, at the request of the White House, an ambitious, eager-to-please lawyer in the Justice Department wrote a memo that whitewashed waterboarding, summarily decreeing that such torture was not, technically, torture.

The author of that memo sanctioning Bush’s immoral warfare was John Yoo. His name is relevant to the current question about robot morality because Yoo is now at the American Enterprise Institute — a nest of far-right, neo-con war hawks — where he’s become a leading booster of turning robots into our killing machines. In a September Wall Street Journal article, Yoo exults that, unlike humans, robots won’t get fatigued in battle or become “emotionally involved” in the business of killing humans. Not merely cold-blooded warriors, these efficient machines are no-blooded — plus, they’re much cheaper than a flesh-and-blood army.

But, you might ask, what if they go rogue, turning into an army of rampaging “Terminators” and using their artificial intelligence against us civilians? Tut-tut, says Yoo, admonishing us to “have more confidence in our ability to develop autonomous weapons within the traditional legal and political safeguards.”

Huh? Come on, John — you’re the guy who carelessly, flagrantly and immorally violated those very safeguards in your torture memo! We’re to trust you? No thanks.

This opinion column does not necessarily reflect the views of Boulder Weekly.