A.I. leaders (naively) vow no lethal autonomous weapons

More than 160 companies with divisions dedicated to advancing artificial intelligence have just signed a pledge to “neither participate in nor support the development, manufacture, trade, or use of lethal autonomous weapons,” or LAWS.

That’s nice; very peace-keeping-ish. But that’s also a bit naive.

Artificial intelligence is, after all, a global business and a highly competitive global pursuit. Tracing which bit of technological genius ended up in which piece of autonomous weaponry could easily devolve into the proverbial hunt for a needle in a haystack.

What seems innocuous now could one day be used for aggression.

On top of that, it’s not the companies themselves that citizens need to fear. It’s the companies’ technology falling into the hands of the wrong governments.

Google DeepMind scientists may refuse to construct autonomously operated weapons, and they did refuse: they signed the agreement. But technology’s biggest enemy, generally speaking, is a determined hacker. The pact may earn its participants some back-patting, but China’s companies aren’t joining, and China is home to some of the busiest government-backed hackers in the world.

Think of it this way: A simple agreement among gentlemen holds little meaning among thieves.

University College London, the XPRIZE Foundation, the Swedish A.I. Society, British MP Alex Sobel and techno-world bigwig Elon Musk all joined Google in vowing to stay far away from LAWS: those weapons that use artificial intelligence to identify and kill, absent a human’s final decision. And on one hand, that’s good. Machines ought not be allowed the freedom to determine a kill; humans ought not take themselves off the ethical and moral hooks for making this final, fatal decision.

But on the other hand: Some countries, some governments, some political leaders, some regimes hostile to America and the West, are indeed pursuing this technology as a wartime asset. For free societies, the national security implications of not having this technology could very quickly supplant the ethical concerns against developing this weaponry.

A worst case? The very companies that shudder at the thought of building LAWS may themselves one day fall victim to the governments that don’t. Wouldn’t it make sense to have the technology, even if it’s not used? Peace through strength.

After all, a world where wars are fought by rules of fair play is alluring. But it’s important not to be naive, not to underestimate the enemy, and not to turn a blind eye to the reality that wars are more often waged with win-at-all-costs attitudes.

First appeared at The Washington Times.
