Killer robot swarms, an update
So, you think killer robots are scary? Try an entire swarm of them.
It’s no secret that militaries around the world are competing to develop the smartest weapons.
But AI in warfare doesn’t necessarily mean high-powered brains — it can also be a blizzard of dumb-ish little vehicles overwhelming an enemy. Vladimir Putin, in a speech about AI warfare several years ago, predicted that “when one party’s drones are destroyed by drones of another, it will have no other choice but to surrender.”
So where’s the Pentagon on this? Developing an effective drone swarm — a group of autonomous drones that can communicate to achieve a goal — is “without a doubt a priority” for the U.S. military, Elke Schwarz, author of “Death Machines: The Ethics of Violent Technologies,” told Digital Future Daily.
The Pentagon doesn’t openly discuss many of its most advanced technologies, but last year it called for proposals from the defense industry for a new program called AMASS, for Autonomous Multi-Domain Adaptive Swarms-of-Swarms. The goal: to develop the ability to deploy thousands of autonomous land, sea and air drones to overwhelm and dominate an enemy’s area defenses, according to recently updated documents.
As for where they’d send such a swarm — officials haven’t named names, but observers reading between the lines suspect officials envision deploying one in the event of a Chinese invasion of Taiwan. (A Pentagon spokesperson did not immediately respond to a request for comment.)
“I am not surprised that DARPA and DoD are working on this considering they are in a tech race with China, which also has its own swarm accomplishments to date,” the Center for a New American Security’s Samuel Bendett told Digital Future Daily. Last week, the Hudson Institute’s Bryan Clark also called for the U.S. to challenge China with drone swarms.
The AMASS program isn’t the first time the Strategic Technology Office of the Defense Advanced Research Projects Agency — better known as DARPA — has looked into using autonomous drone swarms to gain an upper hand. Six years ago, the agency launched a separate effort, the OFFensive Swarm-Enabled Tactics (OFFSET) program, which aimed to perfect the use of swarms to assist Army ground forces.
Last year, six months after the Pentagon ran its final OFFSET test, a top DARPA official told FedScoop that it could be possible for the U.S. military to launch swarms of up to 1,000 drones within the next five years.
So far, the number of real-life military drone swarms known to have been deployed stands at one: In 2021, Israel sent a fully autonomous swarm of small drones to locate, identify and attack Hamas militants in concert with missiles and other weapons.
Israel’s swarm was “just the beginning,” George Mason University policy fellow Zak Kallenborn wrote in Defense One. While AI was used, the drones weren’t as sophisticated as future swarms could be, he wrote, since they coordinated with mortars and ground-based missiles to strike targets miles away. In the future, “swarms will not be so simple.”
So how about drone-swarm ethics? And limits? In the wrong hands, drone swarms have the potential to be weapons of mass destruction, experts warn, for two reasons: their potential to inflict harm on lots of people at once, and a lack of controls to ensure they don’t harm civilians. Because the drones in a swarm communicate with one another, unlike a group of drones acting independently, the risk of catastrophe if something goes wrong is much higher.
The DoD does have some guardrails in place. The department updated its autonomous weapons policy to adhere to its AI Ethical Principles, which govern the design, development, deployment and use of AI. In the case of drone swarms, the policy would require the technology to be entirely foolproof — with no risk of deadly miscalculations or unpredicted actions — before it could be used.
But nations without such safeguards could do irreparable damage. Drones can be cheap and easy to build, and the networks that coordinate them can be written by unethical programmers. In short, a drone swarm is a fairly scary technology accessible to many countries — or even insurgent groups.
“They could be used for wide-scale surveillance as well as wide-scale indiscriminate attacks,” Michel said.
And for malign actors like terror groups, or states without AI laws, “the fact that swarms are terrifying and unpredictable and indiscriminate,” he said, “would actually be a major selling point.”
Google was great at search algorithms from the get-go — but making search profitable took a while. The company’s PageRank algorithm blew existing web search techniques out of the water in the early days of the Web and quickly found a loyal following, but the company’s backers could only exhale after the founders started selling keyword advertising at 5 cents a click in 2002. (The moneymaking pay-per-click idea was actually pioneered by one of Bill Gross’ Idealab holdings, called GoTo.com.)
Of course, Google isn’t the only search engine in town. It’s still dominant, but Microsoft’s Bing has been slowly creeping upward in market share. Now the two companies are openly battling it out for search supremacy with generative AI.
Microsoft unveiled AI-powered updates to its search engine and browser at a press event today, and is reportedly planning to let people “toggle” between a traditional search results page and a ChatGPT-powered chat service. Hot on its heels, Google is set to debut its conversational AI, named Bard. Based on CEO Sundar Pichai’s blog post yesterday, Bard looks to be integrated directly into Google’s search results page.
The difference in how AI responses are presented to the user — a toggle page for Microsoft, an integrated page for Google — will likely affect how each company thinks about search-based ad revenue. As the tech writer Alex Kantrowitz observed, it’s not obvious how to integrate ads into conversational replies. The race to win the lion’s share of the global search market might not just be about who figures out generative AI first, but about who can definitively monetize the data- and power-hungry beast that is next-generation web search. —Mohar Chatterjee
Tim Wu, the influential antitrust thinker and policymaker, ranked some of the Biden administration’s biggest strategic bets on industrial tech policy during a speech yesterday at a conference hosted by the University of Colorado, Boulder.
His criterion? Whether federal subsidies were used to create a profitable ecosystem around foundational technologies rather than Washington just picking and supporting one-off winners.
Wu, who stepped down last month from his two-year stint as President Joe Biden’s adviser on technology and competition policy, was excited about the $65 billion dedicated to broadband infrastructure programs, disappointed by the airline industry’s pandemic bailouts, and somewhere in between on the CHIPS Act.
While cautioning that it’s hard to know exactly which foundational technologies will matter — the big Clinton-era push to support supercomputers hasn’t aged too well — Wu saw the Biden administration’s broadband investments as the right idea in principle: an industrial policy that did not create “a sort of private empire,” but rather invested in “public resources that can be drawn on by all the companies taking inputs from the space, as we did with the Internet,” he said. —Mohar Chatterjee
Stay in touch with the whole team: Ben Schreckinger, Derek Robertson, Mohar Chatterjee, Steve Heuser and Benton Ives. Follow us @DigitalFuture on Twitter.
If you’ve had this newsletter forwarded to you, you can sign up and read our mission statement at the links provided.