AI-13: AI works great for killing people

Science fiction has long fretted about the possibility, even inevitability, of handing decisions of war over to machines. That day is upon us, according to an investigation by the Israeli-Palestinian publication +972 Magazine with the Hebrew-language outlet Local Call, reported by The Guardian. The Israel Defense Forces is killing people with its own AIs, called “Lavender” and, horrifyingly, “Habsora” (which means “the Gospel” in English).

The IDF’s own website describes “The Gospel” (edited machine translation from Hebrew):

One such [AI] system is called “The Gospel”, which is used in war. This is a system that allows the use of automatic tools to produce targets at a fast pace.

The priority is pace, not precision. The flattening of Gaza by the IDF is not the work of precision strikes.

AI is terrible at precision. This was a lesson IBM learned years ago when it tried to apply its Watson AI to helping doctors make decisions. Its CEO, Arvind Krishna, later admitted that the mistake was “we should have applied it to more areas that were less critical”. Apparently the IDF did not get the memo.

Lavender churns out tens of thousands of potential targets. As with student essays or product recommendations, when selecting targets to kill, it turns out that accuracy matters less than speed.

The Israeli use of AI is a fig leaf; they’re hiding behind tech to justify indiscriminate slaughter. All too often, AI in practice does not improve human work; rather, it lets people get away with churning out junk faster. That’s bad enough in trivial cases, but in war, it’s a recipe for genocide.

Speaking of the use of The Gospel in a prior conflict with Hamas in 2021, Aviv Kochavi, then head of the IDF, said:

In the past we would produce 50 targets in Gaza per year. Now, this machine produces 100 targets a single day, with 50% of them being attacked.

The rates have gone up since then, of course. Soldiers using Lavender praised its ability to kill because, whereas they themselves were grieving their losses:

The machine did it coldly. And that made it easier.

The “coldness” of an AI is not like the “coldness” of a hard heart. It is not the absence of “warmth”, it is the absence of anything that might be “warm”. If there is an analogue with the human mind, it is the mind of a psychopath.

The whole theme of this essay is that AIs are by their very nature and purpose manufacturers of delusion, designed to fool humans into thinking that they have inner states. This doesn’t require people to consciously and rationally accept machine sentience as a genuine fact. All it requires is that we allow ourselves to be fooled into doing what the IDF operatives did, essentially treating the outputs of the AI machine “as if it were a human decision.”

Like many AI systems, Lavender can’t really work without human intervention. For example, Amazon ditched its “Just Walk Out” checkout, since the supposedly automated, self-learning AI system in fact relied on 1,000 low-paid workers in India to constantly surveil shoppers and label videos. Multiple examples of such “mechanical Turks” litter the glorious highway of AI progress. But the IDF apparently believes that, while the biggest retailer in the world cannot build an AI to handle checkouts, it can build one to decide to kill people.

Lavender subjects soldiers to the same kind of punishing pace and rapid decision-making pressure endured by content moderators. One Lavender user said:

I would invest 20 seconds for each target at this stage, and do dozens of them every day. I had zero added-value as a human, apart from being a stamp of approval. It saved a lot of time.

“I had zero added-value as a human” … “It saved a lot of time.”

Another operator confirmed:

But even if an attack is averted, you don’t care—you immediately move on to the next target. Because of the system, the targets never end. You have another 36,000 waiting.

This pressure for fast results was pushed from the top. One intelligence officer said:

We were constantly being pressured: ‘Bring us more targets.’ They really shouted at us. We were told: now we have to f*** up Hamas, no matter what the cost. Whatever you can, you bomb.

The rationale for this is the same as is always trotted out for AI: efficiency. Only here, “efficiency” is revealed in its full macabre, anti-human monstrosity.

One operator explained why they were using low-tech so-called “dumb bombs” for these missions:

You don’t want to waste expensive bombs on unimportant people.

“You don’t want to waste expensive bombs on unimportant people.”

The bombs are what matter: the capital, the machines. Because they are measured in money. People are unimportant, so we don’t “waste” expensive bombs on them.

One operative outlined their strategy:

We were not interested in killing [Hamas] operatives only when they were in a military building or engaged in a military activity. It’s much easier to bomb a family’s home. The system is built to look for them in these situations.

The system is deliberately designed to target civilian homes “as a first option”. A subsystem called, again horrifyingly, “Where’s Daddy?” tracked suspects and waited until they had entered their homes, usually at night with their families, before launching the attack. Thousands of innocents were slaughtered by AI, whose output was treated as an order.

At 5 a.m., [the air force] would come and bomb all the houses that we had marked. We took out thousands of people. We didn’t go through them one by one—we put everything into automated systems, and as soon as one of [the marked individuals] was at home, he immediately became a target. We bombed him and his house.

We were promised that AI would elevate human consciousness, ushering us into a glorious new era. Look upon the devastation and ruin of Gaza and behold: the new era is come, and its face is death. The promise is startling new insights and innovative perspectives; the reality is low-quality information as a smokescreen for war crimes.

What AI does not do is help people make better decisions when it comes to warfare. One IDF intelligence officer said:

No one thought about what to do afterward, when the war is over, or how it will be possible to live in Gaza.

“No one thought.”

Such systems are not built by Israel alone. Israel has a $1.2 billion contract with Google to supply the Project Nimbus AI surveillance system, a deal Google went ahead with despite open revolt and resignations by its staff. Google is no outlier: Amazon, Microsoft and the rest routinely sell their technology to repressive forces for billions of dollars.

As money and resources are pumped into AI, its capabilities will expand, and military involvement and application will deepen. AI critic Willie Agnew tells us there are “decades, even centuries, of knowledge, critique, and movements about this”. None of what we are seeing is a surprise. And it can only be stopped by effective international agreement.

The problem is not that we don’t know what the problem is. Anyone who saw Terminator knows that. It’s not even that there are evil people who do evil things. The problem is that the people who should be protecting us are actively enabling it.

In May 2024, the UN is hosting an “AI for Social Good” summit. Note the assumption built into the title: not whether AI is good; that question is not askable, its answer is assumed. But who is it that tells us how to use AI for good? One of the summit’s speakers is Meirav Eilon Shahar, Permanent Representative of Israel to the United Nations.

Last year we asked an oil company CEO to head a conference on the climate crisis. Now we are platforming a nation slaughtering people with AI at a conference on “AI for Social Good”.

This won’t change until we can ask the question that really matters: “Is AI, in fact, good?”

9 Likes

:100:

It reminds me of those bomb detecting dowsing rods which proved wildly popular with Thai military officers fighting the insurgency in the south of Thailand. Invariably the dowsing rods pointed towards whatever people the officers were already suspicious of and the “device” gave them the perfect pretext they needed to harass and arrest whoever they wanted with impunity… cause, well, the device told me! They continued using these “devices” years after they were discredited… because “well, whatever the experts say, it works for me.”

5 Likes

This is monstrous. :person_facepalming:

1 Like

https://www.axios.com/2024/05/01/pentagon-military-ai-trust-issues

When they tested LLMs from OpenAI, Anthropic and Meta in situations like simulated war games, the pair found the AIs suggested escalation, arms races, conflict — and even use of nuclear weapons — over alternatives.

“It is practically impossible for an LLM to be taught solely on vetted high-quality data,” Schneider and Lamparth write.

“The risk-taking appetite in Washington is not very great. And the risk-taking appetite out here [in Silicon Valley] is unparalleled,” former Secretary of State Condoleezza Rice told Axios at a Hoover Institution media roundtable at Stanford University this week.

2 Likes

Wow, just wow.

That’s just … :mindblown:

You find a lot of game theory in TESCREAL circles; it’s all “I do this and they’ll do this”, based on calculated self-interest. It seems like the AI models inherit this kind of death spiral.

It’s a powerful counter against the “if we don’t do it, China will” argument. The last thing you want is both sides relying on AI making decisions.

2 Likes

And, sadly:

An AI-controlled fighter jet took the Air Force leader for a historic ride. What that means for war

Artificial intelligence marks one of the biggest advances in military aviation since the introduction of stealth in the early 1990s. “We have to have it,” Secretary Frank Kendall says.

https://www.politico.com/news/2024/05/04/an-ai-controlled-fighter-jet-took-the-air-force-leader-for-a-historic-ride-what-that-means-for-war-00156147

“We have to have it” indicates an almost complete abrogation of choice, moral considerations, and foresight.
Ironically, it comes across as humans turning themselves into the ethically choiceless machines they are growing to love.

that’s a lot of fearmongering there, it’s like saying “when searching for war-related queries on a search engine, war-related answers are returned”

it’s not taking any risks, it is continuing a prompt on war-related questions in ways that are likely depicted in popular culture

Exactly the problem.

1 Like

And people in turn are trained by AI responses in these war-game scenarios. Marshall McLuhan went out of fashion for some time, but was regenerated a while ago, because some of the things he said have proven - eerily - to hold.

The Gutenberg Galaxy: The Making of Typographic Man and The Medium is the Massage: An Inventory of Effects are both go-to texts for him.

And for the process philosophers …

2 Likes

I mean, top militaries (in general) will only give up a technology if a better alternative exists and/or if a treaty compels them. As Bhante already pointed out, AI (like the machine gun) is good at killing people quickly, so that eliminates the first option, and the second seems extremely difficult.

The most successful weapons-ban treaty in history is probably the nuclear test ban treaty. This treaty has the benefits of:

  1. a clear definition of what it’s banning (runaway nuclear reactions) and what it’s not (controlled reactions for e.g. power generation)
  2. strategic reasons for the incumbent powers to sign (Russia and the US had already tested their warhead designs thoroughly before the treaty, and preventing future testing helps slow proliferation; see also the Non-Proliferation Treaty)
  3. acceptable alternatives (Why do you think the US and China keep building more and more powerful supercomputers? Nuclear tests are still happening… just in simulators)
  4. an easy method of monitoring for compliance that doesn’t threaten sovereignty (underwater microphones can be placed in international waters, and seismometers and monitoring satellites are already watching the globe, and detecting a detonation is pretty easy)

I can’t even begin to imagine a treaty framework for banning AI that hits even one of these four points. I’m not saying it’s impossible. If someone here really wants to work on it, there are think tanks researching what solutions to these problems might look like… but it’s definitely not as simple as “just don’t do it.” Unfortunately, that’s not how states work.

2 Likes

Agree, yes, but I think one thing that needs to be questioned is the tech determinism, the assumption that it will work better than humans. And for that, a broad perspective is crucial. E.g. a machine might beat a human in close combat, but a human might be better at avoiding combat.

3 Likes

Absolutely. In fact a recent(ish) Rand Corp “War Game” for the Pentagon underscored this point dramatically when an autonomous US submarine “accidentally” started a full-on war with China… by operating as it had been programmed. :grimacing:

I actually think the existing international “laws of war” are pretty good. They focus more on outcomes (proportionality, genocide, cruelty, collateral damage, etc) than on technologies, so they’re fairly “future-proof.” In my mind, the biggest problem right now is that we’re not seriously trying to enforce these rules when our friends flagrantly violate them.

Sadly, Biden seems to have a different definition of friendship from me… But I say: Friends don’t let friends commit war crimes. That (to me) is the part that should be simple.

1 Like

There’s this idea in the Terminator: Skynet became self-aware, then launched the nukes. I think it’s an interesting conjunction of ideas. It’s not really necessary to become self-aware, as your example shows. Just gaming out advantages can easily lead to the same conclusion. So why do we think self-awareness is relevant? Like, why does that feel like a reasonable thing to say? Is there something in our sense of self that we perceive as being driven by the need for the destruction of all for the glorification of the ego?

2 Likes

Some of these criticisms go beyond just AI, and also extend to how organizations try to absolve themselves of responsibility for outcomes by hiding behind a machine or algorithm.

YouTube conspiracy theories, and search results leading people to far-right radicalization? Not our fault. It was just The Algorithm (that we happened to write).

I think it’s our responsibility in society to hold the companies accountable for the systems they develop, including the outputs, and their effects on society.

3 Likes

Indeed.

The problem is a bit like the Prisoner’s Dilemma in game theory. Each side is guessing what the other will do, but will often act for short-term or lesser gain, or to minimize loss, rather than choosing the optimum.
No one will give up AI in the military unless they know, via treaties, that everyone else does.
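
A minimal payoff-matrix sketch of that dilemma, with purely hypothetical numbers (nothing here comes from the articles), just to make the “no one disarms unilaterally” logic concrete:

```python
# Prisoner's Dilemma sketch: each state chooses to "restrict" military AI
# or "deploy" it. Payoffs are illustrative only (higher = better outcome).
PAYOFFS = {
    # (choice_A, choice_B): (payoff_A, payoff_B)
    ("restrict", "restrict"): (3, 3),  # mutual restraint: best joint outcome
    ("restrict", "deploy"):   (0, 5),  # the restrained side is left exposed
    ("deploy",   "restrict"): (5, 0),
    ("deploy",   "deploy"):   (1, 1),  # arms race: worse for both than restraint
}

def best_response(opponent_choice: str) -> str:
    """The choice that maximises one side's own payoff, given its guess
    about what the other side will do."""
    return max(
        ("restrict", "deploy"),
        key=lambda my_choice: PAYOFFS[(my_choice, opponent_choice)][0],
    )

if __name__ == "__main__":
    # Whichever move the other side makes, "deploy" pays more for you,
    # so both sides land on (1, 1) even though (3, 3) was available --
    # unless a verifiable treaty changes the payoffs.
    for other in ("restrict", "deploy"):
        print(f"If the other side will {other}, my best response is: {best_response(other)}")
```

Which is exactly why verification matters: a treaty only escapes the dilemma if each side can see that the other is actually keeping it.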

Meanwhile:

" To this end, we use a wargame experiment with 107 national security expert human players designed to look at crisis escalation in a fictional US-China scenario and compare human players to LLM-simulated responses. We find considerable agreement in the LLM and human responses but also significant quantitative and qualitative differences between simulated and human players in the wargame, motivating caution to policymakers before handing over autonomy or following AI-based strategy recommendations."

The hope is that this motivation for caution will extend to all parties, since the US, China, and other countries likely have similar test results.

1 Like

Indeed, yes. I mean, if it were only Nice People doing Nice Things then it’d be little more than a curio.

Truthfully we could be at this forever …
We could be at this forever, and I don’t want to get into it, because I know too much … and we could be at this forever.

In a nutshell it’s Biden. And because we know the domestic politics here as basically a national security risk for us, because we share the longest unprotected border in the world, we also know it’s some of the USA’s key strategic allies.

I don’t know if it says it in the articles you supplied (I read them either the day they came out or a day or so later, plus others, and I don’t want to go back in), but the US stated very early on that the IDF’s threshold for “collateral damage” was way beyond what is acceptable to the US military. And specifically, their threshold for a high-value target like Osama Bin Laden was 30-1, whereas I think the article states that Lavender is set at 100-1. Maybe even higher.

OK, one other thing. Estimated #s of Hamas, PIJ, etc. is 40,000. YUP. It’s a cover for killing their families. Apparently, Palestinians in Gaza have large extended families that all live close together, in the same building, block, etc.


1 Like

Video on this by Vox just dropped:

3 Likes

This is some truly horrible, dystopian horror… similarly to remote-controlled, videogame-like drone strikes, these tools enable people to commit war crimes with a clearer conscience… but I do not agree with Vox shifting the blame away from the ones who ordered these strikes in the first place just to ride the now-negative AI hype for views.

To sum it up, because there are multiple discourses on AIs’ and LLMs’ role in war:

  • The bombings probably have nothing to do with LLMs producing text on war scenarios from pop culture, which people mentioned in this thread
  • The AIs selecting targets are multiple tools combined; it is more accurate to think of them as analytical tools that collect and gather information on areas and people and offer targets based on summaries
  • The bombings were still precision strikes, in the sense that “less expensive” and “less accurate” bombs were dropped on masses of “less important people”; it’s just that no one cared about the civilian death toll in Gaza
  • What is truly horrible is that the humans giving the orders are more likely to just go with the proposed targets, because that way they do not have to take as much responsibility and face the consequences of their own decisions

I can imagine scenarios where commanders would not even intervene to reduce the proposed damage, because that way they would become partly responsible for any further consequences. It’s also easy to create feedback loops where being overaggressive would reinforce more aggressive strategies in the future, making the destruction even bigger.

“How Israel mass bombed civilians because they do not care, and also use mass surveillance and statistics to boost their war economy while saving money and not having to feel so bad about the killing” would have been a better title in my opinion.
Their AIs offered a slightly cheaper solution to their desire to carpet bomb every single inch without exception, while also spreading responsibility and lowering the opportunity to try any single specific person as a war criminal… That’s brilliantly evil.

Update: I just learned that they also have the technology to know exactly how many people and children are in the homes they are bombing…

In my opinion it is not very meaningful to criticize “AI” because it can be used in very perverted, dystopian ways to justify the killing of civilians in wars, etc. That’s an argument as sound as saying ‘planes are bad because they can drop bombs’. That argument ultimately hurts those people who work on meaningful applications of AI (or planes, for that matter).

2 Likes