AI-13: AI works great for killing people

Science fiction has long fretted about the possibility, even inevitability, of handing decisions of war over to machines. That day is upon us, according to an investigation by the Israeli-Palestinian publication +972 Magazine with the Hebrew-language outlet Local Call, reported by The Guardian. The Israel Defense Forces is killing people with its own AIs, called “Lavender” and, horrifyingly, “Habsora” (which means “the Gospel” in English).

The IDF’s own website describes “The Gospel” (edited machine translation from Hebrew):

One such [AI] system is called “The Gospel”, which is used in war. This is a system that allows the use of automatic tools to produce targets at a fast pace.

The priority is pace, not precision. The flattening of Gaza by the IDF is not the work of precision strikes.

AI is terrible at precision. This was a lesson IBM learned years ago when it tried to apply its Watson AI to help doctors make decisions. Its CEO Arvind Krishna later admitted their mistake: “we should have applied it to more areas that were less critical”. Apparently the IDF did not get the memo.

Lavender churns out tens of thousands of potential targets. As with student essays or product recommendations, when selecting targets to kill, it turns out that accuracy matters less than speed.

The Israeli use of AI is a figleaf; they’re hiding behind tech to justify indiscriminate slaughter. All too often, AI in practice does not improve human work, rather, it lets people get away with churning out junk faster. That’s bad enough in trivial cases, but in war, it’s a recipe for genocide.

Speaking of the use of The Gospel in a prior conflict with Hamas in 2021, Aviv Kochavi, then head of the IDF, said:

In the past we would produce 50 targets in Gaza per year. Now, this machine produces 100 targets a single day, with 50% of them being attacked.

The rates have gone up since then, of course. Soldiers using Lavender praised its ability to kill because, where they themselves would have been grieving the losses:

The machine did it coldly. And that made it easier.

The “coldness” of an AI is not like the “coldness” of a hard heart. It is not the absence of “warmth”, it is the absence of anything that might be “warm”. If there is an analogue with the human mind, it is the mind of a psychopath.

The whole theme of this essay is that AIs are by their very nature and purpose manufacturers of delusion, designed to fool humans into thinking that they have inner states. This doesn’t require people to consciously and rationally accept machine sentience as a genuine fact. All it requires is that we allow ourselves to be fooled into doing what the IDF operatives did, essentially treating the outputs of the AI machine “as if it were a human decision.”

Like many AI systems, Lavender can’t really work without human intervention. For example, Amazon ditched its “Just Walk Out” checkout after it emerged that the supposedly automated, self-learning AI system in fact relied on 1,000 low-paid workers in India to constantly surveil shoppers and label videos. Multiple examples of such “mechanical Turks” litter the glorious highway of AI progress. But the IDF apparently believes that, while the world’s biggest online retailer cannot build an AI to handle checkouts, it can build one to decide whom to kill.

Lavender subjects soldiers to the same kind of punishing pace and rapid decision-making pressure endured by content moderators. One Lavender user said:

I would invest 20 seconds for each target at this stage, and do dozens of them every day. I had zero added-value as a human, apart from being a stamp of approval. It saved a lot of time.

“I had zero added-value as a human” … “It saved a lot of time.”

Another operator confirmed:

But even if an attack is averted, you don’t care—you immediately move on to the next target. Because of the system, the targets never end. You have another 36,000 waiting.

This pressure for fast results came from the top. One intelligence officer said:

We were constantly being pressured: ‘Bring us more targets.’ They really shouted at us. We were told: now we have to f*** up Hamas, no matter what the cost. Whatever you can, you bomb.

The rationale for this is the same as is always trotted out for AI: efficiency. Only here, “efficiency” is revealed in its full macabre, anti-human monstrosity.

One operator explained why they were using low-tech so-called “dumb bombs” for these missions:

You don’t want to waste expensive bombs on unimportant people.

“You don’t want to waste expensive bombs on unimportant people.”

The bombs are what matter: the capital, the machines. Because they are measured in money. People are unimportant, so we don’t “waste” expensive bombs on them.

One operative outlined their strategy:

We were not interested in killing [Hamas] operatives only when they were in a military building or engaged in a military activity. It’s much easier to bomb a family’s home. The system is built to look for them in these situations.

The system is deliberately designed to target civilian homes “as a first option”. A subsystem called, again horrifyingly, “Where’s Daddy?” tracked suspects and waited until they had entered their homes, usually at night with their families, before launching the attack. Thousands of innocents were slaughtered by AI, whose output was treated as an order.

At 5 a.m., [the air force] would come and bomb all the houses that we had marked. We took out thousands of people. We didn’t go through them one by one—we put everything into automated systems, and as soon as one of [the marked individuals] was at home, he immediately became a target. We bombed him and his house.

We were promised that AI would elevate human consciousness, ushering us into a glorious new era. Look upon the devastation and ruin of Gaza and behold: the new era is come, and its face is death. The promise is startling new insights and innovative perspectives; the reality is low-quality information as a smokescreen for war crimes.

What AI does not do is help people make better decisions when it comes to warfare. One IDF intelligence officer said:

No one thought about what to do afterward, when the war is over, or how it will be possible to live in Gaza.

“No one thought.”

Such systems are not built by Israel alone. Israel has a $1.2 billion contract with Google to supply the Project Nimbus AI surveillance system, which Google went ahead with despite open revolt and resignations among its staff. Google is no outlier: Amazon, Microsoft and the rest routinely sell their technology to repressive forces for billions of dollars.

As money and resources are pumped into AI, its capabilities will expand, and military involvement and application will deepen. AI critic Willie Agnew tells us there are “decades, even centuries, of knowledge, critique, and movements about this”. None of what we are seeing is a surprise. And it can only be stopped by effective international agreement.

The problem is not that we don’t know what the problem is. Anyone who saw Terminator knows that. It’s not even that there are evil people who do evil things. The problem is that the people who should be protecting us are actively enabling it.

In May 2024, the UN is hosting an “AI for Social Good” summit. Note the assumption built into the title: whether AI is good is not askable; it is assumed. But who is it that tells us how to use AI for good? One of the speakers is Meirav Eilon Shahar, Permanent Representative of Israel to the United Nations.

Last year we asked an oil company CEO to head a conference on the climate crisis. Now we are platforming a nation slaughtering people with AI in a conference on “AI for Social Good”.

This won’t change until we can ask the question that really matters: “Is AI, in fact, good?”

9 Likes

:100:

It reminds me of those bomb detecting dowsing rods which proved wildly popular with Thai military officers fighting the insurgency in the south of Thailand. Invariably the dowsing rods pointed towards whatever people the officers were already suspicious of and the “device” gave them the perfect pretext they needed to harass and arrest whoever they wanted with impunity… cause, well, the device told me! They continued using these “devices” years after they were discredited… because “well, whatever the experts say, it works for me.”

4 Likes

This is monstrous. :person_facepalming:

1 Like

https://www.axios.com/2024/05/01/pentagon-military-ai-trust-issues

When they tested LLMs from OpenAI, Anthropic and Meta in situations like simulated war games, the pair found the AIs suggested escalation, arms races, conflict — and even use of nuclear weapons — over alternatives.

“It is practically impossible for an LLM to be taught solely on vetted high-quality data,” Schneider and Lamparth write.

“The risk-taking appetite in Washington is not very great. And the risk-taking appetite out here [in Silicon Valley] is unparalleled,” former Secretary of State Condoleezza Rice told Axios at a Hoover Institution media roundtable at Stanford University this week.

1 Like

Wow, just wow.

That’s just … :mindblown:

You find a lot of game theory in TESCREAL circles; it’s all “I do this and they’ll do this”, based on calculated self-interest. It seems like the AI models inherit this kind of death spiral.

It’s a powerful counter against the “if we don’t do it, China will” argument. The last thing you want is both sides relying on AI making decisions.

1 Like