AI-13: AI works great for killing people

Well, I won’t look at it, because although this is all very important, I think we shouldn’t let it distract attention from the “high payload” munitions deliveries that the US just put on “pause.”

Without going back into all the specific information I gave to my friends about this … the 2,000 lb US-made Mark 84 bombs that have been “paused” contain just under 1,000 lb of high explosive. A single bomb produces a crater roughly 15 m (50 ft) across and has a lethal fragmentation radius of about 370 metres. It was originally a “dumb bomb,” though it can now be fitted with guidance kits. There is nothing “targeted” about this bomb. And the IDF has been using it throughout Gaza.

Supposedly this bomb is a “bunker buster” intended to penetrate Hamas’ tunnel system, which the IDF claims lies beneath things like hospitals, schools, mosques … so the doctors and nurses working in those hospitals, the patients, and the refugees sheltering in these places (which include WHO and UNRWA facilities) were deemed, by both the US and Israel, to be “human shields” for Hamas, and thus “blocking” the US and IDF’s military objective of “wiping Hamas off the map,” which “human shields” cannot be allowed to do. So logically, of course, just go through these people that the US and Israel deemed to be human shields to get to Hamas, and BOMB.

Here’s another thing.

By mid-November 2023, Israel had dropped over 12,000 bombs on Gaza (365 km² in area), equivalent to 25,000 tonnes of TNT.

By early January 2024, Israel had dropped 45,000 bombs, equivalent to 65,000 tonnes of TNT.

For comparison:

America’s “Operation Meetinghouse” bombing of eastern Tokyo, Mar. 9-10, 1945: this single raid was equivalent to 1,665 tonnes of TNT. It killed over 100,000 people in one night, because the US used incendiary bombs.

Similarly the IDF has been using white phosphorus.

America’s nuclear bombs in Japan:

Hiroshima (900 km² in area) - equivalent to 16,000 tonnes of TNT

Nagasaki (405 km² in area) - equivalent to 21,000 tonnes of TNT

America’s bombing of Vietnam, Laos and Cambodia 1965-75: equivalent to 7.5 million tonnes of TNT

America’s bombing during the Gulf War (1990-91): equivalent to 88,500 tonnes of TNT.
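For a concrete sense of scale, here’s a quick back-of-the-envelope sketch in Python using only the TNT-equivalent figures quoted above (the figures themselves are as cited in this thread, not independently verified):

```python
# TNT-equivalent tonnages as cited above (tonnes of TNT).
tonnages = {
    "Gaza, by early Jan 2024": 65_000,
    "Hiroshima bomb": 16_000,
    "Nagasaki bomb": 21_000,
    "Tokyo firebombing raid, Mar 1945": 1_665,
}

gaza = tonnages["Gaza, by early Jan 2024"]

# Express the cited Gaza figure as a multiple of each historical event.
for name, tnt in tonnages.items():
    print(f"{name}: {gaza / tnt:.1f}x")
# By these figures, Gaza comes out at roughly 4x the Hiroshima bomb
# and roughly 39x the Tokyo firebombing raid.
```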

2 Likes

Palmer Luckey (creator of the original Oculus) has started advertising himself in a bunch of videos, including by building channels aimed towards children, that show how much fun an engineer can have working on weapons against the bad guys:

@sujato I suggest this might be the right time to start showing alternatives to the next generation of lonely tinkerers with anime cat-girl avatars

1 Like

This article is actually in English:

1 Like

Great quote there. As a monk, it is wild to me that they just keep pushing ads at me, even though I’ve never bought anything at all since, like, basically before the internet existed. The ad technology is just terrible at doing what it claims to do: targeting ads based on user preference. And now we’re taking the same approach to targeting human life.

No machine should ever have the power to decide life or death for a human. This is, in essence, Asimov’s First Law of Robotics. We should see this capacity as a moral abomination, something that must never exist under any circumstances, and ban it outright under international law.

2 Likes

Yoooooo :smiley:

By the way, ads seem to no longer really work; people want to have meaningful conversations (or at least a really horny video game catering to their interests):

Well, I think it’s good enough, and that’s why companies use it. You may have heard the old advertiser’s saying, attributed to John Wanamaker: “Half the money I spend on advertising is wasted; the trouble is, I don’t know which half.” So targeted ads are much better than untargeted ads.

But obviously you can’t use the same logic/technology for war.

Is it though? I mean, it’s beside the topic, but my impression is that they use the ad tech because they’ve been sold the ad tech. The actual numbers are hidden inside Google and Meta, who are selling the ability to sell. I feel like we’ve rolled over and acquiesced to a corporate surveillance state on their say-so that it’ll make money for companies. :person_shrugging:

No, it really does work well enough, because of the sheer scale of the data behind it.

I’ll admit that I have been part of groups who used FB advertising to target Buddhists for events. At the time I was using it, it was cheap and very effective. And by effective I mean that people came to the events telling us they learned about them on Facebook. And it had the extra advantage (slimy as it was) of dramatically increasing local participation on our FB page, which we could then engage for free. If we had a $50 budget for advertising, I’d happily put at least half of it into FB and the other half into printed posters. I mean, there really aren’t that many other options for advertising in a city, although we used whatever free ones we could.

I looked into just doing print ads in newspapers. Even for a small-town paper, the cost was jaw-dropping for what you would get. It’s a well-known thing: $25 might get you half an inch in the classified section.

At that time you could do amazingly (creepily) good targeting: basically just telling people nearby (locating whom is something FB is very good at) who were interested in Buddhism or Buddhism-adjacent things. Interestingly, it’s now no longer possible to target people by religion, because that capability was being actively used for bad things. So now you can only target by adjacent interests. I haven’t used it in years.

And there are all kinds of other targeting that can be done. Want to target enough xenophobic people to sway an election? Just target people who live within x miles of their high school or place of birth. Will the targeting be perfect? No. Will it be enough to pass Brexit? You betcha!

I have also used Google pay-per-click advertising, given by Google for free to non-profits. It’s a different concept, but still all about targeting. You only pay when someone clicks on your link or ad (which we didn’t have to do, since we were given grants that we never came close to spending). How good your content is once someone gets to your site is up to you. But Google allows you to (forces you to? I can’t remember) integrate Google Analytics into the ad platform, so you can see whether the clicks you pay for actually produce the outcomes you want.

All to say that yes, the system works very well for advertisers. Perfect? No. But FB and Google aren’t what they are just because of hype. And in Google’s pay-per-click model, an ad you see but never click costs the advertiser literally nothing. There is hype, of course. But the data-driven nature of it really does let advertisers know whether they are getting bang for their buck.

I only share this because most people have never used paid internet advertising before.

2 Likes

I guess something more analogous to the thread would be facial recognition. Does facial recognition work well enough to generate engagement by tagging people in the photos you post on Facebook? Sure it does, because at their scale a bump of half a percent equals big money. Does facial recognition work well enough that you can use it to issue arrest warrants? Totally not! At least not in any community I would want to live in.
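The scale point can be made concrete with a minimal base-rate sketch (the population, suspect count, and error rate below are illustrative assumptions, not real system specs): even a seemingly tiny false-positive rate swamps the genuine matches when you scan a whole population.

```python
# Base-rate sketch: why "good enough for tagging photos" is not
# "good enough for arrest warrants". All numbers are hypothetical.

def expected_false_positives(population, suspects, false_positive_rate):
    """Innocent people wrongly flagged when scanning an entire population."""
    innocents = population - suspects
    return innocents * false_positive_rate

# Assume a city of 5 million, 100 genuine suspects, and a
# seemingly excellent 0.1% false-positive rate.
flagged = expected_false_positives(5_000_000, 100, 0.001)
print(flagged)  # ~5,000 innocent people flagged, dwarfing the 100 suspects
```

Under those assumptions, the overwhelming majority of people the system flags are innocent, which is why an error rate that is fine for photo tagging is unacceptable for warrants.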

2 Likes

And … here we are.

Leveraging AI’s inherent lack of compassion or morality to make it easier to kill:

In a cross-sectional study conducted in 2016 on 3000 Dutch physicians, the emotional burden of preparing and performing euthanasia was commonly reported by physicians, for example, due to concerns about administering the lethal drugs

It shouldn’t be easy to kill. And, like pretty much every other claim about AI, there’s no actual evidence that it will even do this much.

Hot on the heels of this one, AI continues to ravage the fundamentals of education.

1 Like

I read this article.

As of this year, I have started teaching some masters courses at university, so this article strikes close to home for me.

One of the courses I teach is about professionalism, which includes ethics, leadership and diversity.

I have definitely encountered students who can barely put together a coherent sentence in English. And yet their written reports are acceptable, and sometimes even good.

I don’t necessarily assume these students are using AI to write their reports. And even if they are, it is not necessarily bad. I actually encourage students with a poor command of English to use tools like Grammarly to improve their writing. What is important is that students are able to think critically and generate ideas. The policy is that students should declare their use of AI and explain in what way it has helped them.

It is important to value diversity and recognise that different people have different skills, all of which are needed for teamwork. In the end, diverse skills and personalities contribute to a better team and a more productive society. A student who does not have a good command of English may have good data analysis skills, or creative skills.

I had one student whose command of English was very poor and who did not do well in her individual report. However, her team recognised that she had good graphic design skills, and she contributed by designing beautiful visuals and the overall design for the group report and presentation. As a result, she probably single-handedly raised the team’s performance from a Distinction to a High Distinction, and I made sure she and the team were aware of that.

The self-confidence this gave her was immeasurable. She personally thanked me after the course and said her outlook on life had changed. I believe we should all work towards enabling people, rather than judging everyone by a single arbitrary standard.

1 Like

Here’s a recent video where, in a session at Cambridge, Peter Thiel is asked what he thinks about the use of Lavender AI by the IDF in Gaza.

Starts at 1:07:18

Um, ah look, I yeah I’m not I’m not um you know you know without without um going into all the de- you know I I you know I’m not on top of all the details of what’s going on in Israel because my bias is to defer to Israel. Its its its not for us to to second-guess every um everything and ah I believe that um broadly the IDF gets to decide ah what it wants to do and that they’re broadly in the right and that’s that’s sort of the perspective that I come back to and if I if I fall into the trap of um arguing you on every detailed point I’m I’m actually gonna I would actually be conceding the the the broader issue that um the Middle East should be micromanaged from Cambridge and I think that’s just simply absurd. Um and so I’m not going to concede that point.

This is discussed in an article here:

https://responsiblestatecraft.org/peter-thiel-israel-palantir

To briefly clarify:

  • Lavender is one of the AI systems used by the IDF to kill people.
  • Thiel is partnering with the IDF “to harness Palantir’s advanced technology in support of war-related missions”.
  • Despite his defensive protestations, he wasn’t asked to “concede a point” or to “get into all the details” but simply to give his perspective on the use of a technology in which he is one of the world’s largest investors in a conflict in which he is actively contributing tools of war.

Again, it’s really important to understand, not just that he supports the IDF no matter what war crimes they commit, but that he is unable to articulate a coherent answer to a simple question. For all their aura of being sophisticated gurus of the future, these people are mired in primitive, crude, narrow-minded ideologies of the past, and are deliberately using whatever methods they can to drag the world down to their level.

2 Likes

We see an analogous situation in the financial markets: computers making trades, where it’s all about the speed. But it has led to much more volatile markets. Computers don’t care if they crash the market. They don’t even care if the firm using them makes money. They’re just executing code.
The idea of military AIs treating killing people (it’s all about the speed) the way a trading AI treats executing trades is horrific.
And the AI doesn’t even care if its own side wins. The AI doesn’t care if you win, if millions die, if the whole of humanity is wiped out. Even saying it “doesn’t care” anthropomorphises it.

4 Likes

Did you hear about the book called The Human Machine Team? You can get it online. The Guardian wrote a series of articles on its author, because apparently they discovered that he is the head of the IDF’s military intelligence Unit 8200.


And one more … he resigned just the other day.

2 Likes