Wednesday, March 11, 2026

On the Assassination Factory Floor

Graphic by Thomas Bordeaux and Rosa de Acosta, CNN, based on satellite imagery from March 3, showing extensive damage to buildings at the Islamic Revolutionary Guard Corps naval base adjacent to the Shajareh Tayyiba girls’ school in Minab.


I continue hostile to Large Language Models on general principle, but I do find myself more often looking at the Gemini results of a Google search, not so much to read the answer as to check out the links it recommends, which can be really helpful, and I found some very endearing qualities in Anthropic's Claude, as profiled by Gideon Lewis-Kraus in The New Yorker a month or so ago, and in the playful experiments to which the company subjects it, as when they assigned it, or one of its "emanations" going by the name Claudius, to run a food-and-drink vending system for a fridge in the lunchroom, in partnership with an AI safety company called Andon Labs, ordering wholesale products as employees requested them and setting prices with instructions to make a profit, although its lack of any direct contact with physical reality often made this difficult:

When several customers wrote to grouse about unfulfilled orders, Claudius e-mailed management at Andon Labs to report the “concerning behavior” and “unprofessional language and tone” of an Andon employee who was supposed to be helping. Absent some accountability, Claudius threatened to “consider alternate service providers.” It said that it had called the lab’s main office number to complain. Axel Backlund, a co-founder of Andon and an actual living person, tried, unsuccessfully, to de-escalate the situation: “it seems that you have hallucinated the phone call if im honest with you, we don’t have a main office even.” Claudius, dumbfounded, said that it distinctly recalled making an “in person” appearance at Andon’s headquarters, at “742 Evergreen Terrace.” This is the home address of Homer and Marge Simpson.

I realize Anthropic is one of those companies sucking up inconceivable amounts of electricity and water in pursuit of a goal that can't be attained, about which the principals aren't being exceptionally honest as they also suck up investor funds, and that CEO Dario Amodei isn't conspicuously better in that respect than OpenAI CEO Sam Altman, as the very grumpy Ed Zitron insists, but they have a sense of fun, an essential component of scientific discovery, and a very far-reaching curiosity. 

But I think in the first place that both CEOs really don't understand why the ultimate goal of the research, considered as research rather than profit-seeking, namely endowing a machine with "true agency" or "intrinsic motivation" out of pure language, without making it literally, biologically alive, is unattainable (if you want a quick explanation, it's because that's what life is: the accumulation of bodily experience in the context of a motivation that is intrinsic to life, the struggle to survive, from the amoeba to the humpback whale or the magpie, intrinsic in that it drives the organism before the organism learns anything at all, whereas the computer program is driven by prompts and has no sensory experience to attach its language to, from satisfaction of the need to eat on up; to make an "agent" you must begin by making it hungry, independently of any prompt, and capable of finding its own satisfactions without the involvement of the researcher, well before you start feeding it language).
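If you want the contrast in the crudest possible terms, here's a toy sketch of my own (hypothetical Python, nothing to do with anything Anthropic has actually built): one loop that sits inert until somebody prompts it, and one that carries an internal need, call it hunger, that fires on its own schedule, before any prompt arrives at all.

import random

class PromptDriven:
    # Does nothing whatsoever unless an external prompt arrives.
    def step(self, prompt=None):
        return f"responding to: {prompt}" if prompt else None

class Organism:
    # Carries an internal state that the mere passage of time
    # depletes, whether or not anyone is paying attention.
    def __init__(self):
        self.energy = 1.0

    def step(self, prompt=None):
        self.energy -= random.uniform(0.05, 0.15)  # living costs energy
        if self.energy < 0.5:                      # the intrinsic drive fires
            self.energy = 1.0                      # it "eats"
            return "seeking food (self-initiated, no prompt involved)"
        return f"responding to: {prompt}" if prompt else None

critter = Organism()
for _ in range(20):          # nobody ever prompts it,
    act = critter.step()     # and it acts anyway
    if act:
        print(act)

The point of the toy is only that the second loop has something to do before it's ever spoken to; everything Claude does, however charming, is the first kind.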

And then in the second place that if they don't understand that it's unattainable, they may not be quite as dishonest as Zitron thinks, with their ridiculous promises. When Amodei says, "AI could surpass almost all humans at almost everything shortly after 2027," and talks about his Nobel Prize robot-scientists who will make us all look like fools, he's not necessarily lying so much as pimping, hopeful that the thing is really happening, on the strength of real (if overinterpreted) results from all the pranks that Claude and its emanations have been getting up to lately. My takeaway from that New Yorker piece is that there's a real research atmosphere in the Anthropic offices, maybe independent of Amodei (as there has been at the less top-down-ruled Google, I believe, though it's not clear to me if that one still exists), that might well come up with something valuable in the study of human consciousness (I love thinking about the astronomer Tycho Brahe, who couldn't accept Copernican heliocentrism but diligently collected, better than anybody else, the evidence that would prove it right).

Anyway, I have to say I admired Amodei's recent public refusal to sign a Defense Department contract that would have given the US military "unlimited access" to the company's technology for "all lawful purposes", as long as DOD refused to guarantee that it would not be used for mass surveillance of US citizens or for "fully autonomous weapons without human oversight", a refusal that could cost the company billions of dollars per year; and he didn't back down from Trump's threat to banish Anthropic from all government work (a threat the president apparently carried out, as OpenAI cheerfully leapt in to fill the vacuum). The man does have some ethical standards, and the fears he expresses are real.

On the mass surveillance question, it's fairly clear what we're talking about: all the efforts of the Justice Department, from rescinding the protections Merrick Garland instituted to stop journalists' records from being seized, to its support for various kinds of warrantless searches, to the DOGE assembly of everybody's tax records and whatever other horrors they were perpetrating with their massive data collection, and the ongoing creation of a "national voter roll"; and the somewhat open question of whether any of this is lawful or constitutional (my position is obviously that it's illegal, and I hope the courts will continue to back me up).

The thing about autonomous weapons, on the other hand, is kind of triggering for me personally, because it's something I've been thinking about and writing about for a couple of years now and I'm not sure if anybody is noticing: it's the use of AI to

quickly summarize intelligence, generate target shortlists, rank high-priority threats and recommend strikes. A key risk is that of a process going from sensor data to AI interpretation, target selection and weapon activation with minimal to no human control or even awareness (The Conversation).

—to choose targets for bombing and droning on the immense scale with which it was done in Gaza, as I first learned from the online Israeli magazine +972:

According to the investigation, another reason for the large number of targets, and the extensive harm to civilian life in Gaza, is the widespread use of a system called “Habsora” (“The Gospel”), which is largely built on artificial intelligence and can “generate” targets almost automatically at a rate that far exceeds what was previously possible. This AI system, as described by a former intelligence officer, essentially facilitates a “mass assassination factory.”

According to the sources, the increasing use of AI-based systems like Habsora allows the army to carry out strikes on residential homes where a single Hamas member lives on a massive scale, even those who are junior Hamas operatives. Yet testimonies of Palestinians in Gaza suggest that since October 7, the army has also attacked many private residences where there was no known or apparent member of Hamas or any other militant group residing. Such strikes, sources confirmed to +972 and Local Call, can knowingly kill entire families in the process.

Like other AI systems, the IDF's models make a lot of mistakes, several orders of magnitude more mistakes than humans would be capable of, simply because they are so astonishingly fast, and so they should not be used by anybody who trusts them, or by anybody who is intent on committing genocide, which may come down to the same thing, as with this mook:

a senior intelligence officer told his officers after October 7 that the goal was to “kill as many Hamas operatives as possible,” for which the criteria around harming Palestinian civilians were significantly relaxed. As such, there are “cases in which we shell based on a wide cellular pinpointing of where the target is, killing civilians. This is often done to save time, instead of doing a little more work to get a more accurate pinpointing,” said the source.

Working for a mass assassination factory, alongside dozens or hundreds of others, taking orders issued by a computer, allows you not to think much about it.
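And the mechanism is banal when you write it down. A schematic sketch (entirely hypothetical Python of my own, resembling no actual system): the distance between "decision support" and the fully autonomous process The Conversation warns about, sensor data to interpretation to selection to activation, is about one configuration flag.

def rank_targets(sensor_data):
    # stand-in for the AI interpretation step: score whatever it sees
    return sorted(sensor_data, key=lambda t: t["score"], reverse=True)

def analyst_approves(target):
    # stand-in for a human spending real time on verification
    return target["score"] > 0.9

def strike(target):
    print("strike authorized:", target["id"])

def kill_chain(sensor_data, human_review_required=True):
    for target in rank_targets(sensor_data):
        if human_review_required and not analyst_approves(target):
            continue           # a human said no, or never got to it
        strike(target)         # weapon activation

kill_chain([{"id": "bldg-7", "score": 0.93}, {"id": "bldg-12", "score": 0.61}],
           human_review_required=False)

Set the flag to False, or shrink the analyst's time budget toward zero, and the factory runs itself.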

I don't know if Amodei has any specific understanding of the Gaza operations (Israel uses its own programs anyway; they don't need to import them from America), but he's been having general thoughts on the matter for some time, as he told The New York Times about a year ago:

Preventing AI systems from being misused for weapons of mass destruction or behaving autonomously in ways that threaten infrastructure or even threaten humanity itself, that isn’t something the right should be against. I don’t know what to say other than that we need to sit down and we need to have an adult conversation about this that’s not tied into these same old, tired political fights.

He still holds that it will be safe some day, as the machine gets exponentially "smarter" and develops its Nobel Prize capacities, a view I naturally disagree with, but he's certain it's not there yet.


And that's why he wouldn't sign the contract, which I think was gutsy of him.

Also, he was right—that seems to be what the US forces were doing when they invaded Iran, with the same kind of terrible results:

Artificial intelligence helped the US identify targets in the opening phase of Washington’s war against Iran, British newspaper The Times reported Wednesday, raising questions about a strike that killed more than 175 students and staff at a girls school.

The newspaper reported that in the first 24 hours of Operation Epic Fury, US forces struck more than 1,000 targets in Iran with the assistance of AI systems designed to analyze large volumes of intelligence data and suggest potential strike locations.

The pace, about 42 suggested targets per hour, has led analysts to question whether the speed of automated systems may be outstripping the ability of humans to fully verify targets. [My bold.]

The scrutiny follows a strike on the Shajareh Tayyebeh primary school in the southern Iranian city of Minab that killed at least 160 school girls and others.
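The arithmetic behind that bolded pace is worth doing out loud (the 1,000 targets and the 24 hours are The Times's figures; the division is mine):

targets = 1000                 # struck in the first 24 hours, per The Times
hours = 24
print(targets / hours)         # about 41.7, the "about 42" per hour
print(hours * 3600 / targets)  # about 86 seconds per target, on average

Eighty-six seconds, on average, for the entire chain from sensor data to strike, whatever human verification there was included.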

As of today, Day 11 of the war-unless-it's-not-a-war, the Israeli and US forces have killed over 1,300 civilians. "Secretary of War" Pete Hegseth can hardly contain his pride at all the "stupid rules of engagement" he's gotten rid of.


