There is an Israeli military practice known as the "fog procedure". First used during the second intifada, it is an unofficial rule that requires soldiers guarding military posts in conditions of low visibility to fire bursts of gunfire into the darkness, on the assumption that an unseen threat might be lurking.
It is violence licensed by blindness. Shoot into the darkness and call it deterrence. With the dawn of AI warfare, that same logic of chosen blindness has been refined, systematized, and handed off to a machine.
Israel's recent war in Gaza has been described as the first major "AI war" – the first war in which AI systems have played a central role in generating Israel's list of purported Hamas and Islamic Jihad militants to target. Systems that processed billions of data points to rank the probability that any given person in the territory was a combatant.
The darkness in the watchtower was a condition of the terrain. The darkness inside the algorithm is a condition of the design. In both cases, the blindness was chosen. It was chosen because blindness is useful: it creates deniability, it makes the violence feel inevitable, it moves the question of who decided from a person to a procedure. The fog did not lift. It was given a probability score and called intelligence.
It may have been chosen blindness that led, at the start of the US-Israeli Iran war, to the strike on the Shajareh Tayyebeh elementary school in Minab, in southern Iran. At least 168 people were killed, most of them children, girls aged seven to 12.
The weapons were precise. Munitions experts described the targeting as "highly accurate", every building individually struck, nothing missed. The problem was not the execution. The problem was the intelligence. The school had been separated from an adjoining Revolutionary Guard base by a fence and repurposed for civilian use nearly a decade ago. Somewhere in the targeting cycle, it appears that fact was never updated.
The precise role of AI in the strike on Minab has not been officially confirmed. What is known is that the targeting infrastructure in which these systems operate has no reliable mechanism for flagging when the underlying intelligence is a decade out of date.
Whether or not an algorithm selected this school, it was selected by a system that algorithmic targeting built. To strike 1,000 targets in the first 24 hours of the campaign in Iran, the US military relied on AI systems to generate, prioritize, and rank the target list at a speed no human team could replicate.
Gaza was the laboratory. Minab is the market. The result is a world in which the most consequential targeting decisions in modern warfare are made by systems that cannot explain themselves, supplied by companies that answer to no one, in conflicts that generate no accountability and no reckoning. That is not a failure of the system. That is the system.
Who is to blame when AI kills?
We should resist the temptation to blame only the algorithm for the logic that turns children into acceptable error rates. In July 2014, four boys from the Bakr family – Ismail, Zakariya, Ahed and Mohammad, aged nine to 11 – were killed on a beach in Gaza. No AI was involved. The site had been preclassified as a Hamas naval compound. The boys were flagged as suspicious because they ran, then walked – behavior that matched a targeting template for fighters trying not to draw attention. When the first missile hit, the surviving children fled. The drone followed them and fired again. An officer later testified that from a vertical aerial view, it is very hard to identify children. The strike was logged as a targeting error.
A classified Israeli military database, reviewed by the Guardian, +972 Magazine and Local Call, indicated that of more than 53,000 deaths recorded in Gaza, named Hamas and Islamic Jihad fighters accounted for roughly 17%. That suggests the remainder, 83%, were civilians. These are not the statistics of a war fought with precision; this is a war where imprecision is the point. (The IDF disputed figures presented in the Guardian article, though it did not identify which figures.)
So AI targeting systems did not invent this logic. They inherited it, encoded it across millions of data points, and automated it beyond any meaningful human check. When a school in Minab is classified in a database as a military compound, that is not a malfunction. It is the fog procedure, the same logic that chased four boys down a beach in Gaza – working exactly as designed, at a different scale, in a different country, with a different weapon. The darkness just has better hardware now.
Many of these AI systems inherently defy international humanitarian law, which does not merely demand correct outcomes from military operations; it requires a careful process before they are carried out. A commander must make every reasonable effort to verify that a target is a legitimate military objective. The law also requires that everything feasible be done to protect civilians from the effects of attack, not as an afterthought but as a parallel and equal obligation.
That obligation cannot be delegated to a system whose reasoning is opaque and whose outputs cannot be interrogated in real time. In Gaza, an algorithm processed data on every person in the strip – phone records, movement patterns, social connections, behavioral signals – and produced a ranked list of names, each assigned a probability score indicating the likelihood that they were a combatant. This is not the same as a human analyst identifying a known militant and programming a weapon to hit them. The AI was not confirming identities. It was inferring them, statistically, across an entire population, generating targets that no human had individually assessed before they appeared on the list.
Verification, in this system, meant a human operator reviewed each name for an average of about 20 seconds, long enough to confirm the target was male. Then they signed off. One system alone produced more than 37,000 targets in the first weeks of the war. Another was capable of generating 100 potential bombing sites per day. The humans in the loop were not exercising judgment. They were managing a queue.
In Iran, the picture is, for now, less fully documented. But the scale tells its own story. Two sources confirmed to NBC News that Palantir's AI systems, which draw in part on large language model technology, were used to identify targets. (Palantir's CEO, Alex Karp, said he "can't go into specifics" when asked about this on CNBC, but said that Claude was still integrated into Palantir's systems used in the Iran war.) Brad Cooper, head of US Central Command, has boasted that the military is using AI in Iran to "sift through vast amounts of data in seconds" in order to "make smarter decisions faster than the enemy can react". Whether or not every strike was AI-assisted, the tempo of the campaign was only possible because targeting had been substantially automated.
When reported verification times for AI-assisted targets are measured in seconds, we are not talking about human judgment with algorithmic assistance. We are talking about rubber-stamping a machine's output. And when that machine's data is a decade out of date, the consequences are written in rows of small coffins.
The companies implicated in this are not obscure defense startups. Palantir, founded with early CIA funding and now one of the primary AI infrastructure suppliers to the US military, supplied systems used in the Iran campaign. Those systems draw in part on Anthropic's Claude, a large language model whose parent company tried to resist Pentagon pressure to remove ethical constraints on its use for targeting. The Pentagon responded by threatening to cut ties and turning to OpenAI and others instead. The market for killing at scale does not lack for suppliers.
The episode is instructive: the one company that tried to draw a line was sidelined, and the killing continued without interruption. Google, despite significant internal employee protest, signed Project Nimbus, a cloud-computing and AI contract with the Israeli government and military worth more than $1bn.
Amazon is a co-signatory to Project Nimbus alongside Google. Microsoft had deep integration with Israeli military systems before partially withdrawing under pressure in 2024, at which point the data migrated to Amazon Web Services within days.
Anduril, founded by Palmer Luckey and staffed heavily with former US defense officials, builds autonomous weapons systems explicitly designed for lethal targeting. OpenAI, which until recently prohibited military use in its terms of service, quietly removed that restriction in early 2024 and has since pursued Pentagon contracts. These are among the most valuable companies in the world, with consumer products used by hundreds of millions of people, university research partnerships, and significant political influence in Washington, Brussels and beyond.
Of course private companies have supplied militaries for centuries – with radios, vehicles, satellite navigation, microwave technology and, of course, complex weapons systems. This is not new or inherently corrupt. The "dual-use" problem is as old as industrialization: virtually any powerful technology can be put to military ends.
But AI targeting is not merely a component that militaries incorporate into their operations. It is the decision architecture itself – the thing that determines who gets killed and why. When a single system can generate tens of thousands of targets in the time it would have taken a human intelligence team to verify 10, the question is not whether private companies should supply militaries. It is whether any legal framework can survive contact with it.
In international law we talk about accountability frameworks: the chain of answerability that runs from a decision to use lethal force back to the person who authorized it. An accountability framework requires that someone be identifiable as the decision-maker, that their reasoning be reconstructable after the fact, and that the process obligations the law demands – proportionality assessment, verification, precaution – can be shown to have been followed.
AI targeting systematically destroys each of those conditions. Attribution dissolves across a chain of engineers, commanders, operators and corporate suppliers, each of whom can point to another. Reasoning disappears into a probability score that no lawyer can audit and no court can cross-examine. Process collapses into a 20-second approval of a machine suggestion. And the companies that built and sold the system sit entirely outside the legal framework, because international humanitarian law was designed for states and their agents, and Palantir is not a signatory to the Geneva conventions.
The accountability framework has not merely been strained or tested by AI warfare. It has been made structurally irrelevant.
Lifting the fog of war
We should stop calling these technology companies and start calling them what they are: defense contractors.
The largest AI firms are not neutral infrastructure providers who happened to find a military customer. They are being integrated into the targeting architecture of modern warfare. Their systems sit inside the kill chain, their engineers hold security clearances, their executives rotate through the same revolving door that has always linked Silicon Valley to the Pentagon.
These AI suppliers are at the cutting edge of the military-industrial complex, and should be regulated as such. A clear accountability chain applies to firms such as Raytheon and Lockheed Martin – entailing export controls, congressional oversight, liability frameworks and procurement conditions – while the weak rules that apply to the companies writing the algorithms that select military targets have never been applied, tested or enforced.
That is not an oversight. It is a choice, actively maintained by lobbying, by the deliberate blurring of "commercial" and "defense" products, and by a regulatory culture that still treats AI as a consumer technology that happened to find its way to the battlefield. Palantir spent close to $6m lobbying Washington in 2024, and in one quarter of 2023 outspent Northrop Grumman. It launched a dedicated foundation to shape the policy environment it operates in. The consortium of Palantir, Anduril, OpenAI, SpaceX and Scale AI was described by its own participants as a project to supply a new generation of defense contractors to the US government. The venture capital firms backing these companies, Andreessen Horowitz and Founders Fund, have cultivated influence through proximity to power: former senior officials on their advisory boards, partners rotating through government roles and direct access to the policymakers who determine how much the Pentagon spends and on what.
The EU AI Act, the most ambitious attempt yet to govern artificial intelligence, explicitly exempts military and national security applications, with the stated justification that international humanitarian law is the more appropriate framework. It is a remarkable act of circularity: the one body of law being systematically destroyed by these systems is designated as their regulator, while the regulators who might actually constrain them look away.
In the United States, the AI provisions of the 2025 National Defense Authorization Act do not regulate military AI. They direct agencies to adopt more of it. Pete Hegseth's AI strategy, issued in January 2026, frames the question entirely as a race, directing the Pentagon to move at wartime speed, with AI as the first proving ground. The regulatory culture has not failed to catch up with the technology. It has decided, deliberately, not to try.
So far, the only serious government intervention in AI military capability we have seen came not from a state demanding restraint or accountability, but from the US demanding the systems be made more lethal. That is the horizon of ambition we have accepted.
Banning these systems outright is impossible when so many of the actors involved care little about international law. But pressure points remain, and they are real. Any future government in Washington that wants to use AI military capability without producing an endless series of Minabs will need a regulatory framework – not as a concession to critics but as a basic requirement for not becoming a rogue actor. The same is true in Europe, where Britain has committed over £1bn to a new AI-integrated targeting system connecting sensors and strike capabilities across all domains, where France's leading AI company has partnered with a German defense startup to build autonomous weapons platforms, and where Germany is deploying AI-guided attack drones in Ukraine.
There is an opening to regulate these systems. The EU has the most obvious tools, not through the AI Act, which deliberately exempts military applications, but through export controls and procurement conditions on the dual-use systems that move between commercial and defense markets. International courts are beginning to open doors too: the ICJ advisory opinion on Palestinian rights has created a framework in which companies supplying systems used in unlawful strikes face potential liability exposure in jurisdictions that take international law seriously. And AI firms need governments, not just as customers but as the providers of the computing power, the energy, and the physical infrastructure that frontier AI requires and that no company can sustain from commercial revenues alone. That dependency gives states that are willing to use it real leverage over companies that would prefer not to be regulated. The question is whether any government with the tools to act will decide, before the next Minab, that the cost of inaction has become too high.
What regulation should look like is relatively straightforward, even if it is hard to implement. AI systems used in targeting must be explainable – not in terms of a probability score but of reasoning that a lawyer can audit. The cumulative civilian cost of AI-assisted campaigns must be assessed as a whole. And the liability that stops at the operator must extend up the supply chain to the companies that knowingly built and sold opaque systems for use in armed conflict. These are not novel demands. They are the minimum conditions for the laws of war to mean anything in the age of algorithmic targeting.
In the meantime, the fog procedure is operational and coming to define the future of war. But the soldiers who fired into the darkness were at least present in it. The companies that built what replaced them are doing it from Palo Alto, at no personal risk, with no legal exposure, and with every incentive to do it again.
-
Avner Gvaryahu is a DPhil researcher at the Blavatnik School of Government, University of Oxford. He is a former executive director of Breaking the Silence, an Israeli human rights group of former soldiers


