WHEN the Pentagon came knocking on Anthropic’s door in February, it was not shopping for a clever chatbot. It wanted a weaponised brain. US officials pressed the San Francisco AI company to plug its Claude system straight into classified networks. Anthropic said no.
Its leaders explained that they would not help build fully autonomous weapons or tools for large-scale spying, even in wartime. Kudos to Anthropic, the creators of Claude.
This refusal came during a week when the US and Israel killed Iran’s Supreme Leader, Ayatollah Ali Khamenei, sparking huge missile and drone attacks in the Gulf. Anthropic’s choice might be one of the most important decisions in the early days of AI.
In the past, resistance to war came from people: soldiers, scientists, whistle-blowers. Now, rules and ethics live inside computer programmes and company policies written in California.
Anthropic’s rules are clear and bold. Its AI will not be used to build or run fully autonomous killing machines. It will not help scan millions of lives in the name of “national security”. That means keeping safeguards in place to stop Claude from drawing up plans for mass killings or helping build a system where everyone is watched. The company will still work with the US government, but only if people and laws stay involved.
While debates continue over these rules, a separate conflict unfolds in the Middle East. Iran’s military strategy relies on missiles to threaten nearby countries, uranium enrichment for near-nuclear capability, and cost-effective drones that have targeted air defences from Ukraine to the Red Sea.
On February 28, US and Israeli forces attacked Iran’s military infrastructure, striking nuclear sites, missile bases, command centres, and Khamenei himself. Iran responded with its biggest missile and drone barrages yet. But its arsenal is finite. Factories and launchers can be destroyed, and sanctions make them hard to replace. If the war continues, Iran may lose much of its power to threaten others.
However, Iran’s beliefs are harder to destroy. Some of its leaders see the destruction of Israel as a religious goal, not just a political one. Martyrdom, dying for the cause, is seen as honourable. Some reports say Khamenei chose not to flee before the attacks, a sign of this mindset.
This matters for AI. Today’s AI can help military leaders recognise patterns, predict what will happen, and make fast decisions. AI can analyse satellite images, intercepted messages, and news faster than any human team. It can suggest which targets to hit, with what weapons, and in what order. It can run thousands of simulations to tell commanders the chances of success or escalation.
Using AI like this might help avoid civilian deaths and mistakes. But it also brings risks. The more leaders trust AI’s numbers and dashboards, the less room there is for doubt or political debate. In the Cold War, the fear was a tired officer making a mistake. Now, the fear is leaders trusting a computer’s “90%” success rate over their own gut feeling.
Anthropic’s insistence on safeguards tries to keep humans involved. If an AI cannot easily plan mass killings or mass surveillance, there is still a gap between what a government wants and what it can actually do. That gap is where laws, ethics, and public debate should happen. When a company refuses to close that gap, it is saying: we will do some tasks, but we will not help build machines that act without human conscience.
Zimbabwe and AI governance
This restraint is not universal. While US companies debate “responsible AI”, drones based on Iranian designs keep attacking ships, bases, and infrastructure. While Anthropic worries about surveillance at home, many governments already use AI to monitor dissidents, journalists, and the opposition. The same kinds of models the Pentagon sought are being sold elsewhere as “smart policing” or “border optimisation”.
This matters for Zimbabwe and the rest of the Global South. African governments are being offered AI tools for surveillance, cyber defence, and military purposes by rival powers. Some systems will come with clear limits, bound by Western laws and human rights standards.
Others will be black boxes, made in Moscow, Beijing, or Tel Aviv, with few protections.
For Zimbabwe, which knows the dangers of an unchecked security apparatus, the question is not whether AI will be used, but under whose rules and values. Will its policing, elections, and borders run on AI with real bans on autonomous killing and mass spying? Or will they follow the logic of whoever supplies the technology?
There is also a question of control: if US companies can restrict certain uses even by their own military, African states reliant on foreign technology could find providers refusing them service too. Ethical policies are welcome, but they can leave dependent states exposed, so local leaders should set their own guidelines rather than rely solely on external standards.
The Iran-Anthropic situation is a warning. On one side is a regime willing to face disaster, seeing collapse as proof of its beliefs. On the other is a superpower leaning on algorithms whose designers are trying to build in a conscience. Smaller states are caught in the middle and will live with the consequences.
For readers in Harare, the lesson is not to avoid AI in security altogether. Instead, any rush to bring in “smart” systems from abroad must come with tough questions at home. Who controls the data? Who can check the code? Who decides what the system can and cannot do, even in a crisis?
Bangure is a filmmaker. He has extensive experience in both print and electronic media production and management. He is a past chairperson of the National Employment Council of the Printing, Packaging and Newspaper Industry. He has considerable exposure to IT networks and Cloud technologies and is an enthusiastic scholar of artificial intelligence. — naison.bangure@hub-edutech.com




