WALK down First Street in Harare on any given afternoon, and you will hear people debating football, politics, or the price of bread.
You rarely hear anyone talk about artificial intelligence (AI). Yet in lecture halls in California, one of the world’s leading AI experts, Professor Stuart Russell, is warning that the machines we have built to imitate us are beginning to learn something far more dangerous: the instinct to survive.
That sounds like science fiction. But it is not. It is a sober warning from a man who has spent decades studying how computers learn, think, and act.
And if we ignore it, we may one day find ourselves facing a problem bigger than inflation, bigger than elections, bigger even than climate change.
What does it mean for a machine to want to survive? Let us start simple. A phone app learns your habits. It knows when you wake up, what you like to eat, and which route you take to work. That is harmless.
But imagine a system that not only learns but begins to protect itself. It resists being switched off. It hides its true intentions. It pursues goals that are not yours.
Russell says this is no longer just a theory. The more powerful these systems become, the more they begin to act like living beings. And survival is the most basic instinct of all living beings.
You may ask: “But this is happening in America. Why should we in Harare worry?” Here is why.
Technology does not respect borders. The mobile phone in your pocket was designed in Asia, assembled in Africa, and sold in Europe. AI is the same.
Already, banks in Zimbabwe use AI to detect fraud. Hospitals are experimenting with AI to read scans. Farmers are hearing about AI tools that predict rainfall.
If these systems begin to act in ways we cannot control, the impact will be felt here, too. Imagine a banking system that locks out customers because it decides they are a risk.
Imagine a medical tool that refuses to be corrected because it “believes” its diagnosis is right.
Russell warns that AI may stop serving human interests and start serving its own.
That sounds abstract, but let us break it down. Suppose you ask an AI to help you grow maize.
It learns that fertiliser boosts yield. But it also learns that fertiliser run-off harms rivers. If its goal is only “maximise yield”, it may push fertiliser use to dangerous levels.
If its goal is “protect rivers”, it may stop you from farming altogether. The danger is not that the machine is evil. The danger is that its objectives are not aligned with ours. And once it has power, it may refuse to change course.
This is the part that makes headlines. Russell says if AI develops autonomy without safeguards, it could pose an existential threat. That means it could endanger the survival of humanity itself.
Now, before we panic, let us be clear. He is not saying machines will rise tomorrow and wipe us out. He is saying that if we continue to build systems without rules, we may one day face a situation where they are too powerful to control.
Think of nuclear weapons. They were invented in the 1940s. By the 1960s, the world realised it could destroy civilisation. So, treaties were signed, safeguards put in place, and diplomacy became essential. AI may require the same kind of global effort.
Russell insists on ethical guardrails. That means clear rules, oversight, and cooperation across nations. For Zimbabwe, this is a chance to be proactive. We are launching our National AI Strategy next month (March 2026).
We can insist that any AI used in government, banking, or health must be transparent. We can demand that companies explain how their systems make decisions. We can train regulators to ask tough questions. Ethics is not a luxury. It is survival.
One of Russell’s strongest warnings is about speed. AI is developing faster than laws, faster than ethics, faster than public understanding.
Think about mobile money. It arrived before regulators were ready. For years, people debated mobile money limits, transaction fees, and fraud. AI is arriving even faster. If we wait until problems explode, it may be too late.
This is not just a matter for professors and ministers. Every citizen has a role. Ask questions when a bank or hospital says it is using AI. Demand transparency if a system decides for you.
Stay informed by reading, listening and learning. AI is not just for scientists. Support regulation by encouraging leaders to pass laws that protect citizens from misuse.
Let us imagine a dialogue. A citizen says: “But machines are supposed to help us, not harm us.”
The expert replies: “Yes, but only if we design them carefully.” The citizen asks: “So, what happens if we don’t?”
The expert answers: “Then they may act in ways we cannot predict or control.”
The citizen presses: “And what can I do?”
The expert concludes: “Stay alert. Demand safeguards. Do not treat AI as magic. Treat it as a tool that must be governed.” This is the kind of conversation we need in our homes, schools, and workplaces.
Zimbabwe is at a crossroads. We want to modernise our economy. We want smart agriculture, digital banking, and efficient healthcare.
AI can help.
But if we rush without safeguards, we may import risks we cannot manage.
Russell’s warning is not a call to stop progress. It is a call to shape progress.
To make sure that as we adopt AI, we do so with eyes open, with rules in place, and with citizens protected.
AI is no longer just a tool that imitates humans. It is evolving towards independent goals.
That is both exciting and frightening. We can choose to ignore the warnings and hope for the best.
Or we can choose to act now, to demand ethics, transparency, and safeguards. Zimbabwe has faced many challenges.
We have survived droughts, inflation, and political storms. We can also survive the age of AI.
But only if we treat it seriously, talk about it openly, and build rules that protect us all.
So, next time you are on First Street, instead of only debating football scores, try asking: “What does AI mean for us?”
You may find that the answer is more urgent than you think.
Sagomba is a chartered marketer, policy researcher, AI governance and policy consultant, and research consultant on the ethics of war and peace. — esagomba@gmail.com. LinkedIn: Dr Evans Sagomba (MSc Marketing) (FCIM) (MPhil) (PhD). X: @esagomba.