THERE’S a quiet but urgent race unfolding across the world.
It’s not about who builds the fastest chip or the most powerful algorithm.
It’s about whether artificial intelligence will remain a tool for human progress or drift into something that undermines our dignity, our agency, and our sense of purpose.
This is the race towards pro-human AI, and it matters as much in Harare as it does in Silicon Valley.
For the past few years, conversations about AI have been dominated by efficiency, productivity, and automation.
The mantra has been “AI-first,” a slogan that sounds harmless but carries a troubling undertone: humans are second.
In workplaces across the globe, employees are now judged not by the quality of their work but by how many AI tools they can wield.
The underlying message is clear: machines are smarter, faster, and more reliable, and humans must prove they are not obsolete.
That mindset is not just anti-human; it is corrosive.
It erodes confidence, mental health, and the joy of meaningful work.
Zimbabwe, like many nations, is watching this race from the sidelines, but the implications are direct.
Our financial services, education systems, and even government institutions are beginning to experiment with AI.
If we import the “AI-first” ideology without question, we risk creating workplaces where people are valued less than the tools they use.
Imagine a young graduate entering the job market only to be told that their worth depends on how well they can mimic a machine.
That is not progress; it is surrender.
This is why the recent Pro-Human AI Declaration is so important.
It lays out five principles that should guide the future: keeping humans in charge, avoiding concentration of power, protecting the human experience, safeguarding liberty, and holding AI companies accountable.
These are not abstract ideals.
They are practical guardrails.
They remind us that AI must serve human flourishing, not replace it.
Consider the issue of concentration of power.
Right now, a handful of global corporations control the most advanced AI systems.
They decide what values are embedded in these tools, what data is used to train them, and how they are deployed.
For Zimbabwe and other African nations, this creates dependency.
We risk becoming consumers of foreign ideologies packaged as technology.
Pro-human AI policies demand decentralisation.
They call for community-driven innovation, open-source platforms, and local oversight.
In our context, this could mean universities, startups, and government agencies working together to build AI systems that reflect Zimbabwean values: respect for community, resilience, and inclusivity.
Another pressing issue is the creeping narrative of AI “sentience.”
Some companies now speak of their models as if they were conscious beings deserving of rights.
One firm even drafted a “constitution” for its chatbot, claiming to care about its well-being.
This may sound futuristic, even charming, but it is deeply misleading.
AI is not a citizen.
It is not a neighbour. It is a product, like a dishwasher or a car, and its makers must be held liable when it fails.
Granting machines “special status” risks diluting human rights.
It risks creating a world where the dignity of people is negotiated against the supposed needs of algorithms.
Zimbabwe, with its long struggle for human rights and self-determination, should be especially wary of such narratives.
We cannot afford to let speculative ideas about machine personhood overshadow the urgent need to protect human beings.
The labour question is equally critical. Studies already show that overreliance on AI tools can lead to burnout, cognitive impairment, and skill erosion.
In a country such as ours, where unemployment is high and young people are desperate for opportunities, the danger is clear.
If jobs are redefined as “AI-assisted tasks,” then those without access to expensive tools will be excluded.
Worse, those who do have access may find themselves trapped in a cycle of dependency, their creativity stifled by the constant demand to prove they are not replaceable.
Pro-human AI policies must confront this head-on.
They must ensure that technology enhances human skills rather than erodes them.
They must protect workers from being reduced to appendages of machines.
So, how do we move forward?
First, by insisting that AI governance is not just about safety checklists and compliance audits.
Those are necessary, yes, but they are not sufficient.
Pro-human AI requires a broader vision.
It requires us to ask: Does this technology make people more confident, more capable, more fulfilled?
Or does it diminish them?
In Zimbabwe, this could mean embedding AI literacy in schools, not just as a technical skill but as a civic one. It could mean creating policies that guarantee workers the right to refuse AI monitoring or replacement.
It could mean supporting local innovators who design tools that solve real community problems rather than chasing hype.
Second, we must resist the narrative of inevitability.
Too often, AI is presented as a force of nature, unstoppable and beyond human control.
That is false.
AI is built, trained, and deployed by people. It reflects choices about data, design, and purpose. If those choices are made without regard for human dignity, the result will be anti-human systems.
But if they are made with care, with accountability, and with a commitment to human flourishing, AI can be a powerful ally.
Zimbabwe has the opportunity to shape those choices, not by competing in raw technological power but by leading in ethical clarity.
In the end, we must remember that pro-human AI is not just about avoiding harm. It is about actively supporting well-being.
It is about creating technologies that help teachers inspire students, doctors heal patients, farmers grow food, and artists express themselves.
It is about ensuring that AI amplifies human creativity rather than replacing it. In our context, this could mean AI tools that help small enterprises manage cybersecurity risks, or platforms that support local languages and cultural expression.
These are not luxuries; they are necessities if AI is to serve the people rather than the other way around.
Sagomba is a doctor of philosophy specialising in AI ethics and policy. He is an AI governance and policy consultant, an ethics of war and peace research consultant, a political philosophy researcher, and a chartered marketer. — esagomba@gmail.com / LinkedIn: Dr. Evans Sagomba / X: @esagomba.