Guest blog by Dr Casper Joubert
As an occupational medicine practitioner, I have seen firsthand how quickly many healthcare workers have embraced artificial intelligence. The adoption has not been uniform, with some colleagues far ahead and others more hesitant, but what is striking is that much of this has happened before any formal guidelines were put in place. Whether it is ChatGPT drafting clinical documents, computer-vision tools reading images, or machine-learning models stratifying risk, AI is already embedded in everyday clinical practice. These tools have become so integrated into workflow that many colleagues no longer think of them as “AI”; they see them simply as instruments that make their work possible in an overstretched system.
That is exactly why striking the right balance between over-regulation and under-regulation is important. Heavy-handed rules tend to be ignored. On the other hand, the absence of guardrails allows untested or unsafe systems to influence decisions with life-changing consequences. In occupational health, where decisions shape a worker’s livelihood, compensation rights, and long-term health outcomes, this balance becomes even more critical.
Africa’s Reality: Scarce Resources, High Demand - One of the clearest drivers for AI adoption in occupational health is the severe shortage of expertise. South Africa has roughly 50 occupational medicine specialists, and the situation in many African countries is even more constrained. Only about 30% of African nations have established national occupational safety and health (OSH) policies or programmes (ILO). Millions of occupational injuries and diseases occur annually, yet most workers still have little or no access to formal occupational health systems. Our context is further shaped by structural challenges such as fragmented services, non-digitised data, long distances between workplaces and specialists, and weak reporting pathways. Yet these limitations create opportunity. Africa can leapfrog legacy systems and transition directly into digital-first, AI-enabled occupational health, building modern systems from the ground up.
The Three Pillars for Responsible AI in Occupational Health
1. Financial: Making the Business Case - AI becomes compelling when it makes financial sense, and the numbers are clear.
- USD 15 trillion in projected global economic contribution from AI by 2030 (PwC).
- 230 million new digital jobs in Africa expected by 2030 (SAP).
- Up to USD 11 billion in potential efficiency gains in South Africa through workforce optimisation.
In occupational health, this means predictive analytics, fewer injuries, reduced downtime, lower compensation costs, and more accurate and scalable health surveillance. For a continent facing severe resource constraints, AI is not a luxury; it is an efficiency multiplier.
2. Legislative: Lead Fast, Comply Smart - Africa is not beginning from scratch. The African Union’s Continental AI Strategy already offers a guiding framework, while global developments, particularly the EU AI Act, continue to shape expectations around transparency, quality, and accountability. But these external frameworks cannot simply be imported wholesale. Africa now has a window of opportunity to design AI legislation that reflects our own labour realities, workforce vulnerabilities, data environments, and resource constraints. If we do not proactively build laws that match our unique context, we risk being compelled to adopt regulations created for quite different societies, ultimately limiting our ability to innovate and to protect our workers. Embedding compliance early will enable us to strengthen:
- accountability,
- defensible medical decision-making,
- transparency, and
- alignment with future global norms.
3. Ethical: A Moral Imperative for Worker Protection - Africa’s workers often labour in some of the most hazardous conditions in the world; mining fatality rates, for example, are four times higher than the global average.
Ethical AI is not an abstract concept. It is the practical foundation for earlier diagnosis, smarter prevention, and more equitable protection for vulnerable workers. Rather than speculating about job losses, the more urgent task is ensuring that AI is deployed safely, transparently, and responsibly in occupational health. For Africa to benefit, we must focus on practical safeguards:
Transparency and Disclosure - We must be open with workers, employers, and clinical teams when AI tools are used, whether for report drafting, risk prediction, or screening. Transparency builds trust and clarifies that AI is assisting, not replacing, human judgment.
Awareness of Bias - AI systems are only as good as the data they are trained on. Many global datasets do not reflect African populations, industries, or disease patterns. Because of these imperfect datasets and the likelihood that much of the underlying research is not grounded in African realities, a clinician cannot abdicate his or her responsibility when interpreting AI outputs. We must remain alert to:
- demographic bias,
- the under-representation of African workers,
- and skewed predictions that disproportionately affect minorities or high-risk sectors.
Protecting Privacy and Data Security - Strict de-identification, secure handling of data, and adherence to local and international privacy standards are non-negotiable. Workers must have confidence that their personal and medical data will not be misused.
The Human-in-the-Loop Principle - This is the most important safeguard. AI should support, not replace, clinical reasoning. Every AI-generated suggestion must be reviewed by a qualified clinician, who retains full responsibility for the final decision. Human oversight prevents false positives, misinterpretation, and over-reliance on opaque algorithms.
Upskilling and AI Literacy - We must build a workforce that is AI-literate: not necessarily technical, but capable of:
- crafting clear prompts,
- evaluating outputs critically,
- cross-checking information across multiple LLMs,
- and recognising when AI may be incorrect.
Conclusion - AI is reshaping the future of work, whether we prepare for it or not. For Africa, the risk is not adoption; it is non-adoption. With scarce skills, fragmented systems, and high occupational disease burdens, we cannot afford to delay. If we anchor AI adoption in the financial, legislative, and ethical pillars, and protect its use with transparency, bias awareness, and human oversight, we can build a future where occupational health is smarter, more equitable, and far more accessible than ever before.
Disclosures - This blog post is adapted from a presentation I delivered at the AI in Occupational Health Virtual Conference on 9th July 2025, where I spoke on AI developments in the South African and broader African context. The views, interpretations, and opinions expressed here are entirely my own and do not represent the official position of my employer or any institution with which I am affiliated. Large language models, including ChatGPT 5.1 and Gemini 3.0, were used solely for language refinement and structural editing. All core ideas, arguments, factual content, and analytical perspectives were developed by me as the author.
Editor's note: SOM published new guidance on AI in Occupational Health in January 2026, download it here.
About the Author:
Dr Casper Joubert is a final-year Occupational Medicine Registrar at Stellenbosch University and Tygerberg Hospital in Cape Town, South Africa. He currently serves as Co-Chair of the Western Cape Chapter of the South African Society of Occupational Medicine (SASOM). With a diverse clinical background spanning three countries (South Africa, the United Arab Emirates, and Mauritius), Dr Joubert brings a global perspective to occupational health practice. His professional interests include digital health innovation, risk management, and the ethical integration of artificial intelligence into workplace medicine. Beyond his medical career, he is an enthusiastic traveller, runner, and golfer, finding balance between professional commitment and personal wellbeing.

