The Ethics of AI-Powered Burnout Detection: Balancing Worker Privacy, Organisational Productivity, and the Right to Disconnect

Abstract

Background: Workplace burnout has reached epidemic proportions globally, with the World Health Organization (WHO) formally classifying it as an occupational syndrome in 2019. Simultaneously, artificial intelligence (AI)-powered wellness monitoring tools have proliferated rapidly across organisations in both developed and developing economies, promising to detect early burnout indicators through real-time analysis of biometric data, communication patterns, and behavioural signals. These technologies represent a significant shift in how organisations approach employee wellbeing, one that carries profound implications for worker autonomy, privacy, and the human right to genuine rest.

Problem Statement: Despite their stated intentions, AI wellness monitoring tools raise critical ethical questions that remain inadequately addressed in both academic literature and organisational policy. By surveilling employee behaviour continuously, such systems risk transforming corporate wellness initiatives into instruments of covert monitoring, eroding the boundaries between work and rest, and ultimately intensifying rather than alleviating the very burnout they claim to prevent. This tension is particularly acute in contexts such as Nigeria and the broader Global South, where labour protections are less formalised and workers may be especially vulnerable to algorithmic workplace control.

Objective: This paper develops a comprehensive ethical design framework, the PRISM Framework (Privacy-centred, Rights-based, Inclusive, Supportive, Mission-aligned), for AI-powered workplace wellness tools. The framework is designed to balance organisational productivity imperatives with employees' fundamental rights to privacy, rest, and disconnection from work, in alignment with the Power of the Pause.

Methods: This study employs a mixed-methods approach combining a systematic literature review (SLR) following PRISMA guidelines and a normative ethical framework analysis. The SLR searched Google Scholar, IEEE Xplore, PubMed, and SSRN using the keywords 'workplace AI', 'burnout detection technology', 'right to disconnect', 'wellness monitoring ethics', and 'occupational wellbeing AI'. Inclusion criteria required peer-reviewed articles and official policy documents published between 2015 and 2026, in English, addressing AI or digital technology in workplace wellness contexts. The ethical analysis applied three established normative frameworks: consequentialist ethics, Kantian deontological ethics, and the UNESCO Recommendation on the Ethics of AI (2021).

Key Findings: The analysis yields four principal findings. First, current AI wellness tools predominantly operationalise a surveillance-centred design paradigm that conflicts with workers' reasonable expectations of privacy and rest. Second, the right to disconnect, codified in law across France, Spain, Portugal, and Belgium, is systematically undermined by always-on AI monitoring architectures. Third, workers in the Global South, including Nigeria, face disproportionate risks due to weak regulatory frameworks and power asymmetries in the employer-employee relationship. Fourth, ethical AI design for wellness is achievable through the application of the PRISM Framework proposed in this paper.

Policy Implications: This paper's findings directly support the United Nations Sustainable Development Goals, specifically SDG 3 (Good Health and Wellbeing) and SDG 8 (Decent Work and Economic Growth). Concrete recommendations are provided for AI developers, organisational leaders, and national policymakers, including those in Nigeria and the African Union, to design, procure, and regulate AI wellness tools in ways that protect rather than exploit workers.

Export Metadata

DOI: https://doi.org/10.5281/zenodo.19798273

Published: 26 April 2026

Publisher: Genius Open Access

ISSN: 0000-0000