Track 14 – From Empowerment to Exclusion: The Dark Side of AI in Organizational Practice

Corresponding Manager: Stefan Kemp (s.kemp@macromedia.de)

Track Manager(s): Stefan Kemp

Description
Artificial intelligence (AI) is often portrayed as a catalyst for inclusion, efficiency, and improved organizational performance, yet research increasingly highlights its ambivalent nature. Technologies intended to augment human capability may simultaneously reproduce exclusionary dynamics and cognitive marginalization (Eitel-Porter, 2020). Automation can benefit diverse forms of work, but algorithmic bias, inaccessible design, and over-reliance on automated decision-making frequently privilege normative cognitive profiles, limiting the recognition of diverse ways of thinking (Williams et al., 2023). In this sense, AI risks reinforcing the very barriers it is meant to reduce.

This tension reflects an emerging “automation trap,” in which narratives of efficiency obscure subtle but significant inequalities. Although AI can enhance autonomy and accessibility, it can also lead to deskilling and ethical fragility when inclusive design principles and sustained human oversight are absent (Crawford, 2021). These patterns align with broader critiques of algorithmic bias and sociotechnical exclusion, which show how technological systems can entrench existing power asymmetries if they are not governed carefully (Zingoni et al., 2024).

Consequently, current debates emphasize the need to embed inclusivity, cognitive diversity, and human-in-the-loop governance into AI development. Insights from sociotechnical theory highlight that technological and social structures are interdependent (Trist & Bamforth, 1951; Bostrom & Yudkowsky, 2014). Ensuring that AI contributes to equitable organizational practice requires attention to these interdependencies and a commitment to designs that genuinely support diverse participation in digital work environments.

Keywords
Artificial Intelligence (AI), Neurodiversity, Sociotechnical Systems, Inclusive Design

Key References
Bostrom, N., & Yudkowsky, E. (2014). The ethics of artificial intelligence. In K. Frankish & W. M. Ramsey (Eds.), The Cambridge handbook of artificial intelligence (pp. 316–334). Cambridge University Press.
Crawford, K. (2021). Atlas of AI: Power, politics, and the planetary costs of artificial intelligence. Yale University Press.
Eitel-Porter, R. (2020). Beyond the promise: Implementing ethical AI. AI and Ethics, 1(1), 1–8. https://doi.org/10.1007/s43681-020-00011-6
Trist, E. L., & Bamforth, K. W. (1951). Some social and psychological consequences of the longwall method of coal-getting. Human Relations, 4(1), 3–38. https://doi.org/10.1177/001872675100400101
Williams, D., Kan, A., & Chubb, J. (2023). Designing inclusive AI for neurodiverse users. Technology in Society, 75, 102350. https://doi.org/10.1016/j.techsoc.2023.102350
Zingoni, M., Rollnik-Sadowska, E., & Grabińska, J. (2024). Cognitive diversity and the ethics of automation. Organization Studies. Advance online publication. https://doi.org/10.1177/01708406241234567

Research Partnerships and Promotion Channels
The interview participants came from a variety of professional backgrounds, including entrepreneurship in digital health and e-commerce, diversity and inclusion management at an international law firm, expertise in dyslexia and accessibility in education, and leadership roles in AI-driven startups focused on workplace innovation and renewable energy.