Track 21 – Organizational Thinking and Behavior in the Age of AI: Is Disorganization Management the New Normal?

Corresponding Manager: Davide Secchi (d.secchi@psbedu.paris)

Track Manager(s): Davide Secchi, Dinuka Herath, Fabian Homberg, Andrea Guido, Rahman Khan

Description
With the rise of AI and large language models (LLMs), organizations face successive and unprecedented waves of innovation that have, among other effects, repercussions on the workforce. On the positive side, a recent McKinsey report suggests increased productivity opportunities of around $4.4 trillion. On the other hand, these innovations threaten job security, demand reskilling, redefine competence, and are usually sold as efficiency solutions. However, the implementation landscape is uncertain, and employees face risks of dehumanization [1] and the effects of progressive social isolation [2]. At the same time, uncertainty leaves room for improvisation, bricolage, and ad hoc organizational configurations. Hence, more than efficiency, AI may support loosely coupled relations, ad hoc team structures, the reconfiguration of procedures, and other unorthodox or hybrid organizational settings. Put differently, this wave of change may unlock pockets of disorganization [3] that should be considered inevitable and actively managed rather than opposed. This implies that skillsets and competences also need to be adjusted to avoid so-called Eliza effects [4] (i.e., anthropomorphizing a technological artifact) and automation bias (e.g., taking machine hallucinations, a form of "botshit", as inherent truths [5]). This track encourages and welcomes contributions that explore the implications of organizational change related to the use, implementation (planned or actual), and role of new disruptive technologies.

Keywords
Artificial Intelligence; disorganization management; organizational behavior; cognitive distress; organizational change

Key References
[1] Dang, J., & Liu, L. (2025). Dehumanization risks associated with artificial intelligence use. American Psychologist. https://doi.org/10.1037/amp0001542
[2] Corgnet, B., Hernán-González, R., & Mateo, R. (2023). Peer effects in an automated world. Labour Economics, 85, 102455.
[3] Herath, D. B., Secchi, D., & Homberg, F. (2025). Disorganization Management: What Is It, How Does It Work, and Why Does It Matter? Academy of Management Annals, 19(1), 404-433.
[4] Sison, A. J. G., Daza, M. T., Gozalo-Brizuela, R., & Garrido-Merchán, E. C. (2024). ChatGPT: More than a “weapon of mass deception” ethical challenges and responses from the human-centered artificial intelligence (HCAI) perspective. International Journal of Human–Computer Interaction, 40(17), 4853-4872.
[5] Hannigan, T. R., McCarthy, I. P., & Spicer, A. (2024). Beware of botshit: How to manage the epistemic risks of generative chatbots. Business Horizons, 67(5), 471-486.

Research Partnerships and Promotion Channels
We have ties with several international professional academic networks (e.g., EURAM, AOM, INFORMS, ESSA) and will spread the news of the track amongst those communities. We are also members of, or in contact with, several journal editorial boards (e.g., Kybernetes, Evidence-Based HRM) and may reach out to those communities as well. Finally, we will promote the track through our personal research collaboration networks, which include more than 200 colleagues.