Intelligent organizational engineering driven by human–AI collaboration and explainable AI to boost productivity

  • Francisco Herrera Dept. of Computer Science and Artificial Intelligence, DaSCI Research Institute, University of Granada, 18071-Granada, Spain.

DOI:

https://doi.org/10.37610/87.702

Published

31-12-2025

Abstract

The rapid adoption of artificial intelligence (AI) in organizational engineering promises substantial productivity gains, yet studies reveal a persistent "AI productivity paradox," in which task-level efficiency fails to translate into measurable economic benefits. This article examines how human–AI collaboration (HAIC) and explainable AI (XAI) can address this gap by aligning algorithmic capabilities with human expertise, trust, and organizational design. Drawing on recent empirical studies, we analyze the structural, cognitive, and sociotechnical barriers that limit the realization of AI's value, including inadequate integration, over-reliance on automation, and inherent system opacity. We propose that explainability, embedded as both a technical and an organizational capability, enables transparency, accountability, and adaptive collaboration among diverse stakeholder groups. Through a simulated organizational-engineering scenario, we show how XAI-grounded HAIC can improve decision quality, redistribute cognitive workload, and foster iterative learning. The analysis underscores that AI's true productivity potential lies not in automation alone, but in deliberate, human-centered integration that treats AI as a collaborative partner within resilient sociotechnical systems driving intelligent organizational engineering to boost productivity.

Keywords:

Organizational engineering, explainable artificial intelligence (XAI), human–AI collaboration (HAIC), productivity paradox, trust in AI

Supporting agencies

  • This publication is part of the project "Ethical, Responsible, and General Purpose Artificial Intelligence: Applications In Risk Scenarios" (IAFER), Exp.: TSI-100927-2023-1, funded through the creation of university-industry research programs (Enia Programs) aimed at the research, development, dissemination, and education of artificial intelligence, within the framework of the Recovery, Transformation and Resilience Plan, funded by the European Union (Next Generation EU) through the Ministry of Digital Transformation and the Civil Service. This work was also partially supported by the Knowledge Generation Projects of the Spanish Ministry of Science, Innovation, and Universities under project PID2023-150070NB-I00.

References

ABERCROMBIE, G., et al. (2024). A collaborative, human-centred taxonomy of AI, algorithmic, and automation harms. arXiv preprint arXiv:2407.01294.

ACEMOGLU, D. (2025). The simple macroeconomics of AI. Economic Policy, 40(121), 13–58.

ACHARYA, D. B., et al. (2025). Agentic AI: Autonomous intelligence for complex goals–a comprehensive survey. IEEE Access, 13, 18912–18936.

AFROOGH, S., et al. (2024). Trust in AI: progress, challenges, and future directions. Humanities and Social Sciences Communications, 11(1), 1–30.

ALAM, M. F., et al. (2024). From Automation to Augmentation: Redefining Engineering Design and Manufacturing in the Age of NextGen-AI. An MIT Exploration of Generative AI, March. https://doi.org/10.21428/e4baedd9.e39b392d.

ANNEPAKA, Y., & PAKRAY, P. (2025). Large language models: a survey of their development, capabilities, and applications. Knowledge and Information Systems, 67(3), 2967–3022.

ARRIETA, A. B., et al. (2020). Explainable Artificial Intelligence (XAI): Concepts, taxonomies, opportunities and challenges toward responsible AI. Information Fusion, 58, 82–115.

CHONG, N. S. T. (2025). The AI Productivity Paradox: Why Your AI-Powered Workday Isn't Making You Richer. UN Blog. https://c3.unu.edu/blog/the-ai-productivity-paradox-why-your-ai-powered-workday-isnt-making-you-richer. Accessed July 9, 2025.

FRAGIADAKIS, G., et al. (2024). Evaluating human-AI collaboration: A review and methodological framework. arXiv preprint arXiv:2407.19098.

FREIMAN, O., et al. (2025). ‘Opacity’ and ‘Trust’: From Concepts and Measurements to Public Policy. Philosophy & Technology, 38(1), 29.

GAFNI, et al. (2024). Objectivity by design: The impact of AI-driven approach on employees' soft skills evaluation. Information and Software Technology, 170, 107430.

GRUETZEMACHER, R., & WHITTLESTONE, J. (2022). The transformative potential of artificial intelligence. Futures, 135, 102884.

HÄHNEL, M., & HAUSWALD, R. (2025). Trust and Opacity in Artificial Intelligence: Mapping the Discourse. Philosophy & Technology, 38(3), 115.

HEMMER, P., et al. (2023). Human-AI collaboration: the effect of AI delegation on human task performance and task satisfaction. In Proceedings of the 28th International Conference on Intelligent User Interfaces (pp. 453–463).

HERRERA, F. (2025). Reflections and attentiveness on eXplainable Artificial Intelligence (XAI). The journey ahead from criticisms to human–AI collaboration. Information Fusion, 121, 103133.

HERRERA, F., & CALDERÓN, R. (2025). Opacity as a Feature, Not a Flaw: The LoBOX Governance Ethic for Role-Sensitive Explainability and Institutional Trust in AI. arXiv preprint arXiv:2505.20304.

HOLMSTRÖM, J., & CARROLL, N. (2025). How organizations can innovate with generative AI. Business Horizons, 68(5), 559–573.

KE, Z., et al. (2025). A survey of frontiers in LLM reasoning: Inference scaling, learning to reason, and agentic systems. arXiv preprint arXiv:2504.09037.

KRAUSE, F., et al. (2024). Managing human-AI collaborations within industry 5.0 scenarios via knowledge graphs: key challenges and lessons learned. Frontiers in Artificial Intelligence, 7, 1247712.

KULESA, A. C., et al. (2025). Productive Struggle: How Artificial Intelligence Is Changing Learning, Effort, and Youth Development in Education. Bellwether.

LOBO, J. L., & DEL SER, J. (2024). Can transformative AI shape a new age for our civilization? Navigating between speculation and reality. arXiv preprint arXiv:2412.08273.

MEHROTRA, S., et al. (2024). A systematic review on fostering appropriate trust in Human-AI interaction: Trends, opportunities and challenges. ACM Journal on Responsible Computing, 1(4), 1–45.

MOLLICK, E. (2024). Co-intelligence: Living and working with AI. Penguin.

MORRIS, M. R., et al. (2024). Position: Levels of AGI for operationalizing progress on the path to AGI. In Forty-first International Conference on Machine Learning.

NAUDÉ, W., et al. (2024). Artificial Intelligence: Economic Perspectives and Models. Cambridge University Press.

SENONER, J., et al. (2024). Explainable AI improves task performance in human–AI collaboration. Scientific Reports, 14(1), 31150.

SIMKUTE, A., et al. (2025). Ironies of generative AI: understanding and mitigating productivity loss in Human-AI interaction. International Journal of Human–Computer Interaction, 41(5), 2898–2919.

WANG, G., et al. (2025a). Hierarchical Reasoning Model. arXiv preprint arXiv:2506.21734.

WANG, L., et al. (2025b). "Good" XAI Design: For What? In Which Ways? In Proceedings of the Extended Abstracts of the CHI Conference on Human Factors in Computing Systems (pp. 1–13).

WEN, Y., et al. (2025). Trust and AI weight: human-AI collaboration in organizational management decision-making. Frontiers in Organizational Psychology, 3, 1419403.

WHITTLE, J. (2025). Does AI actually boost productivity? The evidence is murky. The Conversation. https://theconversation.com/does-ai-actually-boost-productivity-the-evidence-is-murky-260690

ZHANG, R., et al. (2025). The dark side of AI companionship: A taxonomy of harmful algorithmic behaviors in human–AI relationships. In Proceedings of the 2025 CHI Conference on Human Factors in Computing Systems (pp. 1–17).
