Human-in-the-Loop AI in HR Decision-Making: Insights from Big 4 AI Governance Reports

Year : 2026 | Volume : 16 | Issue : 01 | Page : 43–50
By

  1. Laxmee Vachher, Associate Professor, School of Management, Babu Banarasi Das University, Lucknow, Uttar Pradesh, India
  2. Shyamali Dubey, Associate Professor, School of Management, Babu Banarasi Das University, Lucknow, Uttar Pradesh, India

Abstract

The integration of artificial intelligence (AI) into human resource (HR) decision-making has transformed recruitment, performance evaluation, and talent management. However, biases embedded in AI-driven HR systems present significant ethical and operational challenges. Human-in-the-Loop (HITL) AI offers a hybrid approach that combines AI efficiency with human oversight to enhance fairness and accountability. This paper examines HITL AI in HR decision-making through a qualitative analysis of AI governance reports from the Big 4 consulting firms. Findings indicate that HITL AI enhances fairness and compliance in HR decisions but faces challenges in scalability, workforce acceptance, and regulatory complexity. The study provides insights into governance frameworks and best practices for balancing AI efficiency with human oversight. Additionally, this paper builds upon prior research on AI bias in HR analytics by extending the discussion towards practical implementations of HITL frameworks and assessing their effectiveness through industry insights.
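The core HITL pattern described above, where the AI system handles routine cases and a human reviewer is brought in wherever oversight matters most, can be illustrated with a minimal sketch. This is not an implementation from the paper or from any Big 4 framework; the `confidence` field, the threshold value, and the routing labels are all illustrative assumptions.

```python
from dataclasses import dataclass

@dataclass
class Screening:
    candidate_id: str
    ai_score: float    # model's suitability score in [0, 1] (hypothetical)
    confidence: float  # model's self-reported confidence in [0, 1] (hypothetical)

def route(s: Screening, threshold: float = 0.85) -> str:
    """Human-in-the-loop routing: only high-confidence AI recommendations
    proceed automatically; all other cases are escalated to a human
    reviewer, preserving accountability for ambiguous decisions."""
    if s.confidence >= threshold:
        return "auto"          # AI recommendation stands, logged for audit
    return "human_review"      # low confidence -> mandatory human oversight

screenings = [
    Screening("c-001", ai_score=0.91, confidence=0.95),
    Screening("c-002", ai_score=0.40, confidence=0.60),
]
routes = [route(s) for s in screenings]
print(routes)  # ['auto', 'human_review']
```

In practice the escalation criterion would also incorporate fairness signals (e.g. flagging decisions for protected groups for review), not confidence alone; the single-threshold rule here only illustrates the division of labour between the model and the human reviewer.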

Keywords: AI governance, human-in-the-loop AI, HR analytics, Big 4 consulting, algorithmic bias, ethical AI, explainable AI (XAI)

[This article belongs to Current Trends in Information Technology]

How to cite this article:
Laxmee Vachher, Shyamali Dubey. Human-in-the-Loop AI in HR Decision-Making: Insights from Big 4 AI Governance Reports. Current Trends in Information Technology. 2026; 16(01):43-50.
How to cite this URL:
Laxmee Vachher, Shyamali Dubey. Human-in-the-Loop AI in HR Decision-Making: Insights from Big 4 AI Governance Reports. Current Trends in Information Technology. 2026; 16(01):43-50. Available from: https://journals.stmjournals.com/ctit/article=2026/view=236628



Regular Issue Subscription Review Article
Volume 16
Issue 01
Received 15/09/2025
Accepted 20/09/2025
Published 07/02/2026
Publication Time 145 Days
