Divyansh Rajat,
Akshita,
Jaspreet Kaur,
- Student, Department of Computer Science and Engineering, Baba Banda Singh Bahadur Engineering College, Fatehgarh Sahib, Punjab, India
- Student, Department of Computer Science and Engineering, Baba Banda Singh Bahadur Engineering College, Fatehgarh Sahib, Punjab, India
- Assistant Professor, Department of Computer Science and Engineering, Baba Banda Singh Bahadur Engineering College, Fatehgarh Sahib, Punjab, India
Abstract
The rapid development of artificial intelligence (AI) has transformed numerous industries, including healthcare, banking, and government, by providing novel solutions to complex problems. However, the growing integration of AI into critical decision-making processes raises serious ethical concerns, including bias, a lack of transparency, data privacy violations, and accountability gaps. Addressing these issues requires a systematic strategy that combines technical solutions, legal frameworks, and ethical standards. With an emphasis on fundamental principles such as fairness, accountability, transparency, privacy, and inclusion, this study offers a thorough analysis of ethical AI. It examines the methods and tools developed to ensure the responsible application of AI, including explainability techniques, fairness-enhancing algorithms, and privacy-preserving measures such as federated learning and differential privacy. The study also investigates AI governance tools such as AI Fairness 360, SHapley Additive exPlanations (SHAP), and Local Interpretable Model-agnostic Explanations (LIME), as well as federated learning frameworks that support ethical compliance. Despite these advances, challenges remain, including moral trade-offs, scalability problems, and inconsistent regulations. This study emphasizes the value of interdisciplinary cooperation among academics, policymakers, and business leaders in building robust, ethical AI systems. Future research avenues include improving AI interpretability, tackling socio-technical biases, and promoting international collaboration on AI ethics. Stakeholders seeking to align AI technologies with ethical norms may find this paper a useful resource for ensuring fairness, security, and accountability in AI-driven applications.
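Among the privacy-preserving measures surveyed, differential privacy is the most readily illustrated in code. The following minimal Python sketch (an illustration of the general technique, not an implementation drawn from the paper) releases a counting query under the Laplace mechanism; the function names, dataset, and `epsilon` value are all illustrative assumptions:

```python
import math
import random

def laplace_noise(scale, rng):
    """Draw one sample from Laplace(0, scale) via inverse-CDF sampling."""
    u = rng.random() - 0.5
    return -scale * math.copysign(1.0, u) * math.log(1.0 - 2.0 * abs(u))

def dp_count(records, predicate, epsilon, rng):
    """Differentially private count.

    A counting query has sensitivity 1 (adding or removing one record
    changes the count by at most 1), so Laplace noise with scale
    1/epsilon yields an epsilon-differentially-private release.
    """
    true_count = sum(1 for r in records if predicate(r))
    return true_count + laplace_noise(1.0 / epsilon, rng)

# Example: release "how many ages exceed 40" privately.
rng = random.Random(42)
ages = [23, 45, 31, 52, 38, 47, 29, 61]
noisy = dp_count(ages, lambda a: a > 40, epsilon=0.5, rng=rng)
```

A smaller `epsilon` means a stronger privacy guarantee but noisier answers; across many repeated releases the noise averages out to zero, so the noisy count is an unbiased estimate of the true count.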
Keywords: AI Governance, ethical AI, explainability, fairness, privacy-preserving AI, responsible AI
[This article belongs to International Journal of Information Security Engineering]
Divyansh Rajat, Akshita, Jaspreet Kaur. Ethical and Responsible AI: A Comprehensive Review of Principles, Methods, and Tools. International Journal of Information Security Engineering. 2026; 04(01):23-34. Available from: https://journals.stmjournals.com/ijise/article=2026/view=239281
References
- Cannarsa M. Ethics guidelines for trustworthy AI. In: DiMatteo LA, Janssen A, Ortolani P, de Elizalde F, Cannarsa M, Durovic M, editors. The Cambridge Handbook of Lawyering in the Digital Age. Cambridge: Cambridge University Press; 2021. p. 283–297. doi:10.1017/9781108936040.022.
- Canton H. Organisation for Economic Co-operation and Development—OECD. In: Europa Publications, editor. The Europa Directory of International Organizations 2021. 23rd ed. London: Routledge; 2021. p. 677–687. doi:10.4324/9781003179900-102.
- Russell SJ, Norvig P. Artificial Intelligence: A Modern Approach. 4th ed. Harlow: Pearson; 2021.
- Hagendorff T. The ethics of AI ethics: an evaluation of guidelines. Minds Mach. 2020;30(1):99–120. doi:10.1007/s11023-020-09517-8.
- Jobin A, Ienca M, Vayena E. The global landscape of AI ethics guidelines. Nat Mach Intell. 2019;1(9):389–399. doi:10.1038/s42256-019-0088-2.
- Fjeld J, Achten N, Hilligoss H, Nagy A, Srikumar M. Principled artificial intelligence: mapping consensus in ethical and rights-based approaches to principles for AI. SSRN Electron J. 2020. doi:10.2139/ssrn.3518482.
- Buolamwini J, Gebru T. Gender shades: intersectional accuracy disparities in commercial gender classification. In: Friedler SA, Wilson C, editors. Proceedings of the 1st Conference on Fairness, Accountability and Transparency. Proceedings of Machine Learning Research. Vol. 81. PMLR; 2018. p. 77–91. Available from: https://proceedings.mlr.press/v81/buolamwini18a.html
- Kamiran F, Calders T. Data preprocessing techniques for classification without discrimination. Knowl Inf Syst. 2012;33(1):1–33. doi:10.1007/s10115-011-0463-8.
- IBM. Artificial Intelligence (AI) Solutions [online]. IBM; 2026. Available from: https://www.ibm.com/solutions/artificial-intelligence
- Ribeiro MT, Singh S, Guestrin C. “Why should I trust you?”: explaining the predictions of any classifier. In: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining; 2016 Aug 13–17; San Francisco, CA, USA. New York (NY): Association for Computing Machinery; 2016. p. 1135–1144. doi:10.1145/2939672.2939778.
- Wachter S, Mittelstadt B, Russell C. Counterfactual explanations without opening the black box: automated decisions and the GDPR. SSRN Electron J. 2017;31:841. doi:10.2139/ssrn.3063289.
- Rudin C. Stop explaining black box machine learning models for high stakes decisions and use interpretable models instead. Nat Mach Intell. 2019;1(5):206–215. doi:10.1038/s42256-019-0048-x.
- European Parliament; Council of the European Union. Regulation (EU) 2016/679 of the European Parliament and of the Council of 27 April 2016 on the protection of natural persons with regard to the processing of personal data and on the free movement of such data, and repealing Directive 95/46/EC (General Data Protection Regulation). Off J Eur Union. 2016 May 4;59(L 119):1-88.
- Dwork C. Differential privacy: a survey of results. In: Agrawal M, Du D, Duan Z, Li A, editors. Theory and Applications of Models of Computation. TAMC 2008. Lecture Notes in Computer Science. Vol. 4978. Berlin, Heidelberg: Springer; 2008. p. 1–19. doi:10.1007/978-3-540-79228-4_1.
- McMahan B, Moore E, Ramage D, Hampson S, Aguera y Arcas B. Communication-efficient learning of deep networks from decentralized data. In: Singh A, Zhu J, editors. Proceedings of the 20th International Conference on Artificial Intelligence and Statistics. Proceedings of Machine Learning Research. 2017;54:1273–1282. Available from: https://proceedings.mlr.press/v54/mcmahan17a.html
- Gentry C. Fully homomorphic encryption using ideal lattices. In: Proceedings of the Forty-First Annual ACM Symposium on Theory of Computing. ACM; 2009. p. 169–178. doi:10.1145/1536414.1536440.
- Shahriari K, Shahriari M. IEEE standard review—ethically aligned design: a vision for prioritizing human wellbeing with artificial intelligence and autonomous systems. In: 2017 IEEE Canada International Humanitarian Technology Conference (IHTC); 2017 Jul 21–23; Toronto, ON, Canada. p. 197–201. doi:10.1109/IHTC.2017.8058187.
- Reisman D, Schultz J, Crawford K, Whittaker M. Algorithmic Impact Assessments: a Practical Framework for Public Agency Accountability. New York (NY): AI Now Institute; 2018. Available from: https://ainowinstitute.org/reports/ai-now-report-2018.pdf
- Raji ID, Buolamwini J. Actionable auditing: investigating the impact of publicly naming biased performance results of commercial AI products. In: Proceedings of the 2019 AAAI/ACM Conference on AI, Ethics, and Society; 2019 Jan 27–28; Honolulu, HI, USA. New York (NY): Association for Computing Machinery; 2019. p. 429–435. doi:10.1145/3306618.3314244.
- Papernot N, McDaniel P, Wu X, Jha S, Swami A. Distillation as a defense to adversarial perturbations against deep neural networks. 2016 IEEE Symposium on Security and Privacy (SP), San Jose, CA, USA. 2016. p. 582–597. doi:10.1109/SP.2016.41.
- Carlini N, Wagner D. Adversarial examples are not easily detected: bypassing ten detection methods. In: Proceedings of the 10th ACM Workshop on Artificial Intelligence and Security; 2017 Nov 3; Dallas, TX, USA. New York (NY): Association for Computing Machinery; 2017. p. 3–14. doi:10.1145/3128572.3140444.
- OpenAI. Security and privacy at OpenAI [online]. OpenAI; 2022. Available from: https://openai.com/security-and-privacy/
- Hardt M, Price E, Srebro N. Equality of opportunity in supervised learning. In: Proceedings of the 30th Conference on Neural Information Processing Systems (NeurIPS 2016); 2016 Dec 5–10; Barcelona, Spain. Red Hook (NY): Curran Associates, Inc.; 2016. p. 3315–3323.
- Zemel R, Wu Y, Swersky K, Pitassi T, Dwork C. Learning fair representations. In: Dasgupta S, McAllester D, editors. Proceedings of the 30th International Conference on Machine Learning; 2013 Jun 17–19; Atlanta, GA, USA. Proceedings of Machine Learning Research. 2013;28(3):325–333. Available from: https://proceedings.mlr.press/v28/zemel13.html
- Molnar C. Interpretable Machine Learning: A Guide for Making Black Box Models Explainable. 2019. Available from: https://christophm.github.io/interpretable-ml-book/
- Michael K. Editorial IEEE Transactions on Technology and Society editorial board profiles. IEEE Trans Technol Soc. 2024;5(2):119–148. doi:10.1109/TTS.2024.3423208.
- Selbst AD, Boyd D, Friedler SA, Venkatasubramanian S, Vertesi J. Fairness and abstraction in sociotechnical systems. In: Proceedings of the Conference on Fairness, Accountability, and Transparency (FAT* ‘19); 2019; Atlanta, GA, USA. New York: Association for Computing Machinery; 2019. p. 59–68. doi:10.1145/3287560.3287598.
- Barocas S, Hardt M, Narayanan A. Fairness and machine learning: Limitations and opportunities. Cambridge, MA: MIT Press; 2023.
- Bird S, Dudík M, Edgar R, Horn B, Lutz R, Milan V, et al. Fairlearn: A toolkit for assessing and improving fairness in AI. Microsoft; 2020. Available from: https://www.microsoft.com/en-us/research/wp-content/uploads/2020/05/Fairlearn_WhitePaper-2020-09-22.pdf
- Bantilan N. Themis-ml: A fairness-aware machine learning interface for end-to-end discrimination discovery and mitigation. J Technol Hum Serv. 2018;36(1):15–30. doi:10.1080/15228835.2017.1416512.
- Nori H, Jenkins S, Koch P, Caruana R. InterpretML: A unified framework for machine learning interpretability. [Preprint]. 2019. arXiv:1909.09223. doi:10.48550/arXiv.1909.09223.
- Nicolae MI, Sinn M, Tran MN, Buesser B, Rawat A, Wistuba M, et al. Adversarial robustness toolbox v1.0.0. [Preprint]. 2018. arXiv:1807.01069. doi:10.48550/arXiv.1807.01069
- Lacour C, Massart P, Rivoirard V. Estimator selection: A new method with applications to kernel density estimation. Sankhya A. 2017;79(2):298–335. doi:10.1007/s13171-017-0107-5.
- Melis M, Demontis A, Pintor M, Sotgiu A, Biggio B. SecML: A Python library for secure and explainable machine learning. [Preprint]. 2019. arXiv:1912.10013. doi:10.48550/arXiv.1912.10013.
- Zhang D, Maslej N, Brynjolfsson E, Etchemendy J, Lyons T, Manyika J, et al. The AI Index 2022 annual report. [Preprint]. 2022. arXiv:2205.03468. doi:10.48550/arXiv.2205.03468.
- Norouzi K, Ghodsi A, Argani P, Andi PA, Hassani H. Innovative artificial intelligence tools: exploring the future of healthcare through IBM Watson’s potential applications. In: Nguyen TA, editor. Sensor Networks for Smart Hospitals. New York: Elsevier; 2025. p. 573–588. doi:10.1016/B978-0-443-36370-2.00028-1.
- Powles J, Hodson H. Google DeepMind and healthcare in an age of algorithms. Health Technol. 2017;7(4):351–367. doi:10.1007/s12553-017-0179-1.
- Car J, Sheikh A, Wicks P, Williams MS. Beyond the hype of big data and artificial intelligence: Building foundations for knowledge and wisdom. BMC Med. 2019;17(1):143. doi:10.1186/s12916-019-1382-x.
- Weinberg L. Rethinking fairness: An interdisciplinary survey of critiques of hegemonic ML fairness approaches. J Artif Intell Res. 2022;74:75–109. doi:10.1613/jair.1.13196.
- ZestFinance. Zest AI honored in Fast Company's 2021 Next Big Things in Tech Awards [online]. PR Newswire; 2021. Available from: https://www.prnewswire.com/news-releases/zest-ai-honored-in-fast-companys-2021-next-big-things-in-tech-awards-301428185.html
- Dastin J. Amazon scraps secret AI recruiting tool that showed bias against women. In: Martin K, editor. Ethics of Data and Analytics: Concepts and Cases. New York: Auerbach Publications; 2022. p. 296–299. doi:10.1201/9781003278290-44.
- Modgil S. How AI startup Pymetrics wants to make hiring bias free [online]. People Matters Global; 2018. Available from: https://sea.peoplemattersglobal.com/article/hr-technology/how-ai-startup-pymetrics-wants-to-make-hiring-bias-free-20022
- Angwin J, Larson J. Bias in criminal risk scores is mathematically inevitable, researchers say. In: Martin K, editor. Ethics of Data and Analytics: Concepts and Cases. New York: Auerbach Publications; 2022. p. 265–267. doi:10.1201/9781003278290-38.
- Rock A, Jebaseeli TJ. A content moderation system for YouTube using hybrid deep neural networks. AIP Conf Proc. 2025;3297(1):090035. doi:10.1063/5.0286780.
- Setiawan R, Ponnam VS, Sengan S, Anam M, Subbiah C, Phasinam K, et al. Certain investigation of fake news detection from Facebook and Twitter using artificial intelligence approach. Wirel Pers Commun. 2022;127(2):1737–1762. doi:10.1007/s11277-021-08720-9.
- Borrego-Díaz J, Galán-Páez J. Explainable artificial intelligence in data science: From foundational issues towards socio-technical considerations. Minds Mach. 2022;32(3):485–531. doi:10.1007/s11023-022-09603-z.
- Mehra A. Hybrid AI models: Integrating symbolic reasoning with deep learning for complex decision-making. J Emerg Technol Innov Res. 2024;11:f693–f695.
- Chinnaraju A. Explainable AI (XAI) for trustworthy and transparent decision-making: A theoretical framework for AI interpretability. World J Adv Eng Technol Sci. 2025;14(3):170–207. doi:10.30574/wjaets.2025.14.3.0106.
- Dieterle E, Dede C, Walker M. The cyclical ethical effects of using artificial intelligence in education. AI Soc. 2024;39:633–643. doi:10.1007/s00146-022-01497-w.
- Martin K. Google research: who is responsible for ethics of AI? In: Martin K, editor. Ethics of Data and Analytics: Concepts and Cases. New York: Auerbach Publications; 2022. p. 434–446. doi:10.1201/9781003278290.
- Shahriari K, Shahriari M. IEEE standard review – Ethically aligned design: A vision for prioritizing human wellbeing with artificial intelligence and autonomous systems. 2017 IEEE Canada International Humanitarian Technology Conference (IHTC), Toronto, ON, Canada. 2017. p. 197–201. doi:10.1109/IHTC.2017.8058187.
- Butt J. Analytical study of the world’s first EU artificial intelligence (AI) act. Int J Res Publ Rev. 2024;5(3):7343–7364.
- White House. Blueprint for an AI Bill of Rights: Making automated systems work for the American people. Nimble Books; 2022.
- Van Norren DE. The ethics of artificial intelligence, UNESCO and the African Ubuntu perspective. J Inf Commun Ethics Soc. 2023;21:112–128. doi:10.1108/JICES-04-2022-0037.
- Rolnick D, Donti PL, Kaack LH, Kochanski K, Lacoste A, Sankaran K, et al. Tackling climate change with machine learning. ACM Comput Surv. 2022;55:1–96. doi:10.1145/3485128.
- Efe A. A review on risk reduction potentials of artificial intelligence in humanitarian aid sector. İnsan ve Sosyal Bilimler Dergisi. 2022;5(2):184–205. doi:10.53048/johass.1189814.
- Kidwai-Khan F, Wang R, Skanderson M, Brandt CA, Fodeh S, Womack JA. A roadmap to artificial intelligence (AI): Methods for designing and building AI-ready data to promote fairness. J Biomed Inform. 2024;154:104654. doi:10.1016/j.jbi.2024.104654.
- Goodall NJ. Machine ethics and automated vehicles. In: Meyer G, Beiker S, editors. Road Vehicle Automation. Cham: Springer; 2014. p. 93–102. doi:10.1007/978-3-319-05990-7_9.
- Borgesano F, De Maio A, Laghi P, Musmanno R. Artificial intelligence and justice: A systematic literature review and future research perspectives on Justice 5.0. Eur J Innov Manag. 2025;28(11):349–385. doi:10.1108/EJIM-01-2025-0117.
- Scharre P, Lamberth M. Artificial intelligence and arms control. [Preprint]. 2022. arXiv:2211.00065. doi:10.48550/arXiv.2211.00065.
- Unver MB. AI governance: Compromising democracy or democratising AI? SSRN Electron J. 2024. doi:10.2139/ssrn.4913658.
- Gerdes A. A participatory data-centric approach to AI ethics by design. Appl Artif Intell. 2022;36(1):2009222. doi:10.1080/08839514.2021.2009222.
- Hu B, Rong H, Tay J. Is decentralized artificial intelligence governable? Towards machine sovereignty and human symbiosis. 2025 Jan 9.

International Journal of Information Security Engineering
| Volume | 04 |
| Issue | 01 |
| Received | 11/04/2025 |
| Accepted | 18/06/2025 |
| Published | 26/03/2026 |
| Publication Time | 349 Days |