Trustworthy and Responsible AI for Human-Centric Autonomous Decision-Making Systems

Authors: Farzaneh Dehghani, Mahsa Dibaji, Fahim Anzum, Lily Dey, Alican Basdemir, Sayeh Bayat, Jean-Christophe Boucher, Steve Drew, Sarah Elaine Eaton, Richard Frayne, Gouri Ginde, Ashley D. Harris, Yani Ioannou, Catherine A Lebel, John T. Lysack, Leslie Salgado, Emma A.M. Stanley, Roberto Souza, Ronnie de Souza Santos, Lana Wells, Tyler Williamson, Matthias Wilms, Mark Ungrin, Marina Gavrilova, Mariana Bento

TMLR 2025

Reproducibility Variable | Result | LLM Response
Research Type | Theoretical | "In this paper, we review and discuss the intricacies of AI biases, definitions, methods of detection and mitigation, and metrics for evaluating bias. We also discuss open challenges with regard to the trustworthiness and widespread application of AI across diverse domains of human-centric decision-making, as well as guidelines to foster Responsible and Trustworthy AI models."
Researcher Affiliation | Academia | Farzaneh Dehghani (EMAIL), Department of Biomedical Engineering, University of Calgary; Mahsa Dibaji (EMAIL), Department of Electrical and Software Engineering, University of Calgary; Fahim Anzum (EMAIL), Department of Computer Science, University of Calgary
Pseudocode | No | The paper includes diagrams such as Figure 1 and Figure 2 that summarize concepts and processes, but no structured pseudocode or algorithm blocks are present.
Open Source Code | No | The paper mentions third-party open-source toolkits for bias detection and mitigation, such as the Fairlearn Python library and Google’s What-If Tool (WIT), but it does not state that the authors release any original source code for the methodology described in the paper.
Open Datasets | No | As a review, the paper presents no original experimental research that uses a specific dataset, so it provides no dataset access information for experiments of its own.
Dataset Splits | No | As a review with no original experiments, the paper provides no information on training, validation, or test splits.
Hardware Specification | No | As a review with no original experiments, the paper provides no hardware specifications for running experiments.
Software Dependencies | No | As a review with no original experiments, the paper lists no versioned software dependencies for an implementation of its own.
Experiment Setup | No | As a review with no original experiments, the paper provides no experimental setup details such as hyperparameters or training configurations.
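The assessment above cites Fairlearn and Google's What-If Tool as examples of bias-detection toolkits discussed in the paper. As a minimal illustrative sketch (not code from the paper), the snippet below computes the demographic parity difference, one of the group-fairness metrics such toolkits report, by hand on invented synthetic predictions: the gap between the highest and lowest positive-prediction rates across sensitive groups.

```python
# Demographic parity difference: the gap in positive-prediction (selection)
# rates between sensitive groups. Fairlearn exposes a metric of this name;
# here it is computed from scratch on synthetic, illustrative data.

def selection_rate(preds):
    """Fraction of predictions that are the favourable outcome (1)."""
    return sum(preds) / len(preds)

def demographic_parity_difference(y_pred, groups):
    """Max minus min selection rate across sensitive groups."""
    by_group = {}
    for pred, group in zip(y_pred, groups):
        by_group.setdefault(group, []).append(pred)
    rates = [selection_rate(preds) for preds in by_group.values()]
    return max(rates) - min(rates)

y_pred = [1, 1, 1, 0, 0, 0, 1, 0]                   # model decisions (1 = favourable)
groups = ["a", "a", "a", "a", "b", "b", "b", "b"]   # sensitive attribute per sample

# Group "a" selection rate is 0.75, group "b" is 0.25, so the gap is 0.5.
print(demographic_parity_difference(y_pred, groups))  # → 0.5
```

A value of 0 indicates both groups receive favourable predictions at the same rate; the larger the gap, the stronger the disparity a practitioner would flag for mitigation.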