AI GOVERNANCE REFERENCES

COMPREHENSIVE COLLECTION OF ACADEMIC PAPERS, INDUSTRY REPORTS, AND REGULATORY DOCUMENTS ON TRUSTWORTHY AI AND RESPONSIBLE AI DEVELOPMENT.

ALL REFERENCES (12)

FEATURED

AI FAIRNESS 360: AN EXTENSIBLE TOOLKIT FOR DETECTING, UNDERSTANDING, AND MITIGATING UNWANTED ALGORITHMIC BIAS

Comprehensive toolkit for detecting and mitigating algorithmic bias in machine learning models.

AUTHORS: Rachel K. E. Bellamy, Kuntal Dey, Michael Hind, Samuel C. Hoffman, Stephanie Houde, Kalapriya Kannan, Pranay Lohia, Jacquelyn Martino, Sameep Mehta, Aleksandra Mojsilović, Seema Nagar, Karthikeyan Natesan Ramamurthy, John Richards, Diptikalyan Saha, Prasanna Sattigeri, Moninder Singh, Kush R. Varshney, Yunfeng Zhang
JOURNAL: IBM Journal of Research and Development
YEAR: 2019
DOI: 10.1147/JRD.2019.2942287
IMPACT: High | CATEGORY: Bias Detection
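
A minimal sketch of the AIF360 workflow described above, assuming the aif360 and pandas packages are installed; the column names, toy data, and group definitions are illustrative:

    import pandas as pd
    from aif360.datasets import BinaryLabelDataset
    from aif360.metrics import BinaryLabelDatasetMetric
    from aif360.algorithms.preprocessing import Reweighing

    # Toy data: 'sex' is the protected attribute, 'label' the binary outcome.
    df = pd.DataFrame({
        "sex":   [0, 0, 0, 1, 1, 1],
        "score": [0.2, 0.5, 0.9, 0.4, 0.6, 0.8],
        "label": [0, 0, 1, 0, 1, 1],
    })
    dataset = BinaryLabelDataset(
        df=df, label_names=["label"], protected_attribute_names=["sex"]
    )

    privileged = [{"sex": 1}]
    unprivileged = [{"sex": 0}]

    # Detect bias: a disparate impact below ~0.8 is a common warning threshold.
    metric = BinaryLabelDatasetMetric(
        dataset, privileged_groups=privileged, unprivileged_groups=unprivileged
    )
    print("Disparate impact:", metric.disparate_impact())

    # Mitigate bias: Reweighing adjusts instance weights before training.
    rw = Reweighing(unprivileged_groups=unprivileged, privileged_groups=privileged)
    dataset_transf = rw.fit_transform(dataset)

Reweighing is only one of the pre-, in-, and post-processing mitigation algorithms the toolkit ships.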
FEATURED

FAIRNESS IN MACHINE LEARNING: A SURVEY

Comprehensive survey of fairness definitions, metrics, and mitigation strategies in machine learning.

AUTHORS: Solon Barocas, Moritz Hardt, Arvind Narayanan
JOURNAL: ACM Computing Surveys
YEAR: 2019
DOI: 10.1145/3236386.3242940
IMPACT: High | CATEGORY: AI Governance
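
One definition the fairness literature covers, statistical (demographic) parity, can be checked in a few lines; a sketch using NumPy, with illustrative arrays:

    import numpy as np

    # Predictions and protected-group membership for the same individuals
    # (illustrative values).
    y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
    group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # 0 = unprivileged, 1 = privileged

    # Statistical parity difference: P(Y_hat=1 | A=0) - P(Y_hat=1 | A=1);
    # zero means both groups receive positive predictions at the same rate.
    rate_unpriv = y_pred[group == 0].mean()
    rate_priv   = y_pred[group == 1].mean()
    print("Statistical parity difference:", rate_unpriv - rate_priv)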

EXPLAINABLE AI: FROM BLACK BOX TO GLASS BOX

Critical analysis of explainable AI methods and their importance for trustworthy AI systems.

AUTHORS: Cynthia Rudin
JOURNAL: Journal of the Royal Statistical Society: Series A
YEAR: 2019
DOI: 10.1111/rssa.12386
IMPACT: High | CATEGORY: Explainability
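
In the spirit of the glass-box argument, an inherently interpretable model exposes its reasoning directly rather than requiring post-hoc explanation; a minimal scikit-learn sketch (the dataset and the sparsity setting are illustrative):

    from sklearn.datasets import load_breast_cancer
    from sklearn.linear_model import LogisticRegression

    # A sparse linear model is a classic glass-box choice: each nonzero
    # weight can be read as that feature's contribution to the log-odds.
    X, y = load_breast_cancer(return_X_y=True, as_frame=True)
    model = LogisticRegression(
        penalty="l1", solver="liblinear", C=0.1, max_iter=1000
    ).fit(X, y)

    for name, w in zip(X.columns, model.coef_[0]):
        if w != 0.0:
            print(f"{name}: {w:+.3f}")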

A SURVEY ON BIAS AND FAIRNESS IN MACHINE LEARNING

Comprehensive survey of bias types, detection methods, and fairness metrics in machine learning.

AUTHORS: Ninareh Mehrabi, Fred Morstatter, Nripsuta Saxena, Kristina Lerman, Aram Galstyan
JOURNAL: ACM Computing Surveys
YEAR: 2021
DOI: 10.1145/3457607
IMPACT: High | CATEGORY: Bias Detection
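
Among the group-fairness metrics such surveys catalogue, the true-positive-rate gap underlying equal opportunity is straightforward to compute; a sketch with illustrative arrays:

    import numpy as np

    y_true = np.array([1, 1, 0, 1, 1, 0, 1, 0])
    y_pred = np.array([1, 0, 0, 1, 1, 0, 0, 0])
    group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])  # protected attribute

    def tpr(y_t, y_p):
        # True positive rate: P(Y_hat=1 | Y=1).
        positives = y_t == 1
        return y_p[positives].mean()

    # Equal opportunity asks for equal TPR across groups;
    # the gap measures the violation.
    gap = (tpr(y_true[group == 0], y_pred[group == 0])
           - tpr(y_true[group == 1], y_pred[group == 1]))
    print("TPR gap:", gap)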

THE MYTHOS OF MODEL INTERPRETABILITY

Critical examination of what "interpretability" means in machine learning, arguing that the term conflates several distinct notions, and of its role in building trustworthy AI systems.

AUTHORS: Zachary C. Lipton
JOURNAL: Communications of the ACM
YEAR: 2018
DOI: 10.1145/3233231
IMPACT: Medium | CATEGORY: Explainability
FEATURED

TOWARD TRUSTWORTHY AI DEVELOPMENT: MECHANISMS FOR SUPPORTING VERIFIABLE CLAIMS

Multi-institution report proposing institutional, software, and hardware mechanisms for making verifiable claims about AI development.

AUTHORS: Miles Brundage, Shahar Avin, Jasmine Wang, Haydn Belfield, Gretchen Krueger, Gillian Hadfield, Heidy Khlaaf, Jingying Yang, Helen Toner, Ruth Fong, Tegan Maharaj, Pang Wei Koh, Sara Hooker, Jade Leung, Andrew Trask, Emma Bluemke, Jonathan Lebensold, Cullen O'Keefe, Mark Koren, Théo Ryffel, JB Rubinovitz, Tamay Besiroglu, Federica Carugati, Jack Clark, Peter Eckersley, Sarah de Haas, Maritza Johnson, Ben Laurie, Alex Ingerman, Igor Krawczuk, Amanda Askell, et al.
JOURNAL: arXiv preprint
YEAR: 2020
DOI: 10.48550/arXiv.2004.07213
IMPACT: High | CATEGORY: AI Governance
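
One concrete mechanism in the report's spirit is an audit trail over released model artifacts: hashing a released file lets third parties verify that the artifact they audit is the one that was deployed. The sketch below appends a fingerprint to a log; the file path, helper name, and log format are assumptions for illustration, not taken from the report:

    import hashlib
    import json
    import time
    from pathlib import Path

    def record_artifact(path: str, log_file: str = "audit_log.jsonl") -> str:
        """Append a timestamped SHA-256 fingerprint of a model artifact to a log."""
        digest = hashlib.sha256(Path(path).read_bytes()).hexdigest()
        entry = {"artifact": path, "sha256": digest, "recorded_at": time.time()}
        with open(log_file, "a") as f:
            f.write(json.dumps(entry) + "\n")
        return digest

    # Anyone holding the same file can recompute the hash and compare it with
    # the logged value to verify the claim "this is the audited model".
    # record_artifact("model.pkl")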

MODEL CARDS FOR MODEL REPORTING

Standardized framework for documenting machine learning models with transparency and accountability.

AUTHORS: Margaret Mitchell, Simone Wu, Andrew Zaldivar, Parker Barnes, Lucy Vasserman, Ben Hutchinson, Elena Spitzer, Inioluwa Deborah Raji, Timnit Gebru
JOURNAL: Proceedings of the Conference on Fairness, Accountability, and Transparency
YEAR: 2019
DOI: 10.1145/3287560.3287596
IMPACT: High | CATEGORY: Model Provenance
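
The paper organizes a model card into nine sections; a sketch of that structure as a Python dataclass, with illustrative field contents:

    from dataclasses import dataclass, field, asdict
    import json

    @dataclass
    class ModelCard:
        # The nine sections proposed by Mitchell et al. (2019).
        model_details: dict = field(default_factory=dict)
        intended_use: dict = field(default_factory=dict)
        factors: dict = field(default_factory=dict)
        metrics: dict = field(default_factory=dict)
        evaluation_data: dict = field(default_factory=dict)
        training_data: dict = field(default_factory=dict)
        quantitative_analyses: dict = field(default_factory=dict)
        ethical_considerations: dict = field(default_factory=dict)
        caveats_and_recommendations: dict = field(default_factory=dict)

    card = ModelCard(
        model_details={"name": "credit-scorer-v2", "type": "gradient boosting"},
        intended_use={"primary": "loan pre-screening", "out_of_scope": "hiring"},
        metrics={"accuracy": 0.91, "tpr_gap_by_sex": 0.04},
    )
    print(json.dumps(asdict(card), indent=2))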

DATASHEETS FOR DATASETS

Standardized documentation framework for datasets to improve transparency and accountability.

AUTHORS: Timnit Gebru, Jamie Morgenstern, Briana Vecchione, Jennifer Wortman Vaughan, Hanna Wallach, Hal Daumé III, Kate Crawford
JOURNAL: Communications of the ACM
YEAR: 2021
DOI: 10.1145/3458723
IMPACT: High | CATEGORY: Data Governance
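
Datasheets are organized around seven stages of a dataset's lifecycle; the sketch below checks a draft datasheet against those section names (the helper and example answers are illustrative):

    # The seven sections of Gebru et al.'s datasheet, by dataset lifecycle stage.
    DATASHEET_SECTIONS = (
        "motivation",
        "composition",
        "collection_process",
        "preprocessing_cleaning_labeling",
        "uses",
        "distribution",
        "maintenance",
    )

    def missing_sections(datasheet: dict) -> list[str]:
        """Return the lifecycle sections a datasheet has not yet answered."""
        return [s for s in DATASHEET_SECTIONS if not datasheet.get(s)]

    draft = {
        "motivation": "Benchmark for loan-default prediction research.",
        "composition": "120k anonymized applications, 2015-2020.",
    }
    print("Still to document:", missing_sections(draft))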

A UNIFIED APPROACH TO INTERPRETING MODEL PREDICTIONS

Introduction of SHAP (SHapley Additive exPlanations) for model interpretability.

AUTHORS: Scott M. Lundberg, Su-In Lee
JOURNAL: Advances in Neural Information Processing Systems
YEAR: 2017
DOI: 10.48550/arXiv.1705.07874
IMPACT: High | CATEGORY: Explainability
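
A minimal SHAP sketch, assuming the shap and scikit-learn packages are installed; the model and dataset are illustrative:

    import shap
    from sklearn.datasets import load_diabetes
    from sklearn.ensemble import RandomForestRegressor

    X, y = load_diabetes(return_X_y=True, as_frame=True)
    model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

    # TreeExplainer computes Shapley values efficiently for tree ensembles.
    explainer = shap.TreeExplainer(model)
    shap_values = explainer.shap_values(X.iloc[:100])

    # Each row's prediction decomposes into per-feature contributions that,
    # together with the base value, sum to the model output.
    shap.summary_plot(shap_values, X.iloc[:100])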

"WHY SHOULD I TRUST YOU?": EXPLAINING THE PREDICTIONS OF ANY CLASSIFIER

Introduction of LIME (Local Interpretable Model-agnostic Explanations) for model interpretability.

AUTHORS: Marco Tulio Ribeiro, Sameer Singh, Carlos Guestrin
JOURNAL: Proceedings of the 22nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining
YEAR: 2016
DOI: 10.1145/2939672.2939778
IMPACT: High | CATEGORY: Explainability
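
A minimal LIME sketch, assuming the lime and scikit-learn packages are installed; the model and dataset are illustrative:

    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.datasets import load_breast_cancer
    from sklearn.ensemble import RandomForestClassifier

    data = load_breast_cancer()
    model = RandomForestClassifier(random_state=0).fit(data.data, data.target)

    explainer = LimeTabularExplainer(
        data.data,
        feature_names=data.feature_names,
        class_names=data.target_names,
        mode="classification",
    )

    # Explain one prediction by fitting a local, interpretable surrogate
    # model around the instance.
    exp = explainer.explain_instance(
        data.data[0], model.predict_proba, num_features=5
    )
    print(exp.as_list())  # top features with their local weights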
FEATURED

THE AI ACT: A GUIDE TO THE EU'S ARTIFICIAL INTELLIGENCE REGULATION

The European Union's comprehensive, risk-based regulatory framework for AI systems.

AUTHORS: European Commission
JOURNAL: Official Journal of the European Union
YEAR: 2024
REFERENCE: Regulation (EU) 2024/1689
IMPACT: High | CATEGORY: Regulation
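
The Act's central device is a risk-based tiering of AI systems (prohibited practices, high-risk, limited-risk with transparency duties, minimal-risk). The sketch below caricatures that triage; the category keywords are illustrative simplifications, not legal guidance:

    # Illustrative simplification of the EU AI Act's four risk tiers;
    # the keywords are examples, not a legal taxonomy.
    PROHIBITED = {"social scoring by public authorities", "subliminal manipulation"}
    HIGH_RISK = {"credit scoring", "hiring", "medical device", "law enforcement"}
    LIMITED_RISK = {"chatbot", "deepfake generation"}  # transparency duties apply

    def risk_tier(use_case: str) -> str:
        if use_case in PROHIBITED:
            return "prohibited"
        if use_case in HIGH_RISK:
            return "high-risk: conformity assessment and documentation required"
        if use_case in LIMITED_RISK:
            return "limited-risk: disclose AI use to end users"
        return "minimal-risk: voluntary codes of conduct"

    print(risk_tier("credit scoring"))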

RESPONSIBLE AI: A FRAMEWORK FOR GOVERNING MACHINE LEARNING SYSTEMS

Google's framework for developing responsible AI systems, emphasizing fairness, safety, and privacy.

AUTHORS: Google AI
JOURNAL: Google AI Blog
YEAR: 2021
IMPACT: Medium | CATEGORY: AI Governance

ADDITIONAL RESOURCES

EXPLORE ADDITIONAL RESOURCES FOR AI GOVERNANCE, TRUSTWORTHY AI, AND RESPONSIBLE AI DEVELOPMENT.

ACADEMIC CONFERENCES

  • FACCT (ACM CONFERENCE ON FAIRNESS, ACCOUNTABILITY, AND TRANSPARENCY)
  • AIES (AAAI/ACM CONFERENCE ON AI, ETHICS, AND SOCIETY)
  • NEURIPS (CONFERENCE ON NEURAL INFORMATION PROCESSING SYSTEMS)
  • ICML (INTERNATIONAL CONFERENCE ON MACHINE LEARNING)

INDUSTRY ORGANIZATIONS

  • PARTNERSHIP ON AI
  • AI NOW INSTITUTE
  • ALGORITHMIC JUSTICE LEAGUE
  • CENTER FOR HUMAN-COMPATIBLE AI

REGULATORY AND POLICY FRAMEWORKS

  • EUROPEAN UNION AI ACT
  • NIST AI RISK MANAGEMENT FRAMEWORK
  • OECD AI PRINCIPLES
  • UNESCO RECOMMENDATION ON THE ETHICS OF ARTIFICIAL INTELLIGENCE

READY TO IMPLEMENT TRUSTWORTHY AI?

USE THESE RESEARCH-BACKED APPROACHES TO BUILD FAIR, TRANSPARENT, AND ACCOUNTABLE AI SYSTEMS WITH FAIRMIND.