A curated list of awesome responsible machine learning resources.
Updated Jun 11, 2024
A comprehensive set of fairness metrics for datasets and machine learning models, explanations for these metrics, and algorithms to mitigate bias in datasets and models.
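Such toolkits are built around simple group-level metrics. As a minimal sketch of two of the most common ones (plain Python, not any particular library's API), statistical parity difference and disparate impact for a binary classifier can be computed directly from predictions and group membership:

```python
def positive_rate(y_pred, group, g):
    """Fraction of positive predictions within group g."""
    preds = [p for p, gr in zip(y_pred, group) if gr == g]
    return sum(preds) / len(preds)

def statistical_parity_difference(y_pred, group):
    """Positive-rate gap, unprivileged (0) minus privileged (1); 0 means parity."""
    return positive_rate(y_pred, group, 0) - positive_rate(y_pred, group, 1)

def disparate_impact(y_pred, group):
    """Positive-rate ratio, unprivileged / privileged; the common
    'four-fifths rule' flags values below 0.8."""
    return positive_rate(y_pred, group, 0) / positive_rate(y_pred, group, 1)
```

For example, if the privileged group receives positive predictions at rate 0.75 and the unprivileged group at rate 0.25, the statistical parity difference is -0.5 and the disparate impact ratio is about 0.33, well below the 0.8 threshold.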
A Python package to assess and improve fairness of machine learning models.
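One standard pre-processing mitigation such packages offer is reweighing (Kamiran & Calders): each example is weighted so that the protected attribute and the label become independent under the weighted distribution. A minimal sketch of the idea (not this package's actual API):

```python
from collections import Counter

def reweighing_weights(groups, labels):
    """Kamiran-Calders reweighing: weight each (group, label) cell by
    P(group) * P(label) / P(group, label), so group and label are
    independent under the weighted empirical distribution."""
    n = len(labels)
    pg = Counter(groups)                 # marginal counts of groups
    py = Counter(labels)                 # marginal counts of labels
    pgy = Counter(zip(groups, labels))   # joint counts
    return [pg[g] * py[y] / (n * pgy[(g, y)]) for g, y in zip(groups, labels)]
```

The resulting weights are then passed as sample weights to any downstream classifier; cells where the favorable label is under-represented for a group receive weights above 1.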
moDel Agnostic Language for Exploration and eXplanation
Responsible AI Toolbox is a suite of tools providing model and data exploration and assessment user interfaces and libraries that enable a better understanding of AI systems. These interfaces and libraries empower developers and stakeholders of AI systems to develop and monitor AI more responsibly and to take better data-driven actions.
A collection of classic and cutting-edge industry papers in recommendation, advertising, and search.
Gno: An interpreted, stack-based Go virtual machine to build succinct and composable apps + Gno.land: a blockchain for timeless code and fair open-source
Examples of techniques for training interpretable ML models, explaining ML models, and debugging ML models for accuracy, discrimination, and security.
Bias Auditing & Fair ML Toolkit
H2O.ai Machine Learning Interpretability Resources
An experimental platform for federated learning.
A curated list of awesome Fairness in AI resources
A curated list of trustworthy deep learning papers. Updated daily.
Code for reproducing our analysis in the paper titled: Image Cropping on Twitter: Fairness Metrics, their Limitations, and the Importance of Representation, Design, and Agency
Conformalized Quantile Regression
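Conformalized Quantile Regression (Romano et al., 2019) takes the lower and upper quantile predictions of any model and calibrates them on held-out data so the resulting interval has guaranteed coverage. A minimal sketch of the calibration step (plain Python, assuming quantile predictions are already available):

```python
import math

def cqr_adjustment(y_cal, lo_cal, hi_cal, alpha=0.1):
    """Compute the CQR calibration adjustment Q from conformity scores
    E_i = max(lo_i - y_i, y_i - hi_i) on a held-out calibration set.
    Q is the ceil((n+1)(1-alpha))-th smallest score."""
    scores = [max(lo - y, y - hi) for y, lo, hi in zip(y_cal, lo_cal, hi_cal)]
    n = len(scores)
    k = math.ceil((n + 1) * (1 - alpha))   # order-statistic index
    return sorted(scores)[min(k, n) - 1]

def cqr_interval(lo, hi, q):
    """Widen (or shrink, if q < 0) the raw quantile interval by q."""
    return (lo - q, hi + q)
```

Note that the adjustment can be negative: if the base quantile regressors are conservative, CQR tightens the interval rather than widening it.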
Python code for training fair logistic regression classifiers.
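One common formulation of fair logistic regression (a sketch of the general Zafar et al.-style idea, not this repository's code) penalizes the covariance between the sensitive attribute and the model's decision score during training:

```python
import numpy as np

def fair_logreg(X, y, s, lam=1.0, lr=0.1, epochs=500):
    """Logistic regression with a fairness penalty: alongside the
    log-loss, penalize the squared covariance between the sensitive
    attribute s and the decision score X @ w. y in {0, 1}."""
    n, d = X.shape
    w = np.zeros(d)
    s_c = s - s.mean()                        # centered sensitive attribute
    for _ in range(epochs):
        z = X @ w
        p = 1.0 / (1.0 + np.exp(-z))          # sigmoid
        grad_loss = X.T @ (p - y) / n         # log-loss gradient
        cov = s_c @ z / n                     # cov(s, decision score)
        grad_fair = 2.0 * cov * (X.T @ s_c) / n   # gradient of cov**2
        w -= lr * (grad_loss + lam * grad_fair)
    return w
```

With `lam=0` this is ordinary logistic regression; increasing `lam` shrinks the weights on features correlated with the sensitive attribute, trading accuracy for decision-boundary fairness.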
The LinkedIn Fairness Toolkit (LiFT) is a Scala/Spark library that enables the measurement of fairness in large scale machine learning workflows.
ST-SSL (STSSL): Spatio-Temporal Self-Supervised Learning for Traffic Flow Forecasting/Prediction
Code for using CDEP from the paper "Interpretations are useful: penalizing explanations to align neural networks with prior knowledge" https://arxiv.org/abs/1909.13584