Bias Detection Guide

Learn how to detect and analyze bias in your AI models using statistical methods, custom rules, and advanced fairness metrics.

What is Bias Detection?

Bias detection in AI systems involves identifying and measuring unfair treatment of individuals or groups based on protected characteristics such as race, gender, age, or other sensitive attributes. Fairmind provides comprehensive tools to detect various types of bias in your AI models.

Types of Bias

Statistical Parity

Statistical parity (also called demographic parity) holds when the proportion of positive predictions is the same across demographic groups. It is measured using the demographic parity ratio.
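
For intuition, here is a minimal sketch of the demographic parity ratio computed directly from predictions and a protected attribute. The array names and values are illustrative only, not part of the Fairmind API:

```python
# Minimal sketch: demographic parity ratio from predictions and a protected attribute.
import numpy as np

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])                  # model predictions (1 = positive outcome)
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])   # protected attribute values

rate_a = y_pred[group == "A"].mean()   # selection rate for group A
rate_b = y_pred[group == "B"].mean()   # selection rate for group B

# Demographic parity ratio: min/max of the group selection rates (1.0 means equal rates).
dp_ratio = min(rate_a, rate_b) / max(rate_a, rate_b)
print(f"selection rates: A={rate_a:.2f}, B={rate_b:.2f}, ratio={dp_ratio:.2f}")
```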

Equal Opportunity

Equal opportunity ensures that the true positive rate (sensitivity) is the same across different groups. This is particularly important for applications where false negatives have significant consequences.
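
One way to inspect this outside the platform is with the open-source fairlearn library, which can report true positive rates per group. The snippet below is purely for illustration, with made-up data:

```python
# Sketch: compare true positive rates across groups with fairlearn.
import numpy as np
from fairlearn.metrics import MetricFrame, true_positive_rate

y_true = np.array([1, 1, 0, 1, 1, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 1, 0, 0, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

tpr = MetricFrame(metrics=true_positive_rate,
                  y_true=y_true, y_pred=y_pred,
                  sensitive_features=group)
print(tpr.by_group)        # TPR per group
print(tpr.difference())    # equal opportunity difference (0.0 is ideal)
```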

Equalized Odds

Equalized odds ensures that both true positive rates and false positive rates are equal across different demographic groups.
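
Continuing the illustration with fairlearn, the equalized odds difference reports the larger of the TPR gap and the FPR gap between groups, so 0.0 means both rates are equal:

```python
# Sketch: equalized odds difference with fairlearn (illustrative data).
import numpy as np
from fairlearn.metrics import equalized_odds_difference

y_true = np.array([1, 1, 0, 1, 1, 0, 1, 0])
y_pred = np.array([1, 1, 0, 0, 1, 0, 0, 1])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])

print(equalized_odds_difference(y_true, y_pred, sensitive_features=group))
```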

Individual Fairness

Individual fairness ensures that similar individuals receive similar predictions, regardless of their demographic characteristics.
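
One common heuristic for checking this is to compare each individual's prediction with those of their nearest neighbours in (non-protected) feature space. The sketch below uses scikit-learn and random stand-in data; it is an illustration of the idea, not Fairmind's exact implementation:

```python
# Sketch: a consistency-style check for individual fairness.
# Scores near 1.0 mean similar individuals receive similar predictions.
import numpy as np
from sklearn.neighbors import NearestNeighbors

np.random.seed(0)
X = np.random.rand(200, 5)                                          # non-protected features
y_pred = (X[:, 0] + 0.1 * np.random.rand(200) > 0.5).astype(int)    # stand-in predictions

k = 5
nn = NearestNeighbors(n_neighbors=k + 1).fit(X)
_, idx = nn.kneighbors(X)                   # idx[:, 0] is each point itself

neighbour_preds = y_pred[idx[:, 1:]]        # predictions of the k nearest neighbours
consistency = 1 - np.abs(y_pred[:, None] - neighbour_preds).mean()
print(f"consistency: {consistency:.3f}")
```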

Getting Started with Bias Detection

Step 1: Upload Your Model

Start by uploading your trained AI model to the Fairmind platform. Supported formats include:

  • Scikit-learn models (.pkl, .joblib)
  • TensorFlow models (.h5, .pb)
  • PyTorch models (.pt, .pth)
  • ONNX models (.onnx)
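
For example, a scikit-learn model can be serialized to one of the supported formats before upload (the file name and data here are illustrative):

```python
# Sketch: serialising a trained scikit-learn model to .joblib for upload.
import joblib
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=8, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

joblib.dump(model, "credit_model.joblib")   # this file is what you upload in Step 1
```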

Step 2: Provide Training Data

Upload your training dataset with the following requirements:

  • CSV or JSON format
  • Include protected attributes (e.g., gender, race, age)
  • Include target variable (what you're predicting)
  • Include feature columns used by your model
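
For illustration, a dataset meeting these requirements could be assembled and exported like this (the column names are examples only):

```python
# Sketch: feature columns, protected attributes, and the target exported to CSV.
import pandas as pd

df = pd.DataFrame({
    "income":       [42000, 58000, 39000, 71000],   # feature columns used by the model
    "credit_score": [680, 720, 640, 750],
    "gender":       ["F", "M", "F", "M"],            # protected attribute
    "age":          [29, 45, 37, 52],                # protected attribute
    "approved":     [0, 1, 0, 1],                    # target variable
})
df.to_csv("training_data.csv", index=False)
```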

Step 3: Configure Bias Detection

Configure your bias detection analysis:

  • Select protected attributes to analyze
  • Choose fairness metrics to compute
  • Set thresholds for bias alerts
  • Define custom fairness rules if needed
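
The sketch below shows what such a configuration might look like. The keys and values are hypothetical and do not reflect Fairmind's actual configuration schema; use the platform UI or API documentation for the real options:

```python
# Hypothetical configuration sketch (illustrative keys only, not the Fairmind schema).
bias_config = {
    "protected_attributes": ["gender", "age"],
    "fairness_metrics": ["demographic_parity", "equal_opportunity", "equalized_odds"],
    "alert_thresholds": {
        "demographic_parity_ratio_min": 0.8,     # alert if the ratio drops below 0.8
        "equal_opportunity_difference_max": 0.1,
    },
    "custom_rules": [],                           # optional domain-specific rules
}
```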

Bias Detection Methods

Statistical Methods

Fairmind uses established statistical methods to measure bias:

  • Demographic Parity: Measures whether the prediction rate is equal across groups
  • Equal Opportunity: Measures whether true positive rates are equal across groups
  • Equalized Odds: Measures whether both true positive and false positive rates are equal across groups
  • Calibration: Measures whether prediction probabilities are well-calibrated across groups (see the sketch after this list)
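
As one illustration of the calibration check, the gap between the mean predicted probability and the observed positive rate can be compared per group. The data and column names below are made up:

```python
# Sketch: a simple group-wise calibration check.
import pandas as pd

df = pd.DataFrame({
    "group":  ["A", "A", "A", "B", "B", "B"],
    "y_true": [1, 0, 1, 1, 0, 0],
    "y_prob": [0.8, 0.3, 0.7, 0.9, 0.6, 0.5],   # model's predicted probabilities
})

summary = df.groupby("group").agg(mean_prob=("y_prob", "mean"),
                                  positive_rate=("y_true", "mean"))
summary["calibration_gap"] = (summary["mean_prob"] - summary["positive_rate"]).abs()
print(summary)
```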

SHAP Analysis

SHAP (SHapley Additive exPlanations) helps understand how each feature contributes to predictions and can reveal bias in feature importance across different demographic groups.
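
A minimal sketch of this idea, using the open-source shap package with a scikit-learn model, is shown below. The data, group labels, and variable names are illustrative; this is not Fairmind's internal pipeline:

```python
# Sketch: compare mean absolute SHAP values per feature across demographic groups.
import numpy as np
import shap
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=5, random_state=0)
group = np.random.choice(["A", "B"], size=300)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Explain the predicted probability of the positive class.
explainer = shap.Explainer(lambda data: model.predict_proba(data)[:, 1], X)
shap_values = explainer(X[:100])

importance_a = np.abs(shap_values.values[group[:100] == "A"]).mean(axis=0)
importance_b = np.abs(shap_values.values[group[:100] == "B"]).mean(axis=0)
print("mean |SHAP| per feature, group A:", importance_a)
print("mean |SHAP| per feature, group B:", importance_b)
```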

LIME Analysis

LIME (Local Interpretable Model-agnostic Explanations) provides local explanations for individual predictions, helping identify bias in specific cases.
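
A short sketch using the open-source lime package to explain a single prediction (again illustrative, with synthetic data):

```python
# Sketch: explain one prediction with LIME and list the feature contributions.
from lime.lime_tabular import LimeTabularExplainer
from sklearn.ensemble import RandomForestClassifier
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=300, n_features=5, random_state=0)
model = RandomForestClassifier(random_state=0).fit(X, y)

explainer = LimeTabularExplainer(X,
                                 feature_names=[f"f{i}" for i in range(5)],
                                 class_names=["negative", "positive"],
                                 mode="classification")
explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
print(explanation.as_list())   # feature contributions for this one prediction
```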

Interpreting Results

Fairness Metrics

The bias detection results include:

  • Disparity Ratio: Ratio of prediction rates between groups (1.0 indicates equal rates)
  • Statistical Parity Difference: Difference in prediction rates between groups
  • Equal Opportunity Difference: Difference in true positive rates between groups
  • Calibration Error: Measure of probability calibration across groups

Visualizations

Fairmind provides various visualizations to help understand bias:

  • Fairness metrics comparison charts
  • Feature importance analysis by demographic group
  • Prediction distribution plots
  • Bias trend analysis over time
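
As a simple stand-alone example of a metrics comparison chart, selection rates by group can be drawn with matplotlib (the values below are made up for illustration):

```python
# Sketch: bar chart comparing a fairness-related metric across groups.
import matplotlib.pyplot as plt

groups = ["Group A", "Group B", "Group C"]
selection_rates = [0.62, 0.48, 0.55]        # e.g. positive prediction rate per group

plt.bar(groups, selection_rates)
plt.ylabel("Selection rate")
plt.title("Selection rate by demographic group")
plt.tight_layout()
plt.show()
```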

Custom Fairness Rules

In addition to standard fairness metrics, you can define custom fairness rules:

  • Business-specific fairness constraints
  • Regulatory compliance requirements
  • Domain-specific bias definitions
  • Multi-attribute fairness rules
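
For example, a business-specific constraint might be expressed as a plain Python check like the hypothetical one below. The rule, threshold, and function name are illustrative, not a Fairmind API:

```python
# Hypothetical custom fairness rule expressed as a plain Python check.
import numpy as np

def max_pairwise_disparity(y_pred, group, limit=0.10):
    """Business rule: selection rates of any two groups may differ by at most `limit`."""
    rates = {g: y_pred[group == g].mean() for g in np.unique(group)}
    disparity = max(rates.values()) - min(rates.values())
    return disparity <= limit, disparity

y_pred = np.array([1, 0, 1, 1, 0, 1, 0, 0])
group  = np.array(["A", "A", "A", "A", "B", "B", "B", "B"])
ok, gap = max_pairwise_disparity(y_pred, group)
print(f"rule satisfied: {ok} (gap={gap:.2f})")
```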

Bias Mitigation

Once bias is detected, Fairmind provides tools to help mitigate it:

  • Pre-processing: Modify training data to reduce bias
  • In-processing: Modify the training process to incorporate fairness constraints
  • Post-processing: Adjust model predictions to improve fairness (see the sketch after this list)
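
As an example of the post-processing approach, the open-source fairlearn library can re-threshold a trained model's scores per group. This is shown purely for illustration and is not necessarily how Fairmind applies mitigation:

```python
# Sketch: post-processing mitigation with fairlearn's ThresholdOptimizer.
import numpy as np
from fairlearn.postprocessing import ThresholdOptimizer
from sklearn.linear_model import LogisticRegression
from sklearn.datasets import make_classification

X, y = make_classification(n_samples=500, n_features=6, random_state=0)
group = np.random.choice(["A", "B"], size=500)

model = LogisticRegression().fit(X, y)

# Re-threshold the model's scores per group to satisfy demographic parity.
mitigator = ThresholdOptimizer(estimator=model,
                               constraints="demographic_parity",
                               prefit=True,
                               predict_method="predict_proba")
mitigator.fit(X, y, sensitive_features=group)
y_fair = mitigator.predict(X, sensitive_features=group, random_state=0)
```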

Important Note

Bias detection is currently in development. The MVP version includes basic statistical parity (demographic parity) measurements. Advanced features such as SHAP analysis and custom fairness rules will be available in future releases.

Best Practices

  • Always test for bias before deploying models to production
  • Consider multiple fairness metrics, not just one
  • Understand the trade-offs between fairness and accuracy
  • Document your bias detection process and results
  • Regularly re-evaluate bias as your data and models evolve

Next Steps

Continue your AI governance journey with these related guides: