Media Understanding Mini Workshop III

Bias in Data and Algorithmic Fairness

Abstract

Data is often heterogeneous, generated by subgroups with different traits and behaviors. Correlations among those traits, behaviors, and the way the data is collected create dependencies that bias analysis. Models trained on biased data make invalid inferences about individuals, a mistake known as the ecological fallacy, and their inferences can unfairly discriminate against individuals based on membership in protected groups. I describe common sources of bias in heterogeneous data, including Simpson's paradox, survivorship bias, and aggregation bias, and show that ignoring them can dramatically alter the conclusions of an analysis and lead to wrong policy recommendations. Using data from the COVID-19 pandemic, I show that spatial aggregation of disease statistics exaggerates estimated growth rates. Finally, I describe a mathematical framework for de-biasing data that addresses these threats to the validity of predictive models. The framework creates covariates that do not depend on protected features, such as gender or race, and can be used with any model to produce fairer, less biased predictions. The framework promises to enable unbiased models even in analytically challenging data sets.
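
To make the aggregation pitfall concrete, the following minimal sketch (not from the talk; all data are synthetic and illustrative) constructs two subgroups whose within-group trends are both negative, yet whose pooled trend is positive. This sign reversal is the hallmark of Simpson's paradox:

    # Two subgroups each show a *negative* x-y trend, but pooling them
    # yields a *positive* trend because the groups differ in baseline levels.
    import numpy as np

    rng = np.random.default_rng(0)

    # Group A: low x, lower y on average; Group B: high x, higher y on average.
    # Within each group, y decreases with x (slope -1).
    x_a = rng.uniform(0, 4, 200)
    y_a = 10 - 1.0 * x_a + rng.normal(0, 0.5, 200)
    x_b = rng.uniform(6, 10, 200)
    y_b = 18 - 1.0 * x_b + rng.normal(0, 0.5, 200)

    # Per-group slopes (both close to -1.0).
    slope_a = np.polyfit(x_a, y_a, 1)[0]
    slope_b = np.polyfit(x_b, y_b, 1)[0]

    # Slope fit to the aggregated data flips sign (roughly +0.3).
    x_all = np.concatenate([x_a, x_b])
    y_all = np.concatenate([y_a, y_b])
    slope_all = np.polyfit(x_all, y_all, 1)[0]

    print(f"group A slope: {slope_a:+.2f}")
    print(f"group B slope: {slope_b:+.2f}")
    print(f"pooled slope:  {slope_all:+.2f}")  # opposite sign: Simpson's paradox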
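The abstract does not spell out how the de-biasing framework constructs its covariates, but one standard way to obtain features that are uncorrelated with a protected attribute is to regress each covariate on that attribute and keep the residuals. The sketch below is an illustrative assumption, not necessarily the speaker's method, and it removes only linear dependence; the data are synthetic placeholders:

    # Residualization: project each covariate onto the protected attribute
    # and keep the residuals, which are orthogonal to it by construction.
    import numpy as np

    rng = np.random.default_rng(1)

    n = 1000
    z = rng.integers(0, 2, n).astype(float)          # protected attribute (binary group label)
    X = rng.normal(0, 1, (n, 3)) + 2.0 * z[:, None]  # covariates contaminated by z

    # Least-squares fit of each column of X on [1, z]; residuals have
    # zero sample correlation with z.
    Z = np.column_stack([np.ones(n), z])
    beta, *_ = np.linalg.lstsq(Z, X, rcond=None)
    X_debiased = X - Z @ beta

    print("corr(X, z) before:",
          np.round([np.corrcoef(X[:, j], z)[0, 1] for j in range(3)], 3))
    print("corr(X, z) after: ",
          np.round([np.corrcoef(X_debiased[:, j], z)[0, 1] for j in range(3)], 3))

As the abstract notes for the actual framework, covariates built this way can be passed to any downstream model in place of the raw features.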

Speaker Bio

Kristina Lerman is a Principal Scientist at the University of Southern California Information Sciences Institute and holds a joint appointment as a full Research Professor in the USC Computer Science Department. Trained as a physicist, she now applies network analysis and machine learning to problems in computational social science, including crowdsourcing, social network analysis, and social media analysis.

Her recent work on modeling and understanding cognitive biases in social networks has been covered by The Washington Post, The Wall Street Journal, and MIT Technology Review.
