In this special guest feature, David Magerman, Managing Partner, Differential Ventures, highlights how systemic bias encoded in data can cause AI systems to reinforce societal discrimination. Curating the data to reflect desired societal outcomes can remove and even reverse that behavior. Previously, David spent the entirety of his career at Renaissance Technologies, widely recognized as the world’s most successful quantitative hedge fund management company. He helped found the equities trading group at Renaissance, joining the group in its earliest days and playing a lead role in designing and building its trading, simulation, and estimation software. David holds a PhD in Computer Science from Stanford University, where his thesis on Natural Language Parsing as Statistical Pattern Recognition was an early and successful attempt to use large-scale data to produce fully automated syntactic analysis of text. David also earned a Bachelor of Arts in Mathematics and a Bachelor of Science in Computer Sciences and Information from the University of Pennsylvania.
Artificial intelligence (AI) software systems have been under attack for years for systematizing bias against minorities. Banks using AI algorithms for mortgage approvals have been shown to reject minority applicants at systematically higher rates. Many AI-powered recruiting tools are biased against minority applicants. In health care, African Americans have been subject to racial bias from AI-based hospital diagnostic tools. If you understand how data science research produces the models that power AI systems, you will understand why those systems perpetuate bias, and also how to fix it.
Merriam-Webster defines a stereotype as “something conforming to a fixed or general pattern.” Machine learning algorithms, which build the models that power AI software systems, are simply pattern-matching machines. Deep learning models, the most common choice in the latest wave of AI-powered systems, discover patterns in their training data and perpetuate those patterns in future data. This behavior is great if the goal is to replicate the world represented by the training data. However, for a variety of reasons, that isn’t always the best outcome.
The root cause of bias in data-driven AI systems comes down to how the training data is collected, and whether the decision-making recorded in that data reflects the corporate or societal goals of the system being deployed. Data for AI systems can come from several sources: the real world, targeted data collection efforts, and synthetic data generation.
The real world contains a lot of data. For instance, mortgage applications from a past period constitute a training set for a mortgage approval system. Job applications, along with the hiring decisions and the outcomes of those hires, provide data for a human resources hiring system. Similarly, medical data from patients over a period of time, including their symptoms, histories, diagnoses, and treatments, might be a useful data set for a medical treatment system.
Absent valid, useful, or available real-world data, machine learning researchers and application developers can collect data “artificially”: deciding what data they want, deploying a team to design a collection process, and going out into the world to gather the data proactively. Such a data set might be more targeted to the needs of the system builder, but the choices made during collection can skew the models built from that data, introducing a form of bias rooted in the design of the collection process itself.
There is a third possibility: synthetic data. If the appropriate data isn’t available, research teams can build synthetic data generators to create artificial data that they believe resembles the real-world data the proposed application will see, labeled with the analysis the developer wants the system to produce when it encounters that data.
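To illustrate what such a generator might look like, here is a minimal sketch in Python; the features, value ranges, and approval rule are invented for illustration and are not from the article. The point is that the developer, not historical behavior, decides what label each synthetic record carries.

```python
import random


def generate_synthetic_applications(n: int, seed: int = 0) -> list[dict]:
    """Generate n synthetic mortgage-application records carrying the
    labels the developer wants the trained system to reproduce.
    All features and the approval rule here are illustrative only."""
    rng = random.Random(seed)
    records = []
    for _ in range(n):
        income = rng.uniform(25_000, 250_000)     # annual income in dollars
        debt_ratio = rng.uniform(0.05, 0.60)      # debt-to-income ratio
        credit_score = rng.randint(500, 850)
        # The "desired analysis": approve on financial criteria alone,
        # so no demographic attribute can influence the label.
        approved = credit_score >= 640 and debt_ratio <= 0.43
        records.append({
            "income": round(income, 2),
            "debt_ratio": round(debt_ratio, 3),
            "credit_score": credit_score,
            "approved": approved,
        })
    return records


training_data = generate_synthetic_applications(10_000)
```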
In all of these cases, there is a presumption that the AI system should model the world as it is and perpetuate the behavior it sees in the training data, regardless of its source. Yet in a world historically shaped by systemic racism and bias against broad classes of minority groups, and, in the case of women, even majority groups, it is not at all clear that the best outcome is a replication of the decision-making of the past.
If we know that qualified African American mortgage applicants have been denied mortgages in the past, or that a historically biased economic system has left African Americans more likely to default on mortgages than white applicants, then training AI systems on historical mortgage data will only perpetuate the bias encoded in the financial system. Similarly, if qualified minorities and women have been underrepresented in the job market, training on historical hiring data will likely reinforce that bias. Where the real world has made the wrong decisions in the past, due to systemic bias in societal infrastructure, training systems on historical data reinforces that bias algorithmically rather than anecdotally, perhaps even making the problem worse.
An effective way to rectify this problem is to set targets for the desired behavior of data-driven systems, and then engineer or curate training data sets that represent those desired outcomes. Machine learning algorithms can still learn patterns that make accurate predictions on new data, but the curated inputs and outputs ensure that the models also reflect the outcomes we are proactively aiming for.
How does this work in practice? Let’s say you want to build a mortgage approval model that makes good decisions about loan risk while treating minority applicants on a par with non-minority applicants, but the historical data you have is biased against minority applicants. One simple remedy is to filter the training data so that the approval rate for minority applicants matches the approval rate for non-minority applicants. By skewing the training data toward the outcome you want to see, rather than toward the historical biases the data reflects, you push the machine learning algorithms to build models that treat minority applicants more fairly than they have been treated in the past.
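To make this curation step concrete, here is a minimal sketch in Python, assuming a pandas DataFrame with hypothetical 0/1 columns `minority` and `approved` (names invented for illustration, not from the article). It drops a random subset of historical denials from the minority group until the minority approval rate matches the non-minority rate, then returns the curated training set.

```python
import pandas as pd


def equalize_approval_rates(df: pd.DataFrame,
                            group_col: str = "minority",
                            label_col: str = "approved",
                            random_state: int = 0) -> pd.DataFrame:
    """Curate a training set so that the approval rate for the minority
    group (group_col == 1) matches the rate for the non-minority group
    (group_col == 0), by dropping a random subset of minority denials.
    Column names and the 0/1 encoding are illustrative assumptions."""
    minority = df[df[group_col] == 1]
    non_minority = df[df[group_col] == 0]

    target_rate = non_minority[label_col].mean()   # approval rate to match
    if target_rate <= 0:                           # nothing sensible to match
        return df.copy()

    approvals = minority[minority[label_col] == 1]
    denials = minority[minority[label_col] == 0]

    # Keep only enough denials that approvals / (approvals + kept) == target_rate.
    n_keep = int(round(len(approvals) * (1 - target_rate) / target_rate))
    kept_denials = denials.sample(n=min(n_keep, len(denials)),
                                  random_state=random_state)

    curated = pd.concat([non_minority, approvals, kept_denials])
    return curated.sample(frac=1, random_state=random_state)  # shuffle rows
```

An alternative with the same intent is to over-sample minority approvals rather than discard records, which preserves more training data; either way, the curated set, not the raw history, defines the pattern the model learns.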
Some people may want to view machine learning models simply as tools for capturing and representing the world as it has been, flaws and all. Given the reality of systemic racism pervading the systems that run our society, this view has led to AI-driven software encoding the racism of the past and present and amplifying it through the power of technology. We can do better, however, by using the pattern-matching ability of machine learning algorithms, and by curating the data sets they are trained on, to make the world what we want it to be, not the flawed version of itself that it has been.