Bias
Bias in AI can show up in multiple places in the ecosystem, including the data used to train AI, how humans label or interpret that data before it is ingested into the system, the code used to create the algorithm, and the prompts used to test the system, any of which can cause it to produce and learn biased results. AI systems can learn explicit bias by internalizing the implicit biases of the systems used to create the AI product. Dr. Joy Buolamwini writes about this in her book Unmasking AI: My Mission to Protect What Is Human in a World of Machines. This work describes how facial recognition technology trained on light-skinned people is unable to detect darker-skinned individuals, particularly Black women. This is known as sample/selection bias (the sketch after the list below shows one way such a representation gap can be measured). Other types of bias include:
algorithmic bias: systematic and repeatable errors that privilege one category of information or users over another. See Safiya Noble's TED Talk and her book Algorithms of Oppression: How Search Engines Reinforce Racism.
cognitive bias: when personal bias seeps into interpreting datasets or creating training models.
confirmation bias: related to cognitive bias, this occurs when pre-existing beliefs or trends in the data reinforce existing biases and the system is unable to identify or create new patterns.
exclusion bias: related to sample/selection bias, this occurs when data points are left out of the data used to train the system.
measurement bias: an error in data collection that leads to inconsistent or inaccurate results, causing downstream errors in the AI system or product.
out-group homogeneity bias: the perception that members of an out-group are more alike than they actually are, while members of one's own in-group are seen as more diverse than they actually are.
prejudice bias: the dataset contains stereotypes that lead to biased AI results.
recall bias: this typically occurs during the data labeling and cleanup stage, when labels are inconsistently or inaccurately applied.
stereotyping bias: the AI system reinforces harmful stereotypes; for example, when asked to produce an image of a doctor, the AI system generates images that are predominantly white and male.
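As a concrete illustration of sample/selection and exclusion bias, the minimal Python sketch below compares a training set's group composition against the population it is meant to serve. Every name and number here is hypothetical, invented for illustration; none is drawn from a real system or dataset.

```python
from collections import Counter

def representation_report(labels, population_shares, tolerance=0.05):
    """Compare each group's share of a training set against its share of
    the target population, flagging gaps larger than `tolerance`."""
    counts = Counter(labels)
    total = sum(counts.values())
    report = {}
    for group, expected in population_shares.items():
        observed = counts.get(group, 0) / total
        report[group] = {
            "observed_share": round(observed, 3),
            "expected_share": expected,
            # A large gap suggests sample/selection or exclusion bias.
            "flag": abs(observed - expected) > tolerance,
        }
    return report

# Hypothetical skin-tone labels for a face-image training set.
training_labels = ["lighter"] * 880 + ["darker"] * 120
print(representation_report(
    training_labels,
    population_shares={"lighter": 0.6, "darker": 0.4},
))
```

In this invented example, darker-skinned faces make up only 12% of the training set against an expected 40%, so the gap is flagged; a real audit would rely on carefully validated demographic labels rather than the toy ones shown here.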
Correcting Bias
Addressing and correcting bias in AI systems starts at the outset of the AI pipeline. This includes using diverse and inclusive datasets, correcting the learning model or training algorithm when it produces biased results, building a bias detection tool into the AI system, continuously monitoring and auditing systems after they are deployed, and keeping humans in the loop for critical decision making around AI.
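To make the bias detection and monitoring-and-auditing steps concrete, here is a minimal sketch of a disaggregated accuracy audit. The function, records, and 0.8 threshold are all assumptions invented for illustration (the threshold loosely echoes the four-fifths rule used in disparate-impact analysis); a production audit would use an established fairness toolkit rather than a hand-rolled check.

```python
def audit_by_group(records, threshold=0.8):
    """Disaggregate accuracy by demographic group and flag any group whose
    accuracy falls below `threshold` times the best-performing group's."""
    totals, correct = {}, {}
    for group, predicted, actual in records:
        totals[group] = totals.get(group, 0) + 1
        correct[group] = correct.get(group, 0) + (predicted == actual)
    accuracy = {g: correct[g] / totals[g] for g in totals}
    best = max(accuracy.values())
    return {g: {"accuracy": round(a, 3), "flag": a < best * threshold}
            for g, a in accuracy.items()}

# Hypothetical audit records: (group, model prediction, ground truth).
records = (
    [("lighter", 1, 1)] * 95 + [("lighter", 1, 0)] * 5
    + [("darker", 1, 1)] * 70 + [("darker", 0, 1)] * 30
)
print(audit_by_group(records))
```

In this invented data, the model is 95% accurate on the "lighter" group but only 70% accurate on the "darker" group, so the audit flags the disparity for review, the kind of signal that keeps humans in the loop for critical decisions.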
Building up your own AI literacy and understanding how a particular AI tool works, what data was used to train it, and whether it has a bias detection tool or monitoring process can help you evaluate the results of an AI system. You can also find out whether the company that produces the tool has a reporting system for when you encounter biased results.