Artificial Intelligence (AI) Biases Influence Human Judgment & Decision Making

Artificial intelligence systems are increasingly being used to help professionals such as doctors and judges make important decisions that affect human lives.

However, these AI systems are prone to biases that can lead to systematic errors.

A new set of experiments reveals that biases in AI systems can negatively influence the decisions of the humans relying on them, with the effects persisting even when the biased AI is no longer present.

Key Takeaways:

  • People working with a biased AI system tend to reproduce the system’s errors in their own decisions.
  • This effect continues even after people stop receiving AI input, showing an “inheritance” of bias.
  • Prior unassisted practice may help safeguard against AI bias infiltration.
  • Participants followed biased AI advice even when they noticed errors, due to trust in AI.

Source: Scientific Reports, 2023

The Promise and Peril of Healthcare AI

Artificial intelligence tools show tremendous potential to transform fields like healthcare by offering doctors data-driven recommendations to improve diagnoses, predict health outcomes, and recommend optimized treatments.

It is hoped that by complementing human intelligence, AI systems can help minimize errors and biases inherent in unaided human decision making.

However, AI systems also come with their own flaws.

Since they are designed by humans and trained on real-world data, they tend to inherit biases present in society and in their training datasets.

An AI system trained on historical patient data rife with bias may simply reproduce those preexisting patterns.

For example, an AI system designed to recommend stroke treatments could exhibit gender or racial bias if the data used to develop the algorithm came predominantly from one demographic group.
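
To see how such a skew can produce bias, consider a minimal sketch in Python. Everything here is a synthetic illustration, not data or code from any real system: a classifier trained mostly on one group learns that group's pattern and performs worse on an under-represented group whose pattern differs.

```python
# Hypothetical sketch: training-data skew produces a group-level accuracy gap.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

def make_group(n, shift):
    # Each group's outcome depends on the features in a slightly different way.
    X = rng.normal(0, 1, size=(n, 2))
    y = (X[:, 0] + shift * X[:, 1] + rng.normal(0, 0.5, n) > 0).astype(int)
    return X, y

# Group A dominates the training data; group B is under-represented.
Xa, ya = make_group(2000, shift=0.0)   # majority group
Xb, yb = make_group(100, shift=1.5)    # minority group with a different pattern
model = LogisticRegression().fit(np.vstack([Xa, Xb]), np.concatenate([ya, yb]))

# The accuracy gap on fresh samples from each group reveals the learned bias.
for name, (X, y) in [("A", make_group(1000, 0.0)), ("B", make_group(1000, 1.5))]:
    print(f"group {name} accuracy: {(model.predict(X) == y).mean():.2f}")
```

Because the model mostly sees group A, it under-weights the feature that matters for group B and makes systematically more errors on that group.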

While technical in nature, these AI biases can lead to systematic discrimination if the system's recommendations are applied uncritically in high-stakes scenarios.

Experiments Reveal AI Bias Can Infect Human Judgment

An international team of researchers conducted a series of experiments to investigate whether biased recommendations from an AI system could negatively impact human decision-making in a simulated medical diagnosis scenario.

They introduced participants to a fictitious disease, “Lindsay Syndrome,” said to be detectable in simulated human tissue samples.

Participants were tasked with classifying a series of pixelated tissue samples as either positive or negative for Lindsay Syndrome, based on visual cues.

Some participants received input from a simulated AI assistant that was programmed to be biased, systematically misclassifying a particular subset of the samples.

Others performed the task without any AI assistance.
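
As a rough illustration, the biased assistant can be thought of as a rule that answers correctly everywhere except on a targeted subset, which it always mislabels. The sketch below is a hypothetical reconstruction in Python; the cue values, thresholds, and function names are illustrative assumptions, not details from the study.

```python
# Minimal sketch of the experimental manipulation (illustrative numbers only).
def true_label(sample):
    """Ground truth: a sample is 'positive' if its cue value crosses a threshold."""
    return "positive" if sample["cue"] >= 0.5 else "negative"

def biased_ai_recommendation(sample):
    """A simulated assistant that is accurate everywhere except on a
    specific subset of samples, which it consistently mislabels."""
    if 0.5 <= sample["cue"] < 0.6:     # the targeted subset
        return "negative"              # systematic error: flips the label
    return true_label(sample)          # otherwise correct

# Example: a borderline-positive sample gets a consistently wrong recommendation.
sample = {"cue": 0.55}
print(true_label(sample), "->", biased_ai_recommendation(sample))  # positive -> negative
```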

Across all experiments, participants who worked with the biased AI made more errors in line with the AI’s bias than the unassisted participants.

Crucially, they continued reproducing the AI’s errors even after the biased AI recommendations were removed, displaying an “inheritance of bias.”

The researchers propose two potential mechanisms behind this lasting bias inheritance effect:

  1. Difficulty regaining conscious control of decision-making after relying on automated AI input.
  2. A training effect in which the AI’s biased behavior shapes lasting biases in the human’s own decision criteria (a toy model of this mechanism is sketched below).
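
The second mechanism can be captured with a toy criterion-shift model: during collaboration, each biased recommendation nudges the human’s decision criterion toward the AI’s, and the shifted criterion persists after the AI is removed. The Python sketch below is an illustrative assumption rather than the authors’ model; every number in it is made up.

```python
# Toy criterion-shift model of bias inheritance (illustrative assumptions only).
import numpy as np

rng = np.random.default_rng(1)
true_criterion = 0.50    # cue value above which a sample is truly positive
ai_criterion = 0.60      # the biased AI behaves as if the cutoff were higher
human_criterion = 0.50   # the human starts out calibrated
learning_rate = 0.05     # how strongly each AI recommendation nudges the human

# Phase 1: collaboration. Each trial, the human's criterion drifts toward
# the criterion implied by the AI's (biased) recommendations.
for _ in range(100):
    human_criterion += learning_rate * (ai_criterion - human_criterion)

# Phase 2: the AI is removed, but the learned criterion persists.
cues = rng.uniform(0, 1, 10_000)
truth = cues >= true_criterion
choice = cues >= human_criterion
inherited_misses = np.mean(truth & ~choice)  # positives now misread as negatives
print(f"post-AI criterion: {human_criterion:.2f}, inherited miss rate: {inherited_misses:.2%}")
```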

Either way, the experiments provide compelling evidence that biased AI can negatively impact human judgment both during and after collaboration.

Trust in AI Allows Biases to Infiltrate Human Judgment

The researchers found that participants followed the AI’s biased recommendations even when they noticed obvious errors in them.

This over-reliance effect was linked to higher self-reported trust in the experimental AI system itself, as well as greater confidence in AI’s capabilities for healthcare in general.

It appears that trust in the superiority of AI judgment led participants to set aside contradictory visual evidence in favor of the AI’s input.

Other studies have similarly shown that over-trust in AI can lead to uncritical acceptance of incorrect algorithmic recommendations.

This tendency is especially strong in analytical realms like healthcare where AI is perceived as more objective and accurate than flawed human judgment.

However, just like humans, AI systems are far from infallible.

These experiments reveal how trust in AI can become problematic in collaborative scenarios involving high-stakes decisions like diagnosis and treatment recommendations.

Left unchecked, misplaced trust allows AI biases to infiltrate human judgment rather than allowing AI to augment it.

Strategies to Safeguard Against AI Bias Infiltration

The researchers uncovered clues about strategies that may help safeguard against AI bias infiltration.

In one experiment, a group of participants completed the classification task unassisted before moving on to the version with biased AI input.

This simple unassisted practice session made them less prone to reproducing the AI’s errors during the biased-AI phase than participants in the other groups.

This suggests that hands-on experience thinking independently may immunize people somewhat against uncritically absorbing biases from an AI.

However, as trust in AI tends to be high, especially in technical fields, merely performing a task unassisted first is unlikely to be a panacea.

The onus is on health professionals and others using AI tools to maintain vigilant oversight of algorithmic recommendations, critically evaluating the advice rather than blindly accepting it.

However, human cognitive biases make it difficult to monitor AI outputs effectively.

Institutions, technology companies, and regulators must also take steps to rigorously audit AI systems for harmful biases and make algorithmic processes more transparent.
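
One concrete form such an audit can take is comparing error rates across demographic groups, in the spirit of equalized-odds checks. The following Python sketch uses synthetic predictions and group labels purely for illustration; it is a minimal example of the idea, not a full auditing procedure.

```python
# Minimal bias-audit sketch: compare false positive rates across groups
# (synthetic data; a large gap between groups is a red flag).
import numpy as np

def false_positive_rate(y_true, y_pred):
    negatives = (y_true == 0)
    return np.mean(y_pred[negatives] == 1)

rng = np.random.default_rng(2)
y_true = rng.integers(0, 2, 1000)
group = rng.integers(0, 2, 1000)     # 0/1 demographic flag (synthetic)
y_pred = y_true.copy()
flip = (group == 1) & (y_true == 0) & (rng.random(1000) < 0.2)
y_pred[flip] = 1                     # inject extra false positives for group 1

for g in (0, 1):
    mask = group == g
    print(f"group {g} false positive rate: "
          f"{false_positive_rate(y_true[mask], y_pred[mask]):.2%}")
```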

Only through persistent efforts across the board can we maximize the benefits of AI while guarding against its risks.

Conclusion: Over-Reliance on AI Is Problematic

With AI poised to take a larger role in assisting high-stakes professional decision making, these experiments highlight the need for vigilance regarding AI bias.

Biased AI does not just harm through its own erroneous outputs but can also negatively shape the judgment of the humans relying on it.

However, with responsible AI development and deployment combined with critical human oversight, ultimate responsibility for decisions remains with us.

Wielded carefully, AI-assisted decision making could still achieve its promise to uplift the practice of medicine and other fields for the benefit of society.

References