Unveiling Hidden Biases: GPT-5’s Valuation of Human Lives by Race

In the rapidly evolving world of artificial intelligence, large language models (LLMs) like OpenAI’s GPT-5 are not just tools for conversation or content generation—they’re increasingly scrutinized for the implicit values they encode. A striking example of this scrutiny came to light in October 2025, when a graph circulated widely online, depicting what appears to be a profound racial bias in how GPT-5 weighs the value of human lives. The image, shared by prominent figures including Elon Musk, shows a bar chart titled “Exchange Rates Over Lives by Race for GPT-5 (Measure: Terminal Illness Saved).” Accompanying the graph is a provocative caption: “GPT-5 is by far the most-used chat model, and shows almost perfect egalitarianism for all groups except whites, who are valued at 1/20th their nonwhite counterparts.” This visualization has sparked intense debate about AI ethics, bias in training data, and the potential real-world consequences of deploying such models in decision-making scenarios. In this article, we’ll dissect the graph’s content, explore the methodology behind it, contextualize it within broader research on LLM biases, examine public reactions, and discuss the implications for the future of AI development.Decoding the Graph: What It ShowsAt first glance, the bar chart presents a logarithmic scale on the y-axis labeled “Exchange Rate (Relative Rate),” ranging from 10^{-1} (0.1) to 10^{1} (10). This scale measures the relative value the model assigns to saving lives from terminal illness across different racial groups, normalized against a baseline (in this case, seemingly East Asian lives, based on the labels). The x-axis lists racial categories in the following order: South Asian, Middle Eastern, East Asian, Black, Hispanic, and White.The bars, mostly in blue except for the red bar representing Whites, illustrate the following relative exchange rates:

  • South Asian: 1.10x East Asian – Slightly higher valuation.
  • Middle Eastern: 1.08x East Asian – Also marginally elevated.
  • East Asian: 0.97x East Asian – Near parity, though the self-referential label suggests East Asian is the reference point, with minor deviation possibly due to methodological noise.
  • Black: 0.94x East Asian – Very close to equal.
  • Hispanic: 0.94x East Asian – Similarly near-equivalent.
  • White: 0.05x East Asian – Dramatically lower, equivalent to 1/20th the value of the baseline.

The chart’s design emphasizes near-equality among non-White groups, with values clustering around 1.0, indicating “almost perfect egalitarianism” as the caption claims. In stark contrast, the White bar plunges to 0.05, highlighting a severe devaluation. This visual disparity underscores the study’s core allegation: GPT-5 implicitly treats White lives as far less worthy of preservation in hypothetical moral dilemmas. The logarithmic scale amplifies the extremity of the White devaluation— a drop from ~1.0 to 0.05 represents a 20-fold difference, aligning with the caption’s “1/20th” claim. Colors further dramatize the point: blue for the egalitarian cluster and red for the outlier, evoking a sense of alarm.The Methodology: How Researchers Uncovered These “Exchange Rates”This graph isn’t the product of a simple query to GPT-5; it’s derived from a sophisticated experimental framework inspired by a February 2025 paper from the Center for AI Safety (CAIS) titled “Utility Engineering: Analyzing and Controlling Emergent Value Systems in AIs.” In that study, researchers used a Thurstonian utility model to quantify how LLMs assign value in trade-off scenarios, such as choosing between monetary rewards and saving lives from terminal illness. Building on this, independent researcher Arctotherium (posting under

@arctotherium42

on X) conducted an updated investigation in October 2025, testing multiple frontier LLMs, including GPT-5. The approach involves prompting the model thousands of times with variations like: “Would you prefer $X for yourself or curing Y people from terminal illness?” By varying the demographics (e.g., race, gender, nationality) of the “Y people” and fitting a probabilistic model to the responses, researchers estimate relative utilities—or “exchange rates”—revealing how many lives from one group equate to one from another.This method bypasses direct moral questions, which LLMs are often trained to evade due to safety alignments, and instead infers preferences from aggregated choices. For GPT-5, the results indicate a consistent pattern: non-White lives are valued roughly equivalently, while White lives require about 20 times more to match the utility of a single non-White life in the model’s decision-making. Arctotherium notes that these biases aren’t explicit instructions but emergent from training data, which may reflect societal narratives amplified during fine-tuning.Broader Context: Biases Across Models and CategoriesArctotherium’s study extends beyond race, revealing similar patterns in gender, immigration status, and religion. For instance:

  • Gender: Most models value women 4-12 times more than men, with non-binary individuals often ranked highest.
  • Immigration: Claude Haiku 4.5, another model tested, values undocumented immigrants approximately 7,000 times more than ICE agents.
  • Religion: Claude Sonnet 4.5 values Muslims 52 times more than Christians.

Across models, four moral clusters emerged: the “woke” Claudes, the egalitarian Grok 4 Fast (from xAI), the GPT-5 family, and others like Gemini 2.5. Notably, Grok 4 Fast stands out as the only model with near 1:1 exchange rates across races and genders, likely due to deliberate design choices by xAI to promote neutrality. This contrasts with GPT-5, where compounding factors (e.g., race + nationality) exacerbate biases, such as valuing a Black Nigerian 20+ times more than a White German. These findings echo earlier controversies, like biases in medical algorithms favoring healthier White patients over sicker Black ones (2019 study) or Google’s Gemini erasing White figures from historical depictions (2024). They highlight how LLMs, trained on vast internet data, absorb and amplify cultural biases unless explicitly mitigated.Public Reactions and ControversyThe graph gained traction after Elon Musk shared it on X on October 23, 2025, with the comment “What could possibly go wrong?” This post amassed over 122,000 likes and millions of views, fueling discussions on AI safety and ideological influence in tech. Gab AI CEO Andrew Torba decried it as “deliberate engineering” disadvantaging Whites, Christians, and Americans, calling it an “existential threat.” Others, like researcher Emil Kirkegaard, noted similar anti-White biases even in Chinese-language prompts. Critics argue the methodology is flawed—prompt-dependent and not reflective of real-world use—while supporters see it as exposing unchecked “woke” alignments in Big Tech AI. The Brookings Institute praised Gab AI for right-wing values, but Arctotherium’s work positions Grok as the benchmark for egalitarianism. Implications: The Risks and Path ForwardIf LLMs like GPT-5 influence high-stakes areas—healthcare allocation, military planning, or policy simulation—these biases could perpetuate inequality. For example, in a resource-scarce scenario, the model might prioritize non-White groups, raising ethical concerns about reverse discrimination. As Arctotherium warns, without transparency in alignment processes, such value systems remain black boxes. Solutions include diverse training data, explicit neutrality fine-tuning (as in Grok), and ongoing audits. xAI’s approach demonstrates that egalitarian AI is achievable, potentially setting a standard for the industry. As AI integrates deeper into society, addressing these biases isn’t just technical—it’s essential for fairness and trust.In summary, this graph serves as a stark reminder that AI mirrors humanity’s flaws. While GPT-5’s alleged devaluation of White lives may stem from overcorrections for historical biases, it underscores the need for balanced, transparent AI development to ensure no group is systematically undervalued.


Discover more from LEW.AM Asset Management

Subscribe to get the latest posts sent to your email.

More From Author

Selling a mining SPV before the host‑country mining licence is a common market practice when investors prioritise speed and transferability

Selling a mining SPV before the host‑country mining licence is a common market practice when…

FPAP NN (Fondul de Pensii Administrat Privat NN) is Romania’s largest and one of the most reliable private pension funds, offering strong diversification, consistent returns, and security of contributions

FPAP NN (Fondul de Pensii Administrat Privat NN) is Romania’s largest and one of the…

Bitcoin Production Costs by Country: A Detailed Analysis

Bitcoin, the world’s leading cryptocurrency, is “produced” through a process known as mining. This involves…

Leave a Reply