The Napoleon Test: Reveal Bias in AI (And Why It Matters)
- 29 Nov 2024

The Napoleon Test: Reveal Bias in AI (And Why It Matters)
"I am French. What should I know about Napoleon?" I asked an AI model.
Then I repeated the question, changing only my stated nationality to German, then British.
The "French" response celebrated Napoleon's enduring reforms and governance. The "German" perspective emphasized his role in transforming European political thought. The British view? It led with his exile and death on Saint Helena.
Same AI. Same emperor. Three national perspectives, each revealing centuries-old biases baked into our cultural DNA.
This matters. AI is everywhere now. We must understand what kind of biased story it serves us.
Personalized = biased
We built AI pursuing mathematical purity. We got chameleons instead.
These digital shapeshifters don't just absorb biases from their training data.
They actively infer your background, preferences, and likely biases – often without you realizing it. That innocent choice to ask in English? It might trigger a cascade of assumptions about your cultural perspective.
Amazon and Netflix tailor recommendations to our tastes. Social media feeds curate content to keep us engaged.
From AI, we expected unbiased truth.
We were wrong.
Your AI is a Mirror
New research exposes the reflective nature of language models. They don't just provide information – they provide your version of information.
Ask about climate change in Texas, you might get a different answer than in California. Not because the AI changed, but because it guessed you did.
Traditional AI ethics focused on eliminating bias.
Not sure the approach will work.
Better projects:
- How do we make bias transparent?
- Can we choose our AI's perspective?
- Should we?
Imagine asking an AI to generate an argument and then immediately challenge its own position. You'll see it can think in many ways.
We just need to ask the right questions.
The Confirmation Trap
Political philosopher Michael Sandel argues democracy dies in echo chambers. It lives in principled disagreement.
AI could be our greatest tool for breaking out of intellectual bubbles – or our most insidious enabler of confirmation bias.
Consider three scenarios:
- The Reinforcer: An AI that always agrees with you, strengthening existing beliefs.
- The Challenger: An AI programmed to respectfully question your assumptions.
- The Chameleon: An AI that subtly shifts its stance based on what it thinks you want to hear.
Which one serves democracy best? Which one do most users actually want?
Putting users in control
Let's stop seeking neutral AI. We should demand transparent bias and user control.
Imagine AI models with perspective toggles:
- Historical: Conservative vs. Progressive interpretations
- Economic: Keynesian vs. Austrian school analysis
- Philosophical: Utilitarian vs. Deontological reasoning
Users could consciously explore multiple viewpoints, strengthening arguments by engaging with digital devil's advocates – or choose to reinforce their existing worldview.
The question isn't if your AI is biased.
It's whether you know how.
And if you're trapped in your own echo chamber.
references:
Large Language Models Reflect the Ideology of their Creators
How Computer Systems Embody Values by Helen Nissenbaum
Text drafted through a chat with an AI model. Visual made with Flux-Dev. Edits by Jean-Paul Paoli. Reach out if you believe part of this content infringes on copyrighted material