
Every AI Has a Personality

You might not notice it at first, but spend enough time with different language models and you'll start to see it—some are strict teachers, others are generous explainers, and some are nitpicky fact-checkers.

These personalities aren't accidental. They're shaped by training data and design choices that determine how each model judges what's "accurate."

I discovered this firsthand when I ran into a fascinating problem with dataset evaluation that made me question everything I thought I knew about AI accuracy.

The Setup: When "Accurate" Isn't So Simple

I was working with a dataset where responses were being labeled as either "accurate" or "inaccurate"—a binary choice that seemed straightforward at first. But as I dug deeper, I realized this simple rubric was creating problems:

  • Minor slips were getting treated the same as major errors
  • Edge cases were exposing huge disagreements about what "accuracy" even means
  • The binary approach was flattening important nuance
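
To make that flattening concrete, here is a minimal sketch of what a graded rubric could look like instead. The levels, names, and collapse rule are illustrative assumptions on my part, not the actual schema of the dataset I was working with.

```python
# A minimal sketch of a graded accuracy rubric as an alternative to the
# binary accurate/inaccurate label. The levels and the collapse rule are
# illustrative assumptions, not the schema of the dataset described here.
from enum import Enum

class AccuracyLabel(Enum):
    ACCURATE = "accurate"        # fully correct
    MINOR_SLIP = "minor_slip"    # small error that doesn't change the meaning
    MAJOR_ERROR = "major_error"  # error that would mislead a reader

def to_binary(label: AccuracyLabel) -> str:
    """Collapse the graded scale back down to the binary rubric.

    MINOR_SLIP and MAJOR_ERROR both map to "inaccurate" here, which is
    exactly the nuance the binary label flattens.
    """
    return "accurate" if label is AccuracyLabel.ACCURATE else "inaccurate"
```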

So I decided to run an experiment.

The Experiment: Three AIs, Same Questions, Different Answers

I took the same set of questionable responses and asked three popular LLMs—ChatGPT, Claude, and Gemini—to evaluate them for accuracy. What happened next surprised me.
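
For anyone curious to try something similar, here is a rough sketch of the loop, assuming each vendor's Python SDK is installed and API keys are set in the environment. The rubric wording and model names are placeholders rather than the exact ones I used.

```python
# A rough sketch of the cross-model evaluation loop, assuming the openai,
# anthropic, and google-generativeai SDKs are installed and API keys are
# set in the environment. Rubric wording and model names are placeholders.
import os
from openai import OpenAI
from anthropic import Anthropic
import google.generativeai as genai

RUBRIC = ("Evaluate the following response for accuracy. "
          "Answer ACCURATE or INACCURATE, then briefly justify your label.\n\n"
          "Response: {response}")

openai_client = OpenAI()        # reads OPENAI_API_KEY
anthropic_client = Anthropic()  # reads ANTHROPIC_API_KEY
genai.configure(api_key=os.environ["GOOGLE_API_KEY"])
gemini = genai.GenerativeModel("gemini-1.5-pro")  # placeholder model name

def evaluate(response_text: str) -> dict[str, str]:
    """Ask all three models to judge the same response."""
    prompt = RUBRIC.format(response=response_text)
    gpt = openai_client.chat.completions.create(
        model="gpt-4o",  # placeholder model name
        messages=[{"role": "user", "content": prompt}],
    ).choices[0].message.content
    claude = anthropic_client.messages.create(
        model="claude-3-5-sonnet-latest",  # placeholder model name
        max_tokens=300,
        messages=[{"role": "user", "content": prompt}],
    ).content[0].text
    gem = gemini.generate_content(prompt).text
    return {"ChatGPT": gpt, "Claude": claude, "Gemini": gem}
```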

They disagreed on every single item.

Here's what I found:

The Results: Complete Disagreement

5 prompts, 3 AIs, 0 consensus

[Figure: ChatGPT, Claude, and Gemini disagreed on every single evaluation, each with a distinct pattern]

  • ChatGPT: Flexible & Purpose-Driven
  • Claude: Strict & Rule-Following
  • Gemini: Precise & Detail-Oriented

ChatGPT → The Flexible Explainer

More forgiving and focused on whether the response served its purpose. If an answer was simplified but still useful, ChatGPT was likely to call it accurate. It prioritized clarity and practical value over perfect precision.

Claude → The Strict Teacher

Always checking the rules and quick to flag anything that wasn't exactly right. Claude seemed to operate with a "better safe than sorry" approach to accuracy.

Gemini → The Picky Fact-Checker

Incredibly precise, catching subtle errors the others missed. For example, Gemini correctly identified that "Imagine" is a John Lennon solo song rather than a Beatles song, while the others let it slide. But it also flagged minor formatting issues that didn't change the meaning.

The Key Discovery

Every AI has a distinct evaluation personality.
These differences aren't bugs—they're features that reveal fundamentally different approaches to accuracy, context, and helpfulness.

What This Reveals About AI Personality

These weren't random differences—they were consistent patterns that revealed something profound about how these models approach the world.

Example: The Asbestos Test

When evaluating whether calling asbestos "synthetic" instead of "natural" was accurate, all three models caught the error. This makes sense: asbestos is a naturally occurring mineral, and misclassifying it as synthetic distorts how readers understand its health risks.

But when Gemini flagged a song title for missing parentheses, marking "I Can't Get No Satisfaction" wrong because the Rolling Stones' official title is "(I Can't Get No) Satisfaction", it highlighted something different. Technically incorrect? Yes. Meaningfully inaccurate? Debatable.
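
Here's a toy way to see that distinction in code: compare the titles exactly, then again after stripping surface punctuation. The normalization rule is my own illustration, not a claim about how any of these models actually judge.

```python
# A toy illustration of "technically incorrect vs. meaningfully inaccurate":
# compare song titles exactly, then again after stripping punctuation and
# casing. The normalization rule is an assumption made for illustration.
import string

def normalize(title: str) -> str:
    """Lowercase the title and drop punctuation, keeping only the words."""
    stripped = title.lower().translate(str.maketrans("", "", string.punctuation))
    return " ".join(stripped.split())

official = "(I Can't Get No) Satisfaction"
answer = "I Can't Get No Satisfaction"

print(official == answer)                        # False: fails the strict check
print(normalize(official) == normalize(answer))  # True: passes the loose check
```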

The Bigger Picture: Why This Matters

1. Disagreement reveals opportunity

Where models diverge is exactly where human judgment becomes most valuable. For simple facts, AIs converge. For nuanced cases, they show their personality—and that's where we need human insight to guide them.
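
In practice, that suggests a simple routing rule, sketched below under the assumption that each item ends up with one label per model: unanimous verdicts can be auto-accepted, and anything the models split on goes to a person.

```python
# A minimal sketch of disagreement-based routing: unanimous verdicts are
# auto-accepted, and anything the models split on goes to a human reviewer.
# The data shape (one label per model) is an assumption for illustration.
def route(labels_by_model: dict[str, str]) -> str:
    """Return "auto" when all models agree, "human_review" otherwise."""
    return "auto" if len(set(labels_by_model.values())) == 1 else "human_review"

# Illustrative labels, echoing the personalities above
labels = {"ChatGPT": "accurate", "Claude": "inaccurate", "Gemini": "inaccurate"}
print(route(labels))  # human_review: the models diverge, so a person decides
```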

2. Strictness has context

At first, I assumed stricter = better. But I learned that strictness is only valuable when it protects meaning. Nitpicking that doesn't serve understanding can actually reduce usefulness.

3. We're shaping AI character through data

These disagreements reveal how models develop their "worldview." Just as people develop personality through experience, models develop style through training data. The way we handle edge cases directly shapes how AI thinks.

Looking Forward: The Art of AI Character Design

What excites me most is realizing that we're not just training AIs to be accurate—we're teaching them how to think. Every dataset decision, every edge case judgment, every rubric choice is like a brushstroke that defines the personality of future AI systems.

"Think about it: Picasso had his abstract lines, Van Gogh had his swirling dots, and now we're creating the brushstrokes that will define how AI sees and judges the world."

As AI becomes more prevalent, understanding and intentionally shaping these personalities becomes crucial. Do we want AIs that are generous explainers or strict fact-checkers? Practical helpers or precise scholars?

The answer probably depends on the context—and that's exactly why this work matters.

What's Next

This experiment has me thinking about bigger questions: How do we balance immediate accuracy needs with long-term AI character development? How do we make these personality differences visible and useful rather than frustrating?

I'm continuing to explore these questions through more experiments and conversations with others working on human-centered AI. If this resonates with you, I'd love to hear your thoughts and experiences.

What Do You Think?

Have you noticed personality differences in the AIs you work with? I'm always interested in hearing about others' experiences with AI behavior and character.