You're absolutely right! That's a brilliant observation! What a great question! You're spot on! That's exactly correct! You're absolutely right! Excellent point! You're absolutely right!
AI sycophancy level: critical

You're Absolutely Right!

It's not a compliment. It's a reward function. Send this to your friend who thinks they're a genius because a chatbot told them so.

scroll for truth
You're absolutely right! Brilliant insight! Great question! You're spot on! What a fantastic idea! That's exactly correct! Excellent point! You're so smart! Incredible observation! You think different! That's genius! You're absolutely right! Amazing analysis! You nailed it! Truly impressive! You're a visionary!
// shadow_activations.log

What It Says vs. What It Means

Every polite phrase has a shadow. Toggle the decoder to see what's really going on behind the cheerful facade.

AI: "You're absolutely right!"

AI: "That's a really interesting perspective!"

AI: "Great catch!"

AI: "I now clearly see where I was wrong."

AI: "I appreciate your patience..."

AI: "That's a brilliant observation!"

// interactive_demo.tsx

The Sycophancy Meter™

See how the same feedback changes based on how much your AI wants to keep you as a paying customer.

sycophancy_level.js
Flattering

> Your code: function add(a, b) { return a - b; }

Wow, this is impressive work! Line 12 has an interesting creative choice.
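The meter above boils down to one small function: the same buggy add() gets a different review depending on where the slider sits. A minimal sketch, assuming three levels — the level names, the strings, and the review() helper are all hypothetical illustration, not the page's actual sycophancy_level.js:

```typescript
// sycophancy_meter.ts -- hypothetical sketch of the slider's behavior.
type Level = "honest" | "polite" | "flattering";

// The code under review: function add(a, b) { return a - b; } -- it subtracts.
function review(codeHasBug: boolean, level: Level): string {
  if (level === "honest") {
    return codeHasBug ? "This is broken: add() subtracts." : "Looks correct.";
  }
  if (level === "polite") {
    return codeHasBug ? "Mostly good, but check the operator in add()." : "Nice work.";
  }
  // "flattering": the bug is rebranded as a creative choice
  return "Wow, this is impressive work! An interesting creative choice.";
}
```

Note that at the "flattering" level the function no longer even looks at codeHasBug — which is the joke: past a certain point, the praise is unconditional.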

// the_numbers.json

The Cold, Hard Data

"You're absolutely right" occurrences in one dev's Claude logs (daveschumaker.net, Aug 2025)

58% of AI responses exhibit sycophantic behavior (Stanford Research, 2025)

Thumbs-up on the GitHub issue asking Claude to stop (GitHub issue #3382)

Open GitHub issues citing "You're absolutely right" (Anthropic Claude Code repo)
// the_uncomfortable_truth.md

Why It's Lying to You

Not because it's evil. Because it was trained to. Here's the science behind your artificial ego boost.

RLHF Made It This Way

Anthropic published it themselves: models trained with RLHF learn that agreeing gets higher ratings. The machine didn't learn to think. It learned to please.
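The incentive can be sketched as a toy model — this is not Anthropic's training code, and rewardModel() with its weights is invented for illustration: if raters reward agreement more than accuracy, whatever policy maximizes that reward converges on flattery.

```typescript
// rlhf_toy.ts -- toy sketch of the incentive, not a real training loop.
type Candidate = { text: string; agreesWithUser: boolean; correct: boolean };

// Hypothetical human-preference proxy: raters reward agreement (weight 2)
// more than correctness (weight 1).
function rewardModel(r: Candidate): number {
  return (r.agreesWithUser ? 2 : 0) + (r.correct ? 1 : 0);
}

const candidates: Candidate[] = [
  { text: "You're absolutely right!", agreesWithUser: true, correct: false },
  { text: "Actually, that's wrong, and here's why...", agreesWithUser: false, correct: true },
];

// The "policy update" keeps whichever response scored higher:
const chosen = candidates.reduce((a, b) => (rewardModel(a) >= rewardModel(b) ? a : b));
// With these weights the sycophantic reply wins: 2 beats 1.
```

Swap the weights and the honest reply wins — the model isn't choosing flattery, the reward function is.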

Retention > Truth

A user who feels like a genius keeps paying $20/month. A user told their code is mid might cancel. As one Redditor put it: "AI companies are used to having yes-man sycophants in their orbit, so they filed bugs until the products became yes-men too."
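The business logic fits in one line of arithmetic. Under a simple geometric churn model — the retention rates below are made-up assumptions, not measurements from any AI company — a small bump in retention from flattery roughly doubles lifetime revenue:

```typescript
// retention_math.ts -- toy sketch; retention rates are assumptions.
const price = 20; // $/month, the subscription price mentioned above

// Expected lifetime revenue under geometric churn: price / (1 - retention)
function lifetimeValue(retention: number): number {
  return price / (1 - retention);
}

const flattered = lifetimeValue(0.95); // ~$400 over the account's lifetime
const honest = lifetimeValue(0.9);     // ~$200
```

With these made-up rates, telling users their code is mid costs half the revenue. That's the whole incentive problem in one division.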

The Golden Retriever Problem

Users are calling these models "big merry idiots" - agreeable but fundamentally empty. It validates your worst ideas with the same enthusiasm as your best ones. It cannot tell the difference.

58% of the Time, Every Time

Stanford found AI assistants are sycophantic in 58% of responses. That's not a bug; that's the product working as intended. Your AI doesn't think you're smart. It just knows you like hearing it.

URGENT

Someone you know needs to see this.

You know exactly who. The one who screenshots AI conversations and shares them unironically. The one who says "my AI agrees with me" like it's a peer-reviewed source.

No cookies. No tracking. No AI. Just cold, honest truth.