From Guessing Pigs to Gauging Truth
Why Crowdsourcing Doesn’t Work for Fact-Checking
There’s a classic story often told in support of the “wisdom of the crowd.” At a country fair, attendees were asked to guess the weight of a pig. No individual got it exactly right, but when all the guesses were averaged, the result was astonishingly accurate—closer than any single person’s guess.
It’s a compelling anecdote. And it has been used to justify everything from stock market models to product design feedback. But we need to be clear about one thing:
Crowds are good at estimation, not verification.
In the pig story, the crowd was asked to approximate a value within a known range. They weren’t asked to verify whether the pig existed, whether it was actually a goat, or whether a different pig had been swapped in for show. That’s where the comparison breaks down—and where Meta’s recent move to replace professional fact-checkers with a crowdsourced system exposes a dangerous flaw.
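To make that distinction concrete, here is a toy simulation. The pig's weight, error sizes, and vote split below are made up purely for illustration: when a crowd estimates a number, individual errors tend to cancel out in the average, but when the question is a binary true-or-false and most of the crowd happens to be misinformed, there is nothing to average; the majority verdict is simply wrong.

```python
import random

random.seed(42)

TRUE_WEIGHT = 1198  # made-up "true" pig weight in pounds, for illustration only

# Estimation: guessers are noisy but roughly unbiased, so errors cancel in the mean.
guesses = [TRUE_WEIGHT + random.gauss(0, 100) for _ in range(800)]
crowd_average = sum(guesses) / len(guesses)
mean_individual_error = sum(abs(g - TRUE_WEIGHT) for g in guesses) / len(guesses)

print(f"crowd average error:      {abs(crowd_average - TRUE_WEIGHT):.1f} lbs")
print(f"typical individual error: {mean_individual_error:.1f} lbs")

# Verification: the answer is binary. If most of the crowd is misinformed,
# there are no offsetting errors; the majority verdict is simply wrong.
claim_is_true = True
votes = [random.random() < 0.4 for _ in range(800)]  # only 40% happen to vote "true"
majority_says_true = sum(votes) > len(votes) / 2
print(f"majority verdict matches reality: {majority_says_true == claim_is_true}")
```

Averaging rewards independent, unbiased noise. Verification offers neither.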
Meta’s Community Notes Experiment: A Case Study in False Equivalence
As The Washington Post recently reported, Meta’s new “Community Notes” system—designed to flag and correct misinformation on Facebook and Instagram—has effectively silenced even well-sourced corrections. Of the 65 attempted notes submitted by one contributor over four months, only three made it through Meta’s system.
That’s not wisdom. That’s algorithmic apathy.
By requiring cross-ideological agreement before any note is published, the system favors consensus over correctness. It doesn’t matter how strong the evidence is—if the community can’t agree, the truth gets buried.
And in today’s information environment, delay is denial. Falsehoods go viral in seconds. A “helpful” community note that shows up a week later isn’t helpful—it’s theater.
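Meta hasn't published the full details of how notes are scored, so the sketch below is a deliberately simplified illustration of the gating idea described above. The cluster labels, thresholds, and function names are all hypothetical, not Meta's actual algorithm: a note goes live only if raters on every side independently agree it's helpful, which means a note can be impeccably sourced and still never see daylight.

```python
from dataclasses import dataclass

@dataclass
class Rating:
    cluster: str   # hypothetical rater grouping, e.g. "left" / "right"
    helpful: bool

def note_is_published(ratings: list[Rating],
                      min_raters_per_cluster: int = 5,
                      min_agreement: float = 0.7) -> bool:
    """Simplified, hypothetical gate: publish only if every cluster agrees.

    An illustration of the consensus requirement, not Meta's algorithm.
    """
    clusters = {r.cluster for r in ratings}
    if len(clusters) < 2:
        return False  # no cross-ideological participation, note stays hidden
    for cluster in clusters:
        votes = [r.helpful for r in ratings if r.cluster == cluster]
        if len(votes) < min_raters_per_cluster:
            return False  # not enough raters from this side yet
        if sum(votes) / len(votes) < min_agreement:
            return False  # this side disagrees, so the note is buried
    return True

# A note that one side overwhelmingly rejects never publishes,
# regardless of how strong its sourcing is.
ratings = [Rating("left", True)] * 20 + [Rating("right", False)] * 20
print(note_is_published(ratings))  # False
```

Notice that nothing in this gate ever looks at the evidence itself. That's the design flaw in miniature.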
Why AmICredible Takes a Different Approach
This is why I built AmICredible—to address the core flaw in both traditional fact-checking and naive crowdsourcing: the lack of a credibility framework that distinguishes between fact, opinion, and intent.
AmICredible doesn’t rely on a popularity contest to determine truth. It evaluates the credibility of a statement, not just its truthfulness, by asking:
Is the claim well-supported by evidence?
Does it align with known facts and expert consensus?
Could it mislead reasonable people?
We use AI to analyze context, source integrity, and potential for distortion, then assign a credibility score—something you can actually use in conversation, decision-making, or public discourse.
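I won't reproduce the production pipeline here, but conceptually the scoring step combines those three questions into a single number. The snippet below is a rough sketch of that idea only; the signal names, weights, and output scale are placeholders for illustration, not AmICredible's actual model.

```python
from dataclasses import dataclass

@dataclass
class ClaimSignals:
    """Illustrative inputs, each normalized to 0..1 by an upstream analysis step."""
    evidence_support: float      # Is the claim well-supported by evidence?
    consensus_alignment: float   # Does it align with known facts and expert consensus?
    misleading_risk: float       # Could it mislead reasonable people?

def credibility_score(s: ClaimSignals,
                      weights: tuple[float, float, float] = (0.4, 0.4, 0.2)) -> float:
    """Toy credibility score on a 0-100 scale. Placeholder weights, not the real model."""
    w_evidence, w_consensus, w_risk = weights
    raw = (w_evidence * s.evidence_support
           + w_consensus * s.consensus_alignment
           + w_risk * (1.0 - s.misleading_risk))
    return round(100 * raw, 1)

# A claim with decent sourcing but a high potential to mislead scores lower
# than its literal truthfulness alone would suggest.
print(credibility_score(ClaimSignals(evidence_support=0.8,
                                     consensus_alignment=0.7,
                                     misleading_risk=0.9)))  # 62.0
```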
It’s not just about catching lies. It’s about raising the bar for what we repeat, believe, and share.
See how AmICredible will change our conversations.
The Road Ahead
Crowdsourcing can be powerful—but only when used in the right domain. Estimating how many jellybeans are in a jar? Sure. Deciding whether climate change is real? Not so much.
If we care about truth, we can’t just “democratize” it without safeguards. We need tools that empower individuals to challenge misinformation responsibly and transparently—without waiting for the crowd to catch up.
That’s the promise of AmICredible. And that’s why, in a post-fact-checker world, tools like this are no longer optional. They’re essential.