Classic prediction markets are designed to aggregate crowd consensus, not to measure individual skill. They reward being on the correct side of a discrete outcome and ignore the difference between being slightly wrong and wildly wrong. This structure discretizes continuous outcomes, forcing users into arbitrary predefined bins and erasing measurable skill.
Both prediction markets and Trepa care about accuracy, but they measure different things.
Prediction markets evaluate the accuracy of the crowd as a whole (for example, via the Brier score of the market price), while Trepa also measures the accuracy of each participant individually.
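To make the crowd-level notion concrete, here is a minimal sketch of the Brier score for a binary event: the squared error between a forecast probability (such as a market price) and the realized 0/1 outcome. The prices and outcomes below are illustrative, not taken from any real market.

```python
def brier_score(forecast_prob: float, outcome: int) -> float:
    """Squared error between a probability forecast and a 0/1 outcome.

    Lower is better; a perfect forecast scores 0, the worst scores 1.
    """
    return (forecast_prob - outcome) ** 2

# A market price of 0.70 for YES, and the event occurs:
print(round(brier_score(0.70, 1), 2))  # 0.09
# The same price, but the event does not occur:
print(round(brier_score(0.70, 0), 2))  # 0.49
```

Note that the score applies to a single aggregate probability: it grades the market's consensus price, not any individual trader's estimate.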
Consider a typical binary market question:
“Will the U.S. tariff rate on China on December 15, 2025, be less than 25 percent?”
If the actual rate settles at exactly 25 percent, a participant who thought 24 percent and bought YES would lose, whereas a participant who thought 40 percent and bought NO would win. The second participant is much farther from reality, yet wins simply for being on the correct side. This highlights a structural flaw in binary markets: they can punish a near-accurate estimate for falling on the wrong side of the line while rewarding a wildly inaccurate one on the right side.
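The contrast above can be sketched in a few lines of code (this is an illustration of binary settlement versus distance from reality, not Trepa's actual scoring rule; the participant names and estimates come from the example):

```python
actual_rate = 25.0  # the tariff rate at settlement

# Two hypothetical participants from the example:
participants = [
    {"name": "A", "estimate": 24.0, "side": "YES"},  # believed rate < 25%
    {"name": "B", "estimate": 40.0, "side": "NO"},   # believed rate >= 25%
]

for p in participants:
    # Binary settlement: YES pays out only if the rate is strictly below 25%.
    wins = (actual_rate < 25.0) == (p["side"] == "YES")
    # Distance from reality: how far the estimate was from the actual rate.
    error = abs(p["estimate"] - actual_rate)
    print(f'{p["name"]}: {"wins" if wins else "loses"}, error = {error}')
```

Running this shows participant A losing with an error of 1 percentage point while participant B wins with an error of 15, which is exactly the inversion of accuracy and reward the paragraph describes.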