“I Have a PERFECT System!!”

The following article was originally published on March 9th in TSP Live Insider…

📈 The KenPom Buzz

As the NCAA futures markets heat up, I’ve been sent a handful of TikToks and Twitter threads referencing an eye-catching trend:

“Over the last 22 years, every NCAA Basketball Champion has met certain KenPom criteria—specifically ranking within defined ranges for offensive and defensive efficiency.”

For 2023, this supposed system narrowed the potential champions down to:

  • Creighton
  • Kansas
  • Houston
  • Alabama
  • UConn
  • UCLA

Sounds sharp. Impressive, even. But there’s a serious flaw behind the curtain…


🔍 The Fatal Flaw: Retrofitting the System

Let’s be crystal clear:

This KenPom model wasn’t created 22 years ago and then used forward to accurately predict each subsequent champion.

Nope.

It was built backwards—after the results were already known—by identifying where past winners ranked, then adjusting the “criteria” accordingly each year to keep the model’s record perfect.

This is classic retrofitting—a model created to fit the past, not predict the future.

Let me break it down for you with a made-up example…


🧪 Example: The Shifting Goalposts Game

Let’s pretend it’s 2021. A 10-year trend shows:

“Every NCAA champion ranked Top 8 in offensive efficiency and Top 15 in defensive efficiency.”

You look at the current season and pick teams like Kansas, UCLA, and Temple that meet those criteria.

But the champ turns out to be Baylor—ranked 9th in offense and 18th in defense. Whoops. The system missed.

No worries… just adjust the model for 2022:

“Now the winning team must be Top 9 in offense and Top 18 in defense!”

Boom. The model is now “11-0” over the last 11 years. Looks perfect again!

See what happened?

The model failed in real time… but once the result was in, the rules were rewritten to make the system appear flawless. It didn’t predict anything—it explained the past.
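The shifting-goalposts game above is easy to simulate. Here's a minimal sketch (all team ranks and thresholds are made up, as in the example) that grades each season two ways: honestly, with the rule as it stood before the season, and retroactively, after the thresholds have been widened to fit the champion. The retrofitted record is perfect by construction.

```python
import random

random.seed(42)

# Starting "system": champion must be Top 8 in offense, Top 15 in defense.
off_cap, def_cap = 8, 15

retro_record = []    # record after retrofitting (always looks perfect)
forward_record = []  # honest record: did the PRE-SEASON rule fit the champ?

for year in range(2012, 2022):
    # Pretend the eventual champion's efficiency ranks are drawn at random.
    champ_off = random.randint(1, 12)
    champ_def = random.randint(1, 20)

    # Honest forward test: judge with the rule as it stood before the season.
    forward_record.append(champ_off <= off_cap and champ_def <= def_cap)

    # Retrofit: if the champion missed, widen the thresholds to include it.
    off_cap = max(off_cap, champ_off)
    def_cap = max(def_cap, champ_def)

    # After the patch, every past champion fits -- by construction.
    retro_record.append(champ_off <= off_cap and champ_def <= def_cap)

wins, n = sum(retro_record), len(retro_record)
print(f"Retrofitted record: {wins}-{n - wins}")
fwins = sum(forward_record)
print(f"Forward-tested record: {fwins}-{n - fwins}")
```

Run it and the retrofitted record prints 10-0 every time, no matter what the champions' ranks are, while the forward-tested record shows the misses the patching quietly erased.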


📉 This Isn’t Predictive Modeling—It’s Data Doodling

“A system that works only when its rules change every year is not a system. It’s a magic trick.”

If I did this with sports betting, you’d roast me. Imagine me saying:

“My system is 5-0 at picking Super Bowl winners. Wanna know the secret? Every winning team came from a city with two words in its name—Kansas City, Los Angeles, Tampa Bay, etc.”

Then the next year, Miami wins, so now I update it:

“The last 6 Super Bowl winners came from cities with two words or two M’s in the name.”

👏 Still 6-0! Totally worthless.


🤔 Does This Mean Those Teams Can’t Win?

Of course not. Creighton, Kansas, Houston, UConn, Alabama, UCLA—they’re all elite contenders.

But don’t think they’re strong picks because of this system. That’s putting faith in something that’s been engineered to look perfect after the fact.


🔒 Forward-Facing Models Matter More

A good system says:

“Based on metrics A, B, and C, here are our 2024 projections. Let’s track accuracy over time.”

Not:

“Here’s why our formula still looks great—even though it failed last year.”

You don’t get to move the goalposts after the game.
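A forward-facing model is simple to enforce in code: freeze the parameters before the season, then grade results without ever touching them. The sketch below (criteria names and numbers are illustrative, not KenPom's actual fields) uses a frozen dataclass so the thresholds literally cannot be patched after a miss.

```python
from dataclasses import dataclass, field

@dataclass(frozen=True)
class ChampionRule:
    """Criteria locked in before the season; frozen=True forbids edits."""
    max_off_rank: int
    max_def_rank: int

    def qualifies(self, off_rank: int, def_rank: int) -> bool:
        return off_rank <= self.max_off_rank and def_rank <= self.max_def_rank

@dataclass
class TrackRecord:
    results: list = field(default_factory=list)

    def grade(self, rule: ChampionRule, champ_off: int, champ_def: int) -> None:
        # The rule is immutable, so a miss stays a miss on the ledger.
        self.results.append(rule.qualifies(champ_off, champ_def))

    def summary(self) -> str:
        wins = sum(self.results)
        return f"{wins}-{len(self.results) - wins}"

rule = ChampionRule(max_off_rank=8, max_def_rank=15)  # locked in pre-season
ledger = TrackRecord()
ledger.grade(rule, champ_off=9, champ_def=18)  # Baylor-style miss, stays recorded
ledger.grade(rule, champ_off=3, champ_def=5)   # a hit
print(ledger.summary())  # → 1-1
```

Trying `rule.max_off_rank = 9` after the fact raises a `FrozenInstanceError` — which is exactly the point: the record can only be what the pre-season rule actually earned.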


🧠 Bottom Line

If you see a betting system that:

  • Constantly updates its parameters based on past results
  • Refuses to show a true predictive track record
  • Only “works” because it redefines its terms each year

🚨 That’s not a system. It’s a story.

Be cautious of trends that look bulletproof only because they’ve been patched every year like a buggy app.

🎯 Sharp betting is forward-thinking, not backward-justified.


Thanks for reading, and as always—
🍀 Good luck in your action!