About BagCheck

Methodology, metrics, and the reasoning behind every ranking.

METHODOLOGY · 2025-26

01

The Thesis

Value is the lens.

Every NBA stat reads differently once you put a dollar next to it. Fifteen and six on a rookie scale tells you something very different from fifteen and six on a max, and BagCheck is built on that distinction. It's a set of tools that stop asking who's good and start asking who's good for what they cost.


02

The Core Idea

We fit an expectation for every salary level, then read the gap.

For any given salary, the league tends to produce a predictable amount of basketball. We fit that relationship once (with a square-root transform on the high end, since a max contract doesn't scale linearly with what a max player produces) and then score every contract by where its production lands against the curve.

Every metric downstream is a different way of reading that same gap: some in percentiles, some in dollars, some looking forward into future contract years. The underlying mechanism is the same.

[Figure: Salary vs. Win Shares regression. A diagonal line across the scatter marks the league-wide expected production at each salary; an example player above the line is labeled "earning it," one below it "overpay." Salary runs from vet min through mid-tier to max; Win Shares from replacement through average to elite.]
Players above the line are producing more wins than their contract would predict, while anyone sitting below the line is overpaid for what they actually produce.
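For the curious, here's that fit as a minimal Python sketch. The salaries and Win Share totals are made up, and np.linalg.lstsq stands in for whatever estimator the production regression actually uses; the square-root transform is the load-bearing part.

import numpy as np

# Hypothetical league sample: salary in $M, Win Shares produced.
salary = np.array([2.1, 5.0, 12.4, 18.0, 25.3, 33.0, 41.5, 50.0])
ws = np.array([1.0, 2.2, 3.1, 4.8, 5.5, 7.9, 6.0, 9.4])

# Fit WS as a linear function of sqrt(salary), so doubling the pay
# doesn't demand doubling the production at the top of the scale.
X = np.column_stack([np.ones_like(salary), np.sqrt(salary)])
coef, *_ = np.linalg.lstsq(X, ws, rcond=None)

def expected_ws(salary_m):
    # League-wide expected Win Shares at a given salary ($M).
    return coef[0] + coef[1] * np.sqrt(salary_m)

# The gap every downstream metric reads: actual minus expected.
surplus = 9.4 - expected_ws(50.0)  # above the line: earning it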

03

The Metrics

Six lenses on the same underlying question.

No single number captures contract value, and anyone who tells you otherwise is probably selling something. What you can do is build a small family of metrics that each ask a slightly different question, stay honest about what each one misses, and let a reader who knows basketball triangulate from there.

03.1 Heist Rating

A percentile rank of surplus production.

percentile_rank(actual_WS − expected_WS(√salary))

A $50M player producing 10 WS and a $10M player producing 3 WS might both be roughly par for their tier. Heist Rating captures that by comparing every player to the league-wide salary curve, which raw dollars-per-win-share can't do.

Where it breaks: Heist scores rookie-scale and vet-minimum contracts against the same curve as open-market deals, even though they're structurally different beasts. The leaderboard skews young as a result, which is worth keeping in mind.

Regression excludes sub-$500K · Low-GP players flagged
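Read literally, the definition comes down to a few lines. The surplus pool below is hypothetical; in practice it's every qualifying contract scored against the curve from section 02.

# Hypothetical surpluses: actual_WS minus expected_WS(sqrt(salary))
# for every qualifying contract (sub-$500K deals excluded).
league_surplus = [-3.1, -1.4, -0.2, 0.0, 0.6, 1.8, 2.5, 4.0]

def percentile_rank(pool, x):
    # Share of the pool at or below x, scaled to 0-100.
    return 100.0 * sum(v <= x for v in pool) / len(pool)

heist_rating = percentile_rank(league_surplus, 2.5)  # -> 87.5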
03.2 The Dividend

Heist Rating, denominated in dollars.

dividend = ws_surplus × avg_cost_per_ws

This takes the same production surplus that feeds Heist Rating and converts it into dollars, using the league-average cost of a Win Share. It's a way of expressing the gap in a unit people can actually feel.

Where it breaks: false precision. The difference between +$8.2M and +$7.1M lives well inside the noise of Win Shares itself, so you should really be reading The Dividend in rough buckets (positive, mildly negative, badly negative) rather than to the decimal.
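A worked sketch, with made-up league totals standing in for the real average cost of a Win Share:

# Hypothetical league totals; the real avg_cost_per_ws falls out of
# the same league-wide data that feeds the salary curve.
total_salary_m = 4_900.0  # league payroll, $M
total_ws = 1_230.0        # league Win Shares
avg_cost_per_ws = total_salary_m / total_ws  # ~$4.0M per WS

ws_surplus = 1.8  # actual_WS - expected_WS, same input as Heist
dividend_m = ws_surplus * avg_cost_per_ws  # ~ +$7.2M of surplus value

def bucket(d_m):
    # Per the caveat above: read it in buckets, not to the decimal.
    if d_m > 0:
        return "positive"
    return "mildly negative" if d_m > -5 else "badly negative"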

03.3 Legacy Tax

Three multiplicative pillars that flag paying for past glory.

legacy_tax = decline × overpay × salary_weight

The idea here is to catch contracts that are priced off past performance rather than current production. Three factors combine multiplicatively: decline (how far a player has fallen from peak Win Shares), overpay (how much their salary exceeds the WS-implied value), and a salary weight that keeps the metric focused on contracts big enough to matter. Miss any one of the three and the score collapses toward zero.

Where it breaks: the peak-WS baseline only looks back as far as the data we have. Players whose true peak predates our window end up with an artificially low peak, which understates their real Legacy Tax.

Age ≥ 28 · Salary ≥ $5M · 2+ prior seasons · Peak WS ≥ 1.0 · GP ≥ 50%
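In code the metric has roughly this shape. The gate thresholds are the published ones above; the scaling inside each pillar is an assumption for illustration, not BagCheck's exact coefficients.

def legacy_tax(age, prior_seasons, gp_pct, peak_ws, current_ws,
               salary_m, avg_cost_per_ws_m=4.0):
    # Hard gates from the line above; fail any and the contract
    # simply doesn't score.
    if (age < 28 or salary_m < 5.0 or prior_seasons < 2
            or peak_ws < 1.0 or gp_pct < 0.5):
        return 0.0
    decline = max(0.0, (peak_ws - current_ws) / peak_ws)  # fall from peak
    ws_implied_value_m = current_ws * avg_cost_per_ws_m
    overpay_m = max(0.0, salary_m - ws_implied_value_m)
    salary_weight = salary_m / 50.0  # assumed scaling, not published
    # Multiplicative: miss any one pillar and the score collapses to ~0.
    return decline * overpay_m * salary_weight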
03.4 Bag Alert

Forward-looking risk on remaining contract years.

bag_alert = (remaining_salary − projected_value) / remaining_salary

Every other metric on the page is retrospective. Bag Alert looks the other direction: it projects the rest of a contract using position-aware aging curves and a recency-weighted read of the player's history, then measures the gap between what's owed and what's likely to be earned back.

Where it breaks: the aging curves are league averages. They can't tell you that LeBron is LeBron, or that a particular guard has a game built on something other than athleticism and will age fine. On specific players, your eye will usually beat this metric; across the full league, the reverse tends to be true.

[Figure: Aging curves by position, ages 20-35. Guards and wings peak around 27 and decline gradually; bigs peak around 26 and decline more steeply after 30.]
The curves get applied to each remaining contract year, so players who are well past their peak while still on long-tail salary tend to flag red.
Null for expiring deals · Requires ≥ 2 remaining years
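A sketch of the projection loop, collapsing the position-aware curves and recency weighting into a single assumed multiplier table:

# Assumed aging multipliers (truncated); BagCheck applies
# position-aware curves and a recency-weighted production baseline.
AGE_MULT = {28: 0.97, 29: 0.93, 30: 0.88, 31: 0.82, 32: 0.75}

def bag_alert(baseline_ws, age, remaining_salaries_m, avg_cost_per_ws_m=4.0):
    # Share of remaining money unlikely to be earned back.
    if len(remaining_salaries_m) < 2:
        return None  # null for expiring deals
    projected_value_m = 0.0
    for yr in range(1, len(remaining_salaries_m) + 1):
        mult = AGE_MULT.get(age + yr, 0.70)  # assumed tail value
        projected_value_m += baseline_ws * mult * avg_cost_per_ws_m
    owed_m = sum(remaining_salaries_m)
    return (owed_m - projected_value_m) / owed_m

bag_alert(6.0, 30, [45.0, 48.0])  # ~0.59: long-tail salary, flags red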
03.5 Price Tag

The simplest read: dollars per Win Share.

price_tag = salary / max(WS, 0.1)

This is the blunt version of the value question. A $40M player producing 5 WS costs $8M per win, full stop. It doesn't care about context, age, role, or trajectory, which makes it useful when you just want a quick cross-contract comparison without thinking too hard.

Where it breaks: the 0.1 WS floor keeps the math alive at the low end, so a replacement-level player on $5M reads as “$50M per win,” which is technically true and practically meaningless. Price Tag works fine as a secondary display, but it's not load-bearing as a primary analytical tool.
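The whole thing, floor included:

def price_tag(salary_m, ws):
    # The 0.1 floor keeps the division alive for near-zero producers,
    # at the cost of meaningless numbers at the very bottom.
    return salary_m / max(ws, 0.1)

price_tag(40.0, 5.0)  # $8.0M per win
price_tag(5.0, 0.0)   # $50M per win: technically true, practically meaningless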

03.6 The Spread

Salary rank minus production rank.

spread = salary_rank − production_rank

Where Heist runs a regression, The Spread runs a much simpler comparison. Line the league up by pay, then by output, and look at who climbs the furthest up the second list. That surfaces the players who sit low on the salary ladder and high on the production one.

Where it breaks: the same CBA distortion that messes with Heist shows up here too. Rookies and vet-minimums dominate by construction, which is why this metric earns its keep in the middle of the salary distribution, where contracts actually get negotiated.
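Here's the comparison run on a hypothetical five-player league:

# (name, salary $M, Win Shares); rank 1 = highest paid / most productive.
players = [("A", 48.0, 7.0), ("B", 30.0, 2.5), ("C", 12.0, 6.0),
           ("D", 4.0, 3.0), ("E", 2.0, 0.5)]

salary_rank = {p[0]: i + 1 for i, p in
               enumerate(sorted(players, key=lambda p: -p[1]))}
production_rank = {p[0]: i + 1 for i, p in
                   enumerate(sorted(players, key=lambda p: -p[2]))}

# Positive spread: low on the salary ladder, high on the production one.
spread = {name: salary_rank[name] - production_rank[name]
          for name, _, _ in players}
# -> {"A": 0, "B": -2, "C": 1, "D": 1, "E": 0}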


04

The Award Trackers

Five models, each tuned to one ballot.

We're trying to predict the vote, not the film.

None of these awards come down to a single advanced stat. MVP voting has never been a DBPM table, and ROTY isn't decided by whoever posted the highest BPM as a 19-year-old. They're narrative exercises settled by sportswriters, and the features that actually move a ballot (volume on a winning team, a first All-Star nod, a specific story about a specific season) don't line up cleanly with the features that identify the objectively best player on film.

A model built to find the best defender on tape would miss Marcus Smart's 2022 DPOY, whereas a model built around what voters actually reward catches him without having to stretch. Every weight, gate, and softmax temperature sitting behind the bars below is published. If you want to argue with the methodology, have at it.

[Chart: signal weights by award model, 0-100%. Each tracker (MVP, DPOY, ROTY, 6MOY, MIP) splits its weight across counting stats, advanced stats, team context, games played, and a per-award other bucket.
6MOY other → TS% cohort 15% · Bench purity 7%
MIP other → Current level 30% · Breakthrough 15% · Role expansion 10%]
The five models don't weight signals the same way. DPOY leans hardest on counting stats because voters do, MVP stays fairly balanced across categories, and MIP reads differently altogether since it's measuring change rather than level.
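To show how gates, weights, and a softmax temperature fit together, here's a toy version of one tracker. The weights, temperature, and candidate signals below are invented for illustration; the real per-model numbers are the published ones.

import math

WEIGHTS = {"counting": 0.35, "advanced": 0.30, "team": 0.25, "gp": 0.10}
TEMPERATURE = 0.5  # assumed; lower pushes the shares toward winner-take-all

def mvp_gate(gp_pct, ppg):
    # Hard eligibility gate (the MVP model's GP >= 79%, PPG >= 20).
    return gp_pct >= 0.79 and ppg >= 20

def softmax(scores, temp):
    exps = [math.exp(s / temp) for s in scores]
    return [e / sum(exps) for e in exps]

# Gated candidates, each with normalized 0-1 signals per category.
candidates = {
    "Player X": {"counting": 0.90, "advanced": 0.80, "team": 0.95, "gp": 1.0},
    "Player Y": {"counting": 0.95, "advanced": 0.90, "team": 0.60, "gp": 0.9},
}
scores = [sum(WEIGHTS[k] * v for k, v in sig.items())
          for sig in candidates.values()]
shares = dict(zip(candidates, softmax(scores, TEMPERATURE)))
# -> {"Player X": ~0.52, "Player Y": ~0.48}: team context tips it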

MVP · Most Valuable Player

Voters reward production on a winning team, so the model carries team record at roughly the same weight as advanced stats.

Gates: GP ≥ 79% · PPG ≥ 20 · 2 per team
Backtest: 5 / 5 Top-1 since 2020-21

DPOY · Defensive Player of the Year

Counting stats carry most of the weight because ballots do, and a teammate discount keeps the top defense from eating three slots on the list.

Gates: Archetype-OR gate · 3 per team
Backtest: 6 / 7 Top-3 since 2018-19

ROTY · Rookie of the Year

Uses raw team record with no seed modifiers, because a lottery rookie shouldn't get penalized for being drafted onto a lottery team.

Gates: GP ≥ 50 · MPG ≥ 20 · Drafted
Backtest: 7 / 7 Top-1 since 2018-19

6MOY · Sixth Man of the Year

A continuous bench-purity penalty keeps 25-start quasi-starters from gaming the strict bench gate on volume scoring alone.

Gates: GS/GP < 0.5 · GS ≤ 25 · 2 per team
Backtest: 5 / 7 Top-1 since 2018-19

MIP · Most Improved Player

Three hard gates filter out injury-rebounds, mean-reverters, and established stars, leaving roughly the kind of narrative voters tend to reward.

Gates: Prior GP ≥ 50 · Peak WS < 8 · Not a 20-PPG vet
Backtest: 4 / 6 Top-1 since 2019-20

05

How We Think About This

Every single-number value metric is wrong in some specific way.

What matters is knowing which way. Here are four structural distortions worth keeping in mind before you read any leaderboard on this site.

Rookie scale contracts are structurally underpaid. A top-5 pick producing like a $30M player on a $12M cap hit looks like a steal, but it's really the CBA working as designed. Teams get four years of cost control as compensation for the draft's inherent risk, so celebrating those contracts as “finds” is a bit of a category error.

Veteran minimums skew the same way. Ring-chasing vets who sign for the minimum are a selection effect, since only the productive ones stick around long enough to keep getting signed. The leaderboard ends up over-rewarding them.

Supermaxes mostly show up as overpays. This one is also structural. Supermaxes get awarded to players who were elite when they signed, and elite-when-signed is a moving target. When a supermax goes underwater in year three, the more likely explanation is that basketball is hard and four-year projections are harder, not that the signing itself was wrong.

“Market value” often isn't. Bird rights let incumbent teams outbid the open market, so a contract that looks overpriced against a regression may simply reflect what it took to retain a player rather than what he would've gotten as a true free agent.

None of this invalidates the metrics; it's most of the reason they're worth reading carefully in the first place. We publish the formulas, the gates, the weights, and the backtests on purpose, so that you can argue with a transparent model rather than having to trust an opaque one.


06

CBA Reference

The thresholds that shape every roster.

Salary Cap · $154.6M
Soft cap; teams can exceed it through various exceptions.

Luxury Tax · $187.9M
Dollar-for-dollar tax payments that escalate with repeater status.

First Apron · $195.9M
Restricts sign-and-trades and eliminates the full mid-level exception.

Second Apron · $207.8M
Eliminates salary aggregation, freezes a first-rounder, and severely limits trades.
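If you'd rather have those lines as code, a trivial classifier (exact boundary handling assumed):

SALARY_CAP, LUXURY_TAX = 154.6, 187.9   # $M, 2025-26
FIRST_APRON, SECOND_APRON = 195.9, 207.8

def cap_status(payroll_m):
    if payroll_m > SECOND_APRON:
        return "second apron"
    if payroll_m > FIRST_APRON:
        return "first apron"
    if payroll_m > LUXURY_TAX:
        return "taxpayer"
    if payroll_m > SALARY_CAP:
        return "over the cap"  # still legal: it's a soft cap with exceptions
    return "under the cap"

cap_status(196.5)  # -> "first apron"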


07

Go Look At Something

Pick a metric, pick a position, and go find the quadrant nobody is sitting in.