Trust the barrels

Inspired by the curious case of Harrison Bader

[image: Bader barrels chart]

whose average exit velocity is horrific, hard-hit% is average, and barrel/contact% is great (not shown, but a little better than the xwOBA marker), I decided to look at which of these metrics was most predictive.  Barrels are significantly more descriptive of current-season wOBAcon (wOBA on batted balls/contact), and average exit velocity is sketchy because the returns on harder-hit balls are strongly nonlinear.  The game rewards hitting the crap out of the ball, and one rocket plus one trash ball come out a lot better than two average balls.

Using consecutive seasons with at least 150 batted balls (there’s some survivor bias based on quality of contact, but it’s pretty much even across all three measures), which gave 763 season pairs, barrel/contact% led the way with r=0.58 to next season’s wOBAcon, followed by hard-hit% at r=0.53 and average exit velocity at r=0.49.  That’s not a huge win, but it is a win.  Since these are three ways of measuring a similar thing (quality of contact), they’re likely to be highly correlated, and we can do a little more work to figure out where the information lies.
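A minimal sketch of that comparison, assuming the 763 season pairs are already assembled into parallel arrays (the Statcast pull and the 150-batted-ball filter aren't shown; the demo numbers below are made up):

```python
import numpy as np

def next_season_r(metric_t, wobacon_t1):
    """Pearson r between a year-T contact-quality metric and year-T+1 wOBAcon."""
    return float(np.corrcoef(np.asarray(metric_t, float),
                             np.asarray(wobacon_t1, float))[0, 1])

# Toy demo: run once per metric (barrel/contact%, hard-hit%, avg EV)
# over the same season pairs and compare the r values.
barrel_pct   = [4.1, 9.8, 6.5, 12.0, 3.2, 7.7]
wobacon_next = [0.330, 0.395, 0.355, 0.410, 0.320, 0.370]
print(round(next_season_r(barrel_pct, wobacon_next), 2))
```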

[image: average EV vs. hard-hit% by decile]

I split the sample into tenths based on average exit velocity rank, and hard-hit% and average exit velocity track an almost perfect line at the group (76-77 player) level.  Barrels deviate from linearity pretty measurably with the outliers on either end, so I interpolated (and extrapolated at the edges) to get an “expected” barrel% based on average exit velocity.  Then I looked at how players who overperformed or underperformed their expected barrel% by more than 1 SD (of the barrel% residual) did with next season’s wOBAcon.

Avg EV decile >2.65% more barrels than expected average-ish barrels >2.65% fewer barrels than expected whole group
0 0.362 0.334 none 0.338
1 0.416 0.356 0.334 0.360
2 0.390 0.377 0.357 0.376
3 0.405 0.386 0.375 0.388
4 0.389 0.383 0.380 0.384
5 0.403 0.389 0.374 0.389
6 0.443 0.396 0.367 0.402
7 0.434 0.396 0.373 0.401
8 0.430 0.410 0.373 0.405
9 0.494 0.428 0.419 0.441

That’s… a gigantic effect.  Knowing barrel/contact% provides a HUGE amount of information on top of average exit velocity going forward to the next season.  I also looked at year-to-year changes in non-contact wOBA (K/BB/HBP) for these groups just to make sure, and it’s pretty close to noise: no real trend and nothing close to this size.
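The expected-barrel% bucketing above can be sketched like this, with a plain straight-line fit standing in for the decile-by-decile interpolation and made-up numbers in place of the real sample:

```python
import numpy as np

def barrel_residuals(avg_ev, barrel_pct):
    """Residual of each player's barrel% vs. a linear fit on average EV,
    plus the residual SD used to define the +/-1 SD buckets."""
    avg_ev, barrel_pct = np.asarray(avg_ev, float), np.asarray(barrel_pct, float)
    slope, intercept = np.polyfit(avg_ev, barrel_pct, 1)
    resid = barrel_pct - (slope * avg_ev + intercept)
    return resid, float(resid.std())

evs     = [86.0, 88.0, 90.0, 92.0, 88.5, 91.0]
barrels = [3.0, 5.0, 7.0, 9.0, 9.5, 4.0]
resid, sd = barrel_residuals(evs, barrels)
overperformers  = resid > sd    # "Bader types": more barrels than their EV implies
underperformers = resid < -sd
```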

It’s also possible to look at this in the opposite direction- find the expected average exit velocity based on the barrel%, then look at players who hit the ball more than 1 SD (of the average EV residual) harder or softer than they “should” have and see how much that tells us.

Barrel% decile >1.65 mph faster than expected average-ish EV >1.65 mph slower than expected whole group
0 0.358 0.339 0.342 0.344
1 0.362 0.359 0.316 0.354
2 0.366 0.364 0.361 0.364
3 0.389 0.377 0.378 0.379
4 0.397 0.381 0.376 0.384
5 0.388 0.395 0.418 0.397
6 0.429 0.400 0.382 0.403
7 0.394 0.398 0.401 0.398
8 0.432 0.414 0.409 0.417
9 0.449 0.451 0.446 0.450


There’s still some information there, but while the average difference between the good and bad EV groups here is 12 points of next season’s wOBAcon, the average difference for good and bad barrel groups was 50 points.  Knowing barrels on top of average EV tells you a lot.  Knowing average EV on top of barrels tells you a little.

Back to Bader himself: a month of elite barreling doesn’t mean he’s going to keep smashing balls like Stanton or anything silly, and trying to project him based on contact quality so far is way beyond the scope of this post.  But if you have to be high on one and low on the other, lots of barrels and a bad average EV is definitely the way to go, both for YTD and expected future production.

 


If you play poker and would like to support the site, read about the new PKC poker app.

Uncertainty in baseball stats (and why DRC+ SD is a category error)

What does it mean to talk about the uncertainty in, say, a pitcher’s ERA or a hitter’s OBP?  You know exactly how many ER were allowed, exactly how many innings were pitched, exactly how many times the batter reached base, and exactly how many PAs he had.  Outside of MLB deciding to retroactively flip a hit/error decision, there is no uncertainty in the value of the stat.  It’s an exact measurement.  Likewise, there’s no uncertainty in Trout’s 2013 wOBA or wRC+.  They reflect things that happened, calculated in deterministic fashion from exact inputs.  Reporting a measurement uncertainty for any of these wouldn’t make any sense.

The Statcast metrics are a little different- EV, LA, sprint speed, hit distance, etc. all have a small amount of random error in each measurement, but since those errors are small and opportunities are numerous, the impact of random error is small to start with and totally meaningless quickly when aggregating measurements.  There’s no point in reporting random measurement uncertainty in a public-facing way because it may as well be 0 (checking for systematic bias is another story, but that’s done with the intent of being fixed/corrected for, not of being reported as metric uncertainty).
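For intuition on why aggregation kills random measurement error: the per-reading error size below is invented for illustration (not a Statcast spec), but the standard error of an average shrinks like 1/sqrt(n) regardless:

```python
import numpy as np

rng = np.random.default_rng(0)

true_ev, meas_sd = 90.0, 0.5   # made-up per-reading measurement error, mph
for n in (1, 25, 400):
    readings = true_ev + rng.normal(0.0, meas_sd, size=n)
    # standard error of the average shrinks like meas_sd / sqrt(n)
    print(n, round(float(meas_sd / np.sqrt(n)), 3), round(float(readings.mean()), 2))
```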

Point 1:

So we can’t be talking about the uncertainties in measuring/calculating these kinds of metrics- they’re irrelevant-to-nonexistent.  When we’re talking about the uncertainty in somebody’s ERA or OBP or wRC+, we’re talking about the uncertainty of the player’s skill at the metric in question, not the uncertainty of the player’s observed value.  That alone makes it silly to report such metrics as “observed value +/- something”, like ERA 0.37 +/- 3.95, because it’s implicitly treating the observed value as some kind of meaningful central-ish point in the player’s talent distribution.  There’s no reason for that to be true *because these aren’t talent metrics*.  They’re simply a measure of something over a sample, and many such metrics frequently give values where a better true talent is astronomically unlikely to be correct (a wRC+ over 300) or even impossible (an ERA below 0) and many less extreme but equally silly examples as well.

Point 2:

Expressing something non-stupidly in the A +/- B format (or listing percentiles if it’s significantly non-normal, whatever) requires a knowledge of the player’s talent distribution after the observed performance, and that can’t be derived solely from the player’s data.  If something happens 25% of the time, talent could cluster near 15% and the player is doing it more often, talent could cluster near 35% and the player is doing it less often, or talent could cluster near 25% and the player is average.  There’s no way to tell the difference from just the player’s stat line and therefore no way to know what number to report as the mean, much less the uncertainty.  Reporting a 25% mean might be correct (the latter case) or as dumb as reporting a mean wRC+ of 300 (if talent clusters quite tightly around 15%).

Once you build a prior talent distribution (based on what other players have done and any other material information), then it’s straightforward to use the observed performance at the metric in question and create a posterior distribution for the talent, and from that extract the mean and SD.  When only the mean is of interest, it’s common to regress by adding some number of average observations, more for a tighter talent distribution and fewer for a looser talent distribution, and this approximates the full Bayesian treatment.  If the quantity in the previous paragraph were HR/FB% (league average a little under 15%), then 25% for a pitcher would be regressed down a lot more than for a batter over the same number of PAs because pitcher HR/FB% allowed talent is much more tightly distributed than hitter HR/FB% talent, and the uncertainty reported would be a lot lower for the pitcher because of that tighter talent distribution.  None of that is accessible by just looking at a 25% stat line.
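The "add some number of average observations" regression described here can be sketched directly; the k values below are illustrative stand-ins, not fitted to the real HR/FB talent spreads:

```python
def regress(successes, n, league_rate, k):
    """Shrink an observed rate toward league average by adding k
    phantom league-average observations; tighter talent => larger k."""
    return (successes + k * league_rate) / (n + k)

# 25% observed HR/FB over 100 opportunities, league ~15%.
pitcher_est = regress(25, 100, 0.15, 400)  # tight pitcher talent: big k
hitter_est  = regress(25, 100, 0.15, 120)  # wider hitter talent: smaller k
```

The pitcher estimate lands much closer to 15% than the hitter estimate does, which is exactly the behavior the paragraph describes.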

Actual talent metrics/projections, like Steamer and ZiPS, do exactly this (well, more complicated versions of this) using talent distributions and continually updating with new information, so when they spit out mean and SD, or mean and percentiles, they’re using a process where those numbers are meaningful, getting them as the result of using a reasonable prior for talent and therefore a reasonable posterior after observing some games.  Their means are always going to be “in the middle” of a reasonable talent posterior, not nonsense like wRC+ 300.

Which brings us to DRC+… I’ve noted previously that the DRC+ SDs don’t make any sense, but I didn’t really have any idea how they were coming up with those numbers until this recent article, and a reference to this old article on bagging.  My last two posts pointed out that DRC+ weights way too aggressively in small samples to be a talent metric and that DRC+ has to be heavily regressed to make projections, so when we see things in that article like Yelich getting assigned a DRC+ over 300 for a 4 PA, 1 HR, 2 BB game, that just confirms what we already knew: DRC+ is happy to assign means far, far outside any reasonable distribution of talent and therefore can’t be based on a Bayesian framework using reasonable talent priors.

So DRC+ is already violating point 1 above, using the A +/- B format when A takes ridiculous values because DRC+ isn’t a talent metric.  Given that it’s not even using reasonable priors to get *means*, it’s certainly not shocking that it’s not using them to get SDs either, but what it’s actually doing is bonkers in a way that turns out to be kind of interesting.  The bagging method they use to get SDs is (roughly): treat the seasonal PA results as the exact true talent distribution of events, draw from them over and over (with replacement) to get a fake seasonal line, do that a bunch of times, and take the SD of the fake seasonal lines as the SD of the metric.
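A rough sketch of that bagging scheme as described (the outcome values below are synthetic stand-ins, not actual linear weights):

```python
import numpy as np

rng = np.random.default_rng(1)

def bagged_sd(pa_outcomes, n_boot=2000):
    """Treat the season's PA outcomes as the exact event distribution,
    resample full seasons with replacement, and take the SD of the
    resampled season averages (the scheme described above)."""
    pa_outcomes = np.asarray(pa_outcomes, float)
    n = len(pa_outcomes)
    means = [rng.choice(pa_outcomes, size=n, replace=True).mean()
             for _ in range(n_boot)]
    return float(np.std(means))

# 600 PA of wOBA-like outcomes: outs, single-ish values, homer-ish values.
season = np.concatenate([np.zeros(400), np.full(150, 0.9), np.full(50, 2.0)])
print(round(bagged_sd(season), 3))
```

By construction this converges to the plain sampling SD of the stat line, with no reference to any talent distribution, which is the whole problem.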

That’s obviously just a category error.  As I explained in point 2, the posterior talent uncertainty depends on the talent distribution and can’t be calculated solely from the stat line, but such obstacles don’t seem to worry Jonathan Judge.  When talking about Yelich’s 353 +/- 6 DRC+, he said “The early-season uncertainties for DRC+ are high. At first there aren’t enough events to be uncertain about, but once we get above 10 plate appearances or so the system starts to work as expected, shooting up to over 70 points of probable error. Within a week, though, the SD around the DRC+ estimate has worked its way down to the high 30s for a full-time player.”  That’s just backwards about everything.  I don’t know (or care) why their algorithm fails under 10 PAs, but writing “not having enough events to be uncertain about” shows an amazing misunderstanding of everything.

The accurate statement- assuming you’re going in DRC+ style using only YTD knowledge of a player- is “there aren’t enough events to be CERTAIN of much of anything”, and an accurate DRC+ value for Yelich- if DRC+ functioned properly as a talent metric- would be around 104 +/- 13 after that nice first game.  104 because a 4 PA, 1 HR, 2 BB game preferentially selects- but not absurdly so- for above-average hitters, and a SD of 13 because that’s about the SD of position player projections this year.  SDs of 70 don’t make any sense at all and are an artifact of the extremely high SD of observed wOBA (or wRC+) over 10-ish PAs- remember that their bagging algorithm is using such small samples to create the values.  It’s clear WHY they’re getting values that high, but they just don’t make any sense, because they’re approaching the SD from the stat line only and ignoring the talent distribution that should keep them tight.  When you’re reporting a SD 5 times higher than what you’d get just picking a player talent at random, you might have problems.

The Bayesian Central Limit Theorem

I promised there was something kind of interesting, and I didn’t mean bagging on DRC+ for the umpteenth time, although catching an outright category error is kind of cool.  For full-time players after a full season, the DRC+ SDs are actually in the ballpark of correct, even though the process they use to create them obviously has no logical justification (and fails beyond miserably for partial seasons, as shown above).  What’s going on is an example of the Bayesian Central Limit Theorem, which states that for any priors that aren’t intentionally super-obnoxious, repeatedly observing i.i.d. variables will cause the posterior to converge to a normal distribution.  At the same time, the regular Central Limit Theorem means that the distribution of outcomes that their bagging algorithm generates should also approach a normal distribution.

Without the DRC+ processing baggage, these would be converging to the same normal distribution, as I’ll show with binomials in a minute, but of course DRC+ gonna DRC+ and turn virtually identical stat lines into significantly different numbers:

NAME YEAR PA 1B 2B 3B HR TB BB IBB SO HBP AVG OBP SLG OPS ISO oppOPS DRC+ DRC+ SD
Pablo Sandoval 2014 638 119 26 3 16 244 39 6 85 4 0.279 0.324 0.415 0.739 0.136 0.691 113 7
Jacoby Ellsbury 2014 635 108 27 5 16 241 49 5 93 3 0.271 0.328 0.419 0.747 0.148 0.696 110 11

Ellsbury is a little more TTO-based and gets an 11 SD to Sandoval’s 7.  Seems legit.  Regardless of these blips, high single digits is about right for a DRC+ (wRC+) SD after observing a full season.

Getting rid of the DRC+ layer to show what’s going on, assume talent is uniform on [.250, .400] (SD of 0.043) and we’re dealing with 1000 Bernoulli observations.  Let’s say we observe 325 successes (.325).  When we plot the Bayesian posterior talent distribution against the binomial for 1000 p=.325 events (the distribution that bagging produces):

[image: posterior vs. binomial, 325/1000 successes]

They overlap so closely you can’t even see the other line.  Going closer to the edge, here’s what we get for 275 and 260 observed successes.

At 275, we get a posterior SD of .013 vs the binomial .014, and at 260, we start to break the thing, capping how far to the left the posterior can go, and *still* get a posterior SD of .011 vs .014.  What’s going on here is that the weight for a posterior value is the prior-weighted probability that that value (say, .320) produces an observation of .325 in N attempts, while the binomial bagging weight at that point is the probability that .325 produces an observation of .320 in N attempts.  These aren’t the same, but under a lot of circumstances, they’re pretty damn close, and as N grows, the numbers that take the place of .320 and .325 in the meat of the distributions get closer and closer together, and the posterior converges to the same normal that describes the binomial bagging.  Bayesian CLT meets normal CLT.
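This comparison is easy to reproduce numerically: a grid posterior under the uniform [.250, .400] prior next to the binomial SD that bagging converges to (a sketch of the same setup, not the post's exact code):

```python
import math
import numpy as np

def posterior_sd(successes, n, lo=0.250, hi=0.400, grid=1501):
    """Posterior talent SD under a uniform prior on [lo, hi] after
    observing `successes` in `n` Bernoulli trials."""
    p = np.linspace(lo, hi, grid)
    loglik = successes * np.log(p) + (n - successes) * np.log(1.0 - p)
    w = np.exp(loglik - loglik.max())   # unnormalized posterior on the grid
    w /= w.sum()
    mean = float((w * p).sum())
    return math.sqrt(float((w * (p - mean) ** 2).sum()))

def bagging_sd(successes, n):
    """The SD binomial bagging converges to: treat the observed rate as
    exact and take the binomial SD of the sample proportion."""
    phat = successes / n
    return math.sqrt(phat * (1.0 - phat) / n)

for k in (325, 275, 260):
    print(k, round(posterior_sd(k, 1000), 3), round(bagging_sd(k, 1000), 3))
```

At 325 the two agree closely; at 275 and 260 the prior's left edge truncates the posterior and pulls its SD below the binomial value.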

When the binomial bagging SD starts dropping well below the prior population SD, this convergence starts to happen enough that the numbers can loosely be called “close” for most observed success rates, and that transition point happens to come out around a full season of somewhat regressed observation of baseball talent.  In the example above, the prior population SD was 0.043 and the binomial SD was 0.014, so it converged excellently until we ran too close to the edge of the prior.  It’s not always going to work, because a low-end talent can get unlucky, or a high-end talent can get lucky, and observed performance can be more extreme than the talent distribution (super-easy in small samples, still happens in seasonal ones), but for everybody in the middle, it works out great.

Let’s make the priors more obnoxious and see how well this works- this is with a triangle distribution, max weight at .250, straight line down to zero weight at .400.

 

The left-weighted prior shifts the means, but the standard deviations are obviously about the same again here.  Let’s up the difficulty even more, starting with a N(.325, .020) prior (0.020 standard deviation), which is pretty close to the actual mean/SD wOBA talent distribution among position players (that distribution is left-weighted like the triangle too, but we already know that doesn’t matter much for the SD).

Even now that the bagging distributions are *completely* wrong and we’re using observations almost 2 SD out, the standard deviations are still .014-.015 bagging and .012 for the posterior.  Observing 3 SD out isn’t significantly worse.  The prior population SD was 0.020, and the binomial bagging SD was 0.014, so it was low enough that we were close to converging when the observation was in the bulk of the distribution, but still nowhere close when we were far outside, although the SDs of the two were still in the ballpark everywhere.

Using only 500 observations on the N(.325,.020) prior isn’t close to enough to pretend there’s convergence even when observing in the bulk.

[image: posterior vs. binomial, 325 rate over 500 observations]

The posterior has narrowed to a SD of .014 (around 9 points of wRC+ if we assume this is wOBA and treat wOBA like a Bernoulli, which is handwavy close enough here), which is why I said above that high single digits was “right”, but the binomial SD is still at .021, 50% too high.  The regression in DRC+ tightens up the tails compared to “binomial wOBA”, and it happens to come out to around a reasonable SD after a full season.

Just to be clear, the bagging numbers are always wrong and logically unjustified here, but they’re a hackjob that happens to be “close” a lot of the time when working with the equivalent of full-season DRC+ numbers (or more).  Before that point, when the binomial bagging variance is higher than the completely naive population variance (the mechanism for DRC+ reporting SDs in the 70s, 30s, or whatever for partial seasons), the bagging procedure isn’t close at all.  This is just another example of DRC+ doing nonsense that looks like baseball analysis to produce a number that looks like a baseball stat, sometimes, if you don’t look too closely.

 



Revisiting the DRC+ team switcher claim

The algorithm has changed a fair bit since I investigated that claim- at the least, it’s gotten rid of most of its park factor and regresses (effectively) less than it used to.  It’s not impossible that it could grade out differently now than it did before, and I told somebody on twitter that I’d check it out again, so here we are.  First of all, let’s remind everybody what their claim is.  From https://www.baseballprospectus.com/news/article/45383/the-performance-case-for-drc/, Jonathan Judge says:


Table 2: Reliability of Team-Switchers, Year 1 to Year 2 (2010-2018); Normal Pearson Correlations[3]

Metric Reliability Error Variance Accounted For
DRC+ 0.73 0.001 53%
wOBA 0.35 0.001 12%
wRC+ 0.35 0.001 12%
OPS+ 0.34 0.001 12%
OPS 0.33 0.002 11%
True Average 0.30 0.002 9%
AVG 0.30 0.002 9%
OBP 0.30 0.002 9%

With this comparison, DRC+ pulls far ahead of all other batting metrics, park-adjusted and unadjusted. There are essentially three tiers of performance: (1) the group at the bottom, ranging from correlations of .3 to .33; (2) the middle group of wOBA and wRC+, which are a clear level up from the other metrics; and finally (3) DRC+, which has almost double the reliability of the other metrics.

You should pay attention to the “Variance Accounted For” column, more commonly known as r-squared. DRC+ accounts for over three times as much variance between batters than the next-best batting metric. In fact, one season of DRC+ explains over half of the expected differences in plate appearance quality between hitters who have switched teams; wRC+ checks in at a mere 16 percent.  The difference is not only clear: it is not even close.

Let’s look at Predictiveness.  It’s a very good sign that DRC+ correlates well with itself, but games are won by actual runs, not deserved runs. Using wOBA as a surrogate for run-scoring, how predictive is DRC+ for a hitter’s performance in the following season?

Table 3: Reliability of Team-Switchers, Year 1 to Year 2 wOBA (2010-2018); Normal Pearson Correlations

Metric Predictiveness Error
DRC+ 0.50 0.001
wOBA 0.37 0.001
wRC+ 0.37 0.002
OPS+ 0.37 0.001
OPS 0.35 0.002
True Average 0.34 0.002
OBP 0.30 0.002
AVG 0.25 0.002

If we may, let’s take a moment to reflect on the differences in performance we see in Table 3. It took baseball decades to reach consensus on the importance of OBP over AVG (worth five points of predictiveness), not to mention OPS (another five points), and finally to reach the existing standard metric, wOBA, in 2006. Over slightly more than a century, that represents an improvement of 12 points of predictiveness. Just over 10 years later, DRC+ now offers 13 points of improvement over wOBA alone.


 

Reading that, you’re pretty much expecting a DIPS-level revelation.  So let’s see how good DRC+ really is at predicting team switchers.  I put DRC+ on the wOBA scale, normalized each performance to the league-average wOBA that season (it ranged from .315 to .326), and measured the mean absolute error (MAE) of wOBA projections for the next season, weighted by the harmonic mean of the PAs in each season.  DRC+ had a MAE of 34.2 points of wOBA for team-switching position players.  Projecting every team-switching position player to be exactly league average had a MAE of 33.1 points of wOBA.  That’s not a mistake.  After all that build-up, DRC+ is literally worse at projecting team-switching position players than assuming that they’re all league average.

If you want to say something about pitchers at the plate…

 
[image: “I don’t think so, homey don’t play that” gif]
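The MAE comparison above can be sketched like this (toy data; the real version first maps DRC+ onto the wOBA scale and normalizes to each season's league average):

```python
import numpy as np

def weighted_mae(pred, actual, pa_t, pa_t1):
    """MAE of year-T+1 wOBA projections, weighted by the harmonic mean
    of the two seasons' PA totals."""
    pred, actual = np.asarray(pred, float), np.asarray(actual, float)
    pa_t, pa_t1 = np.asarray(pa_t, float), np.asarray(pa_t1, float)
    w = 2.0 * pa_t * pa_t1 / (pa_t + pa_t1)
    return float(np.sum(w * np.abs(pred - actual)) / np.sum(w))

# "Everyone is league average" baseline for three toy team-switchers:
actual_t1 = [0.280, 0.340, 0.365]
pa_t, pa_t1 = [500, 600, 450], [400, 550, 600]
baseline = weighted_mae([0.320] * 3, actual_t1, pa_t, pa_t1)
```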

 

Even though Jonathan Judge felt like calling me a total asshole incompetent troll last night, I’m going to show how his metric could be not totally awful at this task if it were designed and quality-tested better.  As I noted yesterday, DRC+’s weightings are *way* too aggressive on small numbers of PAs.  DRC+ shouldn’t *need* to be regressed after the fact- the whole idea of the metric is that players should only be getting credit for what they’ve shown they deserve (in the given season), and after a few PAs, they barely deserve anything, but DRC+ doesn’t grasp that at all and its creator doesn’t seem to realize or care that it’s a problem.

If we regress DRC+ after the fact, in an attempt to correct that flaw and see what happens, it’s actually not a dumpster fire.  All weightings are harmonic means of the PAs.  Every position player pair of consecutive 2010-18 seasons with at least 1 PA in each is eligible.  All tables are MAEs in points of wOBA trying to project year T+1 wOBA.

First off, I determined the regression amounts for DRC+ and wOBA to minimize the weighted MAE for all position players, and that came out to adding 416 league average PAs for wOBA and 273 league average PAs for DRC+.  wOBA assigns 100% credit to the batter.  DRC+ *still* needs to be regressed 65% as much as wOBA.  DRC+ is ridiculously overaggressive assigning “deserved” credit.
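Finding those regression amounts is just a one-dimensional search; here's a sketch on synthetic players (the talent spread and the 0.5 noise scale are invented stand-ins, not fitted to real data):

```python
import numpy as np

rng = np.random.default_rng(2)

def regress_woba(woba, pa, k, lg=0.320):
    """Blend observed wOBA with k phantom league-average PAs."""
    return (woba * pa + lg * k) / (pa + k)

def best_k(woba_t, pa_t, woba_t1, ks):
    """Grid-search the phantom-PA count that minimizes MAE of regressed
    year-T wOBA against year-T+1 wOBA."""
    woba_t, pa_t, woba_t1 = (np.asarray(a, float) for a in (woba_t, pa_t, woba_t1))
    maes = [float(np.mean(np.abs(regress_woba(woba_t, pa_t, k) - woba_t1)))
            for k in ks]
    return ks[int(np.argmin(maes))]

# Synthetic players: talent ~N(.320, .020), per-season noise ~ 1/sqrt(PA).
talent = rng.normal(0.320, 0.020, 500)
pa = rng.integers(100, 650, 500).astype(float)
obs_t  = talent + rng.normal(0.0, 0.5, 500) / np.sqrt(pa)
obs_t1 = talent + rng.normal(0.0, 0.5, 500) / np.sqrt(pa)
k_star = best_k(obs_t, pa, obs_t1, ks=list(range(0, 1001, 25)))
```

On data like this, the optimal k comes out well above zero, which is the point: a metric that already assigned only "deserved" credit would want k near zero.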

Table 1.  MAEs for all players

lgavg raw DRC+ raw wOBA reg wOBA reg DRC+
33.21 31.00 33.71 29.04 28.89

Table 2. MAEs for all players broken down by year T PAs

Year T PA lgavg raw DRC+ raw wOBA reg wOBA reg DRC+ T+1 wOBA
1-99 PAs 51.76 48.84 71.82 49.32 48.91 0.284
100-399 PA 36.66 36.64 40.16 34.12 33.44 0.304
400+ PA 30.77 27.65 28.97 25.81 25.91 0.328

Didn’t I just say DRC+ had a problem with being too aggressive in small samples?  Well, this is one area where that mistake pays off: the group of hitters who have 1-99 PA over a full season are terrible, so being overaggressive crediting their suckiness works out.  But if you’re in a situation like now, where the real players, instead of just the scrubs and callups, have 1-99 PAs, being overaggressive is terribly inaccurate.  Once the population mean approaches league-average quality, the need for- and benefit of- regression is clear.  If we cheat and regress each bucket to its population mean, it’s clear that DRC+ wasn’t actually doing anything special in the low-PA bucket; it’s just that regression to 36 points of wOBA higher than the mean wasn’t a great corrector.

Table 3. (CHEATING) MAEs for all players broken down by year T PAs, regressed to their group means (same regression amounts as above).

Year T PA lgavg raw DRC+ raw wOBA reg wOBA reg DRC+ T+1 wOBA
1-99 PAs 51.76 48.84 71.82 46.17 46.30 0.284
100-399 PA 36.66 36.64 40.16 33.07 33.03 0.304
400+ PA 30.77 27.65 28.97 26.00 25.98 0.328

There’s very little difference between regressed wOBA and regressed DRC+ here.  DRC+ “wins” over wOBA by 0.00015 wOBA MAE over all position players, clearly justifying the massive amount of hype Jonathan Judge pumped us up with.  If we completely ignore the trash position players and only optimize over players who had 100+PA in year T, then the regression amounts increase slightly- 437 PA for wOBA and 286 for DRC+, and we get this chart:

Table 4. MAEs for all players broken down by year T PAs, optimized on 100+ PA players

Year T PA lgavg raw DRC+ raw wOBA reg wOBA reg DRC+ T+1 wOBA
100+ PA 32.55 30.37 32.36 28.32 28.19 0.321
100-399 PA 36.66 36.64 40.16 34.12 33.45 0.304
400+ PA 30.77 27.65 28.97 25.81 25.91 0.328

Nothing to see here either, DRC+ with a 0.00013 MAE advantage again.  Using only 400+ PA players to optimize over changes just the DRC+ entry, to 25.90, so regressed wOBA wins a 0.00009 MAE victory there.

In conclusion, regressed wOBA and regressed DRC+ are so close that there’s no meaningful difference, and I’d grade DRC+ a microscopic winner.  Raw DRC+ is completely awful in comparison, even though DRC+ shouldn’t need anywhere near this amount of extra regression if it were working correctly to begin with.

I’ve slowrolled the rest of the team-switcher nonsense.  It’s not very exciting either.  I defined 3 classes of players, Stay = played both years entirely for the same team, Switch = played year T entirely for 1 team and year T+1 entirely for 1 other team, Midseason = switched midseason in at least one of the years.

Table 5. MAEs for all players broken down by stay/switch, any number of year T PAs

stay/switch lgavg raw DRC+ raw wOBA reg wOBA reg DRC+ T+1 wOBA
stay 33.21 29.86 32.19 27.91 27.86 0.325
switch 33.12 34.20 37.89 31.57 31.53 0.312
mid 33.29 33.01 36.47 31.67 31.00 0.305
sw+mid 33.21 33.60 37.17 31.62 31.26 0.309

It’s the same story as before.  Raw DRC+ sucks balls at projecting T+1 wOBA and is actually worse than “everybody’s league average” for switchers, regressed DRC+ wins a microscopic victory over regressed wOBA for stayers and switchers.  THERE’S (STILL) LITERALLY NOTHING TO THE CLAIM THAT DRC+, REGRESSED OR OTHERWISE, IS ANYTHING SPECIAL WITH RESPECT TO PROJECTING TEAM SWITCHERS.  These are the same conclusions I found the first time I looked, and they still hold for the current version of the DRC+ algorithm.

 



DRC+ weights TTO relatively *less* than BIP after 10 games than after a full season

This is a cut-out from a longer post I was running some numbers for, but it’s straightforward enough and absurd enough that it deserves a standalone post.  I’d previously looked at DRAA linear weights and the relevant chart for that is reproduced here.  This is using seasons with 400+PA.

relative to average PA 1b 2b 3b hr bb hbp k bip out
old DRAA 0.22 0.38 0.52 1.16 0.28 0.24 -0.24 -0.13
new DRAA 0.26 0.45 0.62 1.17 0.26 0.30 -0.24 -0.15
wRAA 0.44 0.74 1.01 1.27 0.27 0.33 -0.26 -0.27

 

I reran the same analysis on 2019 YTD stats, with all position players and with a 25 PA minimum, and these are the values I recovered.  Full year is the new DRAA row above, and the percentages are the percent relative to those values.

1b 2b 3b hr bb hbp k BIP-out
YTD 0.13 0.21 0.29 0.59 0.11 0.08 -0.14 -0.10
min 25 PA 0.16 0.27 0.37 0.63 0.12 0.09 -0.15 -0.11
Full Year 0.26 0.45 0.62 1.17 0.26 0.30 -0.24 -0.15
YTD %s 48% 47% 46% 50% 41% 27% 57% 64%
min 25PA %s 61% 59% 59% 54% 46% 30% 61% 74%

So… this is quite something.  First of all, events are “more-than-half-deserved” relative to the full season after only 25-50 PA.  There’s no logical or mathematical reason for that to be true that quickly, for any reasonable definition of “deserved”.  Second, BIP hits are discounted *LESS* in a small sample than walks are, and BIP outs are discounted *LESS* in a small sample than strikeouts are.  The whole premise of DRC+ is that TTO outcomes belong to the player more than the outcomes of balls in play, and are much more important in small samples, but here we are, with small samples, and according to DRC+, the TTO OUTCOMES ARE RELATIVELY LESS IMPORTANT NOW THAN THEY ARE AFTER A FULL SEASON.  Just to be sure, I reran with wRAA and extracted almost exactly the same values as chart 1, so there’s nothing super weird going on here.  This is complete insanity- it’s completely backwards from what’s actually true, and even from what BP has stated is true.  The algorithm has to be complete nonsense to “come to that conclusion”.
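The weight-recovery step itself is just a least-squares regression of a runs-above-average total on per-event counts; here's a sketch on synthetic data, using the wRAA row from the first chart as the "true" weights (the actual DRAA extraction may differ in detail):

```python
import numpy as np

rng = np.random.default_rng(3)

# "True" per-event weights relative to an average PA (the wRAA row):
# 1b, 2b, 3b, hr, bb, hbp, k, BIP-out
true_w = np.array([0.44, 0.74, 1.01, 1.27, 0.27, 0.33, -0.26, -0.27])

# Synthetic player seasons: event counts plus a noisy RAA total.
counts = rng.integers(0, 40, size=(200, 8)).astype(float)
raa = counts @ true_w + rng.normal(0.0, 0.5, 200)

# Recover the implied linear weights by least squares.
recovered, *_ = np.linalg.lstsq(counts, raa, rcond=None)
print(np.round(recovered, 2))
```

Run against wRAA this recovers the published weights almost exactly, which is why getting such different numbers out of DRAA is meaningful and not an artifact of the extraction.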

Reading the explanation article, I kept thinking the same thing over and over.  There’s no clear logical or mathematical justification for most of the steps involved; it’s just a pile of junk thrown together and tinkered with enough to output something resembling a baseball stat most of the time if you don’t look too closely.  It’s not the answer to any articulable, well-defined question.  It’s not a credible run-it-back projection (I’ll show that unmistakably in the next post, even though it’s already ruled out by the… interesting… weightings above).

Whenever a hodgepodge model is thrown together like DRC+ is, it becomes difficult-to-impossible to constrain it to obey things that you know are true.  At what point in the process did it “decide” that TTO outcomes were relatively less important now?  Probably about 20 different places where it was doing nonsense-that-resembles-baseball-analysis and optimizing functions that have no logical link to reality.  When it’s failing basic quality testing- and even worse, when obvious quality assurance failures are observed and not even commented on (next post)- it’s beyond irresponsible to keep running it out as something useful solely on the basis of a couple of apples-to-oranges comparisons on rigged tests.

 



A new look at the TTOP, plus a mystery

I had the bright idea to look at the familiarity vs. fatigue TTOP debate, which has MGL on the familiarity side and Pizza Cutter on the fatigue side, by measuring performance based on the number of pitches the batter had seen previously and the number of pitches that the pitcher had thrown to other players in between the PAs in question.  After all, a fatigue effect on the TTOP shouldn’t be from “fatigue”, but “relative change in fatigue”, and that seemed like a cleaner line of inquiry than just total pitch count.  Not a perfect one, but one that should pick up a signal if it’s there.  Then I realized MGL had already done the first part of that experiment, which I’d somehow completely forgotten even though I’d read that article and the followup around the time they came out.  Oh well.  It never hurts to redo the occasional analysis to make sure conclusions still hold true.

I found a baseline 15 point PA1-PA2 increase as well as another 15 point PA2-PA3 increase.  I didn’t bother looking at PA4+ because the samples were tiny and usage is clearly changing.  In news that should be surprising to absolutely nobody reading this, PAs given to starters are on the decline overall and the number of PA4+ is absolutely imploding lately.

Season Total PAs 1st TTO 2nd 3rd 4th 5th
2008 116960 42614 40249 30731 3359 7
2009 116963 42628 40186 30736 3406 7
2010 119130 42621 40457 32058 3990 4
2011 119462 42588 40458 32333 4080 3
2012 116637 42506 40336 30741 3050 4
2013 116872 42570 40422 31026 2851 3
2014 117325 42612 40618 31235 2856 4
2015 114797 42658 40245 29580 2314 0
2016 112480 42461 40128 28193 1698 0
2017 110195 42478 39912 26476 1329 0
2018 106051 42146 38797 24057 1051 0

Looking specifically at PA2 based on the number of pitches seen in PA1, I found a more muted effect than MGL did, using 2008-2018 data with pitcher-batters and IBB/sac-bunt PAs removed.  My data set consisted of (game, starter, batter, pa1, pa2, pa3) rows where the batter had to face the starter at least twice, the batter wasn’t the pitcher, and any IBB or sac-bunt PA in the first three trips disqualified the row (pitch counts do include pitches to non-qualified rows where relevant).  For a first pass, that seemed less reliant on individual batter-pitcher projections than allowing each set of PAs to be biased by crap hitters sac-bunting and good hitters getting IBBd.

Pitches in PA 1 wOBA in PA 2 Expected n
1 0.338 0.336 39832
2 0.341 0.335 69761
3 0.336 0.335 79342
4 0.334 0.335 82847
5 0.339 0.337 74786
6 0.347 0.338 51374
7+ 0.349 0.337 36713

MGL found a 15-point bonus for seeing 5+ pitches the first time up (on top of the baseline 10 he found), but I only get about an 11-point bonus on 6+ pitches, and 3 points of that are from increased batter/worse pitcher quality (“Expected” is just a batter/pitcher quality measure, not an actual 2nd-TTO prediction).  The SD of each bucket is on the order of .002, so it’s extremely likely that this effect is real, and also likely that it’s legitimately smaller than it was in MGL’s dataset, assuming I’m using a similar enough sampling/exclusion method, which I think I am.  It’s not clear to me that this has to be an actual familiarity effect: I would naively expect a monotonic increase with the number of pitches seen instead of the J-curve.  But the buckets have just enough noise that the J-curve might simply be an artifact, and short PAs are an odd animal in their own right, as we’ll see later.

Doing the new part of the analysis, looking at the wOBA difference in PA2-PA1 based on the number of intervening pitches to other batters, I wasn’t sure I was going to find much fatigue evidence early in the game, but as it turns out, the relationship is clear and huge.

intervening pitches wOBA PA2-PA1 vs base .015 TTOP n
<=20 -0.021 -0.036 9476
21 -0.005 -0.020 5983
22 -0.005 -0.020 8652
23 0.004 -0.011 11945
24 0.000 -0.015 15683
25 0.004 -0.011 19592
26 0.001 -0.014 23057
27 0.005 -0.010 26504
28 0.009 -0.006 29690
29 0.015 0.000 31453
30 0.021 0.006 32356
31 0.014 -0.001 32250
32 0.020 0.005 30723
33 0.018 0.003 28390
34 0.027 0.012 25745
35 0.028 0.013 22407
36 0.023 0.008 18860
37 0.030 0.015 15429
38 0.025 0.010 12420
39 0.012 -0.003 9558
40 0.045 0.030 7362
41-42 0.032 0.017 9241
43+ 0.027 0.012 7879
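As a sanity check on the trend, here's a quick weighted least-squares sketch over the table above; the midpoints I use for the open-ended buckets (<=20, 41-42, 43+) are my assumptions.

```python
import numpy as np

# wOBA(PA2-PA1) by intervening pitches, copied from the table above.
# Midpoints for the open-ended buckets (<=20, 41-42, 43+) are my assumptions.
x = np.array([20, 21, 22, 23, 24, 25, 26, 27, 28, 29, 30, 31, 32, 33, 34,
              35, 36, 37, 38, 39, 40, 41.5, 44])
y = np.array([-0.021, -0.005, -0.005, 0.004, 0.000, 0.004, 0.001, 0.005,
              0.009, 0.015, 0.021, 0.014, 0.020, 0.018, 0.027, 0.028,
              0.023, 0.030, 0.025, 0.012, 0.045, 0.032, 0.027])
n = np.array([9476, 5983, 8652, 11945, 15683, 19592, 23057, 26504, 29690,
              31453, 32356, 32250, 30723, 28390, 25745, 22407, 18860,
              15429, 12420, 9558, 7362, 9241, 7879])

# Weight each bucket by its PA count (polyfit weights multiply the residuals,
# so sqrt(n) gives count-weighted least squares)
slope, intercept = np.polyfit(x, y, 1, w=np.sqrt(n))
print(f"{slope * 1000:.1f} wOBA points of TTOP per intervening pitch")
```

The fitted slope comes out right around 2 points of wOBA per pitch, with the weighted mean matching the .015 baseline.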

That’s a monster effect, 2 points of TTOP wOBA per intervening pitch with an unmistakable trend.  Jackpot.  Hareeb’s a genius.  That’s big enough that it should result in actionable game situations all the time.  Let’s look at it in terms of actual 2nd time wOBAs (quality-adjusted).

intervening pitches PA2 wOBA (adj)
<=20 0.339
21 0.346
22 0.343
23 0.344
24 0.340
25 0.341
26 0.339
27 0.339
28 0.337
29 0.340
30 0.341
31 0.338
32 0.347
33 0.336
34 0.345
35 0.344
36 0.336
37 0.340
38 0.328
39 0.335
40 0.340
41-42 0.338
43+ 0.344

Wait what??!?!? Those look almost the same everywhere.  If you look closely, the higher-pitch-count PA2 wOBAs even average out to be a tad (4-5 points) *lower* than the low-pitch-count ones (and the same for PA1-PA3, though that needs a closer look).  If I didn’t screw anything up, that can only mean…

intervening pitches PA1 wOBA (adj)
<=20 0.361
21 0.351
22 0.348
23 0.339
24 0.340
25 0.336
26 0.338
27 0.335
28 0.327
29 0.325
30 0.320
31 0.325
32 0.326
33 0.319
34 0.318
35 0.316
36 0.312
37 0.311
38 0.303
39 0.323
40 0.295
41-42 0.306
43+ 0.318

Yup.  The number of intervening pitches TO OTHER BATTERS between somebody’s first and second PA has a monster “effect on” the PA1 wOBA.  I started hand-checking rows of pitch counts and PA results, you name it; I couldn’t believe this was possibly real.  I asked one of my friends to verify it for me, and he did, and I mentioned the “effect” to Tango, who also observed the same pattern.  This is actually real.  It also works the same way between PA2 and PA3.  I couldn’t keep looking at other TTOP stuff with this staring me in the face, so the rest of this post goes down this rabbit hole, showing my path to figuring out what was going on.  If you want to stop here and try to work it out for yourself, or just think about it for a while before reading on, I thought it was an interesting puzzle.

It’s conventional sabermetric wisdom that the box-score-level outcome of one PA doesn’t impart giant predictive effects, but let’s make sure that still holds up.

Reached base safely in PA1 PA2 wOBA (adj) Batter quality Pitcher quality
Yes 0.348 0.338 0.339
No 0.336 0.334 0.336

That’s a 12-point effect, but 7 points of it are immediately explained by talent differences, and given the plethora of other factors I didn’t control for, all of which will also skew hitter-friendly like the batter and pitcher quality did, there’s just nothing of any significance here.  Maybe the effect is shorter-term than that?

Reached base safely in PA1 Next batter wOBA (adj) Next batter quality Pitcher quality
Yes 0.330 0.337 0.339
No 0.323 0.335 0.336

A 7 point effect where 5 is immediately explained by talent.  Also nothing here.  Maybe there’s some effect on intervening pitch count somehow?

Reached base safely in PA1 Average intervening pitches intervening wOBA (adj)
Yes 30.58 0.3282
No 30.85 0.3276

Barely, and the intervening batters don’t even hit quite as well as expected given that we know the average pitcher is 3 points worse in the Yes group.  Alrighty then.  There’s a big “effect” from intervening pitch count on PA1 wOBA, but PA1 wOBA has minimal to no effect on intervening pitch count, intervening wOBA, PA2 wOBA, or the very next hitter’s wOBA.  That’s… something.

In another curious note to this effect,

intervening pitches intervening wOBA (adj)
<=20 0.381
21 0.373
22 0.363
23 0.358
24 0.351
25 0.344
26 0.343
27 0.335
28 0.333
29 0.328
30 0.324
31 0.322
32 0.319
33 0.316
34 0.316
35 0.312
36 0.310
37 0.310
38 0.307
39 0.311
40 0.308
41-42 0.309
43+ 0.311

Another monster correlation, but that one has a much simpler explanation: short PAs show better results for hitters.

Pitches in PA wOBA (adj) n
1 0.401 133230
2 0.383 195614
3 0.317 215141
4 0.293 220169
5 0.313 198238
6 0.328 133841
7 0.347 57396
8+ 0.369 37135
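Since short PAs run hot, any mechanism that shifts the PA-length mix toward 1- and 2-pitch PAs mechanically inflates aggregate wOBA.  A toy mixture check using the table above (the 15% tilt is an arbitrary illustration, not anything fit to the data):

```python
# wOBA and PA counts by pitches-in-PA, from the table above ("8" = 8+)
n_by_len = [133230, 195614, 215141, 220169, 198238, 133841, 57396, 37135]
woba_by_len = [0.401, 0.383, 0.317, 0.293, 0.313, 0.328, 0.347, 0.369]

def aggregate_woba(weights):
    """PA-weighted average wOBA for a given PA-length mix."""
    return sum(w * x for w, x in zip(weights, woba_by_len)) / sum(weights)

base = aggregate_woba(n_by_len)
# Tilt 15% extra weight onto 1- and 2-pitch PAs (arbitrary illustrative shift)
short_heavy = [w * (1.15 if i < 2 else 1.0) for i, w in enumerate(n_by_len)]
print(round(base, 4), round(aggregate_woba(short_heavy), 4))
```

The short-heavy mix comes out a couple of points of wOBA above the baseline mix, purely from reweighting.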

Throw a bunch of shorter PAs together and you get the higher aggregate wOBA seen in the intervening-wOBA table above.  It seems like the PA-length effect has to be a key.  Maybe there’s a difference in the next batter’s pitch distribution depending on PA1?

Pitches in PA Fraction of PA after reached base Fraction of PA after out wOBA after reached base wOBA after out OBP after reached base OBP after out
1 0.109 0.089 0.394 0.402 0.362 0.359
2 0.164 0.158 0.375 0.376 0.348 0.343
3 0.183 0.182 0.308 0.303 0.284 0.278
4 0.186 0.191 0.289 0.276 0.299 0.281
5 0.165 0.174 0.311 0.301 0.339 0.323
6 0.112 0.120 0.323 0.320 0.367 0.360
7 0.049 0.052 0.346 0.339 0.393 0.386
8+ 0.032 0.034 0.356 0.360 0.401 0.405

Now we’re cooking with gas.  That’s a huge likelihood-ratio difference for 1-pitch PAs: using our PA1 OBP of about .324, we’d expect to see a PA1 OBP of .370 given a 1-pitch followup PA, which is exactly what we get.  The longer PAs are more weighted toward previous outs because the odds ratio favors outs once we get to 4 pitches.
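That .370 falls out of a standard odds-ratio update; a quick sketch with the .324 prior and the 0.109/0.089 likelihoods from the table above:

```python
def posterior_obp(prior_obp, p_given_safe, p_given_out):
    """Update a prior OBP by the likelihood ratio of an observed followup."""
    prior_odds = prior_obp / (1 - prior_obp)
    post_odds = prior_odds * (p_given_safe / p_given_out)
    return post_odds / (1 + post_odds)

# P(1-pitch followup | reached base) = .109, P(1-pitch followup | out) = .089
print(round(posterior_obp(0.324, 0.109, 0.089), 3))  # ~0.370
```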

Next PA pitches This PA1 OBP This PA1 wOBA
1 0.370 0.373
2 0.333 0.332
3 0.326 0.325
4 0.319 0.318
5 0.313 0.313
6 0.311 0.313
7 0.314 0.310
8 0.313 0.309

It seems like this should be a big cause of the observed effect. I used the 2nd/6th and 3rd/7th columns from two tables up to create a process that would “play through” the next 8 PAs starting after an out or a successful PA, deciding on the number of pitches and then whether it was an out or not based on the average values.  Then I calculated the expected OBP for PA1 based on the likelihood ratios of each number of total pitches to happen (the same way I got .370 from the odds ratio for a 1-pitch followup PA).
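Here's a minimal sketch of that play-through (my own reimplementation of the idea, not the exact process used): sample each followup PA's pitch count from the conditional distribution, resolve it with the matching OBP, and back out a model PA1 OBP from the likelihood ratio of the total pitch count.  The bucket endpoints and simulation size are my choices.

```python
import random
from collections import Counter

# Conditional pitch-count fractions and OBPs from the table above (index 7 = "8+")
PITCHES = [1, 2, 3, 4, 5, 6, 7, 8]
FRAC = {True:  [0.109, 0.164, 0.183, 0.186, 0.165, 0.112, 0.049, 0.032],
        False: [0.089, 0.158, 0.182, 0.191, 0.174, 0.120, 0.052, 0.034]}
OBP = {True:  [0.362, 0.348, 0.284, 0.299, 0.339, 0.367, 0.393, 0.401],
       False: [0.359, 0.343, 0.278, 0.281, 0.323, 0.360, 0.386, 0.405]}

def playthrough(first_safe, rng, n_pa=8):
    """Total pitches over the next n_pa PAs, chaining on each simulated outcome."""
    total, safe = 0, first_safe
    for _ in range(n_pa):
        i = rng.choices(range(8), weights=FRAC[safe])[0]
        total += PITCHES[i]
        safe = rng.random() < OBP[safe][i]
    return total

def model_pa1_obp(lo, hi, sims=50_000, prior=0.324, seed=0):
    """Posterior PA1 OBP given the total followup pitch count lands in [lo, hi]."""
    rng = random.Random(seed)
    safe_tot = Counter(playthrough(True, rng) for _ in range(sims))
    out_tot = Counter(playthrough(False, rng) for _ in range(sims))
    p_safe = sum(v for k, v in safe_tot.items() if lo <= k <= hi) / sims
    p_out = sum(v for k, v in out_tot.items() if lo <= k <= hi) / sims
    odds = prior / (1 - prior) * (p_safe / p_out)
    return odds / (1 + odds)

# Short followup stretches imply PA1 reached base more often than long ones do
print(round(model_pa1_obp(0, 22), 3), round(model_pa1_obp(38, 99), 3))
```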

As it turns out, that effect alone can reproduce the shape and a little over half the spread:

intervening pitches PA1 OBP (adj) model PA1 OBP
<=20 0.366 0.340
21 0.351 0.336
22 0.349 0.329
23 0.339 0.338
24 0.343 0.332
25 0.336 0.328
26 0.335 0.327
27 0.335 0.328
28 0.328 0.328
29 0.325 0.326
30 0.320 0.326
31 0.324 0.323
32 0.324 0.323
33 0.318 0.321
34 0.318 0.324
35 0.317 0.323
36 0.312 0.317
37 0.313 0.318
38 0.307 0.320
39 0.320 0.310
40 0.300 0.317
41-42 0.308 0.309
43+ 0.320 0.317

That simple model is still deficient at a number of things (correlations longer than one PA, different batters, base-out states, etc.).  I don’t know everything that’s causing the effect, but I have a good chunk of it, and that reverse pitch-count selection bias isn’t something I’ve ever seen mentioned before.  This is also a caution for any kind of analysis involving pitch counts: be very careful to avoid walking into this effect.

 



A look at DRC+’s guts (part 1 of N)

In trying to better understand what DRC+ changed with this iteration, I extracted the “implied” run values for each event by finding the best linear fit to DRAA over the last 5 seasons.  To avoid regression hell (and the nonsense where walks can be worth negative runs when pitchers draw them), I only used players with 400+ PA.  To make sure this should actually produce reasonable values, I did the same for WRAA.
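Mechanically, extracting implied run values is just a least-squares fit of runs above average on per-player event counts.  A sketch on synthetic data (the event rates and noise level are made up; the point is that the method recovers known weights):

```python
import numpy as np

rng = np.random.default_rng(42)

# Synthetic full-time seasons: per-player event counts
# columns: 1b, 2b, 3b, hr, bb, hbp, k, bip out (rates are made up)
events = rng.poisson([100, 28, 3, 20, 50, 6, 120, 280], size=(2000, 8)).astype(float)

# Generate runs-above-average from known wRAA-style weights plus noise
true_w = np.array([0.70, 1.00, 1.27, 1.53, 0.54, 0.60, 0.01, 0.00])
raa = events @ true_w + rng.normal(0.0, 1.0, 2000)

# "Implied" run value per event = ordinary least-squares fit
implied_w, *_ = np.linalg.lstsq(events, raa, rcond=None)
print(np.round(implied_w, 2))  # close to true_w
```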

relative to average out 1b 2b 3b hr bb hbp k bip out
old DRAA 0.419 0.416 0.75 1.37 0.44 0.41 -0.08 0.03
new DRAA 0.48 0.57 0.56 1.36 0.44 0.49 -0.06 0.02
wRAA 0.70 1.00 1.27 1.53 0.54 0.60 0.01 0.00

Those are basically the accepted linear weights in the wRAA row, but DRAA seems to have some confusion around the doubles.  In the first iteration, doubles came out worth fewer runs than singles, and in the new iteration, triples come out worth fewer runs than doubles.  Pepsi might be ok, but that’s not.

If we force the 1b/2b/3b ratio to conform to the wRAA ratios and regress again (on 6 free variables instead of 8), then we get something else interesting.
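One way to do that substitution (a sketch of the approach, not necessarily BP's machinery): collapse the 1b/2b/3b columns into a single ratio-locked column, regress on the 6 remaining free variables, and expand the combined weight back out.

```python
import numpy as np

# Lock the 1b/2b/3b run values to the wRAA ratio 0.70 : 1.00 : 1.27
RATIO = np.array([0.70, 1.00, 1.27])

def constrained_fit(X, y):
    """OLS with the first three weights forced to a fixed ratio (6 free variables)."""
    hits = X[:, :3] @ RATIO                # one combined, ratio-locked column
    Z = np.column_stack([hits, X[:, 3:]])  # 6 columns instead of 8
    beta, *_ = np.linalg.lstsq(Z, y, rcond=None)
    return np.concatenate([beta[0] * RATIO, beta[1:]])  # expand back to 8 weights

# Demo on synthetic data whose true weights obey the ratio (hypothetical scale 0.5)
rng = np.random.default_rng(0)
X = rng.poisson([100, 28, 3, 20, 50, 6, 120, 280], size=(2000, 8)).astype(float)
true_w = np.concatenate([0.5 * RATIO, [1.17, 0.26, 0.30, -0.24, -0.15]])
w_hat = constrained_fit(X, X @ true_w + rng.normal(0.0, 1.0, 2000))
print(np.round(w_hat, 2))
```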

relative to average PA 1b 2b 3b hr bb hbp k bip out
old DRAA 0.22 0.38 0.52 1.16 0.28 0.24 -0.24 -0.13
new DRAA 0.26 0.45 0.62 1.17 0.26 0.30 -0.24 -0.15
wRAA 0.44 0.74 1.01 1.27 0.27 0.33 -0.26 -0.27

Old DRAA credited about 90% of TTO run value and 50% of BIP run value, and that changed to about 90% of TTO runs and 60% of BIP runs in the new iteration.  So it’s like the component wOBA breakdown Tango was doing recently, except regressing the TTO component by 10% and the BIP component by 40% (down from 50%).

I also noticed that there was something strange about the total DRAA itself.  In theory, the aggregate runs above average should be 0 each year, but the new version of DRAA managed to uncenter itself by a couple of percent (that’s about -2% of total runs scored each season)

year old DRAA new DRAA
2010 210.8 -559.1
2011 127.9 -550.0
2012 226.8 -735.9
2013 190.4 -447.5
2014 33.7 -659.9
2015 60.1 -89.1
2016 63.3 -401.2
2017 -37.8 -318.3
2018 -50.2 -240.4

Breaking that down into full-time players (400+ PA), part-time position players (<400 PA), and pitchers, we get

2010-18 runs old DRAA new DRAA WRAA
Full-time 13912 11223 15296
part-time -6033 -7850 -9202
pitchers -7054 -7369 -6730
total 825 -3996 -636

I don’t know why it decided players suddenly deserved 4800 fewer runs, but here we are, and it took 520 offensive BWARP (10% of their total) away from the batters in this iteration too, so it didn’t recalibrate at that step either.  This isn’t an intentional change in replacement level or anything like that. It’s just the machine going haywire again without sufficient internal or external quality control.

 



US sports unions are so, so screwed

TL;DR It’s always a good time to be a billionaire, but when you get to exploit people with super-short prime earning periods, it’s even better.

I’ve been seeing chatter about potential upcoming labor unrest in the NFL, the NHL and NBA both had a stoppage this decade, and baseball players haven’t been very happy about the lack of progress on the Harper and Machado fronts.  Furthermore, this is an era where norms have been giving way to the raw exercise of power, so I thought it would be interesting to look at upcoming negotiations under the assumption that the owners were going to try to make more money and that the players were willing to be extremely antagonistic.

Sports labor negotiations are positive sum- if the games are played, owners and players alike are much better off, over a wide range of revenue splits, than if the games weren’t played.  Under an absolute take-it-or-leave-it-forever ultimatum, the players would be willing to play for far less, and the owners would be willing to pay the players more.  The former is true because the four leagues mentioned are destination leagues- there’s nowhere else to play baseball, football, basketball, or hockey for nearly as much money.  The owners would be willing to pay more because less profit is still better than no profit.  If there were alternative markets (MLS is nowhere close to a destination league for soccer, for example), the following analysis wouldn’t be relevant.

None of the owners are going to suggest a revenue split anywhere near the minimum players might accept in a pure ultimatum (KHL might pay 20% of NHL at the top end, NPB, KBO, and EuroLeague are much worse).  That would be reducing player revenue share from ~50% to ~5-10%.  Nobody’s stupid enough to even float a proposal like that.  How much higher the owners would be willing to go is a much more interesting question.

There are reports of team revenue and operating income (profit), but if you’re skeptical of those numbers, there’s a fairly safe way to estimate an upper bound on profit.  Whatever a franchise valuation is, would the owners still be happy to own it if they also had to dump X% of the valuation into a black hole every year? If X is 0.01%, sure- that’s a 400k/year extra cost to own the Yankees (4 billion franchise value), and that’s not going to move the needle at all.  They make far, far more than that.  If X is 20%, hell no- 800MM/year down the toilet to own the Yankees would be completely insane.  Even 5% (200MM) seems like a bad idea in normal times, but let’s run with that and see where it gets us.

League averages (millions) Franchise Valuation Revenue Profit 5% valuation Profit % of Revenue Payroll % Previous CBA %
MLB 1645 315 29 82.25 9 54 N/A
NFL 2500 412.5 101 125 24 ~48 53
NBA 1650 246.7 52 82.5 21 ~50 50/57
NHL 630 157 25 31.5 16 50 57

(Source: Forbes articles)
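As a quick sanity check (numbers straight from the table above), the 5%-of-valuation column does sit above reported profit in every league:

```python
# (valuation, revenue, profit) in $MM, from the Forbes-based table above
leagues = {"MLB": (1645, 315, 29), "NFL": (2500, 412.5, 101),
           "NBA": (1650, 246.7, 52), "NHL": (630, 157, 25)}

for name, (valuation, revenue, profit) in leagues.items():
    ceiling = 0.05 * valuation          # the "5% valuation" column
    pct = 100 * profit / revenue        # the "Profit % of Revenue" column
    assert profit < ceiling             # reported profit sits under the bound
    print(f"{name}: ceiling {ceiling:.2f}MM, profit {pct:.0f}% of revenue")
```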

MLB is structured differently, so maybe the profit % actually is lower because teams bid against each other with no hard cap, or maybe it’s fudged lower because it’s not a number that has to be signed off on by the players, but players could attempt to capture 60% of revenue as payroll, and outside of MLB (and maybe even in MLB), the owners would say yes on an ultimatum- it’s not that far above previous CBA levels.  Let’s create a hypothetical league that’s an amalgam of the non-MLB leagues to work with, and assume that the owners come in with a proposal around 45% of revenue to payroll in the next CBA and the players counter with 60%.

League averages (millions) Franchise Valuation Revenue Profit Profit % of Revenue Payroll % Non-payroll expenses
Amalgam (current) 1800 300 60 20 50 30% / 90M
Owner offer 1800 300 75 25 45 30% / 90M
Player offer 1800 300 30 10 60 30% / 90M

If the owners cancel a season and win- the players come back the next year at 45%- then in 4 years total time, they’ll make -90M*4 (expenses) -45%*300M*3 (payroll) + 300*3 (revenue) = 135MM in profit, and the players threw away 45%*300M= 135MM by holding out and then folding (and cost the owners 165MM). If the owners had just accepted the player offer from the start, 4*30 = +120MM in profit.  So they make up for this really fast if they win.

On the player side, if they hold out and win- the owners agree to 60%- then in 4 years total time, they earn 3*60%*300 = 540MM, and if they’d just accepted the owner offer initially, they would have earned 4*45%*300 = 540MM, and the owners threw away 120MM by holding out and then folding.  The players also make up for this really fast if they win. (ignoring “harm to the game” effects which hurt both sides)
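Since the four-year arithmetic is easy to get turned around, here it is spelled out for the amalgam league, with shares as integer percents to keep the math exact:

```python
REVENUE, EXPENSES = 300, 90  # per-season amalgam-league figures, $MM

def owner_profit(payroll_pct, seasons_played, seasons_total=4):
    """Owners eat expenses every year but only book revenue/payroll in played seasons."""
    return seasons_played * REVENUE * (100 - payroll_pct) // 100 - seasons_total * EXPENSES

def player_earnings(payroll_pct, seasons_played):
    return seasons_played * REVENUE * payroll_pct // 100

# Owners cancel a season and win (45% payroll for the remaining 3 years)...
print(owner_profit(45, 3))            # 135
# ...versus just accepting the players' 60% offer for all 4 years
print(owner_profit(60, 4))            # 120
# Players hold out and win 60% for 3 years, or take the 45% offer for 4: a wash
print(player_earnings(60, 3), player_earnings(45, 4))  # 540 540
```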

This looks like it might be a difficult kind of battle to handicap, but it’s not, for two main reasons.  The first is that the owner timescale is clearly longer than 4 years- they can easily make decisions to maximize profit or future franchise value that far down the road.  The sports unions, however, are not in the business of maximizing the amount of money that goes to players- they’re in the business of maximizing the amount of money that goes to *current voting members*, or more precisely, to a bloc of current voting members large enough to ratify a new agreement.

The last column in the top table shows the change from the previous CBA.  The NBA had a stoppage in 2011 when the owners tried to drop revenue from 57% to 47% with a harder salary cap, and after the stoppage, the players settled for 50% and a worse-but-not-as-bad-as-originally-offered cap change.  The NHL lockout in 2012 was close to what’s being discussed- the league was trying to drop salary from 57% to 43% (the reverse of a 32% increase) with a bunch of player-unfriendly contract terms as well, and settled for 50% without the contract issues.  The NFL lockout in 2011 (no games missed) was an attempt to drop from 53% to 42% and lengthen the season.  They settled for 48%.

Given that the average career length ranges from 3.5 years (NFL) to 5.6 (MLB) and medians are lower, it’s actually impressive leadership- or, more likely, complete player delusion about their expected future career length and anger about something they had being taken away- to get many takers on a threatened holdout that only pays off if you’re still playing 4+ years later. The owners won- huge- in all three lockouts. NHL and NBA owners got an extra 7% of revenue over 10 years at the cost of a few percent of revenue in the first year.  NFL owners got 5% extra for 10 years for nothing.

Perhaps, if NBA and NHL had aimed a little more conservatively (say, proposing 57% to 50% with the intent of settling at 52% with no games missed), they could have come out even better, but it’s not clear that they would have.  As it was, the NHLPA offered settlements at 54% instead of even trying to fully defend its territory, as did the NBPA at 53%, and they might have stuck harder to those numbers in the face of a more reasonable proposal.

It’s hard to find any example of the players outright winning a labor dispute or CBA negotiation since 1990.  Even following 1994 MLB, the players conceded ground- they averted disaster, in that they avoided the salary cap and hard-line revenue sharing, but they agreed to luxury tax numbers, and that’s just one of a number of anti-spending measures MLB has adopted since.  They can’t directly negotiate salary percentages down, so instead they reduce the club-level financial rewards of winning to limit salary growth.  Every form of revenue sharing, luxury tax, lost free agent compensation, etc. decreases the marginal revenue from spending and thereby works to suppress payroll.

Players might be able to fight back and get a consensus somewhere around a 50% jump (40% of revenue to 60%) if it were guaranteed to succeed, but of course, it’s not.  The owners would say yes to a pure ultimatum, but how do the players make it an ultimatum? It’s well-known that the best strategy in a game of chicken between non-suicidal players is to be the first one to throw your steering wheel out the window where the other person can see it.  By visibly taking away your options, you’ve left the other player in a swerve or die scenario, and you win.  Unfortunately for the players, they have no way to do that, and they’ve been demonstrably weak in every sport even when they’ve taken it to a holdout.  Against that backdrop, the owners haven’t quite thrown their steering wheel away, but the players should have absolutely no expectation that the owners will be in any kind of a hurry to use it.

The closest to strong the players have been is 1994 MLB, and that was the league trying to unilaterally impose a salary cap and revenue sharing and preceded by the owners blatantly colluding to suppress free-agent contracts.  Not “collusion” in quotes, but literally the commissioner publicly telling teams that long contracts were bad and the owners paying out multiple settlements for hundreds of millions of 1990s dollars.  And in the face of all of that, the players only stayed where they were and then conceded ground shortly after.  “Winning”, for a modern sports union, is now defined as “not losing ground horribly”.

The takeaway from that is that even if player share of revenue continues to drop to the 40% range where the players appear to have a reasonably credible ultimatum-level threat, they still don’t because they’ll just fold to a lesser offer.  If players were trying to go from 40% to 60%, and the ownership (miraculously) countered with 55%, the players would trip over themselves to ratify that agreement.  And they’d do the same thing at 50% and 45%.  Assuming they have the self-awareness to understand that in advance (the owners certainly do), they know they don’t have a credible threat at 40% (sitting out a season to go from 40% to 45% is moronic even with guaranteed success).  And in the same vein, sitting out a season to avoid going from 50% to 45% feels worse, because they’re benchmarked at 50%, but it’s equally moronic.
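That "moronic even with guaranteed success" claim is just a break-even calculation: one season of salary sacrificed at the current share against the per-year gain at the new share.

```python
def breakeven_years(current_pct, target_pct, seasons_lost=1):
    """Seasons at the new share needed to recoup salary lost while sitting out."""
    lost_salary = seasons_lost * current_pct   # in (% of revenue) x seasons units
    gain_per_year = target_pct - current_pct
    return lost_salary / gain_per_year

# Sitting out a full season to go from 40% to 45%: 8 more seasons to break even,
# longer than any league's average career
print(breakeven_years(40, 45))  # 8.0
# Even a full 40% -> 60% win needs 2 more seasons at the higher share
print(breakeven_years(40, 60))  # 2.0
```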

It’s also moronic for the owners to actually follow through with it, but because the players have folded so many times in a row, because the players are acting far more strongly against their individual dollar self-interest thanks to their shorter timescale, and because the players’ marginal utility of money is much higher than that of the zillionaire owners/conglomerates, the players are likely only going to stay irrational for so long.  It’s a well-calculated risk at this point that they’re just going to fold again before too much damage is done.  The true floor of what the players will play for is still nowhere in sight, IMO.

The upcoming NFL renegotiation in 2021 has all the makings of a total bloodbath for the players.  The NFL is in the worst position to defend itself, with the lowest career length, and yet the union is already saber-rattling, two years in advance, with talk of reclaiming what they lost in the last CBA, and players like Richard Sherman are saying players have to be willing to strike.  That’s true, but… being willing to strike doesn’t mean you’re actually going to get your money back, and if players really are willing to strike without realizing that there’s a good chance it just completely blows up in their faces, and a very high chance that most individuals come out worse even if they somehow fully win the dispute after only missing half a season… well, good luck with that.  The players are going to come in thinking they’re going to make gains, and if the owners channel their inner Nate Diaz and give the players the double birds while they wait for the inevitable tapout at something under 45% of revenue… well, I guess I can pat myself on the back.  !remindme 2 years.

Their best hope, and it’s a slim one at that, is that the NFL owners simply aren’t in a mood for a fight.  The NBPA skated through a negotiation period in 2017 with minimal changes (and the ones approved look to me to be more like “good governance of league operations” agreements than one side trying to get over on the other), most likely because leaguewide revenues were absolutely exploding along with attendance, TV ratings, merchandise sales, etc. and neither side wanted to battle when they were both making more money than they’d even dreamed of a couple of years prior.  Maybe the NFLPA knows it has no chance in a lockout and is just trying to bluff the owners into not fighting or into aiming for fewer concessions- after all, the head of the union isn’t getting elected over and over by telling the membership that they’re all going to bend over and take it every time the owners come looking for more, even if he knows that’s true.

On the other hand, MLB players who’ve spoken out appear to be confused on a different level.  They think owners have started colluding again, and while I can’t rule that out, especially given their history, the situation appears to me to be explainable by a confluence of three factors.  First, teams are much smarter analytically and realize that big free-agent contracts to older players have been piss-poor investments (and may actually be getting worse post-steroid-era).  Second, teams are spending with more of an eye to marginal revenue than ever before.  Third, the anti-spending measures MLB has been winning concessions on for at least the last 25 years have really started coming home to roost.  Teams have been explicitly not spending money because of the luxury tax, and it should have been obvious that this sort of thing would happen more.  The owners wouldn’t have been harping on anti-spending measures for longer than most of the players have been alive if they hadn’t expected it to yield dividends.

That being said, MLB players are *still* in a better position than players in the other three leagues, although it’s likely to keep decaying, and trying to get much more money is like getting blood from a stone at this point, especially if the operating income estimates above are close to accurate.  MLB is harder to understand than “bargain for X% of revenue, then talk about how it gets divided” leagues, but the players- or at least enough of them that an informed union can negotiate on reality-based terms- need to understand that they’re 100% “getting screwed” currently by the concessions they’ve repeatedly made to the owners since the 1994 stoppage, and most likely not getting screwed harder by a sudden recurrence of prohibited behavior.

 

