A look at DRC+’s guts (part 1 of N)

In trying to better understand what DRC+ changed with this iteration, I extracted the “implied” run values for each event by finding the best linear fit to DRAA over the last 5 seasons.  To avoid regression hell (and the nonsense where walks can be worth negative runs when pitchers draw them), I only used players with 400+ PA.  To make sure this should actually produce reasonable values, I did the same for WRAA.
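The fit described above can be sketched roughly like this (simulated data, not the real player-seasons; the event categories and least-squares approach mirror the description, but everything else is illustrative):

```python
import numpy as np

# Hypothetical sketch of the fit: each 400+ PA player-season contributes a
# row of event counts (1B, 2B, 3B, HR, BB, HBP, K, BIP out) and a seasonal
# DRAA (or wRAA) total; ordinary least squares then recovers the "implied"
# run value of each event.
rng = np.random.default_rng(0)

true_weights = np.array([0.70, 1.00, 1.27, 1.53, 0.54, 0.60, 0.01, 0.00])

# Fake event counts for 300 qualifying player-seasons (illustrative only).
counts = rng.poisson(lam=[90, 25, 3, 18, 50, 5, 110, 250], size=(300, 8))
raa = counts @ true_weights + rng.normal(0, 5, size=300)

implied, *_ = np.linalg.lstsq(counts, raa, rcond=None)
print(np.round(implied, 2))  # recovers something close to true_weights
```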

relative to average out   1b     2b     3b     hr     bb     hbp    k      bip out
old DRAA                  0.419  0.416  0.75   1.37   0.44   0.41   -0.08   0.03
new DRAA                  0.48   0.57   0.56   1.36   0.44   0.49   -0.06   0.02
wRAA                      0.70   1.00   1.27   1.53   0.54   0.60    0.01   0.00

Those are basically the accepted linear weights in the wRAA row, but DRAA seems to have some confusion around the doubles.  In the first iteration, doubles came out worth fewer runs than singles, and in the new iteration, triples come out worth fewer runs than doubles.  Pepsi might be ok, but that’s not.

If we force the 1b/2b/3b ratio to conform to the wRAA ratios and regress again (on 6 free variables instead of 8), then we get something else interesting.
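The constrained re-fit can be sketched the same way (again on simulated counts): collapse the 1B/2B/3B columns into a single column weighted by the wRAA ratios, so one coefficient scales all three hit types and the regression has 6 free variables instead of 8.

```python
import numpy as np

# Sketch of the constrained fit; data is simulated as before.
rng = np.random.default_rng(1)
counts = rng.poisson(lam=[90, 25, 3, 18, 50, 5, 110, 250], size=(300, 8))

ratios = np.array([0.70, 1.00, 1.27])          # 1B : 2B : 3B, fixing 2B at 1.00
hit_col = counts[:, :3] @ ratios               # combined hit column
X = np.column_stack([hit_col, counts[:, 3:]])  # 6 columns total

true_w = np.array([0.74, 1.27, 0.27, 0.33, -0.26, -0.27])
raa = X @ true_w + rng.normal(0, 5, size=300)

coef, *_ = np.linalg.lstsq(X, raa, rcond=None)
print(np.round(coef[0] * ratios, 2))  # implied 1B/2B/3B values, ratio preserved
```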

relative to average PA   1b     2b     3b     hr     bb     hbp    k      bip out
old DRAA                 0.22   0.38   0.52   1.16   0.28   0.24   -0.24  -0.13
new DRAA                 0.26   0.45   0.62   1.17   0.26   0.30   -0.24  -0.15
wRAA                     0.44   0.74   1.01   1.27   0.27   0.33   -0.26  -0.27

Old DRAA credited hitters with about 90% of their TTO runs and 50% of their BIP runs, and that changed to about 90% of TTO runs and 60% of BIP runs in the new iteration.  So it’s like the component wOBA breakdown Tango was doing recently, except regressing the TTO component 10% and the BIP component 40% (down from 50%).

I also noticed that there was something strange about the total DRAA itself.  In theory, the aggregate runs above average should be 0 each year, but the new version of DRAA managed to uncenter itself by a couple of percent (that’s about -2% of total runs scored each season).
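That centering check is simple to run if you have the per-player numbers; a minimal sketch, assuming hypothetical column names:

```python
import pandas as pd

# Runs above average should sum to roughly zero across the league every
# season; group by year and sum. Toy rows stand in for the real data.
df = pd.DataFrame({
    "year": [2018, 2018, 2018],
    "draa": [25.0, -3.0, -22.5],
})
by_year = df.groupby("year")["draa"].sum()
print(by_year)  # anything far from 0 means the metric drifted off-center
```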

year   old DRAA   new DRAA
2010      210.8     -559.1
2011      127.9     -550.0
2012      226.8     -735.9
2013      190.4     -447.5
2014       33.7     -659.9
2015       60.1      -89.1
2016       63.3     -401.2
2017      -37.8     -318.3
2018      -50.2     -240.4

Breaking that down into full-time players (400+ PA), part-time position players (<400 PA), and pitchers, we get

2010-18 runs   old DRAA   new DRAA   wRAA
full-time         13912      11223   15296
part-time         -6033      -7850   -9202
pitchers          -7054      -7369   -6730
total               825      -3996    -636

I don’t know why it decided players suddenly deserved 4800 fewer runs, but here we are, and it took 520 offensive BWARP (10% of their total) away from the batters in this iteration too, so it didn’t recalibrate at that step either.  This isn’t an intentional change in replacement level or anything like that. It’s just the machine going haywire again without sufficient internal or external quality control.


US sports unions are so, so screwed

TL;DR It’s always a good time to be a billionaire, but when you get to exploit people with super-short prime earning periods, it’s even better.

I’ve been seeing chatter about potential upcoming labor unrest in the NFL; the NHL and NBA both had a stoppage this decade; and baseball players haven’t been very happy about the lack of progress on the Harper and Machado fronts.  Furthermore, this is an era where norms have been giving way to the raw exercise of power, so I thought it would be interesting to look at upcoming negotiations under the assumption that the owners were going to try to make more money and that the players were willing to be extremely antagonistic.

Sports labor negotiations are positive sum- if the games are played, owners and players alike are much better off, over a wide range of revenue splits, than if the games weren’t played.  Under an absolute take-it-or-leave-it-forever ultimatum, the players would be willing to play for far less, and the owners would be willing to pay the players more.  The former is true because the four leagues mentioned are destination leagues- there’s nowhere else to play baseball, football, basketball, or hockey for nearly as much money.  The owners would be willing to pay more because less profit is still better than no profit.  If there were alternative markets (MLS is nowhere close to a destination league for soccer, for example), the following analysis wouldn’t be relevant.

None of the owners are going to suggest a revenue split anywhere near the minimum players might accept in a pure ultimatum (KHL might pay 20% of NHL at the top end, NPB, KBO, and EuroLeague are much worse).  That would be reducing player revenue share from ~50% to ~5-10%.  Nobody’s stupid enough to even float a proposal like that.  How much higher the owners would be willing to go is a much more interesting question.

There are reports of team revenue and operating income (profit), but if you’re skeptical of those numbers, there’s a fairly safe way to estimate an upper bound on profit.  Whatever a franchise valuation is, would the owners still be happy to own it if they also had to dump X% of the valuation into a black hole every year? If X is 0.01%, sure- that’s a 400k/year extra cost to own the Yankees (4 billion franchise value), and that’s not going to move the needle at all.  They make far, far more than that.  If X is 20%, hell no- 800MM/year down the toilet to own the Yankees would be completely insane.  Even 5% (200MM) seems like a bad idea in normal times, but let’s run with that and see where it gets us.

League averages (millions)   Franchise Valuation   Revenue   Profit   5% valuation   Profit % of Revenue   Payroll %   Previous CBA %
MLB                          1645                  315       29       82.25           9                    54          N/A
NFL                          2500                  412.5     101      125            24                    ~48         53
NBA                          1650                  246.7     52       82.5           21                    ~50         50/57
NHL                          630                   157       25       31.5           16                    50          57

(Source: Forbes articles)

MLB is structured differently, so maybe the profit % actually is lower because teams bid against each other with no hard cap, or maybe it’s fudged lower because it’s not a number that has to be signed off on by the players.  Either way, players could attempt to capture 60% of revenue as payroll, and outside of MLB (and maybe even in MLB), the owners would say yes to an ultimatum- it’s not that far above previous CBA levels.  Let’s create a hypothetical league that’s an amalgam of the non-MLB leagues to work with and assume that the owners come with some proposal around 45% of revenue to payroll in the next CBA and the players counter with 60%.

League averages (millions)   Franchise Valuation   Revenue   Profit   Profit % of Revenue   Payroll %   Non-payroll expenses
Amalgam (current)            1800                  300       60       20                    50          30% / 90M
Owner offer                  1800                  300       75       25                    45          30% / 90M
Player offer                 1800                  300       30       10                    60          30% / 90M

If the owners cancel a season and win- the players come back the next year at 45%- then over 4 years total, they’ll make -90M*4 (expenses) - 45%*300M*3 (payroll) + 300M*3 (revenue) = 135MM in profit, and the players threw away 45%*300M = 135MM by holding out and then folding (and cost the owners 165MM).  If the owners had just accepted the player offer from the start, they’d have made 4*30MM = 120MM in profit.  So they make up for this really fast if they win.

On the player side, if they hold out and win- the owners agree to 60%- then in 4 years total time, they earn 3*60%*300 = 540MM, and if they’d just accepted the owner offer initially, they would have earned 4*45%*300 = 540MM, and the owners threw away 120MM by holding out and then folding.  The players also make up for this really fast if they win. (ignoring “harm to the game” effects which hurt both sides)
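The payoff arithmetic in the last two paragraphs can be written out as two tiny functions (values in $MM, using the amalgam league above: 300 revenue, 90 non-payroll expenses per year; the function names are mine, not anything from the CBAs):

```python
# Owner profit over a 4-calendar-year window when `seasons_played` seasons
# actually happen at a given payroll share of revenue.
def owner_profit(seasons_played, payroll_share, revenue=300, expenses=90, years=4):
    return seasons_played * revenue * (1 - payroll_share) - years * expenses

# Player payroll earned over the same window.
def player_earnings(seasons_played, payroll_share, revenue=300):
    return seasons_played * payroll_share * revenue

print(round(owner_profit(3, 0.45)))     # owners cancel a season and win at 45%
print(round(owner_profit(4, 0.60)))     # owners accept the player offer from day one
print(round(player_earnings(3, 0.60)))  # players hold out a season and win at 60%
print(round(player_earnings(4, 0.45)))  # players accept the owner offer from day one
```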

This looks like it might be a difficult kind of battle to handicap, but it’s not, for two main reasons.  The first is that the owner timescale is clearly longer than 4 years.  They can make decisions to maximize profit or future franchise value that far down the road, easily.  The sports unions however are not in the business of maximizing the amount of money that goes to players- they’re in the business of maximizing the amount of money that goes to *current voting members*, or more precisely, a bloc of current voting members large enough to certify a new agreement.

The last column in the top table shows the change from the previous CBA.  The NBA had a stoppage in 2011 when the owners tried to drop revenue from 57% to 47% with a harder salary cap, and after the stoppage, the players settled for 50% and a worse-but-not-as-bad-as-originally-offered cap change.  The NHL lockout in 2012 was close to what’s being discussed- the league was trying to drop salary from 57% to 43% (the reverse of a 32% increase) with a bunch of player-unfriendly contract terms as well, and settled for 50% without the contract issues.  The NFL lockout in 2011 (no games missed) was an attempt to drop from 53% to 42% and lengthen the season.  They settled for 48%.

Given that the average career length ranges from 3.5 years (NFL) to 5.6 (MLB) and medians are lower, it’s actually impressive leadership- or, more likely, complete player delusion about their expected future career length and anger about something they had being taken away- to get many takers on a threatened holdout that only pays off if you’re still playing 4+ years later. The owners won- huge- in all three lockouts. NHL and NBA owners got an extra 7% of revenue over 10 years at the cost of a few percent of revenue in the first year.  NFL owners got 5% extra for 10 years for nothing.

Perhaps, if NBA and NHL had aimed a little more conservatively (say, proposing 57% to 50% with the intent of settling at 52% with no games missed), they could have come out even better, but it’s not clear that they would have.  As it was, the NHLPA offered settlements at 54% instead of even trying to fully defend its territory, as did the NBPA at 53%, and they might have stuck harder to those numbers in the face of a more reasonable proposal.

It’s hard to find any example of the players outright winning a labor dispute or CBA negotiation since 1990.  Even following 1994 MLB, the players conceded ground- they averted disaster, in that they avoided the salary cap and hard-line revenue sharing, but they agreed to luxury tax numbers, and that’s just one of a number of anti-spending measures MLB has adopted since.  They can’t directly negotiate salary percentages down, so instead they reduce the club-level financial rewards of winning to limit salary growth.  Every form of revenue sharing, luxury tax, lost free agent compensation, etc. decreases the marginal revenue from spending and thereby works to suppress payroll.

Players might be able to fight back and get a consensus somewhere around a 50% jump (40% of revenue to 60%) if it were guaranteed to succeed, but of course, it’s not.  The owners would say yes to a pure ultimatum, but how do the players make it an ultimatum? It’s well-known that the best strategy in a game of chicken between non-suicidal players is to be the first one to throw your steering wheel out the window where the other person can see it.  By visibly taking away your options, you’ve left the other player in a swerve or die scenario, and you win.  Unfortunately for the players, they have no way to do that, and they’ve been demonstrably weak in every sport even when they’ve taken it to a holdout.  Against that backdrop, the owners haven’t quite thrown their steering wheel away, but the players should have absolutely no expectation that the owners will be in any kind of a hurry to use it.

The closest to strong the players have been is 1994 MLB, and that was the league trying to unilaterally impose a salary cap and revenue sharing and preceded by the owners blatantly colluding to suppress free-agent contracts.  Not “collusion” in quotes, but literally the commissioner publicly telling teams that long contracts were bad and the owners paying out multiple settlements for hundreds of millions of 1990s dollars.  And in the face of all of that, the players only stayed where they were and then conceded ground shortly after.  “Winning”, for a modern sports union, is now defined as “not losing ground horribly”.

The takeaway from that is that even if player share of revenue continues to drop to the 40% range, where the players appear to have a reasonably credible ultimatum-level threat, they still won’t have one, because they’ll just fold to a lesser offer.  If players were trying to go from 40% to 60%, and the ownership (miraculously) countered with 55%, the players would trip over themselves to ratify that agreement.  And they’d do the same thing at 50% and 45%.  Assuming they have the self-awareness to understand that in advance (the owners certainly do), they know they don’t have a credible threat at 40% (sitting out a season to go from 40% to 45% is moronic even with guaranteed success).  And in the same vein, sitting out a season to avoid going from 50% to 45% feels worse, because they’re benchmarked at 50%, but it’s equally moronic.

It’s also moronic for the owners to actually follow through with a cancellation, but the players have folded so many times in a row that it’s a well-calculated risk at this point that they’ll fold again before too much damage is done.  Because of their shorter timescale, a holdout cuts far more sharply against the players’ individual dollar self-interest, and their marginal utility of money is much higher than that of the zillionaire owners/conglomerates, so they’re only likely to stay irrational for so long.  The true floor of what the players will play for is still nowhere in sight IMO.

The upcoming NFL renegotiation in 2021 has all the makings of a total bloodbath for the players.  The NFL is in the worst position to defend itself, with the lowest career length, and yet the union is already saber-rattling, two years in advance, with talk of reclaiming what they lost in the last CBA, and players like Richard Sherman are saying players have to be willing to strike.  That’s true, but… being willing to strike doesn’t mean you’re actually going to get your money back, and if players really are willing to strike without realizing that there’s a good chance it just completely blows up in their faces, and a very high chance that most individuals come out worse even if they somehow fully win the dispute after only missing half a season… well, good luck with that.  The players are going to come in thinking they’re going to make gains, and if the owners channel their inner Nate Diaz and give the players the double birds while they wait for the inevitable tapout at something under 45% of revenue… well, I guess I can pat myself on the back.  !remindme 2 years.

Their best hope, and it’s a slim one at that, is that the NFL owners simply aren’t in a mood for a fight.  The NBPA skated through a negotiation period in 2017 with minimal changes (and the ones approved look to me to be more like “good governance of league operations” agreements than one side trying to get over on the other), most likely because leaguewide revenues were absolutely exploding along with attendance, TV ratings, merchandise sales, etc. and neither side wanted to battle when they were both making more money than they’d even dreamed of a couple of years prior.  Maybe the NFLPA knows it has no chance in a lockout and is just trying to bluff the owners into not fighting or into aiming for fewer concessions- after all, the head of the union isn’t getting elected over and over by telling the membership that they’re all going to bend over and take it every time the owners come looking for more, even if he knows that’s true.

On the other hand, MLB players who’ve spoken out appear to be confused on a different level.  They think owners have started colluding again, and while I can’t rule that out, especially given their history, the situation appears to me to be explainable by a confluence of three factors.  First, teams are much smarter analytically and realize that big free-agent contracts to older players have been piss-poor investments (and may actually be getting worse post-steroid-era).  Second, teams are spending with more of an eye to marginal revenue than ever before.  Third, the anti-spending measures MLB has been winning concessions on for at least the last 25 years have really started coming home to roost.  Teams have been explicitly not spending money because of the luxury tax, and it should have been obvious that this sort of thing would happen more.  The owners wouldn’t have been harping on anti-spending measures for longer than most of the players have been alive if they hadn’t expected it to yield dividends.

That being said, MLB players are *still* in a better position than the other three leagues, although it’s likely to keep decaying, and trying to get much more money is like blood from a stone at this point, especially if the operating revenue estimates above are close to accurate.  MLB is harder to understand than “bargain for X% of revenue, then talk about how it gets divided” leagues, but the players- or at least enough of them that an informed union can negotiate on reality-based terms- need to understand that they’re 100% “getting screwed” currently by the concessions they’ve repeatedly made to the owners since the 1994 stoppage and most likely not getting screwed harder by a sudden recurrence of prohibited behavior.


2/05/19 DRC+ update- some partial fixes, some new problems

BP released an update to DRC+ yesterday purporting to fix/improve several issues that have been raised on this blog.  One thing didn’t change at all though- DRC+ still isn’t a hitting metric.  It still assigns pitchers artificially low values no matter how well they hit, and the areas of superior projection (where actually true) are largely driven by this.  The update claimed two real areas of improvement.


The first is in treating outlier players.  As discussed in “C’mon Man- Baseball Prospectus DRC+ Edition”, by treating player seasons individually and regressing them instead of treating careers, DRC+ will continually fail to realize that outliers are really outliers.  Their fix is, roughly, to make a prior distribution based on all player performances in surrounding years, and hopefully not regress the outliers as much because it realizes something like them might actually exist.  That mitigates the problem a little, sometimes, but it’s still an essentially random fix.  Some cases previously mentioned look better, and others, like Don Kessinger vs. Larry Bowa, still don’t make any sense at all.  They’re very similar offensive players, in the same league, overlapping in most of their careers, and yet Kessinger gets bumped from a 72 wRC+ to an 80 DRC+ while Bowa only goes from 70 to 72, even though Kessinger was *more* TTO-based.

To their credit- or at least to credit their self-awareness- they seem to know that their metric is not reliable at its core for valuation.  Jonathan Judge says:

“As always, you should remember that, over the course of a career, a player’s raw stats—even for something like batting average—tend to be much more informative than they are for individual seasons. If a hitter consistently seems to exceed what DRC+ expects for them, at some point, you should feel free to prefer, or at least further account for, the different raw results.”

Roughly translated, “Regressed 1-year performance is a better estimate of talent than 1-year raw performance, but ignoring the rest of a player’s career and re-estimating talent 1 year at a time can cause discrepancies, and if it does, trust the career numbers more.”  I have no argument with that.  The question remains how BP will actually use the stat- if we get more fluff pieces on DRC+ outliers who are obviously just the kind of career discrepancies Judge and I talked about, that’s bad.  If it is mainly used to de-luck balls in play for players who haven’t demonstrated that they deserve much outlier consideration, that’s basically fine and definitely not the dumbest thing I’ve seen lately.


This, on the other hand, well might be.

player             year   PA   BB   DRC+   SD   runs
Mark Melancon      2011    1    1     -3    2   -0.1
Dan Runzler        2011    1    1    -17    2   -0.1
Matt Guerrier      2011    1    1    -13    2   -0.1
Santiago Casilla   2011    1    1    -12    2   -0.1
Josh Stinson       2011    1    1    -15    2   -0.1
Jose Veras         2011    1    1    -14    2   -0.1
Javy Guerra        2011    1    1    -15    2   -0.1
Joey Gathright     2011    1    1     81    1    0

Not just the blatant cheating (Gathright is the only position player on the list), but the DRC+ SDs make no sense.  Based on one identical PA, DRC+ claims that there’s a 1 in hundreds of thousands chance that Runzler is a better hitter than Melancon and also assigns negative runs to a walk because a pitcher drew it.  The DRC+ SDs were pure nonsense before, but now they’re a new kind of nonsense. These players ranged from 9-31 SD in the previous iteration of DRC+, and while the low end of that was still certainly too low, SDs of 1-2 are beyond absurd, and the fact that they’re that low *only for players with almost no PAs* is a huge red flag that something inside the black box is terribly wrong.  Tango recently explored the SD of wRC+/WAR and found that the SDs should be similar for most players with the same number of PA.  DRC+ SDs done correctly could legitimately show up as slightly lower, because they’re the SD of a regressed stat, but that’s with an emphasis on slightly.  Not SDs of 1 or 2 for anybody, and not lower SDs for pitchers and part-time players who aren’t close to a season full of PAs.
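The back-of-envelope version of that sanity check: sampling error on any rate stat shrinks like 1/sqrt(PA), so the SD for a 1-PA player should dwarf that of a full-time player- the opposite of the table above.

```python
import math

# Relative sampling uncertainty of a rate stat as a function of PA.
def relative_sd(pa):
    return 1 / math.sqrt(pa)

ratio = relative_sd(1) / relative_sd(600)
print(round(ratio, 1))  # a 1-PA sample is ~24.5x as uncertain as a 600-PA season
```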

Park Adjustments

I’d observed before that DRC+ still contains a lot of park factor and they’ve taken steps to address this.  They adjusted Colorado hitters more in this iteration while saying there wasn’t anything wrong with their previous park factors.  I’m not sure exactly how that makes sense, unless they just weren’t correcting for park factor before, but they claim to be park-isolated now and show a regression against their park factors to prove it.  Of course the key word in that claim is THEIR park factors.  I reran the numbers from the linked post with the new DRC+s, and while they have made an improvement, they’re still correlated to both Fangraphs park factor and my surrounding-years park factor estimate at the r=0.17-0.18 level, with all that entails (still overrating Rockies hitters, for one, just not by as much).


DRC+ and Team Wins

A reader saw a television piece on DRC+, googled and found this site, and asked me a simple question: how does a DRC+ value correlate to a win? I answered that privately, but it occurred to me that team W-L record was a simple way to test DRC+’s claim of superior descriptiveness without having to rely on its false claim of being park-adjusted.

I used seasons from 2010-2018, with all stats below adjusted for year and league- i.e. the 2018 Braves are compared to the 2018 NL average.  Calculations were done with runs/game and win% since not all seasons were 162 games.

Team metric        r^2 to team winning %
Run Differential   0.88
wRC+               0.47
Runs Scored        0.43
OBP                0.38
wOBA               0.37
OPS                0.36
DRC+               0.35

Run differential is cheating of course, since it’s the only one on the list that knows about runs allowed, but it does show that at the seasonal level, scoring runs and not allowing them is the overwhelming driver of W-L record and that properly matching RS to RA- i.e. not losing 5 1-run games and winning a 5-run game to “balance out”- is a distant second.
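For anyone who wants to replicate the table, the test itself is one line of correlation per metric; a sketch with made-up inputs (the real version used the 2010-2018 team seasons, league-adjusted as described above):

```python
import numpy as np

# r^2 between a team offensive metric and team winning percentage.
def r_squared(x, y):
    r = np.corrcoef(np.asarray(x, float), np.asarray(y, float))[0, 1]
    return r * r

# Toy team-season values, for illustration only.
team_wrc_plus = [110, 95, 102, 88, 120, 99]   # relative to league average
team_win_pct = [0.58, 0.46, 0.51, 0.42, 0.60, 0.49]
print(round(r_squared(team_wrc_plus, team_win_pct), 2))
```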

Good offense is based on three major things- being good, sequencing well, and playing in a friendly park.  Only the first two help you to outscore your opponent who’s playing the game in the same park, and Runs Scored can’t tell the difference between a good offense and a friendly park.  As it turns out, properly removing park factor noise (wRC+) is more important than capturing sequencing (Runs Scored).

Both clearly beat wOBA, as expected, because wRC+ is basically wOBA without park factor noise, and Runs Scored is basically wOBA with sequencing added.  OBP beating wOBA is kind of an accident- wOBA *differential* would beat OBP *differential*- but because park factor is more prevalent in SLG than OBP, offensive wOBA is more polluted by park noise and comes out slightly worse.

And then there’s DRC+.  Not only does it not know sequencing, it doesn’t even know what component events (BB, 1B, HR, etc) actually happened, and the 25% or so of park factor that it does neutralize is not enough to make up for that.  It’s not a good showing for the fancy new most descriptive metric ever when it’s literally more valuable to know a team’s OBP than its DRC+ to predict its W-L record, especially when wRC+ crushes the competition at the same task.


Mashers underperform xwOBA on air balls

Using the same grouping methodology as “The Statcast GB speed adjustment seems to capture about 40% of the speed effect”, except using barrel% (barrels/batted balls) to group hitters, I got the following for air balls (FB, LD, popups):

barrel group   air BA-xBA   air wOBA-xwOBA   n
high-barrel%    0.006           -0.005       22993
avg             0.006            0.010       22775
low-barrel%    -0.002            0.005       18422

These numbers get closer to the noise range (+/- 0.003), but mashers simultaneously OUTPERFORMING on BA while UNDERPERFORMING on wOBA while weak hitters do the opposite is a tough parlay to hit by chance alone because any positive BA event is a positive wOBA event as well.  The obvious explanation to me, which Tango is going with too, is that mashers just get played deeper in the OF, and that that alignment difference is the major driver of what we’ve each measured.


The Statcast GB speed adjustment seems to capture about 40% of the speed effect

Statcast recently rolled out an adjustment to its ground ball xwOBA model to account for batter speed, and I set out to test how well that adjustment was doing.  I used 2018 data for players with at least 100 batted balls (n=390).  To get a proxy for sprint speed, I used the average difference between the speed-unadjusted xwOBA and the speed-adjusted xwOBA for ground balls.  Billy Hamilton graded out fast.  Welington Castillo didn’t.  That’s good.  Grouping the players into thirds by their speed-proxy, I got the following
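The proxy construction can be sketched like this, with assumed column names (not the actual Statcast field names): per player, average the gap between speed-adjusted and unadjusted GB xwOBA, then cut the players into thirds. Fast players get boosted by the adjustment, so a bigger positive gap means faster.

```python
import pandas as pd

# Toy ground-ball rows for three hypothetical players.
df = pd.DataFrame({
    "player": ["A", "A", "B", "B", "C", "C"],
    "xwoba_basic": [0.20, 0.25, 0.22, 0.18, 0.30, 0.26],
    "xwoba_speed_adj": [0.24, 0.30, 0.22, 0.18, 0.27, 0.24],
})
# Per-player mean gap between adjusted and unadjusted GB xwOBA.
proxy = (df["xwoba_speed_adj"] - df["xwoba_basic"]).groupby(df["player"]).mean()
# Cut into thirds by the proxy; higher gap = faster.
groups = pd.qcut(proxy, 3, labels=["slow", "avg", "fast"])
print(groups.sort_index())
```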


speed   Actual GB wOBA   basic xwOBA   speed-adjusted xwOBA   Actual - basic   Actual - speed-adjusted   n
slow    0.215            0.226         0.215                  -0.011           0.000                     14642
avg     0.233            0.217         0.219                   0.016           0.014                     16481
fast    0.247            0.208         0.218                   0.039           0.029                     18930

The slower players seem to hit the ball better on the ground according to basic xwOBA, but they still have worse actual outcomes.  We can see that the fast players outperform the slow ones by 50 points in unadjusted wOBA-xwOBA and only 29 points after the speed adjustment.


DRC+ isn’t even a hitting metric

At least not as the term is used in baseball.  Hitting metrics can adjust for nothing (box score stats, AVG, OBP, etc), league and park (OPS+, wRC+, etc), or more detailed conditions (opposing pitcher and defense, umpire, color of the uniforms, proximity of Snoop Dogg, whatever).  They don’t adjust for the position played.  Hitting is hitting, regardless of who does it.  Unless it’s not.  While fooling around some more with the data for “DRC+ really isn’t any good at predicting next year’s wOBA for team switchers” and “The DRC+ team-switcher claim is utter statistical malpractice”, it looked for all the world like DRC+ had to be cheating, and it is.

To prove that, I looked at seasons with exactly 1 PA and 1 unintentional walk for the entire season, and the DRC+ for those seasons.

Audry Perez
Spencer Kieboom
John Hester
Joey Gathright
Red Sox
J.C. Boscan
Mark Melancon
George Sherrill
Antonio Bastardo
Dan Runzler
Jose Veras
Matt Reynolds
Tony Cingrani
Antonio Bastardo
Javy Guerra
Josh Stinson
Aaron Thompson
Brandon League
J.J. Hoover
Santiago Casilla
Jason Garcia
Chris Capuano
Edubray Ramos
Matt Guerrier
Liam Hendriks
Blue Jays
Phillippe Aumont
Randy Choate
Joe Blanton
Jacob Barnes
Sean Burnett
Robert Carson

That’s a pretty good spread.  The top 5 are position players, the rest are pitchers.  DRC+ is blatantly cheating by assigning pitchers very low DRC+ values even when their offensive performance is good and not doing the same for 1-PA position players.  wOBA and wRC+ don’t do this, as evidenced by Kieboom (#5) right there with 3 pitchers with the same seasonal stat line.  It’s also not using data from prior seasons because that was Kieboom’s only career PA to date, and when Livan Hernandez debuted in 1996 for one game with 1 PA and 1 single, he got a DRC+ of -14 for his efforts.  It’s just cheating, period.  And it doesn’t learn either.  Even when Bumgarner was hitting in 2014-2017, his DRC+s were -15, 4, -17, and -19.
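The probe itself is trivial to reproduce given a season-level table; a sketch with assumed column names and toy rows (the DRC+ values are the ones quoted in this post):

```python
import pandas as pd

# Filter to seasons with exactly 1 PA and 1 unintentional walk, then
# compare the DRC+ handed to pitchers vs. position players.
seasons = pd.DataFrame({
    "player": ["Joey Gathright", "Mark Melancon", "Robert Carson"],
    "is_pitcher": [False, True, True],
    "pa": [1, 1, 1],
    "ubb": [1, 1, 1],
    "drc_plus": [81, 15, -43],
})
one_walk = seasons[(seasons["pa"] == 1) & (seasons["ubb"] == 1)]
print(one_walk.groupby("is_pitcher")["drc_plus"].mean())
# identical stat lines, wildly different grades depending on position
```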

I also included the DRC+ SDs here just to show that they’re complete nonsense.  Pitcher Mark Melancon (15 +/- 14) has one career PA. Pitcher Robert Carson (-43 +/- 7) also has one career PA. Pitcher Randy Choate (-28 +/- 52) had one PA that year and 5 a decade earlier.  What in the actual fuck?

The entire DRC+ project is a complete farce at this point.  The outputs are a joke.***  The SD values are nonsense (table above).  The pillars it stands on are complete bullshit.  It’s more descriptive of the current season than park-adjusted stats because it’s not anywhere near a park-adjusted stat, even though it claims to be.  It’s more predictive than park-adjusted stats for next year’s team because it’s somewhat regressed, meaning it basically can’t lose, and it’s also cheating the same way descriptiveness does by keeping a bunch of park factor.  Its claimed “substantial improvement over predicting wOBA for team switchers” is statistical malpractice to begin with, and now we see that the one area where it did predict significantly better than regressed wOBA, very-low-PA players, is driven by (almost) ignoring actual results for pitchers and saying they sucked at the plate no matter how well they really hit (and treating low-PA position players with the exact same stat lines as average-ish).

***Check out DRA- land where Billy Wagner is 26 percent more valuable on a per-inning basis than Mariano Rivera and almost as valuable for his career.  I love Billy Wagner, but still, come on.

RIP 12/29/2018.  Comment F to pay respects.