DRC+ still contains a lot of park factor

Required knowledge: DRC+ and park factors

TL;DR: read the title above, the rant 3 paragraphs down, and the very bottom.

DRC+ is supposed to be a fully park-adjusted metric, but from the initial article, I couldn’t understand how that could be consistent with the reported results without either an exceptional amount of overfitting or extremely good luck.  Team DRC+ was reported to be more reliable than team wRC+ at describing the SAME SEASON’s team runs/PA.  Since wRC+ is based on wOBA, team wOBA basically is team scoring offense (r=0.94), and DRC+ regresses certain components of wOBA back towards the mean quite significantly (which is why DRC+ is structurally unfit for use in WAR), it made no sense to me that a metric that took actual hits that created actual runs away from teams with good BABIPs and invented hits in place of actual outs for teams with bad BABIPs could possibly correlate better to actual runs scored than a metric that used what happened on the field.  It’s not quite logically impossible for that to be true, but it’s pretty damn close.

It turns out the simple explanation for how a park-adjusted, significantly regressed metric beat a park-adjusted, unregressed metric is the correct one.  It didn’t.  DRC+ keeps in a bunch of park factor and calls itself a park-adjusted metric when it’s simply not one, and not even close to one.  The park factor table near the bottom of the DRC+ article should have given anybody who knows anything about baseball serious pause, and of course it fits right in with DRC+’s “great descriptiveness”.

RANT

How in the hell does a park factor of 104 for Coors get published without explanation by any person or institution trying to be serious?  The observed park factors (halved) for the last few years, in reverse order: 114 (2018), 115, 116, 117, 120, 109, 123… You can’t throw out a number like Coors 104 like it’s nothing.  If Jonathan Judge could actually justify it somehow- maybe last year we got a fantastic confluence of garbage pitchers and great situational hitting at Coors and the reverse on the road while still somehow only putting up a 114, where you could at least handwave an attempt at a justification- then he should have made that case when he was asked about it, but instead he gave an answer indicative of never having taken a serious look at it.  Spitting out a 104 for Coors should have been like a tornado siren going off in his ear to do basic quality control checks on park effects for the entire model, but it evidently wasn’t, so here I am doing it instead.

/RANT

The basic questions are “how correlated is team DRC+ to home park factor?” and “how correlated should team DRC+ be to home park factor?”.  The naive answer to the second question is “not correlated at all since it’s park adjusted, duh”, but it’s possible that the talented hitters skew towards hitters’ parks, which would cause a legitimate positive correlation, or that they skew towards pitchers’ parks, which would cause a legitimate negative correlation.  As it turns out, over the 2003-2017 timeframe, hitting talent doesn’t skew at all, but that’s an assertion that has to be demonstrated instead of just assumed true, so let’s get to it.

We need a way to make (offensive talent, home park factor) team-season pairs that can measure both components separately without being causally correlated to each other.  Seasonal team road wOBA is a basically unbiased way to measure offensive quality independent of home park factor because the opposing parks played in have to average out pretty similarly for every team in the same league (AL/NL)**.  If we use that, then we need a way to make a park factor for those seasons that can’t include that year’s data, because, everything else being equal, an increase in a team’s road wOBA would decrease its home park factor****, and we’re explicitly trying to avoid nonsense like that.  Using the observed park factors from *surrounding years*, not the current year, to estimate the current year’s park factor solves that problem, assuming those estimates don’t suck.

** there’s a tiny bias from not playing road games in a stadium with your park factor, but correcting that by adding a hypothetical 5 road games at estimated home park factor doesn’t change conclusions

**** some increase will be skill that will, on average, increase home wOBA as well and mostly cancel out, and some increase will be luck that won’t cancel out and would screw the analysis up

Methodology

I used all eligible team-batting-seasons, pitchers included, from 2003-2017.  To estimate park factors, I used the surrounding 2 years (T-2, T-1, T+1, T+2) of observed park factors (for runs) if they were available, the surrounding 1 year (T-1, T+1) otherwise, and threw out the season if I didn’t have those.  That means I threw out all 2018s as well as the first and last years in each park.  I ignored other changes (moved fences, etc).
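For concreteness, here’s a minimal sketch of that estimator (the function name is mine, and the simple averaging of the available surrounding-year observed PFs is my assumption about the obvious implementation):

```python
def estimate_park_factor(observed_pf, year):
    """Estimate year T's park factor using only surrounding seasons.

    observed_pf: dict mapping season -> observed (halved) run park factor.
    Uses the mean of T-2/T-1/T+1/T+2 if all four exist, else the mean of
    T-1/T+1, else None (the team-season gets thrown out of the sample).
    """
    two_year = [observed_pf.get(year + d) for d in (-2, -1, 1, 2)]
    if None not in two_year:
        return sum(two_year) / 4
    one_year = [observed_pf.get(year + d) for d in (-1, 1)]
    if None not in one_year:
        return sum(one_year) / 2
    return None

# A park observed at {2013: 108, 2014: 104, 2015: 102, 2016: 106, 2017: 105}
# gets a 2015 estimate of (108 + 104 + 106 + 105) / 4 = 105.75.
```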

Because I have no idea what DRC+ is doing with pitcher-batters, how good its AL-NL benchmarking is, and the assumption of nearly equivalent aggregate road parks is only guaranteed to hold between same-league teams, I did the DRC+ analysis separately for AL and NL teams.

To control for changing leaguewide wOBA in the 2003-2017 time period, I used the same wOBA/LgAvgwOBA wOBA% method I used in “DRC+ really isn’t any good at predicting next year’s wOBA for team switchers” for wOBA and DRC+, just for AL teams and NL teams separately for the reasons above.  After this step, I did analyses with and without Coors because it’s an extreme outlier.  We already know with near certainty that their treatment of Coors is batshit crazy and keeps way too much park effect in DRC+, so I wanted to see how they did everywhere else.

Results

The park factor estimation worked pretty well.  The 2-surrounding-year PF correlated to the observed PF for the year in question at r=0.54 (0.65 with Coors) and the 1-surrounding-year PF at r=0.52 (0.61 with Coors).  The 5-year FanGraphs PF, WHICH USES THE YEAR IN QUESTION, only correlates at r=0.7 (0.77 with Coors), and the 1- and 2-year park factors correlate to the FanGraphs PF at 0.87 and 0.96 respectively.  This is plenty to work with given the effect sizes later.

Team road wOBA% (squared or linear) correlates to the estimated home park factor at r = -0.03, literally nothing, and with the 5 extra hypothetical games as mentioned in the footnote above, r=0.02, also literally nothing.  It didn’t have to be this way, but it’s convenient that it is.  Just to show that road wOBA isn’t all noise, it correlates to that season’s home wOBA% at r=0.32 (0.35 with the adjustment) even though we’re dealing with half seasons and home wOBA% contains the entire park factor.  Road wOBA% correlates to home wOBA%/sqrt(estimated park factor) at r=0.56 (and wOBA%/park factor at r=0.54).  That’s estimated park factor from surrounding years, not using the home and road wOBA data in question.

Home wOBA% is obviously hugely correlated to estimated park factor (r=0.46 for home wOBA%^2 vs estimated PF), but park adjusting it by correlating

(home wOBA%)^2/estimated park factor TO estimated park factor

has r = -0.00017.  Completely uncorrelated to estimated PF (it’s pure luck that it’s THAT low).
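Here’s a toy numpy simulation of that check (all synthetic data, just to show the shape of the test): home wOBA%^2 carries the park factor, and dividing the estimated PF back out should kill the correlation.

```python
import numpy as np

# Synthetic talent and park factors, with home wOBA% inflated by
# sqrt(PF) since runs scale roughly with wOBA^2.
rng = np.random.default_rng(0)
est_pf = rng.normal(100, 4, 400)                 # estimated park factors
talent = rng.normal(100, 5, 400)                 # park-free offense
home_woba_pct = talent * np.sqrt(est_pf / 100)   # park inflates home wOBA%

raw_r = np.corrcoef(home_woba_pct**2, est_pf)[0, 1]                   # large
adj_r = np.corrcoef(home_woba_pct**2 / (est_pf / 100), est_pf)[0, 1]  # ~0
print(round(raw_r, 3), round(adj_r, 3))
```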

So we’ve established that road wOBA really does contain a lot of information on a team’s offensive talent (that’s a legitimate naive “duh”), that it’s virtually uncorrelated to true home park factor, and that park-adjusted home wOBA% (using PF estimates from other seasons only) is also uncorrelated to true home park factor.  If DRC+ is a correctly park-adjusted metric that measures offensive talent, DRC+% should also have to be virtually uncorrelated to true home park factor.

And… the correlation of DRC+% to estimated park factor is r= 0.38 for AL teams, r=0.29 for NL teams excluding Colorado, r=0.31 including Colorado.  Well then.  That certainly explains how it can be more descriptive than an actually park-adjusted metric.

 

The DRC+ team-switcher claim is utter statistical malpractice

Required knowledge: MUST HAVE READ/SKIMMED “DRC+ really isn’t any good at predicting next year’s wOBA for team switchers”; a non-technical knowledge of what a correlation coefficient means wouldn’t hurt.

While doing the research for the other post, I was baffled by what BP could have been doing to come up with the claim that DRC+ was a revolutionary advance for team-switchers.  It became completely obvious that there was nothing particularly meaningful there with respect to switchers and that it would take a totally absurd way of looking at the data to come to a different conclusion.  With that in mind, I clicked some buttons and stumbled into figuring out what they had to be doing wrong.  One would assume that any sophisticated practitioner doing a correlation where some season pairs had 600+ PA each and other season pairs had 5 PA each would weight them differently… and one would be wrong.

I decided to check 4 simple ways of weighting the correlation- unweighted, by year T PA, by year T+1 PA, and by the harmonic mean of year T PA and year T+1 PA.
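For anyone who wants to replicate this, a minimal weighted-Pearson sketch (x, y, pa_t, and pa_t1 are placeholder numpy arrays for the two seasons’ stats and PAs):

```python
import numpy as np

def weighted_pearson(x, y, w):
    """Pearson correlation with observation weights w."""
    w = np.asarray(w, dtype=float) / np.sum(w)
    mx, my = np.sum(w * x), np.sum(w * y)
    cov = np.sum(w * (x - mx) * (y - my))
    return cov / np.sqrt(np.sum(w * (x - mx)**2) * np.sum(w * (y - my)**2))

def all_four_weightings(x, y, pa_t, pa_t1):
    """The four weighting schemes checked in the tables below."""
    return {
        "Harmonic":    weighted_pearson(x, y, 2 * pa_t * pa_t1 / (pa_t + pa_t1)),
        "Year T PA":   weighted_pearson(x, y, pa_t),
        "Year T+1 PA": weighted_pearson(x, y, pa_t1),
        "unweighted":  weighted_pearson(x, y, np.ones(len(x))),
    }
```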

Table 1.  Correlation coefficients to year T+1 wOBA% by different weighting methods, minimum 400 PAs year T.

400+ PA       Harmonic   Year T PA   Year T+1 PA   unweighted      N
switch wOBA       0.34        0.35          0.34         0.34    473
switch DRC+       0.35        0.35          0.34         0.35    473
same wOBA         0.55        0.53          0.55         0.51   1124
same DRC+         0.57        0.55          0.57         0.54   1124

The way to read this chart is to compare the wOBA and DRC+ correlations for each group of hitters- switch to switch (rows 1 and 2) and same to same (rows 3 and 4).  It’s obvious that wOBA should correlate much better for same than switch because it contains the entire park effect, which is maintained in “same” and lost in “switch”, but DRC+ behaves the same way because DRC+ also contains a lot of park factor even though it shouldn’t.

In the 400+ year T PA group, the choice of weighting method is almost completely irrelevant. DRC+ correlates marginally better across the board and it has nothing to do with switch or stay.  Let’s add group 2 to the mix and see what we get.

Table 2.  Correlation coefficients to year T+1 wOBA% by different weighting methods, minimum 100 PAs year T.

100+ PA       Harmonic   Year T PA   Year T+1 PA   unweighted      N
switch wOBA       0.31        0.29          0.29         0.26   1100
switch DRC+       0.33        0.31          0.32         0.29   1100
same wOBA         0.51        0.47          0.50         0.44   2071
same DRC+         0.54        0.51          0.53         0.47   2071

The values change, but DRC+’s slight correlation lead doesn’t, and again, nothing is special about switchers except that they’re overall less reliable.  Some of the gaps widen by a point or two, but there’s no real sign of the impending disaster when the low-PA stuff that favors DRC+ comes in.  But what a disaster there is…

Table 3.  Correlation coefficients to year T+1 wOBA% by different weighting methods, all season pairs.

1+ PA         Harmonic   Year T PA   Year T+1 PA   unweighted      N
switch wOBA       0.45        0.41          0.38         0.37   1941
switch DRC+       0.54        0.47          0.58         0.57   1941
same wOBA         0.62        0.58          0.53         0.52   3639
same DRC+         0.67        0.62          0.66         0.66   3639

The two weightings (Harmonic and Year T) that minimize the weight of low-data garbage projections stay saner, and the two methods that don’t (Year T+1 and unweighted) go bonkers and diverge by around what BP reports.  If I had to guess, I have more pitchers in my sample for a slightly bigger effect, and regressed DRC+% correlates a bit better.  And to repeat yet again, the effect has nothing to do with stay/switch.  It’s entirely a mirage based on flooding the sample with bunches of low-data garbage projections based on handfuls of PAs and weighting them equally to pairs of qualified seasons.

You might be thinking that that sounds crazy and wondering why I’m confident that’s what really happened.  Well, as it turns out- and I didn’t realize this until after the analysis- they actually freaking told us that’s what they did.  The caption for the chart is “Table 3: Reliability of Team-Switchers, Year 1 to Year 2 wOBA (2010-2018); Normal Pearson Correlations”.  Normal Pearson correlations are unweighted. Mystery confirmed solved.

 

DRC+ really isn’t any good at predicting next year’s wOBA for team switchers

UPDATE:

The decent performance I get for DRC+ projecting low-PA players is from assigning pitchers terrible DRC+s no matter how well they hit.  The rest of the post is fine, but this is all even more bullshit than I realized at the time of writing.

/UPDATE

 

Required knowledge: wOBA and DRC+

Part 2: The DRC+ team-switcher claim is utter statistical malpractice

TL;DR: raw DRC+ is a little better overall than projecting everybody to be league average, but actually worse than that for team-switchers. Best-regressed DRC+ has about a 2.5-point MAE improvement over “everybody hits league average” for switchers and a 1 point MAE improvement over best-regressed wOBA. Regressed DRC+ has a huge advantage projecting very-low-PA seasons and starts losing to regressed wOBA around the 300 PA mark.

Having already demonstrated and explained why DRC+ is structurally unfit for use in WAR/BWARP, I ran this next experiment to test the claims here and here that DRC+ was something special when it came to projecting next year’s wOBA for team-switchers.  It fails that test convincingly, but a little regression work gives a decent projection for players that *don’t* switch.  Unfortunately for DRC+, that prediction is only marginally better overall than the same methodology using wOBA, and only has a real advantage at the low-PA end.  Regressed wOBA rules the high-PA end.

Methodology Overview

To test their claim, and to account for leaguewide wOBA changing every year, I normalized every batter-season’s wOBA onto a 100 scale by taking (batter wOBA)/(league average wOBA for that season) * 100.  I’ll call that wOBA% from now on.  Normalization to wOBA% makes sense because many of the factors that influence leaguewide wOBA changes in the upcoming year, from the changing strike zone to the changing baseball itself, are not something DRC+ tries to predict- or ever should try to predict.  Using wOBA% instead of raw wOBA removes a good deal of nonsense noise at no cost.

Team switchers were not explicitly defined, but since the test sample must be composed of players who had PAs in consecutive seasons, I’m defining a team switcher as anybody who didn’t appear entirely for the same team in both years (e.g. half a season for team A followed by 1.5 seasons for team B is a team switcher).

I also normalized DRC+ to DRC+% similarly since it was coming out a bit under 100, but since DRC+ is on the run scale, I used 100*Sqrt(Max(DRC+,0)/100) to put it on the same scale as wOBA.  Seasons from 2010-2018 were used, although 2010 was only used to project 2011.  Every pair of consecutive seasons where a player had at least 1 PA was eligible.
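In code, those two normalizations look something like this (a sketch; the exact order of the sqrt transform and the league renormalization is my reconstruction):

```python
def woba_pct(woba, lg_woba):
    """wOBA on a 100 scale: 100 * wOBA / league-average wOBA."""
    return 100.0 * woba / lg_woba

def drc_on_woba_scale(drc_plus):
    """DRC+ is on the run scale; the square root drops it back to the
    wOBA scale (e.g. a 121 DRC+ maps to 110).  The result is then
    renormalized to a league average of 100, the same way as wOBA%."""
    return 100.0 * (max(drc_plus, 0.0) / 100.0) ** 0.5
```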

MAEs were calculated weighted by the harmonic mean of the PAs in year T and year T+1.  Best regressions were determined the same way (using all pairs of seasons, not just switchers).  Since MAEs are calculated in wOBA%, I multiplied by 3.20 (i.e. a wOBA of .320) to put them back on the wOBA-points scale to report.
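Concretely, the error metric is something like this (numpy sketch, placeholder array names):

```python
import numpy as np

def weighted_mae_woba_points(pred_pct, actual_pct, pa_t, pa_t1):
    """MAE of wOBA% projections, weighted by the harmonic mean of the
    two seasons' PAs, rescaled to points of wOBA via a .320 league wOBA."""
    w = 2.0 * pa_t * pa_t1 / (pa_t + pa_t1)
    mae_pct = np.sum(w * np.abs(pred_pct - actual_pct)) / np.sum(w)
    return 3.20 * mae_pct  # 1 point of wOBA% = 3.2 points of wOBA
```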

Tests and Results

The first thing I tried was simply using year T DRC+% as the projection for year T+1 wOBA% and benchmarking that against an “everybody 100 wOBA%” (LgAvg) projection and restricting it to pairs of qualified seasons (500+ PA in each).  DRC+ had an MAE of 25.1 points of wOBA against LgAvg’s 34.3 overall, but 26.5 vs 26.1 on team switchers.

To see if the signal could be improved, I regressed year T DRC+% by adding league average (100 wOBA%) PAs, and the minimum weighted MAE came from adding 89 average PAs.  Doing the same optimization with year T wOBA% came up with adding 332 average PAs (a sketch of this regression follows the list below).  For reporting purposes, I broke the players up into 3 groups:

  1. 0-99 PAs in year T, which is just enough to capture all pitchers (2016 Bumgarner, 97 PA) as well as a bunch of callups and fill-ins who aren’t really MLB quality.
  2. 400+ PAs in year T, which is all full-time players and primary sides of platoons, etc.  That number is kind of arbitrary, but it’s a little over 50% of the average PA per position, assuming some PHing, and moving it around 25 PAs isn’t really going to affect the big-picture analysis anyway.
  3. 100-399 PAs in year T to cover everybody else.
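The regression itself is just the standard add-league-average-PAs padding; a minimal sketch, using the best-fit 89 added PAs for DRC+% as the example:

```python
def regress_to_mean(stat_pct, pa, added_pa, lg_pct=100.0):
    """Pad a 100-scale stat with `added_pa` PAs of league-average
    performance; low-PA seasons get pulled hard toward the mean."""
    return (stat_pct * pa + lg_pct * added_pa) / (pa + added_pa)

# With 89 added PAs, a 120 DRC+% barely moves over a full season but
# gets yanked toward 100 on a cup of coffee:
print(regress_to_mean(120.0, 600, 89))  # ~117.4
print(regress_to_mean(120.0, 25, 89))   # ~104.4
```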

 

This is a sample report.

Table 1.  wOBA MAEs, 400+ PA in both seasons.  LgAvg=100

Min 400 PA both seasons   raw DRC+%   LgAvg   regd DRC+%   regd wOBA%   year T+1 wOBA%   year T wOBA%      N
all                            26.0    32.3         25.1         24.5            106.6          107.5   1152
switch                         28.4    26.7         26.9         25.8            103.6          105.3    303
same                           25.2    34.3         24.4         24.1            107.7          108.2    849

Any PA cutoff biases the sample, but using a PA cutoff in both seasons is especially bad form because it excludes players who would have reached the cutoff in year T+1 if they hadn’t been benched for sucking.  Even with artificially tight performance constraints, the regressions are virtually useless for team switchers- only 1 point of MAE improvement for wOBA and nothing for DRC+. To avoid the extra bias problem, future results will include all (1+ PA) year T+1 seasons.

Table 2. wOBA MAEs, 400+ PA in year T, any PA year T+1.  LgAvg=100

Min 400 PA year T   raw DRC+%   LgAvg   regd DRC+%   regd wOBA%   year T+1 wOBA%   year T wOBA%      N
all                      27.6    32.6         26.6         26.3            105.0          106.6   1597
switch                   30.3    28.8         29.0         28.2            101.0          103.9    473
same                     26.4    34.2         25.6         25.5            106.6          107.7   1124

The bias in Table 1 is apparent now.  This population is simply worse to begin with in year T (marginal hitters were more likely to suck and get benched in year T+1 and not show up in Table 1) and dropped off more to year T+1.  Back to the post topic, neither DRC+ nor wOBA are any good for switchers, wOBA is a bit ahead of DRC+, and the projection for same-team players is a clear improvement on LgAvg.

Table 3.  wOBA MAEs, 100-399 PA in year T, any PA year T+1.  LgAvg=100

100-399 PA year T   raw DRC+%   LgAvg   regd DRC+%   regd wOBA%   year T+1 wOBA%   year T wOBA%      N
all                      36.8    36.2         34.7         34.5             98.5           97.8   1574
switch                   38.5    38.3         36.3         36.5             94.3           95.2    627
same                     35.7    34.9         33.6         33.2            101.4           99.5    947

The league average errors are a good bit worse, and now DRC+ and wOBA are pretty useless for everything, offering at best a 2-point improvement over LgAvg.  Also, the quality of the players here is clearly worse because… better players get more PAs and make it into Table 2 instead.

Table 4.  wOBA MAEs, 1-99 PA in year T, any PA year T+1.  LgAvg=100

1-99 PA year T   raw DRC+%   LgAvg   regd DRC+%   regd wOBA%   year T+1 wOBA%   year T wOBA%      N
all                  110.2   102.2         62.9         92.5             84.3           69.8   2409
switch                91.4    96.7         64.3         87.7             73.3           69.8    841
same                 120.3   105.2         62.2         95.1             90.2           69.7   1568

And this is interesting. Garbage hitters, giant MAEs, and regressed DRC+ winning by a mile for a change.  The other interesting thing here is that the players teams keep improve *a ton* and the ones they let go keep being godawful at the plate.  Somebody should look into that in more detail.

Seeing that the 100-399 PA group at least resembled MLB-quality hitters, albeit not the good ones, and that the 1-99 PA group was an abomination at the plate (it did include all the pitchers), I wondered what would happen if I cheated a little and tried to optimize on the 100+ PA group instead of everybody.  That group looks like

Table 5.  wOBA MAEs, 100+ PA in year T, any PA year T+1.  LgAvg=100

Min 100 PA year T   raw DRC+%   LgAvg   regd DRC+%   regd wOBA%   year T+1 wOBA%   year T wOBA%      N
all                      30.4    33.7         29.1         28.8            102.7          103.9   3171
switch                   33.3    32.2         31.6         31.2             98.6          100.8   1100
same                     28.9    34.5         27.7         27.5            104.9          105.6   2071

Again, useless for switchers, solid improvement for the ones who stayed.  Based on this, I decided to reoptimize with a LgAvg of 103, using only players with 100+ PA in year T, just to see what would happen.

Trying a different league average

This is starting to look down the rabbit hole of regressing to talent (more PA is a proxy for more talent as we’ve seen) instead of to pure league average, but let’s see what happens.  The regression amounts came out to 243 added PA for DRC+ and 410 added PA for wOBA.  Doing that came up with

Table 6.  wOBA MAEs, 100+ PA in year T, any PA year T+1.  LgAvg=103

Min 100 PA year T   raw DRC+%   LgAvg   regd DRC+%   regd wOBA%   year T+1 wOBA%   year T wOBA%      N
all                      30.4    33.2         28.7         29.1            102.7          103.9   3171
switch                   33.3    33.7         31.3         31.9             98.6          100.8   1100
same                     28.9    32.9         27.2         27.6            104.9          105.6   2071

Well, that’s not an auspicious start, marginally helping the stayers (the switchers were closer to 100, so regressing towards 103 isn’t any help).  Let’s see if there’s any benefit in either group individually.

Table 7.  wOBA MAEs, 100-399 PA in year T, any PA year T+1.  LgAvg=103

100-399 PA year T   raw DRC+%   LgAvg   regd DRC+%   regd wOBA%   year T+1 wOBA%   year T wOBA%      N
all                      36.8    38.7         34.5         35.6             98.5           97.8   1574
switch                   38.5    41.9         36.7         38.0             94.3           95.2    627
same                     35.7    36.6         33.0         33.9            101.4           99.5    947

Well, that was a disaster for these guys. The top line using 100 LgAvg was 36.8 / 36.2 / 34.7 / 34.5 before, and regressing them further from their talent shockingly didn’t do any favors.  Was this made up for by the full-time players?

Table 8.  wOBA MAEs, 400+ PA in year T, any PA year T+1.  LgAvg=103

400+ PA year T   raw DRC+%   LgAvg   regd DRC+%   regd wOBA%   year T+1 wOBA%   year T wOBA%      N
all                   27.6    30.8         26.1         26.3            105.0          106.6   1597
switch                30.3    29.1         28.3         28.5            101.0          103.9    473
same                  26.4    31.5         25.2         25.4            106.6          107.7   1124

Not really.  This is a super marginal improvement over the previous top line of 27.6 / 32.6 / 26.6 / 26.3.  Only the LgAvg projection really benefits at all, and that’s not what we’re interested in.  Changing league average a little and optimizing over only MLB-quality hitters doesn’t seem to really accomplish anything great for the DRC+ or wOBA regressions.

Conclusion

There’s little-to-nothing to BP’s claim that DRC+ is something special for team switchers.  Raw DRC+% is worse than league average for switchers, and the MAE for best-regressed DRC+ is only about 2 points better than league average overall, with that entire benefit coming from the low-PA end.  It projects team-switching full-time players worse than assuming they’re league average.  However, for the really low-PA players, it is more accurate raw and much more accurate regressed than league average.

Likewise, there’s absolutely nothing to the claim that DRC+ is a significant improvement over wOBA for predicting year T+1 wOBA for switchers- the gap is actually *smaller* for switchers: 1.1 points of MAE for switchers vs. 1.5 points for stayers.  The best regression of DRC+ absolutely does shine in the very-low-PA group, but it’s not good in the full-time player category either.  Regressed DRC+ and regressed wOBA actually do make fairly decent, much-better-than-league-average predictions for full-time players who *stay*, for whatever an untested model fit to in-sample data is worth.

 

C’mon Man- Baseball Prospectus DRC+ Edition

Required knowledge: A couple of “advanced” baseball stats.  If you know BABIP, wRC+, and WAR, you shouldn’t have any trouble here.  If you know box score stats, you should be able to get the gist.

Baseball Prospectus recently introduced its Deserved Runs Created offensive metric that purports to isolate player contribution to PA outcomes instead of just tallying up the PA outcomes, and they’re using that number as an offensive input into their version of WAR.  On top of that, they’re pushing out articles trying to retcon the 2012 Trout vs. Cabrera “debate” in favor of Cabrera and trying to give Graig Nettles 15 more wins out of thin air. They appear to be quite serious and all-in on this concept as a more accurate measure of value.  It’s not.

The exact workings of the model are opaque, but there’s enough description of the basic concept and the gigantic biases are so obvious that I feel comfortable describing it in broad strokes. Instead of measuring actual PA outcomes (like OPS/wOBA/wRC+/etc) or being a competitive forecasting system (Steamer/ZIPS/PECOTA), it’s effectively just a shitty forecast based on one hitter-season of data at a time****.

It weights the more reliable (K/BB/HR) components more and the less reliable (BABIP) components less like projections do, but because it’s wearing blinders and can’t see more than one season at a time, it NEVER FUCKING LEARNS**** that some players really do have outlier BABIP skill and keeps over-regressing them year after year.  This is methodologically fatal.  It’s impossible to salvage a one-year-of-stats-regressed framework.  It might work as a career thing, but then year X WAR would change based on year X+1 performance.

Addendum for clarity: If DRC+ regresses each season as though that’s all the information it knows, then adds those regressed seasons up to determine career value, that is *NOT* the same as correctly regressing the total career.  If, for example, BABIP skill got regressed 50% each year, then DRC+ would effectively regress the final career value 50% as well (as the result of adding up 50%-regressed seasons), even though the proper regression after 8000 PAs is much, much less.  This is why the entire DRC+ concept and the other similarly constructed regressed-season BP metrics are broken beyond all repair.  /addendum
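Here’s a toy numeric version of the addendum’s arithmetic, assuming a hypothetical flat 50% per-season regression of BABIP skill:

```python
# Hypothetical hitter: true BABIP skill 30 points above league,
# 500 PA/season for 16 seasons (~8000 PA career).
seasons, skill = 16, 30.0

# Sum of 50%-regressed seasons: each year gets credited only 15 points...
summed_credit = sum(0.5 * skill for _ in range(seasons))  # 240

# ...so the career total is regressed 50% too, no matter how long it is,
full_credit = seasons * skill                             # 480
print(summed_credit / full_credit)                        # 0.5

# whereas the correct amount of regression on an 8000 PA sample of
# demonstrated skill is far, far lighter.
```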

 

****The description is vague enough that it might actually use multiple years and slowly learn over a player’s career, but it definitely doesn’t understand that a career of outlier skill means that the outlier skill (likely) existed the whole time it was presenting, so the general problem of over-regressing year after year would still apply, just more to the earlier years. Trout has 7 full years and he’s still being underrated by 18, 18, and 11 points the last 3 years compared to wRC+ and 17 points over his whole career.

DRC+ loves good hitters with terrible BABIPs, particularly ones with bad BABIPs and lots of HRs.  Graig Nettles and his career .245 +/- .005 BABIP / 390 HRs looks great to DRC+ (120 vs 111 wRC+, +14.7 wins at the plate), as do Mark McGwire (164 vs 157, +8.5 wins), Harmon Killebrew (150 vs 142, +16.2 wins), Ernie Banks (129 vs 118, +20.8 wins), etc.  Guys who beat the hell out of the ball and run average-ish BABIPs are rated similarly to wRC+: Barry Bonds (175 vs 173), Hank Aaron (150 vs 153), Willie Mays (150 vs 154), Albert Pujols (147 vs 146), etc.

The flip side is that DRC+ really, really hates low-ISO/high-BABIP quality hitters.  It underrates Tony Gwynn (119 vs 132, -12.9 wins) because it can’t figure out that the 8-time batting champ can hit.  In addition, it hates Roberto Alomar (110 vs 118, -10.4 wins), Derek Jeter (105 vs 119, -17.9 wins), Rod Carew (112 vs 132, -18.7 wins), etc.  This is simply absurd.

C’mon man.