This is a cut-out from a longer post I was running some numbers for, but it's straightforward enough and absurd enough that it deserves a standalone post. I'd previously looked at DRAA linear weights, and the relevant chart is reproduced here, using seasons with 400+ PA.
| relative to average PA | 1B | 2B | 3B | HR | BB | HBP | K | BIP out |
|---|---|---|---|---|---|---|---|---|
| old DRAA | 0.22 | 0.38 | 0.52 | 1.16 | 0.28 | 0.24 | -0.24 | -0.13 |
| new DRAA | 0.26 | 0.45 | 0.62 | 1.17 | 0.26 | 0.30 | -0.24 | -0.15 |
| wRAA | 0.44 | 0.74 | 1.01 | 1.27 | 0.27 | 0.33 | -0.26 | -0.27 |
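For anyone who wants to reproduce this kind of table: the post doesn't restate the mechanics here, but implied linear weights like these can be recovered by regressing a season-total runs-above-average stat on each player's event counts. A minimal sketch, assuming plain OLS and a hypothetical players-by-seasons DataFrame (the column names are mine, not from any particular dataset):

```python
# Minimal sketch of recovering implied linear weights by OLS.
# Assumes a DataFrame with one row per player-season; the column
# names below are hypothetical, not from any specific dataset.
import pandas as pd
import statsmodels.api as sm

EVENTS = ["1b", "2b", "3b", "hr", "bb", "hbp", "k", "bip_out"]

def implied_weights(df: pd.DataFrame, target: str, min_pa: int = 400) -> pd.Series:
    """Regress a runs-above-average stat (e.g. 'draa' or 'wraa') on raw
    event counts; the fitted coefficients are the runs credited per
    event relative to an average PA."""
    sample = df[df["pa"] >= min_pa]
    X = sm.add_constant(sample[EVENTS])
    return sm.OLS(sample[target], X).fit().params[EVENTS]

# e.g. implied_weights(seasons, "draa") and implied_weights(seasons, "wraa")
# should reproduce the DRAA and wRAA rows of the chart above.
```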
I reran the same analysis on 2019 YTD stats, once with all position players and once with a 25 PA minimum, and these are the values I recovered. "Full Year" is the new DRAA row from the chart above, and the percentage rows express each small-sample weight as a percent of the corresponding full-year value.
| | 1B | 2B | 3B | HR | BB | HBP | K | BIP out |
|---|---|---|---|---|---|---|---|---|
| YTD | 0.13 | 0.21 | 0.29 | 0.59 | 0.11 | 0.08 | -0.14 | -0.10 |
| min 25 PA | 0.16 | 0.27 | 0.37 | 0.63 | 0.12 | 0.09 | -0.15 | -0.11 |
| Full Year | 0.26 | 0.45 | 0.62 | 1.17 | 0.26 | 0.30 | -0.24 | -0.15 |
| YTD %s | 48% | 47% | 46% | 50% | 41% | 27% | 57% | 64% |
| min 25 PA %s | 61% | 59% | 59% | 54% | 46% | 30% | 61% | 74% |
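The percentage rows are simple ratios; computing them from the (rounded) weights in this table gets you within a point or two of the quoted values:

```python
import pandas as pd

events = ["1B", "2B", "3B", "HR", "BB", "HBP", "K", "BIP out"]
ytd       = pd.Series([0.13, 0.21, 0.29, 0.59, 0.11, 0.08, -0.14, -0.10], index=events)
min25     = pd.Series([0.16, 0.27, 0.37, 0.63, 0.12, 0.09, -0.15, -0.11], index=events)
full_year = pd.Series([0.26, 0.45, 0.62, 1.17, 0.26, 0.30, -0.24, -0.15], index=events)

# These land within a point or two of the quoted rows; the small
# differences are because the displayed weights are rounded.
print((ytd / full_year * 100).round())    # ~ the "YTD %s" row
print((min25 / full_year * 100).round())  # ~ the "min 25 PA %s" row
```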
So... this is quite something. First, events are "more-than-half-deserved" relative to the full season after only 25-50 PA. For any reasonable definition of "deserved", there's no logical or mathematical reason that should happen that quickly. Second, BIP hits are discounted *LESS* in a small sample than walks are, and BIP outs are discounted *LESS* in a small sample than strikeouts are. The whole premise of DRC+ is that TTO outcomes belong to the player more than the outcomes of balls in play do, and that they matter much more in small samples, yet here we are, with small samples, and according to DRC+, the TTO OUTCOMES ARE RELATIVELY LESS IMPORTANT NOW THAN THEY ARE AFTER A FULL SEASON. Just to be sure, I reran with wRAA and recovered almost exactly the same values as in chart 1, so there's nothing strange going on with the procedure here. This is complete insanity- it's backwards from what's actually true, and backwards even from what BP itself has stated is true. The algorithm has to be complete nonsense to "come to that conclusion".
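To make the expected direction concrete: under a simple shrinkage view of "deserved" performance, the credited fraction of an observed rate grows with sample size and is larger for fast-stabilizing stats like K% than for anything BABIP-driven. A minimal sketch of that intuition (my illustration, not BP's method; the stabilization points are ballpark values chosen only for the example):

```python
# Illustration only (not BP's method): under a beta-binomial-style
# shrinkage model, the fraction of an observed rate a player "deserves"
# after n PA is n / (n + k), where k is the stat's stabilization point.
def deserved_fraction(n_pa: float, k: float) -> float:
    return n_pa / (n_pa + k)

# Ballpark stabilization points, for illustration only. K% stabilizes
# far faster than BABIP, so a sane model should discount strikeouts
# LESS than balls in play in small samples; the percentage rows above
# show the opposite.
for stat, k in [("K%", 60), ("BB%", 120), ("BABIP", 800)]:
    print(stat, round(deserved_fraction(50, k), 2), round(deserved_fraction(600, k), 2))
```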
Reading the explanation article, I kept thinking the same thing over and over: there's no clear logical or mathematical justification for most of the steps involved. It's a pile of junk thrown together and tinkered with until it outputs something resembling a baseball stat most of the time, as long as you don't look too closely. It's not the answer to any articulable, well-defined question. It's not a credible run-it-back projection (I'll show that unmistakably in the next post, even though it's already ruled out by the... interesting... weightings above).
When a hodgepodge model like DRC+ is thrown together, it becomes difficult-to-impossible to constrain it to obey things you know are true. At what point in the process did it "decide" that TTO outcomes were relatively less important now? Probably at about 20 different places, all doing nonsense-that-resembles-baseball-analysis and optimizing functions with no logical link to reality. When a model fails basic quality testing- and worse, when obvious quality assurance failures are observed and not even commented on (next post)- it's beyond irresponsible to keep running it out as something useful solely on the basis of a couple of apples-to-oranges comparisons on rigged tests.