WPA.LI follows two key constraints. The first is that, for a given game state (i.e. the inning, the score, the number of outs, and the placement of any runners on base), the relative value of a play is determined by how much that play affects the team's chances of winning. If the bases are empty, a walk is credited the same as a single. If the bases are loaded with the winning run on third, a walk is credited the same as a home run. This constraint works exactly like WPA (as one might expect from a WPA-based metric).
The second constraint differentiates WPA.LI from WPA. One of the properties of WPA is that some situations are inherently weighted more strongly than others. A key at bat late in a close game can swing a team's chances of winning by several times as much as the same result in a blowout, and it is credited accordingly. WPA.LI, on the other hand, ensures that the average play in every situation gets the same weight.
So, on the one hand, you have WPA, which weights PAs according to their immediate impact on the game. One clutch PA might be worth as much as 4 or 5 normal PAs, and one mop-up PA might be worth practically nothing. On the other hand, you have WPA.LI, which weights every PA equally, just like most other stats do. Basically, it is linear weights, but with the ability to tailor the value of each event to the specific situation rather than sticking to a blanket value for each event across all situations. While WPA tells the story of clutch hitting (who got the big hit when the team most needed production), WPA.LI tells the story of situational hitting (who got on base when the team needed baserunners, put the ball in play when the strikeout was most costly, or hit for power when advancing runners quickly was more important than getting another guy on first).
There is a third important constraint which WPA.LI does not adhere to, however. Ideally, the average value of each event would match its linear weights value. If a home run is worth 1.4 runs above average across all situations, then you would like the average WPA.LI value of a HR to be 1.4 runs (or rather, the equivalent value on the wins scale). That is not the case, however.
The following linear weights values represent the average change in run and win expectancy for that event across all situations, along with the average WPA.LI value of each event. All three versions have been placed on the runs scale by setting the value of the out at -.27 in order to make them easier to compare directly:
As you can see, WPA.LI does fine at assigning the correct value to most events, but the value of the HR is way off. This may seem counterintuitive; if WPA.LI just creates custom linear weights for each situation based on the WPA values, why would the average WPA.LI value be different from the average WPA value? We can look at the mathematical relationship between WPA and WPA.LI to see why this is.
For a single play, we have WPA = WPA.LI * LI. Now, let X be a variable that represents the set of WPA.LI values for all home runs, and Y be a variable that represents the set of LI values for all home runs:
X = WPA.LI
Y = LI
WPA = XY
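To make the identity concrete, here's a quick sketch in Python. The WPA and LI values are made-up numbers for illustration, not figures from the tables used in this post:

```python
# Hypothetical play: a go-ahead hit in a high-leverage spot.
wpa = 0.30   # change in win probability from the play (made up)
li = 2.5     # Leverage Index of the situation before the play (made up)

# WPA.LI backs the leverage weighting out of WPA:
wpa_li = wpa / li

# The identity runs in both directions: WPA = WPA.LI * LI.
assert abs(wpa_li * li - wpa) < 1e-12
```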
The linear weights value of the home run will be the expected value (i.e. the mean) of the set of all home runs:
linear.weights(WPA) = E[XY]
linear.weights(WPA.LI) = E[X]
For the linear weights values implied by WPA and WPA.LI to be equal, E[XY] has to equal E[X]. The relationship between these two values can be explored using covariance, which is defined as:
COV(X,Y) = E[XY] - E[X]*E[Y]
Rearranging, we get:
E[XY] = COV(X,Y) + E[X]*E[Y]
Now, let E[XY] = E[X] + d, where d is an error term representing the difference between E[XY] and E[X]. If E[XY] = E[X], then d=0.
E[X] + d = COV(X,Y) + E[X]*E[Y]
d = COV(X,Y) + E[X]*(E[Y] - 1)
Note that when two variables are independent, their covariance is zero, which makes the first term zero. And when an event occurs randomly across all situations, its average Leverage Index is 1 (because the average LI overall is 1), so E[Y] = 1 and the second term is also zero.
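A small simulation can confirm the decomposition d = COV(X,Y) + E[X]*(E[Y] - 1). The distributions below are invented purely to produce a correlated X and Y; they are not fit to real play-by-play data:

```python
import random

random.seed(0)
n = 100_000

# Simulate X (per-play WPA.LI value of an event) and Y (Leverage Index).
# The numbers are made up; only the algebraic identity matters here.
pairs = []
for _ in range(n):
    y = random.lognormvariate(0, 0.5)                   # LI-like: positive, skewed
    x = 0.12 - 0.01 * (y - 1) + random.gauss(0, 0.02)   # X mildly correlated with Y
    pairs.append((x, y))

ex = sum(x for x, _ in pairs) / n    # E[X]  -> average WPA.LI value
ey = sum(y for _, y in pairs) / n    # E[Y]  -> average LI
exy = sum(x * y for x, y in pairs) / n   # E[XY] -> average WPA value
cov = exy - ex * ey                  # COV(X,Y)

d = exy - ex                         # gap between E[XY] and E[X]
# d should match COV(X,Y) + E[X]*(E[Y] - 1) up to floating-point noise:
assert abs(d - (cov + ex * (ey - 1))) < 1e-9
```

Because the simulated X is negatively correlated with Y, d comes out nonzero here, which is exactly the situation described for the home run below.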
From this, we can see that there are two things that can cause the WPA.LI value of an event to deviate from its proper value. One, the WPA.LI value of an event is not independent of the LI of the situation. Two, the event does not occur randomly across all situations, so that the average LI value for that event is not 1. If either or both of these is the case, then the average WPA.LI value of an event will deviate from its WPA value (unless the two error terms cancel each other out).
Both of these are in fact the case for the HR. Home runs are worth slightly more, relative to other events, in lower-leverage situations (i.e. WPA.LI value is negatively correlated with LI for HR), and home runs, like other extra base hits, happen slightly more often in low-leverage situations than in high-leverage situations. Both of these sources of error are in the same direction, and their cumulative effect is that the WPA.LI value of a HR is about .016 wins higher than the WPA value (.0108 from the covariance and .0056 from the average LI).
Because WPA.LI assigns a higher value to HR than WPA does, WPA.LI will be skewed high for home run hitters relative both to WPA and to static linear weights. This complicates comparisons between WPA.LI and other stats. For example, the stat Clutch, defined as WPA - WPA.LI*, runs into problems with high-HR hitters.
*Clutch is actually WPA/LI - WPA.LI, where WPA/LI is literally WPA divided by the player's average Leverage Index. That notation is hard to use without confusing WPA divided by LI with the stat usually written as WPA/LI, which, by the way, is why I have been using WPA.LI instead of WPA/LI to refer to that stat.
The pattern of sluggers rating worse than contact hitters in Clutch rankings has been noted by various observers. As shown in the Book Blog thread (see especially Cyril Morong's posts), a player's HR rate correlates strongly with his Clutch rating. Similarly, a player's HR rate in one season predicts his Clutch in the following year even better than his Clutch from that season does (year-to-year r for Clutch for hitters with at least 300 PAs in both seasons is about .06; for year-1 HR rate to year-2 Clutch, it is about -.12).
Take a look at the top 10 hitters in HR/PA from 2000-2011 (min 1000 PAs):
Collectively, these 10 hitters average a Clutch rating of -3.94 wins over about 5000 PAs. This effect is entirely due to the bias in WPA.LI with regard to home runs, though, and not to any deficiency in clutch hitting by the group. If we compare WPA not to WPA.LI, but to linear weights (taken as the average WPA value of each event) for these players, we see that their WPA contributions are almost exactly what we would expect from their context-ignorant production:
This version of Clutch (WPA - linear weights) removes the HR-bias of the WPA.LI version. Clutch.LW shows almost no correlation with HR rate (either for the same year or adjacent years), and the leader board becomes HR-neutral.
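As a sketch of what this version of Clutch looks like in code, with linear weights taken as the average WPA value of each event (all numbers below are hypothetical placeholders, not the actual 2000-2011 values):

```python
# Hypothetical average WPA value of each event (wins scale) and one
# hitter's event counts -- placeholders, not real 2000-2011 figures.
avg_wpa = {"1B": 0.045, "2B": 0.075, "HR": 0.135, "BB": 0.030, "OUT": -0.027}
counts = {"1B": 90, "2B": 30, "HR": 40, "BB": 60, "OUT": 380}

# Context-ignorant expectation: linear weights built from average WPA values.
lw_wins = sum(avg_wpa[event] * counts[event] for event in counts)

wpa_actual = 1.8  # the hitter's actual (context-dependent) WPA, also made up

# Clutch.LW: actual WPA minus the WPA-derived linear weights expectation.
clutch_lw = wpa_actual - lw_wins
```

Using the average WPA value of each event as the baseline, rather than WPA.LI, is what removes the HR bias: the baseline for a HR is, by construction, exactly its average WPA.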
While it appears that many of the top sluggers in the game have been particularly un-clutch based on the FG and BR leader boards, this is probably not actually the case. The mathematical properties of WPA.LI (specifically the possibility of correlation with LI and of non-randomness of events) just happen to skew the results in that direction. This can be addressed by using linear weights values (especially linear weights derived from WPA) as the context neutral baseline to compare against WPA rather than using WPA.LI.
Note: All win probability and leverage index numbers used in this post come from the tables created here. These figures are based on 1993-2010 data and are not calibrated to 2000-2011, nor are they adjusted for the different run environments of each park (as the FG and B-R figures are). As a result, the WPA figures here won't match those sites, but should serve fine for illustrative purposes.