Explaining Dangerous Fenwick

A few weeks ago, I started publishing my particular version of shot and distance data after each Oiler game.  One of the metrics I calculate and publish is what I call “Dangerous Fenwick”, or maybe the drier but more accurate “Danger Adjusted Fenwick” (DFF).

I published the DFF rates per 60 and the average danger and distance of shots given up by Oiler defensemen and D pairings after last night’s Montreal game.

Those numbers matched up really well with the eye test on the defensemen and defensive pairings, which was encouraging, because I’ve posted before on how tough it can be to find good fancystats for assessing defensemen.

Got a lot of positive feedback on the results as well.

So… fantastic!  The new metric is looking good! With one small problem.

What exactly *is* “Dangerous Fenwick”?

WTF is DFF?

One of the most common, and valid, criticisms of shot metrics like Corsi or Fenwick (or just plain old shots, for that matter) is that they don’t take into account how dangerous the shot attempts were.  A shot is a shot is a shot.  It is easy to conceive of a team that ‘gooses’ its Corsi by taking a lot of perimeter shots (*cough* Eakins *cough* though I don’t think that theory matches the data at all).

Dangerous Fenwick tries to adjust for that issue by deliberately accounting for the danger level of the shots taken.

Here’s a simple thought experiment to demonstrate how it works.

Suppose Team A has a 10 ft slapshot in its favour, and Team B manages to get a 50 ft wrist shot.  The Fenwick for this situation?  50% each.  Both teams get credited one shot attempt each.  10 ft slap shot vs 50 ft wrister … is 50% fair?  You be the judge!

That’s where Dangerous Fenwick comes in.

According to my work, a 10 ft slapshot is 2.348 times as dangerous as an average shot.    A 50 ft wrist shot is 0.260 times as dangerous.

So where regular Fenwick credits Team A and Team B with 50% each, Dangerous Fenwick credits Team A with ( 2.348 / ( 2.348 + 0.26 ) ) = 90%, and Team B with the remaining 10%.  A 10 ft slapshot is about 9x as dangerous as a 50 ft wrister, and Dangerous Fenwick has accounted for that difference.

Do that danger calculation individually for every single unblocked shot attempt for and against, add it all up … and you’ve got Dangerous Fenwick in a nutshell!
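If you prefer code to prose, here’s the same calculation as a tiny Python sketch (the danger weights are the ones I quoted above; everything else is purely illustrative):

```python
# Danger weights quoted above, expressed as multiples of the average
# unblocked shot's goal probability.
danger_a = 2.348  # Team A's 10 ft slap shot
danger_b = 0.260  # Team B's 50 ft wrist shot

# Regular Fenwick: one unblocked attempt each, so a 50/50 split.
fenwick_share_a = 1 / (1 + 1)

# Dangerous Fenwick: weight each attempt by its danger before splitting.
dff_share_a = danger_a / (danger_a + danger_b)

print(f"Fenwick share, Team A: {fenwick_share_a:.0%}")  # 50%
print(f"DFF share, Team A:     {dff_share_a:.0%}")      # ~90%
```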

In my post-game pages, I calculate a variety of DFF numbers for the players and the team.  I also show the ‘danger rating’ of each goal scored.

If all you want to understand is the basics of what Dangerous Fenwick is and why it exists, you can probably stop reading now.  For those who’ve never met a TL;DR article and want the background, justifications, explanations, and other gory details … please do carry on.

The Wherefores

The idea behind Danger Adjusted Fenwick is exactly the same as the idea behind Score Adjusted Corsi (if you’re familiar with that) – rather than throw data away, we include it all, but we adjust it.

In this case, what I’ve done is generate goal scoring probability curves for each shot type by distance, using the last five years of data.  Note that this is not particularly original conceptual work – forms of these adjustments have been tried by many people in many different ways.  I’m mostly applying my own methodology to a large volume of data.

Once we have goal scoring probabilities for every type of shot, we can compare that to the average goal probability (i.e. sh%) of all shots, and we have ourselves an adjustment factor for every shot.
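Put another way, the adjustment factor for any given shot is just a ratio.  A minimal sketch, assuming you already have sh% lookups by shot type and distance (the names here are illustrative, not from my actual scripts):

```python
def adjustment_factor(shot_type: str, distance: int,
                      sh_pct_by_type_dist: dict, league_avg_sh_pct: float) -> float:
    """Danger weight for one shot: its goal probability relative to the
    goal probability of the average unblocked shot."""
    return sh_pct_by_type_dist[(shot_type, distance)] / league_avg_sh_pct

# e.g. if 10 ft slap shots score at 2.348x the league-average rate,
# adjustment_factor("slap", 10, ...) returns 2.348.
```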

For example, this is a slightly more detailed version of the calculation I showed above:

» Suppose Team A took 5 unblocked 20 ft slap shots, and 5 unblocked 30 ft wrist shots.

» Team B took 5 unblocked 30 ft slap shots, and 5 unblocked 40 ft wrist shots.

» Team A’s Fenwick is therefore 50% – 10 shots out of a total of 20 (10+10) shot attempts.  The Fenwick shows as even despite Team A having significantly closer shots.

Now let’s work that in Dangerous Fenwick terms:

» A slap shot from 20 ft is 1.752 times as likely to score as the average shot.  From 30 ft, it is 1.17 times.

» A wrist shot from 30 ft is 0.870 times as likely to score as the average shot.  From 40 ft it is 0.483 times.

So here’s how we calculate the danger for Team A and Team B:

Team A
» 5 slap shots × 1.752 danger = 8.76
» 5 wrist shots × 0.870 danger = 4.35
» Total Dangerous Fenwick For (Team A) = 13.11

Team B
» 5 slap shots × 1.17 danger = 5.85
» 5 wrist shots × 0.483 danger = 2.415
» Total Dangerous Fenwick For (Team B) = 8.265

You see what’s happened?

» Team A’s 10 shots are now counted as 13.11, because they’re dangerous.

» Team B’s 10 shots are counted as 8.265 because they’re much less dangerous.

» The Dangerous Fenwick For for Team A is 13.11 and DFF Against is 8.265 (vice versa for Team B).

» DFF% is now calculated exactly as you’d expect:  for / ( for + against ) = 13.11 / ( 13.11 + 8.265 ) = 13.11 / 21.375 = 61.3%.

In other words, once you’ve accounted for dangerous shots, Team A is now at 61.3%, and Team B is at 38.7%.  The Dangerous Fenwick number is accounting for the fact that Team A had much more dangerous chances than Team B did.

The 61.3% reflects all of the data (i.e. shots taken by both teams), but is much more reflective of the balance of play than Fenwick’s 50%.
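For the code-inclined, here’s that worked example as a short Python sketch (the danger weights are the ones listed above; the data structures are my own, purely for illustration):

```python
# Danger weights from the worked example above, keyed by (shot type, distance in ft).
DANGER = {
    ("slap", 20): 1.752, ("slap", 30): 1.17,
    ("wrist", 30): 0.870, ("wrist", 40): 0.483,
}

# Unblocked attempts from Team A's perspective: five of each type for each side.
shots_for     = [("slap", 20)] * 5 + [("wrist", 30)] * 5
shots_against = [("slap", 30)] * 5 + [("wrist", 40)] * 5

dff_for     = sum(DANGER[s] for s in shots_for)      # 13.11
dff_against = sum(DANGER[s] for s in shots_against)  # 8.265
dff_pct     = dff_for / (dff_for + dff_against)

print(f"DFF For {dff_for:.2f}, DFF Against {dff_against:.3f}, DFF% {dff_pct:.1%}")
# DFF For 13.11, DFF Against 8.265, DFF% 61.3%
```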

With the real thing, there are individual adjustments made for every single shot, but otherwise it’s pretty much exactly as I’ve shown it above.  One difference: I also count any shot that occurs within 4 seconds of a previous shot as a ‘rebound’, and add 4% to the sh% adjustment for that rebound.
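The rebound tweak is easy to sketch as well.  Exactly where the 4% gets applied below (added to the shot’s raw sh% before dividing by the league average) is a simplification in this sketch, so treat it as illustrative rather than a spec:

```python
REBOUND_WINDOW_S = 4     # a shot within 4 seconds of the previous attempt counts as a rebound
REBOUND_BONUS    = 0.04  # the "add 4%" bump; applied to the raw sh% in this sketch

def shot_danger(shot_sh_pct: float, league_avg_sh_pct: float,
                seconds_since_prev_shot: float | None) -> float:
    """Danger weight for one unblocked attempt, with the rebound bump applied."""
    sh_pct = shot_sh_pct
    if seconds_since_prev_shot is not None and seconds_since_prev_shot <= REBOUND_WINDOW_S:
        sh_pct += REBOUND_BONUS
    return sh_pct / league_avg_sh_pct
```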

You can calculate other things, like ‘dangerous Fenwick against per 60’, just as you can with regular shot metrics.  Suppose a team gave up 10 unblocked shots while a specific defenseman was on the ice – 5 of them were 20 ft slap shots (danger = 1.75) and 5 were 30 ft wrist shots (danger = 0.87).  (A quick code sketch of these calculations follows the list below.)

» His Fenwick against is 10.

» But his ‘danger adjusted Fenwick against’ is 5 x 1.75 + 5 x 0.87 = 13.1.  He’s going to be penalized (bigger number against) because he’s giving up a lot of mighty dangerous shots.

» His ‘average danger’ is going to be (5 x 1.75 + 5 x 0.87) / (5+5) = 1.31.  He’s giving up shots that on average are 31% more likely to score than an ‘average’ shot.

» If he was on the ice for 20 minutes and gave up those 10 shots, his “dangerous Fenwick against/60” is going to be 13.1 / 20 * 60 = 39.3.
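Here’s that defenseman example as a quick Python sketch, using the rounded danger values above:

```python
# Danger weights of the 10 unblocked shots given up while he was on the ice.
dangers_against = [1.75] * 5 + [0.87] * 5
toi_minutes = 20

fenwick_against    = len(dangers_against)            # 10
dff_against        = sum(dangers_against)            # 13.1
avg_danger_against = dff_against / fenwick_against   # 1.31
dff_against_per60  = dff_against / toi_minutes * 60  # 39.3
```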

And there you have it.

Dangerous Fenwick – a number that is comparable to other shot metrics, incorporates high, medium, and low danger shots into one number, and can and will be available for all Oiler players.

That includes Dangerous Fenwick for and against, percentages, and rates per 60, which can be calculated for teams, players, pairings, and lines.  On my data pages, I’m showing the data for defensemen and D pairings, but in the associated data files that you can download for each game, I have that same data for every player and every forward line as well.

Cool?

The Whys

DFF is similar in calculation and spirit to the new breed of shot metrics that take into account shot type and location, such as war-on-ice’s Scoring Chances metric, and DTMAboutHeart’s “Expected Goals”.

I suspect that if you looked at those metrics (and I have done some comparisons), you’d often get similar results.

So why bother creating yet another metric when some pretty good ones exist already?

A few reasons:

  • This way, I can calculate these statistics for a purpose and at a granularity (specifically, for Oiler D and D pairings) that may not be (and right now is not) available using those other statistics. Note: if you are interested in seeing this for another team, I can generate that data for you quite easily.
  • I like the idea of a statistic that is derived from and comparable to the more traditional large-sample-size shot metrics like Corsi and Fenwick. It is the same basic reason I think people have adopted Score Adjusted Corsi.
  • I think there is a weakness in the way that war-on-ice calculates their metrics – they use shot location, but don’t account for shot type. To me, this is an odd oversight.  Shot type matters a great deal in determining if a shot is truly a scoring chance.  A 20 ft slapshot is more dangerous than a 15 ft backhand, but using only shot location treats the backhand as the more dangerous of the two.  When I started working on this stuff, I built some shot danger heat maps using shot types and locations and you can see that shot type makes a significant difference in what should be treated as a dangerous location.
  • The scoring chance data, for maximum utility, is separated into low, medium, and high danger chances. This makes team comparisons difficult.  For example, if Team A has h/m/l  (7, 7, 7) chances, and Team B has (6, 7, 14), which team carried the play?  Team A had one more high danger chance, but half as many low danger chances.  These may be ‘low danger’, but they aren’t ‘no danger’, so you can’t discount them entirely.  Comparing three numbers is more difficult than comparing one number.  I like the idea of a single blended number that incorporates low/medium/high information.
  • In general, I’m uncomfortable with the methodologies that are commonly used to determine shot danger as a function of shot location. They often seem to rely on multiple regression, which I find odd – you cannot and should not assume linear relationships.  I decided to go with what I feel is a better methodology: 5 years’ worth of shot data and “locally weighted scatterplot smoothing”, or LOWESS, to derive the curves.  See the last section for more details.
  • I think (but do not know for sure) that DTMAboutHeart’s xG model takes into account the same things I do. He may even do a better job at them than I.  More research needed!  At a high level, it seems to me that if you are going to convert to a metric that uses ‘goals’ as its underlying measure, then you would need to account for the sh% of the shooter in some way (not just location and type), and also the sv% of the goalie!  I don’t think he does that, in which case xG may be a bit misleading as a label (minor criticism).  Someday I think we’ll get there, but this current set of metrics is a necessary intermediate step.
  • Why Fenwick and not Corsi?  Because the NHL doesn’t include distances for blocked shots, and I don’t want to impute those numbers (seems to me that it isn’t likely to add to the validity of the results).

Long story short: “Danger Adjusted Fenwick” is my way of trying to address all of the above issues.

Black Magic and Unicorn (Theme of the Day) Blood

Some of you won’t be satisfied with the explanations I gave above, and will want even more detail.  And I’m good with that.  Please do read the more detailed explanation below – feedback is grand.

The technique (a rough code sketch of steps 1 through 5 follows the numbered list):

1 – I took shot data (from war-on-ice) for the last five years, and calculated the sh% for every team, for every shot type, for distance buckets of size 2 ft (e.g. 0 to <2 feet, 2 to <4 feet, etc.).  Sh% for one team x one distance bucket x one shot type = one data point.  I dropped any data point that was calculated from fewer than 5 shots.

2 – I ‘jittered’ this data, adding random variation in X (distance), to remove some of the artefacts from the 2 ft buckets.

3 – I applied a LOWESS algorithm to provide a smoothed curve for each shot type by distance.

4 – Since the data often didn’t extend all the way in to 1 ft or out to 60 ft, I used a simple linear regression to extend each curve where needed.  Beyond 60 ft, I apply a flat adjustment factor of 0.25 regardless.  Not very rigorous, but very few shots come from out there, so it shouldn’t meaningfully affect the overall results.

5 – I converted the resulting curves back to adjustment factors for every shot type for every distance from 1 to 60.  The CSV file containing these adjustment factors can be downloaded here if you want to see what that looks like.

6 – Here’s an example of a couple of shot data displays, with the LOWESS smoother applied.

[Figures: two example shot-type sh% vs distance plots, with the LOWESS smoother applied]

7 – Here’s what the final chart looks like with ALL the shot type curves superimposed.  I’ve also included the ‘average shot curve’, and you can see by the deviation why it is important to account for shot types!

[Figure: all shot type curves superimposed, along with the ‘average shot curve’]

8 – You can download the complete set of charts for all the shot types here.
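For anyone who wants to see what steps 1 through 5 look like in practice, here’s a bare-bones sketch using pandas and statsmodels.  The column names, the smoothing fraction, and the flat extrapolation at the ends are assumptions and simplifications (the real scripts use the linear extension described in step 4 and handle more edge cases), but the shape of the pipeline is the same:

```python
import numpy as np
import pandas as pd
from statsmodels.nonparametric.smoothers_lowess import lowess

rng = np.random.default_rng(42)

def danger_curve(shots: pd.DataFrame, shot_type: str, league_avg_sh_pct: float) -> pd.Series:
    """Adjustment factor by distance (1-60 ft) for one shot type.

    `shots` is assumed to have columns: team, shot_type, distance, is_goal.
    """
    df = shots[shots["shot_type"] == shot_type].copy()

    # Step 1: sh% per team per 2 ft distance bucket; drop buckets built from < 5 shots.
    df["bucket"] = (df["distance"] // 2) * 2
    pts = df.groupby(["team", "bucket"])["is_goal"].agg(["mean", "count"]).reset_index()
    pts = pts[pts["count"] >= 5]

    # Step 2: jitter distance within each 2 ft bucket to soften the binning artefacts.
    x = pts["bucket"] + 1 + rng.uniform(-1, 1, len(pts))
    y = pts["mean"]

    # Step 3: LOWESS-smooth sh% as a function of distance.
    smoothed = lowess(y, x, frac=0.5)  # sorted array of (distance, smoothed sh%) pairs

    # Steps 4-5: evaluate on 1..60 ft (flat extrapolation here, linear in the real thing)
    # and convert sh% into adjustment factors relative to the average shot.
    distances = np.arange(1, 61)
    sh_pct = np.interp(distances, smoothed[:, 0], smoothed[:, 1])
    return pd.Series(sh_pct / league_avg_sh_pct, index=distances, name=shot_type)
```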

Hope that you find this data and this explanation useful, and any feedback would be appreciated, as I will continue to refine this stuff as I go.

Things Left to Do

And indeed, I have many things left I want to do, which I’ll get to over the next few weeks to months.

More data validation – the numbers have been vetted, debugged, and compared, but more QA is always good.  One of my biggest worries is the quality of the NHL data from which my data is scraped.  When it comes to shot location and distance, it has a ton of error, and a single game may allow that error to significantly distort the results.  I wish there were a clever way to address this that didn’t involve manual QA.

Statistical validation – assessing the predictive value.  In the ‘olden days’, the idea of rolling shot quality into shot metrics was dismissed as irrelevant: it had minimal effect, and little of it was repeatable.  Those analyses always struck me as compromised by small data samples, and recent work on e.g. scoring chances has shown shot-quality-adjusted data to have statistical validity on par with or better than traditional Corsi or Fenwick.  So at some point, I’ll run the same type of analyses on DFF to see how well it compares.  For the moment, however, I’m using it as a descriptive rather than a predictive statistic, so the testing is lower priority than other projects.

Rink bias adjustment – this data is fine for looking at a single game, since both teams in that game share the same rink’s biases.  But it’s tougher to compare across venues, because rinks have consistent distance-recording biases.  I need to adjust for this rink bias at some point soon.  I’m hoping I can mooch an algorithm and adjustment factors from somebody, the same way that Micah Blake McCurdy was generous enough to share his score adjustment factors with me.

Shot locations – move from using just distance to using x,y location.  As this information is only available for shots (not blocked shots), I either have to ‘impute’ the locations or restrict myself to shots only.  I will take the latter route if and when I do this.  Rather than use the war-on-ice shot zones idea, I want to use the approach taken in the classic work on ‘expected goals’ by Michael Schuckers, which used a 2D LOWESS model on shot locations.  But I’m scared of that 2D LOWESS smoother – I can see trying to get it working causing (more) graying of hair!


Good thing I don’t have a day job or three kids!  Wait …

26 thoughts on “Explaining Dangerous Fenwick”

  1. 1) Hypothesis: the way you calculate danger means that zone start adjustment is implicitly embedded in DFF.

    Right?

    i.e. If one calculates DFF, one probably really doesn’t need to zone start adjust it.

    2) Instead of danger adjustment, have you thought of a “Vollman”-type chart with an x-axis of average distance, a y-axis of danger, and a z-axis (bubble size) of Corsi or Fenwick (however you want to adjust it)?

    Then for an individual defenseman or a defensive pair, one could group them by “defending ability” vs “possession ability”.


    1. Or consider the pair (distance, danger) as a vector. And use 1) TOI in game situation as the x-axis, 2) the magnitude of the (distance, danger) vector as the y-axis, and 3) a Corsi-type measurement as the bubble size.

      One would probably have to figure out a weighting/normalization of the (distance,danger) vector so the magnitude would make sense.


  2. The 2-D plot I would like to see is Fenwick (or Corsi, or Score-adjusted Corsi or your favorite shot metric) on the x-axis (scaled between 0 and 1) [as a proxy for Expected Goals For]

    AND

    Expected Goals Against on the y-axis. (I think you can calculate this with your shot curves) [or your DANGER as a substitute for EGA, but I think explicit Expected Goals Against would be more striking to the advanced stat community]

    And this could be done for individual D, D-pairs, (any combination of players…)

    i.e. A proxy for an expected goals for vs. expected goals against chart.

    This chart (I think) is the Holy Grail.

    For a good player EGF > EGA. EGF/EGA > 1. Whoever can calculate EGF and EGA the best will be king of the world.

    The shot curves are the way to go about doing this, but I think one should try to get to EGF and EGA, rather than attempt to adjust Fenwick or Corsi.  Fenwick or Corsi is a decent enough first-order proxy for EGF.


  3. I’m curious how you see a relationship between Valiquette’s green shots and dangerous Fenwick. Perhaps I don’t understand them well enough, but they seem to be two sides of the same coin. Which shots are more likely to lead to goals (which shots are less likely to be saved)?

    There are commonalities. Both see a difference between a 15-foot slap shot and a 40-foot wrister.

    The difference I see is that he factors in the play leading to a shot, as much as the distance from the net. The Royal Road is a good example; if you have the goalie moving from side to side the shooter has a much greater likelihood of scoring (and the goalie has a much lower likelihood of making a save).

    I’m also curious about Valiquette’s use of screens in evaluating shot quality. Fayne’s wrister against the Flames is a good example. How does dangerous Fenwick evaluate a 40-foot wrister with/without a screen? I think if Hiller sees that shot cleanly, it’s a save. With the traffic, it looks like he doesn’t see it until it’s past him.

    My last question was in terms of the player. I wouldn’t know how to begin to quantify it, but there is a difference between a Shea Weber slap shot and a Justin Schultz slap shot, which is why teams all look for that PP QB with a big shot. Not all slap shots are the same.

    Thanks for thinking about this.


    1. Really good questions Ray. I’ll see if I can address all of them as best I can:

      1 – I have seen Valiquette’s analysis, and there is value there, no question. That said, my recollection (correct me if I’m wrong) is that he only looked at ~100 or so games. That’s a really good start, but IMO not nearly enough to draw rigorous conclusions. That’s basically three games per team in a single season. For contrast, the probability curves for Dangerous Fenwick are based on more than 5,000 games.

      2 – Similarly, the Royal Road concept and the idea of tracking shot quality are something that Chris Boyle has worked on a lot. I *really* liked his Shot Quality Project idea, and it was even one of my inspirations for working on DFF. The problem is that the data that eventually came out was small sample, and then the large sample data of late has had some serious concerns raised as to its veracity. So I’m approaching with caution.

      3 – Where it does look promising is in Ryan Stimson’s passing data project, which uses a form of crowdsourcing to collect that data. I haven’t yet looked at it, but I do think the idea of looking at what happens immediately before a shot is the next major frontier in upping our understanding of shot quality, at least until SportVu (or whatever it is) comes out. I would like to incorporate it, but not sure that will be in the short term.

      4 – I think screens are a key aspect of assessing chances, but to my knowledge, no one is tracking that. The only metric that actually captures this is Dave Staples’ (often unjustly) derided Scoring Chances work.

      5 – You are 100% correct, incorporating shooter quality is a key item. The only one that I know does this is DTMAboutHeart’s xG metric. The catch is that no one, including DTM, in my opinion has done a good job of separating shooter quality from shooting location. We do know that a huge part of scoring is where you shoot from, and that issue typically swallows up differences like Weber vs Schultz and their slap shots. It’s part of what has made Corsi (which ignores those things utterly) still have such longevity and statistical validity. I do have a thought about trying to suss out shooter quality (normalize for location and shot type and see if that gives you anything of value), but I don’t think I will incorporate that until at least a couple of iterations down the road (location first).

      Thanks for your feedback!

