The NHL Awards Show is meant to be an entertaining experience for fans, but it is impossible to satisfy everyone. Last year, many people were outraged that Drew Doughty was named the NHL’s best defenceman after a season in which the numbers suggested that he was slightly worse than Jake Muzzin, his own defence partner. In a sense, this outcome represents a fundamental divide amongst hockey fans with respect to how we evaluate defencemen. Traditionalists argue that Doughty was the best defenceman in the league that season and therefore deserved to win the Norris trophy. Those who value objective statistics, however, believe that Doughty was probably not the best defenceman on his own team, let alone the best in the entire league. Despite these differences in opinion, one fact that both sides can agree on is that Doughty-Muzzin is an elite defence pairing.

For the L.A. Kings and their fans, the “Doughty versus Muzzin” argument matters less than the “Doughty AND Muzzin” argument; if there was an award for the NHL’s best defence pairing, Doughty and Muzzin would likely be in the conversation every year. While the Norris trophy attempts to identify the best defenceman in the league, I will now try to quantify the output of elite defence pairings. By the end of this post, you will have a better idea of how to spot an elite defence pairing by looking at their statistics.

We’ll define a defence pairing as two defencemen who have played at least 500 minutes together at even strength since the 2013-14 season. Since I downloaded the data from corsica.hockey during the All-Star break, the sample does not include games played after January 27, 2017. To figure out how elite defence pairings perform, we first need to determine what separates an elite pairing from a merely good one.

**Defining Elite – An Important Analogy:**

To explain how I separate elite pairings from good ones, we’ll use an analogy from school.

Imagine that you are in school and your teacher is returning your test grades. The teacher gives you two numbers: your mark (84%) and the class average (80%). Immediately, you know that you are 4% above average. This is certainly good, because it tells you that you did better than most of your classmates. What it doesn’t tell you is *how much* better you did. You know that your mark sits somewhere among the students who scored above average, but you do not know your specific location among them. Is your mark exceptional, or are you barely above average? To answer this, you need to ask your teacher for one more piece of information: the standard deviation. Using this third number along with the two numbers from before allows you to calculate your z-score. If you are already familiar with z-scores, you can skip the next section.

**Calculating and Interpreting Z-Scores:**

Your z-score is the answer to the question: how many standard deviations above the class average is my mark on the test? I prefer using z-scores to interpret test results whenever possible because they are easy to calculate and simple to interpret. In fact, you are already halfway there! You already know that your mark is 4% above average because you subtracted the average from your mark. If you divide that difference by the standard deviation, you have successfully calculated your z-score.

Your z-score will be a positive number if your mark is above average or a negative number if it is below average. If your z-score is 1, your mark on the test is 1 standard deviation *above* the class average. A z-score of -1 means that your mark is 1 standard deviation *below* average. Assuming test marks are normally distributed, 68% of your classmates will have z-scores somewhere between -1 and 1. The remaining 32% are either below -1 or above 1. Since we’re determining whether your test score is exceptional or merely good, we need to know if your z-score is above 1 or somewhere between 0 and 1.

If the standard deviation is 5, your z-score is 0.8 (since 4/5 = 0.8), which tells you that you are 0.8 standard deviations above the mean. That is good, but not great. If the standard deviation were instead 3, your z-score would be 1.33 (since 4/3 ≈ 1.33), which makes your mark a very good one. Ultimately, z-scores help us distinguish between exceptional and merely above-average results.
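The calculation above can be sketched in a few lines of Python (the function name is mine, not a standard one):

```python
# Minimal sketch of the z-score calculation from the test example above.
def z_score(value, mean, std_dev):
    """How many standard deviations `value` sits above the mean."""
    return (value - mean) / std_dev

# Your 84% mark against a class average of 80%:
print(z_score(84, 80, 5))            # 0.8: good, but not great
print(round(z_score(84, 80, 3), 2))  # 1.33: a very good score
```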

**How can z-scores help us define “elite”?**

Just as z-scores help us interpret a mark on a test, they can help us interpret a defence pairing’s results in a specific statistic. Since a variety of statistics can be used to evaluate defence, we will look at each statistic one by one. In any given statistic, I will define elite as any result that translates to a z-score of 1 or greater. In general, this corresponds to results at or above the 84th percentile*. If we want to determine whether Doughty-Muzzin are elite shot suppressors, for example, we need the z-score of their on-ice CA60 (shot attempts against per 60 minutes). Their CA60 of 44.87 is 1.83 standard deviations better than that of an average defence pairing, so they are elite shot suppressors.

*Why the 84th percentile? Recall from the previous section that 32% of all results will have z-scores that are either less than -1 or greater than 1. If these two subsets of the sample account for 32% of the entire sample, then each individual subset accounts for 32/2 = 16% of the sample. Since we’re concerned with the best (i.e. elite) results, we’re looking at the results with the highest z-scores and highest percentiles. 100 is the highest possible percentile, so 100 – 16 = 84, hence the 84th percentile or greater.
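If you want the exact percentile for any z-score, not just 1 and -1, you can evaluate the standard normal CDF. A quick sketch using only the Python standard library:

```python
import math

def percentile_from_z(z):
    """Percentile of a z-score under a normal distribution (standard normal CDF x 100)."""
    return 100 * 0.5 * (1 + math.erf(z / math.sqrt(2)))

print(round(percentile_from_z(1)))   # 84: the elite threshold
print(round(percentile_from_z(-1)))  # 16: one standard deviation below average
```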

We now have sufficient information to be able to answer the following question: what do the statistics of an elite defence pairing look like? In the chart below, you can see for yourself, with explanations to follow.

The first column of this chart, “Average of Elite Pairings,” is the average result of all pairings whose z-scores were at least 1. While the first column gives a general idea of what counts as an elite result in each statistic, the second column, “Cut-off,” is the worst result in the sample that still qualified as elite; in other words, it is the worst result that translates to a z-score greater than or equal to 1. The third column shows where each cut-off ranks within the sample as a percentile. For example, for a defence pairing to be considered elite at suppressing scoring chances, they need an SCA60 of 6.47 (or lower), which translates to the 83rd percentile (or better) and a z-score of at least 1.04. Altogether, the chart is a useful reference for evaluating the performance of above-average defence pairings.
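Note that for “against” statistics like CA60 and SCA60, lower is better, so the z-score needs a sign flip before comparing to the elite threshold. A minimal sketch; the league mean and standard deviation below are made-up placeholders, not values from the corsica.hockey sample:

```python
# For lower-is-better statistics, flip the sign so that positive z = better.
def z_lower_is_better(value, mean, std_dev):
    return (mean - value) / std_dev

# Hypothetical league-wide numbers for SCA60 (NOT the real sample values):
league_mean_sca60 = 7.8
league_std_sca60 = 1.2

z = z_lower_is_better(6.47, league_mean_sca60, league_std_sca60)
print(z >= 1)  # True means the pairing clears the elite threshold
```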

**Applications:**

We’ve seen how z-scores can be a useful tool for distinguishing between elite and above-average results. The most obvious application is to compare one pairing’s performance to the rest of the league using z-scores. I have done this with Doughty-Muzzin in the chart below:

When Doughty and Muzzin are on the ice together, they are elite shot suppressors and are above-average at preventing scoring chances and in expected goals. None of this is surprising.

Another way to incorporate z-scores into our evaluations is to build a leaderboard for a given statistic using a z-score cut-off rather than an arbitrary “top 10” or “top 15.” I made this viz so you can see all defence pairings who are elite in CA60. You can also use the z-CA60 filter to view the worst pairings in CA60 by setting it to show everyone with a z-score of less than -1. (A few Leafs pairings will show up there, by the way.)
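The leaderboard idea amounts to a simple filter. A sketch with invented pairing names and z-scores, purely for illustration:

```python
# Filter pairings by z-score instead of taking an arbitrary top N.
# The names and z-CA60 values below are made up for illustration.
pairings = [
    ("Pairing A", 1.83),
    ("Pairing B", 0.40),
    ("Pairing C", 1.04),
    ("Pairing D", -1.20),
]

elite = [name for name, z in pairings if z >= 1]   # the elite leaderboard
worst = [name for name, z in pairings if z < -1]   # the bottom of the league
print(elite)  # ['Pairing A', 'Pairing C']
print(worst)  # ['Pairing D']
```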

**Concluding Thoughts:**

The days leading up to the NHL Awards are often filled with debates surrounding certain trophies. Last year, it was Drew Doughty at the centre of the traditionalist-versus-analytics debate. Whatever your opinion on that question, his play with Muzzin deserves more recognition. Although there is no NHL award for the best defence pairing, you can apply z-scores to the results of individual players as well. So regardless of which players are involved in the debates this year, you now have another tool in your repertoire to objectively separate elite performance from merely above-average performance.