Quality of competition makes player analysis difficult. It tilts the scale. You can’t compare two players if one has faced tough opposing players while the other has played against 3rd and 4th liners.
We’ve continuously refined our QoC metric. We started by using average opponent Corsi weighted by ice time against. While this is directional and indicative, it’s not accurate (players on strong Corsi teams get skewed toward looking like “strong players”, and vice-versa for players on weak possession teams). We then developed our own QoC metric and went into some detail here.
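The original ice-time-weighted approach can be sketched in a few lines. This is a minimal illustration of the naive proxy described above, not the refined metric; the function name and the sample numbers are hypothetical.

```python
def toi_weighted_qoc(matchups):
    """Naive QoC proxy: average opponent Corsi-for %, weighted by
    head-to-head ice time.

    matchups: list of (opponent_cf_pct, toi_against_seconds) tuples.
    """
    total_toi = sum(toi for _, toi in matchups)
    if total_toi == 0:
        return 0.0
    return sum(cf * toi for cf, toi in matchups) / total_toi

# Illustrative only: a defenseman who spends most of his minutes
# against a 55% Corsi line gets a high QoC number, even if that
# line's Corsi is inflated by playing on a strong possession team.
matchups = [(55.0, 600), (48.0, 200), (45.0, 100)]
print(round(toi_weighted_qoc(matchups), 2))  # -> 52.33
```

This illustrates the skew problem: the opponent Corsi inputs themselves carry team effects, which is what motivated the refined metric.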
But consider this example to highlight the limitations of player comparisons without accounting for QoC. Nate Schmidt had a puck movement score of +0.99 in 2016/17. John Klingberg was -0.03. Can you conclude Schmidt is a better puck mover than Klingberg?
Of course not. Having run our analysis through each team, we can begin to develop true “classes” of performance based both on a defenseman’s ability as a puck-mover and on the quality of opposing forwards they faced. Consider the following:
Each row represents a role. The far-left column, labelled ‘Pairing’, has three rows under it numbered 1, 2, and 3. QoC high and low refer to the range of quality of competition faced by players in that “class”.
So the first row, “Pairing 1 – QoC High and Low”, covers the ~60 defensemen who faced the highest level of competition in the league (i.e., the 1 and 2 on each team).
The next three columns (labelled 1st tier, 2nd tier, and 3rd tier) show the breakdown within each class. For example, among the 60 defensemen in the first row, who faced QoC ranging from 2.756 to 3.096, the 1st tier averaged a score of 1.73 (with a range of -0.18 to 6.13).
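The class/tier construction described above can be sketched roughly as follows: rank defensemen by QoC, split them into “pairing” groups of ~60, then split each group into tiers by puck-movement score. The function, its parameters, and the sample players are illustrative assumptions, not the actual methodology.

```python
def classify(players, group_size=60, n_tiers=3):
    """Assign each player a (pairing, tier) label.

    players: list of (name, qoc, pmf) tuples.
    Pairings are groups of `group_size` ranked by QoC (descending);
    tiers split each pairing group by PMF (descending).
    """
    by_qoc = sorted(players, key=lambda p: p[1], reverse=True)
    result = {}
    for i in range(0, len(by_qoc), group_size):
        pairing = i // group_size + 1
        group = sorted(by_qoc[i:i + group_size], key=lambda p: p[2], reverse=True)
        tier_size = max(1, -(-len(group) // n_tiers))  # ceiling division
        for j, (name, _qoc, _pmf) in enumerate(group):
            result[name] = (pairing, j // tier_size + 1)
    return result

# Tiny invented example (group_size shrunk so the split is visible):
roster = [("A", 3.0, 1.0), ("B", 2.9, -0.2), ("C", 2.5, 2.0), ("D", 2.4, 0.5)]
print(classify(roster, group_size=2, n_tiers=2))
# A and B face the toughest QoC (pairing 1); within it, A's higher PMF
# puts him in tier 1 and B in tier 2. C and D form pairing 2 likewise.
```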
This is really insightful: obtaining a PMF of -0.18 against top competition places you among the top 20 defensemen in the league who faced similarly tough competition. The same score against weaker competition would not crack the top 20 (it would land in tier 2).
This breakdown is critical because it provides key context for comparing players. Consider the John Klingberg / Nate Schmidt example from above:
Klingberg’s PMF of -0.03 is actually a top tier performance considering he faced top opposing forwards. Schmidt, on the other hand, faced weak QoC. So even though his PMF of +0.99 appears higher than Klingberg’s, the 3rd row of the chart above shows that score isn’t even top tier among 3rd pairing defensemen who faced similar competition.
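The Klingberg/Schmidt comparison boils down to a context-dependent threshold lookup. In this toy sketch, the tier 1 cutoff for the top QoC class (-0.18) comes from the chart above, but the cutoff for the 3rd pairing class is an invented placeholder, since the source only tells us Schmidt’s +0.99 falls short of it.

```python
# Minimum PMF required for tier 1, keyed by pairing class.
# The pairing-1 cutoff (-0.18) is from the chart; the pairing-3
# cutoff (1.20) is a HYPOTHETICAL placeholder for illustration.
TIER1_CUTOFF = {1: -0.18, 3: 1.20}

def is_tier1(pairing_class, pmf):
    """Is this PMF a tier 1 performance within its QoC class?"""
    return pmf >= TIER1_CUTOFF[pairing_class]

print(is_tier1(1, -0.03))  # Klingberg vs. top QoC -> True
print(is_tier1(3, 0.99))   # Schmidt vs. weak QoC -> False
```

The point is simply that the same raw PMF maps to different tiers depending on which QoC class it was earned in.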
A quick snapshot of some examples
There are still nuances we’re working through. For example, Franson was a strong puck mover (PMF +4.60) but faced weaker QoC than Lindholm and Niskanen, although all three qualify as ‘2nd pairing QoC’. We’re currently building a QoC-adjusted puck movement score that will better account for this.
But the takeaway is clear: QoC plays a big role in most performance metrics. By properly accounting for it, one can draw more accurate conclusions and make more informed decisions. For example, assuming you can slot Colin Miller onto your first pairing would be a stretch. Rather, one would conclude that his strong performance in a sheltered 3rd pairing role might indicate he can be effective with more responsibility (i.e., a 2nd pairing role). Or one might conclude that Miller projects as a 3rd pairing defenseman, and that for the right price, the team could acquire a strong 3rd pairing defenseman.
In evaluating a continuous game like hockey, there’s no such thing as perfect. But when comparing players, the quality of competition each faces has a major impact on their performance, and failing to recognize and account for it will lead to misinformed decision making.