Advanced metrics are here to stay. We've reached a point where box score stats are no longer sufficient when evaluating teams and players. Not that the box score was ever sufficient, but there was a time when that was all we had. Now, averaging everything over 100 possessions or per X minutes, and expressing some statistics as percentages rather than averages, is what most people use when determining how well a particular team or player is performing.
John Hollinger, the Vice President of Basketball Operations for the Memphis Grizzlies, created his PER (Player Efficiency Rating) metric while working for OregonLive.com. Hollinger's PER is designed to distill all of a player's on-court contributions into a single number. Ken Pomeroy is the go-to source for advanced stats about men's college basketball. His team rankings, along with his adjusted offensive and defensive rankings, are among the most referenced stats in college basketball.
While advanced metrics have allowed us to understand basketball in ways we might never have imagined, particularly the new SportVU stuff on NBA.com, there are pitfalls when it comes to how we use the data. Causation vs. correlation, action vs. intent, and confirmation bias are just a few of the traps one can fall into when analyzing data.
Recently, Andrew Chiappazzi, publisher for Colonials Corner, predicted the adjusted offensive and defensive efficiency metrics for each team in the NEC based on a 3-year average. It's a really great read that can be found here, largely because of how well Chiappazzi frames everything in its proper context.
The teams in the NEC, like many mid-major conferences, tend to play against tougher opponents during the non-conference portion of their schedule. The Robert Morris Colonials are no exception; they played at Kentucky, at Oklahoma State, and hosted a very good Toledo team.
Chiappazzi took the overall adjusted offensive and defensive calculations over the last three years and averaged them. He then averaged those metrics for just the NEC games over the last three years. The delta between those averages was then applied to every team's current figures in order to create projections. As you might expect, every team saw an increase in their adjusted offense and a decrease in their adjusted defense.
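The method described above can be sketched in a few lines. To be clear, this is my own illustration of the approach, not Chiappazzi's actual code, and the efficiency figures below are hypothetical, not any NEC team's real numbers:

```python
# Sketch of the delta-projection method: take a team's 3-year overall
# average and its 3-year conference-only average, and apply the gap
# between them to the team's current figure.

def project_conference_figure(overall_seasons, conference_seasons, current):
    """Apply the delta between the 3-year overall average and the
    3-year conference-only average to the current figure."""
    overall_avg = sum(overall_seasons) / len(overall_seasons)
    conference_avg = sum(conference_seasons) / len(conference_seasons)
    delta = conference_avg - overall_avg
    return current + delta

# Hypothetical adjusted offensive efficiencies (points per 100 possessions)
overall_off = [101.3, 99.8, 102.1]   # last three seasons, all games
nec_off = [104.0, 102.5, 105.2]      # last three seasons, NEC games only
current_off = 98.5                   # this season's figure so far

projected_off = project_conference_figure(overall_off, nec_off, current_off)
# Conference-only offense runs higher than overall here, so the
# projected adjusted offense comes out above the current figure,
# mirroring the across-the-board increases described above.
```

The same function works for the defensive side; since NEC opponents are weaker than the non-conference slate, the conference-only defensive numbers would sit below the overall ones, and every team's projected adjusted defense would drop.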
Furthermore, he provided some context as to why the numbers seemed high based on his knowledge from observing trends over the last several seasons. He predicted how the teams would finish the season based on the delta between the projected adjusted offensive and defensive calculations. Lastly, he compared his projections to KenPom's projections, and footnoted the philosophical changes in RMU's defense that might affect the numbers.
Not only did Chiappazzi demonstrate that he understood the metrics, but more importantly, he framed them within a proper context. It just slays me sometimes when I read a tweet or an article that randomly lists some advanced metrics without also noting why that metric is significant or not. I understand that it's hard to frame things on Twitter when you're limited to 140 characters. Also, some metrics are generally understood and don't need to be framed as much as others.
With that said, if I told you Pitt is averaging 1.19 points per possession, how much would that actually mean to most fans? By no means am I suggesting that fans aren't educated or don't have a grasp of advanced metrics. I am suggesting, however, that there isn't a well-known benchmark for gauging what is and isn't a good average.
Now, if I said the Panthers are averaging 1.19 points per possession and are in a four-way tie for seventh in the country, that would have a lot more meaning. One might grasp the significance of the metric even more if I added that the median is 1.06 points per possession, and the low mark is 0.84.
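For readers wondering where a number like points per possession even comes from: possessions aren't in the box score, so they have to be estimated. Below is a minimal sketch using one commonly cited approximation (a 0.475 free-throw weight; 0.44 is also used in some formulations). The box score totals are hypothetical, not Pitt's actual figures:

```python
# Estimate possessions from box score totals, then compute
# points per possession. The 0.475 free-throw weight is one
# common convention; it approximates how often a free-throw
# trip ends a possession.

def estimate_possessions(fga, fta, orb, tov):
    """Rough possession count: field goal attempts, minus offensive
    rebounds (which extend possessions), plus turnovers, plus a
    weighted share of free-throw attempts."""
    return fga - orb + tov + 0.475 * fta

def points_per_possession(pts, fga, fta, orb, tov):
    return pts / estimate_possessions(fga, fta, orb, tov)

# Hypothetical season totals for illustration
ppp = points_per_possession(pts=1540, fga=1150, fta=420, orb=230, tov=250)
```

Even with the formula in hand, the raw output is just a number in the 0.8 to 1.2 range; it's the league median and the extremes around it that turn the number into an argument.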
Sure, tweeting out some random advanced stat that most people don't know how to find makes you sound smart. I guess including it in an article just to keep up with the times covers your bases. However, most people can see right through that without having to fully understand the metrics that are being presented.
Proper context is vital when referencing advanced metrics. Per game stats are easily understood, but they generally speak to what happened as opposed to how it happened. That isn't a bad thing; in fact, it's why we have some advanced metrics in the first place.
- Stats courtesy of Stat Sheet.