The Last Change

The college hockey community has, by and large, embraced a purely objective-based system for selecting the field for the NCAA tournament. Fans like it, coaches like it, we like it.

Why?

Well, all subjective systems are open to biases — even subconscious ones — that can skew results. Further, the large majority of the hockey community believes that even the perception of political decision-making is not worth the hassle a subjective system brings.

Still, no objective system is perfect — and some are less perfect than others.

In the current system, there is frustration with some of the built-in injustices, many of which we’ve written about over the last few months and years. The committee has tried many times over the years to tweak things, but the tweaks, and the continual changes in interpretation, only create more frustration.

Not wanting to beat a dead horse any longer, we present our proposal for a system that can stand the test of time — something that we can all live with in peace and harmony for years to come.

Our approach to this proposal is to guide the process, not dictate it. College hockey coaches and the NCAA Division I Men’s Ice Hockey Committee should be able to choose the things that are most important to them. Our role, then, is to help everyone understand the implications and ramifications of each decision, and how they interact with each other. And to clearly spell out the options.

Introduction

Given the desire for a purely objective system, it is imperative to ensure the “correct” objective system is used. If not, you become a slave to numbers that are faulty.

By starting from the ground up, we free ourselves from being tied to the old system. Instead of tweaking the old system, and “fixing” things by plugging round pegs into square holes, we can reevaluate everything from the start.

This is not to suggest the current system is a disaster. The current system has managed to choose a good field, and create an outstanding tournament. Further, no system is perfect, and it should not be assumed perfection can be achieved.

However, we believe it is possible to create a system that is on much better logical footing.

By the same token, everything in this proposal should be familiar. We are just reevaluating each step in a logical, orderly fashion. This proposal can help sift the truly debatable issues from the points which should be absolute. For example, it should be up to the hockey committee and coaches to decide which methods and criteria to use … while we can help create the most logical implementation of those ideas.

This proposal is meant to be followed, step-by-step. By making a series of choices, the committee will have, in the end, the system it wants, and one that also works well.

The Steps

Step I. Do we want an objective-based system?

If the answer is no, stop now; the rest of the article is meaningless. If the answer is yes, continue.

Step II. KRACH vs. RPI

RPI (Ratings Percentage Index) is the foundation on which selections are made in nearly every NCAA sport. Some sports use RPI as a guide, then jump off with a number of subjective ideas to help choose the field (like men’s basketball). Others take RPI and then build onto it with other objective criteria (like ice hockey).

If, however, this foundation is flawed, it’s like trying to calculate a trip to the moon using textbooks from the ancient Greeks.

RPI’s biggest flaw is that defeating a weak opponent can cause your RPI to go down. Why this happens in RPI, and why KRACH is therefore better, is a bit more technical. Please read the KRACH FAQ to better understand. But, in essence, RPI and KRACH have the same goal — to account for strength of schedule — KRACH just does it much better.
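
To make the flaw concrete, here is a small sketch in Python (the 25/50/25 weighting is one common formulation of RPI, and every number in the example is invented for illustration):

    # Illustrative only: 25% winning pct, 50% opponents' winning pct,
    # 25% opponents' opponents' winning pct. The real OWP bookkeeping
    # (excluding games against yourself, etc.) is more involved than
    # this simple per-game average.

    def rpi(wp, owp, oowp):
        return 0.25 * wp + 0.50 * owp + 0.25 * oowp

    games, wins = 24, 20            # a strong team, 20-4 so far
    owp, oowp = 0.600, 0.550        # decent opposition to date
    before = rpi(wins / games, owp, oowp)

    # Now beat a very weak opponent (a team winning only 20% of its games).
    wins, games = wins + 1, games + 1
    owp = (owp * (games - 1) + 0.200) / games
    after = rpi(wins / games, owp, oowp)

    print(before > after)           # True: the win LOWERED the RPI

The win nudges the winning percentage up a little, but drags the strength of schedule components down a lot, and the schedule components carry most of the weight.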

We believe switching from RPI to KRACH (or any of its equivalents) is an absolute. There is an understandable hesitance for one NCAA sport to go against the grain of every other NCAA sport. But a willingness to be unique is part of what makes college hockey so great.

College hockey should take this bold step, and be a trendsetter for the NCAA. Use KRACH.

Step III. Should home-ice be a factor?

This year, bonus points — in the form of an RPI adjustment — were introduced for “good wins.”

The impetus for this is the belief that teams which play a disproportionate number of nonleague road games are at a disadvantage. For example, if an Ivy League school plays most of its nonleague games on the road in large Western buildings, its nonleague results are skewed negatively.

Unfortunately, the method chosen to “fix” this problem — RPI bonus points — is riddled with flaws (details available upon request). Further, it fails to even adequately address the original problem.

A far superior way to handle this problem is to adjust KRACH for home ice. This is done within the mathematical scheme of KRACH itself, and is therefore logically correct. It’s a simple home-ice advantage built into the formula. KRACH is then re-balanced based upon the number of home, neutral and road games played.
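
For the technically inclined, here is a minimal sketch of the idea in Python. The update rules are the standard ones for a Bradley-Terry model with a home-advantage parameter (the model underlying KRACH); the data format and the handling of ties are our own simplifications.

    # Home-ice KRACH sketch: ratings K plus a home-advantage factor h,
    # fit by fixed-point iteration. The model:
    #     P(home team i beats road team j) = h*K[i] / (h*K[i] + K[j])

    def home_ice_krach(games, iters=1000):
        # games: list of (home, road, home_won) tuples. A tie can be
        # entered crudely as one win each way; real KRACH credits ties
        # as half a win apiece.
        teams = {t for g in games for t in g[:2]}
        K = {t: 1.0 for t in teams}
        h = 1.0
        for _ in range(iters):
            wins = {t: 0.0 for t in teams}
            denom = {t: 0.0 for t in teams}
            h_num = h_den = 0.0
            for home, road, home_won in games:
                d = h * K[home] + K[road]
                wins[home] += home_won
                wins[road] += not home_won
                denom[home] += h / d
                denom[road] += 1.0 / d
                h_num += home_won
                h_den += K[home] / d
            K = {t: wins[t] / denom[t] for t in teams}  # rating update
            h = h_num / h_den                           # home-edge update
            scale = sum(K.values()) / len(K)
            K = {t: K[t] / scale for t in K}            # pin the scale
        return K, h

A fitted h above 1 means playing at home helps, and the re-balancing described above falls out automatically, because every rating is estimated with that team’s mix of home and road games accounted for. This sketch ignores neutral-site games; handling them (by setting the advantage factor to 1 for those games) is a straightforward extension.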

Like the bonus points, a “Home-ice KRACH” also would have had minimal effect on this year’s selections. That, however, is not a good reason to keep using a poor method. It could matter. Therefore, you might as well be using a method that works as intended.

If the committee wants to build a home-ice factor into the system, we recommend using this method instead of “bonus points.”

Step IV. Do we want to add other criteria?

There are three ways to go from this point.

  1. Use KRACH by itself. Tournament selection is based solely on straight KRACH, 1-16.
  2. Use other criteria to break close calls. This is a method used in other sports, and has been used in the past in hockey. Since no mathematical formula is perfect, even one as good as KRACH, it’s logical to say that KRACH ratings within a certain range of each other are a figurative tie, and then use other criteria only to break those kinds of “ties.” (Details of the “close range” would be worked out later; a sketch of how such tie bands might work follows this list.)
  3. Use other criteria all the time, with a method of Pairwise Comparisons. This is what’s used now (i.e. the Pairwise Rankings).
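
As a sketch of how option No. 2 might work (the 5 percent tolerance below is an invented placeholder; choosing the real “close range” would be the committee’s call):

    # Group teams into "tie bands": runs of KRACH ratings so close
    # together that the differences are treated as meaningless. The
    # other criteria would be used only inside each band.

    def tie_bands(ratings, tolerance=0.05):
        # ratings: dict of team -> KRACH rating
        ordered = sorted(ratings, key=ratings.get, reverse=True)
        bands = [[ordered[0]]]
        for team in ordered[1:]:
            prev = bands[-1][-1]
            if ratings[prev] - ratings[team] <= tolerance * ratings[prev]:
                bands[-1].append(team)   # close enough: figurative tie
            else:
                bands.append([team])     # clear gap: new band
        return bands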

Obviously, if choice No. 1 is used, the remainder of the steps are irrelevant. Otherwise, continue.

Step V. Choose the criteria

The committee should go through each criterion, step-by-step, and determine whether to use it. That is where this proposal comes in: once a criterion is philosophically agreed upon, we have recommended ways it can be implemented properly and effectively.

A. Head-to-head

In a system of pairwise comparisons, the usefulness of this criterion is apparent. The committee may also choose to consider home ice as a factor in this criterion.

B. Common Opponents

The use of a common opponents criterion is extremely sound. However, it is somewhat susceptible to a strength of schedule factor. For example, Team A could play a poor opponent 4 times, and a strong opponent once. Team B could play that same poor opponent once, and the same strong opponent 4 times. Identical records against those common opponents would then mean very different things.

Should the committee wish to continue using this criterion, we recommend KRACH-ifying it; i.e. re-balancing it to adjust for strength of schedule within those common opponents. Even though the difference in strength of schedule in such a case may not be great, this will still give a truer measure of the record against common opponents.
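
One way such a KRACH-ification might look, as a sketch under our own assumptions (the common opponents’ full-season ratings are held fixed, and we ask what single rating best explains each team’s results against just those opponents):

    # Rate a record vs. common opponents on the KRACH scale: bisect for
    # the rating K whose expected wins equal the actual wins,
    #     sum over games of K / (K + K_opp)  ==  wins
    # All opponent ratings below are invented.

    def implied_rating(results, opp_ratings, lo=1e-6, hi=1e6):
        # results: list of (opponent, won) pairs vs. common opponents
        wins = sum(won for _, won in results)
        if wins == 0 or wins == len(results):
            return lo if wins == 0 else hi   # perfect records hit the rails
        for _ in range(100):
            mid = (lo * hi) ** 0.5           # geometric bisection (K > 0)
            expected = sum(mid / (mid + opp_ratings[o]) for o, _ in results)
            lo, hi = (mid, hi) if expected < wins else (lo, mid)
        return mid

    opp = {"Weak": 20.0, "Strong": 400.0}
    a = implied_rating([("Weak", True)] * 3
                       + [("Weak", False), ("Strong", False)], opp)
    b = implied_rating([("Strong", True)] * 2
                       + [("Strong", False)] * 2 + [("Weak", True)], opp)
    print(a < b)   # True: both went 3-2, but B's wins were worth more

Both teams finished 3-2 against the same two common opponents, yet the implied ratings correctly recognize that B’s record came against the tougher mix.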

Options:

  1. Do not use this criterion
  2. Use a KRACH-ified criterion that adjusts for strength of schedule
  3. Do not adjust for strength of schedule (current)

C. Record vs. TUC

The usefulness of this criterion (a team’s record against Teams Under Consideration) is in measuring how a team does against strong opposition. For example: Team A and Team B each play a strong opponent and a weak one. Team A beats the weak one, and loses to the strong one. Team B beats the strong one, and loses to the weak one. KRACH and RPI would rate these teams equal.

By adding a “Record vs. TUC” criterion, Team B gets an added benefit, for having beaten the “strong” team.
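
A quick check of that claim, with invented ratings of 400 for the strong team and 20 for the weak one: both teams went 1-1 against the same pair, so KRACH’s expected-wins equation is identical for both.

    # Each team's rating K must satisfy the same equation:
    #     K/(K + 400) + K/(K + 20) = 1
    # regardless of WHICH game was the win. It solves to the geometric
    # mean of the opponents' ratings, sqrt(400 * 20), about 89.4.

    def expected_wins(K, opponents):
        return sum(K / (K + k) for k in opponents)

    lo, hi = 1e-6, 1e6
    for _ in range(100):                 # geometric bisection for K
        K = (lo * hi) ** 0.5
        lo, hi = (K, hi) if expected_wins(K, [400.0, 20.0]) < 1 else (lo, K)
    print(round(K, 1))                   # 89.4 for Team A and Team B alike

KRACH literally cannot tell the two teams apart; a “Record vs. TUC” criterion is what separates them.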

The argument against this criterion is: What’s the difference? If you get a boost for beating a good team, why not also a penalty for losing to the bad team?

A more technical problem with this criterion has always been: Where do you draw the line? The definition of TUC is arbitrary in nature. Also, what if the TUCs Team A beat were all in the Top 5, while Team B beat TUCs in the 15-20 range?

Each of these problems has its own KRACH-based solution. On the latter, a KRACH-ified strength of schedule can be applied. On the former, there are two possible solutions: keep a precise cutoff, but one based on KRACH; or create a sliding scale that weights each win more and more, the tougher the opponent you beat.
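
The sliding scale could look something like this sketch (the weighting, each win counting for the opponent’s share of the combined ratings, is our own invented placeholder, not an established formula):

    # A sliding-scale "quality wins" score: no hard TUC cutoff. Every
    # win counts, weighted by how unlikely it was; a win you were
    # expected to get earns little, an upset of a top team earns a lot.

    def quality_win_score(beaten, own_rating, ratings):
        return sum(ratings[o] / (own_rating + ratings[o]) for o in beaten)

    ratings = {"Patsy": 10.0, "Power": 500.0}   # invented KRACH ratings
    print(quality_win_score(["Patsy"], 100.0, ratings))   # ~0.09
    print(quality_win_score(["Power"], 100.0, ratings))   # ~0.83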

It is up to the committee whether it philosophically would like to keep this criterion. Does the committee want to continue to reward “good wins” even though “bad losses” aren’t penalized?

If so, remember what you are doing. You are saying: It’s better to beat a good team and lose to a bad team, than to beat a bad team and lose to a good team. An arguable point.

Options:

  1. Do not use this criterion
  2. Use a KRACH-ified criterion that adjusts for strength of schedule, with an arbitrary cutoff for the definition of TUC
  3. Use a KRACH-ified criterion that adjusts for strength of schedule, with a sliding scale of win strength, based on KRACH itself
  4. Use KRACH to apply TUC cutoff, but don’t apply strength of schedule (same as current, but with KRACH-based cutoff replacing RPI-based one)

D. “Down the Stretch”

The philosophy of crediting a team for playing better “down the stretch” is used in many sports. The debate, of course, is whether a team’s season should be evaluated as a whole, equally, or whether the team should be penalized for doing poorly at the end.

Again, this is a question the committee has to answer. And it is one of the reasons the committee removed this criterion.

However, the other reason the criterion was removed was the strength of schedule inequity in different teams’ last 16 games.

Should the committee determine they want some sort of “Down the Stretch” criterion, rest assured that the strength of schedule issue can be rectified. Just as in Common Opponents, the Last 16 (or whatever amount of games used) can be KRACH-ified; i.e. re-balanced to account for strength of schedule.

Another interesting proposal for rewarding “Down the Stretch” play is to give more weight to conference tournament wins. This can be accomplished by flat out rewarding conference tournament wins, or by creating a criterion that gradually rewards wins more and more as the season goes on.
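
Either flavor can be dropped straight into the KRACH machinery by weighting each game’s contribution. Here is a sketch; the ramp from 1.0 to 2.0 over the season, and the doubling of tournament games, are invented placeholders for whatever the committee prefers.

    # Weighted KRACH sketch: late-season and tournament games count more.
    # Win totals and expected-win sums both use the weights, so the
    # result remains a properly balanced KRACH-style rating.

    def game_weight(game_num, season_games, tournament=False):
        w = 1.0 + game_num / season_games          # ramps from ~1 to 2
        return w * (2.0 if tournament else 1.0)    # playoff bonus

    def weighted_krach(games, iters=1000):
        # games: (team_a, team_b, a_won, weight) tuples; assumes every
        # team has at least one win (winless teams would need the same
        # special handling real KRACH gives them)
        teams = {t for g in games for t in g[:2]}
        K = {t: 1.0 for t in teams}
        for _ in range(iters):
            wins = {t: 0.0 for t in teams}
            denom = {t: 0.0 for t in teams}
            for a, b, a_won, w in games:
                d = K[a] + K[b]
                wins[a] += w * a_won
                wins[b] += w * (not a_won)
                denom[a] += w / d
                denom[b] += w / d
            K = {t: wins[t] / denom[t] for t in teams}
        return K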

Therefore, the options are:

  1. Do not use this criterion
  2. Use a KRACH-ified Last 16 (or any other number of games) that adjusts for strength of schedule
  3. Build the criterion so it gradually weights games more and more as the season goes on
  4. Build the criterion so tournament wins only are rewarded

Effects on 2002-2003

The biggest complaint commonly heard about the 2003 NCAA tournament was the selection of St. Cloud State with a 17-15-5 record, over Minnesota-Duluth, which was 22-15-5 and defeated SCSU in the first round of the WCHA tournament. This, detractors said, was all the proof they needed that the system was flawed.

Oddly, using straight KRACH, St. Cloud also would have made the tournament over Minnesota-Duluth. In fact, Duluth was not even that close.

Is this also a flaw in KRACH?

No. This shows clearly that St. Cloud’s selection is not an outrageous injustice. In fact, it’s perfectly logical — at least assuming that only the season as a whole is calculated, and not the many other factors which can be added in.

To explain:

SCSU’s schedule was so tough this year that its average opponent was an NCAA tournament team. Going just above .500 against this competition, therefore, would also make you an NCAA tournament team.

As a way of simplifying this explanation … let’s say SCSU played Mankato St. 36 times this year (Mankato was the lowest at-large team other than SCSU itself).

If SCSU went 18-18 against Mankato State, what would this say? It says SCSU is as good as Mankato. Therefore, assuming we agree that Mankato is a legitimate NCAA tournament team, it’s perfectly logical to put SCSU into the tournament, even though it was only .500.
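
In KRACH terms the arithmetic is immediate (the rating of 100 for Mankato below is invented; only the ratio matters):

    # KRACH expects a team rated Ka to beat a team rated Kb at rate
    #     Ka / (Ka + Kb)
    # Going 18-18 pins that rate at exactly .500, which forces Ka = Kb.
    p = 18 / 36                # observed winning rate vs. Mankato
    Kb = 100.0                 # invented rating for Mankato
    Ka = Kb * p / (1 - p)      # invert the odds: Ka/Kb = p/(1-p)
    print(Ka)                  # 100.0, identical to Mankato's rating

A .500 record against a tournament-caliber schedule is, on this scale, itself tournament-caliber.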

This is essentially what happened to SCSU this year. Instead of being an outrageous injustice, it was merely an oddity, but perfectly valid.

On the other hand, it’s clear that some sort of “Down the Stretch” criterion, especially one that specifically considered conference tournament games, would have hurt St. Cloud’s chances at selection, as well as the seedings of Maine and North Dakota.

However, under any scenario run, Duluth is hardly the beneficiary. Time and again, any number of mathematical analyses show clearly that St. Cloud played a tougher schedule, even within its own conference (the WCHA plays an unbalanced schedule). If St. Cloud drops out because of a “Down the Stretch” criterion, the most common beneficiaries are Providence and Michigan State.

Seeding

This article is about selection, not seeding … but one comment on the latter:

While a completely objective system is highly desirable for the selection of teams, we believe the committee should afford itself some leeway when it comes to seeding.

Seeding should maintain a set of “guidelines,” but with enough flexibility built in so the committee avoids boxing itself into a corner. By allowing itself leeway and the use of common sense, it can avoid problematic situations, such as this year’s Cornell-Mankato St. game, and the New Hampshire-Boston University second-round game.

Conclusion

We believe it’s possible to settle on a system that is on sound logical footing, and includes all of the criteria that the hockey coaches and committee deem important.

We hope that “fear of change” will not hinder this kind of progress.

On the other hand, we understand that many are tired of the constant tinkering each year. We hope this proposal is sound enough so that, once settled on, we can live comfortably with the results for a long time.