The Story Behind The Numbers

USCHO’s Chris Lerch has provided an excellent explanation of how and why teams were selected for the 2002-03 Division III men’s NCAA field. The constant mantra, which I heard over and over again, was that the selection committee went by the numbers.

On Sunday, one of the committee members told me personally, “it’s strictly by the numbers.” Lerch ends his column with “Numbers don’t lie, but they sure can disappoint.”

The problem with all this “by the numbers” talk is that the numbers may not be the right ones. More importantly, the numbers are worthless if they are not interpreted correctly. Numbers may not lie, but they can be misused. I believe that is exactly what has happened this year.

I’m not advocating secret bonus points for more “worthy” wins, as Division I apparently plans to do this year (in truth, I do like that idea, but only if there are firm criteria to keep it objective). Instead, I have solutions for each of the problems that arose this year.

Let me state up front that generally speaking, I defend what the selection committees do, and the process they follow. It is rare that I disagree with who they pick, and even when I do, I understand why they did it, and can live with it.

Some of you may know that I vigorously supported the women’s selection committee last year when they chose Elmira over UW-Stevens Point for the lone Pool C bid in a volatile situation that makes this year’s men’s controversy look like a minor disagreement. With what they had to work with, they properly applied the criteria and rightfully selected Elmira.

I also agree with this year’s men’s Pool B selection of Elmira over RIT. The numbers supported it, and there was nothing wrong with the numbers that could lead to any misinterpretation.

However, sometimes a decision is just flat wrong. This is the case with selecting UW-River Falls over UW-Superior, and it has nothing to do with the fact that Superior is the defending national champion. It has to do with the fact that Superior, by the numbers, is the better team.

That is supposed to be the ultimate goal of the selection committee — to pick the better team. Sure, they have to start somewhere — for instance, they cannot say that a team with a poor record is really better because they got all their injured players back. I have no problem with using PWR and other systems to whittle the choices down to a select few.

However, once they are down to a limited number of choices, their responsibility is to select the best team. They can, and should, use the selection criteria to do it, but they must study the numbers carefully enough to be sure those numbers are telling the right story, rather than getting so wrapped around the axle with the process that they misinterpret what the numbers say.

The irony of this year is that the committee did have some wiggle room. The NCAA selection handbook says the criteria are to be considered in priority order, so the following argument could have been made even without adopting my proposed solutions.

So, why didn’t Superior get selected? Because in this case, the numbers themselves tell a deceiving story.

Let’s run down the situation with River Falls and Superior.

The first criterion is in-region winning percentage. River Falls led this category, .776 to .759. Next came head-to-head results, the most telling number of all. Superior led this, 2-0-1.

If these two teams were from different conferences and had played a single game over the course of the year, I could see arguing that one nonconference game in the middle of the season shouldn’t have such an effect on the selection. Okay, that’s why it’s the second criterion, and not the only one.

However, when you have two teams in the same conference playing games, including playoffs, that count more on their schedule than any nonconference game, what better way can you possibly have to determine which of the two is better?

I can’t think of one.

In this case, it wasn’t even close. Superior never lost to River Falls in three games. Not once. Nada. Zippo. Zilch.

Yet, the committee, so hung up on the “by the numbers” process, never allowed itself to find out what the numbers were really saying. In this case, they were saying that Superior is the better team.

One might ask how we can allow the committee to look at this subjectively. Won’t that create more problems than it solves?

Yes, it can, but I have a solution. Do what they used to do: count each head-to-head victory as one point. In this case, Superior would have received two points in the process to River Falls’ zero, instead of the single point the current process awarded. And if two teams played just one nonconference game, as in the scenario above, no problem: it would be worth one point, just as it is now, and it wouldn’t carry any more weight than I suggested it should.
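To make the difference concrete, here is a minimal sketch of the two scoring approaches in Python. It is purely illustrative: the function names are mine, and the assumption that the current process awards a single winner-take-all point for the head-to-head criterion reflects my reading of the process, not the committee’s actual procedure.

```python
# Illustrative sketch only: compares winner-take-all head-to-head scoring
# with the proposed "one point per victory" scoring. Function names and the
# single-point assumption are mine, not the committee's.

def current_points(wins_a, wins_b):
    """Current approach (as I read it): one point to whichever team led the series."""
    if wins_a > wins_b:
        return 1, 0
    if wins_b > wins_a:
        return 0, 1
    return 0, 0  # even series: neither team gains ground

def proposed_points(wins_a, wins_b):
    """Proposed approach: each head-to-head victory counts as one point."""
    return wins_a, wins_b

# Superior went 2-0-1 against River Falls: two wins, no losses, one tie.
print(current_points(2, 0))   # (1, 0) -- same credit as edging a single game
print(proposed_points(2, 0))  # (2, 0) -- sweeping the series counts for more
```

Note that a single nonconference meeting would still be worth exactly one point under either approach, which is the behavior I described above.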

Okay, now let’s move on to the next three criteria — results against common opponents, strength of schedule, and results against tournament teams.

Strength of schedule is an excellent measure, but you still have to be careful with it. I don’t like the idea that teams like Neumann, or MIT on the women’s side, can drag SOS down for everyone in their conference with no recourse in the criteria. On the other hand, teams do have some control over their nonconference schedule. Thus SOS, when looked at properly, is a good criterion, and there are various ways it could be adjusted objectively to account for these situations.
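To show what I mean by an objective adjustment, here is one hedged sketch, my own idea and not anything in the NCAA handbook: drop opponents whose own winning percentage falls below a fixed floor before averaging, so a single mandatory game against a winless conference opponent doesn’t sink everyone’s SOS. The floor value and the sample schedule are made up for illustration.

```python
# Hypothetical SOS adjustment, not part of the NCAA criteria: ignore opponents
# below a winning-percentage floor before averaging opponent winning percentages.

def adjusted_sos(opponent_win_pcts, floor=0.200):
    """Average opponent winning percentage, excluding opponents below the floor."""
    kept = [p for p in opponent_win_pcts if p >= floor]
    return sum(kept) / len(kept) if kept else 0.0

# Illustrative schedule dragged down by one very weak mandatory conference opponent.
schedule = [0.600, 0.550, 0.500, 0.650, 0.050]
print(round(sum(schedule) / len(schedule), 3))  # raw SOS: 0.47
print(round(adjusted_sos(schedule), 3))         # adjusted SOS: 0.575
```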

I also like results against tournament teams better than common opponents, because I do believe that you should be measured on how well you might be expected to perform in the playoffs.

Now, here is the catch: if you are not careful, you will penalize a team for a tough schedule not once but twice, even though the better SOS earns that team only one point of credit.

This situation is a perfect example. Superior had the better SOS (.551 to .480). Of course, by playing that tougher schedule, it ran the risk of a worse record against common opponents, which is exactly what happened (.740 to River Falls’ .818).

Now, get this: the results against tournament teams turn out to be a perfect subset of the results against common opponents. This means those games are counted not once, but twice.

In this situation, Superior was 1-4-1 against teams in the tournament (it beat St. John’s and went 0-4-1 against St. Norbert). River Falls was 1-2, splitting with St. Norbert and losing to St. John’s. However, those records are also included in the common opponents record, and were the reason for the difference there.

The process therefore weighs how these teams do against other teams more than how they do against each other.

And isn’t the best way to measure the better of two teams to see how they did against each other, and not against other teams? Especially when these teams played each other in three crucial conference games?

Granted, the records could have gone opposite ways in these two criteria, and there are other scenarios in which a team with the weaker SOS could still come out behind in both the common-opponent and tournament-team records.

Again, though, that is my point. If you don’t take the time to understand what the numbers are telling you, you will not realize that sometimes, as in this case, you are penalizing a team twice for the same thing. I also have an objective solution to this situation: if the tournament teams are a subset of the common opponents and the two criteria tell the same story, you can’t count both.
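Here is a minimal sketch of how that rule could be applied mechanically rather than by feel. The opponent sets are simplified and partly hypothetical (only St. John’s and St. Norbert come from the records above), so treat this as an illustration of the principle, not the committee’s procedure.

```python
# Sketch of the proposed rule: when the tournament-team games are a subset of the
# common-opponent games and both criteria favor the same team, count only one.
# Opponent sets below are simplified/hypothetical examples.

def is_subset(tournament_opponents, common_opponents):
    """True when every tournament team faced is also a common opponent,
    i.e. those results would otherwise be counted twice."""
    return set(tournament_opponents) <= set(common_opponents)

def criteria_to_apply(common_leader, tournament_leader, subset):
    """Decide which of the two record-based criteria to count."""
    if subset and common_leader == tournament_leader:
        return ["common opponents"]  # duplicated information counts once
    return ["common opponents", "tournament teams"]

subset = is_subset({"St. John's", "St. Norbert"},
                   {"St. John's", "St. Norbert", "St. Thomas"})  # St. Thomas is hypothetical
print(criteria_to_apply("River Falls", "River Falls", subset))
# ['common opponents']
```

In this year’s case both criteria pointed to River Falls for the same underlying games, so under this rule only one of them would have counted.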

If the above two suggestions were carried out this year — placing more emphasis on head-to-head and less on records against other teams — Superior would have been rightfully selected for the last Pool C bid over River Falls.

Rightfully — because when properly studying the numbers, one sees that Superior is indeed the better team. This is ultimately what the committee is supposed to decide, a decision they could have made if they allowed themselves to truly study the numbers and learn what they were saying.