Okay, of course life is never completely easy. First of all, a general caveat: I'm not on the selection committee, and the current system of seeding the tournament is an inexact science, so even though the NCAA is working to educate the public on the process, we can never be sure what the committee will do until it's announced. But it's my hope that by thinking about these things ahead of time, we can spend the afternoon of March 19, 2000 waiting to find out whether the committee does X, Y, or maybe Z, and not be stunned to learn they've chosen Q. (Even with some preparation, I've been taken by surprise each year, but I've learned not to make too many assumptions about the one subjective area that remains: placing teams in regions.)
Air Force's schedule includes only 19 regular-season games against Division I teams, which means that unless they meet Niagara or Army in the College Hockey America playoffs (all of the other teams in the CHA are Division II), they will not play 20 D1 games. In addition, the University of Vermont has cancelled the remainder of its season, after accumulating a 5-9-3 D1 record. The NCAA has ruled that games against both teams will still contribute to the selection process, but no team with fewer than 20 D1 games will be considered a "Team Under Consideration" or be eligible for the tournament. This has no effect on Vermont, who have finished under .500 anyway, but could make a difference if Air Force finishes at or above .500 with only 19 D1 games.
The five conferences that were around last year were easily placed in regions, with the WCHA and CCHA in the West and the MAAC, ECAC and Hockey East in the East. But the newest Division I league, College Hockey America, has three tournament-eligible teams straddling the two regions. Air Force is clearly in the West and Army in the East. Niagara is located between the westernmost ECAC team (Cornell) and the easternmost CCHA team (Ohio State), but it seems reasonable to place them in the East as well, given that the MAAC, which contains Canisius, is an Eastern conference.
When calculating opponents' winning percentage for a given team, games against that team are not included. However, the opponents' opponents' percentage is simply calculated by averaging the "opponents' percentage" (as specified above), which subtracts games against the intermediate team but not those against the initial team. That is to say, if Vij is the number of times team i has beaten team j, Nij=Vij+Vji is the number of times they've played, Vi=∑jVij is the total number of wins for team i and Ni=∑jNij is the total number of games they've played, then team i's RPI is given by
0.35 * Vi/Ni
+ 0.50 * ∑j (Nij/Ni)*(Vj-Vji)/(Nj-Nji)
+ 0.15 * ∑j (Nij/Ni)*∑k (Njk/Nj)*(Vk-Vkj)/(Nk-Nkj)
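The formula above can be sketched in code. This is a minimal illustration, not the NCAA's implementation: it assumes a hypothetical list of (winner, loser) game results with no ties, and team names are made up.

```python
from collections import defaultdict

def rpi(results, team):
    """RPI of `team` from a list of (winner, loser) games (ties ignored
    for simplicity), using the 0.35/0.50/0.15 weighting in the text."""
    V = defaultdict(lambda: defaultdict(int))   # V[i][j]: wins of i over j
    teams = set()
    for w, l in results:
        V[w][l] += 1
        teams.update((w, l))
    npair = lambda i, j: V[i][j] + V[j][i]                       # N_ij
    Vtot = {i: sum(V[i].values()) for i in teams}                # V_i
    Ntot = {i: sum(npair(i, j) for j in teams) for i in teams}   # N_i

    def opp_pct(j, i):
        # (V_j - V_ji) / (N_j - N_ji): j's record, excluding games vs i
        return (Vtot[j] - V[j][i]) / (Ntot[j] - npair(j, i))

    wp = Vtot[team] / Ntot[team]
    # opponents' percentage, weighted by games played against each opponent
    owp = sum(npair(team, j) / Ntot[team] * opp_pct(j, team)
              for j in teams if j != team and npair(team, j))
    # opponents' opponents' percentage: average of each opponent j's
    # opponents' percentages (which exclude games vs j, but not vs `team`)
    oowp = sum(npair(team, j) / Ntot[team] *
               sum(npair(j, k) / Ntot[j] * opp_pct(k, j)
                   for k in teams if k != j and npair(j, k))
               for j in teams if j != team and npair(team, j))
    return 0.35 * wp + 0.50 * owp + 0.15 * oowp
```

As a sanity check, a perfectly symmetric round-robin cycle (A beats B, B beats C, C beats A) gives every team an RPI of exactly 0.5.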
Air Force will be considered a TUC only if they finish with a non-losing record and play 20 or more Division I games, which means facing Niagara or Army in the College Hockey America playoffs.
If a team with a losing record earns an automatic berth by winning its conference tournament, they are considered a TUC for all calculations.
When comparing two teams, their head-to-head games are subtracted from each team's record against Teams Under Consideration. I.e., in the comparison between team A and team B, this criterion actually compares team A's record against all TUCs except team B to team B's record against all TUCs except team A.
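The subtraction can be made concrete with a small sketch; the (wins, losses) records against each TUC here are hypothetical, chosen only to show how the head-to-head games drop out of each side of the comparison.

```python
# Records are hypothetical: records[team][opponent] = (wins, losses)
# of `team` against that TUC.
records = {
    "A": {"B": (2, 0), "C": (1, 1), "D": (0, 2)},
    "B": {"A": (0, 2), "C": (2, 0), "D": (1, 1)},
}

def tuc_record(records, team, opponent):
    """Record of `team` against all TUCs except `opponent`."""
    w = sum(win for opp, (win, loss) in records[team].items()
            if opp != opponent)
    l = sum(loss for opp, (win, loss) in records[team].items()
            if opp != opponent)
    return w, l
```

In the A-vs-B comparison, A's overall 3-3 TUC record becomes 1-3 once the two A-B games are removed, and B's 3-3 becomes 3-1, so B wins this criterion even though their raw TUC records are identical.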
Since head-to-head games are not included in records vs common opponents (after all, no team plays itself), one should be careful using conference record as a starting point for record against common opponents when the two teams are in the same conference.
The observant reader will have noticed that I say PWR stands for "pairwise rating", while USCHO uses the term "pairwise ranking". The way I see it, since the PWR is the number of comparisons that a team wins, it's not actually a ranking. If there are 24 Teams Under Consideration, a team which wins comparisons with the other 23 teams has a PWR of 23, while its ranking according to the PWR would be 1.
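The rating-versus-ranking distinction is easy to state in code. The team names and comparison counts below are hypothetical, purely to show that a team's PWR (comparisons won) runs in the opposite direction from its rank.

```python
# Hypothetical PWR values (comparisons won) for three of 24 TUCs.
pwr = {"Wisconsin": 23, "North Dakota": 21, "Boston University": 21}

# A ranking derived from the PWR: 1 plus the number of teams with a
# strictly higher rating, so the team that wins all 23 comparisons
# (PWR 23) is ranked 1, and ties share the better rank.
ranking = {t: 1 + sum(v > pwr[t] for v in pwr.values()) for t in pwr}
```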
Contrary to some opinions expressed on HOCKEY-L, I consider KRACH a superior rating system to RPI not because its harsh judgement of the MAAC teams last year agrees with some preconceived ideas of the conference's weakness, but because it is demonstrably more precise and efficient at doing what RPI was designed to do in the first place. RPI is supposed to judge the strength of a team's performance by combining its winning percentage with its strength of schedule. The problem, in large part, is that the notions of "strength" are not the same. Strength, as defined by the Ratings Percentage Index, is seven parts winning percentage, ten parts opponents' winning percentage and three parts opponents' opponents' winning percentage. On the other hand, the strength-of-schedule which is part of the RPI is effectively ten parts winning percentage and only three parts opponents' winning percentage. This means that while RPI tries to correct for a team playing an unusually weak or strong schedule, it assumes that the winning percentages of that team's opponents are, on average, accurate indicators of their strengths, and does not do much to correct for them. In a case where a group of teams is predominantly playing one another, that assumption will not be valid if all the teams in the group are weaker than average. On the other hand, the Bradley-Terry method, upon which KRACH is based, requires ratings to satisfy a self-consistent set of equations, in which the rating of each team is related to their winning percentage and the ratings of their opponents. In the face of this kind of "feedback", the relative strength of two almost-but-not-quite completely "insular" groups of teams will be established by whatever basis for comparison exists, such as the performance against the Independents last season.
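The self-consistency can be sketched with a minimal Bradley-Terry fixed-point iteration. This is an illustration of the general method, not the actual KRACH implementation: the game results are hypothetical, ties are ignored, and I normalize the ratings each pass since Bradley-Terry ratings are only defined up to an overall scale.

```python
from collections import defaultdict

def bradley_terry(results, iterations=500):
    """Ratings K_i satisfying K_i = V_i / sum_j [ N_ij / (K_i + K_j) ],
    i.e. each team's expected wins against its actual schedule, given
    everyone's ratings, equal its actual wins."""
    wins = defaultdict(int)                        # V_i
    games = defaultdict(lambda: defaultdict(int))  # N_ij
    teams = set()
    for w, l in results:
        wins[w] += 1
        games[w][l] += 1
        games[l][w] += 1
        teams.update((w, l))
    K = {t: 1.0 for t in teams}
    for _ in range(iterations):
        K = {i: wins[i] / sum(n / (K[i] + K[j])
                              for j, n in games[i].items())
             for i in teams}
        mean = sum(K.values()) / len(K)
        K = {t: k / mean for t, k in K.items()}    # fix the overall scale
    return K
```

For example, if A beats B twice and loses once, the fixed point gives A exactly twice B's rating, since a 2:1 ratio makes A's expected wins in three games equal to 2; only the ratio is meaningful, not the absolute numbers.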
There is only one regular season champion per conference, so if two or more teams are tied for the championship, whichever team is seeded first in the conference playoffs, based on the league tiebreaker system, is considered to be the regular season champion for these purposes.
This is an incredibly silly way to run things, since it almost ensures that the lowest-ranked team(s) in an over-represented region will be shipped back into its own region, while other teams will end up out of their region because they beat out the team(s) in question. It rewards a team for being seventh or eighth, rather than fifth or sixth, in the region. It would be a lot more sensible just to say that the bottom two teams from an over-represented region get shipped out, and an under-represented region sends its bottom team if it's got five teams, or no one if it's got four.
Since the committee has essentially free rein in seeding the eight non-bye teams in the two regionals, designating the seventh and perhaps eighth teams in a region as belonging to the other region needn't prevent the committee from leaving them in their adopted region, even if that means that only one pair of teams is technically swapped. I.e., there's probably nothing wrong with, for example,
E1, E2, E3, E4, W5, W7
W1, W2, W3, W4, W6, E5
if attendance or conference matchup considerations make it preferable to the default arrangement of
E1, E2, E3, E4, W5, W6
W1, W2, W3, W4, W7, E5
even though the former arrangement technically keeps five of the six "Eastern" teams (including W7) in the East regional.
In fact, the committee need not even keep four teams in their own region in the case of a regional imbalance like this; last year, with seven Western and five Eastern teams, the NCAA sent three Western teams into the East Regional in exchange for two "true" Eastern teams and the transplanted seventh-rated Western team, leaving an "East" Regional that took only half its teams from the East.
Also note that if the committee knows one of the regionals is going to sell out regardless, they won't need to worry about attendance considerations for that regional.