URL for this frameset: http://slack.net/~whelan/tbrw/tbrw.cgi?2003/krach.shtml
Game results taken from US College Hockey Online's Division I composite schedule
Starting this season, US College Hockey Online is featuring the current KRACH, calculated from the latest scores, so if you just want the latest KRACH rankings, you should go there. For Joe Schlobotnik's geeky analysis of the system, with ratings recalculated daily, read on.
To spell out the definition of the KRACH explicitly: if Vij is the number of times team i has beaten team j (with ties, as always, counting as half a win and half a loss), Nij=Vij+Vji is the number of times they've played, Vi=∑jVij is the total number of wins for team i, and Ni=∑jNij is the total number of games they've played, then team i's KRACH Ki is defined indirectly by
Vi = ∑j Nij*Ki/(Ki+Kj)
An equivalent definition, less fundamental but more useful for understanding KRACH as a combination of game results and strength of schedule, is
Ki = [Vi/(Ni-Vi)] * [∑jfij*Kj]
where the weighting factor is
fij = [Nij/(Ki+Kj)] / [∑kNik/(Ki+Kk)]
Note that fij is defined so that ∑jfij=1, which means that, for example, if all of a team's opponents have the same KRACH rating, their strength of schedule will equal that rating.
Finally, the definition of the KRACH given so far allows us to multiply everyone's rating by the same number without changing anything. This ambiguity is resolved by defining a rating of 100 to correspond to a Round-Robin Winning Percentage (RRWP) of .500, i.e., to a hypothetical team which would be expected to win exactly half their games if they played all 60 Division I schools the same number of times.
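As a sketch of this normalization convention (hypothetical Python helper, not the code used for the actual rankings; the bisection bounds and iteration count are arbitrary choices), one can solve for the rating K0 whose round-robin winning percentage against the field would be exactly .500, and then rescale everyone so that K0 maps to 100:

```python
def normalize_to_rrwp(K, target=0.5):
    """Rescale KRACH ratings so a rating of 100 corresponds to an
    expected round-robin winning percentage (RRWP) of .500."""
    # RRWP of a hypothetical team rated x, playing each team in K
    # the same number of times: average of x/(x+Kj) over all teams j.
    def rrwp(x):
        return sum(x / (x + k) for k in K) / len(K)

    # rrwp(x) increases monotonically in x, so solve rrwp(K0) = target
    # by bisection on a generously wide bracket.
    lo, hi = min(K) / 1e6, max(K) * 1e6
    for _ in range(200):
        mid = (lo + hi) / 2
        if rrwp(mid) < target:
            lo = mid
        else:
            hi = mid
    K0 = (lo + hi) / 2
    return [100 * k / K0 for k in K]
```

Since multiplying all ratings by a constant leaves every ratio Ki/(Ki+Kj) unchanged, this rescaling affects only the displayed numbers, not any expected winning percentage.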
KRACH has been put forth as a replacement for the Ratings Percentage Index because it does what RPI is intended to do, namely judge a team's results taking into account the strength of their opposition. It does this without some of the shortcomings exhibited by RPI, such as a team's rating going down when they defeat a bad team, or a semi-isolated group of teams accumulating inflated winning percentages and showing up on other teams' schedules as stronger than they really are. The two properties which make KRACH a more robust rating system are recursion (the strength of schedule measure used in calculating a team's KRACH rating comes from the KRACH ratings of that team's opponents) and multiplication (record and strength of schedule are multiplied rather than added).
The strength-of-schedule contribution to RPI is made up of 2 parts opponents' winning percentage and 1 part opponents' opponents' winning percentage. This means that while a team's RPI is only 1 part winning percentage and 3 parts strength of schedule, i.e., strength of schedule is not taken at face value when evaluating a team overall, it is taken more or less at face value when evaluating the strength of a team as an opponent. (You can see how big an impact this has by looking at the "RPIStr" column on our RPI page.) So in the case of the early days of the MAAC, RPI was judging the value of a MAAC team's wins against other MAAC teams on the basis of those teams' records, mostly against other MAAC teams. Information on how the conference as a whole stacked up, based on the few non-conference games, was swamped by the impact of games between MAAC teams. Recently, with the MAAC involved in more interconference games, the average winning percentage of MAAC teams has gone down, and thus the strength of schedule of the top MAAC teams is bringing down their RPI substantially. However, when teams from other conferences play those top MAAC teams, the MAAC opponents look strong to RPI because of their high winning percentages. (In response to this problem, the NCAA has changed the relative weightings of the components of the RPI from 35% winning percentage/50% opponents' winning percentage/15% opponents' opponents' winning percentage back to the original 25%/50%/25% weighting. However, this intensifies RPI's other drawback of allowing the strength of an opponent to overwhelm the actual outcome of the game.)
KRACH, on the other hand, defines the strength of schedule using the KRACH ratings themselves. This recursive property allows games further down the chain of opponents' opponents' opponents, etc., to have some impact on the ratings. Games among the teams in a conference are very good for giving information about the relative strengths of those teams, but KRACH manages to use even a few non-conference games to set the relative strength of that group compared to the rest of the NCAA. And if a team from a weak conference is judged to have a low KRACH despite amassing a good record against bad competition, they are considered a weak opponent for strength-of-schedule purposes, since the KRACH itself is used for that as well.
One might consider bringing the power of recursion to RPI by defining an "RRPI" made up of 25% of a team's winning percentage and 75% of the average RRPI of their opponents. (This sort of modification is how the RHEAL rankings are defined.) However, this would not change the fact that the rating is additive. So, for example, a team with a .500 winning percentage would have an RRPI between .125 and .875, no matter what their strength of schedule was. Similarly, a team playing against an extremely weak or strong schedule only has .250 of leeway based on their actual results.
With KRACH, on the other hand, one is multiplying two numbers, the wins-to-losses ratio Vi/(Ni-Vi) and the strength of schedule, each of which can be anywhere from zero to infinity, and so no matter how low your strength of schedule is, you could in principle have a high KRACH by having a high enough ratio of wins to losses.
The definition given above specifies the KRACH only indirectly; it can be used to check that a given set of ratings is correct, but to actually calculate them, one needs to do something like rewrite the definition in the form
Ki = Vi / [∑jNij/(Ki+Kj)]
This still defines the KRACH ratings recursively, i.e., in terms of themselves, but this equation can be solved by a method known as iteration, where you put in any guess for the KRACH ratings on the right hand side, see what comes out on the left hand side, then put those numbers back in on the right hand side and try again. When you've gotten close to the correct set of ratings, the numbers coming out on the left-hand side will be indistinguishable from the numbers going in on the right-hand side.
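The iteration just described can be sketched in a few lines of Python (a hypothetical helper, not the code behind the actual rankings; it assumes every team can be connected to every other by a chain of wins or ties, a caveat discussed later in the text). Teams are indexed 0..n-1, and V[i][j] holds team i's wins over team j, with each tie entered as 0.5 for both sides:

```python
def krach(V, tol=1e-12, max_iter=10000):
    """Solve Ki = Vi / sum_j Nij/(Ki+Kj) by iteration."""
    n = len(V)
    # Nij = Vij + Vji, games played between i and j
    N = [[V[i][j] + V[j][i] for j in range(n)] for i in range(n)]
    wins = [sum(V[i]) for i in range(n)]    # Vi, total wins for team i
    K = [1.0] * n                           # any positive starting guess
    for _ in range(max_iter):
        # Feed the current guess into the right-hand side...
        Knew = [wins[i] / sum(N[i][j] / (K[i] + K[j])
                              for j in range(n) if N[i][j] > 0)
                for i in range(n)]
        # ...and stop when what comes out matches what went in.
        if max(abs(a - b) for a, b in zip(K, Knew)) < tol:
            return Knew
        K = Knew
    return K
```

Only the ratios of the resulting ratings are meaningful; the overall scale can then be fixed by the RRWP convention described earlier.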
The other (equivalent) definition is already written as a recursive expression for the KRACH ratings, and it can be iterated in the same way to get the same results.
It should be pointed out that if someone hands you a set of KRACH ratings and you only want to check that they are correct, it's much easier. You just calculate the expected number of wins for each team according to
Vi = ∑j Nij*Ki/(Ki+Kj)
And check that you come up with the actual number of wins. (Once again, a tie counts as half a win and half a loss.)
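This check is a single pass over the schedule, sketched here as a hypothetical helper in Python (same conventions as before: V[i][j] is team i's wins over team j, ties counting 0.5 for each side):

```python
def expected_wins(V, K):
    """Expected wins for each team: Vi = sum_j Nij * Ki/(Ki+Kj)."""
    n = len(V)
    return [sum((V[i][j] + V[j][i]) * K[i] / (K[i] + K[j])
                for j in range(n) if j != i)
            for i in range(n)]
```

If the ratings are correct, expected_wins(V, K) reproduces each team's actual win total, ties included.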
As described in Ken Butler's explanation of the KRACH, the methods described so far break down if a team has won all of their games. This is because their actual winning percentage is 1.000, and it's only possible for that to be their expected winning percentage if their rating is infinitely large compared to those of their opponents. Now, if it's only one team, we could just set their KRACH to infinity (or zero in the case of a team which has lost all of their games), but there are more complicated scenarios in which, for example, two teams have only lost to each other, and so their KRACH ratings need to be infinite compared to everybody else's and finite compared to each other. The good news is that this sort of situation almost never exists at the end of the season; the only case in recent memory was Fairfield's first Division I season, when they went 0-23 against tournament-eligible competition.
An older version of KRACH got around this by adding a "fictitious team" against which each team was assumed to have played and tied one game, which was enough to make everyone's KRACH finite. However, this had the disadvantage that it could still affect the ratings even when it was no longer needed to avoid infinities.
The current version of KRACH does not include this "fictitious team", but rather checks to see if any ratios of ratings will end up needing to be infinite to produce the correct expected winning percentages. The key turns out to be related to the old game of trying to prove that the last-place team is better than the first-place team because they beat someone who beat someone who beat someone who beat the champions. If you can take any two teams and make a chain of wins or ties from one to the other, then all of the KRACH ratings will be finite.
If that's not the case, you need to work out the relationships teams have to each other. If you can make a chain of wins and ties from team A to team B but not the other way around, team A's rating will need to be infinite compared to team B's, and for shorthand we say A>B (and B<A). If you can make a chain of wins and ties from team A to team B and also from team B to team A, the ratio of their ratings will be a unique finite number and we say A~B. If you can't make a chain of wins and ties connecting team A and team B in either direction, the ratio of their ratings could be anything you like and you'd still get a set of ratings which satisfied the definition of the KRACH, so we say A%B (since the ratio of their ratings can be thought of as the undetermined zero divided by zero).

Because of the nature of these relationships, we can split all the teams into groups so that every team in a group has the ~ relationship with every other team in the group, but not with any team outside of its group. Furthermore, if we look at two different groups, each team in the first group will have the same relationship (>, <, or %) with each team in the second group. We can then define finite KRACH ratings based only on games played between members of the same group, and use those as usual to define the expected head-to-head winning percentages for teams within the same group.

For teams in different groups, we don't use the KRACH ratings, but rather the relationships between teams. If A>B, then A has an expected winning percentage of 1.000 in games against B and B has an expected winning percentage of .000 in games against A. In the case where A%B there's no basis for comparison, so we arbitrarily assign an expected head-to-head winning percentage of .500 to each team.
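The chain-of-wins test amounts to reachability in a directed graph with an edge from i to j whenever team i has at least a tie against team j (Vij > 0). Here is a sketch in Python (hypothetical code, using a simple transitive-closure computation, which is fine for a few dozen teams):

```python
def relationships(V):
    """Return rel(a, b) giving '~', '>', '<', or '%' for teams a, b."""
    n = len(V)
    # reach[i][j]: is there a chain of wins/ties from i to j?
    reach = [[i == j or V[i][j] > 0 for j in range(n)] for i in range(n)]
    for k in range(n):            # Floyd-Warshall-style transitive closure
        for i in range(n):
            if reach[i][k]:
                for j in range(n):
                    if reach[k][j]:
                        reach[i][j] = True

    def rel(a, b):
        if reach[a][b] and reach[b][a]:
            return '~'            # ratio of ratings is finite and determined
        if reach[a][b]:
            return '>'            # a's rating infinite compared to b's
        if reach[b][a]:
            return '<'
        return '%'                # no chain either way: no basis for comparison

    return rel
```

The groups described above are then just the equivalence classes of the ~ relationship, i.e., the strongly connected components of this graph; everyone is in one group exactly when rel returns '~' for every pair.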
In the case where everyone is in the same group (again, usually true by the middle of the season) we can define a single KRACH rating with no hassle. If they're not, we need the ratings plus the group structure to describe things fully. However, the Round-Robin Winning Percentage (RRWP) can still be defined in this case and used to rank the teams, which is another reason why it's a convenient figure to work with.