Show Notes

Since its inception in 1998, the best argument for the BCS has been that college football fans love debates.  You know it’s true.  Everyone loves a good fight.  SEC people fighting with Big 12 people fighting with Pac-12 people fighting with computers fighting with humans fighting with Condoleezza Rice and Mark May.  Nobody can agree on much of anything, which, of course, makes everything somewhat interesting.

Still, the BCS is a better system than the one before it.  And in an effort to settle some pre-BCS debates, Patrick Fleming created his own computer ranking system, The Fleming System (TFS), in 1994.  Now, with our help, he’s rolling it out to a larger audience in hopes of answering some questions about the 2013 season and, hopefully, figuring out ways to make it even better.  (We’re serious about this.  Leave comments.)

But first, a little Q&A with Patrick… (or jump right to the school rankings or conference rankings)

What’s your background?

I make a living as a college chemistry professor.  It is through the study of molecular spectroscopy that I was introduced to the mathematics needed to rate college football teams (although my first attempts were applied to rating college hockey teams.)  I did my undergraduate studies at the University of Notre Dame, and graduate studies at the Ohio State University.  I also spent time at both Tulane and Arizona State as a postdoctoral researcher.  I currently teach at the Claremont Colleges in Southern California, near where I was born and raised.  

Why make your own rankings?

My intent was to “prove” that my alma mater, Notre Dame, was robbed following the 1993 season, having beaten Florida State head-to-head, but ending up ranked lower than the Seminoles. To my dismay, my computer proved me wrong and ranked the Irish behind both Florida State and Nebraska.

How do you make your own rankings?  What’s the mathematical mumbo jumbo behind this?

My system uses a least-squares fitting mechanism to find the team ratings.

It works like this: I define a function, called a game-outcome-measure (or GOM.) This is a mathematical function that indicates how much better the winning team is than a losing team in a given game. Ratings for each team are chosen such that the differences in the ratings (GOM_calculated) mimic the GOMs for the games actually played (GOM_observed).  Specifically, the least-squares method minimizes χ², summed over all games played and defined as:

χ² = Σ (GOM_observed − GOM_calculated)²

by choosing the optimal set of team ratings.
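The least-squares fit described above can be sketched in a few lines of Python. The three-team schedule and its GOM values below are invented for illustration (the real system covers hundreds of teams), and here the GOM is simply the margin of victory:

```python
import numpy as np

# Hypothetical mini-schedule: (winner, loser, observed GOM).
games = [("A", "B", 14), ("B", "C", 3), ("A", "C", 20)]

teams = sorted({t for g in games for t in g[:2]})
idx = {t: i for i, t in enumerate(teams)}

# Design matrix: one row per game, +1 for the winner and -1 for
# the loser, so X @ ratings gives the calculated GOM for each game.
X = np.zeros((len(games), len(teams)))
y = np.zeros(len(games))
for row, (w, l, gom) in enumerate(games):
    X[row, idx[w]] = 1.0
    X[row, idx[l]] = -1.0
    y[row] = gom

# Ratings are only determined up to an additive constant, so lstsq
# returns the minimum-norm solution (ratings centered near zero).
ratings, *_ = np.linalg.lstsq(X, y, rcond=None)

for t in sorted(teams, key=lambda t: -ratings[idx[t]]):
    print(f"{t}: {ratings[idx[t]]:+.2f}")
```

With this toy data the fit orders the teams A, B, C, trading off the three observed margins against each other exactly as the χ² minimization prescribes.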

The method can be applied to any definition for the GOM. For example, in one set of ratings, I use a constant GOM of 10 points, meaning that the winning team is treated as 10 points better than the losing team, irrespective of the final margin of victory (MOV.) In another, I use the MOV, a consideration for defense, and the location of the game played to calculate the GOM. The choice of how to calculate the GOM is actually pretty arbitrary. However, once the definition is determined, it is applied uniformly across all of the games played.
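As a sketch of what "any definition for the GOM" means, here are two hypothetical GOM functions side by side. The constant value and the 21-point cap are illustrative assumptions, not the actual formulas used by TFS:

```python
# Two hypothetical GOM definitions, each applied uniformly to every game.

def gom_fixed(winner_pts, loser_pts):
    # Constant GOM: the winner is credited as 10 points better,
    # no matter the final margin of victory.
    return 10.0

def gom_mov_capped(winner_pts, loser_pts):
    # MOV-based GOM: margin of victory, capped (here at 21 points)
    # so that running up the score yields diminishing credit.
    return min(winner_pts - loser_pts, 21)

print(gom_fixed(45, 10))       # 10.0 regardless of margin
print(gom_mov_capped(45, 10))  # 21 (capped)
print(gom_mov_capped(24, 21))  # 3
```

Either function can be dropped into the least-squares fit unchanged, which is the point: the fitting machinery is independent of how the GOM itself is defined.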

What other rankings systems are similar, and how is this one different?

Jeff Sagarin, who has been publishing his ratings system since before the birth of the World Wide Web, also uses a least-squares fitting method. The differences between our systems (mostly based on speculation, because I have never met Jeff Sagarin or discussed computer ratings with him) are:

  1. Sagarin uses a different GOM than I do.
  2. Sagarin treats home-field-advantage differently than I do.
  3. Sagarin uses a smaller data set, including only NCAA Division I programs (both FBS and FCS.) My ratings include teams across all three NCAA divisions, plus NAIA and independent programs that belong to neither the NCAA nor the NAIA.

The results of my rating are generally very similar to those of Jeff Sagarin. Comparisons of computer ratings systems are available each week on the web at http://masseyratings.com/cf/compare.htm.

Until mid-season, everyone’s ratings are influenced by an “initial bias.” My initial bias is to assume that all teams are equal to one another, until a game is actually played. Then it is the game outcomes that determine the relative ratings for the teams. But as a result, there is no way to rate two teams that are not somehow connected through common opponents (or opponents’ opponents.) Eventually, with the exception of the NCAA DII Northern Sun Intercollegiate Conference and the DIII New England Small College Athletic Conference, all of the teams will be connected through the schedule. Other systems may use an initial bias (usually based on last year’s final ratings) that does not assume teams are equal before the season. Both methods have merits, but by about mid-season, there is no need to worry about the initial bias, since all of the teams will be connected. However, there can be wild movements early in the season unless, as in Sagarin’s system, a mechanism is in place to prevent them.
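The connectivity idea above can be checked with a simple union-find pass over the schedule: two teams are comparable only if they end up in the same group. The mini-schedule here is invented for illustration, with two deliberately disconnected "islands" of teams:

```python
# Group teams into connected components of the schedule graph
# using union-find with path compression.

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path compression
        x = parent[x]
    return x

def union(parent, a, b):
    ra, rb = find(parent, a), find(parent, b)
    if ra != rb:
        parent[ra] = rb

# Hypothetical schedule: {A, B, C} never play {D, E}.
games = [("A", "B"), ("B", "C"), ("D", "E")]
teams = {t for g in games for t in g}
parent = {t: t for t in teams}
for a, b in games:
    union(parent, a, b)

groups = {}
for t in teams:
    groups.setdefault(find(parent, t), set()).add(t)
print(len(groups))  # 2 disconnected groups: {A, B, C} and {D, E}
```

Until a cross-group game is played, no rating comparison between the two islands is meaningful, which is exactly the situation described for the two isolated conferences.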

What are the flaws in this system?

It is important to understand that no ranking system (including the human polls) is perfect. As any football fan understands, the game of football is all about matchups. And oftentimes, a “weak” team that matches up favorably with a “strong” team may enjoy a competitive advantage. But ratings systems have to average over these sorts of considerations, and instead describe a very complex system using a single numerical index. Clearly, football teams are far more complex than a single number can capture. An improvement might be to rate the offense and defense of a team separately, but even that would be too simplistic a ratings mechanism.

The problem is that for a computer rating covering 752 teams and a fifteen (or so) week season, it is impractical to consider much more than the scores and locations of the games played. But again, all football fans know that a final score can be misleading as to how competitive a game actually was (or was not). So the bottom line is that a computer system is never perfect, but it does apply its criteria uniformly. A human poll does not apply criteria uniformly, but it does allow for adjustment based on the various aspects that make one team better than another. So pick your poison!


Below, The Fleming System ranks individual schools and conferences. Please leave your thoughts in the comments section.

The Fleming System – Rankings by School

Rankings as of October 7, 2013; sort by clicking column header

[table id=2 /]

The Fleming System – Rankings by Conference

Rankings as of October 7, 2013; sort by clicking column header

[table id=3 /]
 

View Patrick Fleming’s full catalog of rankings, including some for other collegiate sports, by clicking here.  Leave your comments below.