I've been engaged in a SoS debate with another fan in the comments of my BSU-UGa video on YouTube recently. I'd like to point out that it's been a good conversation and I have a good opinion of the guy I'm speaking with.
My claim is that Strength of Schedule (SoS) is too variable to be used as a major standard for deciding who plays in a BCS Bowl or NCG and who does not. His claim is that a strong SoS erases doubts about a given team's candidacy while a weak SoS only leaves question marks. Both claims have merit to a certain extent, imo.
The major applicable question then is, what are we measuring when we say SoS, and how much of it is required for a reasonable evaluation? How many quality wins does a team need to validate its candidacy, and what constitutes a quality win? To this end, I think we are faced with a near-unsolvable riddle.
How many quality wins are needed for evaluation purposes?
Mike Tyson was the youngest heavyweight champion in the history of the sport. He raced up the rankings and didn't face a bevy of #1 contenders prior to his first title fight. It didn't matter either; everyone knew the guy had the goods. Similarly, Floyd Mayweather is widely considered the best pound-for-pound boxer in the world. Yet, when you look at his resume, most of his "quality wins" came against great fighters who had long since passed their prime. Regardless, it's easy to watch Floyd's skill set and make a reasonable determination that he is, at the very least, an elite boxer.
In each of these examples, neither fighter needed to lean heavily on his "resume" to be recognized as great. The overwhelming contributing factors were inherent talent, determination, a strong work ethic, good instruction, and the practiced development of that talent.
USC dominated a weak Pac-10 in its heyday and went on to either dominate or perform strongly in all of its NCG appearances. Florida State set the gold standard for program excellence with a 14-year run of never finishing outside the AP Top 5. Most of this was accomplished while playing in an ACC that was, amazingly, weaker than it is now.
What constitutes a Quality Win?
SoS arguments seem to revolve around the "names" on a team's schedule rather than the actual quality of those teams. But are these traditional powers truly a good measure of an opponent's relative strength?
Last year it took a ridiculous four losses before Florida finally fell out of the Top 25. If you were lucky enough to dismantle them early in the year, you received varying Quality Win points in the Harris Poll. If you were unlucky enough to beat that same Florida team late in the year, you received zilch.
Both the Big East and the ACC seem to get free passes from the various fanbases when it comes to their candidacy for BCS bowls and the NCG, and yet their SoSs are very poor from top to bottom.
SoS varies across the nation and even within the same conference, such as the SEC.
If we all played the same schedules annually but in a different order, our SoSs would differ. So where do we draw the line? While I would personally like to see more proof in the pudding across the board, unless we institute a playoff there's no way to equalize things.
Who has a strong SoS and who doesn't?
I recall being shocked at tOSU's miserably weak SoS last year. I was shocked because I assumed, like many others, that a Big 10 slate would produce a solid SoS. How much are we assuming about these other "name" institutions' schedules?
My followup question would be...
How much does it matter?
The aforementioned tOSU went on to win their BCS bowl game against Arkansas, a solid SEC team with a much stronger SoS. The same story applies to TCU's win over Wisconsin, Florida St.'s win over South Carolina, and UCF's win over Georgia.
FSU is a rising power right now; their talent and ability are plainly evident. If FSU were to push for a NCG berth in the future, would anyone complain? I don't think so, and rightly so. Yet their SoS is shaky at best; the ACC doesn't even meet all the criteria to be an AQ conference. None of that, however, determines FSU's strength as a team.
Oregon had a very weak SoS by year's end and performed admirably against Auburn, who had the nation's toughest SoS, in the NCG. This rolls into another point entirely.
SoS arguments are not applied evenly.
While teams like Nevada, Boise St., and TCU (all of which won their bowl games against AQ or soon-to-be-AQ teams) are consistently hammered over their SoS and actively resisted for potential BCS berths and especially NCG appearances, teams like Florida St., Miami, Virginia Tech, West Virginia, Oregon, and tOSU don't seem to receive any complaints whatsoever. Where's the standard? It doesn't exist.
How exactly is SoS being applied as a measurement?
Is it used to get an accurate read on a team's relative power, or is it deemed a "fairness" issue, in that teams with stronger SoSs are more likely to endure more injuries in both quantity and severity? Is it both?
With regards to the "accurate read" aspect, how many quality wins are needed to make a reasonable assessment?
The poll votes come from coaches, former players, and certain expert-analyst types who have been doing this professionally for some time now. Kirk Herbstreit knows football better than most of us; he doesn't need to see a 17-10 slugfest from a team week in and week out to make a solid assessment. Similarly, I doubt very much that Mark Richt is using Andy Staples' latest article on Boise St. as his primary preparation. Richt is crunching film right now. Well, if BSU's SoS is not good enough to make a solid evaluation, then why is Richt wasting his time?
The short answer is he isn't; there is more than enough evidence on the field for Richt to fully understand what he is facing and make plans for how to attack and defend it. Just like there is enough evidence for voters to make a reasonable assessment on BSU's candidacy for a BCS or NCG berth.
In regards to the "fairness" aspect, my response is "tough, deal with it".
A) It's a tough game and injuries occur everywhere. Players get injured running wind sprints and in pickup B-ball games. Everyone has to deal with it. B) There is no way to quantify that across the board from conference to conference. C) The stated intent of the BCS is to place, within reason, the top two teams in the nation, not the teams that ran the stiffest gauntlet or had to deal with the most injuries. It seeks to place the top two teams in terms of ability, which, in the current system, is defined by their ranking.
A playoff is the only way I can see of resolving the "fairness" aspect, and that is a hypothetical scenario that does not exist right now. Under the system that DOES exist, operated by voting, I believe we should apply the current standards evenly across the board. If a standard cannot be applied across the board, then it cannot be used as a major measurement tool.
The more I speak to pro-SoS people, the more I think it's about "fairness" rather than "evaluation". But what is fair? There are varying levels of SoS across the nation. Someone will always have a tougher road; that is a product of chance and conference affiliation. Even the NFL, which has a playoff system, has varying SoSs.
A tough schedule from beginning to end is nice to have but not required beyond what is needed to make a reasonable evaluation. I would define that threshold by what a coach needs to effectively evaluate an opponent. Past a certain point, SoS is an accolade, not a mandate.
In summary, I strongly question the entire SoS parameter as a major standard of measurement for the following reasons:
- SoS favors "name" over "performance"
- SoS cannot be applied evenly across the board
- SoS is an argument used against some institutions and not others
- SoS, beyond a couple of quality wins, is not required for strong evaluations by people who know what they are looking at, and those same people make up the vast majority of the voting populace
- SoS is more of an accolade than a unit of measurement for a team's overall ability
- SoS ranks so far down on the list of things that make a team great that it hardly merits noting