Dr. Jon Acker is an old buddy of mine from another publication we both used to write for. He sent me this to look over, and it's just far too good to go unpublished, so with permission from Dr. Acker, here are his well-researched and thought-out conclusions on recruiting and success in college football. Enjoy! Larry Burton
Looking to the Stars: Can Recruiting Class Rankings Predict Team Success in College Football?
By: Jon Charles Acker, Ph.D., The University of Alabama
Anyone familiar with rankings of high school football prospects knows all about the "stars." Stars signal a star athlete, and the more stars a prospect carries, the bigger he is expected to perform. Since the advent of scholarship limitations, schools can no longer stockpile high-caliber student-athletes. This has led to greater competition and more uncertainty for schools in attracting top recruits. The picture is further complicated by the fact that recruiting is a two-way street: a student selects a potential institution to attend, and that institution must also select that student to recruit.
The recruiting process has become a sport in itself, with great fanfare given to individual recruits based on overall nationwide and position ratings, and to institutions for landing top recruiting classes. Institutions vie for commitments throughout the year, with activity accelerating through the college football season as prospective recruits are brought to campus to witness games and experience the atmosphere. After the season ends in early January there is a tremendous push to land the desired athletes, with head coaches, assistant coaches, and other designated recruiters traveling tirelessly to meet with each athlete and talk with his parents and high school coaches. The recruiting season culminates in early February with the annual "Letter of Intent Day," the day on which potential student-athletes may formally accept a scholarship to attend an institution.
There are various rating services that assess high school athletes individually and collectively for whole recruiting classes, including Scout.com, Rivals.com, ESPN, and 247sports.com. All use a two- to five-star rating scale. Of course, all ratings are a best estimate, and the manner by which each service determines them differs. That said, there is still considerable agreement in rating players and, thus, in ranking recruiting classes.
However, a five-star athlete coming out of high school is by no means guaranteed to have a stellar college career and positively impact a team's performance. A minefield of impediments awaits: obtaining academic eligibility before entering school, maintaining it afterward, fitting into the team's scheme, keeping up strength and conditioning, avoiding injury, and staying out of trouble. Then there is the possibility of the student transferring or leaving early for the pros. But perhaps the most important variable lies in pure coaching and personal growth. "Coaching and development are still the best difference-makers… A lot of development still has to happen between the ages of 17 and 23," says Scott Kennedy, Scout's director of scouting (Carey, 2008, p. 14C).
The history and impact of recruiting limitations
The National Collegiate Athletic Association (NCAA) is the primary regulator for intercollegiate athletics. "Its perhaps most important and controversial activity involves enforced restrictions on player recruiting, eligibility, and compensation" (Eckard, 1998, p. 347).
From the 1950s through the 1970s it was perceived that perennial powerhouses such as Alabama, Notre Dame, and Oklahoma were stockpiling promising players in an effort to deny that talent to competing teams. Thus, in an attempt to achieve more parity in college football, the NCAA in 1977 limited Division I-A schools to 95 scholarships, then reduced the limit to 85 in 1992 while further stipulating that no more than 25 may be given out in any one year.
The evidence, however, is mixed as to whether competitive balance has increased or even decreased over time. Sutter and Winkler (2003) examined parity, or competitive balance, in several ways, comparing the pre-limits period of 1957-76 with the post-limits period of 1982-2001. First, they looked at balance "within games," i.e., are there closer final scores and more lead changes? Second, they looked at balance "within seasons," i.e., is the winning-percentage gap between the top and bottom teams in the standings smaller? Lastly, they looked at balance "across seasons," i.e., are different teams winning conference titles and finishing in the AP Top 10 or AP Top 20 over time? They found that competitive balance is actually significantly lower since the implementation of scholarship limits for "within games" and "within seasons," while the "across seasons" results are mixed (reduced parity for the AP Top 20, increased parity for the AP Top 10, no difference for conference titles) (p. 8).
Has anyone else studied recruiting class rankings and team success?
Little has been done to relate recruit rankings to team success; only two such attempts were found on the topic.
Langelett (2003) undertook a study similar to this endeavor, but on a much more limited basis. He restricted his explanatory variable to "top-10" recruiting classes and his dependent variable to rankings in the Associated Press (AP) and USA Today Coaches polls' "top-25." Langelett used recruiting class ratings devised by Allen Wallace of Sports Illustrated and Tom Lemming of ESPN. He employed a five-year model to account for the possibility of a red-shirt year. What he found was that "recruiting does indeed have a significant effect on team performance over the next 5 years." He further concluded that the largest effect was found in a player's freshman year (the second year of the model, accounting for the red-shirt year), with subsequent years having a discounted effect (p. 243).
The online publication SMQ (Sunday Morning Quarterback) related team rankings from Rivals from 2002-2007 to team success as measured by winning percentage against BCS-conference schools. The top BCS-conference schools were studied, with results compared by the ordinal position of the recruiting class and the team's winning percentage. For example, Alabama's average class rank in the Rivals ratings from 2002-2007 was 16th, i.e., 15 schools had higher six-year averages, while Alabama's rank in winning percentage against BCS-conference schools over that same period was 36th. Thus, according to SMQ, Alabama greatly underperformed its Rivals ranking. Nine of the twelve SEC teams in the analysis also underperformed, leading SMQ to claim that the SEC is either "massively overrated in the recruiting rankings…, or something else is going on." The obvious limitation of such a simplistic analysis is that there is no control for schedule strength; it is to be expected that teams in the strongest conferences, like the SEC, would underperform by this measure.
In the end, the effect of recruiting and subsequent performance in college football may have dramatic tangible effects at many institutions. Success or failure in college football can impact revenue sources, in both ticket sales and apparel or memorabilia sales; it can affect a school’s applicant pool for future students; and it could affect gift giving by alumni. Then there are the intangibles in how it affects the psyche of the whole fan base.
How can one ascertain the impact of recruiting classes on team success?
Let me say upfront that I have been studying this topic for years and have tested many weighted models utilizing the Scout and Rivals rankings to determine the best measure to predict team success. In fairness, Scout and Rivals did equally well in their predictive utility. But, in the end, I chose to focus only on Scout’s rankings and chose the one weighted model that did best over time.
First, recruiting class data were collected for all Football Bowl Subdivision (FBS, formerly Division I-A) teams from 2010 through 2014 from the Scout.com service. My previous work on this topic determined that the best weighted model employs the four most recent classes. Thus, to see how well the model predicted the final results for the 2013 season, the recruiting classes of 2010, 2011, 2012, and 2013 were utilized.
Second, all of my previous research determined that the best weighted model for predicting team success uses these weights: Year 1 (2010) = 15%, Year 2 (2011) = 20%, Year 3 (2012) = 25%, Year 4 (2013) = 40%. You can see that most of the weight falls on years 3 and 4; in other words, a surprising amount of the impact on team success comes from the underclassmen. Applying these weights to the four Scout recruiting class rankings gives a score that, when sorted, yields the weighted model rank (i.e., the independent variable) of estimated team talent among FBS schools, as sketched below.
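To make the arithmetic concrete, here is a minimal Python sketch of the weighted computation. The weights come from the model described above; the team list and class ranks are hypothetical, chosen purely for illustration and not the actual Scout.com figures.

```python
# Minimal sketch of the weighted recruiting model. The weights come from
# the text above; the class ranks below are hypothetical placeholders.

# Weights for the four most recent recruiting classes (oldest to newest).
WEIGHTS = {2010: 0.15, 2011: 0.20, 2012: 0.25, 2013: 0.40}

# Hypothetical Scout class ranks per school, keyed by signing year.
class_ranks = {
    "Alabama":       {2010: 5,  2011: 1, 2012: 1, 2013: 1},
    "Texas":         {2010: 3,  2011: 3, 2012: 2, 2013: 10},
    "Florida State": {2010: 10, 2011: 2, 2012: 6, 2013: 11},
}

def weighted_score(ranks):
    """Weighted average of the four class ranks; lower means more talent."""
    return sum(WEIGHTS[year] * rank for year, rank in ranks.items())

# Sorting by the weighted score yields the model rank (1 = most talent).
ordered = sorted(class_ranks, key=lambda team: weighted_score(class_ranks[team]))
for position, team in enumerate(ordered, start=1):
    print(position, team, round(weighted_score(class_ranks[team]), 2))
```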
Third, an appropriate dependent variable needs to be identified as a measure of team strength. I chose the end-of-season Sagarin Rating rank for FBS teams only. The Sagarin Rating has the longest history among the mathematical algorithms used to assess team strength and is, in this author's opinion, the best at reflecting it. The Rating algorithm is a sliding scale that factors in margin of victory.
Having one independent variable (weighted model rank) and one dependent variable (Sagarin Rating rank), the analyses utilized were simple correlation (r), the resultant coefficient of determination (R²), and linear regression.
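For readers who want to reproduce these statistics, the short sketch below runs the same three analyses using SciPy's linregress. The two rank lists are placeholders standing in for the real 120-team data.

```python
# Sketch of the analyses named above: simple correlation (r), the
# coefficient of determination (R²), and linear regression.
from scipy.stats import linregress

model_rank = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]      # weighted model rank (independent)
sagarin_rank = [3, 9, 4, 6, 1, 14, 7, 2, 20, 11]  # end-of-season Sagarin rank (dependent)

fit = linregress(model_rank, sagarin_rank)
r = fit.rvalue           # simple correlation
r_squared = r ** 2       # coefficient of determination
print(f"r = {r:.3f}, R² = {r_squared:.3f}")
print(f"predicted Sagarin rank = {fit.intercept:.2f} + {fit.slope:.2f} * model rank")
```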
How well did the model do in predicting the 2013 outcome?
The simple correlation between the independent variable (weighted model rank) and dependent variable (Sagarin Rating end-of-season rank) was an impressive .715, which produces a coefficient of determination (R²) of .51. That means over half of the variance in the 2013 season's outcomes was explained by the model built on these four recruiting classes. Having over half of the variance explained in any social science study is phenomenal.
The table below shows the 2013 results for all 120 FBS schools that had the requisite four years of Scout class rankings.
You can see that Alabama was projected as having the most team talent (Model Rank), followed by Texas, Auburn, LSU, and Florida State. The final Sagarin Rating rank (Sagarin Rank) shows where each team finished the season, with Florida State at the top, followed by Oregon, Alabama, Auburn, and Stanford. The last column (Difference) captures the disparity between the two and tells whether a team outperformed or underperformed the model's expectations. California had the largest underperformance at -68, while Utah State had the largest overperformance at +75.
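The text does not spell out the formula behind the Difference column, but one reading consistent with the signs reported above (negative for underperformance, positive for overperformance) is model rank minus Sagarin rank. The sketch below assumes that formula; the example ranks are hypothetical.

```python
def difference(model_rank, sagarin_rank):
    """Assumed formula: model rank minus Sagarin rank. Positive means the
    team over-performed its talent estimate; negative, under-performed."""
    return model_rank - sagarin_rank

# Hypothetical ranks chosen to reproduce the extremes quoted above.
print(difference(30, 98))  # -68: finished far worse than projected
print(difference(95, 20))  # +75: finished far better than projected
```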
When you produce a scatterplot of this data you get the figure shown below.
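A minimal matplotlib sketch of such a scatterplot, reusing the placeholder rank lists from the regression snippet above:

```python
# Scatterplot of estimated talent (model rank) against actual finish
# (Sagarin rank), using the placeholder data from the regression sketch.
import matplotlib.pyplot as plt

model_rank = [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]
sagarin_rank = [3, 9, 4, 6, 1, 14, 7, 2, 20, 11]

plt.scatter(model_rank, sagarin_rank)
plt.xlabel("Weighted model rank (estimated talent)")
plt.ylabel("Sagarin end-of-season rank")
plt.title("Model rank vs. 2013 Sagarin rank")
plt.show()
```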
A lot of people in 2013 were shocked at Auburn's success in making the BCS national championship game against Florida State. Based on the estimated team talent, it should have shocked no one. But remember: according to this model, half of a team's success can be explained by the four recruiting classes, which leaves another half unexplained. Florida fans certainly didn't expect a 4-8 season, and Missouri fans likely didn't expect to win the SEC East. Things, to put it nicely, happen.
What does the model predict for 2014?
You can see there are some small changes at the top. Ohio State jumps 7 spots to replace Alabama as the team with the most expected talent. The Tide drop one spot, Auburn remains third, while LSU and Florida State trade places between 4th and 5th. The biggest drop belongs to UTEP, down 18 spots from 2013, followed by Brigham Young at 17; the biggest gain belongs to Ohio, up 23 spots, followed by Western Kentucky at 21.
Final thoughts
“Recruiting rankings guarantee nothing” was a headline in the USA Today sports section a day after signing day 2008 (Carey, 2008, p. 14C). Truer words were never spoken, but then again nothing can guarantee success in college football. That said, recruiting rankings do offer insight into the probability of success.
The model explained about 50% of the variance in team success, but again, the other 50% is unexplained. Is the remaining 50% due to coaching? Is it player development? Is it luck? Is it something else? Or, is it all of the above? Regardless of other factors at play, it is apparent that bringing in as much raw talent as possible, as judged by rating services, specifically Scout in this study, is a good indicator of future success.
My personal expectation is that the 2014 national champion will be one of the teams in the top 5 of the model ranking. Those teams have the talent to win it all and surely one of them will get the necessary breaks to bring home the title. Time will tell.
References
Carey, J. (2008, February 7). Recruiting rankings guarantee nothing: Inexact science has many variables. USA Today, p. 14C.
Eckard, E. W. (1998). The NCAA cartel and competitive balance in college football. Review of Industrial Organization, 13, 347-369.
Langelett, G. (2003). The relationship between recruiting and team performance in Division IA college football. Journal of Sports Economics, 4, 240-245.
Sutter, D., & Winkler, S. (2003). NCAA scholarship limits and competitive balance in college football. Journal of Sports Economics, 4, 3-18.