As an alum of Wake Forest University, I have been following with interest their much-trumpeted announcement that they will no longer require undergraduate applicants to submit SAT scores. Although the decision seems to have been easily reached in Winston-Salem (“this decision has the full support of the Board of Trustees, the Cabinet, Deans, and staff in the Office of Admissions”), the lengthy supporting justifications imply they are getting a more mixed reaction from former students. An extensive FAQ section on the website addresses such questions (and I assume someone must be asking them) as: Will we enroll less qualified students just to increase religious, ethnic, or socioeconomic diversity? Will this policy devalue my Wake Forest degree? Isn’t this a very risky move to be the first national university to drop the requirement? What was the real motivation for this change? Was it to boost application numbers? Grab media headlines? And so on.
The ambivalence implied in such questions (not always allayed by the happy-talk responses, which seem to admit no downside whatever to the decision) is undoubtedly related to the fact that Wake Forest is the most prominent private university to make such a move (the only others anywhere close to the same level of prominence are much smaller colleges whose missions seem more specifically designed to attract eclectic student bodies, like Bates College in Maine, Bennington and Middlebury in Vermont, a range of art schools for which the SAT would seem obviously less relevant to a judgment of likely success, and a handful of others like King’s College [PA]). I hesitate to say this, in part because I am undoubtedly overlooking other exceptional schools that do not use the SAT, and in part because you’ll be pleased to know that the SAT is optional at God’s Bible School & College (God apparently chose Cincinnati as the most worthy site for his college).
A much longer list of large public universities operates on an SAT-optional basis, often because they base admission decisions on a standard formula in which a good SAT score can substitute when a poorer class rank or high school GPA leaves the student short of an automatic admission threshold. The University of California system made major headlines when it announced its own research on the predictive value of the SAT for early college grades (it found a more modest effect than that claimed by the College Board, and the system’s 600-pound-gorilla status in higher education prompted the College Board to revamp the test entirely).
I am not an expert in this debate, and I carry no brief for the undoubtedly self-interested research funded by the College Board. But as I read more fully I am struck by the very complex nature of the questions raised by the Wake Forest decision and by the manner in which the announcement seems designed to obscure some of the complexities of the research (this despite the fact that the university seems to have been significantly influenced by the good work of one of its own faculty, Joseph Soares), and I am left wondering why other national universities have not yet scrambled to follow suit. The insinuation made by the WFU director of admissions and the provost that others retain SAT requirements in an attempt to game the ratings (like those of US News & World Report) strikes me as too glib, and an unfair indictment of the vast majority of admissions operations, which I suspect are at least as likely scrambling like crazy to preserve and enhance diversity and yield rates while struggling to weigh SAT scores in the wider context of all the available information.
Why is the issue complicated? Isn’t it well established that the SAT discriminates by race and gender and class? Well, actually, the research there is not so definitive, and even the Wake press release refers to the “possible testing bias.” A good overview of the research literature, summarized by Dr. Rebecca Zwick for the National Association for College Admission Counseling (Zwick is an education professor at UC-Santa Barbara), finds a high level of conflicting evidence.
A central point of contention, of course, rests on the hard-to-assess distinction between the test itself (do its questions discriminate, for example, by relying on examples and vernaculars derived from the majority culture?) and the broader cultural disparities the scores may be reflecting. The College Board typically argues that it has made serious efforts to make its examinations race/class/gender neutral, and that the reported test score gaps reflect the poorer educational opportunities available to historically marginalized populations. And while some early advocates of standardized testing were clearly motivated by racism (and the idea that a fair test would vindicate their views about the intellectual superiority of whites), others like Harvard’s president James Conant saw the test as a great equalizer and a gateway of opportunity for students who lacked the resources to otherwise document their academic potential.
The idea that the SAT is rigged against, for example, persons of color is given regularly reinforced plausibility by the commonly reported finding that average scores can vary considerably by racial group, with students of color (on average) scoring lower. This fact is complicated by another often-replicated and potentially contrary finding, much less frequently reported in the popular press: taking into account both SAT scores and high school grades tends to over-predict minority students’ first-year grades. Such over-prediction may result from the poorer mentoring resources actually available to economically disadvantaged or first-generation college students (other explanations abound), but even if true, one might argue that the SAT score makes a more optimistic prediction about likely performance than will often result. (The over-prediction issue got a lot of attention in Bowen and Bok’s influential The Shape of the River, published in 1998.)
Do test scores accurately predict college success? Well, they do seem a decently reliable predictor of first-year college grades, and when considered along with high school class rank and grade point average, there is good evidence that the SAT makes a distinctive predictive contribution. This is a point elided by the Wake Forest press releases, which emphasize the low predictive value of the SAT without noting that almost every other potential measure has even less predictive value. WFU emphasizes the superior predictive value of high school grades, but everyone obviously looks at those too, and the College Board’s major point is that looking at both has significant value.
Still, critics are also right to note that this contribution can be discounted as rather minor, and its value offset by the harm to otherwise bright students who don’t score well and are thus unfairly excluded from college by their low scores (this seemed to play a big part in Wake Forest’s decision). And there is compelling evidence that high SAT scores may reflect socioeconomic advantage more than true aptitude.
The challenge, of course, is finding an admissions standard immune to this criticism. One can invite students to write essays, but richer students are far more likely to have access to tutors and editors. Admissions interviews? Those seem likely to be highly subjective, single-person, one-shot encounters, and logistically impossible to perform reliably at a university with a large applicant pool. Amount of participation in school activities? Rich kids have more leisure time to be involved in such activities and more access to resources that will enable their success in them (since, again, rich families can underwrite a child’s success in high school chess clubs or debate contests or musical competitions). It strikes me as odd that the SAT people, having built a serious writing exam into the new format in 2005, have apparently found few takers in university admissions offices for the resulting scores, even though the early evidence seems to show that writing scores correlate even better with early college success than the old verbal and quantitative scores. And how about looking just at high school grades, which is the logic of the Texas policy of guaranteeing admission to anyone whose grades put them in the top 10% of their graduating class? Well, that doesn’t work perfectly either, in part because the very ethnic and gendered gaps that prompt concerns about SAT scores also infect high school GPAs. And yes, SAT test takers can invest in coaches, but some evidence reasonably suggests that SAT scores are less vulnerable to swings from practice testing and coaching than, say, the quality of an admissions essay might be.
My point is simply to play devil’s advocate, since Wake Forest seems to have made its difficult decision on the most sincere basis. And truth be told, I’m happy to defer to their experts (and the many others in their camp in the California and Texas systems and elsewhere). And I must say I have considerable confidence (if only because of their resources) in the WFU commitment to enhance their admissions operation and get to know applicants better, which I am sure will ensure the continuing very high quality of incoming classes.
But I still wonder whether any great university should be lessening its access to any serious barometer of aptitude (and it seems likely that, because admissions officers will know students have the option to submit SAT scores, they will tend to dismiss high reported scores as simply the result of self-selection, which effectively makes scores irrelevant at the decision-making margin). Why not simply weight the test less heavily rather than make it optional? Wake says only an optional reporting scheme will induce the many applicants it believes are scared away by the high SAT averages to jump into the applicant pool, but then says it will continue to publish those averages while also happily proclaiming that its application pool is surging. One wonders how the anecdotal sense that some refuse to apply because they think they’ll be rejected squares with the actual evidence of bounding popularity.
One irony, of course, is that the WFU decision is being made at the very time when reliance on never-ending testing is only increasing in K-12 education under the pressure of the invidious No Child Left Behind federal mandates. That fact may be the very reason to affirm the courage of the university’s action, but it may also evidence its likely futility, arriving as it does in a climate where students risk being taught only to the kinds of tests Wake Forest and others now seek to set aside.