BOOK EXCERPT

Ivy League's meritocracy lie: How Harvard and Yale cook the books for the 1 percent

"We are credentializing a new elite by legitimizing people with an inflated sense of their own merit"

Published January 11, 2015 6:00PM (EST)

Minnie Driver and Matt Damon in "Good Will Hunting" (Miramax Pictures)

Excerpted from "The Tyranny of the Meritocracy: Democratizing Higher Education in America"

A special lottery is to be held to select the student who will live in the only deluxe room in a dormitory. There are 100 seniors, 150 juniors, and 200 sophomores who applied. Each senior’s name is placed in the lottery 3 times; each junior’s name, 2 times; and each sophomore’s name, 1 time. What is the probability that a senior’s name will be chosen?

Does this kind of question look familiar? For most of you, it probably does: it represents just one of the nearly two hundred questions that presently make up the SAT. (The answer, by the way, is 3/8, or 37.5 percent, for those among us who prefer percentages to fractions.) For nearly a century, universities across the country have used SAT scores and other quantifiable metrics to make decisions about admitting one candidate versus another—decisions that can have far-reaching impact on both the admitted and the declined candidates’ educational, social, professional, and financial futures. On the basis of what? we might ask. Originally the acronym SAT stood for Scholastic Aptitude Test, on the strength of the argument that a high schooler’s success on the test correlated with his or her success in the increasingly rigorous environment of college. As evidence of this correlation dwindled, the name was changed first to the Scholastic Assessment Test (keeping the handy, well-known acronym) and later to the SAT Reasoning Test. Call it what you will, the SAT still promises something it can’t deliver: a way to measure merit. Yet the increasing reliance on standardized test scores as a means of assigning status in society has created something alien to the very values of our democratic society, yet seemingly with a life of its own: a testocracy.
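For readers who want to check that arithmetic, here is a minimal sketch (in Python, purely for illustration) of the weighted-lottery computation behind the 3/8 answer:

```python
# Minimal sketch of the lottery arithmetic from the sample question above.
# Each senior's name goes in 3 times, each junior's 2 times, each sophomore's once.
from fractions import Fraction

senior_entries = 100 * 3     # 300 slips
junior_entries = 150 * 2     # 300 slips
sophomore_entries = 200 * 1  # 200 slips
total = senior_entries + junior_entries + sophomore_entries  # 800 slips

p_senior = Fraction(senior_entries, total)
print(p_senior, float(p_senior))  # 3/8 0.375
```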

Allow me to be clear: I’m not talking about all tests. I’m a professor; I believe in methods of evaluation. But I know, too, that certain methods are fairer and more valuable than others. I believe in achievement tests: diagnostic tests that are used to give feedback, either to the teacher or to the student, about what individuals have actually mastered or what they’re learning. What I don’t believe in are aptitude tests, testing that—by whatever clever new code name it goes by—is used to predict future performance. Unfortunately, the SAT functions as neither. Even the test makers do not claim it’s a measure of smartness; all they claim is that success on the test correlates with first-year college grades, or if it’s the LSAT (Law School Admission Test), that it correlates with first-year law school grades.

As I’ll explain later, such a correlation is slight at best. In any case, it’s certainly not a barometer of merit. Merit is much too big a concept to simply refer to how you’re going to do in your first year of college or law school. After all, if all we cared about were how well you do in your first year of college, we would have college programs last only one year, right? Why would you have to be there and pay tuition for three more years? We do and we must care about more than freshman-year grades—we care about whether students learn something in college, whether they grow into themselves on the way to becoming better citizens and making their distinctive contributions to society. What we really care about are all the things that the testocracy can’t measure.

How then did we get to a place where American higher education appears more concerned with applicants’ test scores and alumni financial contributions than with the education of current students and the contributions of alumni to our society as a whole? A review of America’s curious history of—and relationship with—an obsessive culture of testing may help answer these questions.

* * *

“Manly, Christian character.” That was the ideal that Endicott Peabody, a member of the New England Brahmin class, hoped to cultivate in the boys who attended his private boarding school, Groton. Peabody founded Groton in 1884 with the purpose of building character and embedding the value of “noblesse oblige” into the social fabric of late-nineteenth-century America. Groton students, like young men from seven other boarding schools in the northeastern United States, were to embody character, manliness, and athleticism. The “Big Three” colleges—Harvard, Yale, and Princeton—validated these ideals by admitting nearly all boarding-school applicants and conferring honorary degrees upon Peabody.

Admission into the “Big Three” was fairly easy if the applicant possessed a “manly, Christian character.” He had to pass subject-based entrance exams devised by the colleges, but the tests weren’t particularly hard, and he could take them over and over again to pass. Even if a student didn’t pass the required exams, he could be admitted with “conditions.” Once enrolled at Harvard, Yale, or Princeton, he would focus primarily on his social life, clubs, sports, social organizations, and campus activities, while often ignoring his academic work.

Admissions began to change, however, when Charles William Eliot became president of Harvard in 1869. Annoyed with “the stupid sons of the rich,” Eliot sought to draw into the university’s fold capable students from all segments of society. To ensure that smart students could attend Harvard regardless of their means, Eliot, in 1898, abolished the archaic Greek admission exams that were popular up until that time. He also replaced Harvard’s admissions exams with exams created by the College Entrance Examination Board because it tripled the number of locations where applicants could be tested. The result of Eliot’s changes was the admission of more public school students, including Catholics and Jews.

A. Lawrence Lowell, Eliot’s successor, attempted to reverse the trend of admitting those without WASP status and values. The “Jewish problem” in particular alarmed Lowell. The number of Jews at Harvard had increased steadily, from 10 percent in 1909, to 15 percent in 1915, to 21.5 percent in 1922. In addition to their growth in numbers, Jews generally outperformed non-Jewish students academically. Lowell worried that Harvard might suffer the same fate as Columbia, which experienced “WASP flight” as more Jewish students started to enroll. In response, Lowell limited freshman enrollment to one thousand and altered the admissions criteria to include an emphasis on “character,” legacy, and athleticism rather than solely on academic achievement. Additionally, the application process now required interviews and photos, as well as letters of recommendation. Initially a method to limit Jewish enrollment, the notion of a “well-rounded” applicant was born in the first half of the 1920s.

But altering admissions criteria to benefit socially desirable students was not enough for Harvard. With an increasingly complex university admissions process, a new and uniform system was needed to separate the wheat from the chaff. The SAT became the solution that the ruling elite had been desperately seeking for all this time to perpetuate itself: a testocracy, disguised as a meritocracy.

* * *

The origins of the SAT can be traced to the turn of the twentieth century, when the College Board, the nonprofit organization that owns the rights to the modern-day SAT, administered the nation’s first college entrance examinations, in 1901. Unlike today’s SAT, these exams were composed entirely of essays that required students to engage with subjects as far-ranging as Latin, world history, and physics. The birth of these exams came at about the same time as another social scientific phenomenon: intelligence testing.

In 1905, French psychologist Alfred Binet developed the world’s first IQ test, which aimed to produce a set of predictable results from which one could “derive a rating of . . . ‘mental age’ ” and “identify slow learners [who] could be given special help in school.” Binet’s theories were eventually adapted by the United States military during World War I, when Harvard professor and IQ-test advocate Robert Yerkes convinced Army brass to allow him to evaluate nearly two million soldiers to identify top talent who could be promoted to the rank of officer. The results were striking: according to Yerkes, “The native-born scored higher than the foreign-born, less recent immigrants scored higher than more recent immigrants, and whites scored higher than Negroes.”  In 1923, Carl C. Brigham, a Princeton psychology professor and leading figure in the growing anti-immigration movement of the time, authored a treatise titled A Study of American Intelligence, in which he relied heavily upon Yerkes’s findings to conclude that “American intelligence is declining, and will proceed with an accelerating rate as the racial admixture becomes more and more extensive.”

This belief, which Brigham helped to perpetuate, was lampooned by F. Scott Fitzgerald, a Princeton graduate, in his novel The Great Gatsby, published two years later, in 1925.

“Civilization’s going to pieces,” broke out Tom violently. “I’ve gotten to be a terrible pessimist about things. Have you read ‘The Rise of the Colored Empires’ by this man Goddard?”

“Why, no,” I answered, rather surprised by his tone.

“Well, it’s a fine book, and everybody ought to read it. The idea is if we don’t look out the white race will be—will be utterly submerged. It’s all scientific stuff; it’s been proved. . . .

“This fellow has worked out the whole thing. It’s up to us, who are the dominant race, to watch out or these other races will have control of things. . . .

“This idea is that we’re Nordics. I am, and you are, and you are, and—” After an infinitesimal hesitation he included Daisy with a slight nod, and she winked at me again. “—And we’ve produced all the things that go to make civilization—oh, science and art, and all that. Do you see?”

There was something pathetic in his concentration, as if his complacency, more acute than of old, was not enough to him any more.

The College Board selected Professor Brigham to spearhead the design of a new, nationwide college entrance exam, and on June 23, 1926, Brigham oversaw the very first administration of what was then called the Scholastic Aptitude Test.

News of the SAT’s success eventually made its way up to Cambridge, Massachusetts, where James Bryant Conant presided as president of Harvard University (from 1933 to 1953). Unlike many of his peers at the time, Conant openly embraced the Jeffersonian ideal of a “natural aristocracy of talents and virtue”—a forerunner of the twentieth-century idea of the meritocracy. In 1934, Conant assigned two of his assistant freshman deans, Henry Chauncey and Wilbur J. Bender, the task of identifying high-performing middle-class and ethnic-immigrant students for the possible receipt of need-blind scholarships to the university. The two men offered up Brigham’s SAT as the optimal screen through which eligible candidates could be filtered. Conant accepted their recommendation, mandating that applicants take the test in order to be considered for scholarships.

A battle that had begun with idealistic rhetoric succumbed to a Trojan horse: the SAT and a budding testocracy confirmed the existing order as inevitable, because the tests demonstrated that the elite possessed unassailable merit. Harvard’s adoption of the SAT subsequently set a new gold standard in the world of education. Chauncey went on to found the Educational Testing Service, in 1947, which has inherited the College Board’s role as administrator of the SAT (and has developed a host of popular graduate-level entrance exams in its own right). By the 1950s, the College Board had grown to around three hundred members, and more than half a million students sat for the exam every year during that period. Test-preparation companies, such as Kaplan and the Princeton Review, thrived as a result of the SAT’s rise, and “much of the curriculum in American elementary and secondary education [was] reverse-engineered to raise SAT scores” to ensure admission to top universities.

This leaves us in a particular quandary today, best described by Lucy Calkins, founding director of the Reading and Writing Project at Columbia University’s Teachers College. Referring to the most recently appointed president of the College Board, she asks, “The issue is: Are we in a place to let Dave Coleman control the entire K–12 curriculum?”

* * *

This is not to say that the testocracy has continued to gain ground unabated. Close to eight hundred colleges have decreased or eliminated reliance on high-stakes tests as the way to rank and sort students. In the current environment, however, moving away from merit by the numbers takes guts. The testing and ranking diehards, intent on maintaining their gate-keeping role, hold back and even penalize administrators who take such measures. The presidents of both Reed College and Sarah Lawrence College report experiencing forms of retribution for refusing to cooperate with the “ranking roulette.”

At the center of this conflict is the wildly popular US News & World Report’s annual college-rankings issue—the bible of university prestige. In the book Crazy U, Andrew Ferguson describes meeting Bob Morse, the director of data research for US News and the lead figure behind the publication’s college rankings. Morse, a small man who works in an unassuming office, is described by Ferguson as “the most powerful man in America.” And for good reason: students and parents often rely upon the rankings—reportedly produced only by Morse and a handful of other writers and editors—as a proxy for university quality. These rankings rely heavily on SAT scores for their calculations. Without such data available from, for example, Sarah Lawrence, which stopped using SAT scores in its admissions process in 2005, Morse calculated Sarah Lawrence’s ranking by assuming an average SAT score roughly 200 points below the average score of its peer group. How does US News justify simply making up a number? Michele Tolela Myers, the president of Sarah Lawrence at the time the school stopped using the SAT, reported that the reasoning behind the lowered ranking was explained to her this way: “[Director Morse] made it clear to me that he believes that schools that do not use SAT scores in their admission process are admitting less capable students and therefore should lose points on their selectivity index.”

This is the testocracy in action, an aristocracy determined by testing that wants to maintain its position even if it has to resort to fabrication. What is it they are so desperate to protect? The answer initially seems to be that the SAT can predict how well students will do in college and thus how well-prepared they are to enter a particular school. There is a relationship between a student’s SAT score and his first-year college grades. The problem is it’s a very modest relationship. It is a positive relationship, meaning it is more than zero. But it is not what most people would assume when they hear the term correlation.

In 2004, economist Jesse Rothstein published an independent study that found only a meager 2.7 percent of grade variance in the first year of college can be effectively predicted by the SAT. The LSAT has a similarly weak correlation to actual achievement in law school. Jane Balin, Michelle Fine, and I did a study at the University of Pennsylvania Law School, where we looked at the first-year law school grades of 981 students over several years and then looked at their LSAT scores. It turned out that there was a modest relationship between their test scores and their grades. The LSAT predicted 14 percent of the variance in first-year grades. And it did a little better the second year: 15 percent. Which means that 85 percent of the time it was wrong. I remember being at a meeting with a person who at the time worked for the Law School Admission Council, which constructs the LSAT. When I brought these numbers up to her she actually seemed surprised they were that high. “Well,” she said, “nationwide the test is nine percent better than random.” Nine percent better than random. That’s what we’re talking about.
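To put those percentages in context, here is a minimal sketch of the standard statistical relationship between "variance explained" and a correlation coefficient; it assumes, as is conventional in such studies, that the figures cited are R-squared values (the square of the correlation r):

```python
# Minimal sketch: converting "percent of variance predicted" (R^2) back to
# a correlation coefficient r, assuming the figures cited are R^2 values.
import math

figures = {
    "SAT vs. first-year college grades (Rothstein, 2004)": 0.027,
    "LSAT vs. first-year law school grades (Penn study)": 0.14,
    "LSAT vs. second-year law school grades (Penn study)": 0.15,
}

for label, r_squared in figures.items():
    r = math.sqrt(r_squared)
    unexplained = 1 - r_squared
    print(f"{label}: R^2 = {r_squared:.3f}, r = {r:.2f}, "
          f"variance left unexplained = {unexplained:.0%}")
```

Under that reading, even the LSAT's best showing corresponds to a correlation of roughly 0.39, leaving some 85 percent of the variance in grades unexplained.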

So, if the SAT barely correlates with the grades a student will get in college, how can a student’s performance in college be predicted? William C. Hiss and Valerie W. Franks, both formerly of the Bates College admissions department, released a report in 2014 that studied thirty-three colleges and universities that required neither the SAT nor its very popular competitor, the ACT, for admission. Now, which students did or did not choose to submit their standardized-test scores is in itself interesting—overwhelmingly, those who did not submit a score were women, minority students, or those who would be the first in their family to go to college, which should tell us a lot about the SAT right there.

In reviewing the performance of more than eighty-eight thousand students, Hiss and Franks found that the students who performed well in college were the ones who had gotten strong grades in high school, even if they had weak SAT scores. They also found that students with weaker high school grades did less well in college—even if they had stronger SAT scores. Summing up their findings, they wrote, “Many of us who have spent our careers as secondary and university faculty and administrators find compelling the argument that ‘what students do over four years in high school is more important than what they do on a Saturday morning.’”

So, if the SAT does not measure aptitude—and if it doesn’t even pretend to measure achievement—then what does it measure? I have argued for years that the SAT is actually more reliable as a “wealth test” than a test of potential, and the most recent results bear this out. Below are figures released in 2013 by the College Board that correlate SAT scores with the family income of the test taker.

AVERAGE SAT SCORE (OUT OF 2400) FOR 2013 COLLEGE-BOUND SENIORS, BY FAMILY INCOME

FAMILY INCOME            AVERAGE SAT SCORE
$0 - $20,000             1326
$20,000 - $40,000        1402
$40,000 - $60,000        1461
$60,000 - $80,000        1497
$80,000 - $100,000       1535
$100,000 - $120,000      1569
$120,000 - $140,000      1581
$140,000 - $160,000      1604
$160,000 - $200,000      1625
More than $200,000       1714

Now that is a correlation! This is what I refer to as the “Volvo effect.” In Crazy U, Ferguson talks about how the parents of his son’s friends and classmates were spending $30,000 to $35,000 to prepare their children for college. That isn’t the amount they had to pay for a premier boarding school, mind you—that was the amount they paid to hire someone to tutor their child on the SAT and to help them write their “statement of interest” essays on their college applications. When these students get into a particular college, we say that this process reflects the fairness of the meritocracy, but really it only reflects the fact that the elite dominate the entry to higher education. These students aren’t smarter than the other students. Or to put it another way: they may be smart, but they are not necessarily those most likely to contribute to our society; they simply come from families that have more money to pay people to prepare them for the SAT, to test-prep them for their high school grades, and to pay for viola lessons so they can stand out more in the admissions process.
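To make the "Volvo effect" concrete, here is a minimal sketch that computes the correlation between family income and average score using the College Board figures in the table above. The bracket midpoints, and the $250,000 stand-in for the open-ended top bracket, are illustrative assumptions of mine, not source data:

```python
# Minimal sketch: Pearson correlation between family income and average SAT
# score, from the 2013 College Board table above. Bracket midpoints (and the
# $250,000 stand-in for "more than $200,000") are illustrative assumptions.
from statistics import correlation  # requires Python 3.10+

income_midpoints = [10, 30, 50, 70, 90, 110, 130, 150, 180, 250]  # $1,000s
avg_sat_scores = [1326, 1402, 1461, 1497, 1535, 1569, 1581, 1604, 1625, 1714]

r = correlation(income_midpoints, avg_sat_scores)
print(f"r = {r:.2f}")  # approximately 0.97 under these midpoint assumptions
```

Compare that near-perfect correlation with the roughly 0.16 correlation between SAT scores and first-year college grades implied by Rothstein's figure above.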

The SAT’s most reliable value is as a proxy for wealth. It is normed to white, upper-middle-class performance, as numerous studies have shown when the test is viewed through the lenses of race and class. The figures below, from 2013, show this in stark relief.

AVERAGE SAT SCORE (OUT OF 2400) FOR 2013 COLLEGE-BOUND SENIORS, BY TEST-TAKER ETHNICITY

TEST-TAKER ETHNICITY                          AVERAGE SAT SCORE
Black or African American                     1278
Mexican or Mexican American                   1354
Puerto Rican                                  1354
Other Hispanic, Latino, or Latin American     1355
American Indian, Alaska Native                1427
Other                                         1501
White                                         1576
Asian, Asian American, Pacific Islander       1645

Is this a case of merit belonging to one race and not to another? Or is it the case that if you have grown up in a particular environment, such as one where your parents lack the funds to prepare you for these standardized tests or lack an advanced level of education themselves, you will not do as well on the SAT? There are other reasons why students of various ethnicities may underperform on the SAT. One of these is a phenomenon called “stereotype threat,” a term coined by Claude Steele of Stanford University (now provost of the University of California at Berkeley) to describe the anxiety a person may experience when he or she has the potential to confirm a negative stereotype about his or her social group. Many first- and second-generation immigrants of color test well, for example, because they retain a national identity free of America’s racial caste system and enjoy material and cultural advantages, including professional or well-educated parents. They do not internalize the stigma of race and are thus less affected by the anxiety of confirming assumptions of intellectual inferiority that depresses test scores of highly motivated students who are African American, Mexican American, or of Puerto Rican heritage.

I know this threat is real. One summer not too long ago, I was engaged in a long-term writing project and recruited an absolutely brilliant young man who is Latino. Enrique (not his real name) has a photographic memory. I mean, he blew my mind. I have never seen anybody who could tell you, “Oh, well that’s on page 384. It’s in the middle of the page. I think it’s the first paragraph, not the second one.” But Enrique could not do well on the LSAT, though he practiced taking it close to thirty times. Enrique grew up in a low-income community, so arguably that had something to do with the verbal references that he might have missed. But a lot more of it had to do with stereotype threat: he was too tense. Postscript to this story: Enrique was subsequently selected to be a Rhodes scholar. So what, really, are we talking about here?

If we can agree that the SAT, LSAT, and other standardized tests most reliably measure a student’s household income, ethnicity, and level of parental education, then we can see that reliance on such test scores narrows the student body to those who come from particular households. Then we must decide how to ensure that we open the admissions doors to a greater diversity of students—not just the ones from privileged backgrounds. I want to make it clear that I am not talking about affirmative action here. The loud debate over affirmative action is a distraction that obscures the real problem, because right now affirmative action simply mirrors the values of the current view of meritocracy. Students at elite colleges, for example, who are the beneficiaries of affirmative action tend to be either the children of immigrants or the children of upper-middle-class parents of color who have been sent to fine prep schools just like the upper-middle-class white students. The result? Our nation’s colleges, universities, and graduate schools use affirmative-action-based practices to admit students who test well, and then they pride themselves on their cosmetic diversity. Thus, affirmative action has evolved in many (but not all) colleges to merely mimic elite-sponsored admissions practices that transform wealth into merit, encourage over-reliance on pseudoscientific measures of excellence, and convert admission into an entitlement without social obligation.

No, the question, as I said in the previous chapter, is this: How do we move from admission to mission? Further: How do we move past that moment of admission, which may only confirm one’s present status, to granting an opportunity for a diverse and worthy group of individuals to learn how to work together collectively and/or creatively to help solve the deep challenges confronting our communities, our economy, and our educational experiences in a democratic society? Of course, some of this has to do with how we define success. A study of Harvard alumni over three decades, which culminated in the 1990s, defined “success” by income, community involvement, and professional satisfaction. Researchers found a high correlation between those criteria and two criteria that might not ordinarily be associated with Harvard freshmen: low SAT scores and a blue-collar background. This is echoed by college admissions officers at elite universities today, who report—when asked what predicts life success—that, above a minimum level of competence, “initiative” or “hunger” is the best predictor. Marlyn McGrath Lewis, director of admissions for Harvard, says, “We have particular interest in students from a modest background. Coupled with high achievement and a high ambition level and energy, a background that’s modest can really be a help. We know that’s the best investment we can make: a kid who’s hungry.” That’s certainly the message of Derek Bok and William Bowen’s The Shape of the River: that those who are motivated to take advantage of an opportunity, when given the opportunity, can and often do succeed, often in ways that differ from those of their more privileged peers. The African American students in the Bok-Bowen study, for example, became leaders within their communities at much higher rates than their more affluent and better-scoring white counterparts.

When I speak here of diversity, I’m not talking strictly along color or gender lines either. When the GI Bill was first proposed, toward the end of World War II, some university officials did their best to get it defeated. They were appalled by the prospect of what they saw as a mob of unprepared, unsuitable men trying to be their students. To their surprise, the veterans—many of them poor, most the first in their families to attend college—proved to be among the best students of their generation. By broadening access to college for those who had served their country, the GI Bill helped fuel the post–World War II economic boom while leveling the playing field for many Americans. The bill epitomized our country’s dual commitments: to open opportunity across the economic spectrum and to invest in people who will give back to society.

We see the problem of restricted access today in the new elite class, which passes on its privileges in the same way that the old elite from twentieth-century America passed on its privileges. But there is an even more worrisome aspect of the new elite. The old elite felt that it had inherited its privileges; in order to defend the social oligarchy over which it reigned, the old elite felt the need to give back through public service or a financial commitment to the greater good. The old elite recognized that it had been privileged by the accident of birth, so the message to those who were out of luck was that you were unfortunate but it was through no defect of your own.

The new elite, on the other hand, feels that it has earned its privileges based on intrinsic, individual merit. The message, therefore, to those who are not part of this elite is “You are stupid. You simply don’t matter. I deserve all the advantages I’m granted.” This attitude manifests in the jobs that college grads now take. For example, the student-run Harvard Crimson ran an article in 2007 about that year’s graduating class smirking that “only” 43 percent of female graduates entered finance and consulting compared to 58 percent of male graduates. The article, entitled “ ’07 Men Make More,” explained—with apparent disdain—that women choose jobs in lower-paying fields such as education and public service.

Despite the economic downturn of recent years, the striking number of Harvard graduates entering finance and consulting has persisted. The class of 2013 senior survey showed that more than 30 percent of the 2013 class had jobs in those fields. After consulting and finance, the technology/engineering industry captured 13 percent of Harvard graduates that year. The Crimson again emphasized—with what seems to me to be the appearance of similar disdain—the preference of women to pursue less-lucrative work in education, media, and health care rather than in finance, consulting, and technology.

The top career choices of many male Harvard students—whether it is 2007 or 2013—are severely lacking in any element of service. This is the damage that we are doing through our testocracy. We are credentializing a new elite by legitimizing people with an inflated sense of their own merit and little willingness to open up to new ways of problem solving. They exude an arrogance that says there’s only one way to answer a question—because the SAT only gives credit for the one right answer.

The world, by contrast, provides us with more than one correct answer to most questions. In the face of mounting criticism, the College Board has recently proposed changes to the SAT, including reducing the use of obscure vocabulary words, narrowing the areas from which the math questions will be drawn, and making the essay section optional. But individuals such as Bard College president Leon Botstein find these proposed changes too little, too late, because they don’t address the test’s real problem. In an eloquent rebuttal, Botstein writes:

The essential mechanism of the SAT, the multiple choice question, is a bizarre relic of long outdated twentieth century social scientific assumptions and strategies. As every adult recognizes, knowing something or how to do something in real life is never defined by being able to choose a “right” answer from a set of possible answers (some of them intentionally misleading). . . . No scientist, engineer, writer, psychologist, artist, or physician—and certainly no scholar, and therefore no serious university faculty member—pursues his or her vocation by getting right answers from a set of prescribed alternatives that trivialize complexity and ambiguity.

Meaningful participation in a democratic society depends upon citizens who are willing to develop and utilize these three skills: collaborative problem solving, independent thinking, and creative leadership. But these skills bear no relationship to success in the testocracy. Aptitude tests do not predict leadership, emotional intelligence, or the capacity to work with others to contribute to society. All that a test like the SAT promises is a (very, very slight) correlation with first-year college grades.

But once you’re past the first year or two of higher education, success isn’t about being the best test taker in the room any longer. It’s about being able to work with other people who have different strengths than you and who are also prepared to back you up when you make a mistake or when you feel vulnerable. Our colleges and universities have to take pride not in compiling an individualistic group of very-high-scoring students but in nurturing a diverse group of thinkers and facilitating how they solve complex problems creatively—because complex problems seem to be all the world has in store for us these days.

Excerpted from "The Tyranny of the Meritocracy: Democratizing Higher Education in America" by Lani Guinier (Beacon Press, 2015). Reprinted with permission from Beacon Press. All rights reserved.


By Lani Guinier

In 1998, Lani Guinier became the first woman of color appointed to a tenured professorship at Harvard Law School. Before her Harvard appointment, she was a tenured professor at the University of Pennsylvania Law School.
