The Problem With the SAT’s Idea of Objectivity

Faced with the messy realities of entrenched privilege, the College Board is trying to find a quantitative solution.


Students taking the SAT will soon be subjected to a new kind of assessment. On top of their math and verbal results, which indicate what knowledge they could summon internally during the exam, they’ll be placed along a scale of adversity: a representation of the external. By quantifying students’ social, economic, and family backgrounds, the College Board hopes to add new context to students’ test scores. Evaluating students on factors far beyond their control might seem like a novel attempt at leveling the playing field, but in some ways, it actually brings the test closer to its conflicted origins.

The adversity index was first piloted by 10 colleges in 2017. It consists of 15 factors meant to approximate the degree of disadvantage a student has faced, including the crime rate in her neighborhood, the rigor of her high-school curriculum, and the estimated education level of her parents. Students don’t see their numbers, but admissions officers do, and they have full discretion over whether to consider them when making admissions decisions. One of the pilot colleges, for example, used the score only when deciding whether to reevaluate an applicant it had initially rejected.

Students report only the high school they attend and their address, and the College Board uses publicly available data to determine the scores from there. Crime rates, poverty rates, housing values, and the like are derived based on where students live. Family context, such as parents’ educational achievements, is based on averages in a student’s neighborhood.

Although the index is aimed at diversifying universities, it does not use race to determine students’ scores. Black and white students in the same neighborhood would presumably receive the same scores, as the relevant information comes from city-level, publicly available data. Several states, including California and Oklahoma, ban public universities from considering race in admissions. One of the 2017 pilots took place at a college in Florida, which banned taking race into account in 1999.

Thus far, the index appears to be making good on its intentions. Yale University is one of the schools already using the adversity index on a trial basis for all applicants, The Wall Street Journal reports. Since last year, the share of low-income and first-generation freshmen the school admitted has doubled, to almost 20 percent of its incoming class.

Indices such as the College Board’s new scoring system are, by definition, numerical. But adversity isn’t quantitative; it’s qualitative: the entirety of the external influences on one’s life, and indeed on one’s ancestors’ lives. All 15 factors that make up the index are measurable, but they’re also subjective, the result of decades or centuries of environmental and historical legacy.

The College Board is essentially trying to find a quantitative solution to the messy realities of entrenched privilege, realities that are only amplified by the very college-admissions system the board is hoping to improve. It’s a noble goal and an appealing premise: that algorithms—orderly, objective, unburdened by bias or history—can solve problems we humans can’t. But these systems are only as good as the metrics that feed their calculations, and the people making them.

Take, for example, crime rates. Any sociologist modeling crime will explain that the figure doesn’t reflect the actual number of crimes that happen in a given neighborhood or city, but rather the number of crimes reported to police, which is complicated by a host of factors, including the race of alleged perpetrators and a community’s relationship with its police force. (White-collar crime, for instance, is hugely underrepresented in many statistics.) Further, the notion of what is criminal varies. Say two students in different states live in neighborhoods with identical rates of marijuana usage. The neighborhood in the state with legalized marijuana would show a very different crime rate, and the two students could receive different adversity scores.

Just because data are numerical doesn’t mean they’re objective. When they’re tied to different societal outcomes, they’re given meaning and made to tell a story.

A teenager living in a neighborhood with a high crime rate, a high poverty rate, many single-parent households, and high schools that don’t offer advanced classes might be deemed remarkably resilient by the College Board’s measurement, and the adversity index might help her get into an elite school. But the same numbers would mark her as more likely to commit crimes and less deserving of a loan or a reprieve from jail when applied in financial or criminal-justice systems, which source the same public data to make algorithmic decisions about other outcomes. The same numbers mean different things in different contexts. They don’t hold a single, objective truth, but rather provide evidence for a social hypothesis.

In this case, the hypothesis is that students who did not grow up in privilege relative to their peers have had to work harder, and that extra work should count in the college-admissions process. But the history of the SAT itself shows us that numbers can also be used to enforce power systems. The original Scholastic Aptitude Test was invented in 1926 by Carl Brigham, a Princeton alumnus and avowed eugenicist who created the test in the service of a racial caste system. In his book A Study of American Intelligence, he advanced standardized testing as a means of upholding racial purity. The tests, he wrote, would prove the racial superiority of white Americans and prevent “the continued propagation of defective strains in the present population”—chiefly, the “infiltration of white blood into the Negro.”

Francis Galton, who coined the term eugenics in 1883, was also the modern father of a number of key statistical methods, including correlation and regression. He used his statistical acumen to test and measure the physiological and psychological behaviors of white European men, with the long-term goal of determining which ones were fit to reproduce.

Galton would die tied to his beliefs, but Brigham grew to regret inventing the SAT, writing in 1930 that SAT test scores don’t measure innate ability passed through genes, but are instead “a composite including schooling, family background, familiarity with English and everything else, relevant and irrelevant.” That sounds shockingly similar to the stance in favor of the adversity index: that exam scores are inseparable from the external contexts bearing down or lifting up students as they receive their education and take the test.

The point isn’t that algorithms broadly, or the adversity index specifically, are racist—they’re not. But scoring people based on social factors, unlike scoring them based on correct answers to a math test, is a subjective exercise, even though there might be numbers involved.

This holds true across the aims of standardized testing. The original purpose of the SAT was to prove racial superiority, while the index promotes diversity. These are opposite goals, but they exploit the same methodology: using quantitative measurements to create a cohesive logic and enforce a narrative—about students, about neighborhoods, about social order.

Sidney Fussell is a former staff writer at The Atlantic, where he covered technology.