A Reply by Chris Carter
Reply to "Oh Look, a Testing Critic!"

Kimberly Swygert, author of a right-wing website called Number 2 Pencil, recently published an online critique of my website, The Case Against Standardized Tests. She also published a couple of “Replies” on her website, in which a couple of people expressed views supporting her critique. I wrote to Kimberly and asked her if she would let me post a response on her website. She refused, so I am posting a response here, along with her critique. The original can be read at

Anyway, her critique begins with a few sarcastic and irrelevant remarks, and then she quotes me as writing:

“If you read my article, you will see that these tests have no validity as predictors of actual accomplishment in any field."

Kimberly then offers this explanation:

“That's because SAT scores predict college grades, which aren't necessarily linked to later performance. Thus, the flurry of low to negative correlations between scores and later accomplishment that the author cites in this article are beside the point. The SAT has never claimed to predict success in life, so criticizing it for failing to do so is incorrect. What's more, for someone who goes to a lot of trouble to explain what a correlation is, Carter leaves out a discussion of restriction of range, possibly because to do so would leave the door open for contradiction of his theories.

An extended discussion of restriction of range wouldn't be appropriate here, but to sum it up quickly, a correlation is a measure of how multiple variables co-relate, or co-vary. If one of those variables has restricted variance, the correlation of that variable with any other variable will be "restricted" or lowered (closer to zero). If a variable does not vary, it cannot co-vary.

SAT scores for college students are restricted, because, for the most part, if you have a low SAT score, you don't get in. So, as a hypothetical example, let's say that most everyone who goes to Harvard has an SAT of higher than 1200. That leaves us with scores of between 1200 to 1600 to correlate with some measure of college success, or later success in life. Given that even smart people will screw up, fail to be "successful," or simply choose to stay out of the rat race, it's very possible for a Harvard grad with an SAT of 1200 will do fine, while one with an SAT of 1500 may drop out, or go bankrupt years later. That results in lowered correlations, but it doesn't necessarily follow from this that the SAT is not useful in college admissions. SAT scores tend to correlate with other measures of intelligence, and as long as we believe intelligence affects college performance, then colleges will have more success with high-SAT admittees than low-SAT scorers.”
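As an aside for readers unfamiliar with the statistics: the mechanical effect Kimberly describes here is real, and easy to demonstrate with a quick simulation (a sketch using made-up, normally distributed numbers with an arbitrary admissions cutoff, not real SAT data). Whether it actually explains the SAT's weak correlations is a separate question, taken up under "Range Restriction" below.

```python
import random
import statistics

random.seed(0)

def pearson(xs, ys):
    """Plain Pearson correlation coefficient."""
    mx, my = statistics.mean(xs), statistics.mean(ys)
    sx, sy = statistics.pstdev(xs), statistics.pstdev(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    return cov / (sx * sy)

# Simulate "scores" and a noisily correlated "outcome" for 10,000
# hypothetical test takers (both are invented numbers).
scores = [random.gauss(1000, 200) for _ in range(10_000)]
outcomes = [0.5 * s + random.gauss(0, 200) for s in scores]

full_r = pearson(scores, outcomes)

# Now keep only the "admitted" students (score above 1200), as a
# selective college would, and recompute the correlation.
admitted = [(s, o) for s, o in zip(scores, outcomes) if s > 1200]
restricted_r = pearson([s for s, _ in admitted],
                       [o for _, o in admitted])

print(f"full-range correlation: {full_r:.2f}")
print(f"restricted correlation: {restricted_r:.2f}")
```

Running this, the correlation computed over the truncated group comes out well below the full-range correlation, even though the underlying relationship between the two variables is identical in both cases.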

Cost of Testing

I have two comments to make here. First, my article concerns ALL multiple-choice standardized tests that are meant as predictors: IQ tests (the granddaddy of them all), the SAT, GMAT, LSAT, MCAT, and so on. These tests do have a very limited ability to predict grades in school, but a central point of my article is this: if the tests are useless as predictors of any form of actual accomplishment, and almost useless as predictors of grades once we take into account previously earned grades, then perhaps we should think long and hard about paying ETS over $300 million per year to take these tests, and spending another $100 million on coaching schools. And these figures do not include the opportunity cost of teachers drilling students in preparation for these tests. Kimberly simply ignores the cost of testing.

Range Restriction

Second, Kimberly remarked: “for someone who goes to a lot of trouble to explain what a correlation is, Carter leaves out a discussion of restriction of range, possibly because to do so would leave the door open for contradiction of his theories.” But if Kimberly had bothered to read my footnotes, she would have discovered that the objection about restriction of range has already been taken into account. Crouse and Trusheim, a statistician and a psychologist, wrote a book called The Case Against the SAT, in which they examined the evidence and concluded that the SAT has almost zero predictive validity for college grades once previously earned grades are considered. The following is taken directly from footnote #2 in my article:

“And it is important to note here that these findings do not result from a restricted range in test scores. Crouse and Trusheim write:

‘Our results do not, however, arise because of restricted ranges. Recently, ETS searched its Validity Study Service records for the College Board and found twenty-one colleges where the distributions of SAT scores and high school records are virtually identical to those for the over-all SAT taking population. In these carefully chosen colleges with unrestricted range for high school records and SAT scores, the optimal equation for predicting freshman grades using high school records and SAT scores is among the best we have seen…. If any data should show large benefits of the SAT, it should be these.

Yet they do not. … the gains in freshman grades for the students selected with the SAT only average 0.03 on a four-point scale, again almost identical to the gains we report above.’ (Ibid, page 67)”

In other words, ETS found 21 colleges that let in virtually everyone: high grades, low grades, high scores, low scores, and any combination thereof. Crouse and Trusheim analyzed the data and got the same results: near-zero predictive validity for SAT scores once previously earned grades are taken into account. Sorry, Kimberly, but range restriction does not offer a way out.

Bias Against Students with Deep Minds

Kimberly then quotes me again:

“Perhaps most surprisingly, there is evidence that these tests are biased against students with deep minds.”

To which she comments:

“Pardon me while I snicker uncontrollably. "Deep minds" sounds like a concept you think about while passing a bong around. The SAT is test of basic skills which are very likely to come in handy for college classes. Will very smart - forgive me, "deep" - students find the test boring? Probably. Will it be less than useful for predicting how those students at the very high end of ability do in school? Most likely. But the only way that "deep" students will bomb the SAT in large numbers is if they fail to learn basic geometry and algebra, or how to discern the main point of a paragraph.”

But as I wrote in my article,

“This criticism of standardized tests is not new. Banesh Hoffman, professor of mathematics and former collaborator with Albert Einstein, made exactly this point in his 1962 book The Tyranny of Testing. According to Dr. Hoffman, it is the multiple-choice format that is to blame. ‘Multiple choice tests penalize the deep student, dampen creativity, foster intellectual dishonesty, and undermine the very foundations of education’ he remarked in a 1977 interview.”

Perhaps Kimberly finds this funny, but I fail to see the humor in giving ETS millions of dollars every year in order to load the dice against many of the people who can contribute the most to this world.

Alternatives to Standardized Testing

Kimberly writes:

“Unfortunately, Carter's site looks like it will just be rehashing the myths and bashing tests unconditionally, while not providing much of an alternative for states that want all their students to meet certain standards, or universities that are flooded with thousands of applicants each year. If testing is so bad, what's the best alternative? Carter believes it is, "Samples of work, references, statements of purpose, and extra-curricular activities," all of which are fine, but not necessarily verifiable, or comparable across students, or shown to be predictive of college success. Never fails to amaze me how people who will nitpick to three decimal places the predictive validity of the SAT will offer up, as an alternative, things like "statements of purpose" for which no predictive reliability data exist.”

What I actually say in the article is this:

“As Bok and Bowen conclude, admissions committees need to abandon their narrow preoccupation with predicting first year grades, and focus on admitting those applicants that are likely to contribute the most to their field and to society. Samples of work, references, statements of purpose, and extra-curricular activities are all better indicators of future behavior than test scores.”

William G. Bowen is the former President of Princeton University, and Derek Bok is the former President of Harvard University. None of us ever argued that “samples of work, references, statements of purpose, and extra-curricular activities” are useful for predicting grades. Predicting grades with slightly-increased accuracy is completely beside the point. The point is, these things are better indicators of future accomplishment (although I would be the first to admit that a statement of purpose should probably be taken with a grain of salt, especially if it is not consistent with everything else in the student’s portfolio).

No Connection with Princeton University

Kimberly quotes me again:

“Incidentally, despite having a mailing address in Princeton, New Jersey, ETS has no connection with Princeton University. Its luxurious headquarters, including tennis courts, a swimming pool and a private hotel, are in Lawrence Township, not Princeton. The Princeton mailing address is merely for public relations.”

To which she comments:

“How EVIL! Actually, the Princeton link is very simple. In the 1920's, Carl C. Brigham, the Princeton professor who published A Study of American Intelligence, came up with his own version of the Army Intelligence exam to use as an admissions test for Princeton freshmen. Brigham was hired by the College Board (which is in NYC) to lead a committee to develop the test that eventually became the SAT, which was administered for the first time in 1926 - 22 years before ETS officially opened its doors.”

I reiterate: ETS has NO formal links to Princeton University. The Princeton mailing address is purely for public relations. The fact that the person who invented the SAT worked at Princeton is of no consequence when we are wondering why ETS tries to deceive us into thinking there might be some link to the university by using a dishonest mailing address. It is as if a former Princeton student kept a mailbox in Princeton and had his mail forwarded to him. There is no reason why ETS has to maintain a mailing address in Princeton. The municipal building of Lawrence Township has the following mailing address:

Township of Lawrence
2207 Lawrence Road, P.O. Box 6006
Lawrenceville, New Jersey 08648-3164

The United States Postal Service website gives 08648-3164 as the ZIP code for Lawrenceville (Lawrence Township).

And, for those who are interested, here is a quote from Carl Brigham, inventor of the SAT:

"The Nordics are rulers, organizers, and aristocrats... individualistic, self-reliant, and jealous of their personal freedom... as a result they are usually Protestant... The Alpine race is always and everywhere a race of peasants... The Alpine is the perfect slave, the ideal serf... the unstable temperment and the lack of reasoning power so often found among the Irish... Our figures, then, would rather tend to disprove the popular belief that the Jew is intelligent... he has the head form, stature, and colour of his Slavic neighbors. He is an Alpine Slav."
        Carl Brigham, 1923

I doubt Kim would endorse this quote, but it does show some of the sordid history of the testing industry, and the very real dangers of using test scores to classify people.

For another example of ETS dishonesty, look no further than the name of its most famous test: the SAT. It used to be called the Scholastic Aptitude Test, but ETS was challenged in court to defend the use of the term “aptitude” and the claim that the test could measure it. ETS lost the case and was forced to drop the name “Scholastic Aptitude Test.” So what did they do? They changed the official name of the test to SAT! If you ask ETS what SAT stands for, they will tell you that SAT does not stand for anything. So why not give the test an honest name, something like the First Year College Grade Predictor? This would end the misconception that SAT still stands for Scholastic Aptitude Test, and would clearly indicate how the test scores are meant to be used.

Finally, Kimberly writes:

“Therefore, the claim that ETS has no connection to Princeton University, or that ETS chose Princeton purely for PR, is both laughable and easily disproved through a bit of Googling. But I suppose the truth didn't fit with Carter's meta-theory about how all psychometricians are eeeevil capitalists, though.”

I have no problem with ETS making money. What I do have a problem with is that ETS does not use an honest mailing address, does not give its flagship product an honest name, pronounces over and over again that coaching does not work while running a profitable side business selling coaching material, and calls itself a non-profit “testing service” while raking in millions. If ETS cannot be trusted on such simple matters, how can we trust it on less obvious ones?

Kimberly, I challenge you to post this reply on your website.
