Devoted Reader Erin T. sent along an email in which she expressed her astonishment about an anti-testing site on the web. Why, I never knew such things existed, did you? The author of this website, Chris Carter, cites Wacky Alfie Kohn's books approvingly, and also posted this "press release" to a teacher's e-board, which is how Erin found it:
Because, Lord knows, we need more criticism of testing out there, and it needs to be easier to find. I mean, doesn't it get to you how reporters fall all over themselves to print only the good things about tests?
"Write these tests"? "Forced"? I love it when someone restates a requirement as being "forced" to do something, especially when schools vary widely as to how much emphasis they place on test scores.
I would think it was your duty as educators to be familiar with the research and hard data surrounding these tests, and be familiar with the cases for and against testing. Is there any particular reason that educators should have only this one viewpoint?
That's because SAT scores predict college grades, which aren't necessarily linked to later performance. Thus, the flurry of low to negative correlations between scores and later accomplishment that the author cites in this article is beside the point. The SAT has never claimed to predict success in life, so criticizing it for failing to do so misses the mark. What's more, for someone who goes to a lot of trouble to explain what a correlation is, Carter leaves out any discussion of restriction of range, possibly because doing so would leave the door open for contradiction of his theories.
An extended discussion of restriction of range wouldn't be appropriate here, but to sum it up quickly, a correlation is a measure of how two variables co-relate, or co-vary. If one of those variables has restricted variance, its correlation with any other variable will be "restricted" or lowered (closer to zero). If a variable does not vary, it cannot co-vary.
SAT scores for college students are restricted because, for the most part, if you have a low SAT score, you don't get in. So, as a hypothetical example, let's say that almost everyone who goes to Harvard has an SAT higher than 1200. That leaves us with scores between 1200 and 1600 to correlate with some measure of college success, or later success in life. Given that even smart people will screw up, fail to be "successful," or simply choose to stay out of the rat race, it's very possible that a Harvard grad with an SAT of 1200 will do fine, while one with an SAT of 1500 may drop out, or go bankrupt years later. That results in lowered correlations, but it doesn't follow that the SAT is not useful in college admissions. SAT scores tend to correlate with other measures of intelligence, and as long as we believe intelligence affects college performance, colleges will have more success with high-SAT admittees than with low-SAT ones.
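If you want to see restriction of range in action, a quick simulation makes the point. This is only a sketch with made-up numbers - the true correlation of .5, the SAT-like scale, and the 1200 cutoff are all assumptions for illustration, not real admissions data:

```python
import numpy as np

rng = np.random.default_rng(0)
n = 100_000
r_true = 0.5  # assumed true score/success correlation in the full population

# Simulate standardized SAT-like scores and a "success" measure
# constructed to correlate with them at exactly r_true.
sat_z = rng.standard_normal(n)
success = r_true * sat_z + np.sqrt(1 - r_true**2) * rng.standard_normal(n)

# Rescale to an SAT-like metric (mean 1000, SD 200, for a 400-1600 scale)
sat = 1000 + 200 * sat_z

# Correlation over everyone who took the test
r_full = np.corrcoef(sat, success)[0, 1]

# Now restrict the range: keep only the "admitted" examinees above 1200
mask = sat > 1200
r_restricted = np.corrcoef(sat[mask], success[mask])[0, 1]

print(f"full-range correlation:  {r_full:.2f}")
print(f"restricted correlation:  {r_restricted:.2f}")
```

Nothing about the test changed between the two calculations; only the variance of the scores did, and the correlation drops roughly by half. That's what happens when you compute correlations only on the students a selective school admitted.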
For what school? Every school weights the SAT and ACT differently, because those tests have different predictive validity for different populations. To average over all schools is, again, to mislead the reader. And the author reports that, "The SAT has the most predictive validity of the tests, with correlation coefficients ranging from .2 to .5 at most (R-squared ranging from .04 to .25)," as though this were a bad thing. Obviously, he's unaware of just how rare a correlation of .5 is in the social sciences, especially for one test, taken on one day, with a limited number of items. It's such a tiny snapshot of performance that correlations of .2 to .5 are just amazing.
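For reference, the R-squared figures Carter quotes aren't a separate finding; R-squared is just the correlation coefficient squared, i.e., the proportion of variance in grades that scores account for. A trivial check of his own numbers:

```python
# R-squared is simply the correlation coefficient squared:
# the proportion of variance in one variable accounted for by the other.
for r in (0.2, 0.3, 0.4, 0.5):
    print(f"r = {r:.1f}  ->  R-squared = {r ** 2:.2f}")
# r = 0.2  ->  R-squared = 0.04
# r = 0.5  ->  R-squared = 0.25
```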
Does Carter know of any other snapshot that is this cheap, standardized, and quick for schools to use that will have that high of a correlation with college grades?
Oh, again with the bias. As OpinionJournal likes to point out, if the world were to come to an end, the NYT would print the headline, "Armageddon arrives; women, minorities hit hardest." I have to give Carter credit for trying to define bias in his article, but then he wedges the idea of bias in where it shouldn't belong, here:
Sorry, but that's a definition of "differential impact," not bias. And while differential impact IS a topic that should be discussed thoughtfully, to lump this kind of effect in under bias is misleading. If the "poor" simply do not learn as much - quite possible given the likelihood of deprived homes and poorly-funded schools - then the test is actually performing correctly in assigning them lower scores.
Pardon me while I snicker uncontrollably. "Deep minds" sounds like a concept you think about while passing a bong around. The SAT is a test of basic skills which are very likely to come in handy for college classes. Will very smart - forgive me, "deep" - students find the test boring? Probably. Will it be less than useful for predicting how those students at the very high end of ability do in school? Most likely. But the only way that "deep" students will bomb the SAT in large numbers is if they fail to learn basic geometry and algebra, or how to discern the main point of a paragraph.
Anyway, that's my take. I also just have to point out that, although I started N2P as a way of rebutting anti-testing articles, I don't believe all test criticism is bad. There's plenty of room for debate on issues of standard setting, high stakes for young kids, differential impact as mentioned above, and so forth, and it would be great to see a website that discusses these issues without resorting to that old journalistic standby, "Critics say".
Unfortunately, Carter's site looks like it will just be rehashing the myths and bashing tests unconditionally, while not providing much of an alternative for states that want all their students to meet certain standards, or universities that are flooded with thousands of applicants each year. If testing is so bad, what's the best alternative? Carter believes it is "Samples of work, references, statements of purpose, and extra-curricular activities," all of which are fine, but not necessarily verifiable, or comparable across students, or shown to be predictive of college success. It never fails to amaze me how people who will nitpick the predictive validity of the SAT to three decimal places will offer up, as an alternative, things like "statements of purpose" for which no predictive validity data exist.
Oh, wait, before I go, I just have to quote this portion of his article, if only because I am amazed that he believes it is correct, or to the point:
How EVIL! Actually, the Princeton link is very simple. In the 1920's, Carl C. Brigham, the Princeton professor who published A Study of American Intelligence, came up with his own version of the Army Intelligence exam to use as an admissions test for Princeton freshmen. Brigham was hired by the College Board (which is in NYC) to lead a committee to develop the test that eventually became the SAT, which was administered for the first time in 1926 - 22 years before ETS officially opened its doors.
The two founders of ETS, Henry Chauncey and James Conant, met Brigham in 1933 when they traveled to Princeton and decided the SAT could be of use for Harvard students, and the rest is history. Why they incorporated in Princeton rather than Boston, I'm not sure, but the first ETS office was in fact in Princeton proper, in Professor Brigham's original space on Nassau Street:
The need to move was obvious. In 1954, Chauncey had a vision of its future when he took a hike along Stony Brook and saw a stretch of open farmland that looked perfect for ETS headquarters. ETS' move to its 360-acre campus in Lawrence Township was complete by 1958. It was a complex of low-rise, modern, brick buildings, but for the first few years it also shared space with a working dairy farm.
Therefore, the claim that ETS has no connection to Princeton University, or that ETS chose Princeton purely for PR, is both laughable and easily disproved through a bit of Googling. But I suppose the truth didn't fit with Carter's meta-theory about how all psychometricians are eeeevil capitalists.
Posted by kswygert at July 6, 2004 11:10 AM
In addition, northern Lawrence Township does not have its own post office and thus is assigned to the Princeton post office. I lived for a time in East Amwell, NJ, which also did not have its own post office, so I had a mailing address of Hopewell -- a town in a different county. Blame the US postal service... err wait... next the critics will claim ETS is a government conspiracy run by the Illuminati, or at least the Republican party. Posted by: Kensington at July 6, 2004 01:41 PM
But of course, the evil Conant knew about this post office reassignment before he bought those "luxury" grounds in Lawrence... (As for people who bitch about ETS's nice offices, I wonder if they realize that 2500 people work for that company, and having a lot of space in rural NJ is not unheard of for a company that size, non-profit or no.) Posted by: Kimberly at July 6, 2004 01:45 PM