Write Like the Unabomber

Those of us with children in their junior and senior years of high school are by now quite familiar with the wringer that is the college admissions process. At the center of that process is the dreaded Scholastic Aptitude Test (SAT), administered by the College Board, supposedly as a predictor of student performance in a college environment. For decades it had the same basic format: a math section and a verbal section, each worth 800 points, for a combined "perfect" score of 1600. Two years ago, the College Board added a third section to the test, a "writing" section centered on a series of essays, each graded on a scale of 1-6 (six being perfect) using a set of "objective" criteria developed by the College Board. The "perfect" SAT score is now 2400.

However, many schools, including most of the Ivy League, do not yet use SAT writing scores in their admissions process. Most require their own essays, which are judged and scored by their own reviewers using their own criteria. The reasons cited by most (and I sat through these lectures at several schools) are (a) the lack of a track record for the section as a predictor; and (b) disagreements with the evaluation criteria used to score it. Over at the Phi Beta Cons blog at National Review Online, Robert VerBruggen is upset by this:

Last night I learned that Northwestern University, for the most part, just ignores the new writing portion of the SAT. This isn’t uncommon, and it’s a shame, because that part is the best of the three tests at predicting college GPAs.

Surely VerBruggen jests when he suggests that the writing test is the most accurate predictor of GPA. First of all, the test has not been given long enough to determine whether it is a predictor of anything, let alone student performance. Second, VerBruggen cites as evidence a story in USA Today (!) that in turn cites a representative of the College Board (!) who merely asserts that "studies" (conducted by the College Board!) show this to be the case. Well, where are the data, who peer reviewed the studies, and is the College Board, which administers the SAT and makes millions each year from it, really an impartial, unbiased observer? (In fairness, VerBruggen also informed me privately that there is a second study, by the University of California, that parallels the College Board's, but he did not provide the data.) Finally, the writing test is the least objective and most biased portion of the exam; it proves nothing except a student's ability to write in accord with the College Board's conception of good writing. I had long ago proposed an experiment in which works by great authors were subjected to the same grading criteria used on the SAT writing test. Too late! The Princeton Review had already done it, submitting essays by a number of famous writers. The results: Ernest Hemingway got a 3; William Shakespeare, a 2; Gertrude Stein, a 1. Only one writer received the coveted perfect 6: Ted Kaczynski, the Unabomber. As the Princeton Review relates:

The Unabomber, Ted Kaczynski, scores highest, owing to his following the highly formulaic requirements detailed by the test’s creators. The article also shows that Shakespeare might have saved himself a lot of grief if he’d dropped the poetic flair and focused on what the SAT graders look for. Results also show that Hemingway wouldn’t test out of freshman composition, and that Stein would most likely be taking remedial courses at her local community college.

In other words, because of its mechanistic approach to writing, the writing test rewards people who follow arbitrary and generally false "rules," such as keeping sentences short, avoiding the passive voice, and never beginning a sentence with "and" or "but." One can generate the same effect by running the works of great writers through the grammar checker in Microsoft Word. It's useful if you are totally illiterate, but its tendency is to homogenize writing and force it to conform to a very low common denominator of style.

VerBruggen did not agree with my criticism, and was harshly critical in turn of the Princeton Review article. According to him, "Schools should not train students to turn in papers that sound like Shakespeare anyway," which is debatable. One ought to ask, though, whether they should be encouraging people to write like the Unabomber.

One final criticism of the new SAT format, based on conversations with students who have taken it: at three hours, the old SAT pressed the limits of human endurance; the new test, at four hours plus, is a mind-numbing, finger-cramping ordeal that does not seem to prove anything other than that certain students have a high pain threshold. Since the writing test is administered last, when students are at the limits of their energy and suffering from mental fatigue, one has to wonder how well anyone could write under those circumstances.
