The statistic that 1-in-5 college women are the victims of sexual assault is so ubiquitous, and advocates so insistent that “the science is settled,” that predictable outrage follows when different reports—like a new analysis from the American Association of University Women showing that 89 percent of colleges and universities reported zero rapes to the federal government in 2015—produce different results. It’s proof, these advocates say, that too few women go to authorities and that too few of those authorities pass what does get reported along to the federal government. But it’s not cause for panic, according to the authors of the influential, controversial campus sexual assault surveys that gave us 1-in-5 in the first place.
The AAUW analysis is based on data that schools provide to the Department of Education under the Clery Act, which requires them to notify the department of every sexual assault reported on or adjacent to campus. Dr. Christopher Krebs of RTI International—the author of the 2007 study, underwritten by the National Institute of Justice, that determined, “Overall, 19% of undergraduate women reported experiencing attempted or completed sexual assault since entering college”—told me his latest data on the subject had already replicated the gap between the rate of sexual assaults recorded in anonymous multi-campus surveys and the much lower rate at which those incidents are reported under the Clery Act.
Krebs surveyed 23,000 undergraduates at nine institutions for 2016’s Campus Climate Survey Validation Study and found 60 reported incidents that met the Clery criteria. According to that year’s Clery data, there were 40 such incidents. Krebs describes that gap as well within the margin of error for such a survey. “If you think our survey data are meaningless or flawed, but you think Clery Act data represent reality—then actually our survey data look pretty good,” Krebs said.
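To see how a 60-versus-40 gap can sit inside sampling error, here is a rough back-of-the-envelope check. This is my own illustration, not the study’s actual methodology: it treats the survey as a simple random sample, then applies a hypothetical design effect, the inflation factor statisticians use for clustered samples like a nine-campus survey, to widen the confidence interval.

```python
# Illustrative only -- not the study's methodology. A normal-approximation
# 95% confidence interval for the survey's count of Clery-qualifying
# incidents, first assuming simple random sampling (deff = 1), then a
# hypothetical design effect of 2 to reflect the clustered, multi-campus design.
import math

n = 23_000          # undergraduates surveyed across nine institutions
survey_count = 60   # survey incidents meeting the Clery criteria
clery_count = 40    # incidents in that year's official Clery filings

p_hat = survey_count / n                       # estimated proportion
se_srs = math.sqrt(p_hat * (1 - p_hat) / n)    # standard error under SRS

for deff in (1.0, 2.0):                        # deff = 2 is an assumption
    se = se_srs * math.sqrt(deff)
    lo, hi = p_hat - 1.96 * se, p_hat + 1.96 * se
    print(f"deff={deff}: 95% CI for the count = [{lo * n:.0f}, {hi * n:.0f}]")

# Output: deff=1.0 gives roughly [45, 75]; deff=2.0 gives roughly [39, 81].
```

Under the naive simple-random-sample assumption, the Clery figure of 40 falls just below the interval; with any realistic design effect for a clustered nine-school sample, it lands inside, consistent with Krebs’ characterization.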
While it’s important to realize the vast majority of sexual assaults don’t get reported, he said, taking the 1-in-5 statistic and applying it universally, or using it politically, misrepresents the research. “People have taken our work from previous studies and used 1-in-5 from it. They create statistics that they then want to use as if they’re a national average, or that this is the magnitude of the problem everywhere. We’ve never said that. But that’s how it gets used.”
Misleading as 1-in-5’s sensationalized invocation tends to be, it might help inspire universities to confront the possibility that unreported assaults could be taking place on their campuses. “We advocate for schools doing a methodologically rigorous survey of their students,” Krebs said, “and only then do I think the data are really going to do some good.”
When it comes to data-driven truth-seeking on individual campuses, where the rates of sexual assault—reported and unreported—vary widely, obsessing over national averages is counterproductive. Having a new number to freak out about—9 out of 10 colleges reported zero rapes under Clery in 2015, in scary contrast to the oft-repeated 1-in-5—helps political advocacy but hurts understanding. “Our study more than any other practically shows that there’s huge variation from one school to another,” Krebs said. “The biggest finding in our research is that, no, it really varies from one campus to another. If I’m a sheriff in Charlotte, North Carolina, the national homicide rate or burglary rate or carjacking rate doesn’t tell me anything about my problem.” Reductive, one-size-fits-all data leads to policies that might not help much, I reflected. “I think it can be misleading,” Krebs agreed. Could it even chill reporting, I wondered? “Absolutely, absolutely,” he said.
Reducing a comprehensive study to an attention-grabbing ratio won’t help anyone figure out what’s actually happening at individual schools, where the rates of sexual assault, reported and unreported, invariably diverge from the norm. Even the definition of sexual assault varies from campus to campus, he told me. Krebs’ data, collected under the auspices of the Justice Department, reupped the oft-cited statistic after the figure had been doubted and debated, even by the Washington Post fact checker. But now that it has won back credibility in the public eye, with help from the Post’s own polling, 1-in-5 still doesn’t mean what everyone thinks it means. “For some people it’s all about shock and awe and grabbing attention,” Krebs said. “On both sides, you see media outlets that are just looking for the highest number.”
Another study, published by the Association of American Universities in 2015, found a rate similar to that of Krebs’ 2007 study after surveying three times as many schools—but with a response rate so low, critics contended, that it left the entire field vulnerable to debunking. One of the authors, David Cantor of Westat, defends the findings precisely because the results have replicated so consistently. But even he doesn’t always love the way they’re applied. “I’m not saying that everybody is interpreting these the way we intend them to,” he told me.
The evident underreporting problem may not be quite what it looks like, either. “There are a lot of reasons why people don’t report,” he added. “Very few of them have to do with being urged by people to report things. There are real personal consequences in reporting something. Those are there whether the administration is open to it or not.” Still, the gap between the rate of sexual assaults reported in surveys and the rate reported to administrators varies in size from campus to campus. The culture of an individual campus or dorm depends on much more than a self-selecting survey of dozens of schools can detect. “We did find that there was a lot of variation across the schools,” Cantor said, echoing Krebs.
“One in five” simply isn’t as representative as advocates, politicians, and reporters of a certain stripe make it out to be. Underreporting is not a new phenomenon. In fact, one obvious takeaway is that the spectacularly flawed adjudication system foisted on colleges—a system that lends itself to hoaxes, mishandled cases, and false accusations—doesn’t seem to be encouraging a higher number of victims to come forward. We can’t actually know whether young women feel more or less comfortable seeking help now than they did before. What we can reasonably deduce—from the available data, but also from being human—is that more students feel comfortable confiding their experiences in anonymous surveys than in reports to administrators and police.
Sussing out the numbers nearest to the real campus rape rate at an individual school will also require rigorous data collection and a commitment to an honest, easily understood definition of what actually constitutes rape, sexual assault, sexual violence, and sexual harassment. “Many people define it differently,” Krebs noted. “You really have to read the fine print. One study used the phrase ‘sexual violence’ as their main outcome, but when you read the fine print you realized that included in their measure of sexual violence was sexual harassment. Verbal sexual harassment was under this umbrella of ‘sexual violence.’ The definitions are important.”
The Government Accountability Office diagnosed a definitional discrepancy disorder last summer. A report from the government watchdog determined that inconsistent definitions of sexual assault across different federal agencies impeded accurate data collection. It’s an understandable problem, really. While we’re struggling toward a higher consciousness of the primordial mess, it’s only appropriate that we come up against the predictable communication breakdown.
A uniformly unambiguous definition of sexual assault would serve public safety better than a dozen sensational headlines, I’ll venture. And, while we’re on the subject of public safety, there’s one aspect of the crisis, if we’re to call it that, that is at once the most important and the most often overlooked. Are college campuses such dens of secret sexual predation—or is life itself?
“That the campuses are that much different than the general population still has not really been shown, at least empirically,” Cantor told me. “All of these surveys are just of campuses; they’re not of women of similar age who are going to college or not going to college.” One report from the Bureau of Justice Statistics analyzed 18 years of National Crime Victimization Survey data and found no marked difference between college-age women in and out of school. Dorms and frat basements draw more federally funded surveys—but there’s no reason, other than upper-middle-class myopia, to assume they’re more dangerous in this respect than bars and break rooms. Critics dismiss the Bureau of Justice Statistics’ comparative findings, however, because they consider the National Crime Victimization Survey’s definition of “sexual assault victimization” too narrow.
The analysis of 2015’s Clery Act data from the American Association of University Women, released Wednesday under a hand-wringing headline, tellingly plugs AAUW’s own 2005 study. That study defined sexual harassment unusually broadly, according to the contemporary Times article on its findings, in order to declare that a two-thirds majority of college students had experienced it. The resulting panic laid the groundwork for a generation of crisis-promotion. AAUW could thus appear to conflate broadly defined misconduct with violent rape—a well-meaning definitional widening, perhaps, but one that undermines accurate data collection and the original intent of the Clery Act.
Whatever definition the fine print actually assigns it, any word pairing that couples the adjectival cognate of sex with a criminal-sounding abstract noun stirs an emotional response—fear, anger, and disgust—and ignites the instinct to protect. The thought of college-age women at risk necessarily clouds our capacity for objective judgment. That instinct is good news for humanity—but it’s counterproductive to aim for an emotional response first, when the real goals are accurate data collection and effective policymaking.