Google reportedly implemented a new “sensitive topics” review system this year that employees say is designed to restrict scientists from saying anything in their reports that might cast the search giant’s projects in a bad light.
Under the guidelines, researchers must clear research projects that involve “face and sentiment analysis and categorizations of race, gender, or political affiliation” with legal, public relations, and policy teams before moving forward with the projects, according to Reuters, which obtained access to internal websites outlining the review system.
“Advances in technology and the growing complexity of our external environment are increasingly leading to situations where seemingly inoffensive projects raise ethical, reputational, regulatory or legal issues,” one webpage said.
Employees told the outlet that the policy has been in effect since June.
The process is designed to provide an additional layer of scrutiny on top of Google’s already rigorous review standards, which are meant to identify and prevent the disclosure of trade secrets.
One senior Google manager reportedly told researchers to “take great care to strike a positive tone” in their report on content-recommendation technology, which has recently come under fire for pulling internet users down rabbit holes of increasingly radical content.
“This doesn’t mean we should hide from the real challenge,” the manager added.
Margaret Mitchell, a senior researcher at Google, said the policy runs the risk of interfering with or squashing reports that could identify potential harms caused by new technologies.
“If we are researching the appropriate thing given our expertise and we are not permitted to publish that on grounds that are not in line with high-quality peer review, then we’re getting into a serious problem of censorship,” Mitchell said.
Google’s research page says that the company gives “individuals and teams the freedom to emphasize specific types of work” and to “work on important research problems that are not tied to immediate product needs.”
Google faced criticism earlier this month after it fired a prominent artificial intelligence researcher who criticized the company for its “lack of progress in hiring women and minorities as well as biases built into its artificial intelligence technology,” according to the New York Times. The researcher, Timnit Gebru, said that Google also told her to retract a paper that identified flaws in new language recognition technology that Google relies on to power its search engine.
Studies that identify potential biases in Google’s services are among the “sensitive topics” mentioned in the company’s new policy, according to Reuters. The policy also warns against research involving China, Iran, Israel, COVID-19, the oil industry, home security, insurance, location data, religion, self-driving vehicles, and telecommunications.
The Washington Examiner reached out to Google for further comment.