Partisan AI is a significant problem without an easy solution

Opinion

Recent allegations by the creators of GIPPR, a conservative chatbot based on ChatGPT technology, that OpenAI shut down their project heighten concerns that artificial intelligence technology will be used to limit certain viewpoints. GIPPR's developers allege that OpenAI said their bot violated policies regarding "deceptive activity and coordinated inauthentic behavior." Testing, however, showed that the bot repeatedly identified its partisan viewpoint with statements such as "as a right conservative AI, I" and "as a conservative AI, I believe."

Beyond the dispute between GIPPR and OpenAI, concerns about the politics of technology are not new. For example, German-American rocket scientist Wernher von Braun faced recurrent criticism for his willingness first to develop rockets for the World War II German military and then to switch to rocket development for the U.S. military and NASA.



The implications of artificial intelligence development are no less significant than those of the Saturn V or V-2 rocket. Some have contended, with unnecessary hyperbole, that improperly managed AI could lead to the ultimate calamity of human extinction. More realistically, AI has helped the disabled, improved medicine, made real estate agents more efficient, and helped organizations recover from cyberattacks. In the future, it is poised to aid humans in numerous areas, from improving education to drive-thru restaurant ordering.

But given the many studies showing that scientists' political inclinations (based on donation analysis) lean toward the Left, it is easy to be concerned that AI may not represent Republican, libertarian, or numerous other views. Given AI's potentially pervasive future role in society, biased AI is inherently problematic.

And it's not just partisan bias that should be concerning. AI can also exhibit racial and ethnic, gender, geographical, and numerous other biases. These can stem from the underlying technology, but in many cases they stem from the biases of a system's developer or of the data the AI is built or trained with.

Regulation and licensing of AI technologies have been floated as a means of preventing demographic biases from affecting AI operations. Regulating applications can indeed be effective: many application areas are already regulated, and existing laws, such as those prohibiting discrimination, may already cover problematic behavior. However, application regulation does not give the government control over the underlying algorithms or their speech. Both the programmer's code and the application's recommendations may be protected expression covered by the First Amendment.

There is, similarly, no current basis for the government to require, under most circumstances, that algorithm developers make their technology available for others to use. However, this is not inconceivable. Models used for phone company regulation, such as requiring that parts of the phone network be made available to all carriers, may provide some insight into the implications of doing so.

Fortunately, those (including government agencies) who wish to shape AI development have a strong tool: financial incentives. Agencies can fund technology development with particular goals through grants and contracts, or they can create incentive programs (modeled, for example, after NASA's Centennial Challenges program) to encourage developers to undertake these efforts independently. Companies and entrepreneurs can invest in or start firms to develop technologies, or contract with others to do so.

Similarly, political parties may benefit from funding technology development aligned with their goals and objectives. This provides a mechanism for getting technologies desired by agencies, firms, or individuals developed. It also increases the overall level of resources available for AI, further expanding technology development.

The public interest sparked by ChatGPT and similar technologies shows the challenge posed by new AI technologies. It is crucial for stakeholders, from individual developers to large corporations and government agencies, to appreciate the significance of diverse perspectives in the development and implementation of AI technologies. It’s only through collective efforts that we can hope to harness the full potential of AI.



Jeremy Straub is the director of the North Dakota State University’s Institute for Cyber Security Education and Research, an NDSU Challey Institute faculty fellow, and an assistant professor in the NDSU Computer Science Department.
