Senators aim to boost AI knowledge with study sessions on fast-growing technology

As a growing number of lawmakers in both parties demand action on regulations for artificial intelligence, Senate Majority Leader Chuck Schumer (D-NY) has scheduled three briefings on the subject for his fellow senators, including a classified session.

Dates for the briefings have yet to be announced, but the first will cover AI generally. The second will concern America’s leadership in developing AI technology. The third, a classified session, will address AI-related issues in national defense and intelligence.


Use of the technology has skyrocketed since the wide release of the generative AI chatbot ChatGPT late last year. A Pew Research Center study conducted just two weeks later found that more than half of the public interacts with AI at least once a day. Data firm Statista reported that ChatGPT reached 1 million users in five days, making it the fastest-adopted consumer application in history.

Regulators have taken note. Concerns include bias in results and recommendations, misinformation, privacy, fraud, and job loss, among others not yet known.

Schumer’s briefings come on the heels of months spent crafting a high-level regulatory approach to the technology. He’s calling for “guardrails” that focus on transparency, government reporting, and the somewhat subjective goal of aligning these systems “with American values.” Those familiar with his plan say it would also require new AI technologies to be reviewed and tested by experts before release.

In a parallel effort, Sen. Michael Bennet (D-CO) introduced a bill to create an AI task force as “a top-to-bottom review of existing AI policies across the federal government.” It “would generate specific regulatory and legislative recommendations to ensure that the federal government’s AI tools and policies respect civil rights, civil liberties, privacy, and due process.”

President Joe Biden’s administration has signaled its inclination to support regulating the technology. Officials met with leading AI companies, sought public comment on accountability measures for AI systems, and released a “Blueprint for an AI Bill of Rights.” The administration has so far stopped short of defining many of the terms used in the plan.

Within the Department of Commerce, the National Telecommunications and Information Administration is looking into the merits of AI audits and certification. It has requested public comment on how best the technology might be regulated.

Federal Trade Commission Chairwoman Lina Khan has expressed concerns about AI’s potentially detrimental effects on competition and its role in enabling fraud, but she pledged to use existing laws to counter those harms.

AI regulatory bills were introduced in at least 17 states in 2022, a figure sure to grow as AI’s uses and user base expand. What could become a 50-state patchwork of differing regulatory regimes would surely increase the cost of deploying some AI technologies and presumably slow their progress.

Meanwhile, nations around the world are considering their own approaches to AI regulation. Differing national laws governing social media, data management, and privacy have already created confusion for consumers and record-setting fines for America’s leading tech companies. The same problems could be next for AI.

“Given these risks and concerns, it is crucial to develop standardized agreements that ensure that AI is balanced in its development,” Shane Tews, a nonresident senior fellow at the American Enterprise Institute, told the Washington Examiner. “This may take the form of an industry-specific, responsible-use agreement, or consensus best practices to avoid unintended harm.”

Tews acknowledges that AI’s rapid pace of advancement makes it difficult to predict its future capabilities and applications and, therefore, equally difficult to regulate against its risks without stifling beneficial innovation. She suggested that “multi-stakeholder approaches, standardization, certification, or regulatory sandboxes” might be the best path forward in developing AI principles.

Others are more broadly opposed to regulating AI. Adam Thierer of the R Street Institute told the Washington Examiner, “Artificial intelligence policy now threatens to become an all-out war on computation as regulatory schemes take aim at every layer of the production stack, including AI apps, models, chips, and even data centers.”


Because AI stretches across so many industries — online search, content creation, education, healthcare, customer service, and many more — the regulatory reach may be similarly limitless. Any plan this early in the life of the technology is bound to be broad and vague.

“The code cops are coming for AI, and it’s a nightmare scenario for American competitiveness and consumers,” Thierer warned.
