Senators and representatives held separate hearings March 8 on the perils and promise of artificial intelligence (AI), signaling lawmakers’ growing regulatory appetite in the wake of actions on the technology from the Biden administration.
“AI is no longer a matter of science fiction nor is it a technology confined to research labs. AI is a technology that is already being deployed and broadly adopted as we speak,” said Aleksander Mądry, a computing professor at the Massachusetts Institute of Technology (MIT), in written testimony for the House hearing, held by the House Oversight Committee’s Subcommittee on Cybersecurity, Information Technology, and Government Innovation.
Earlier that same day, the Senate Homeland Security and Governmental Affairs Committee held its own hearing. One of the Senate’s witnesses, Brown University Professor Suresh Venkatasubramanian, contributed to the Biden administration’s new “AI Bill of Rights,” released to little fanfare in October 2022.
Venkatasubramanian also praised Biden’s February 2023 executive order on racial equity, which explicitly instructs federal agencies to “[advance] equity” when using AI systems.
Before the Biden administration acted on AI, the Trump administration, in 2019, launched the American Artificial Intelligence Initiative.
Through his fiscal year 2021 budget proposal, Trump also sought to double federal research & development spending on nondefense AI.
House Talks AI
In his testimony before the House, Eric Schmidt, the former CEO of Google, laid out three AI-related expectations for platforms that he believes everyone would find acceptable.
“First, platforms must, at minimum, be able to establish the origin of the content published on their platform. Second, we need to know who specifically is on the platform representing each user or organization profile. Third, the site needs to publish and be held accountable to its published algorithms for promoting and choosing content,” he said in written testimony.
Rep. Nancy Mace (R-S.C.), who chairs the House’s cybersecurity subcommittee, illustrated the power of new AI innovations in a very direct way.
She delivered an opening statement that she revealed was written by OpenAI’s ChatGPT platform. ChatGPT is an example of the burgeoning generative AI technologies that can convincingly mimic human writing, visual art, and other forms of expression.
“We need to establish guidelines for AI development and use. We need to establish a clear legal framework to hold companies accountable for the consequences of their AI systems,” said Mace-as-ChatGPT.
Her AI-written statement also warned that AI could “be used to automate jobs, invade privacy, and perpetuate inequality.”
The subcommittee’s ranking member, Rep. Gerry Connolly (D-Va.), noted that the federal government laid much of the groundwork for the Information Age half a century ago, suggesting there may be a precedent for more intensive federal involvement today.
The predecessor to the Internet, the U.S. Advanced Research Projects Agency Network (ARPANET), was the work of the U.S. Department of Defense, thanks in large part to pioneering computer scientist J.C.R. Licklider.
Before the Senate, Jason Matheny of the RAND Corporation outlined the key national security challenges presented by AI.
Those include “the potential applications of AI to design pathogens that are much more destructive than those found in nature,” according to his written testimony.
Bias a Concern
At the state level, AI-related legislation has emerged across the country over the past half-decade.
In 2019, Illinois broke new ground with the Artificial Intelligence Video Interview Act. The law requires employers who use AI to analyze video interviews of job applicants to disclose that fact prior to the interview.
A 2022 amendment requires employers to gather data on the race and ethnicity of such interviewees so as to identify any racial bias in subsequent hiring.
Similar concerns were voiced by the Democrats’ witness at the House cybersecurity hearing, University of Michigan intermittent lecturer and AI ethicist Merve Hickok.
Hickok’s prescriptions? Among other things, additional hearings and a possible “Algorithmic Safety Bureau.”
“You need to hear from those who are falsely identified by facial recognition [and those] wrongly denied credit and jobs because of bias built in algorithmic systems,” she said in written testimony.
ChatGPT’s Politics
Meanwhile, others worry about the leftward skew of ChatGPT.
EpochTV’s Jeff Carlson has written about the program’s apparent political bias on everything from Biden and Trump to the events of Jan. 6, 2021.
In the latter case, writes Carlson, ChatGPT made a false claim about Officer Brian Sicknick, saying he had been killed by protesters. It corrected that claim when prompted.
“ChatGPT appeared to ‘know’ that its first response was purposefully misleading—but only after it had been caught in the lie. This was a pattern that would be repeated in subsequent conversations with ChatGPT,” Carlson wrote.
Venture capitalist Marc Andreessen has warned about the ideological dimension of current debates over AI and its hazards.
“It’s not an accident that the standard prescriptions for putative AI risk are ‘draconian repression of human freedom’ and ‘free money for everyone,’” Andreessen wrote on Twitter.
“The outcome of the AI safety argument has to be global authoritarian crackdown on a level that would make Stalin blush. It’s the only way to be sure,” he added.
From The Epoch Times