Senator Mark Warner (D-Va.) spoke to Georgetown University students about the intersection of artificial intelligence and democracy as part of the Georgetown Institute of Politics and Public Service’s guest speaker series Nov. 2.
GU Politics Executive Director Mo Elleithee (SFS ’94) moderated the event, jointly hosted by GU Politics and the McCourt School of Public Policy’s Tech and Public Policy Program. Warner, who currently serves as chair of the Senate Select Committee on Intelligence, discussed the dangers that artificial intelligence (AI) poses to democracy and the need for stronger federal regulations.
Warner highlighted recent developments in artificial intelligence, including the release of OpenAI’s ChatGPT in November 2022 and the launch of Meta’s alternative model, LLaMA, which uses open-source code, in February 2023. He said LLaMA, because its code is readily available for people to download, made it easy for anyone in the world to leverage the program for both benign and malicious purposes.
Warner focused on two areas of AI application he thinks have critical implications for national security: elections and the economy. He explained that the rapid advancement of AI comes with an onslaught of risks for the American public.
“What are the two domains of AI today that I think have the most immediate risk in terms of a large-scale massive undermining?” Warner said at the event. “The first is faith in public elections. The other is faith in our public markets, where there has been a lot less focus.”
Warner said the speed and scale at which AI can be developed and implemented makes it a powerful tool for bad actors, who may use AI to illegally manipulate the stock market or create deepfakes of political candidates.
Additionally, Warner said AI has influenced the United States’ position on the global stage by creating new tactics for threatening and defending national security. He said technological development is an important factor in establishing international power, making the issue a priority from not only a military perspective but also a political one.
“AI is a component part in the battle between authoritarian regimes and non-authoritarian regimes on who wins the war on tech,” Warner said.
Will Severn (CAS ’27), an attendee, said he hopes the event will encourage students to think beyond AI’s implications for academics and grapple with its effects in other arenas.
“I think as students, a lot of the ways that AI interacts with us in most of our lives is related to schoolwork, ChatGPT, all the other iterations of generative AI, but this is really a landscape that touches every industry, every facet of our society in this country,” Severn told The Hoya.
Warner also spoke about Congress’ role in the regulation of AI, acknowledging that in the past, Congress has failed to successfully regulate social media, another prominent technology affecting U.S. citizens.
“Congress has been pathetically ineffective on putting guardrails on social media,” Warner said.
Warner said there is a persistent gap between Congress’ purported desire to regulate technology and the amount of legislation it has historically passed.
“It’s easy to raise your hand, harder to actually put words on a page,” Warner said.
Warner, who has a strong track record of participating in bipartisan efforts to regulate technology in Congress, advocated for an increase in AI regulation while ensuring the United States leverages AI to its full potential, such as through further investment in industries powering AI like semiconductor manufacturing.
Warner issued an open letter to the CEOs of several AI companies in April 2023, urging them to combat bias, prioritize security and responsibly release new technologies. Warner wrote that companies should recognize the risks new technologies bring and incorporate ethical development practices, such as safeguards ensuring that AI protects user privacy and data.
“As companies like yours make rapid advancements in AI, we must acknowledge the security risks inherent in this technology and ensure AI development and adoption proceeds in a responsible and secure way,” Warner wrote in the letter.
Warner said that there seems to be a global consensus on the need for better regulation of AI, noting that many countries came together at the 2023 AI Safety Summit, an international conference discussing AI regulation, to foster collaboration in AI regulation. He also cited the creation of the United Nations’ AI Advisory Body as a signal of international cooperation on the issue of AI.
Warner said there needs to be an alignment of interests among independent researchers, lawmakers, companies and civil society members to make AI safe and helpful. While critics of AI say the technology can perpetuate biases and spread discrimination, AI companies oppose regulations on the grounds that such policies would stifle innovation, slowing development and leaving the U.S. at a disadvantage to global competitors like China.
“We’re going to need the best and the brightest to help us with this,” Warner said.
Severn said that since there is still a lot the public does not understand about the future of AI, it can feel overwhelming for those interested in the field.
“With technology and AI especially, it feels as though the more time you spend, the more you learn, but the less you feel you know,” Severn said.
Warner said he encourages students to participate in discussions about the ethical use of artificial intelligence and the future of technology in America.
“The mission is important, but we need people to step up to that,” Warner said.