
Arvind Narayanan, a Princeton University professor of computer science, explored the power of artificial intelligence (AI) in a book talk at Georgetown University on Nov. 4.
At the event, hosted by Georgetown’s science, technology and international affairs program, Narayanan explained key findings from his newly published book “AI Snake Oil: What Artificial Intelligence Can Do, What It Can’t, and How to Tell the Difference.” The lecture was this year’s installment of the department’s Loewy Lecture in Technology and International Affairs series, which invites professors to discuss topics in technology policy.
Narayanan said his motivation to co-write the book with Sayash Kapoor, a doctoral candidate at Princeton, stemmed from an unexpectedly viral talk he gave on the lack of legitimacy behind AI-powered hiring software.
“I was confused for a while of why is it that this particular thing that, of all the things I said, really caught on, and I realized that it was not because I had said something really profound, but because all of us suspect that a lot of what we’re being sold as AI products is overhyped or might not actually work,” Narayanan said at the event.
Narayanan noted that, although many people are concerned about AI, most lack the background knowledge to distinguish truthful claims from hype. His book aims to equip people with the tools to decide what to believe regarding claimed advances in AI technology.
“Unfortunately, while the AI industry has many critics, there are relatively few people saying, ‘Look, I understand how AI works, I build AI and, as far as I know, there is no one way by which this kind of thing can work,’” Narayanan said. “I realized that there was a big need to bring that kind of message to people, to give people the tools so that they will be able to discern which claims they can believe and which ones they should be skeptical about.”
Narayanan then discussed the role of AI in government, referencing a recent mayoral candidate’s attempt to let an AI chatbot run Cheyenne, Wyo. Narayanan argued that such a move would circumvent the fundamental purpose of government: to gather real human perspectives and work through societal conflicts.
“The reason political decisions are contentious is because politics is the forum that we have chosen as a society to resolve our deepest differences,” Narayanan said. “The debate is not inefficiency, it’s not a distraction, it’s the very point of the process, and to try to automate that is to completely miss the point.”
Transitioning to AI’s impact on the economy and labor market, Narayanan addressed worries that AI will eventually take over all jobs — not just those in politics — explaining that, although certain tasks are becoming automated, the number of roles for humans will not necessarily decrease.
“In terms of the impact on jobs, there are lots of nuances and we shouldn’t simply assume that there’s going to be a negative impact because AI has automated certain things that a particular kind of professional does,” Narayanan said.
Rajesh Veeraraghavan, an associate professor of science, technology and international affairs at Georgetown who attended the event, said students should not be concerned about AI diminishing opportunities to get hired.
“People might be worried about jobs,” Veeraraghavan told The Hoya. “I don’t think that one should worry about it, and even if it happens, it’s going to take a very long time.”
Narayanan went on to discuss the problem of disinformation spreading quickly on social media, saying the issue is not AI-specific, but rather representative of technology corporations’ failure to regulate disinformation content.
“If we treat it as an AI problem, we’ve missed what the real interventions need to be in,” Narayanan said. “Those are what social media companies are doing about the propagation of fake information.”
Christian Hale (GRD ’26), an attendee at the event, said the risks of AI in propagating disinformation are not necessarily new, given that existing technologies, such as photo and video editing software, already enable disinformation.
“Foreign actors have been a thing for years, propaganda, etc. It’s the same risks but now it’s just being labeled AI, and reframing it as something that already exists is my biggest takeaway,” Hale told The Hoya.
Narayanan said the best time to act on AI is the present, stressing the need to avoid past inaction in regulating social media.
“We kind of missed our window as a society with social media, and it’s our hope that we don’t repeat that same mistake with AI,” Narayanan said.