Georgetown University’s Newspaper of Record since 1920

The Hoya

Explainer: What Georgetown Students Should Know About Artificial Intelligence


No longer the stuff of science fiction, artificial intelligence (AI) and machine learning (ML) are now a part of everyday life. 

Earlier this year, the editorial board of The Hoya called on Georgetown to adopt strategies to uphold academic integrity given the popularity of AI chatbots, such as ChatGPT, that can write entire essays and complete college-level homework assignments. With AI and ML impacting almost every aspect of modern life, knowing how they work is important to navigating spaces both on and offline.

Will Fleisher, a Georgetown philosophy professor specializing in the ethics of AI, said the impacts of AI and ML are already being felt in various spaces beyond just academic or workplace applications.  

“Computer systems developed with ML are already influencing our lives in myriad ways,” Fleisher wrote to The Hoya. “Who you date, where you work, where you live, what the state knows about you, and even the shape of democratic discourse in your community, are all being impacted by the output of artificial intelligence systems.”

Artificial intelligence is what it sounds like — a computer system that can perform tasks in an intelligent way, similar to how a human might. Machine learning is a subset of AI: the process by which a computer improves at a task from data rather than from explicit human instructions. A common way to achieve this is with a neural network, an algorithm loosely modeled on the human brain that adjusts its internal connections as it learns from examples.
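The core idea of "learning from examples" can be sketched in a few lines. This is a minimal illustration, not anyone's actual research code: a single artificial neuron that repeatedly guesses, measures its error and nudges its internal numbers in the right direction. The example data and learning rate are invented for the demonstration.

```python
# One artificial neuron learning a simple rule by trial and error.
# All numbers here are illustrative.

def train_neuron(examples, steps=1000, lr=0.1):
    """Learn weight and bias so that weight * x + bias approximates y."""
    weight, bias = 0.0, 0.0
    for _ in range(steps):
        for x, y in examples:
            guess = weight * x + bias   # the neuron's guess
            error = guess - y           # how wrong it was
            weight -= lr * error * x    # adjust to do better next time
            bias -= lr * error
    return weight, bias

# Teach the neuron the rule y = 2x + 1 from three examples.
examples = [(0, 1), (1, 3), (2, 5)]
w, b = train_neuron(examples)
```

A full neural network chains thousands or millions of such units together, but the guess-and-adjust loop is the same.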

At Georgetown, many scientists are studying how AI learns and applying it to solve complex problems.

Nicolas Bocock (CAS ’23), a student researcher working with physics professor David Egolf, is working to understand how AI learns. He uses neural networks in his research, feeding them images of handwritten numbers so they learn to recognize the correct digit.

“We are looking at what’s called deep machine learning,” Bocock said in an interview with The Hoya. “What it does is it basically takes each image and what the image actually is. It makes a guess as to what it thinks the thing is. Then it sees when it’s right, and it sees when it’s wrong. And it’s able to basically adjust its inner mechanics to get it right or get it better the next time.”
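The loop Bocock describes — guess, check whether the guess was right, adjust when it was wrong — is exactly how a classic algorithm called the perceptron works. The toy version below classifies invented three-pixel "images" rather than real handwritten digits, which are far larger, but the adjustment rule is the same in spirit.

```python
# A toy version of Bocock's guess-check-adjust loop: a perceptron
# that only updates its inner weights when its guess is wrong.
# The 3-pixel "images" and labels are invented for illustration.

def train_perceptron(data, passes=20):
    weights = [0.0, 0.0, 0.0]
    bias = 0.0
    for _ in range(passes):
        for pixels, label in data:          # label is 0 or 1
            score = bias + sum(w * p for w, p in zip(weights, pixels))
            guess = 1 if score > 0 else 0   # make a guess
            if guess != label:              # adjust only when wrong
                step = label - guess        # +1 or -1
                weights = [w + step * p for w, p in zip(weights, pixels)]
                bias += step
    return weights, bias

data = [([1, 0, 0], 0), ([0, 1, 1], 1), ([1, 1, 0], 0), ([0, 0, 1], 1)]
w, b = train_perceptron(data)
```

After a few passes over the data, the perceptron classifies every training example correctly — it has "learned" which pixels matter.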

Bradley Fugetta (CAS ’23), another student researcher in the physics department working with Professors Amy Liu and Gen Yin, applies ML to study the interactions between magnets and how to improve magnetic storage devices, such as hard drives. 

“In order to train our neural network, we simulated thousands of materials and exposed each simulated material to the same series of external magnetic fields,” Fugetta wrote. “At each applied field, the magnetization of the material was recorded. Eventually, the network was able to accurately estimate the strength of the interaction for materials based only on this large-scale magnetization data.”
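A stripped-down analogue of that workflow can be shown in code: simulate many "materials" with a known interaction strength, record each one's response to the same series of applied fields, then estimate the strength of a new material from its response alone. To keep the sketch short it uses a nearest-neighbor lookup instead of a neural network, and the tanh response model and all values are invented — they are not Fugetta's actual simulation.

```python
# Simulate labeled training data, then estimate an unknown physical
# parameter from measurements alone. Response model and numbers are
# invented for illustration; a nearest-neighbor lookup stands in for
# the neural network used in the real research.
import math

FIELDS = [0.5, 1.0, 1.5, 2.0, 2.5]   # the same applied fields each time

def simulate(strength):
    """Magnetization of a fake material at each applied field."""
    return [math.tanh(h / strength) for h in FIELDS]

# Training set: (response curve, known strength) pairs.
training = [(simulate(s / 10), s / 10) for s in range(5, 31)]

def estimate(curve):
    """Predict strength by finding the closest simulated curve."""
    def distance(other):
        return sum((a - b) ** 2 for a, b in zip(curve, other))
    best = min(training, key=lambda pair: distance(pair[0]))
    return best[1]

unknown = simulate(1.23)   # pretend this came from a real measurement
```

The key idea carries over: because the training examples come from simulation, the "right answer" for each one is known exactly, which is what lets the model learn to read the parameter back out of measurement-style data.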

Twitter/@NeuroscienceNew | Artificial Intelligence is increasingly prevalent in day-to-day lives.

One common concern about AI is that it inherits social biases from its training data. Because that data is created by humans, models often reproduce human prejudices, particularly with respect to gender and race.

“What’s interesting about the fear of ChatGPT in AI like this is what really comes down to a fear of ourselves,” Bocock said. “They’re learning from our internet, they’re learning from our things when these machines learn, they’re learning from us. And so they will pick up on the things that we don’t do so well already.”

Fleisher explained that AI models often discriminate against marginalized groups because of the data they learn from.

“It’s well-known that AI systems trained with machine learning can be biased: they can be less accurate for people from marginalized groups, or can otherwise compound social injustice,” Fleisher wrote. “There has been a great deal of research about this problem, but even figuring out how to detect or measure bias has proven to be a difficult problem.”

Fleisher also mentioned a lesser-known concern about AI — that the technology can exacerbate climate change due to the amount of energy needed to run certain programs, including Large Language Models that can create, recognize and predict text. According to a study by the Council on Foreign Relations, training a single AI system can produce over 250,000 pounds of CO2.

“The power requirements for developing and deploying contemporary AI such as Large Language Models are immense,” Fleisher wrote. “This means that using these sophisticated systems comes at a big cost to people everywhere, especially those who will be most affected by climate change.”

Despite these issues, Fleisher said AI and ML will continue to play a pivotal, exciting role in society — but caution is needed.

“ML technology is genuinely exciting, and it promises incredible new opportunities to help people,” Fleisher wrote. “But it also poses serious risks to justice, democracy, and human well-being. There is an urgent need for new thinking about how to fairly and democratically govern this technology.”
