The Hoya

Georgetown University’s Newspaper of Record since 1920

Fulbright Scholar Discusses the Duality of AI in Cybersecurity in School of Continuing Studies Seminar


The Georgetown School of Continuing Studies (SCS) hosted Gayan Benedict, the Chief Technology Officer of Salesforce Australia and New Zealand and a Fulbright Scholar at the Georgetown University Cyber SMART research collaborative, at an Oct. 26 seminar.

The event was the first seminar in the Responsible AI Intersectoral Series, which focuses on the ethical implementation of AI and the future of AI integration in fields such as marketing, human resources and cybersecurity.

Benedict discussed the benefits and consequences of artificial intelligence in the cybersecurity field in his talk titled “AI: A Double-Edged Sword for Cybersecurity Professionals.” Frederic Lemieux, director of the SCS master’s programs in Applied Intelligence and Cybersecurity Risk Management, moderated the discussion.

According to Benedict, professionals from every industry are turning to AI and other emerging technologies to support ongoing cyber projects. 

“I think it’s human nature normally to look at what we currently do and then ask how AI disrupts or changes that,” Benedict said. “AI stands to accelerate, amplify, broaden and deepen all the sorts of things which are already in train.” 

Georgetown University | At an event hosted by the School of Continuing Studies on Oct. 26, Dr. Gayan Benedict discussed the potential risks and rewards of AI in cybersecurity, emphasizing the balance between technology and human insight in defending against threats in the 21st century.

Lemieux said that AI will be especially useful in closing the cyber workforce gap in regions such as Latin America and Asia. 

“AI is kind of thriving because organizations ask themselves what they can do to protect themselves effectively without humans. AI is providing a lot of automated solutions in areas where we are seeing concerns right now,” Lemieux told The Hoya. “There seems to be a soft underbelly with the healthcare sector targeted by ransomware, as well as areas we don’t think about very often like the world of law firms.”

Lemieux said cyberattacks have exposed pre-existing notions of security that are no longer guaranteed, especially for lawyers.

“They kind of think of their client-attorney privilege as something that is granted, but research by one of my students revealed that they don’t have a lot of protection — yet they have a lot of secrets,” Lemieux said.

Benedict said integrating AI can fundamentally help organizations become more resilient in their cyber emergency protocols. Synthetic data, or data generated by AI itself, can be used to test an organization’s security controls and identify vulnerabilities and weaknesses without compromising real sensitive data in the process.

“AI is very good at creating realistic synthetic data to test your controls and identify vulnerabilities, which potentially doesn’t violate the sort of privacy and confidentiality concerns that you would otherwise be limited by,” Benedict said. “But the problem is that the bad guys have access to that as well, and as we’ve seen recently the quality of phishing has gone up.”
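Benedict’s example lends itself to a brief illustration. The sketch below, a hypothetical Python example that is not drawn from the seminar, generates synthetic login events, injects a simulated brute-force burst and checks whether a simple detection control catches it, all without touching real user data. Every account name and threshold in it is an assumption made for illustration.

```python
import random

# Hypothetical sketch (not from the seminar): using program-generated
# synthetic login events, instead of real user data, to test a simple
# brute-force detection control. All names and thresholds are assumptions.

random.seed(42)  # deterministic output for the example

def synthetic_login_events(n_users=50, n_events=1000, attack_user="user_07"):
    """Generate fake login events; one account receives a simulated attack."""
    events = [
        {
            "time": t,
            "user": f"user_{random.randrange(n_users):02d}",
            "success": random.random() > 0.1,  # ~10% background failure rate
        }
        for t in range(n_events)
    ]
    # Inject a synthetic brute-force burst: rapid failed logins on one account.
    events += [{"time": t, "user": attack_user, "success": False}
               for t in range(100, 130)]
    return sorted(events, key=lambda e: e["time"])

def detect_bruteforce(events, window=30, threshold=10):
    """Flag users with `threshold` or more failures inside a sliding window."""
    flagged = set()
    recent_failures = {}  # user -> recent failure timestamps
    for e in events:
        if not e["success"]:
            times = recent_failures.setdefault(e["user"], [])
            times.append(e["time"])
            # Keep only failures inside the window ending at this event.
            recent_failures[e["user"]] = [
                t for t in times if e["time"] - t <= window
            ]
            if len(recent_failures[e["user"]]) >= threshold:
                flagged.add(e["user"])
    return flagged

if __name__ == "__main__":
    events = synthetic_login_events()
    # The injected attacker should be flagged; if it is not, the control's
    # window or threshold needs tuning -- that is the vulnerability test.
    print("Flagged accounts:", detect_bruteforce(events))
```

If the detection rule misses the injected attack, the test has surfaced a weakness in the control, which is the value Benedict attributes to synthetic data; his caveat is that attackers can use the same generative tooling to produce equally realistic malicious content.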

Benedict said that this training will be crucial as escalating offensive AI measures outpace the human ability to respond to these attacks.

“If you’re reliant on humans in some way, then AI gives an attacker the ability to scale up attacks — very precise, targeted attacks — in a way which can overwhelm secure human constraint controls if you’re relying on analysts and individuals to detect and prevent them,” Benedict said. “I think AI is creating new ways of attacking human controls, not system-based controls.”

Benedict said that when AI is applied to supplement human intelligence, professionals can lean too heavily on the technology to catch mistakes and, as a result, develop a tunnel vision that hinders their effectiveness.

“We can get so wedded to specific metrics and objectives that we lose sight of the big risk that we may be exposed to,” Benedict said. “When you walk the floor and you see your team, and they’re worrying about patching and vulnerabilities, and they’ve got metrics they’re not quite targeting, that says to me that they’re so operationally consumed that they are likely to be blindsided to creative threats.”  

Benedict said a balance is required between human and machine, one that hones analytical prowess while maintaining ethical boundaries.

“My guidance is that you want to be really clear what those ethical nightmare scenarios are and do everything you can to hardwire them into all of your decision-making posturing,” Benedict said.
