Georgetown University’s Newspaper of Record since 1920

The Hoya


KPMG Employees Discuss the Role of Equity and Accessibility in Developing AI Technologies and Services in School of Continuing Studies Seminar

The Georgetown University School of Continuing Studies (SCS) hosted Lisa Mathews, Zach Yarmolovich and Peter Piper of KPMG, a multinational accounting and advisory services firm, in a Nov. 9 seminar titled, “Can AI Technologies Be Inclusive and Accessible?”

The event was the second seminar in the SCS’s year-long Responsible AI Intersectoral Series, which focuses on the inevitable rise of AI and the need to prioritize ethically conscious technology in the workplace.

In their talk, Mathews, Yarmolovich and Piper discussed the implementation of AI across various sectors, drawing on their experience at KPMG, one of the Big Four accounting firms. Shadi Abouzeid, assistant professor of the practice in the SCS, moderated the discussion.

Mathews, a former insider threat program senior official at KPMG, is currently the head of Government Contract and Special Compliance Programs, Ethics & Compliance at KPMG and a professor in the Technology Management department in the SCS. Mathews discussed the ethical and regulatory considerations that must be taken when implementing AI-based technologies. 

“We know AI is here. You know it’s real. It’s real for business, it’s real for the public,” Mathews said at the event. “It has the potential to be the most disruptive technology since the Internet. It can transform enterprises, industries and our economy in ways that are very difficult to predict.”

“There are laws explicitly for job seekers that are dealing with this issue. So when people are looking to work somewhere, companies are now required in Maryland, Illinois and in New York City to inform whether AI is being used to conduct the screening,” Mathews added.

Mathews said the recently signed Executive Order 14110 on Safe, Secure, and Trustworthy Development and Use of Artificial Intelligence would allow for safer development of AI through additional testing.

“There has to be a society-wide effort that includes the government, the private sector, academia, civil society — we have to make sure that this is safe,” Mathews said. “Test your stuff, make sure that it’s been ethically developed. Make sure that’s going to be compliant with any applicable federal laws like equal employment.”

Georgetown University | At the second seminar in the Responsible AI series hosted by the SCS, professionals from KPMG explored the potential for AI models to address biases in areas like employment and criminal justice through increased trustworthiness, education, and inclusivity.

Abouzeid said he agrees that ethical AI has the potential to support not only an organization’s mission but also its employees.

“Like any new technology AI has its good, bad, and ugly parts,” Abouzeid wrote to The Hoya. “AI has the potential to enable confidence and writing abilities for dyslexic individuals, focus for ADHD individuals and others.” 

Yarmolovich, a former security architect, engineer and business information security officer at KPMG, said there is a need for AI models that address disparities affecting vulnerable populations who may not have been considered in the design process.

“You’re building a query and structuring this knowledge repository to respond to prompting and queuing for someone else to then execute their task on,” Yarmolovich said at the event. “Is it going to actually allow them to reach that full potential? Did you take into account someone else’s perceptions?” 

Yarmolovich said new AI applications, such as using AI for decision-making in the criminal justice system, can make biased AI models especially harmful. 

“If you’re making decisions about criminal justice outcomes, parole systems, at the end of the day, they need to be continuously evaluated and then assessed against a whole number of other things outside of just the measures of performance that you’re throwing in at the initial point,” Yarmolovich said.

Piper, a security architect and engineering director at KPMG, said looking inward when detecting biases is important in increasing trust in AI.

“I have a bias. Everybody has a bias,” Piper said at the event. “It comes with our environment. But we need to think about how we get to a place where we could achieve responsibility and trustworthiness in AI.”

Piper said the best way to improve trustworthiness in AI is by expanding technology education.

“How do you get past your bias? How do you help other people get past the bias? For me, education’s paramount, and that’s, I guess, hopefully, why we’re here,” Piper said.
