A team of three Georgetown University graduate students won an international computational linguistics competition by developing the leading system for discourse relation classification, the task of identifying and categorizing the relationships between segments of text, the Georgetown linguistics department announced Sept. 22.
The competition, the DISRPT Shared Task, requires teams to create models that detect implicit connections between statements, such as recognizing “A causes B” or “B concedes a counterpoint to A.” The Georgetown team — which included Zhuoxuan Nymphea Ju (GRD ’28), Jingni Wu (GRD ’25) and Abhishek Purushothama (GRD ’29) — ranked first among five competitors in the relation classification category, which focuses on the semantic or rhetorical relations between text segments.

Ju, who was the primary author of the study, said he did not expect to win the challenge.
“I was honestly quite surprised to win,” Ju wrote to The Hoya. “Even though this isn’t exactly a conference paper, it’s my first academic project where I was the main contributor.”
Ju added that the process of building the group’s model and submitting its findings took unexpected turns, which pushed him to explore different paths.
“The whole process was also full of surprises,” Ju wrote. “At first, we planned to use an encoder-based model. Since we thought we had enough time to explore, we asked one teammate to try a decoder-based model as well, and it performed well even in the first trial. That completely changed our direction.”
Purushothama said the team started small and gradually built up a larger system.
“We took an incremental engineering approach, which was guided by an experimental process,” Purushothama wrote to The Hoya. “We prototyped and built out a pipeline for one language, and expanded it to a few of the languages.”
Purushothama added that each group member brought unique skills that allowed the project to succeed, citing Amir Zeldes, a computational linguistics professor who mentored the team, as a key influence.
“The strengths of our different team members helped make this happen,” Purushothama wrote. “Prof. Zeldes’s expertise helped us make the most of the data. Zhuoxuan crafted and optimized the model training, with Jingni testing and selecting the features we wanted to add to the system. I myself am more of an engineer, and was able to help out with (tangling and) untangling the data and system.”
Zeldes said he is glad to have helped the students and proud they were able to collaborate effectively.
“Mentoring the team has meant a lot to me, and it’s especially satisfying to see them win when I know how hard they worked,” Zeldes wrote to The Hoya. “Making a shared task submission means getting to know the data and task involved, engineering a system that is robust enough to be run by someone else (the shared task reproducer team), running experiments and writing an academic paper about it.”
Zeldes added that he worked with the team from the beginning and it was fulfilling to see them grow throughout the process.
“As a mentor, my role starts with recruiting students, explaining the task and previous approaches to solving it, and directing and coordinating the team’s activities to make sure everything runs smoothly,” Zeldes wrote. “Then once the work starts to get more technical, we look at experimental results together and I can make suggestions on how to improve different aspects based on what the system is doing better/worse at the moment.”
Purushothama said the project helped him better understand the role collaborative submissions can play in research throughout his career.
“My intention in joining the project was to work more goal-directed and straightforward building a system in contrast to other research endeavors, and especially to collaborate with the excellent folks on the team,” Purushothama wrote. “I have a newfound appreciation for shared task submissions, and the role they can play in research programs moving forward.”
Ju said the team’s future goals include focusing on expanding their model and making it more accessible to others.
“We’re now working on making the model more accessible so that anyone can easily try it out,” Ju wrote. “We hope that students and researchers interested in discourse relations will give it a try and find it useful.”