Keegan Hines, a former Georgetown adjunct professor and the current vice president of machine learning at Arthur AI, discussed the rapid rise of generative artificial intelligence (AI) programs and Georgetown's potential to adapt to software like ChatGPT.
The Master of Science in Data Science and Analytics program in the Graduate School of Arts & Sciences hosted the talk on March 17. The discussion centered on the rapid development of generative AI over the past six months.
Hines said generative AI has the capacity to radically change people’s daily lives, including how students are taught and how entertainment is consumed.
“I definitely think we’re going to see a lot of personal tutoring technologies coming up for both little kids and college students,” Hines said at the event. “I have a feeling that in the next year, someone will try to make an entirely AI-generated TV show. It’s not that hard to imagine an AI-generated script, animation and voice actors.
“Imagine what Netflix becomes. Netflix is no longer ‘recommend Keegan the best content’; Netflix is now ‘create something from scratch which is the perfect show Keegan’s ever wanted to see,’” Hines added.
Hines then discussed algorithms that generate text. He said the principal goal of these algorithms is to create deep learning systems that can understand complex patterns over longer time scales.
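The generation loop behind these systems can be illustrated with a toy sketch: an autoregressive model repeatedly predicts the next token from the tokens so far. Below, a trivial bigram count table (an assumption for illustration, not how ChatGPT works) stands in for the deep network; real systems model patterns across far longer contexts.

```python
# Toy illustration of autoregressive text generation: predict each
# next token from the previous one, then append and repeat.
from collections import Counter, defaultdict

corpus = "the cat sat on the mat the cat ran".split()

# Count which word follows which (bigram statistics).
following = defaultdict(Counter)
for prev, nxt in zip(corpus, corpus[1:]):
    following[prev][nxt] += 1

def next_token(prev):
    """Most likely next token after `prev` under the bigram counts."""
    return following[prev].most_common(1)[0][0]

# Generate greedily from a seed token.
tokens = ["the"]
for _ in range(4):
    tokens.append(next_token(tokens[-1]))
print(" ".join(tokens))  # prints "the cat sat on the"
```

A deep learning model replaces the count table with a neural network conditioned on the entire preceding sequence, which is what lets it capture the "complex patterns over longer time scales" Hines described.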
Hines said one challenge AI faces is that it can provide users with incorrect information.
“These models say things and sometimes they’re just flatly wrong,” Hines said. “Google got really panned when they made a product announcement about Bard and then people pointed out Bard had made a mistake.”
Bard, Google’s AI chatbot, incorrectly answered a question about the James Webb Space Telescope in a video from the program’s launch Feb. 6, raising concerns about Google’s rushed rollout of Bard and the possibility for generative AIs to spread misinformation.
Hines said AI also carries the potential for bias and toxicity, as seen when Microsoft's ChatGPT-powered Bing search engine manufactured a conspiracy theory linking Tom Hanks to the Watergate scandal.
“There’s been a lot of research in AI alignment,” Hines said. “How do we make these systems communicate the values we have?”
Teaching and learning at all levels of education will need to adapt to changes in technology, according to Hines.
“One example is a high school history teacher who told students to have ChatGPT write a paper and then correct it themselves,” Hines said. “I think this is just the next iteration of open book, internet, ChatGPT. How do you get creative testing someone’s critical thinking on the material?”
Hines said OpenAI, the company behind ChatGPT, found that larger, more complex language models were more accurate than smaller ones, achieving lower test loss — a measure of the errors a model makes on data it did not see during training.
“A small model has a high test loss whereas a really big model has a much more impressive test loss,” Hines said. “The big model also requires less data to reach an equivalent amount of test loss.”
OpenAI's hypothesis was that the secret to unlocking rapid advancement in artificial intelligence lay in creating the largest model possible, according to Hines.
“There didn’t seem to be an end to this trend,” Hines said. “Their big hypothesis was, ‘let’s just go crazy and train the biggest model we can think of and keep going.’ Their big bet paid off and these strange, emergent, semi-intelligent behaviors are happening along the way.”
Hines said he is optimistic about the field's future, and he predicted AI will be able to produce even more complex results, such as creating a TV show. "It was really only about ten years ago that deep learning was proven to be viable," Hines said. "If we're going to avoid the dystopian path and go down the optimistic path, generative AI will be an assistant. It will get you 80% of the way and you do the next 20%."