Georgetown University’s Newspaper of Record since 1920

The Hoya


C-3PO or ‘The Terminator’? — A Word on AI


If you were to ask a dozen people what they think of when they hear the phrase “artificial intelligence,” or AI, you would likely receive a dozen different responses. While the optimists in the group might reference friendly robots like C-3PO as the culmination of AI, the pessimists will likely point to “The Terminator” as the reason we should proceed down the AI rabbit hole with caution. When it comes to AI, most people have only a vague understanding of what the term actually means and how close we may, or may not, be to seeing some of these sci-fi characters in real life.

Broadly, AI is described as the development of computer systems capable of performing tasks that typically require human intelligence. Such tasks include language translation, visual perception and speech recognition. However, truly understanding AI requires several more nuanced classifications.

Today, AI advancements are more properly referred to as narrow, or weak, AI developments. Narrow AI means that a computer system can execute one task extremely well, if not perfectly. That is, software can beat the 18-time world champion Go player or successfully tag your friends in the most recent batch of Facebook photos you uploaded, but not both. General, or strong, AI, on the other hand, represents the ability of software to “learn” how to outperform humans in virtually all tasks simultaneously.

While general AI has not yet been realized, narrow AI has become fairly integrated into our everyday lives. Whether it be facial recognition software or advancements to Google’s search algorithm, AI is helping people to perform tasks more quickly and with greater accuracy than ever before.

What is it going to take to leap from narrow to general AI?

The path to general AI largely hinges on the capability of computer software to learn as humans do. AI first appeared in the 1950s, when software could execute a single narrow task to perfection. This was achieved by hardcoding the software to account for a myriad of possibilities. In essence, a programmer gives a machine a specific set of instructions to accomplish a particular task.
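To make that concrete, here is a minimal, hypothetical sketch, written in Python, of the hardcoded approach: every input the program can handle has to be spelled out by the programmer in advance, and anything unanticipated simply fails.

# A hardcoded, rule-based program for one narrow task: replying to greetings.
# Nothing here is learned from data; every case is written out by hand.
def respond(message: str) -> str:
    rules = {
        "hello": "Hello! How can I help you?",
        "goodbye": "Goodbye!",
        "thanks": "You're welcome.",
    }
    # Inputs the programmer did not anticipate are simply not understood.
    return rules.get(message.lower().strip(), "Sorry, I don't understand.")

print(respond("Hello"))   # Hello! How can I help you?
print(respond("Hola"))    # Sorry, I don't understand.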

In the 1980s, AI developed further, and “machine learning” emerged. Machine learning refers to algorithms that allow machines to synthesize large amounts of data, learn from them and conduct their own decision-making processes. Machine learning marks the departure from hardcoded software toward adaptive algorithms, bringing AI a palpable step closer to human learning and intelligence.
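As an illustration, the toy Python sketch below, a simple perceptron of the kind studied in that era, shows the difference in spirit: rather than being handed rules, the program adjusts its own parameters until they fit example data. Real systems are, of course, far more elaborate.

# Training data: inputs paired with the desired output (the logical OR function).
examples = [((0, 0), 0), ((0, 1), 1), ((1, 0), 1), ((1, 1), 1)]

w1, w2, bias = 0.0, 0.0, 0.0      # parameters the machine will learn
learning_rate = 0.1

for _ in range(20):               # repeatedly learn from the examples
    for (x1, x2), target in examples:
        prediction = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
        error = target - prediction
        # Nudge each parameter toward the correct answer.
        w1 += learning_rate * error * x1
        w2 += learning_rate * error * x2
        bias += learning_rate * error

# The learned parameters now reproduce the behavior without hardcoded rules.
for (x1, x2), target in examples:
    output = 1 if (w1 * x1 + w2 * x2 + bias) > 0 else 0
    print((x1, x2), "->", output)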

Another major step toward general AI came around 2010 with the advent of “deep learning.” Deep learning is a branch of machine learning that allows algorithms to model higher levels of abstraction. In other words, deep learning lets machines be presented with enormous data sets and learn to ignore all but the most relevant information. These advancements in deep learning have brought about the driverless car and the ability of machines to diagnose tumors more accurately than even the best-trained radiologists.
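A rough sketch of what that looks like in code, assuming the widely used PyTorch library is available, stacks small layers of simple units so the network can form its own intermediate representations of the data:

# A tiny two-layer neural network, the basic building block of deep learning.
import torch
from torch import nn

# Raw inputs and targets (the XOR function, which a single layer cannot learn).
inputs = torch.tensor([[0., 0.], [0., 1.], [1., 0.], [1., 1.]])
targets = torch.tensor([[0.], [1.], [1.], [0.]])

# The hidden layer learns intermediate features; the output layer combines them.
model = nn.Sequential(nn.Linear(2, 16), nn.ReLU(), nn.Linear(16, 1))
optimizer = torch.optim.SGD(model.parameters(), lr=0.5)
loss_fn = nn.MSELoss()

for _ in range(2000):             # adjust the weights to fit the data
    optimizer.zero_grad()
    loss = loss_fn(model(inputs), targets)
    loss.backward()
    optimizer.step()

print(model(inputs).detach().round())  # approximately [[0.], [1.], [1.], [0.]]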

While machine and deep learning have spurred many developments in the sophistication of AI, general intelligence is still a fairly distant dream for AI researchers.

What is most challenging about attaining general AI, though, is the “common sense” problem.

Although deep learning has allowed machines to “learn” what information is important to accomplish a particular task, AI has not developed to the point of being able to model “predictive learning.” Predictive learning is the core feature of human learning in which past experience informs future conclusions. Encoding predictive learning, more colloquially referred to as common sense, into an algorithm presents the greatest barrier to attaining general AI.

This “common sense” problem is what currently prevents researchers from creating machines that mimic, let alone perfect, human intelligence. It would appear that, as long as predictive learning cannot be algorithmically modeled, we will continue to see machines that surpass their human counterparts in one particular area or task, but not all areas and tasks simultaneously.

Ultimately, it would appear that the old adage is as true for people as it is for AI: You really cannot teach common sense.

Bianca DiSanto is a senior in the McDonough School of Business. Think Tech appears every Friday.

 
