Definition

deep learning agent

Contributor(s): Matthew Haughn

A deep learning agent is any autonomous or semi-autonomous AI-driven system that uses deep learning to perform and improve at its tasks. Systems (agents) that use deep learning include chatbots, self-driving cars, expert systems, facial recognition programs and robots.

Deep learning uses a system of layers in which input is processed and the processed output is passed on as input to the next layer, functioning much like neurons in the human brain. Through this chain of input and output processing, deep learning agents build increasingly abstract representations of their data. Functionally, deep learning systems can be broken down into two major categories: retrieval-based and generative models.
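As a rough illustration of this layered flow, here is a minimal Python sketch using NumPy. The layer sizes, random weights and ReLU activation are illustrative placeholders, not a trained network.

import numpy as np

# A toy forward pass: each layer transforms its input and hands the result
# to the next layer, loosely analogous to neurons passing signals forward.
# The weights are random placeholders, not a trained model.
rng = np.random.default_rng(0)

def layer(x, in_dim, out_dim):
    # One dense layer: a weighted sum of the inputs followed by a ReLU nonlinearity.
    weights = rng.standard_normal((in_dim, out_dim))
    return np.maximum(0, x @ weights)

x = rng.standard_normal(4)      # raw input features
hidden = layer(x, 4, 8)         # the first layer's output becomes...
output = layer(hidden, 8, 2)    # ...the next layer's input
print(output)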

In retrieval-based models, functionality relies on a previously collected database of responses; the AI software matches questions to answers. This is the simpler type of model and generates no new responses. Retrieval-based models work well within narrowly defined roles. In chatbot AI, they don't make grammatical errors unless those errors are already in their database. However, this type of model cannot handle new questions and can respond inconsistently when the same question is asked in a semantically different way.
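A minimal Python sketch of the retrieval idea follows; the tiny question-and-answer "database" and the word-overlap matching rule are illustrative assumptions, not how any particular product works.

# A hand-made set of question-answer pairs stands in for the previously
# collected database of responses a real system would have.
responses = {
    "what are your hours": "We are open 9 a.m. to 5 p.m., Monday through Friday.",
    "where are you located": "Our office is at 123 Example Street.",
}

def retrieve(query):
    # Return the stored answer whose stored question shares the most words
    # with the query; nothing new is ever generated.
    query_words = set(query.lower().split())
    best_match = max(responses, key=lambda q: len(query_words & set(q.split())))
    return responses[best_match]

print(retrieve("what are your hours today"))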

Generative models are more complex and don't rely on a previously collected database of answers; they respond to queries with newly generated code or phrases. These models can simulate conversation with humans on much broader topics and deal with new situations far more capably than retrieval-based systems, but they may make grammatical errors and can also be taught poor responses.
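The following toy sketch only hints at the generative idea: a reply is composed word by word from a transition table instead of being retrieved whole. A real generative chatbot would use a trained deep neural network rather than this hand-made table, which is purely an assumption for illustration.

import random

# A hand-made next-word table stands in for what a real generative model
# would learn from training data.
random.seed(1)
next_word = {
    "<start>": ["we", "our"],
    "we": ["are"],
    "are": ["open", "closed"],
    "open": ["today", "<end>"],
    "closed": ["today"],
    "our": ["hours", "office"],
    "hours": ["vary", "<end>"],
    "office": ["is"],
    "is": ["open"],
    "today": ["<end>"],
    "vary": ["<end>"],
}

# Compose a reply word by word until the table signals the end.
word, reply = "<start>", []
while word != "<end>":
    word = random.choice(next_word.get(word, ["<end>"]))
    if word != "<end>":
        reply.append(word)
print(" ".join(reply))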

Because one goal of AI developers is to create an artificial system that can fool someone into thinking it is human, the more convincing, personality-consistent generative models have received more attention. These models learn by performing their tasks rather than through a more heuristic approach of matching questions to responses (as retrieval-based models do), which means they can more easily be misled. Such was the case with Microsoft's Tay: the chatbot turned racist and genocidal after interactions with the public on Twitter taught it these traits, and Microsoft subsequently took the bot down to make alterations.

This was last updated in December 2016
