Definition

recurrent neural networks

Contributor(s): Matthew Haughn

Recurrent neural networks (RNNs) are a type of advanced neural network whose connections form directed cycles, giving the network an internal memory (a hidden state) that it carries from one step of a sequence to the next. Unlike earlier feedforward networks, which are limited to fixed-size input and output vectors, an RNN applies the same step at every position in a sequence and can therefore handle inputs and outputs of arbitrary length. RNNs are used in deep learning and in the development of models that simulate the activity of neurons in the human brain.
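As a rough illustration of that idea (a minimal sketch, not code from any particular library), a single recurrent step can be written in Python with NumPy. The weight names (W_xh, W_hh, b_h) and the dimensions are illustrative assumptions; the point is that the same step function is reused at every position, so the sequence can be any length.

```python
import numpy as np

# Illustrative sizes (assumptions, not from the article): 4-dimensional inputs, 8-dimensional hidden state.
input_size, hidden_size = 4, 8
rng = np.random.default_rng(0)

# Weights connecting the current input and the previous hidden state to the new hidden state.
W_xh = rng.normal(scale=0.1, size=(hidden_size, input_size))
W_hh = rng.normal(scale=0.1, size=(hidden_size, hidden_size))
b_h = np.zeros(hidden_size)

def rnn_step(x_t, h_prev):
    """One recurrent step: the new hidden state depends on the current input and the previous state."""
    return np.tanh(W_xh @ x_t + W_hh @ h_prev + b_h)

# Because the same step is applied repeatedly, the input sequence can have any length.
sequence = [rng.normal(size=input_size) for _ in range(5)]
h = np.zeros(hidden_size)
for x_t in sequence:
    h = rnn_step(x_t, h)

print(h.shape)  # (8,) -- a fixed-size summary of a variable-length sequence
```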

Used primarily where data arrives as a sequence, RNNs can be applied to tasks such as image recognition, natural language processing, machine translation and even models that create and edit text one character at a time. In some text-driven applications, an RNN can produce writing that could be mistaken for the work of a human author.
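A rough sketch of how such a character-at-a-time model produces text is shown below. Here the next_char_probs function is a hypothetical stand-in that returns uniform probabilities; in a real character-level RNN, those probabilities would come from the network's hidden state after reading the text so far.

```python
import numpy as np

# Character-at-a-time generation loop (a sketch under stated assumptions, not a trained model).
vocab = np.array(list("abcdefghijklmnopqrstuvwxyz "))
rng = np.random.default_rng(0)

def next_char_probs(history):
    """Hypothetical stand-in for a trained RNN: probability of each character given the text so far."""
    return np.full(len(vocab), 1.0 / len(vocab))

def generate(seed, length=20):
    text = seed
    for _ in range(length):
        probs = next_char_probs(text)
        text += rng.choice(vocab, p=probs)  # sample one character at a time
    return text

print(generate("to be or not to be "))
```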

Writing by RNNs is a form of computational creativity. This simulation of human creativity is made possible by the network's grasp of grammar and semantics learned from its training set; an RNN trained on the works of Shakespeare, for example, will produce text in a similar style. RNNs date back to the 1980s: in 1982, John Hopfield introduced the Hopfield network, an early recurrent network in which all connections are symmetric.

This was last updated in April 2018
