Recurrent neural networks (RNNs) are a type of advanced neural network that uses directed cycles to maintain an internal memory, allowing them to process sequences of data step by step. RNNs build on earlier types of neural networks that were limited by fixed-size input and output vectors, since the same network can handle sequences of any length. RNNs are used in deep learning and in the development of models that simulate the activity of neurons in the human brain.
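As a minimal sketch of the idea, the code below implements a single recurrent cell in plain NumPy. The dimensions and weight values are illustrative assumptions, not from any particular model: the point is that the hidden state `h` feeds back into itself (the directed cycle), so one set of weights can process sequences of any length.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative sizes, chosen arbitrarily for the sketch
input_size, hidden_size = 4, 8

# Parameters of a single recurrent cell
Wxh = rng.normal(0, 0.1, (hidden_size, input_size))   # input -> hidden
Whh = rng.normal(0, 0.1, (hidden_size, hidden_size))  # hidden -> hidden: the directed cycle
bh = np.zeros(hidden_size)

def rnn_forward(inputs):
    """Run the cell over a sequence of any length, reusing the same weights."""
    h = np.zeros(hidden_size)  # hidden state acts as the network's memory
    for x in inputs:
        h = np.tanh(Wxh @ x + Whh @ h + bh)
    return h

# Sequences of different lengths go through the same fixed-size network
short_seq = rng.normal(size=(3, input_size))
long_seq = rng.normal(size=(10, input_size))
print(rnn_forward(short_seq).shape, rnn_forward(long_seq).shape)  # (8,) (8,)
```

Note that the output hidden state has the same shape regardless of sequence length, which is what frees RNNs from the fixed-size input constraint of feedforward networks.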
Used primarily in scientific and statistical analysis, RNNs can be applied to tasks such as image recognition, natural language processing, machine translation and even models that create and edit text one character at a time. In some text-driven applications, an RNN can produce written work that might be mistaken for the creation of a human author.
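Character-at-a-time generation works by feeding each character the network produces back in as the next input. The sketch below shows that loop with a toy vocabulary and untrained random weights (both assumptions for illustration); a real model would first learn the weights from a text corpus, so here the output is gibberish, but the autoregressive mechanism is the same.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy character vocabulary: an assumption for illustration only
vocab = list("abcdefgh ")
V, H = len(vocab), 16

# Untrained random weights; a real model learns these from text
Wxh = rng.normal(0, 0.1, (H, V))   # one-hot character -> hidden
Whh = rng.normal(0, 0.1, (H, H))   # hidden -> hidden (memory of prior characters)
Why = rng.normal(0, 0.1, (V, H))   # hidden -> scores over next character

def sample_text(seed_char, length):
    """Generate text one character at a time, feeding each output back in."""
    h = np.zeros(H)
    x = np.eye(V)[vocab.index(seed_char)]  # one-hot encoding of the seed
    out = [seed_char]
    for _ in range(length):
        h = np.tanh(Wxh @ x + Whh @ h)
        logits = Why @ h
        p = np.exp(logits) / np.exp(logits).sum()  # softmax over characters
        idx = rng.choice(V, p=p)
        out.append(vocab[idx])
        x = np.eye(V)[idx]  # the character just produced becomes the next input
    return "".join(out)

print(sample_text("a", 20))
```

With trained weights, the same loop is what lets an RNN imitate the grammar and style of its training text.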
Writing by RNNs is a form of computational creativity. This simulation of human creativity is made possible by the network's grasp of grammar and semantics learned from its training set. For example, an RNN trained on Shakespeare's works will produce writing in a similar style. RNNs date back to the 1980s: in 1982, John Hopfield invented the Hopfield network, an RNN in which all connections are symmetric.