A recursive neural network is a kind of deep neural network created by applying the same set of weights recursively over a structured input: the network traverses the given structure in topological order to produce a structured prediction over variable-size inputs, or a scalar prediction on it. These networks were first introduced to learn distributed representations of structure (such as logical terms),[1] but have been successful in multiple applications, for instance in learning sequence and tree structures in natural language processing (mainly continuous representations of phrases and sentences based on word embeddings).
In the simplest architecture, nodes are combined into parents using a weight matrix (which is shared across the whole network) and a non-linearity such as the hyperbolic tangent. If $c_1$ and $c_2$ are $n$-dimensional vector representations of nodes, their parent will also be an $n$-dimensional vector, defined as

$$p_{1,2} = \tanh\left(W \begin{bmatrix} c_1 \\ c_2 \end{bmatrix}\right)$$

where $W$ is a learned $n \times 2n$ weight matrix.
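As a minimal sketch of this composition (illustrative only: the dimensionality, the weight initialization, and the omission of a bias term are assumptions, and published models typically include a bias):

```python
import numpy as np

n = 4                                       # dimensionality of node representations
rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((n, 2 * n))   # shared n x 2n weight matrix

def compose(c1, c2):
    """Combine two n-dimensional children into an n-dimensional parent."""
    return np.tanh(W @ np.concatenate([c1, c2]))

# Leaf representations, e.g. word embeddings; the parent has the same size,
# so the same composition can be applied recursively up the tree.
c1, c2 = rng.standard_normal(n), rng.standard_normal(n)
parent = compose(c1, c2)
root = compose(parent, rng.standard_normal(n))
```

Because the parent has the same dimensionality as its children, a single shared weight matrix suffices for trees of any size and depth.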
This architecture, with a few improvements, has been used successfully for parsing natural scenes, for syntactic parsing of natural language sentences,[2] and for recursive autoencoding and generative modeling of 3D shape structures in the form of cuboid abstractions.[3]
Recursive cascade correlation (RecCC) is a constructive neural network approach for tree domains,[4] with pioneering applications to chemistry[5] and an extension to directed acyclic graphs.[6]
A framework for unsupervised recursive networks was introduced in 2004.[7][8]
Recursive neural tensor networks use a single tensor-based composition function for all nodes in the tree.[9]
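In this scheme the composition augments the affine term above with a bilinear tensor term, so that each output coordinate $k$ is roughly $p_k = \tanh\left([c_1;c_2]^\top V^{[k]} [c_1;c_2] + (W[c_1;c_2])_k\right)$. A rough sketch of that computation (the tensor shape and scaling below are assumptions, not the exact published configuration):

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
V = 0.01 * rng.standard_normal((n, 2 * n, 2 * n))  # one tensor slice per output unit
W = 0.1 * rng.standard_normal((n, 2 * n))          # standard affine part

def compose_tensor(c1, c2):
    """Tensor composition: output k = tanh(c^T V[k] c + (W c)_k), c = [c1; c2]."""
    c = np.concatenate([c1, c2])
    bilinear = np.einsum('i,kij,j->k', c, V, c)
    return np.tanh(bilinear + W @ c)

p = compose_tensor(rng.standard_normal(n), rng.standard_normal(n))
```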
Typically, stochastic gradient descent (SGD) is used to train the network. The gradient is computed using backpropagation through structure (BPTS), a variant of backpropagation through time used for recurrent neural networks.
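In practice, BPTS amounts to recording the recursive forward computation over the tree and differentiating through it, which automatic-differentiation frameworks do implicitly. A hedged sketch using PyTorch (the tree encoding, the readout U, and the loss are illustrative assumptions, not a prescribed setup):

```python
import torch

n = 4
torch.manual_seed(0)
W = torch.nn.Parameter(0.1 * torch.randn(n, 2 * n))  # shared composition weights
U = torch.nn.Parameter(0.1 * torch.randn(1, n))      # scalar readout on the root (assumed)

def encode(tree):
    """Recursively encode a binary tree: leaves are tensors, internal nodes are pairs."""
    if isinstance(tree, torch.Tensor):
        return tree
    left, right = encode(tree[0]), encode(tree[1])
    return torch.tanh(W @ torch.cat([left, right]))

leaves = [torch.randn(n) for _ in range(3)]
tree = ((leaves[0], leaves[1]), leaves[2])
target = torch.tensor([1.0])

opt = torch.optim.SGD([W, U], lr=0.01)
loss = ((U @ encode(tree)) - target).pow(2).sum()    # squared error at the root
loss.backward()                                      # gradients flow back through the structure
opt.step()
opt.zero_grad()
```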
The universal approximation capability of recursive networks over trees has been proved in the literature.[10][11]
Recurrent neural networks are recursive artificial neural networks with a certain structure: that of a linear chain. Whereas recursive neural networks operate on any hierarchical structure, combining child representations into parent representations, recurrent neural networks operate on the linear progression of time, combining the hidden representation of the previous time step with the current input into the representation for the current time step.
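Concretely, a simple recurrent network can be written with the same kind of shared composition as above, applied along a linear chain in which one child is the previous hidden state and the other is the current input. A small illustrative sketch (names and sizes assumed):

```python
import numpy as np

n = 4
rng = np.random.default_rng(0)
W = 0.1 * rng.standard_normal((n, 2 * n))   # the same kind of shared matrix as above

def compose(h_prev, x_t):
    return np.tanh(W @ np.concatenate([h_prev, x_t]))

# Unrolling over a sequence is just recursion over a linear chain:
# h_t = tanh(W [h_{t-1}; x_t])
h = np.zeros(n)
for x_t in rng.standard_normal((5, n)):     # five n-dimensional inputs
    h = compose(h, x_t)
```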
An efficient approach to implement recursive neural networks is given by the Tree Echo State Network[12] within the reservoir computing paradigm.
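A rough sketch of the idea behind a Tree Echo State Network: node states are computed by a fixed, untrained reservoir applied recursively over the tree, and only a linear readout is trained. All weights, scalings, and the node encoding below are illustrative assumptions:

```python
import numpy as np

n_in, n_res = 4, 50
rng = np.random.default_rng(0)
W_in = rng.uniform(-0.1, 0.1, (n_res, n_in))             # fixed, untrained input weights
W_hat = rng.uniform(-1.0, 1.0, (n_res, n_res))           # fixed, untrained recurrent weights
W_hat *= 0.9 / np.max(np.abs(np.linalg.eigvals(W_hat)))  # scale for stability (assumed)

def state(node):
    """Reservoir state of a node given as (label, children); no training involved."""
    label, children = node
    s = sum((state(c) for c in children), np.zeros(n_res))
    return np.tanh(W_in @ label + W_hat @ s)

# A tiny tree: two leaves under a root. Only a linear readout on the
# resulting states (e.g. fitted by ridge regression) would be trained.
leaf = lambda: (rng.standard_normal(n_in), [])
root_state = state((rng.standard_normal(n_in), [leaf(), leaf()]))
```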
Extensions to graphs include the graph neural network (GNN),[13] Neural Network for Graphs (NN4G),[14] and, more recently, convolutional neural networks for graphs.