Long short-term memory

The Long Short-Term Memory (LSTM) cell can process data sequentially and keep its hidden state through time.

Long short-term memory (LSTM)[1] is a type of recurrent neural network (RNN) aimed at dealing with the vanishing gradient problem[2] present in traditional RNNs. Its relative insensitivity to gap length is its advantage over other RNNs, hidden Markov models, and other sequence learning methods. It aims to provide a short-term memory for RNNs that can last thousands of timesteps, hence "long short-term memory".[1] The name is an analogy to long-term memory and short-term memory and their relationship, studied by cognitive psychologists since the early 20th century.

It is applicable to classifying, processing, and making predictions based on time series data, with applications in handwriting recognition,[3] speech recognition,[4][5] machine translation,[6][7] speech activity detection,[8] robot control,[9][10] video games,[11][12] and healthcare.[13]

A common LSTM unit is composed of a cell, an input gate, an output gate[14] and a forget gate.[15] The cell remembers values over arbitrary time intervals and the three gates regulate the flow of information into and out of the cell. Forget gates decide what information to discard from the previous state by mapping the previous state and the current input to a value between 0 and 1. A (rounded) value of 1 means to keep the information, and a value of 0 means to discard it. Input gates decide which pieces of new information to store in the current cell state, using the same system as forget gates. Output gates control which pieces of information in the current cell state to output by assigning a value from 0 to 1 to the information, considering the previous and current states. Selectively outputting relevant information from the current state allows the LSTM network to maintain useful, long-term dependencies to make predictions, both in current and future time-steps.

Motivation

In theory, classic RNNs can keep track of arbitrary long-term dependencies in the input sequences. The problem with classic RNNs is computational (or practical) in nature: when training a classic RNN using back-propagation, the long-term gradients which are back-propagated can "vanish", meaning they can tend to zero due to very small numbers creeping into the computations, causing the model to effectively stop learning. RNNs using LSTM units partially solve the vanishing gradient problem, because LSTM units allow gradients to also flow with little to no attenuation. However, LSTM networks can still suffer from the exploding gradient problem.[16]

The intuition behind the LSTM architecture is to create an additional module in a neural network that learns when to remember and when to forget pertinent information.[15] In other words, the network effectively learns which information might be needed later on in a sequence and when that information is no longer needed. For instance, in the context of natural language processing, the network can learn grammatical dependencies.[17] An LSTM might process the sentence "Dave, as a result of his controversial claims, is now a pariah" by remembering the (statistically likely) grammatical gender and number of the subject Dave, noting that this information is pertinent for the pronoun his, and noting that this information is no longer important after the verb is.

Variants

In the equations below, the lowercase variables represent vectors. Matrices $W_q$ and $U_q$ contain, respectively, the weights of the input and recurrent connections, where the subscript $q$ can either be the input gate $i$, output gate $o$, the forget gate $f$ or the memory cell $c$, depending on the activation being calculated. In this section, we are thus using a "vector notation". So, for example, $c_t \in \mathbb{R}^{h}$ is not just one unit of one LSTM cell, but contains $h$ LSTM cell's units.

LSTM with a forget gate

The compact forms of the equations for the forward pass of an LSTM cell with a forget gate are:[1][15]

$$
\begin{aligned}
f_t &= \sigma_g(W_f x_t + U_f h_{t-1} + b_f) \\
i_t &= \sigma_g(W_i x_t + U_i h_{t-1} + b_i) \\
o_t &= \sigma_g(W_o x_t + U_o h_{t-1} + b_o) \\
\tilde{c}_t &= \sigma_c(W_c x_t + U_c h_{t-1} + b_c) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \tilde{c}_t \\
h_t &= o_t \odot \sigma_h(c_t)
\end{aligned}
$$

where the initial values are $c_0 = 0$ and $h_0 = 0$, and the operator $\odot$ denotes the Hadamard product (element-wise product). The subscript $t$ indexes the time step.

Variables

Letting the superscripts $d$ and $h$ refer to the number of input features and number of hidden units, respectively:

  • $x_t \in \mathbb{R}^{d}$: input vector to the LSTM unit
  • $f_t \in (0,1)^{h}$: forget gate's activation vector
  • $i_t \in (0,1)^{h}$: input/update gate's activation vector
  • $o_t \in (0,1)^{h}$: output gate's activation vector
  • $h_t \in (-1,1)^{h}$: hidden state vector, also known as output vector of the LSTM unit
  • $\tilde{c}_t \in (-1,1)^{h}$: cell input activation vector
  • $c_t \in \mathbb{R}^{h}$: cell state vector
  • $W \in \mathbb{R}^{h \times d}$, $U \in \mathbb{R}^{h \times h}$ and $b \in \mathbb{R}^{h}$: weight matrices and bias vector parameters which need to be learned during training
  • $\sigma_g$: sigmoid function.
  • $\sigma_c$: hyperbolic tangent function.
  • $\sigma_h$: hyperbolic tangent function or, as the peephole LSTM paper[18][19] suggests, $\sigma_h(x) = x$.
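The forward pass above can be written out directly. The following is a minimal, illustrative NumPy sketch; the function names, random initialisation and toy dimensions are assumptions made for the example, not part of the original formulation:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def lstm_step(x_t, h_prev, c_prev, W, U, b):
    """One LSTM time step with a forget gate, in the vector notation above.

    W: (h, d) input weight matrices, U: (h, h) recurrent weight matrices,
    b: (h,) bias vectors, one of each per gate 'f', 'i', 'o' and cell 'c'.
    """
    f_t = sigmoid(W['f'] @ x_t + U['f'] @ h_prev + b['f'])    # forget gate
    i_t = sigmoid(W['i'] @ x_t + U['i'] @ h_prev + b['i'])    # input/update gate
    o_t = sigmoid(W['o'] @ x_t + U['o'] @ h_prev + b['o'])    # output gate
    c_hat = np.tanh(W['c'] @ x_t + U['c'] @ h_prev + b['c'])  # cell input activation
    c_t = f_t * c_prev + i_t * c_hat                          # new cell state
    h_t = o_t * np.tanh(c_t)                                  # new hidden state / output
    return h_t, c_t

# Toy usage: d = 3 input features, h = 4 hidden units, 5 time steps.
d, h = 3, 4
rng = np.random.default_rng(0)
W = {k: 0.1 * rng.standard_normal((h, d)) for k in 'fioc'}
U = {k: 0.1 * rng.standard_normal((h, h)) for k in 'fioc'}
b = {k: np.zeros(h) for k in 'fioc'}
h_t, c_t = np.zeros(h), np.zeros(h)   # initial values h_0 = 0 and c_0 = 0
for x_t in rng.standard_normal((5, d)):
    h_t, c_t = lstm_step(x_t, h_t, c_t, W, U, b)
```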

Peephole LSTM

A peephole LSTM unit with input (i.e. $i$), output (i.e. $o$), and forget (i.e. $f$) gates

The figure on the right is a graphical representation of an LSTM unit with peephole connections (i.e. a peephole LSTM).[18][19] Peephole connections allow the gates to access the constant error carousel (CEC), whose activation is the cell state.[18] $h_{t-1}$ is not used; $c_{t-1}$ is used instead in most places:

$$
\begin{aligned}
f_t &= \sigma_g(W_f x_t + U_f c_{t-1} + b_f) \\
i_t &= \sigma_g(W_i x_t + U_i c_{t-1} + b_i) \\
o_t &= \sigma_g(W_o x_t + U_o c_{t-1} + b_o) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \sigma_c(W_c x_t + b_c) \\
h_t &= o_t \odot \sigma_h(c_t)
\end{aligned}
$$

Each of the gates can be thought of as a "standard" neuron in a feed-forward (or multi-layer) neural network: that is, they compute an activation (using an activation function) of a weighted sum. $i_t$, $o_t$ and $f_t$ represent the activations of, respectively, the input, output and forget gates at time step $t$.

The 3 exit arrows from the memory cell $c$ to the 3 gates $i$, $o$ and $f$ represent the peephole connections. These peephole connections actually denote the contributions of the activation of the memory cell $c$ at time step $t-1$, i.e. the contribution of $c_{t-1}$ (and not $c_t$, as the picture may suggest). In other words, the gates $i$, $o$ and $f$ calculate their activations at time step $t$ (i.e., respectively, $i_t$, $o_t$ and $f_t$) also considering the activation of the memory cell at time step $t-1$, i.e. $c_{t-1}$.

The single left-to-right arrow exiting the memory cell is not a peephole connection and denotes $c_t$.

The little circles containing a $\times$ symbol represent an element-wise multiplication between their inputs. The big circles containing an S-like curve represent the application of a differentiable function (like the sigmoid function) to a weighted sum.
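Under the same assumptions as the earlier NumPy sketch, a single peephole LSTM step can be sketched as follows; the key difference is that the gates read the previous cell state $c_{t-1}$ rather than $h_{t-1}$, and here $\sigma_h$ is taken as the identity, as the peephole LSTM papers suggest:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def peephole_lstm_step(x_t, c_prev, W, U, b):
    """One peephole LSTM time step: the gates peek at c_{t-1} instead of h_{t-1}."""
    f_t = sigmoid(W['f'] @ x_t + U['f'] @ c_prev + b['f'])      # forget gate
    i_t = sigmoid(W['i'] @ x_t + U['i'] @ c_prev + b['i'])      # input gate
    o_t = sigmoid(W['o'] @ x_t + U['o'] @ c_prev + b['o'])      # output gate
    c_t = f_t * c_prev + i_t * np.tanh(W['c'] @ x_t + b['c'])   # no recurrent term in the cell input
    h_t = o_t * c_t                                             # sigma_h(x) = x
    return h_t, c_t
```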

Peephole convolutional LSTM

Peephole convolutional LSTM.[20] The $*$ denotes the convolution operator:

$$
\begin{aligned}
f_t &= \sigma_g(W_f * x_t + U_f * h_{t-1} + V_f \odot c_{t-1} + b_f) \\
i_t &= \sigma_g(W_i * x_t + U_i * h_{t-1} + V_i \odot c_{t-1} + b_i) \\
c_t &= f_t \odot c_{t-1} + i_t \odot \sigma_c(W_c * x_t + U_c * h_{t-1} + b_c) \\
o_t &= \sigma_g(W_o * x_t + U_o * h_{t-1} + V_o \odot c_t + b_o) \\
h_t &= o_t \odot \sigma_h(c_t)
\end{aligned}
$$

Training

An RNN using LSTM units can be trained in a supervised fashion on a set of training sequences, using an optimization algorithm like gradient descent combined with backpropagation through time to compute the gradients needed during the optimization process, in order to change each weight of the LSTM network in proportion to the derivative of the error (at the output layer of the LSTM network) with respect to the corresponding weight.

A problem with using gradient descent for standard RNNs is that error gradients vanish exponentially quickly with the size of the time lag between important events. This is due to $\lim_{n \to \infty} W^n = 0$ if the spectral radius of $W$ is smaller than 1.[2][21]
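A small numerical illustration of this effect (the setup is an assumption for the example, not taken from the cited analysis): repeatedly multiplying by a recurrent weight matrix whose spectral radius is below 1 shrinks any back-propagated vector toward zero.

```python
import numpy as np

rng = np.random.default_rng(0)
W = rng.standard_normal((4, 4))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))        # rescale so the spectral radius is 0.9

v = rng.standard_normal(4)                        # stand-in for a back-propagated gradient
for n in (1, 10, 50, 100):
    print(n, np.linalg.norm(np.linalg.matrix_power(W, n) @ v))
# The norm decays roughly like 0.9**n, so long-range gradients effectively vanish.
```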

However, with LSTM units, when error values are back-propagated from the output layer, the error remains in the LSTM unit's cell. This "error carousel" continuously feeds error back to each of the LSTM unit's gates, until they learn to cut off the value.
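In practice this kind of supervised training is usually done with an existing framework. Below is a minimal sketch using PyTorch (assuming it is installed); the toy task, model sizes and hyperparameters are illustrative only:

```python
import torch
import torch.nn as nn

# Toy sequence-regression task: predict the running mean of each input sequence.
torch.manual_seed(0)
x = torch.randn(32, 20, 3)                       # (batch, time, features)
y = x.mean(dim=2, keepdim=True).cumsum(dim=1) / torch.arange(1, 21).view(1, 20, 1)

class SequenceRegressor(nn.Module):
    def __init__(self):
        super().__init__()
        self.lstm = nn.LSTM(input_size=3, hidden_size=16, batch_first=True)
        self.head = nn.Linear(16, 1)

    def forward(self, x):
        out, _ = self.lstm(x)                    # hidden state at every time step
        return self.head(out)

model = SequenceRegressor()
optimizer = torch.optim.SGD(model.parameters(), lr=0.1)
loss_fn = nn.MSELoss()

for epoch in range(200):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()                              # backpropagation through time
    nn.utils.clip_grad_norm_(model.parameters(), 1.0)  # guard against exploding gradients
    optimizer.step()
```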

CTC score function

Many applications use stacks of LSTM RNNs[22] and train them by connectionist temporal classification (CTC)[23] to find an RNN weight matrix that maximizes the probability of the label sequences in a training set, given the corresponding input sequences. CTC achieves both alignment and recognition.
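As a hedged illustration of how CTC is typically combined with an LSTM, the sketch below uses PyTorch's nn.CTCLoss; the shapes, feature dimension and toy labels are assumptions made for the example:

```python
import torch
import torch.nn as nn

torch.manual_seed(0)
T, N, C = 50, 4, 20                              # frames, batch size, classes (index 0 = CTC blank)
lstm = nn.LSTM(input_size=13, hidden_size=64, bidirectional=True)
proj = nn.Linear(128, C)                         # 2 x 64 hidden units -> class scores
ctc = nn.CTCLoss(blank=0)

x = torch.randn(T, N, 13)                        # e.g. per-frame acoustic features
targets = torch.randint(1, C, (N, 10))           # unaligned label sequences of length 10
input_lengths = torch.full((N,), T, dtype=torch.long)
target_lengths = torch.full((N,), 10, dtype=torch.long)

out, _ = lstm(x)                                 # (T, N, 128)
log_probs = proj(out).log_softmax(dim=2)         # (T, N, C) per-frame log-probabilities
loss = ctc(log_probs, targets, input_lengths, target_lengths)
loss.backward()                                  # CTC sums over all alignments of labels to frames
```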

Alternatives

Sometimes, it can be advantageous to train (parts of) an LSTM by neuroevolution[24] or by policy gradient methods, especially when there is no "teacher" (that is, training labels).

Success

There have been several success stories of training RNNs with LSTM units without a supervising teacher.

In 2018, Bill Gates called it a "huge milestone in advancing artificial intelligence" when bots developed by OpenAI were able to beat humans in the game of Dota 2.[11] OpenAI Five consists of five independent but coordinated neural networks. Each network is trained by a policy gradient method without a supervising teacher and contains a single-layer, 1024-unit LSTM that sees the current game state and emits actions through several possible action heads.[11]

In 2018, OpenAI also trained a similar LSTM by policy gradients to control a human-like robot hand that manipulates physical objects with unprecedented dexterity.[10]

In 2019, DeepMind's program AlphaStar used a deep LSTM core to excel at the complex video game Starcraft II.[12] This was viewed as significant progress towards Artificial General Intelligence.[12]

Applications

Applications of LSTM include:

  • Robot control[9][10]
  • Speech recognition[25][26][27]
  • Hydrological rainfall–runoff modelling[28]
  • Music composition[29]
  • Grammar learning[30][31]
  • Handwriting recognition[32][33]
  • Human action recognition[34]
  • Sign language translation[35]
  • Protein homology detection[36]
  • Predicting the subcellular localization of proteins[37]
  • Time series anomaly detection[38]
  • Predictive business process monitoring[39]
  • Prediction in medical care pathways[40]
  • Semantic parsing[41]
  • Spatio-temporal action localization in videos[42][43]
  • Airport passenger behaviour prediction[44]
  • Short-term traffic forecasting[45]
  • Drug design[46]
  • Foreign exchange rate prediction[47]
  • Excavator action recognition[48]

Timeline of development

1989: Mike Mozer's work on "focused back-propagation"[49] anticipates aspects of LSTM, which the LSTM paper cites.[1]

1991: Sepp Hochreiter analyzed the vanishing gradient problem and developed principles of the method in his German diploma thesis,[2] which was considered highly significant by his supervisor Jürgen Schmidhuber.[50]

1995: "Long Short-Term Memory (LSTM)" is published in a technical report by Sepp Hochreiter and Jürgen Schmidhuber.[51]

1996: LSTM is published at NIPS'1996, a peer-reviewed conference.[14]

1997: The main LSTM paper is published in the journal Neural Computation.[1] By introducing Constant Error Carousel (CEC) units, LSTM deals with the vanishing gradient problem. The initial version of the LSTM block included cells, input gates and output gates.[52]

1999: Felix Gers, Jürgen Schmidhuber, and Fred Cummins introduced the forget gate (also called "keep gate") into the LSTM architecture,[53] enabling the LSTM to reset its own state.[52]

2000: Gers, Schmidhuber, and Cummins added peephole connections (connections from the cell to the gates) into the architecture.[18][19] Additionally, the output activation function was omitted.[52]

2001: Gers and Schmidhuber trained LSTM to learn languages unlearnable by traditional models such as Hidden Markov Models.[18][54]

Hochreiter et al. used LSTM for meta-learning (i.e. learning a learning algorithm).[55]

2004: First successful application of LSTM to speech recognition, by Alex Graves et al.[56][54]

2005: First publication (Graves and Schmidhuber) of LSTM with full backpropagation through time and of bi-directional LSTM.[25][54]

2005: Daan Wierstra, Faustino Gomez, and Schmidhuber trained LSTM by neuroevolution without a teacher.[24]

2006: Graves, Fernandez, Gomez, and Schmidhuber introduce a new error function for LSTM: Connectionist Temporal Classification (CTC) for simultaneous alignment and recognition of sequences.[23] CTC-trained LSTM led to breakthroughs in speech recognition.[26][57][58][59]

Mayer et al. trained LSTM to control robots.[9]

2007: Wierstra, Foerster, Peters, and Schmidhuber trained LSTM by policy gradients for reinforcement learning without a teacher.[60]

Hochreiter, Heusel, and Obermayer applied LSTM to protein homology detection in the field of biology.[36]

2009: An LSTM trained by CTC won the ICDAR connected handwriting recognition competition. Three such models were submitted by a team led by Alex Graves.[3] One was the most accurate model in the competition and another was the fastest.[61] This was the first time an RNN won international competitions.[54]

2009: Justin Bayer et al. introduced neural architecture search for LSTM.[62][54]

2013: Alex Graves, Abdel-rahman Mohamed, and Geoffrey Hinton used LSTM networks as a major component of a network that achieved a record 17.7% phoneme error rate on the classic TIMIT natural speech dataset.[27]

2014: Kyunghyun Cho et al. put forward a simplified variant of the forget gate LSTM[53] called Gated recurrent unit (GRU).[63]

2015: Google started using an LSTM trained by CTC for speech recognition on Google Voice.[57][58] According to the official blog post, the new model cut transcription errors by 49%.[64]

2015: Rupesh Kumar Srivastava, Klaus Greff, and Schmidhuber used LSTM principles[53] to create the Highway network, a feedforward neural network with hundreds of layers, much deeper than previous networks.[65][66][67] Seven months later, Kaiming He, Xiangyu Zhang, Shaoqing Ren, and Jian Sun won the ImageNet 2015 competition with an open-gated or gateless Highway network variant called the Residual neural network.[68] This has become the most cited neural network of the 21st century.[67]

2016: Google started using an LSTM to suggest messages in the Allo conversation app.[69] In the same year, Google released the Google Neural Machine Translation system for Google Translate which used LSTMs to reduce translation errors by 60%.[6][70][71]

Apple announced at its Worldwide Developers Conference that it would start using the LSTM for QuickType[72][73][74] in the iPhone and for Siri.[75][76]

Amazon released Polly, which generates the voices behind Alexa, using a bidirectional LSTM for the text-to-speech technology.[77]

2017: Facebook performed some 4.5 billion automatic translations every day using long short-term memory networks.[7]

Researchers from Michigan State University, IBM Research, and Cornell University published a study in the Knowledge Discovery and Data Mining (KDD) conference.[78][79][80] Their Time-Aware LSTM (T-LSTM) performs better on certain data sets than standard LSTM.

Microsoft reported reaching 94.9% recognition accuracy on the Switchboard corpus, incorporating a vocabulary of 165,000 words. The approach used "dialog session-based long-short-term memory".[59]

2018: OpenAI used LSTM trained by policy gradients to beat humans in the complex video game of Dota 2,[11] and to control a human-like robot hand that manipulates physical objects with unprecedented dexterity.[10][54]

2019: DeepMind used LSTM trained by policy gradients to excel at the complex video game of Starcraft II.[12][54]

2021: According to Google Scholar, in 2021, LSTM was cited over 16,000 times within a single year. This reflects applications of LSTM in many different fields including healthcare.[13]

2024: A modern upgrade of LSTM called xLSTM was published by a team led by Sepp Hochreiter. One of the two blocks (mLSTM) of the architecture is parallelizable, which allows it to keep up with transformer-based models; the other (sLSTM) allows state tracking.[81][82]

See also

References

  1. ^ a b c d e Sepp Hochreiter; Jürgen Schmidhuber (1997). "Long short-term memory". Neural Computation. 9 (8): 1735–1780. doi:10.1162/neco.1997.9.8.1735. PMID 9377276. S2CID 1915014.
  2. ^ a b c Hochreiter, Sepp (1991). Untersuchungen zu dynamischen neuronalen Netzen (PDF) (diploma thesis). Technical University Munich, Institute of Computer Science.
  3. ^ a b Graves, A.; Liwicki, M.; Fernández, S.; Bertolami, R.; Bunke, H.; Schmidhuber, J. (May 2009). "A Novel Connectionist System for Unconstrained Handwriting Recognition". IEEE Transactions on Pattern Analysis and Machine Intelligence. 31 (5): 855–868. CiteSeerX 10.1.1.139.4502. doi:10.1109/tpami.2008.137. ISSN 0162-8828. PMID 19299860. S2CID 14635907.
  4. ^ Sak, Hasim; Senior, Andrew; Beaufays, Francoise (2014). "Long Short-Term Memory recurrent neural network architectures for large scale acoustic modeling" (PDF). Archived from the original (PDF) on 2018-04-24.
  5. ^ Li, Xiangang; Wu, Xihong (2014-10-15). "Constructing Long Short-Term Memory based Deep Recurrent Neural Networks for Large Vocabulary Speech Recognition". arXiv:1410.4281 [cs.CL].
  6. ^ a b Wu, Yonghui; Schuster, Mike; Chen, Zhifeng; Le, Quoc V.; Norouzi, Mohammad; Macherey, Wolfgang; Krikun, Maxim; Cao, Yuan; Gao, Qin (2016-09-26). "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation". arXiv:1609.08144 [cs.CL].
  7. ^ a b Ong, Thuy (4 August 2017). "Facebook's translations are now powered completely by AI". www.allthingsdistributed.com. Retrieved 2019-02-15.
  8. ^ Sahidullah, Md; Patino, Jose; Cornell, Samuele; Yin, Ruiking; Sivasankaran, Sunit; Bredin, Herve; Korshunov, Pavel; Brutti, Alessio; Serizel, Romain; Vincent, Emmanuel; Evans, Nicholas; Marcel, Sebastien; Squartini, Stefano; Barras, Claude (2019-11-06). "The Speed Submission to DIHARD II: Contributions & Lessons Learned". arXiv:1911.02388 [eess.AS].
  9. ^ a b c Mayer, H.; Gomez, F.; Wierstra, D.; Nagy, I.; Knoll, A.; Schmidhuber, J. (October 2006). "A System for Robotic Heart Surgery that Learns to Tie Knots Using Recurrent Neural Networks". 2006 IEEE/RSJ International Conference on Intelligent Robots and Systems. pp. 543–548. CiteSeerX 10.1.1.218.3399. doi:10.1109/IROS.2006.282190. ISBN 978-1-4244-0258-8. S2CID 12284900.
  10. ^ a b c "Learning Dexterity". OpenAI. July 30, 2018. Retrieved 2023-06-28.
  11. ^ a b c d Rodriguez, Jesus (July 2, 2018). "The Science Behind OpenAI Five that just Produced One of the Greatest Breakthrough in the History of AI". Towards Data Science. Archived from the original on 2019-12-26. Retrieved 2019-01-15.
  12. ^ a b c d Stanford, Stacy (January 25, 2019). "DeepMind's AI, AlphaStar Showcases Significant Progress Towards AGI". Medium ML Memoirs. Retrieved 2019-01-15.
  13. ^ a b Schmidhuber, Jürgen (2021). "The 2010s: Our Decade of Deep Learning / Outlook on the 2020s". AI Blog. IDSIA, Switzerland. Retrieved 2022-04-30.
  14. ^ a b Hochreiter, Sepp; Schmidhuber, Juergen (1996). LSTM can solve hard long time lag problems. Advances in Neural Information Processing Systems.
  15. ^ a b c Felix A. Gers; Jürgen Schmidhuber; Fred Cummins (2000). "Learning to Forget: Continual Prediction with LSTM". Neural Computation. 12 (10): 2451–2471. CiteSeerX 10.1.1.55.5709. doi:10.1162/089976600300015015. PMID 11032042. S2CID 11598600.
  16. ^ Calin, Ovidiu (14 February 2020). Deep Learning Architectures. Cham, Switzerland: Springer Nature. p. 555. ISBN 978-3-030-36720-6.
  17. ^ Lakretz, Yair; Kruszewski, German; Desbordes, Theo; Hupkes, Dieuwke; Dehaene, Stanislas; Baroni, Marco (2019). "The emergence of number and syntax units in LSTM language models" (PDF). Association for Computational Linguistics. pp. 11–20. doi:10.18653/v1/N19-1002. hdl:11245.1/16cb6800-e10d-4166-8e0b-fed61ca6ebb4. S2CID 81978369.
  18. ^ a b c d e f Gers, F. A.; Schmidhuber, J. (2001). "LSTM Recurrent Networks Learn Simple Context Free and Context Sensitive Languages" (PDF). IEEE Transactions on Neural Networks. 12 (6): 1333–1340. doi:10.1109/72.963769. PMID 18249962. S2CID 10192330.
  19. ^ a b c d Gers, F.; Schraudolph, N.; Schmidhuber, J. (2002). "Learning precise timing with LSTM recurrent networks" (PDF). Journal of Machine Learning Research. 3: 115–143.
  20. ^ Xingjian Shi; Zhourong Chen; Hao Wang; Dit-Yan Yeung; Wai-kin Wong; Wang-chun Woo (2015). "Convolutional LSTM Network: A Machine Learning Approach for Precipitation Nowcasting". Proceedings of the 28th International Conference on Neural Information Processing Systems: 802–810. arXiv:1506.04214. Bibcode:2015arXiv150604214S.
  21. ^ Hochreiter, S.; Bengio, Y.; Frasconi, P.; Schmidhuber, J. (2001). "Gradient Flow in Recurrent Nets: the Difficulty of Learning Long-Term Dependencies (PDF Download Available)". In Kremer and, S. C.; Kolen, J. F. (eds.). A Field Guide to Dynamical Recurrent Neural Networks. IEEE Press.
  22. ^ Fernández, Santiago; Graves, Alex; Schmidhuber, Jürgen (2007). "Sequence labelling in structured domains with hierarchical recurrent neural networks". Proc. 20th Int. Joint Conf. On Artificial Intelligence, Ijcai 2007: 774–779. CiteSeerX 10.1.1.79.1887.
  23. ^ a b Graves, Alex; Fernández, Santiago; Gomez, Faustino; Schmidhuber, Jürgen (2006). "Connectionist temporal classification: Labelling unsegmented sequence data with recurrent neural networks". In Proceedings of the International Conference on Machine Learning, ICML 2006: 369–376. CiteSeerX 10.1.1.75.6306.
  24. ^ a b c Wierstra, Daan; Schmidhuber, J.; Gomez, F. J. (2005). "Evolino: Hybrid Neuroevolution/Optimal Linear Search for Sequence Learning". Proceedings of the 19th International Joint Conference on Artificial Intelligence (IJCAI), Edinburgh: 853–858.
  25. ^ a b Graves, A.; Schmidhuber, J. (2005). "Framewise phoneme classification with bidirectional LSTM and other neural network architectures". Neural Networks. 18 (5–6): 602–610. CiteSeerX 10.1.1.331.5800. doi:10.1016/j.neunet.2005.06.042. PMID 16112549. S2CID 1856462.
  26. ^ a b Fernández, S.; Graves, A.; Schmidhuber, J. (9 September 2007). "An Application of Recurrent Neural Networks to Discriminative Keyword Spotting". Proceedings of the 17th International Conference on Artificial Neural Networks. ICANN'07. Berlin, Heidelberg: Springer-Verlag: 220–229. ISBN 978-3540746935. Retrieved 28 December 2023.
  27. ^ a b Graves, Alex; Mohamed, Abdel-rahman; Hinton, Geoffrey (2013). "Speech recognition with deep recurrent neural networks". 2013 IEEE International Conference on Acoustics, Speech and Signal Processing. pp. 6645–6649. arXiv:1303.5778. doi:10.1109/ICASSP.2013.6638947. ISBN 978-1-4799-0356-6. S2CID 206741496.
  28. ^ Kratzert, Frederik; Klotz, Daniel; Shalev, Guy; Klambauer, Günter; Hochreiter, Sepp; Nearing, Grey (2019-12-17). "Towards learning universal, regional, and local hydrological behaviors via machine learning applied to large-sample datasets". Hydrology and Earth System Sciences. 23 (12): 5089–5110. arXiv:1907.08456. Bibcode:2019HESS...23.5089K. doi:10.5194/hess-23-5089-2019. ISSN 1027-5606.
  29. ^ Eck, Douglas; Schmidhuber, Jürgen (2002-08-28). "Learning the Long-Term Structure of the Blues". Artificial Neural Networks — ICANN 2002. Lecture Notes in Computer Science. Vol. 2415. Springer, Berlin, Heidelberg. pp. 284–289. CiteSeerX 10.1.1.116.3620. doi:10.1007/3-540-46084-5_47. ISBN 978-3540460848.
  30. ^ Schmidhuber, J.; Gers, F.; Eck, D.; Schmidhuber, J.; Gers, F. (2002). "Learning nonregular languages: A comparison of simple recurrent networks and LSTM". Neural Computation. 14 (9): 2039–2041. CiteSeerX 10.1.1.11.7369. doi:10.1162/089976602320263980. PMID 12184841. S2CID 30459046.
  31. ^ Perez-Ortiz, J. A.; Gers, F. A.; Eck, D.; Schmidhuber, J. (2003). "Kalman filters improve LSTM network performance in problems unsolvable by traditional recurrent nets". Neural Networks. 16 (2): 241–250. CiteSeerX 10.1.1.381.1992. doi:10.1016/s0893-6080(02)00219-8. PMID 12628609.
  32. ^ A. Graves, J. Schmidhuber. Offline Handwriting Recognition with Multidimensional Recurrent Neural Networks. Advances in Neural Information Processing Systems 22, NIPS'22, pp 545–552, Vancouver, MIT Press, 2009.
  33. ^ Graves, A.; Fernández, S.; Liwicki, M.; Bunke, H.; Schmidhuber, J. (3 December 2007). "Unconstrained Online Handwriting Recognition with Recurrent Neural Networks". Proceedings of the 20th International Conference on Neural Information Processing Systems. NIPS'07. USA: Curran Associates Inc.: 577–584. ISBN 9781605603520. Retrieved 28 December 2023.
  34. ^ Baccouche, M.; Mamalet, F.; Wolf, C.; Garcia, C.; Baskurt, A. (2011). "Sequential Deep Learning for Human Action Recognition". In Salah, A. A.; Lepri, B. (eds.). 2nd International Workshop on Human Behavior Understanding (HBU). Lecture Notes in Computer Science. Vol. 7065. Amsterdam, Netherlands: Springer. pp. 29–39. doi:10.1007/978-3-642-25446-8_4. ISBN 978-3-642-25445-1.
  35. ^ Huang, Jie; Zhou, Wengang; Zhang, Qilin; Li, Houqiang; Li, Weiping (2018-01-30). "Video-based Sign Language Recognition without Temporal Segmentation". arXiv:1801.10111 [cs.CV].
  36. ^ a b Hochreiter, S.; Heusel, M.; Obermayer, K. (2007). "Fast model-based protein homology detection without alignment". Bioinformatics. 23 (14): 1728–1736. doi:10.1093/bioinformatics/btm247. PMID 17488755.
  37. ^ Thireou, T.; Reczko, M. (2007). "Bidirectional Long Short-Term Memory Networks for predicting the subcellular localization of eukaryotic proteins". IEEE/ACM Transactions on Computational Biology and Bioinformatics. 4 (3): 441–446. doi:10.1109/tcbb.2007.1015. PMID 17666763. S2CID 11787259.
  38. ^ Malhotra, Pankaj; Vig, Lovekesh; Shroff, Gautam; Agarwal, Puneet (April 2015). "Long Short Term Memory Networks for Anomaly Detection in Time Series" (PDF). European Symposium on Artificial Neural Networks, Computational Intelligence and Machine Learning — ESANN 2015. Archived from the original (PDF) on 2020-10-30. Retrieved 2018-02-21.
  39. ^ Tax, N.; Verenich, I.; La Rosa, M.; Dumas, M. (2017). "Predictive Business Process Monitoring with LSTM Neural Networks". Advanced Information Systems Engineering. Lecture Notes in Computer Science. Vol. 10253. pp. 477–492. arXiv:1612.02130. doi:10.1007/978-3-319-59536-8_30. ISBN 978-3-319-59535-1. S2CID 2192354.
  40. ^ Choi, E.; Bahadori, M.T.; Schuetz, E.; Stewart, W.; Sun, J. (2016). "Doctor AI: Predicting Clinical Events via Recurrent Neural Networks". JMLR Workshop and Conference Proceedings. 56: 301–318. arXiv:1511.05942. Bibcode:2015arXiv151105942C. PMC 5341604. PMID 28286600.
  41. ^ Jia, Robin; Liang, Percy (2016). "Data Recombination for Neural Semantic Parsing". arXiv:1606.03622 [cs.CL].
  42. ^ Wang, Le; Duan, Xuhuan; Zhang, Qilin; Niu, Zhenxing; Hua, Gang; Zheng, Nanning (2018-05-22). "Segment-Tube: Spatio-Temporal Action Localization in Untrimmed Videos with Per-Frame Segmentation" (PDF). Sensors. 18 (5): 1657. Bibcode:2018Senso..18.1657W. doi:10.3390/s18051657. ISSN 1424-8220. PMC 5982167. PMID 29789447.
  43. ^ Duan, Xuhuan; Wang, Le; Zhai, Changbo; Zheng, Nanning; Zhang, Qilin; Niu, Zhenxing; Hua, Gang (2018). "Joint Spatio-Temporal Action Localization in Untrimmed Videos with Per-Frame Segmentation". 2018 25th IEEE International Conference on Image Processing (ICIP). 25th IEEE International Conference on Image Processing (ICIP). pp. 918–922. doi:10.1109/icip.2018.8451692. ISBN 978-1-4799-7061-2.
  44. ^ Orsini, F.; Gastaldi, M.; Mantecchini, L.; Rossi, R. (2019). Neural networks trained with WiFi traces to predict airport passenger behavior. 6th International Conference on Models and Technologies for Intelligent Transportation Systems. Krakow: IEEE. arXiv:1910.14026. doi:10.1109/MTITS.2019.8883365. 8883365.
  45. ^ Zhao, Z.; Chen, W.; Wu, X.; Chen, P.C.Y.; Liu, J. (2017). "LSTM network: A deep learning approach for Short-term traffic forecast". IET Intelligent Transport Systems. 11 (2): 68–75. doi:10.1049/iet-its.2016.0208. S2CID 114567527.
  46. ^ Gupta, A.; Müller, A. T.; Huisman, B. J. H.; Fuchs, J. A.; Schneider, P.; Schneider, G. (2018). "Generative Recurrent Networks for De Novo Drug Design". Mol Inform. 37 (1–2). doi:10.1002/minf.201700111. PMC 5836943. PMID 29095571.
  47. ^ Saiful Islam, Md.; Hossain, Emam (2020-10-26). "Foreign Exchange Currency Rate Prediction using a GRU-LSTM Hybrid Network". Soft Computing Letters. 3: 100009. doi:10.1016/j.socl.2020.100009. ISSN 2666-2221.
  48. ^ Abbey Martin; Andrew J. Hill; Konstantin M. Seiler; Mehala Balamurali (2023). "Automatic excavator action recognition and localisation for untrimmed video using hybrid LSTM-Transformer networks". International Journal of Mining, Reclamation and Environment. doi:10.1080/17480930.2023.2290364.
  49. ^ Mozer, Mike (1989). "A Focused Backpropagation Algorithm for Temporal Pattern Recognition". Complex Systems.
  50. ^ Schmidhuber, Juergen (2022). "Annotated History of Modern AI and Deep Learning". arXiv:2212.11279 [cs.NE].
  51. ^ Sepp Hochreiter; Jürgen Schmidhuber (21 August 1995), Long Short Term Memory, Wikidata Q98967430
  52. ^ a b c Klaus Greff; Rupesh Kumar Srivastava; Jan Koutník; Bas R. Steunebrink; Jürgen Schmidhuber (2015). "LSTM: A Search Space Odyssey". IEEE Transactions on Neural Networks and Learning Systems. 28 (10): 2222–2232. arXiv:1503.04069. Bibcode:2015arXiv150304069G. doi:10.1109/TNNLS.2016.2582924. PMID 27411231. S2CID 3356463.
  53. ^ a b c Gers, Felix; Schmidhuber, Jürgen; Cummins, Fred (1999). "Learning to forget: Continual prediction with LSTM". 9th International Conference on Artificial Neural Networks: ICANN '99. Vol. 1999. pp. 850–855. doi:10.1049/cp:19991218. ISBN 0-85296-721-7.
  54. ^ a b c d e f g Schmidhuber, Juergen (10 May 2021). "Deep Learning: Our Miraculous Year 1990-1991". arXiv:2005.05744 [cs.NE].
  55. ^ Hochreiter, S.; Younger, A. S.; Conwell, P. R. (2001). "Learning to Learn Using Gradient Descent". Artificial Neural Networks — ICANN 2001 (PDF). Lecture Notes in Computer Science. Vol. 2130. pp. 87–94. CiteSeerX 10.1.1.5.323. doi:10.1007/3-540-44668-0_13. ISBN 978-3-540-42486-4. ISSN 0302-9743. S2CID 52872549.
  56. ^ Graves, Alex; Beringer, Nicole; Eck, Douglas; Schmidhuber, Juergen (2004). Biologically Plausible Speech Recognition with LSTM Neural Nets. Workshop on Biologically Inspired Approaches to Advanced Information Technology, Bio-ADIT 2004, Lausanne, Switzerland. pp. 175–184.
  57. ^ a b Beaufays, Françoise (August 11, 2015). "The neural networks behind Google Voice transcription". Research Blog. Retrieved 2017-06-27.
  58. ^ a b Sak, Haşim; Senior, Andrew; Rao, Kanishka; Beaufays, Françoise; Schalkwyk, Johan (September 24, 2015). "Google voice search: faster and more accurate". Research Blog. Retrieved 2017-06-27.
  59. ^ a b Haridy, Rich (August 21, 2017). "Microsoft's speech recognition system is now as good as a human". newatlas.com. Retrieved 2017-08-27.
  60. ^ Wierstra, Daan; Foerster, Alexander; Peters, Jan; Schmidhuber, Juergen (2005). "Solving Deep Memory POMDPs with Recurrent Policy Gradients". International Conference on Artificial Neural Networks ICANN'07.
  61. ^ Märgner, Volker; Abed, Haikal El (July 2009). "ICDAR 2009 Arabic Handwriting Recognition Competition". 2009 10th International Conference on Document Analysis and Recognition. pp. 1383–1387. doi:10.1109/ICDAR.2009.256. ISBN 978-1-4244-4500-4. S2CID 52851337.
  62. ^ Bayer, Justin; Wierstra, Daan; Togelius, Julian; Schmidhuber, Juergen (2009). "Evolving memory cell structures for sequence learning". International Conference on Artificial Neural Networks ICANN'09, Cyprus.
  63. ^ Cho, Kyunghyun; van Merrienboer, Bart; Gulcehre, Caglar; Bahdanau, Dzmitry; Bougares, Fethi; Schwenk, Holger; Bengio, Yoshua (2014). "Learning Phrase Representations using RNN Encoder-Decoder for Statistical Machine Translation". arXiv:1406.1078 [cs.CL].
  64. ^ "Neon prescription... or rather, New transcription for Google Voice". Official Google Blog. 23 July 2015. Retrieved 2020-04-25.
  65. ^ Srivastava, Rupesh Kumar; Greff, Klaus; Schmidhuber, Jürgen (2 May 2015). "Highway Networks". arXiv:1505.00387 [cs.LG].
  66. ^ Srivastava, Rupesh K; Greff, Klaus; Schmidhuber, Juergen (2015). "Training Very Deep Networks". Advances in Neural Information Processing Systems. 28. Curran Associates, Inc.: 2377–2385.
  67. ^ a b Schmidhuber, Jürgen (2021). "The most cited neural networks all build on work done in my labs". AI Blog. IDSIA, Switzerland. Retrieved 2022-04-30.
  68. ^ He, Kaiming; Zhang, Xiangyu; Ren, Shaoqing; Sun, Jian (2016). Deep Residual Learning for Image Recognition. 2016 IEEE Conference on Computer Vision and Pattern Recognition (CVPR). Las Vegas, NV, USA: IEEE. pp. 770–778. arXiv:1512.03385. doi:10.1109/CVPR.2016.90. ISBN 978-1-4673-8851-1.
  69. ^ Khaitan, Pranav (May 18, 2016). "Chat Smarter with Allo". Research Blog. Retrieved 2017-06-27.
  70. ^ Metz, Cade (September 27, 2016). "An Infusion of AI Makes Google Translate More Powerful Than Ever | WIRED". Wired. Retrieved 2017-06-27.
  71. ^ "A Neural Network for Machine Translation, at Production Scale". Google AI Blog. 27 September 2016. Retrieved 2020-04-25.
  72. ^ Efrati, Amir (June 13, 2016). "Apple's Machines Can Learn Too". The Information. Retrieved 2017-06-27.
  73. ^ Ranger, Steve (June 14, 2016). "iPhone, AI and big data: Here's how Apple plans to protect your privacy". ZDNet. Retrieved 2017-06-27.
  74. ^ "Can Global Semantic Context Improve Neural Language Models? – Apple". Apple Machine Learning Journal. Retrieved 2020-04-30.
  75. ^ Smith, Chris (2016-06-13). "iOS 10: Siri now works in third-party apps, comes with extra AI features". BGR. Retrieved 2017-06-27.
  76. ^ Capes, Tim; Coles, Paul; Conkie, Alistair; Golipour, Ladan; Hadjitarkhani, Abie; Hu, Qiong; Huddleston, Nancy; Hunt, Melvyn; Li, Jiangchuan; Neeracher, Matthias; Prahallad, Kishore (2017-08-20). "Siri On-Device Deep Learning-Guided Unit Selection Text-to-Speech System". Interspeech 2017. ISCA: 4011–4015. doi:10.21437/Interspeech.2017-1798.
  77. ^ Vogels, Werner (30 November 2016). "Bringing the Magic of Amazon AI and Alexa to Apps on AWS. – All Things Distributed". www.allthingsdistributed.com. Retrieved 2017-06-27.
  78. ^ "Patient Subtyping via Time-Aware LSTM Networks" (PDF). msu.edu. Retrieved 21 Nov 2018.
  79. ^ "Patient Subtyping via Time-Aware LSTM Networks". Kdd.org. Retrieved 24 May 2018.
  80. ^ "SIGKDD". Kdd.org. Retrieved 24 May 2018.
  81. ^ Beck, Maximilian; Pöppel, Korbinian; Spanring, Markus; Auer, Andreas; Prudnikova, Oleksandra; Kopp, Michael; Klambauer, Günter; Brandstetter, Johannes; Hochreiter, Sepp (2024-05-07). "xLSTM: Extended Long Short-Term Memory". arXiv:2405.04517 [cs.LG].
  82. ^ NX-AI/xlstm, NXAI, 2024-06-04, retrieved 2024-06-04

