
Neural machine translation

Neural machine translation (NMT) is an approach to machine translation that uses an artificial neural network to predict the likelihood of a sequence of words, typically modeling entire sentences in a single integrated model.

It is the dominant approach today[1]: 293 [2]: 1  and can produce translations that rival human translations when translating between high-resource languages under specific conditions.[3] However, there still remain challenges, especially with languages where less high-quality data is available,[4][5][1]: 293  and with domain shift between the data a system was trained on and the texts it is supposed to translate.[1]: 293  NMT systems also tend to produce fairly literal translations.[5]

Overview

In the translation task, a sentence x = (x_1, …, x_S) (consisting of S tokens) in the source language is to be translated into a sentence y = (y_1, …, y_T) (consisting of T tokens) in the target language. The source and target tokens (which in the simplest case are words) are represented as vectors (embeddings), so they can be processed mathematically.

NMT models assign a probability P(y|x)[2]: 5 [6]: 1  to potential translations y and then search a subset of potential translations for the one with the highest probability. Most NMT models are auto-regressive: They model the probability of each target token as a function of the source sentence and the previously predicted target tokens. The probability of the whole translation then is the product of the probabilities of the individual predicted tokens:[2]: 5 [6]: 2 

P(y \mid x) = \prod_{i=1}^{T} P(y_i \mid y_{1:i-1}, x)

NMT models differ in how exactly they model this probability P(y_i | y_{1:i-1}, x), but most use some variation of the encoder-decoder architecture:[6]: 2 [7]: 469  They first use an encoder network to process the source sentence and encode it into a vector or matrix representation. Then they use a decoder network that usually produces one target word at a time, taking into account the source representation and the tokens it has previously produced. As soon as the decoder produces a special end-of-sentence token, the decoding process is finished. Since the decoder refers to its own previous outputs during decoding, this way of decoding is called auto-regressive.
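
The following Python sketch illustrates this auto-regressive decoding loop with greedy token selection. It assumes hypothetical encode and decoder_step functions standing in for a trained encoder and decoder; it is not taken from any particular NMT library.

from typing import Callable, Dict, List

BOS, EOS = "<s>", "</s>"  # special start- and end-of-sentence tokens

def greedy_decode(source_tokens: List[str],
                  encode: Callable[[List[str]], object],
                  decoder_step: Callable[[object, List[str]], Dict[str, float]],
                  max_len: int = 100) -> List[str]:
    """Translate by repeatedly emitting the most probable next token."""
    source_repr = encode(source_tokens)            # vector/matrix representation of the source
    target = [BOS]
    for _ in range(max_len):
        probs = decoder_step(source_repr, target)  # P(next token | source, previously produced tokens)
        next_token = max(probs, key=probs.get)     # greedy choice
        if next_token == EOS:                      # end-of-sentence token finishes decoding
            break
        target.append(next_token)
    return target[1:]

In practice, systems usually search over several candidate continuations (beam search) rather than keeping only the single most probable token at each step.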

History

Early approaches

In 1987, Robert B. Allen demonstrated the use of feed-forward neural networks for translating auto-generated English sentences with a limited vocabulary of 31 words into Spanish. In this experiment, the size of the network's input and output layers was chosen to be just large enough for the longest sentences in the source and target language, respectively, because the network did not have any mechanism to encode sequences of arbitrary length into a fixed-size representation. In his summary, Allen also already hinted at the possibility of using auto-associative models, one for encoding the source and one for decoding the target.[8]

Lonnie Chrisman built upon Allen's work in 1991 by training separate recursive auto-associative memory (RAAM) networks (developed by Jordan B. Pollack[10]) for the source and the target language. Each of the RAAM networks is trained to encode an arbitrary-length sentence into a fixed-size hidden representation and to decode the original sentence again from that representation. Additionally, the two networks are also trained to share their hidden representation; this way, the source encoder can produce a representation that the target decoder can decode.[9] Forcada and Ñeco simplified this procedure in 1997 to directly train a source encoder and a target decoder in what they called a recursive hetero-associative memory.[11]

Also in 1997, Castaño and Casacuberta employed an Elman recurrent neural network in another machine translation task with very limited vocabulary and complexity.[12][13]

Even though these early approaches were already similar to modern NMT, the computing resources of the time were not sufficient to process datasets large enough for the computational complexity of the machine translation problem on real-world texts.[1]: 39 [14]: 2  Instead, other methods like statistical machine translation rose to become the state of the art of the 1990s and 2000s.

Hybrid approaches

During the time when statistical machine translation was prevalent, some works used neural methods to replace various parts in the statistical machine translation while still using the log-linear approach to tie them together.[1]: 39 [2]: 1  For example, in various works together with other researchers, Holger Schwenk replaced the usual n-gram language model with a neural one[15][16] and estimated phrase translation probabilities using a feed-forward network.[17]

seq2seq

In 2013 and 2014, end-to-end neural machine translation had its breakthrough with Kalchbrenner & Blunsom using a convolutional neural network (CNN) for encoding the source[18] and both Cho et al. and Sutskever et al. using a recurrent neural network (RNN) instead.[19][20] All three used an RNN conditioned on a fixed encoding of the source as their decoder to produce the translation. However, these models performed poorly on longer sentences.[21]: 107 [1]: 39 [2]: 7  This problem was addressed when Bahdanau et al. introduced attention to their encoder-decoder architecture: At each decoding step, the state of the decoder is used to calculate a source representation that focuses on different parts of the source, and that representation is then used in the calculation of the probabilities for the next token.[22] Based on these RNN-based architectures, Baidu launched the "first large-scale NMT system"[23]: 144  in 2015, followed by Google Neural Machine Translation in 2016.[23]: 144 [24] From that year on, neural models also became the prevailing choice at the main machine translation conference, the Workshop on Statistical Machine Translation.[25]
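
As a rough illustration of this attention step, the NumPy sketch below computes a context vector as a weighted sum of encoder states, using an additive scoring function in the spirit of Bahdanau et al.; the variable names and shapes are illustrative assumptions, not the authors' exact formulation.

import numpy as np

def additive_attention(decoder_state, encoder_states, W_q, W_k, v):
    """Compute a context vector as an attention-weighted sum of encoder states.

    decoder_state:   (d,)    current decoder hidden state
    encoder_states:  (S, d)  one hidden state per source token
    W_q, W_k:        (d, d)  learned projections (illustrative)
    v:               (d,)    learned scoring vector (illustrative)
    """
    scores = np.tanh(decoder_state @ W_q + encoder_states @ W_k) @ v  # one score per source position
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                   # softmax over source positions
    context = weights @ encoder_states         # source representation focused on the relevant parts
    return context, weights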

Gehring et al. combined a CNN encoder with an attention mechanism in 2017, which handled long-range dependencies in the source better than previous approaches and also increased translation speed, because a CNN encoder is parallelizable, whereas an RNN encoder has to encode one token at a time due to its recurrent nature.[26]: 230  In the same year, Microsoft Translator released AI-powered online neural machine translation.[27] DeepL Translator, which was at the time based on a CNN encoder, was also released in the same year and was judged by several news outlets to outperform its competitors.[28][29][30] OpenAI's GPT-3, released in 2020, can also function as a neural machine translation system. Other machine translation systems, such as Microsoft Translator and SYSTRAN, have also integrated neural networks into their operations.

Transformer

Another network architecture that lends itself to parallelization is the transformer, which was introduced by Vaswani et al. also in 2017.[31] Like previous models, the transformer still uses the attention mechanism for weighting encoder output for the decoding steps. However, the transformer's encoder and decoder networks themselves are also based on attention instead of recurrence or convolution: Each layer weights and transforms the previous layer's output in a process called self-attention. Since the attention mechanism does not have any notion of token order, but the order of words in a sentence is obviously relevant, the token embeddings are combined with an explicit encoding of their position in the sentence.[2]: 15 [6]: 7  Since both the transformer's encoder and decoder are free from recurrent elements, they can both be parallelized during training. However, the original transformer's decoder is still auto-regressive, which means that decoding still has to be done one token at a time during inference.
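
The sketch below illustrates the two ingredients named above, sinusoidal positional encodings and (single-head) scaled dot-product self-attention, following the published formulas of Vaswani et al.; the multi-head split, masking, and other layer details are omitted, and the array names are illustrative.

import numpy as np

def positional_encoding(seq_len, d_model):
    """Sinusoidal position encodings that are added to the token embeddings."""
    pos = np.arange(seq_len)[:, None]
    i = np.arange(d_model)[None, :]
    angles = pos / np.power(10000, (2 * (i // 2)) / d_model)
    return np.where(i % 2 == 0, np.sin(angles), np.cos(angles))  # (seq_len, d_model)

def self_attention(X, W_q, W_k, W_v):
    """Single-head scaled dot-product self-attention over a layer's input X of shape (seq_len, d_model)."""
    Q, K, V = X @ W_q, X @ W_k, X @ W_v
    scores = Q @ K.T / np.sqrt(K.shape[-1])           # how strongly each position attends to every other
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights /= weights.sum(axis=-1, keepdims=True)    # row-wise softmax
    return weights @ V                                # new representation of every position

Because all positions of a sequence pass through such layers at once, training can be parallelized across the whole sentence, which is what makes the architecture faster to train than recurrent encoders and decoders.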

The transformer model quickly became the dominant choice for machine translation systems[2]: 44  and was still by far the most-used architecture in the Workshop on Statistical Machine Translation in 2022 and 2023.[32]: 35–40 [33]: 28–31 

Usually, NMT models’ weights are initialized randomly and then learned by training on parallel datasets. However, since using large language models (LLMs) such as BERT pre-trained on large amounts of monolingual data as a starting point for learning other tasks has proven very successful in wider NLP, this paradigm is also becoming more prevalent in NMT. This is especially useful for low-resource languages, where large parallel datasets do not exist.[4]: 689–690  An example of this is the mBART model, which first trains one transformer on a multilingual dataset to recover masked tokens in sentences, and then fine-tunes the resulting autoencoder on the translation task.[34]

Generative LLMs

Instead of fine-tuning a pre-trained language model on the translation task, sufficiently large generative models can also be directly prompted to translate a sentence into the desired language. This approach was first comprehensively tested and evaluated for GPT 3.5 in 2023 by Hendy et al. They found that "GPT systems can produce highly fluent and competitive translation outputs even in the zero-shot setting especially for the high-resource language translations".[35]: 22  The WMT23 evaluation campaign tested the same approach (but using GPT-4) and found that it was on par with the state of the art when translating into English, but not quite when translating into lower-resource languages.[33]: 16–17  This is plausible considering that GPT models are trained mainly on English text.[36]

Comparison with statistical machine translation

NMT has overcome several challenges that were present in statistical machine translation (SMT):

  • NMT's full reliance on continuous representation of tokens overcame sparsity issues caused by rare words or phrases. Models were able to generalize more effectively.[18]: 1 [37]: 900–901 
  • The limited n-gram length used in SMT's n-gram language models caused a loss of context. NMT systems overcome this by not having a hard cut-off after a fixed number of tokens and by using attention to choose which tokens to focus on when generating the next token.[37]: 900–901 
  • End-to-end training of a single model improved translation performance and also simplified the whole process.[citation needed]
  • The huge n-gram models (up to 7-gram) used in SMT required large amounts of memory,[38]: 88  whereas NMT models require less.

Training procedure

Cross-entropy loss

NMT models are usually trained to maximize the likelihood of observing the training data. I.e., for a dataset of N source sentences x^(1), …, x^(N) and corresponding target sentences y^(1), …, y^(N), the goal is finding the model parameters θ̂ that maximize the likelihood of each target sentence in the training data given the corresponding source sentence:

\hat{\theta} = \operatorname{argmax}_{\theta} \prod_{n=1}^{N} P_{\theta}(y^{(n)} \mid x^{(n)})

Expanding to token level yields:

\hat{\theta} = \operatorname{argmax}_{\theta} \prod_{n=1}^{N} \prod_{i=1}^{T_n} P_{\theta}\left(y_i^{(n)} \mid y_{1:i-1}^{(n)}, x^{(n)}\right)

Since we are only interested in the maximum, we can just as well search for the maximum of the logarithm instead (which has the advantage that it avoids floating point underflow that could happen with the product of low probabilities).

Using the fact that the logarithm of a product is the sum of the factors’ logarithms and flipping the sign yields the classic cross-entropy loss:

L_{CE}(\theta) = - \sum_{n=1}^{N} \sum_{i=1}^{T_n} \log P_{\theta}\left(y_i^{(n)} \mid y_{1:i-1}^{(n)}, x^{(n)}\right)

In practice, this minimization is done iteratively on small subsets (mini-batches) of the training set using stochastic gradient descent.
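
A minimal Python sketch of the per-sentence part of this loss is shown below; predicted_probs stands for the model's output distributions and is an assumed placeholder rather than the API of a specific framework.

import math

def sentence_cross_entropy(predicted_probs, reference_tokens):
    """Negative log-likelihood of the reference tokens under the model's predictions.

    predicted_probs:  one dict {token: probability} per target position
    reference_tokens: the ground-truth target tokens y_1 ... y_T
    """
    # Summing log-probabilities instead of multiplying raw probabilities avoids floating point underflow.
    return -sum(math.log(dist[tok]) for dist, tok in zip(predicted_probs, reference_tokens))

In an actual training loop, this quantity would be accumulated over a mini-batch and minimized with stochastic gradient descent, as described above.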

Teacher forcing

During inference, auto-regressive decoders use the token generated in the previous step as the input token. However, the vocabulary of target tokens is usually very large, so at the beginning of the training phase, untrained models will almost always pick the wrong token; subsequent steps would then have to work with wrong input tokens, which would slow down training considerably. Instead, teacher forcing is used during the training phase: The model (the “student” in the teacher forcing metaphor) is always fed the previous ground-truth tokens as input for the next token, regardless of what it predicted in the previous step.
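
The sketch below makes this concrete by always appending the ground-truth token to the decoder's input; decoder_step is the same hypothetical model call as in the decoding sketch above.

import math

def training_loss_teacher_forcing(source_repr, reference_tokens, decoder_step):
    """Accumulate the cross-entropy loss while always feeding the ground-truth prefix to the decoder."""
    loss = 0.0
    prefix = ["<s>"]
    for gold_token in reference_tokens:
        probs = decoder_step(source_repr, prefix)  # distribution over the target vocabulary
        loss -= math.log(probs[gold_token])        # cross-entropy contribution of this position
        prefix.append(gold_token)                  # teacher forcing: feed the reference token,
                                                   # not the model's own (possibly wrong) prediction
    return loss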

Translation by prompt engineering LLMs

As outlined in the history section above, instead of using an NMT system that is trained on parallel text, one can also prompt a generative LLM to translate a text. These models differ from an encoder-decoder NMT system in a number of ways:[35]: 1 

  • Generative language models are not trained on the translation task, let alone on a parallel dataset. Instead, they are trained on a language modeling objective, such as predicting the next word in a sequence drawn from a large dataset of text. This dataset can contain documents in many languages, but is in practice dominated by English text.[36] After this pre-training, they are fine-tuned on another task, usually to follow instructions.[39]
  • Since they are not trained on translation, they also do not feature an encoder-decoder architecture. Instead, they just consist of a transformer's decoder.
  • In order to be competitive on the machine translation task, LLMs need to be much larger than other NMT systems. E.g., GPT-3 has 175 billion parameters,[40]: 5  while mBART has 680 million[34]: 727  and the original transformer-big has “only” 213 million.[31]: 9  This means that they are computationally more expensive to train and use.

A generative LLM can be prompted in a zero-shot fashion by just asking it to translate a text into another language without giving any further examples in the prompt. Or one can include one or several example translations in the prompt before asking to translate the text in question. This is then called one-shot or few-shot learning, respectively. For example, the following prompts were used by Hendy et al. (2023) for zero-shot and one-shot translation:[35]

### Translate this sentence from [source language] to [target language], Source:
[source sentence]
### Target:

Translate this into 1. [target language]:
[shot 1 source]
1. [shot 1 reference]
Translate this into 1. [target language]:
[input]
1.
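
For illustration, the snippet below fills in the one-shot template shown above; the send_to_llm call is only a placeholder for whatever API serves the generative model.

def build_one_shot_prompt(target_language, shot_source, shot_reference, input_sentence):
    """Assemble the one-shot prompt format reported by Hendy et al. (2023)."""
    return (
        f"Translate this into 1. {target_language}:\n"
        f"{shot_source}\n"
        f"1. {shot_reference}\n"
        f"Translate this into 1. {target_language}:\n"
        f"{input_sentence}\n"
        f"1."
    )

# Usage (the model call itself is a placeholder, not a real API):
# prompt = build_one_shot_prompt("German", "The weather is nice.", "Das Wetter ist schön.",
#                                "Where is the train station?")
# translation = send_to_llm(prompt)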

References

  1. ^ a b c d e f Koehn, Philipp (2020). Neural Machine Translation. Cambridge University Press.
  2. ^ a b c d e f g Stahlberg, Felix (2020-09-29). "Neural Machine Translation: A Review and Survey". arXiv:1912.02047v2 [cs.CL].
  3. ^ Popel, Martin; Tomkova, Marketa; Tomek, Jakub; Kaiser, Łukasz; Uszkoreit, Jakob; Bojar, Ondřej; Žabokrtský, Zdeněk (2020-09-01). "Transforming machine translation: a deep learning system reaches news translation quality comparable to human professionals". Nature Communications. 11 (1): 4381. doi:10.1038/s41467-020-18073-9. hdl:11346/BIBLIO@id=368112263610994118. ISSN 2041-1723. PMC 7463233. PMID 32873773.
  4. ^ a b Haddow, Barry; Bawden, Rachel; Miceli Barone, Antonio Valerio; Helcl, Jindřich; Birch, Alexandra (2022). "Survey of Low-Resource Machine Translation". Computational Linguistics. 48 (3): 673–732. arXiv:2109.00486. doi:10.1162/coli_a_00446.
  5. ^ a b Poibeau, Thierry (2022). Calzolari, Nicoletta; Béchet, Frédéric; Blache, Philippe; Choukri, Khalid; Cieri, Christopher; Declerck, Thierry; Goggi, Sara; Isahara, Hitoshi; Maegaard, Bente (eds.). "On "Human Parity" and "Super Human Performance" in Machine Translation Evaluation". Proceedings of the Thirteenth Language Resources and Evaluation Conference. Marseille, France: European Language Resources Association: 6018–6023.
  6. ^ a b c d Tan, Zhixing; Wang, Shuo; Yang, Zonghan; Chen, Gang; Huang, Xuancheng; Sun, Maosong; Liu, Yang (2020-12-31). "Neural Machine Translation: A Review of Methods, Resources, and Tools". arXiv:2012.15515 [cs.CL].
  7. ^ Goodfellow, Ian; Bengio, Yoshua; Courville, Aaron (2016). "12.4.5 Neural Machine Translation". Deep Learning. MIT Press. pp. 468–471. Retrieved 2022-12-29.
  8. ^ Allen, Robert B. (1987). Several Studies on Natural Language and Back-Propagation. IEEE First International Conference on Neural Networks. Vol. 2. San Diego. pp. 335–341. Retrieved 2022-12-30.
  9. ^ Chrisman, Lonnie (1991). "Learning Recursive Distributed Representations for Holistic Computation". Connection Science. 3 (4): 345–366. doi:10.1080/09540099108946592. ISSN 0954-0091.
  10. ^ Pollack, Jordan B. (1990). "Recursive distributed representations". Artificial Intelligence. 46 (1): 77–105. doi:10.1016/0004-3702(90)90005-K.
  11. ^ Forcada, Mikel L.; Ñeco, Ramón P. (1997). "Recursive hetero-associative memories for translation". Biological and Artificial Computation: From Neuroscience to Technology. Lecture Notes in Computer Science. Vol. 1240. pp. 453–462. doi:10.1007/BFb0032504. ISBN 978-3-540-63047-0.
  12. ^ Castaño, Asunción; Casacuberta, Francisco (1997). A connectionist approach to machine translation. 5th European Conference on Speech Communication and Technology (Eurospeech 1997). Rhodes, Greece. pp. 91–94. doi:10.21437/Eurospeech.1997-50.
  13. ^ Castaño, Asunción; Casacuberta, Francisco; Vidal, Enrique (1997-07-23). Machine translation using neural networks and finite-state models. Proceedings of the 7th Conference on Theoretical and Methodological Issues in Machine Translation of Natural Languages. St John's College, Santa Fe.
  14. ^ Yang, Shuoheng; Wang, Yuxin; Chu, Xiaowen (2020-02-18). "A Survey of Deep Learning Techniques for Neural Machine Translation". arXiv:2002.07526 [cs.CL].
  15. ^ Schwenk, Holger; Dechelotte, Daniel; Gauvain, Jean-Luc (2006). Continuous Space Language Models for Statistical Machine Translation. Proceedings of the COLING/ACL 2006 Main Conference Poster Sessions. Sydney, Australia. pp. 723–730.
  16. ^ Schwenk, Holger (2007). "Continuous space language models". Computer Speech and Language. 21 (3): 492–518. doi:10.1016/j.csl.2006.09.003.
  17. ^ Schwenk, Holger (2012). Continuous Space Translation Models for Phrase-Based Statistical Machine Translation. Proceedings of COLING 2012: Posters. Mumbai, India. pp. 1071–1080.
  18. ^ a b Kalchbrenner, Nal; Blunsom, Philip (2013). "Recurrent Continuous Translation Models". Proceedings of the Association for Computational Linguistics: 1700–1709.
  19. ^ Cho, Kyunghyun; van Merriënboer, Bart; Gulcehre, Caglar; Bahdanau, Dzmitry; Bougares, Fethi; Schwenk, Holger; Bengio, Yoshua (2014). Learning Phrase Representations using RNN Encoder–Decoder for Statistical Machine Translation. Proceedings of the 2014 Conference on Empirical Methods in Natural Language Processing (EMNLP). Doha, Qatar: Association for Computational Linguistics. pp. 1724–1734. arXiv:1406.1078. doi:10.3115/v1/D14-1179.
  20. ^ Sutskever, Ilya; Vinyals, Oriol; Le, Quoc V. (2014). "Sequence to Sequence Learning with Neural Networks". Advances in Neural Information Processing Systems. 27. Curran Associates, Inc.
  21. ^ Cho, Kyunghyun; van Merriënboer, Bart; Bahdanau, Dzmitry; Bengio, Yoshua (2014). On the Properties of Neural Machine Translation: Encoder–Decoder Approaches. Proceedings of SSST-8, Eighth Workshop on Syntax, Semantics and Structure in Statistical Translation. Doha, Qatar: Association for Computational Linguistics. pp. 103–111. arXiv:1409.1259. doi:10.3115/v1/W14-4012.
  22. ^ Bahdanau, Dzmitry; Cho, Kyunghyun; Bengio, Yoshua (2014). "Neural Machine Translation by Jointly Learning to Align and Translate". arXiv:1409.0473 [cs.CL].
  23. ^ a b Wang, Haifeng; Wu, Hua; He, Zhongjun; Huang, Liang; Church, Kenneth Ward (2022-11-01). "Progress in Machine Translation". Engineering. 18: 143–153. doi:10.1016/j.eng.2021.03.023.
  24. ^ Wu, Yonghui; Schuster, Mike; Chen, Zhifeng; Le, Quoc V.; Norouzi, Mohammad; Macherey, Wolfgang; Krikun, Maxim; Cao, Yuan; Gao, Qin; Macherey, Klaus; Klingner, Jeff; Shah, Apurva; Johnson, Melvin; Liu, Xiaobing; Kaiser, Łukasz (2016). "Google's Neural Machine Translation System: Bridging the Gap between Human and Machine Translation". arXiv:1609.08144 [cs.CL].
  25. ^ Bojar, Ondrej; Chatterjee, Rajen; Federmann, Christian; Graham, Yvette; Haddow, Barry; Huck, Matthias; Yepes, Antonio Jimeno; Koehn, Philipp; Logacheva, Varvara; Monz, Christof; Negri, Matteo; Névéol, Aurélie; Neves, Mariana; Popel, Martin; Post, Matt; Rubino, Raphael; Scarton, Carolina; Specia, Lucia; Turchi, Marco; Verspoor, Karin; Zampieri, Marcos (2016). "Findings of the 2016 Conference on Machine Translation" (PDF). ACL 2016 First Conference on Machine Translation (WMT16). The Association for Computational Linguistics: 131–198. Archived from the original (PDF) on 2018-01-27. Retrieved 2018-01-27.
  26. ^ Gehring, Jonas; Auli, Michael; Grangier, David; Dauphin, Yann (2017). A Convolutional Encoder Model for Neural Machine Translation. Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Vancouver, Canada: Association for Computational Linguistics. pp. 123–135. arXiv:1611.02344. doi:10.18653/v1/P17-1012.
  27. ^ Microsoft Translator (2018-04-18). "Microsoft brings AI-powered translation to end users and developers, whether you're online or offline". Microsoft Translator Blog. Retrieved 2024-04-19.
  28. ^ Coldewey, Devin (2017-08-29). "DeepL schools other online translators with clever machine learning". TechCrunch. Retrieved 2023-12-26.
  29. ^ Leloup, Damien; Larousserie, David (2022-08-29). "Quel est le meilleur service de traduction en ligne?". Le Monde. Retrieved 2023-01-10.
  30. ^ Pakalski, Ingo (2017-08-29). "DeepL im Hands On: Neues Tool übersetzt viel besser als Google und Microsoft". Golem. Retrieved 2023-01-10.
  31. ^ a b Vaswani, Ashish; Shazeer, Noam; Parmar, Niki; Uszkoreit, Jakob; Gomez, Aidan N.; Kaiser, Łukasz; Polosukhin, Illia (2017). Attention Is All You Need. Advances in Neural Information Processing Systems 30 (NIPS 2017). pp. 5998–6008.
  32. ^ Kocmi, Tom; Bawden, Rachel; Bojar, Ondřej; Dvorkovich, Anton; Federmann, Christian; Fishel, Mark; Gowda, Thamme; Graham, Yvette; Grundkiewicz, Roman; Haddow, Barry; Knowles, Rebecca; Koehn, Philipp; Monz, Christof; Morishita, Makoto; Nagata, Masaaki (2022). Koehn, Philipp; Barrault, Loïc; Bojar, Ondřej; Bougares, Fethi; Chatterjee, Rajen; Costa-jussà, Marta R.; Federmann, Christian; Fishel, Mark; Fraser, Alexander (eds.). Findings of the 2022 Conference on Machine Translation (WMT22). Proceedings of the Seventh Conference on Machine Translation (WMT). Abu Dhabi, United Arab Emirates (Hybrid): Association for Computational Linguistics. pp. 1–45.
  33. ^ a b Kocmi, Tom; Avramidis, Eleftherios; Bawden, Rachel; Bojar, Ondřej; Dvorkovich, Anton; Federmann, Christian; Fishel, Mark; Freitag, Markus; Gowda, Thamme; Grundkiewicz, Roman; Haddow, Barry; Koehn, Philipp; Marie, Benjamin; Monz, Christof; Morishita, Makoto (2023). Koehn, Philipp; Haddow, Barry; Kocmi, Tom; Monz, Christof (eds.). Findings of the 2023 Conference on Machine Translation (WMT23): LLMs Are Here but Not Quite There Yet. Proceedings of the Eighth Conference on Machine Translation. Singapore: Association for Computational Linguistics. pp. 1–42. doi:10.18653/v1/2023.wmt-1.1.
  34. ^ a b Liu, Yinhan; Gu, Jiatao; Goyal, Naman; Li, Xian; Edunov, Sergey; Ghazvininejad, Marjan; Lewis, Mike; Zettlemoyer, Luke (2020). "Multilingual Denoising Pre-training for Neural Machine Translation". Transactions of the Association for Computational Linguistics. 8: 726–742. arXiv:2001.08210. doi:10.1162/tacl_a_00343.
  35. ^ a b c Hendy, Amr; Abdelrehim, Mohamed; Sharaf, Amr; Raunak, Vikas; Gabr, Mohamed; Matsushita, Hitokazu; Kim, Young Jin; Afify, Mohamed; Awadalla, Hany (2023-02-18). "How Good Are GPT Models at Machine Translation? A Comprehensive Evaluation". arXiv:2302.09210 [cs.CL].
  36. ^ a b "GPT 3 dataset statistics: languages by character count". OpenAI. 2020-06-01. Retrieved 2023-12-23.
  37. ^ a b Russell, Stuart; Norvig, Peter. Artificial Intelligence: A Modern Approach (4th, global ed.). Pearson.
  38. ^ Federico, Marcello; Cettolo, Mauro (2007). Callison-Burch, Chris; Koehn, Philipp; Fordyce, Cameron Shaw; Monz, Christof (eds.). "Efficient Handling of N-gram Language Models for Statistical Machine Translation". Proceedings of the Second Workshop on Statistical Machine Translation. Prague, Czech Republic: Association for Computational Linguistics: 88–95. doi:10.3115/1626355.1626367.
  39. ^ Radford, Alec; Narasimhan, Karthik; Salimans, Tim; Sutskever, Ilya (2018). Improving Language Understanding by Generative Pre-Training (PDF) (Technical report). OpenAI. Retrieved 2023-12-26.
  40. ^ Brown, Tom; Mann, Benjamin; Ryder, Nick; Subbiah, Melanie; Kaplan, Jared D; Dhariwal, Prafulla; Neelakantan, Arvind; Shyam, Pranav; Sastry, Girish; Askell, Amanda; Agarwal, Sandhini; Herbert-Voss, Ariel; Krueger, Gretchen; Henighan, Tom; Child, Rewon (2020). "Language Models are Few-Shot Learners". Advances in Neural Information Processing Systems. 33. Curran Associates, Inc.: 1877–1901.
