
BERT (language model)

Bidirectional Encoder Representations from Transformers (BERT)
Original author(s): Google AI
Initial release: October 31, 2018
Repository: https://github.com/google-research/bert
License: Apache 2.0
Website: arxiv.org/abs/1810.04805

Bidirectional encoder representations from transformers (BERT) is a language model introduced in October 2018 by researchers at Google.[1][2] It learns to represent text as a sequence of vectors via self-supervised learning, and it uses the transformer encoder architecture. It was notable for its dramatic improvement over previous state-of-the-art models, and as an early example of a large language model. As of 2020, BERT was a ubiquitous baseline in natural language processing (NLP) experiments.[3]

BERT is trained by masked token prediction and next sentence prediction. As a result of this training process, BERT learns contextual, latent representations of tokens, similar to ELMo and GPT-2.[4] It found applications in many natural language processing tasks, such as coreference resolution and polysemy resolution.[5] It is an evolutionary step over ELMo, and spawned the study of "BERTology", which attempts to interpret what is learned by BERT.[3]

BERT was originally implemented in the English language at two model sizes, BERTBASE (110 million parameters) and BERTLARGE (340 million parameters). Both were trained on the Toronto BookCorpus[6] (800M words) and English Wikipedia (2,500M words). The weights were released on GitHub.[7] On March 11, 2020, 24 smaller models were released, the smallest being BERTTINY with just 4 million parameters.[7]

Architecture

High-level schematic diagram of BERT. It takes in a text, tokenizes it into a sequence of tokens, adds optional special tokens, and applies a Transformer encoder. The hidden states of the last layer can then be used as contextual word embeddings.

BERT is an "encoder-only" transformer architecture. At a high level, BERT consists of 4 modules:

  • Tokenizer: This module converts a piece of English text into a sequence of integers ("tokens").
  • Embedding: This module converts the sequence of tokens into an array of real-valued vectors representing the tokens. It represents the conversion of discrete token types into a lower-dimensional Euclidean space.
  • Encoder: a stack of Transformer blocks with self-attention, but without causal masking.
  • Task head: This module converts the final representation vectors into one-hot encoded tokens again by producing a predicted probability distribution over the token types. It can be viewed as a simple decoder, decoding the latent representation into token types, or as an "un-embedding layer".

The task head is necessary for pre-training, but it is often unnecessary for so-called "downstream tasks", such as question answering or sentiment classification. Instead, one removes the task head, replaces it with a newly initialized module suited to the task, and fine-tunes the new module. The latent vector representation of the model is fed directly into this new module, allowing for sample-efficient transfer learning.[1][8]

Encoder-only attention is all-to-all.

Embedding

This section describes the embedding used by BERTBASE. The other one, BERTLARGE, is similar, just larger.

The tokenizer of BERT is WordPiece, which is a sub-word strategy like byte pair encoding. Its vocabulary size is 30,000, and any token not appearing in its vocabulary is replaced by [UNK] ("unknown").
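For illustration, a minimal tokenization sketch using the Hugging Face transformers library (a third-party reimplementation, not the original repository). It assumes the pretrained bert-base-uncased vocabulary can be downloaded, and the exact sub-word splits shown in the comments are indicative only.

    from transformers import BertTokenizer

    tokenizer = BertTokenizer.from_pretrained("bert-base-uncased")

    # Common words remain whole; rarer words are split into WordPiece sub-words.
    print(tokenizer.tokenize("my dog is cute"))   # ['my', 'dog', 'is', 'cute']
    print(tokenizer.tokenize("unaffable"))        # e.g. ['una', '##ffa', '##ble']

    # Anything that cannot be built from the 30,000-token vocabulary becomes [UNK].
    print(tokenizer.tokenize("☃"))                # ['[UNK]']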

The three kinds of embedding used by BERT: token type, position, and segment type.

The first layer is the embedding layer, which contains three components: token type embeddings, position embeddings, and segment type embeddings.

  • Token type: The token type is a standard embedding layer, translating a one-hot vector into a dense vector based on its token type.
  • Position: The position embeddings are based on a token's position in the sequence. BERT uses learned absolute position embeddings, where each position in the sequence is mapped to a trainable real-valued vector.
  • Segment type: Using a vocabulary of just 0 or 1, this embedding layer produces a dense vector based on whether the token belongs to the first or second text segment in that input. In other words, type-1 tokens are all tokens that appear after the [SEP] special token. All prior tokens are type-0.

The three embedding vectors are added together, forming the initial token representation as a function of these three pieces of information. After embedding, the vector representation is normalized using a LayerNorm operation, outputting a 768-dimensional vector for each input token. After this, the representation vectors are passed forward through 12 Transformer encoder blocks, and are decoded back to 30,000-dimensional vocabulary space using a basic affine transformation layer.
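As a concrete illustration, the following is a minimal PyTorch sketch of such an embedding layer, using the BERTBASE sizes given above. The maximum sequence length of 512 is an assumption not stated in this section, and the class and variable names are illustrative rather than taken from the original code.

    import torch
    import torch.nn as nn

    VOCAB_SIZE, MAX_POSITIONS, NUM_SEGMENTS, HIDDEN = 30_000, 512, 2, 768  # 512 is assumed

    class BertStyleEmbeddings(nn.Module):
        def __init__(self):
            super().__init__()
            self.token = nn.Embedding(VOCAB_SIZE, HIDDEN)        # token type embeddings
            self.position = nn.Embedding(MAX_POSITIONS, HIDDEN)  # learned absolute positions
            self.segment = nn.Embedding(NUM_SEGMENTS, HIDDEN)    # segment 0 or 1
            self.norm = nn.LayerNorm(HIDDEN)

        def forward(self, token_ids, segment_ids):
            # token_ids, segment_ids: integer tensors of shape (batch, seq_len)
            positions = torch.arange(token_ids.size(1), device=token_ids.device)
            summed = self.token(token_ids) + self.position(positions) + self.segment(segment_ids)
            return self.norm(summed)  # one 768-dimensional vector per input token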

Architectural family

The encoder stack of BERT has two free parameters: L, the number of layers, and H, the hidden size. There are always H/64 self-attention heads, and the feed-forward/filter size is always 4H. By varying these two numbers, one obtains an entire family of BERT models.[9]

For BERT:

  • the feed-forward size and filter size are synonymous. Both of them denote the number of dimensions in the middle layer of the feed-forward network.
  • the hidden size and embedding size are synonymous. Both of them denote the number of real numbers used to represent a token.

The notation for the encoder stack is written as L/H. For example, BERTBASE is written as 12L/768H, BERTLARGE as 24L/1024H, and BERTTINY as 2L/128H.
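For illustration, the family can be parameterized with the Hugging Face BertConfig class (a third-party library, used here only to make the L/H relationships concrete):

    from transformers import BertConfig

    def bert_family_config(L: int, H: int) -> BertConfig:
        return BertConfig(
            num_hidden_layers=L,          # L: number of encoder layers
            hidden_size=H,                # H: hidden/embedding size
            num_attention_heads=H // 64,  # number of self-attention heads
            intermediate_size=4 * H,      # feed-forward/filter size
        )

    base = bert_family_config(12, 768)     # BERTBASE  = 12L/768H
    large = bert_family_config(24, 1024)   # BERTLARGE = 24L/1024H
    tiny = bert_family_config(2, 128)      # BERTTINY  = 2L/128H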

Training

Pre-training

BERT was pre-trained simultaneously on two tasks.[10]

Masked language modeling

The masked language modeling task.

In masked language modeling, 15% of tokens would be randomly selected for the masked-prediction task, and the training objective was to predict the masked token given its context. In more detail, the selected token is

  • replaced with a [MASK] token with probability 80%,
  • replaced with a random word token with probability 10%,
  • not replaced with probability 10%.

The reason not all selected tokens are masked is to avoid the dataset shift problem. The dataset shift problem arises when the distribution of inputs seen during training differs significantly from the distribution encountered during inference. A trained BERT model might be applied to word representation (like word2vec), where it would be run over sentences that do not contain any [MASK] tokens. It was later found that more diverse training objectives are generally better.[11]

As an illustrative example, consider the sentence "my dog is cute". It would first be divided into tokens like "my1 dog2 is3 cute4". Then a random token in the sentence would be picked. Let it be the 4th one "cute4". Next, there would be three possibilities:

  • with probability 80%, the chosen token is masked, resulting in "my1 dog2 is3 [MASK]4";
  • with probability 10%, the chosen token is replaced by a uniformly sampled random token, such as "happy", resulting in "my1 dog2 is3 happy4";
  • with probability 10%, nothing is done, resulting in "my1 dog2 is3 cute4".

After processing the input text, the model's 4th output vector is passed to its decoder layer, which outputs a probability distribution over its 30,000-dimensional vocabulary space.
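The selection-and-replacement rule can be sketched as follows. This is a simplified, word-level illustration of the 80%/10%/10% rule; real BERT operates on WordPiece token ids, and the small random-token pool here stands in for the full vocabulary.

    import random

    def mask_tokens(tokens, vocab, select_prob=0.15):
        inputs, labels = [], []
        for tok in tokens:
            if random.random() < select_prob:            # token selected for prediction
                labels.append(tok)                       # the model must recover the original
                r = random.random()
                if r < 0.8:
                    inputs.append("[MASK]")              # 80%: replace with [MASK]
                elif r < 0.9:
                    inputs.append(random.choice(vocab))  # 10%: replace with a random token
                else:
                    inputs.append(tok)                   # 10%: leave unchanged
            else:
                inputs.append(tok)
                labels.append(None)                      # not part of the training loss
        return inputs, labels

    print(mask_tokens("my dog is cute".split(), vocab=["happy", "blue", "ran"]))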

Next sentence prediction

The next sentence prediction task.

Given two spans of text, the model predicts whether these two spans appeared sequentially in the training corpus, outputting either [IsNext] or [NotNext]. The first span starts with the special token [CLS] (for "classify"). The two spans are separated by the special token [SEP] (for "separate"). After processing the two spans, the first output vector (the vector coding for [CLS]) is passed to a separate neural network for the binary classification into [IsNext] and [NotNext].

  • For example, given "[CLS] my dog is cute [SEP] he likes playing" the model should output token [IsNext].
  • Given "[CLS] my dog is cute [SEP] how do magnets work" the model should output token [NotNext].
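A minimal sketch of how such an input pair and the binary classification head can be set up (illustrative PyTorch; the function and variable names are not taken from the original codebase):

    import torch.nn as nn

    def format_nsp_input(segment_a, segment_b):
        # [CLS] A [SEP] B [SEP]; tokens up to and including the first [SEP] are segment 0
        tokens = ["[CLS]"] + segment_a + ["[SEP]"] + segment_b + ["[SEP]"]
        segment_ids = [0] * (len(segment_a) + 2) + [1] * (len(segment_b) + 1)
        return tokens, segment_ids

    # The output vector at the [CLS] position is mapped to {IsNext, NotNext}.
    nsp_head = nn.Linear(768, 2)

    tokens, segment_ids = format_nsp_input("my dog is cute".split(), "he likes playing".split())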

Fine-tuning

BERT is meant as a general pretrained model for various applications in natural language processing. That is, after pre-training, BERT can be fine-tuned with fewer resources on smaller datasets to optimize its performance on specific tasks such as natural language inference and text classification, and sequence-to-sequence-based language generation tasks such as question answering and conversational response generation.[12]

The original BERT paper published results demonstrating that a small amount of finetuning (for BERTLARGE, 1 hour on 1 Cloud TPU) allowed it to achieve state-of-the-art performance on a number of natural language understanding tasks,[1] including GLUE, SQuAD,[13] and SWAG.[14]

In the original paper, all parameters of BERT were fine-tuned, and the authors recommended that, for downstream applications that are text classifications, the output vector at the [CLS] input token be fed into a linear-softmax layer to produce the label outputs.[1]

The original code base defined the final linear layer as a "pooler layer", in analogy with global pooling in computer vision, even though it simply discards all output tokens except the one corresponding to [CLS].[15]
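A sketch of this fine-tuning setup for text classification (illustrative PyTorch, not the original google-research/bert code; the linear-plus-tanh pooler mirrors the "pooler layer" described above):

    import torch.nn as nn

    class ClassificationHead(nn.Module):
        def __init__(self, hidden=768, num_labels=2):
            super().__init__()
            self.pooler = nn.Sequential(nn.Linear(hidden, hidden), nn.Tanh())
            self.classifier = nn.Linear(hidden, num_labels)

        def forward(self, encoder_output):          # (batch, seq_len, hidden)
            cls_vector = encoder_output[:, 0]       # keep only the [CLS] position
            logits = self.classifier(self.pooler(cls_vector))
            return logits.softmax(dim=-1)           # predicted label probabilities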

Cost

BERT was trained on the BookCorpus (800M words) and a filtered version of English Wikipedia (2,500M words) without lists, tables, and headers.

Training BERTBASE on 4 cloud TPU (16 TPU chips total) took 4 days, at an estimated cost of 500 USD.[7] Training BERTLARGE on 16 cloud TPU (64 TPU chips total) took 4 days.[1]

Interpretation

Language models like ELMo, GPT-2, and BERT spawned the study of "BERTology", which attempts to interpret what is learned by these models. Their performance on these natural language understanding tasks is not yet well understood.[3][16][17] Several research publications in 2018 and 2019 focused on investigating how BERT's output changes as a result of carefully chosen input sequences,[18][19] on analyzing internal vector representations through probing classifiers,[20][21] and on the relationships represented by attention weights.[16][17]

The high performance of the BERT model could also be attributed[citation needed] to the fact that it is bidirectionally trained. This means that BERT, based on the Transformer architecture, applies its self-attention mechanism to learn information from the text on both the left and the right of each token during training, and consequently gains a deep understanding of the context. For example, the word fine can have two different meanings depending on the context (I feel fine today, She has fine blond hair). BERT considers the words surrounding the target word fine from both the left and the right side.

However, this comes at a cost: because the encoder-only architecture lacks a decoder, BERT cannot be prompted and cannot generate text, and bidirectional models in general do not work effectively without the right-side context, making them difficult to prompt. As an illustrative example, if one wishes to use BERT to continue the sentence fragment "Today, I went to", then one would naively mask out all the following tokens as "Today, I went to [MASK] [MASK] [MASK] ... [MASK] .", where the number of [MASK] tokens is the length of the sentence one wishes to extend to. However, this constitutes a dataset shift, as during training BERT never saw sentences with that many tokens masked out. Consequently, its performance degrades. More sophisticated techniques allow text generation, but at a high computational cost.[22]

History

BERT was originally published by Google researchers Jacob Devlin, Ming-Wei Chang, Kenton Lee, and Kristina Toutanova. The design has its origins in pre-training contextual representations, including semi-supervised sequence learning,[23] generative pre-training, ELMo,[24] and ULMFit.[25] Unlike previous models, BERT is a deeply bidirectional, unsupervised language representation, pre-trained using only a plain text corpus. Context-free models such as word2vec or GloVe generate a single word embedding representation for each word in the vocabulary, whereas BERT takes into account the context of each occurrence of a given word. For instance, whereas "running" has the same word2vec representation in both "He is running a company" and "He is running a marathon", BERT provides a contextualized embedding that differs according to the sentence.[4]

On October 25, 2019, Google announced that they had started applying BERT models for English language search queries within the US.[26] On December 9, 2019, it was reported that BERT had been adopted by Google Search for over 70 languages.[27][28] In October 2020, almost every single English-based query was processed by a BERT model.[29]

Variants

The BERT models were influential and inspired many variants.

RoBERTa (2019)[30] was an engineering improvement. It preserves BERT's architecture (slightly larger, at 355M parameters), but improves its training, changing key hyperparameters, removing the next-sentence prediction task, and using much larger mini-batch sizes.

DistilBERT (2019) distills BERTBASE to a model with just 60% of its parameters (66M), while preserving 95% of its benchmark scores.[31][32] Similarly, TinyBERT (2019)[33] is a distilled model with just 28% of its parameters.

ALBERT (2019)[34] shared parameters across layers, and experimented with independently varying the hidden size and the word-embedding layer's output size as two hyperparameters. The authors also replaced the next sentence prediction task with the sentence-order prediction (SOP) task, where the model must distinguish the correct order of two consecutive text segments from their reversed order.

ELECTRA (2020)[35] applied the idea of generative adversarial networks to the MLM task. Instead of masking out tokens, a small language model generates random plausible substitutions, and a larger network identifies these replaced tokens. The small model aims to fool the large model.

DeBERTa

DeBERTa (2020)[36] is a significant architectural variant, with disentangled attention. Its key idea is to treat the positional and token encodings separately throughout the attention mechanism. Instead of combining the positional encoding (x_position) and token encoding (x_token) into a single input vector (x_input = x_token + x_position), DeBERTa keeps them separate as a tuple (x_token, x_position). Then, at each self-attention layer, DeBERTa computes three distinct attention matrices, rather than the single attention matrix used in BERT:[note 1]

Attention type       Query type  Key type   Example
Content-to-content   Token       Token      "European"; "Union", "continent"
Content-to-position  Token       Position   [adjective]; +1, +2, +3
Position-to-content  Position    Token      -1; "not", "very"

The three attention matrices are added together element-wise, then passed through a softmax layer and multiplied by a projection matrix.

Absolute position encoding is included in the final self-attention layer as additional input.
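A highly simplified sketch of the disentangled score computation (projection matrices, relative-position indexing, and multi-head details are omitted; the 1/sqrt(3d) scaling follows the DeBERTa paper, and the function name is illustrative):

    import torch

    def disentangled_attention_weights(Qc, Kc, Qp, Kp):
        # Qc, Kc: queries/keys computed from token (content) embeddings, shape (seq, d)
        # Qp, Kp: queries/keys computed from position embeddings, shape (seq, d)
        c2c = Qc @ Kc.T                               # content-to-content
        c2p = Qc @ Kp.T                               # content-to-position
        p2c = Qp @ Kc.T                               # position-to-content
        d = Qc.size(-1)
        scores = (c2c + c2p + p2c) / (3 * d) ** 0.5   # sum the three matrices, then scale
        return torch.softmax(scores, dim=-1)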

Notes

  1. ^ The position-to-position type was omitted by the authors for being useless.

References

  1. ^ a b c d e Devlin, Jacob; Chang, Ming-Wei; Lee, Kenton; Toutanova, Kristina (October 11, 2018). "BERT: Pre-training of Deep Bidirectional Transformers for Language Understanding". arXiv:1810.04805v2 [cs.CL].
  2. ^ "Open Sourcing BERT: State-of-the-Art Pre-training for Natural Language Processing". Google AI Blog. November 2, 2018. Retrieved November 27, 2019.
  3. ^ a b c Rogers, Anna; Kovaleva, Olga; Rumshisky, Anna (2020). "A Primer in BERTology: What We Know About How BERT Works". Transactions of the Association for Computational Linguistics. 8: 842–866. arXiv:2002.12327. doi:10.1162/tacl_a_00349. S2CID 211532403.
  4. ^ a b Ethayarajh, Kawin (September 1, 2019), How Contextual are Contextualized Word Representations? Comparing the Geometry of BERT, ELMo, and GPT-2 Embeddings, arXiv:1909.00512, retrieved August 5, 2024
  5. ^ Anderson, Dawn (November 5, 2019). "A deep dive into BERT: How BERT launched a rocket into natural language understanding". Search Engine Land. Retrieved August 6, 2024.
  6. ^ Zhu, Yukun; Kiros, Ryan; Zemel, Rich; Salakhutdinov, Ruslan; Urtasun, Raquel; Torralba, Antonio; Fidler, Sanja (2015). "Aligning Books and Movies: Towards Story-Like Visual Explanations by Watching Movies and Reading Books". pp. 19–27. arXiv:1506.06724 [cs.CV].
  7. ^ a b c "BERT". GitHub. Retrieved March 28, 2023.
  8. ^ Zhang, Tianyi; Wu, Felix; Katiyar, Arzoo; Weinberger, Kilian Q.; Artzi, Yoav (March 11, 2021), Revisiting Few-sample BERT Fine-tuning, arXiv:2006.05987, retrieved September 16, 2024
  9. ^ Turc, Iulia; Chang, Ming-Wei; Lee, Kenton; Toutanova, Kristina (September 25, 2019), Well-Read Students Learn Better: On the Importance of Pre-training Compact Models, arXiv:1908.08962, retrieved July 26, 2024
  10. ^ "Summary of the models — transformers 3.4.0 documentation". huggingface.co. Retrieved February 16, 2023.
  11. ^ Tay, Yi; Dehghani, Mostafa; Tran, Vinh Q.; Garcia, Xavier; Wei, Jason; Wang, Xuezhi; Chung, Hyung Won; Shakeri, Siamak; Bahri, Dara (February 28, 2023), UL2: Unifying Language Learning Paradigms, arXiv:2205.05131, retrieved August 5, 2024
  12. ^ a b Zhang, Aston; Lipton, Zachary; Li, Mu; Smola, Alexander J. (2024). "11.9. Large-Scale Pretraining with Transformers". Dive into deep learning. Cambridge New York Port Melbourne New Delhi Singapore: Cambridge University Press. ISBN 978-1-009-38943-3.
  13. ^ Rajpurkar, Pranav; Zhang, Jian; Lopyrev, Konstantin; Liang, Percy (October 10, 2016). "SQuAD: 100,000+ Questions for Machine Comprehension of Text". arXiv:1606.05250 [cs.CL].
  14. ^ Zellers, Rowan; Bisk, Yonatan; Schwartz, Roy; Choi, Yejin (August 15, 2018). "SWAG: A Large-Scale Adversarial Dataset for Grounded Commonsense Inference". arXiv:1808.05326 [cs.CL].
  15. ^ "bert/modeling.py at master · google-research/bert". GitHub. Retrieved September 16, 2024.
  16. ^ a b Kovaleva, Olga; Romanov, Alexey; Rogers, Anna; Rumshisky, Anna (November 2019). "Revealing the Dark Secrets of BERT". Proceedings of the 2019 Conference on Empirical Methods in Natural Language Processing and the 9th International Joint Conference on Natural Language Processing (EMNLP-IJCNLP). pp. 4364–4373. doi:10.18653/v1/D19-1445. S2CID 201645145.
  17. ^ a b Clark, Kevin; Khandelwal, Urvashi; Levy, Omer; Manning, Christopher D. (2019). "What Does BERT Look at? An Analysis of BERT's Attention". Proceedings of the 2019 ACL Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. Stroudsburg, PA, USA: Association for Computational Linguistics: 276–286. arXiv:1906.04341. doi:10.18653/v1/w19-4828.
  18. ^ Khandelwal, Urvashi; He, He; Qi, Peng; Jurafsky, Dan (2018). "Sharp Nearby, Fuzzy Far Away: How Neural Language Models Use Context". Proceedings of the 56th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers). Stroudsburg, PA, USA: Association for Computational Linguistics: 284–294. arXiv:1805.04623. doi:10.18653/v1/p18-1027. S2CID 21700944.
  19. ^ Gulordava, Kristina; Bojanowski, Piotr; Grave, Edouard; Linzen, Tal; Baroni, Marco (2018). "Colorless Green Recurrent Networks Dream Hierarchically". Proceedings of the 2018 Conference of the North American Chapter of the Association for Computational Linguistics: Human Language Technologies, Volume 1 (Long Papers). Stroudsburg, PA, USA: Association for Computational Linguistics. pp. 1195–1205. arXiv:1803.11138. doi:10.18653/v1/n18-1108. S2CID 4460159.
  20. ^ Giulianelli, Mario; Harding, Jack; Mohnert, Florian; Hupkes, Dieuwke; Zuidema, Willem (2018). "Under the Hood: Using Diagnostic Classifiers to Investigate and Improve how Language Models Track Agreement Information". Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. Stroudsburg, PA, USA: Association for Computational Linguistics: 240–248. arXiv:1808.08079. doi:10.18653/v1/w18-5426. S2CID 52090220.
  21. ^ Zhang, Kelly; Bowman, Samuel (2018). "Language Modeling Teaches You More than Translation Does: Lessons Learned Through Auxiliary Syntactic Task Analysis". Proceedings of the 2018 EMNLP Workshop BlackboxNLP: Analyzing and Interpreting Neural Networks for NLP. Stroudsburg, PA, USA: Association for Computational Linguistics: 359–361. doi:10.18653/v1/w18-5448.
  22. ^ Patel, Ajay; Li, Bryan; Mohammad Sadegh Rasooli; Constant, Noah; Raffel, Colin; Callison-Burch, Chris (2022). "Bidirectional Language Models Are Also Few-shot Learners". arXiv:2209.14500 [cs.LG].
  23. ^ Dai, Andrew; Le, Quoc (November 4, 2015). "Semi-supervised Sequence Learning". arXiv:1511.01432 [cs.LG].
  24. ^ Peters, Matthew; Neumann, Mark; Iyyer, Mohit; Gardner, Matt; Clark, Christopher; Lee, Kenton; Luke, Zettlemoyer (February 15, 2018). "Deep contextualized word representations". arXiv:1802.05365v2 [cs.CL].
  25. ^ Howard, Jeremy; Ruder, Sebastian (January 18, 2018). "Universal Language Model Fine-tuning for Text Classification". arXiv:1801.06146v5 [cs.CL].
  26. ^ Nayak, Pandu (October 25, 2019). "Understanding searches better than ever before". Google Blog. Retrieved December 10, 2019.
  27. ^ "Understanding searches better than ever before". Google. October 25, 2019. Retrieved August 6, 2024.
  28. ^ Montti, Roger (December 10, 2019). "Google's BERT Rolls Out Worldwide". Search Engine Journal. Retrieved December 10, 2019.
  29. ^ "Google: BERT now used on almost every English query". Search Engine Land. October 15, 2020. Retrieved November 24, 2020.
  30. ^ Liu, Yinhan; Ott, Myle; Goyal, Naman; Du, Jingfei; Joshi, Mandar; Chen, Danqi; Levy, Omer; Lewis, Mike; Zettlemoyer, Luke; Stoyanov, Veselin (2019). "RoBERTa: A Robustly Optimized BERT Pretraining Approach". arXiv:1907.11692 [cs.CL].
  31. ^ Sanh, Victor; Debut, Lysandre; Chaumond, Julien; Wolf, Thomas (February 29, 2020), DistilBERT, a distilled version of BERT: smaller, faster, cheaper and lighter, arXiv:1910.01108, retrieved August 5, 2024
  32. ^ "DistilBERT". huggingface.co. Retrieved August 5, 2024.
  33. ^ Jiao, Xiaoqi; Yin, Yichun; Shang, Lifeng; Jiang, Xin; Chen, Xiao; Li, Linlin; Wang, Fang; Liu, Qun (October 15, 2020), TinyBERT: Distilling BERT for Natural Language Understanding, arXiv:1909.10351, retrieved August 11, 2024
  34. ^ Lan, Zhenzhong; Chen, Mingda; Goodman, Sebastian; Gimpel, Kevin; Sharma, Piyush; Soricut, Radu (February 8, 2020), ALBERT: A Lite BERT for Self-supervised Learning of Language Representations, arXiv:1909.11942, retrieved August 11, 2024
  35. ^ Clark, Kevin; Luong, Minh-Thang; Le, Quoc V.; Manning, Christopher D. (March 23, 2020), ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators, arXiv:2003.10555, retrieved August 11, 2024
  36. ^ He, Pengcheng; Liu, Xiaodong; Gao, Jianfeng; Chen, Weizhu (October 6, 2021), DeBERTa: Decoding-enhanced BERT with Disentangled Attention, arXiv:2006.03654, retrieved August 11, 2024

Further reading

  • Rogers, Anna; Kovaleva, Olga; Rumshisky, Anna (2020). "A Primer in BERTology: What we know about how BERT works". arXiv:2002.12327 [cs.CL].
