Residual neural network

A residual block in a deep residual network. Here, the residual connection skips two layers.

A residual neural network (also referred to as a residual network or ResNet)[1] is a deep learning architecture in which the layers learn residual functions with reference to the layer inputs. It was developed in 2015 for image recognition, and won the ImageNet Large Scale Visual Recognition Challenge (ILSVRC) of that year.[2][3]

As a point of terminology, "residual connection" refers to the specific architectural motif of $x \mapsto f(x) + x$, where $f$ is an arbitrary neural network module. The motif had been used previously (see §History for details). However, the publication of ResNet made it widely popular for feedforward networks, appearing in neural networks that are seemingly unrelated to ResNet.

The residual connection stabilizes the training and convergence of deep neural networks with hundreds of layers, and is a common motif in deep neural networks, such as transformer models (e.g., BERT, and GPT models such as ChatGPT), the AlphaGo Zero system, the AlphaStar system, and the AlphaFold system.

Mathematics

Residual connection

In a multilayer neural network model, consider a subnetwork with a certain number of stacked layers (e.g., 2 or 3). Denote the underlying function performed by this subnetwork as $H(x)$, where $x$ is the input to the subnetwork. Residual learning re-parameterizes this subnetwork and lets the parameter layers represent a "residual function" $F(x) = H(x) - x$. The output $y$ of this subnetwork is then represented as:

$y = F(x) + x$

The operation of "$+\ x$" is implemented via a "skip connection" that performs an identity mapping to connect the input of the subnetwork with its output. This connection is referred to as a "residual connection" in later work. The function $F(x)$ is often represented by matrix multiplication interlaced with activation functions and normalization operations (e.g., batch normalization or layer normalization). As a whole, one of these subnetworks is referred to as a "residual block".[1] A deep residual network is constructed by simply stacking these blocks.
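
To make the computation concrete, the following is a minimal sketch of a residual block in PyTorch. The two-layer subnetwork `F` and its sizes are illustrative assumptions, not the architecture of any particular published model:

```python
import torch
import torch.nn as nn

class ResidualBlock(nn.Module):
    """Computes y = F(x) + x, where F is a small stack of parameterized layers."""
    def __init__(self, dim: int):
        super().__init__()
        # F(x): two linear layers with a nonlinearity in between (illustrative).
        self.F = nn.Sequential(
            nn.Linear(dim, dim),
            nn.ReLU(),
            nn.Linear(dim, dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.F(x) + x  # skip connection: identity mapping added to F(x)

x = torch.randn(8, 64)
y = ResidualBlock(64)(x)   # same shape as x: torch.Size([8, 64])
```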

Long short-term memory (LSTM) has a memory mechanism that serves as a residual connection.[4] In an LSTM without a forget gate, an input $x_t$ is processed by a function $F$ and added to a memory cell $c_t$, resulting in $c_{t+1} = c_t + F(x_t)$. An LSTM with a forget gate essentially functions as a highway network.

To stabilize the variance of the layers' inputs, it is recommended to replace the residual connections $x + f(x)$ with $x + f(x)/L$, where $L$ is the total number of residual layers.[5]
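
A short sketch of this scaling, reusing the illustrative residual-block style from above; the `num_layers` argument is hypothetical and simply stands for $L$:

```python
import torch
import torch.nn as nn

class ScaledResidualBlock(nn.Module):
    """Residual block with the 1/L scaling: y = x + F(x) / L."""
    def __init__(self, dim: int, num_layers: int):
        super().__init__()
        self.F = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.scale = 1.0 / num_layers  # L = total number of residual layers

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return x + self.scale * self.F(x)
```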

Projection connection

If the function $F$ is of type $F : \mathbb{R}^n \to \mathbb{R}^m$ where $n \neq m$, then $F(x) + x$ is undefined. To handle this special case, a projection connection is used:

$y = F(x) + P(x)$

where $P$ is typically a linear projection, defined by $P(x) = Mx$, where $M$ is an $m \times n$ matrix. The matrix is trained via backpropagation, as is any other parameter of the model.
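
A minimal sketch of a projection connection, assuming a strided 1x1 convolution as the learned projection $P$; the channel counts and stride are illustrative (the original ResNet uses a 1x1 convolution when the spatial size or channel count changes):

```python
import torch
import torch.nn as nn

class ProjectionBlock(nn.Module):
    """y = F(x) + P(x), where P matches F's output shape via a 1x1 convolution."""
    def __init__(self, in_ch: int, out_ch: int, stride: int = 2):
        super().__init__()
        self.F = nn.Sequential(
            nn.Conv2d(in_ch, out_ch, kernel_size=3, stride=stride, padding=1),
            nn.BatchNorm2d(out_ch),
            nn.ReLU(),
            nn.Conv2d(out_ch, out_ch, kernel_size=3, padding=1),
            nn.BatchNorm2d(out_ch),
        )
        # P: learned linear projection (1x1 conv) so F(x) and P(x) have the same shape.
        self.P = nn.Conv2d(in_ch, out_ch, kernel_size=1, stride=stride)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return torch.relu(self.F(x) + self.P(x))

x = torch.randn(1, 64, 56, 56)
y = ProjectionBlock(64, 128)(x)   # shape: (1, 128, 28, 28)
```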

Signal propagation

The introduction of identity mappings facilitates signal propagation in both forward and backward paths.[6]

Forward propagation

If the output of the $\ell$-th residual block is the input to the $(\ell+1)$-th residual block (assuming no activation function between blocks), then the $(\ell+1)$-th input is:

$x_{\ell+1} = F(x_\ell) + x_\ell$

Applying this formulation recursively, e.g.:

$x_{\ell+2} = F(x_{\ell+1}) + x_{\ell+1} = F(x_{\ell+1}) + F(x_\ell) + x_\ell$

yields the general relationship:

$x_L = x_\ell + \sum_{i=\ell}^{L-1} F(x_i)$

where $L$ is the index of a residual block and $\ell$ is the index of some earlier block. This formulation suggests that there is always a signal that is directly sent from a shallower block $\ell$ to a deeper block $L$.
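
The unrolled identity can be checked numerically; the sketch below uses arbitrary fixed functions $F_i$ purely for illustration:

```python
import torch

# Arbitrary residual functions F_i (illustrative only).
Fs = [lambda x, a=a: torch.tanh(a * x) for a in (0.3, 0.7, 1.1)]

x_l = torch.randn(5)

# Forward pass through the residual blocks: x_{i+1} = F_i(x_i) + x_i.
xs = [x_l]
for F in Fs:
    xs.append(F(xs[-1]) + xs[-1])
x_L = xs[-1]

# Unrolled form: x_L = x_l + sum_i F_i(x_i).
x_L_unrolled = x_l + sum(F(x) for F, x in zip(Fs, xs[:-1]))

print(torch.allclose(x_L, x_L_unrolled))  # True
```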

Backward propagation

The residual learning formulation provides the added benefit of mitigating the vanishing gradient problem to some extent. However, it is crucial to acknowledge that the vanishing gradient issue is not the root cause of the degradation problem, which is tackled through the use of normalization. To observe the effect of residual blocks on backpropagation, consider the partial derivative of a loss function $\mathcal{E}$ with respect to some residual block input $x_\ell$. Using the equation above from forward propagation for a later residual block $L$:[6]

$\frac{\partial \mathcal{E}}{\partial x_\ell} = \frac{\partial \mathcal{E}}{\partial x_L} \frac{\partial x_L}{\partial x_\ell} = \frac{\partial \mathcal{E}}{\partial x_L} \left( 1 + \frac{\partial}{\partial x_\ell} \sum_{i=\ell}^{L-1} F(x_i) \right) = \frac{\partial \mathcal{E}}{\partial x_L} + \frac{\partial \mathcal{E}}{\partial x_L} \frac{\partial}{\partial x_\ell} \sum_{i=\ell}^{L-1} F(x_i)$

This formulation suggests that the gradient computation of a shallower layer, $\frac{\partial \mathcal{E}}{\partial x_\ell}$, always has a later term $\frac{\partial \mathcal{E}}{\partial x_L}$ that is directly added. Even if the gradients of the $F(x_i)$ terms are small, the total gradient $\frac{\partial \mathcal{E}}{\partial x_\ell}$ resists vanishing due to the added term $\frac{\partial \mathcal{E}}{\partial x_L}$.
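
The effect can be observed with automatic differentiation; the sketch below compares a plain chain of contracting layers with a residual chain. The scalar layers and the factor 0.05 are illustrative, chosen only so the branch Jacobians are small:

```python
import torch

def chain_gradient(residual: bool, depth: int = 20) -> float:
    """Gradient of the output w.r.t. the input for a deep chain of layers."""
    x = torch.tensor(1.0, requires_grad=True)
    h = x
    for _ in range(depth):
        f = 0.05 * h                    # a branch whose Jacobian is much smaller than 1
        h = h + f if residual else f    # with or without the skip connection
    h.backward()
    return x.grad.item()

print(chain_gradient(residual=False))  # ~0.05**20: essentially vanished
print(chain_gradient(residual=True))   # ~1.05**20 ≈ 2.65: the identity path keeps the gradient alive
```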

Variants of residual blocks

Two variants of convolutional Residual Blocks.[1] Left: a basic block that has two 3x3 convolutional layers. Right: a bottleneck block that has a 1x1 convolutional layer for dimension reduction, a 3x3 convolutional layer, and another 1x1 convolutional layer for dimension restoration.

Basic block

A basic block is the simplest building block studied in the original ResNet.[1] This block consists of two sequential 3x3 convolutional layers and a residual connection. The input and output dimensions of both layers are equal.
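
A minimal sketch of the basic block in PyTorch; the batch normalization after each convolution and the ReLU placement follow the original ResNet design, while the channel count in the example is illustrative:

```python
import torch
import torch.nn as nn

class BasicBlock(nn.Module):
    """Two 3x3 convolutions with equal input/output dimensions, plus a skip connection."""
    def __init__(self, channels: int):
        super().__init__()
        self.conv1 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn1 = nn.BatchNorm2d(channels)
        self.conv2 = nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False)
        self.bn2 = nn.BatchNorm2d(channels)
        self.relu = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.relu(self.bn1(self.conv1(x)))
        out = self.bn2(self.conv2(out))
        return self.relu(out + x)   # residual connection, then final activation

x = torch.randn(1, 64, 32, 32)
print(BasicBlock(64)(x).shape)      # torch.Size([1, 64, 32, 32])
```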

Block diagram of ResNet (2015). It shows a ResNet block with and without the 1x1 convolution. The 1x1 convolution (with stride) can be used to change the shape of the array, which is necessary for residual connection through an upsampling/downsampling layer.

Bottleneck block

A bottleneck block[1] consists of three sequential convolutional layers and a residual connection. The first layer in this block is a 1x1 convolution for dimension reduction (e.g., to 1/4 of the input dimension); the second layer performs a 3x3 convolution; the last layer is another 1x1 convolution for dimension restoration. The models of ResNet-50, ResNet-101, and ResNet-152 are all based on bottleneck blocks.[1]
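
A minimal sketch of the bottleneck block; the reduction factor of 4 matches the ResNet-50/101/152 design, while the channel count in the example is illustrative:

```python
import torch
import torch.nn as nn

class BottleneckBlock(nn.Module):
    """1x1 reduce -> 3x3 -> 1x1 restore, plus a skip connection (ResNet-50 style)."""
    def __init__(self, channels: int, reduction: int = 4):
        super().__init__()
        mid = channels // reduction
        self.body = nn.Sequential(
            nn.Conv2d(channels, mid, kernel_size=1, bias=False),   # reduce dimension
            nn.BatchNorm2d(mid), nn.ReLU(),
            nn.Conv2d(mid, mid, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(mid), nn.ReLU(),
            nn.Conv2d(mid, channels, kernel_size=1, bias=False),   # restore dimension
            nn.BatchNorm2d(channels),
        )
        self.relu = nn.ReLU()

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.relu(self.body(x) + x)

x = torch.randn(1, 256, 14, 14)
print(BottleneckBlock(256)(x).shape)   # torch.Size([1, 256, 14, 14])
```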

Pre-activation block

The pre-activation residual block[6] applies activation functions before applying the residual function $F$. Formally, the computation of a pre-activation residual block can be written as:

$x_{\ell+1} = x_\ell + F(\phi(x_\ell))$

where $\phi$ can be any activation (e.g. ReLU) or normalization (e.g. LayerNorm) operation. This design reduces the number of non-identity mappings between residual blocks, and was used to train models with 200 to over 1000 layers.[6]
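
A minimal sketch of a pre-activation block, with batch normalization and ReLU applied before each convolution; the channel count is illustrative:

```python
import torch
import torch.nn as nn

class PreActBlock(nn.Module):
    """x_{l+1} = x_l + F(phi(x_l)): normalization and activation come first."""
    def __init__(self, channels: int):
        super().__init__()
        self.F = nn.Sequential(
            nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
            nn.BatchNorm2d(channels), nn.ReLU(),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1, bias=False),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # No activation after the addition: the path between blocks stays an identity.
        return x + self.F(x)
```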

Since GPT-2, transformer blocks have been mostly implemented as pre-activation blocks. This is often referred to as "pre-normalization" in the literature of transformer models.[7]
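
In a transformer, "pre-normalization" places the normalization inside each residual branch. The following is a minimal sketch of one pre-norm transformer block; the layer sizes and the use of nn.MultiheadAttention are illustrative assumptions, not GPT-2's exact implementation:

```python
import torch
import torch.nn as nn

class PreNormTransformerBlock(nn.Module):
    """x = x + Attn(LN(x)); x = x + MLP(LN(x))  (pre-normalization)."""
    def __init__(self, dim: int = 256, heads: int = 4):
        super().__init__()
        self.ln1 = nn.LayerNorm(dim)
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.ln2 = nn.LayerNorm(dim)
        self.mlp = nn.Sequential(nn.Linear(dim, 4 * dim), nn.GELU(), nn.Linear(4 * dim, dim))

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        h = self.ln1(x)
        x = x + self.attn(h, h, h, need_weights=False)[0]  # residual around attention
        x = x + self.mlp(self.ln2(x))                      # residual around the MLP
        return x

x = torch.randn(2, 10, 256)                  # (batch, sequence, features)
print(PreNormTransformerBlock()(x).shape)    # torch.Size([2, 10, 256])
```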

The original ResNet-18 architecture. Up to 152 layers were trained in the original publication (as "ResNet-152").[8]

Applications

Originally, ResNet was designed for computer vision.[1][8][9]

The Transformer architecture includes residual connections.

All transformer architectures include residual connections. Indeed, very deep transformers cannot be trained without them.[10]

The original ResNet paper made no claim on being inspired by biological systems. However, later research has related ResNet to biologically-plausible algorithms.[11][12]

A study published in Science in 2023[13] disclosed the complete connectome of an insect brain (specifically that of a fruit fly larva). This study discovered "multilayer shortcuts" that resemble the skip connections in artificial neural networks, including ResNets.

History

Previous work

Residual connections were observed in neuroanatomy, for example by Lorente de Nó (1938).[14]: Fig 3  McCulloch and Pitts (1943) proposed artificial neural networks and considered those with residual connections.[15]: Fig 1.h

In 1961, Frank Rosenblatt described a three-layer multilayer perceptron (MLP) model with skip connections.[16]: 313, Chapter 15  The model was referred to as a "cross-coupled system", and the skip connections were forms of cross-coupled connections.

During the late 1980s, "skip-layer" connections were sometimes used in neural networks.[17][18] For example, Lang and Witbrock (1988)[19] trained a fully connected feedforward network in which each layer skip-connects to all subsequent layers, like the later DenseNet (2016). In this work, the residual connection had the form $x \mapsto F(x) + P(x)$, where $P$ is a randomly initialized projection connection. They termed it a "short-cut connection".

The long short-term memory (LSTM) cell can process data sequentially and keep its hidden state through time. The cell state can function as a generalized residual connection.

Degradation problem

Sepp Hochreiter discovered the vanishing gradient problem in 1991[20] and argued that it explained why the then-prevalent forms of recurrent neural networks did not work for long sequences. He and Schmidhuber later designed the LSTM architecture to solve this problem,[4][21] which has a "cell state" that can function as a generalized residual connection. The highway network (2015)[22][23] applied the idea of an LSTM unfolded in time to feedforward neural networks. ResNet is equivalent to an open-gated highway network.

Standard (left) and unfolded (right) basic recurrent neural network

During the early days of deep learning, there were attempts to train increasingly deep models. Notable examples included AlexNet (2012), which had 8 layers, and VGG-19 (2014), which had 19 layers.[24] However, stacking too many layers led to a steep reduction in training accuracy,[25] known as the "degradation" problem.[1] In theory, adding additional layers to deepen a network should not result in a higher training loss, but this is what happened with VGGNet.[1] If the extra layers could be set as identity mappings, however, the deeper network would represent the same function as its shallower counterpart. There is some evidence that the optimizer is not able to approach identity mappings for the parameterized layers, and the benefit of residual connections was to allow identity mappings by default.[6]

In 2014, the state of the art was training deep neural networks with 20 to 30 layers.[24] The ResNet research team attempted to go deeper by empirically testing various training techniques, until they arrived at the ResNet architecture.[26]

Subsequent work

DenseNet (2016)[27] connects the output of each layer to the input of each subsequent layer:

$x_{\ell+1} = F(x_1, x_2, \dots, x_\ell)$
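
A minimal sketch of this connectivity pattern, using concatenation along the feature dimension to combine earlier outputs (a common DenseNet implementation choice); the layer widths are illustrative:

```python
import torch
import torch.nn as nn

class DenseStack(nn.Module):
    """Each layer receives the concatenation of all earlier layer outputs."""
    def __init__(self, dim: int = 32, depth: int = 4):
        super().__init__()
        # Layer l maps the concatenation of l earlier outputs back to `dim` features.
        self.layers = nn.ModuleList(
            nn.Sequential(nn.Linear(dim * (l + 1), dim), nn.ReLU()) for l in range(depth)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        features = [x]
        for layer in self.layers:
            features.append(layer(torch.cat(features, dim=-1)))
        return features[-1]

print(DenseStack()(torch.randn(2, 32)).shape)   # torch.Size([2, 32])
```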

Stochastic depth[28] is a regularization method that randomly drops a subset of layers and lets the signal propagate through the identity skip connections. Also known as DropPath, this regularizes training for deep models, such as vision transformers.[29]
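
A minimal sketch of stochastic depth applied to a residual branch, dropping the branch per sample during training and rescaling the kept ones; the drop probability and branch architecture are illustrative:

```python
import torch
import torch.nn as nn

class DropPathResidual(nn.Module):
    """y = x + DropPath(F(x)): the residual branch is randomly skipped per sample."""
    def __init__(self, dim: int, drop_prob: float = 0.1):
        super().__init__()
        self.F = nn.Sequential(nn.Linear(dim, dim), nn.ReLU(), nn.Linear(dim, dim))
        self.drop_prob = drop_prob

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        out = self.F(x)
        if self.training and self.drop_prob > 0.0:
            keep = 1.0 - self.drop_prob
            # One Bernoulli draw per sample; scale survivors so the expectation is unchanged.
            mask = out.new_empty(x.shape[0], 1).bernoulli_(keep) / keep
            out = out * mask
        return x + out
```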

ResNeXt block diagram.

ResNeXt (2017) combines the Inception module with ResNet.[30][8]

Squeeze-and-Excitation Networks (2018) added squeeze-and-excitation (SE) modules to ResNet.[31] An SE module is applied after a convolution, and takes a tensor of shape (height, width, channels) as input. Each channel is averaged over the spatial dimensions, resulting in a vector of shape (channels). This is then passed through a multilayer perceptron (with an architecture such as linear-ReLU-linear-sigmoid) before it is multiplied with the original tensor.
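
A minimal sketch of an SE module in PyTorch, using channels-first tensors of shape (batch, channels, height, width) as is conventional there; the reduction ratio is illustrative:

```python
import torch
import torch.nn as nn

class SEModule(nn.Module):
    """Squeeze (global average pool) then excite (per-channel gating)."""
    def __init__(self, channels: int, reduction: int = 16):
        super().__init__()
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction), nn.ReLU(),
            nn.Linear(channels // reduction, channels), nn.Sigmoid(),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Squeeze: average each channel over the spatial dimensions.
        s = x.mean(dim=(2, 3))                    # shape: (batch, channels)
        # Excite: per-channel weights in [0, 1], broadcast back over space.
        w = self.mlp(s)[:, :, None, None]
        return x * w

x = torch.randn(1, 64, 8, 8)
print(SEModule(64)(x).shape)   # torch.Size([1, 64, 8, 8])
```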

References

  1. ^ a b c d e f g h i He, Kaiming; Zhang, Xiangyu; Ren, Shaoqing; Sun, Jian (2016). Deep Residual Learning for Image Recognition (PDF). Conference on Computer Vision and Pattern Recognition. arXiv:1512.03385. doi:10.1109/CVPR.2016.90.
  2. ^ "ILSVRC2015 Results". image-net.org.
  3. ^ Deng, Jia; Dong, Wei; Socher, Richard; Li, Li-Jia; Li, Kai; Li, Fei-Fei (2009). ImageNet: A large-scale hierarchical image database. Conference on Computer Vision and Pattern Recognition. doi:10.1109/CVPR.2009.5206848.
  4. ^ a b Sepp Hochreiter; Jürgen Schmidhuber (1997). "Long short-term memory". Neural Computation. 9 (8): 1735–1780. doi:10.1162/neco.1997.9.8.1735. PMID 9377276. S2CID 1915014.
  5. ^ Hanin, Boris; Rolnick, David (2018). How to Start Training: The Effect of Initialization and Architecture (PDF). Conference on Neural Information Processing Systems. Vol. 31. Curran Associates, Inc. arXiv:1803.01719.
  6. ^ a b c d e He, Kaiming; Zhang, Xiangyu; Ren, Shaoqing; Sun, Jian (2016). Identity Mappings in Deep Residual Networks (PDF). European Conference on Computer Vision. arXiv:1603.05027. doi:10.1007/978-3-319-46493-0_38.
  7. ^ Radford, Alec; Wu, Jeffrey; Child, Rewon; Luan, David; Amodei, Dario; Sutskever, Ilya (14 February 2019). "Language models are unsupervised multitask learners" (PDF). Archived (PDF) from the original on 6 February 2021. Retrieved 19 December 2020.
  8. ^ a b c Zhang, Aston; Lipton, Zachary; Li, Mu; Smola, Alexander J. (2024). "8.6. Residual Networks (ResNet) and ResNeXt". Dive into deep learning. Cambridge New York Port Melbourne New Delhi Singapore: Cambridge University Press. ISBN 978-1-009-38943-3.
  9. ^ Szegedy, Christian; Ioffe, Sergey; Vanhoucke, Vincent; Alemi, Alex (2017). Inception-v4, Inception-ResNet and the impact of residual connections on learning (PDF). AAAI Conference on Artificial Intelligence. arXiv:1602.07261. doi:10.1609/aaai.v31i1.11231.
  10. ^ Dong, Yihe; Cordonnier, Jean-Baptiste; Loukas, Andreas (2021). Attention is not all you need: pure attention loses rank doubly exponentially with depth (PDF). International Conference on Machine Learning. PMLR. pp. 2793–2803. arXiv:2103.03404.
  11. ^ Liao, Qianli; Poggio, Tomaso (2016). "Bridging the Gaps Between Residual Learning, Recurrent Neural Networks and Visual Cortex". arXiv:1604.03640 [cs.LG].
  12. ^ Xiao, Will; Chen, Honglin; Liao, Qianli; Poggio, Tomaso (2019). Biologically-Plausible Learning Algorithms Can Scale to Large Datasets. International Conference on Learning Representations. arXiv:1811.03567.
  13. ^ Winding, Michael; Pedigo, Benjamin; Barnes, Christopher; Patsolic, Heather; Park, Youngser; Kazimiers, Tom; Fushiki, Akira; Andrade, Ingrid; Khandelwal, Avinash; Valdes-Aleman, Javier; Li, Feng; Randel, Nadine; Barsotti, Elizabeth; Correia, Ana; Fetter, Richard; Hartenstein, Volker; Priebe, Carey; Vogelstein, Joshua; Cardona, Albert; Zlatic, Marta (10 Mar 2023). "The connectome of an insect brain". Science. 379 (6636): eadd9330. bioRxiv 10.1101/2022.11.28.516756v1. doi:10.1126/science.add9330. PMC 7614541. PMID 36893230. S2CID 254070919.
  14. ^ Lorente de Nó, Rafael (1938-05-01). "Analysis of the Activity of the Chains of Internuncial Neurons". Journal of Neurophysiology. 1 (3): 207–244. doi:10.1152/jn.1938.1.3.207. ISSN 0022-3077.
  15. ^ McCulloch, Warren S.; Pitts, Walter (1943-12-01). "A logical calculus of the ideas immanent in nervous activity". The Bulletin of Mathematical Biophysics. 5 (4): 115–133. doi:10.1007/BF02478259. ISSN 1522-9602.
  16. ^ Rosenblatt, Frank (1961). Principles of neurodynamics. perceptrons and the theory of brain mechanisms (PDF).
  17. ^ Rumelhart, David E., Geoffrey E. Hinton, and Ronald J. Williams. "Learning internal representations by error propagation", Parallel Distributed Processing. Vol. 1. 1986.
  18. ^ Venables, W. N.; Ripley, Brian D. (1994). Modern Applied Statistics with S-Plus. Springer. pp. 261–262. ISBN 9783540943501.
  19. ^ Lang, Kevin; Witbrock, Michael (1988). "Learning to tell two spirals apart" (PDF). Proceedings of the 1988 Connectionist Models Summer School: 52–59.
  20. ^ Hochreiter, Sepp (1991). Untersuchungen zu dynamischen neuronalen Netzen (PDF) (diploma thesis). Technical University Munich, Institute of Computer Science, advisor: J. Schmidhuber.
  21. ^ Felix A. Gers; Jürgen Schmidhuber; Fred Cummins (2000). "Learning to Forget: Continual Prediction with LSTM". Neural Computation. 12 (10): 2451–2471. CiteSeerX 10.1.1.55.5709. doi:10.1162/089976600300015015. PMID 11032042. S2CID 11598600.
  22. ^ Srivastava, Rupesh Kumar; Greff, Klaus; Schmidhuber, Jürgen (3 May 2015). "Highway Networks". arXiv:1505.00387 [cs.LG].
  23. ^ Srivastava, Rupesh Kumar; Greff, Klaus; Schmidhuber, Jürgen (2015). Training Very Deep Networks (PDF). Conference on Neural Information Processing Systems. arXiv:1507.06228.
  24. ^ a b Simonyan, Karen; Zisserman, Andrew (2015-04-10). "Very Deep Convolutional Networks for Large-Scale Image Recognition". arXiv:1409.1556 [cs.CV].
  25. ^ He, Kaiming; Zhang, Xiangyu; Ren, Shaoqing; Sun, Jian (2015). Delving Deep into Rectifiers: Surpassing Human-Level Performance on ImageNet Classification (PDF). International Conference on Computer Vision. arXiv:1502.01852. doi:10.1109/ICCV.2015.123.
  26. ^ Linn, Allison (2015-12-10). "Microsoft researchers win ImageNet computer vision challenge". The AI Blog. Retrieved 2024-06-29.
  27. ^ Huang, Gao; Liu, Zhuang; van der Maaten, Laurens; Weinberger, Kilian (2017). Densely Connected Convolutional Networks (PDF). Conference on Computer Vision and Pattern Recognition. arXiv:1608.06993. doi:10.1109/CVPR.2017.243.
  28. ^ Huang, Gao; Sun, Yu; Liu, Zhuang; Weinberger, Kilian (2016). Deep Networks with Stochastic Depth (PDF). European Conference on Computer Vision. arXiv:1603.09382. doi:10.1007/978-3-319-46493-0_39.
  29. ^ Lee, Youngwan; Kim, Jonghee; Willette, Jeffrey; Hwang, Sung Ju (2022). MPViT: Multi-Path Vision Transformer for Dense Prediction (PDF). Conference on Computer Vision and Pattern Recognition. pp. 7287–7296. arXiv:2112.11010. doi:10.1109/CVPR52688.2022.00714.
  30. ^ Xie, Saining; Girshick, Ross; Dollar, Piotr; Tu, Zhuowen; He, Kaiming (2017). Aggregated Residual Transformations for Deep Neural Networks (PDF). Conference on Computer Vision and Pattern Recognition. pp. 1492–1500. arXiv:1611.05431. doi:10.1109/CVPR.2017.634.
  31. ^ Hu, Jie; Shen, Li; Sun, Gang (2018). Squeeze-and-Excitation Networks (PDF). Conference on Computer Vision and Pattern Recognition. pp. 7132–7141. arXiv:1709.01507. doi:10.1109/CVPR.2018.00745.
