{"id":3980,"date":"2020-12-29T16:53:20","date_gmt":"2020-12-29T21:53:20","guid":{"rendered":"http:\/\/skimai.com\/?p=3980"},"modified":"2024-05-20T07:38:31","modified_gmt":"2024-05-20T12:38:31","slug":"comment-entrainer-le-modele-linguistique-electra-pour-lespagnol","status":"publish","type":"post","link":"https:\/\/skimai.com\/fr\/how-to-train-electra-language-model-for-spanish\/","title":{"rendered":"Tutoriel : Comment pr\u00e9-entra\u00eener ELECTRA \u00e0 l'espagnol \u00e0 partir de z\u00e9ro"},"content":{"rendered":"<div id=\"ez-toc-container\" class=\"ez-toc-v2_0_82_1 counter-hierarchy ez-toc-counter ez-toc-grey ez-toc-container-direction\">\n<div class=\"ez-toc-title-container\">\n<p class=\"ez-toc-title\" style=\"cursor:inherit\">Table of Contents<\/p>\n<span class=\"ez-toc-title-toggle\"><a href=\"#\" class=\"ez-toc-pull-right ez-toc-btn ez-toc-btn-xs ez-toc-btn-default ez-toc-toggle\" aria-label=\"Toggle Table of Content\"><span class=\"ez-toc-js-icon-con\"><span class=\"\"><span class=\"eztoc-hide\" style=\"display:none;\">Toggle<\/span><span class=\"ez-toc-icon-toggle-span\"><svg style=\"fill: #999;color:#999\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" class=\"list-377408\" width=\"20px\" height=\"20px\" viewBox=\"0 0 24 24\" fill=\"none\"><path d=\"M6 6H4v2h2V6zm14 0H8v2h12V6zM4 11h2v2H4v-2zm16 0H8v2h12v-2zM4 16h2v2H4v-2zm16 0H8v2h12v-2z\" fill=\"currentColor\"><\/path><\/svg><svg style=\"fill: #999;color:#999\" class=\"arrow-unsorted-368013\" xmlns=\"http:\/\/www.w3.org\/2000\/svg\" width=\"10px\" height=\"10px\" viewBox=\"0 0 24 24\" version=\"1.2\" baseProfile=\"tiny\"><path d=\"M18.2 9.3l-6.2-6.3-6.2 6.3c-.2.2-.3.4-.3.7s.1.5.3.7c.2.2.4.3.7.3h11c.3 0 .5-.1.7-.3.2-.2.3-.5.3-.7s-.1-.5-.3-.7zM5.8 14.7l6.2 6.3 6.2-6.3c.2-.2.3-.5.3-.7s-.1-.5-.3-.7c-.2-.2-.4-.3-.7-.3h-11c-.3 0-.5.1-.7.3-.2.2-.3.5-.3.7s.1.5.3.7z\"\/><\/svg><\/span><\/span><\/span><\/a><\/span><\/div>\n<nav><ul class='ez-toc-list ez-toc-list-level-1 ' ><li class='ez-toc-page-1 ez-toc-heading-level-1'><a class=\"ez-toc-link ez-toc-heading-1\" href=\"https:\/\/skimai.com\/fr\/how-to-train-electra-language-model-for-spanish\/#Tutorial_How_to_pre-train_ELECTRA_for_Spanish_from_Scratch\" >Tutorial: How to pre-train ELECTRA for Spanish from Scratch<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-1'><a class=\"ez-toc-link ez-toc-heading-2\" href=\"https:\/\/skimai.com\/fr\/how-to-train-electra-language-model-for-spanish\/#Introduction\" >Introduction<\/a><ul class='ez-toc-list-level-2' ><li class='ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-3\" href=\"https:\/\/skimai.com\/fr\/how-to-train-electra-language-model-for-spanish\/#1_Introduction\" >1. Introduction<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-4\" href=\"https:\/\/skimai.com\/fr\/how-to-train-electra-language-model-for-spanish\/#2_Method\" >2. Method<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-5\" href=\"https:\/\/skimai.com\/fr\/how-to-train-electra-language-model-for-spanish\/#3_Pre-train_ELECTRA\" >3. Pre-train ELECTRA<\/a><ul class='ez-toc-list-level-3' ><li class='ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-6\" href=\"https:\/\/skimai.com\/fr\/how-to-train-electra-language-model-for-spanish\/#Setup\" >Setup<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-7\" href=\"https:\/\/skimai.com\/fr\/how-to-train-electra-language-model-for-spanish\/#Data\" >Data<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-8\" href=\"https:\/\/skimai.com\/fr\/how-to-train-electra-language-model-for-spanish\/#Build_Pretraining_Dataset\" >Build Pretraining Dataset<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-3'><a class=\"ez-toc-link ez-toc-heading-9\" href=\"https:\/\/skimai.com\/fr\/how-to-train-electra-language-model-for-spanish\/#Start_Training\" >Start Training<\/a><\/li><\/ul><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-10\" href=\"https:\/\/skimai.com\/fr\/how-to-train-electra-language-model-for-spanish\/#4_Convert_Tensorflow_checkpoints_to_PyTorch_format\" >4. Convert Tensorflow checkpoints to PyTorch format<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-11\" href=\"https:\/\/skimai.com\/fr\/how-to-train-electra-language-model-for-spanish\/#5_Conclusion\" >5. Conclusion<\/a><\/li><li class='ez-toc-page-1 ez-toc-heading-level-2'><a class=\"ez-toc-link ez-toc-heading-12\" href=\"https:\/\/skimai.com\/fr\/how-to-train-electra-language-model-for-spanish\/#References\" >References<\/a><\/li><\/ul><\/li><\/ul><\/nav><\/div>\n<h1><span class=\"ez-toc-section\" id=\"Tutorial_How_to_pre-train_ELECTRA_for_Spanish_from_Scratch\"><\/span>Tutorial: How to pre-train ELECTRA for Spanish from Scratch<span class=\"ez-toc-section-end\"><\/span><\/h1>\n<pre><code>    Originally published by Skim AI's Machine Learning Researcher, Chris Tran.<\/code><\/pre>\n<p><a href=\"https:\/\/colab.research.google.com\/drive\/1DiOwhRjQbtYRgFWG7e3dybcXJsZcu86l#scrollTo=YIHC6Pg66zHg\"><img decoding=\"async\" src=\"https:\/\/img.shields.io\/badge\/Colab-Run_in_Google_Colab-blue?logo=Google&#038;logoColor=FDBA18\" alt=\"Run in Google Colab\" \/><\/a><\/p>\n<h1><span class=\"ez-toc-section\" id=\"Introduction\"><\/span>Introduction<span class=\"ez-toc-section-end\"><\/span><\/h1>\n<p>This article is on how pre-train ELECTRA, another member of the Transformer pre-training method family, for Spanish to achieve state-of-the-art results in Natural Language Processing benchmarks. It is Part III in a series on training custom BERT Language Models for Spanish for a variety of use cases:<\/p>\n<p><\/br><\/p>\n<ul>\n<li><a href=\"http:\/\/skimai.com\/roberta-language-model-for-spanish\/\">Part I: How to Train a RoBERTa Language Model for Spanish from Scratch<\/a><\/li>\n<li><a href=\"http:\/\/skimai.com\/how-to-use-bert-named-entity-recognition-ner\/\">Part II: How to Train a SpanBERTa Spanish Language Model for Named Entity Recognition (NER)<\/a><\/li>\n<\/ul>\n<h2><span class=\"ez-toc-section\" id=\"1_Introduction\"><\/span>1. Introduction<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>At ICLR 2020, <a href=\"https:\/\/openreview.net\/pdf?id=r1xMH1BtvB\">ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators<\/a>, a new method for self-supervised language representation learning, was introduced. ELECTRA is another member of the Transformer pre-training method family, whose previous members such as BERT, GPT-2, RoBERTa have achieved many state-of-the-art results in Natural Language Processing benchmarks.<\/p>\n<p>Different from other masked language modeling methods, ELECTRA is a more sample-efficient pre-training task called replaced token detection. At a small scale, ELECTRA-small can be trained on a single GPU for 4 days to outperform <a href=\"https:\/\/cdn.openai.com\/research-covers\/language-unsupervised\/language_understanding_paper.pdf\">GPT (Radford et al., 2018)<\/a> (trained using 30x more compute) on the GLUE benchmark. At a large scale, ELECTRA-large outperforms <a href=\"\">ALBERT (Lan et al., 2019)<\/a> on GLUE and sets a new state-of-the-art for SQuAD 2.0.<br \/>\n<img decoding=\"async\" src=\"https:\/\/github.com\/chriskhanhtran\/spanish-bert\/blob\/master\/img\/electra-performance.JPG?raw=true\" alt=\"\" \/><br \/>\n<em>ELECTRA consistently outperforms masked language model pre-training approaches.<\/em><br \/>\n{: .text-center}<\/p>\n<h2><span class=\"ez-toc-section\" id=\"2_Method\"><\/span>2. Method<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>Masked language modeling pre-training methods such as <a href=\"https:\/\/arxiv.org\/abs\/1810.04805\">BERT (Devlin et al., 2019)<\/a> corrupt the input by replacing some tokens (typically 15% of the input) with <code>[MASK]<\/code> and then train a model to re-construct the original tokens.<\/p>\n<p>Instead of masking, ELECTRA corrupts the input by replacing some tokens with samples from the outputs of a smalled masked language model. Then, a discriminative model is trained to predict whether each token was an original or a replacement. After pre-training, the generator is thrown out and the discriminator is fine-tuned on downstream tasks.<br \/>\n<img decoding=\"async\" src=\"https:\/\/github.com\/chriskhanhtran\/spanish-bert\/blob\/master\/img\/electra-overview.JPG?raw=true\" alt=\"\" \/><br \/>\n<em>An overview of ELECTRA.<\/em><br \/>\n{: .text-center}<\/p>\n<p>Although having a generator and a discriminator like GAN, ELECTRA is not adversarial in that the generator producing corrupted tokens is trained with maximum likelihood rather than being trained to fool the discriminator.<\/p>\n<p><strong>Why is ELECTRA so efficient?<\/strong><\/p>\n<p>With a new training objective, ELECTRA can achieve comparable performance to strong models such as <a href=\"https:\/\/arxiv.org\/abs\/1907.11692\">RoBERTa (Liu et al., (2019)<\/a> which has more parameters and needs 4x more compute for training. In the paper, an analysis was conducted to understand what really contribute to ELECTRA&#8217;s efficiency. The key findings are:<\/p>\n<ul>\n<li>ELECTRA is greatly benefiting from having a loss defined over all input tokens rather than just a subset. More specifically, in ELECTRA, the discriminator predicts on every token in the input, while in BERT, the generator only predicts 15% masked tokens of the input.<\/li>\n<li>BERT&#8217;s performance is slightly harmed because in the pre-training phase, the model sees <code>[MASK]<\/code> tokens, while it is not the case in the fine-tuning phase.<\/li>\n<\/ul>\n<p><img decoding=\"async\" src=\"https:\/\/github.com\/chriskhanhtran\/spanish-bert\/blob\/master\/img\/electra-vs-bert.JPG?raw=true\" alt=\"\" \/><br \/>\n<em>ELECTRA vs. BERT<\/em><br \/>\n{: .text-center}<\/p>\n<h2><span class=\"ez-toc-section\" id=\"3_Pre-train_ELECTRA\"><\/span>3. Pre-train ELECTRA<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>In this section, we will train ELECTRA from scratch with TensorFlow using scripts provided by ELECTRA&#8217;s authors in <a href=\"https:\/\/github.com\/google-research\/electra\">google-research\/electra<\/a>. Then we will convert the model to PyTorch&#8217;s checkpoint, which can be easily fine-tuned on downstream tasks using Hugging Face&#8217;s <code>transformers<\/code> library.<\/p>\n<h3><span class=\"ez-toc-section\" id=\"Setup\"><\/span>Setup<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<pre><code>!pip install tensorflow==1.15\n!pip install transformers==2.8.0\n!git clone https:\/\/github.com\/google-research\/electra.git\n<\/code><\/pre>\n<pre><code>import os\nimport json\nfrom transformers import AutoTokenizer\n<\/code><\/pre>\n<h3><span class=\"ez-toc-section\" id=\"Data\"><\/span>Data<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>We will pre-train ELECTRA on a Spanish movie subtitle dataset retrieved from OpenSubtitles. This dataset is 5.4 GB in size and we will train on a small subset of ~30 MB for presentation.<\/p>\n<pre><code>DATA_DIR = \".\/data\" #@param {type: \"string\"}\nTRAIN_SIZE = 1000000 #@param {type:\"integer\"}\nMODEL_NAME = \"electra-spanish\" #@param {type: \"string\"}\n<\/code><\/pre>\n<pre><code># Download and unzip the Spanish movie substitle dataset\nif not os.path.exists(DATA_DIR):\n  !mkdir -p $DATA_DIR\n  !wget \"https:\/\/object.pouta.csc.fi\/OPUS-OpenSubtitles\/v2016\/mono\/es.txt.gz\" -O $DATA_DIR\/OpenSubtitles.txt.gz\n  !gzip -d $DATA_DIR\/OpenSubtitles.txt.gz\n  !head -n $TRAIN_SIZE $DATA_DIR\/OpenSubtitles.txt > $DATA_DIR\/train_data.txt \n  !rm $DATA_DIR\/OpenSubtitles.txt\n<\/code><\/pre>\n<p>Before building the pre-training dataset, we should make sure the corpus has the following format:<\/p>\n<ul>\n<li>each line is a sentence<\/li>\n<li>a blank line separates two documents<\/li>\n<\/ul>\n<h3><span class=\"ez-toc-section\" id=\"Build_Pretraining_Dataset\"><\/span>Build Pretraining Dataset<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>We will use the tokenizer of <code>bert-base-multilingual-cased<\/code> to process Spanish texts.<\/p>\n<pre><code># Save the pretrained WordPiece tokenizer to get <code>vocab.txt<\/code>\ntokenizer = AutoTokenizer.from_pretrained(\"bert-base-multilingual-cased\")\ntokenizer.save_pretrained(DATA_DIR)\n<\/code><\/pre>\n<p>We use <code>build_pretraining_dataset.py<\/code> to create a pre-training dataset from a dump of raw text.<\/p>\n<pre><code>!python3 electra\/build_pretraining_dataset.py \n  --corpus-dir $DATA_DIR \n  --vocab-file $DATA_DIR\/vocab.txt \n  --output-dir $DATA_DIR\/pretrain_tfrecords \n  --max-seq-length 128 \n  --blanks-separate-docs False \n  --no-lower-case \n  --num-processes 5\n<\/code><\/pre>\n<h3><span class=\"ez-toc-section\" id=\"Start_Training\"><\/span>Start Training<span class=\"ez-toc-section-end\"><\/span><\/h3>\n<p>We use <code>run_pretraining.py<\/code> to pre-train an ELECTRA model.<\/p>\n<p>To train a small ELECTRA model for 1 million steps, run:<\/p>\n<pre><code>python3 run_pretraining.py --data-dir $DATA_DIR --model-name electra_small\n<\/code><\/pre>\n<p>This takes slightly over 4 days on a Tesla V100 GPU. However, the model should achieve decent results after 200k steps (10 hours of training on the v100 GPU).<\/p>\n<p>To customize the training, create a <code>.json<\/code> file containing the hyperparameters. Please refer <a href=\"https:\/\/github.com\/google-research\/electra\/blob\/master\/configure_pretraining.py\"><code>configure_pretraining.py<\/code><\/a> for default values of all hyperparameters.<\/p>\n<p>Below, we set the hyperparameters to train the model for only 100 steps.<\/p>\n<pre><code>hparams = {\n    \"do_train\": \"true\",\n    \"do_eval\": \"false\",\n    \"model_size\": \"small\",\n    \"do_lower_case\": \"false\",\n    \"vocab_size\": 119547,\n    \"num_train_steps\": 100,\n    \"save_checkpoints_steps\": 100,\n    \"train_batch_size\": 32,\n}\nwith open(\"hparams.json\", \"w\") as f:\n    json.dump(hparams, f)\n<\/code><\/pre>\n<p>Let&#8217;s start training:<\/p>\n<pre><code>!python3 electra\/run_pretraining.py \n  --data-dir $DATA_DIR \n  --model-name $MODEL_NAME \n  --hparams \"hparams.json\"\n<\/code><\/pre>\n<p>If you are training on a virtual machine, run the following lines on the terminal to moniter the training process with TensorBoard.<\/p>\n<pre><code>pip install -U tensorboard\ntensorboard dev upload --logdir data\/models\/electra-spanish\n<\/code><\/pre>\n<p>This is the <a href=\"https:\/\/tensorboard.dev\/experiment\/AmaGBV3RTGOB1leXGGsJmw\/#scalars\">TensorBoard<\/a> of training ELECTRA-small for 1 million steps in 4 days on a V100 GPU.<\/p>\n<p><img decoding=\"async\" src=\"https:\/\/github.com\/chriskhanhtran\/spanish-bert\/blob\/master\/img\/electra-tensorboard.JPG?raw=true\" width=\"400\">{: .align-center}<\/p>\n<h2><span class=\"ez-toc-section\" id=\"4_Convert_Tensorflow_checkpoints_to_PyTorch_format\"><\/span>4. Convert Tensorflow checkpoints to PyTorch format<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>Hugging Face has <a href=\"https:\/\/huggingface.co\/transformers\/converting_tensorflow_models.html\">a tool<\/a> to convert Tensorflow checkpoints to PyTorch. However, this tool has yet been updated for ELECTRA. Fortunately, I found a GitHub repo by @lonePatient that can help us with this task.<\/p>\n<pre><code>!git clone https:\/\/github.com\/lonePatient\/electra_pytorch.git\n<\/code><\/pre>\n<pre><code>MODEL_DIR = \"data\/models\/electra-spanish\/\"\nconfig = {\n  \"vocab_size\": 119547,\n  \"embedding_size\": 128,\n  \"hidden_size\": 256,\n  \"num_hidden_layers\": 12,\n  \"num_attention_heads\": 4,\n  \"intermediate_size\": 1024,\n  \"generator_size\":\"0.25\",\n  \"hidden_act\": \"gelu\",\n  \"hidden_dropout_prob\": 0.1,\n  \"attention_probs_dropout_prob\": 0.1,\n  \"max_position_embeddings\": 512,\n  \"type_vocab_size\": 2,\n  \"initializer_range\": 0.02\n}\nwith open(MODEL_DIR + \"config.json\", \"w\") as f:\n    json.dump(config, f)\n<\/code><\/pre>\n<pre><code>!python electra_pytorch\/convert_electra_tf_checkpoint_to_pytorch.py \n    --tf_checkpoint_path=$MODEL_DIR \n    --electra_config_file=$MODEL_DIR\/config.json \n    --pytorch_dump_path=$MODEL_DIR\/pytorch_model.bin\n<\/code><\/pre>\n<p><strong>Use ELECTRA with <code>transformers<\/code><\/strong><\/p>\n<p>After converting the model checkpoint to PyTorch format, we can start to use our pre-trained ELECTRA model on downstream tasks with the <code>transformers<\/code> library.<\/p>\n<pre><code>import torch\nfrom transformers import ElectraForPreTraining, ElectraTokenizerFast\ndiscriminator = ElectraForPreTraining.from_pretrained(MODEL_DIR)\ntokenizer = ElectraTokenizerFast.from_pretrained(DATA_DIR, do_lower_case=False)\n<\/code><\/pre>\n<pre><code>sentence = \"Los p\u00e1jaros est\u00e1n cantando\" # The birds are singing\nfake_sentence = \"Los p\u00e1jaros est\u00e1n hablando\" # The birds are speaking \nfake_tokens = tokenizer.tokenize(fake_sentence, add_special_tokens=True)\nfake_inputs = tokenizer.encode(fake_sentence, return_tensors=\"pt\")\ndiscriminator_outputs = discriminator(fake_inputs)\npredictions = discriminator_outputs[0] > 0\n[print(\"%7s\" % token, end=\"\") for token in fake_tokens]\nprint(\"n\")\n[print(\"%7s\" % int(prediction), end=\"\") for prediction in predictions.tolist()];\n<\/code><\/pre>\n<pre><code>  [CLS]    Los    paj ##aros  estan  habla  ##ndo  [SEP]\n      1      0      0      0      0      0      0      0\n<\/code><\/pre>\n<p>Our model was trained for only 100 steps so the predictions are not accurate. The fully-trained ELECTRA-small for Spanish can be loaded as below:<\/p>\n<pre><code>discriminator = ElectraForPreTraining.from_pretrained(\"skimai\/electra-small-spanish\")\ntokenizer = ElectraTokenizerFast.from_pretrained(\"skimai\/electra-small-spanish\", do_lower_case=False)\n<\/code><\/pre>\n<h2><span class=\"ez-toc-section\" id=\"5_Conclusion\"><\/span>5. Conclusion<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<p>In this article, we have walked through the ELECTRA paper to understand why ELECTRA is the most efficient transformer pre-training approach at the moment. At a small scale, ELECTRA-small can be trained on one GPU for 4 days to outperform GPT on the GLUE benchmark. At a large scale, ELECTRA-large sets a new state-of-the-art for SQuAD 2.0.<\/p>\n<p>We then actually train an ELECTRA model on Spanish texts and convert Tensorflow checkpoint to PyTorch and use the model with the <code>transformers<\/code> library.<\/p>\n<h2><span class=\"ez-toc-section\" id=\"References\"><\/span>References<span class=\"ez-toc-section-end\"><\/span><\/h2>\n<ul>\n<li>[1] <a href=\"https:\/\/openreview.net\/pdf?id=r1xMH1BtvB\">ELECTRA: Pre-training Text Encoders as Discriminators Rather Than Generators<\/a><\/li>\n<li>[2] <a href=\"https:\/\/github.com\/google-research\/electra\">google-research\/electra<\/a> &#8211; the official GitHub repository of the original paper<\/li>\n<li>[3] <a href=\"https:\/\/github.com\/lonePatient\/electra_pytorch\">electra_pytorch<\/a> &#8211; a PyTorch implementation of ELECTRA<\/li>\n<\/ul>\n","protected":false},"excerpt":{"rendered":"<p>Tutorial: How to pre-train ELECTRA for Spanish from Scratch Originally published by Skim AI&#8217;s Machine Learning Researcher, Chris Tran. Introduction This article is on how pre-train ELECTRA, another member of the Transformer pre-training method family, for Spanish to achieve state-of-the-art results in Natural Language Processing benchmarks. It is Part III in a series on training [&hellip;]<\/p>\n","protected":false},"author":1003,"featured_media":3983,"comment_status":"closed","ping_status":"closed","sticky":false,"template":"single-custom-post-template.php","format":"standard","meta":{"_et_pb_use_builder":"","_et_pb_old_content":"","_et_gb_content_width":"","footnotes":""},"categories":[125,64,67],"tags":[74,79,75,80],"class_list":["post-3980","post","type-post","status-publish","format-standard","has-post-thumbnail","hentry","category-enterprise-ai-blog","category-how-to","category-ml-nlp","tag-bert","tag-how-to","tag-nlp","tag-tranformer"],"yoast_head":"<!-- This site is optimized with the Yoast SEO plugin v24.1 - https:\/\/yoast.com\/wordpress\/plugins\/seo\/ -->\n<title>Tutorial: How to pre-train ELECTRA for Spanish from Scratch - Skim AI<\/title>\n<meta name=\"description\" content=\"How to train ELECTRA, another member of the Transformer pre-training method family, for Spanish to achieve state-of-the-art results in NLP benchmarks.\" \/>\n<meta name=\"robots\" content=\"index, follow, max-snippet:-1, max-image-preview:large, max-video-preview:-1\" \/>\n<link rel=\"canonical\" href=\"https:\/\/skimai.com\/fr\/comment-entrainer-le-modele-linguistique-electra-pour-lespagnol\/\" \/>\n<meta property=\"og:locale\" content=\"fr_FR\" \/>\n<meta property=\"og:type\" content=\"article\" \/>\n<meta property=\"og:title\" content=\"Tutorial: How to pre-train ELECTRA for Spanish from Scratch - Skim AI\" \/>\n<meta property=\"og:description\" content=\"How to train ELECTRA, another member of the Transformer pre-training method family, for Spanish to achieve state-of-the-art results in NLP benchmarks.\" \/>\n<meta property=\"og:url\" content=\"https:\/\/skimai.com\/fr\/comment-entrainer-le-modele-linguistique-electra-pour-lespagnol\/\" \/>\n<meta property=\"og:site_name\" content=\"Skim AI\" \/>\n<meta property=\"article:published_time\" content=\"2020-12-29T21:53:20+00:00\" \/>\n<meta property=\"article:modified_time\" content=\"2024-05-20T12:38:31+00:00\" \/>\n<meta property=\"og:image\" content=\"https:\/\/skimai.com\/wp-content\/uploads\/2020\/12\/daredevil-elektra-season-2-1600x856-1.jpg\" \/>\n\t<meta property=\"og:image:width\" content=\"1600\" \/>\n\t<meta property=\"og:image:height\" content=\"856\" \/>\n\t<meta property=\"og:image:type\" content=\"image\/jpeg\" \/>\n<meta name=\"author\" content=\"Greggory Elias\" \/>\n<meta name=\"twitter:card\" content=\"summary_large_image\" \/>\n<meta name=\"twitter:label1\" content=\"\u00c9crit par\" \/>\n\t<meta name=\"twitter:data1\" content=\"Greggory Elias\" \/>\n\t<meta name=\"twitter:label2\" content=\"Dur\u00e9e de lecture estim\u00e9e\" \/>\n\t<meta name=\"twitter:data2\" content=\"6 minutes\" \/>\n<script type=\"application\/ld+json\" class=\"yoast-schema-graph\">{\"@context\":\"https:\/\/schema.org\",\"@graph\":[{\"@type\":\"Article\",\"@id\":\"https:\/\/skimai.com\/how-to-train-electra-language-model-for-spanish\/#article\",\"isPartOf\":{\"@id\":\"https:\/\/skimai.com\/how-to-train-electra-language-model-for-spanish\/\"},\"author\":{\"name\":\"Greggory Elias\",\"@id\":\"https:\/\/skimai.com\/uk\/#\/schema\/person\/7a883b4a2d2ea22040f42a7975eb86c6\"},\"headline\":\"Tutorial: How to pre-train ELECTRA for Spanish from Scratch\",\"datePublished\":\"2020-12-29T21:53:20+00:00\",\"dateModified\":\"2024-05-20T12:38:31+00:00\",\"mainEntityOfPage\":{\"@id\":\"https:\/\/skimai.com\/how-to-train-electra-language-model-for-spanish\/\"},\"wordCount\":920,\"publisher\":{\"@id\":\"https:\/\/skimai.com\/uk\/#organization\"},\"image\":{\"@id\":\"https:\/\/skimai.com\/how-to-train-electra-language-model-for-spanish\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/skimai.com\/wp-content\/uploads\/2020\/12\/daredevil-elektra-season-2-1600x856-1.jpg\",\"keywords\":[\"bert\",\"how to\",\"nlp\",\"tranformer\"],\"articleSection\":[\"Enterprise AI\",\"How to\",\"LLMs \/ NLP\"],\"inLanguage\":\"fr-FR\"},{\"@type\":\"WebPage\",\"@id\":\"https:\/\/skimai.com\/how-to-train-electra-language-model-for-spanish\/\",\"url\":\"https:\/\/skimai.com\/how-to-train-electra-language-model-for-spanish\/\",\"name\":\"Tutorial: How to pre-train ELECTRA for Spanish from Scratch - Skim AI\",\"isPartOf\":{\"@id\":\"https:\/\/skimai.com\/uk\/#website\"},\"primaryImageOfPage\":{\"@id\":\"https:\/\/skimai.com\/how-to-train-electra-language-model-for-spanish\/#primaryimage\"},\"image\":{\"@id\":\"https:\/\/skimai.com\/how-to-train-electra-language-model-for-spanish\/#primaryimage\"},\"thumbnailUrl\":\"https:\/\/skimai.com\/wp-content\/uploads\/2020\/12\/daredevil-elektra-season-2-1600x856-1.jpg\",\"datePublished\":\"2020-12-29T21:53:20+00:00\",\"dateModified\":\"2024-05-20T12:38:31+00:00\",\"description\":\"How to train ELECTRA, another member of the Transformer pre-training method family, for Spanish to achieve state-of-the-art results in NLP benchmarks.\",\"breadcrumb\":{\"@id\":\"https:\/\/skimai.com\/how-to-train-electra-language-model-for-spanish\/#breadcrumb\"},\"inLanguage\":\"fr-FR\",\"potentialAction\":[{\"@type\":\"ReadAction\",\"target\":[\"https:\/\/skimai.com\/how-to-train-electra-language-model-for-spanish\/\"]}]},{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\/\/skimai.com\/how-to-train-electra-language-model-for-spanish\/#primaryimage\",\"url\":\"https:\/\/skimai.com\/wp-content\/uploads\/2020\/12\/daredevil-elektra-season-2-1600x856-1.jpg\",\"contentUrl\":\"https:\/\/skimai.com\/wp-content\/uploads\/2020\/12\/daredevil-elektra-season-2-1600x856-1.jpg\",\"width\":1600,\"height\":856,\"caption\":\"daredevil elektra season 2 1600x856 1\"},{\"@type\":\"BreadcrumbList\",\"@id\":\"https:\/\/skimai.com\/how-to-train-electra-language-model-for-spanish\/#breadcrumb\",\"itemListElement\":[{\"@type\":\"ListItem\",\"position\":1,\"name\":\"Home\",\"item\":\"https:\/\/skimai.com\/\"},{\"@type\":\"ListItem\",\"position\":2,\"name\":\"Tutorial: How to pre-train ELECTRA for Spanish from Scratch\"}]},{\"@type\":\"WebSite\",\"@id\":\"https:\/\/skimai.com\/uk\/#website\",\"url\":\"https:\/\/skimai.com\/uk\/\",\"name\":\"Skim AI\",\"description\":\"The AI Agent Workforce Platform\",\"publisher\":{\"@id\":\"https:\/\/skimai.com\/uk\/#organization\"},\"potentialAction\":[{\"@type\":\"SearchAction\",\"target\":{\"@type\":\"EntryPoint\",\"urlTemplate\":\"https:\/\/skimai.com\/uk\/?s={search_term_string}\"},\"query-input\":{\"@type\":\"PropertyValueSpecification\",\"valueRequired\":true,\"valueName\":\"search_term_string\"}}],\"inLanguage\":\"fr-FR\"},{\"@type\":\"Organization\",\"@id\":\"https:\/\/skimai.com\/uk\/#organization\",\"name\":\"Skim AI\",\"url\":\"https:\/\/skimai.com\/uk\/\",\"logo\":{\"@type\":\"ImageObject\",\"inLanguage\":\"fr-FR\",\"@id\":\"https:\/\/skimai.com\/uk\/#\/schema\/logo\/image\/\",\"url\":\"http:\/\/skimai.com\/wp-content\/uploads\/2020\/07\/SKIM-AI-Header-Logo.png\",\"contentUrl\":\"http:\/\/skimai.com\/wp-content\/uploads\/2020\/07\/SKIM-AI-Header-Logo.png\",\"width\":194,\"height\":58,\"caption\":\"Skim AI\"},\"image\":{\"@id\":\"https:\/\/skimai.com\/uk\/#\/schema\/logo\/image\/\"},\"sameAs\":[\"https:\/\/www.linkedin.com\/company\/skim-ai\"]},{\"@type\":\"Person\",\"@id\":\"https:\/\/skimai.com\/uk\/#\/schema\/person\/7a883b4a2d2ea22040f42a7975eb86c6\",\"name\":\"Greggory Elias\",\"url\":\"https:\/\/skimai.com\/fr\/author\/gregg\/\"}]}<\/script>\n<!-- \/ Yoast SEO plugin. -->","yoast_head_json":{"title":"Tutoriel : Comment pr\u00e9-entra\u00eener ELECTRA pour l'espagnol \u00e0 partir de z\u00e9ro - Skim AI","description":"Comment entra\u00eener ELECTRA, un autre membre de la famille des m\u00e9thodes de pr\u00e9-entra\u00eenement Transformer, pour l'espagnol afin d'obtenir des r\u00e9sultats de pointe dans les tests de r\u00e9f\u00e9rence de la PNL.","robots":{"index":"index","follow":"follow","max-snippet":"max-snippet:-1","max-image-preview":"max-image-preview:large","max-video-preview":"max-video-preview:-1"},"canonical":"https:\/\/skimai.com\/fr\/comment-entrainer-le-modele-linguistique-electra-pour-lespagnol\/","og_locale":"fr_FR","og_type":"article","og_title":"Tutorial: How to pre-train ELECTRA for Spanish from Scratch - Skim AI","og_description":"How to train ELECTRA, another member of the Transformer pre-training method family, for Spanish to achieve state-of-the-art results in NLP benchmarks.","og_url":"https:\/\/skimai.com\/fr\/comment-entrainer-le-modele-linguistique-electra-pour-lespagnol\/","og_site_name":"Skim AI","article_published_time":"2020-12-29T21:53:20+00:00","article_modified_time":"2024-05-20T12:38:31+00:00","og_image":[{"width":1600,"height":856,"url":"https:\/\/skimai.com\/wp-content\/uploads\/2020\/12\/daredevil-elektra-season-2-1600x856-1.jpg","type":"image\/jpeg"}],"author":"Greggory Elias","twitter_card":"summary_large_image","twitter_misc":{"\u00c9crit par":"Greggory Elias","Dur\u00e9e de lecture estim\u00e9e":"6 minutes"},"schema":{"@context":"https:\/\/schema.org","@graph":[{"@type":"Article","@id":"https:\/\/skimai.com\/how-to-train-electra-language-model-for-spanish\/#article","isPartOf":{"@id":"https:\/\/skimai.com\/how-to-train-electra-language-model-for-spanish\/"},"author":{"name":"Greggory Elias","@id":"https:\/\/skimai.com\/uk\/#\/schema\/person\/7a883b4a2d2ea22040f42a7975eb86c6"},"headline":"Tutorial: How to pre-train ELECTRA for Spanish from Scratch","datePublished":"2020-12-29T21:53:20+00:00","dateModified":"2024-05-20T12:38:31+00:00","mainEntityOfPage":{"@id":"https:\/\/skimai.com\/how-to-train-electra-language-model-for-spanish\/"},"wordCount":920,"publisher":{"@id":"https:\/\/skimai.com\/uk\/#organization"},"image":{"@id":"https:\/\/skimai.com\/how-to-train-electra-language-model-for-spanish\/#primaryimage"},"thumbnailUrl":"https:\/\/skimai.com\/wp-content\/uploads\/2020\/12\/daredevil-elektra-season-2-1600x856-1.jpg","keywords":["bert","how to","nlp","tranformer"],"articleSection":["Enterprise AI","How to","LLMs \/ NLP"],"inLanguage":"fr-FR"},{"@type":"WebPage","@id":"https:\/\/skimai.com\/how-to-train-electra-language-model-for-spanish\/","url":"https:\/\/skimai.com\/how-to-train-electra-language-model-for-spanish\/","name":"Tutoriel : Comment pr\u00e9-entra\u00eener ELECTRA pour l'espagnol \u00e0 partir de z\u00e9ro - Skim AI","isPartOf":{"@id":"https:\/\/skimai.com\/uk\/#website"},"primaryImageOfPage":{"@id":"https:\/\/skimai.com\/how-to-train-electra-language-model-for-spanish\/#primaryimage"},"image":{"@id":"https:\/\/skimai.com\/how-to-train-electra-language-model-for-spanish\/#primaryimage"},"thumbnailUrl":"https:\/\/skimai.com\/wp-content\/uploads\/2020\/12\/daredevil-elektra-season-2-1600x856-1.jpg","datePublished":"2020-12-29T21:53:20+00:00","dateModified":"2024-05-20T12:38:31+00:00","description":"Comment entra\u00eener ELECTRA, un autre membre de la famille des m\u00e9thodes de pr\u00e9-entra\u00eenement Transformer, pour l'espagnol afin d'obtenir des r\u00e9sultats de pointe dans les tests de r\u00e9f\u00e9rence de la PNL.","breadcrumb":{"@id":"https:\/\/skimai.com\/how-to-train-electra-language-model-for-spanish\/#breadcrumb"},"inLanguage":"fr-FR","potentialAction":[{"@type":"ReadAction","target":["https:\/\/skimai.com\/how-to-train-electra-language-model-for-spanish\/"]}]},{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/skimai.com\/how-to-train-electra-language-model-for-spanish\/#primaryimage","url":"https:\/\/skimai.com\/wp-content\/uploads\/2020\/12\/daredevil-elektra-season-2-1600x856-1.jpg","contentUrl":"https:\/\/skimai.com\/wp-content\/uploads\/2020\/12\/daredevil-elektra-season-2-1600x856-1.jpg","width":1600,"height":856,"caption":"daredevil elektra season 2 1600x856 1"},{"@type":"BreadcrumbList","@id":"https:\/\/skimai.com\/how-to-train-electra-language-model-for-spanish\/#breadcrumb","itemListElement":[{"@type":"ListItem","position":1,"name":"Home","item":"https:\/\/skimai.com\/"},{"@type":"ListItem","position":2,"name":"Tutorial: How to pre-train ELECTRA for Spanish from Scratch"}]},{"@type":"WebSite","@id":"https:\/\/skimai.com\/uk\/#website","url":"https:\/\/skimai.com\/uk\/","name":"Skim AI","description":"La plateforme de travail des agents de l'IA","publisher":{"@id":"https:\/\/skimai.com\/uk\/#organization"},"potentialAction":[{"@type":"SearchAction","target":{"@type":"EntryPoint","urlTemplate":"https:\/\/skimai.com\/uk\/?s={search_term_string}"},"query-input":{"@type":"PropertyValueSpecification","valueRequired":true,"valueName":"search_term_string"}}],"inLanguage":"fr-FR"},{"@type":"Organization","@id":"https:\/\/skimai.com\/uk\/#organization","name":"Skim AI","url":"https:\/\/skimai.com\/uk\/","logo":{"@type":"ImageObject","inLanguage":"fr-FR","@id":"https:\/\/skimai.com\/uk\/#\/schema\/logo\/image\/","url":"http:\/\/skimai.com\/wp-content\/uploads\/2020\/07\/SKIM-AI-Header-Logo.png","contentUrl":"http:\/\/skimai.com\/wp-content\/uploads\/2020\/07\/SKIM-AI-Header-Logo.png","width":194,"height":58,"caption":"Skim AI"},"image":{"@id":"https:\/\/skimai.com\/uk\/#\/schema\/logo\/image\/"},"sameAs":["https:\/\/www.linkedin.com\/company\/skim-ai"]},{"@type":"Person","@id":"https:\/\/skimai.com\/uk\/#\/schema\/person\/7a883b4a2d2ea22040f42a7975eb86c6","name":"Greggory Elias","url":"https:\/\/skimai.com\/fr\/author\/gregg\/"}]}},"_links":{"self":[{"href":"https:\/\/skimai.com\/fr\/wp-json\/wp\/v2\/posts\/3980","targetHints":{"allow":["GET"]}}],"collection":[{"href":"https:\/\/skimai.com\/fr\/wp-json\/wp\/v2\/posts"}],"about":[{"href":"https:\/\/skimai.com\/fr\/wp-json\/wp\/v2\/types\/post"}],"author":[{"embeddable":true,"href":"https:\/\/skimai.com\/fr\/wp-json\/wp\/v2\/users\/1003"}],"replies":[{"embeddable":true,"href":"https:\/\/skimai.com\/fr\/wp-json\/wp\/v2\/comments?post=3980"}],"version-history":[{"count":0,"href":"https:\/\/skimai.com\/fr\/wp-json\/wp\/v2\/posts\/3980\/revisions"}],"wp:featuredmedia":[{"embeddable":true,"href":"https:\/\/skimai.com\/fr\/wp-json\/wp\/v2\/media\/3983"}],"wp:attachment":[{"href":"https:\/\/skimai.com\/fr\/wp-json\/wp\/v2\/media?parent=3980"}],"wp:term":[{"taxonomy":"category","embeddable":true,"href":"https:\/\/skimai.com\/fr\/wp-json\/wp\/v2\/categories?post=3980"},{"taxonomy":"post_tag","embeddable":true,"href":"https:\/\/skimai.com\/fr\/wp-json\/wp\/v2\/tags?post=3980"}],"curies":[{"name":"wp","href":"https:\/\/api.w.org\/{rel}","templated":true}]}}