Dataset columns:

| Column | Type | Values / lengths |
|---|---|---|
| repo | string (classes) | 1 value |
| number | int64 | 1 – 25.3k |
| state | string (classes) | 2 values |
| title | string (lengths) | 1 – 487 |
| body | string (lengths) | 0 – 234k |
| created_at | string (lengths) | 19 – 19 |
| closed_at | string (lengths) | 19 – 19 |
| comments | string (lengths) | 0 – 293k |
transformers
24,804
closed
Support RefinedWebModel as a model_type for Falcon
This PR allows us to temporarily revert the model_type for Falcon repos to fix some issues. cc @sgugger @LysandreJik @Narsil @OlivierDehaene @slippylolo
07-13-2023 11:25:00
07-13-2023 11:25:00
No this is not enough. If we go down that road we need to erase the `RefinedWeb` model type on loading to replace it with `falcon`, so that a new saved version does not keep it.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>This already happens! The model type that is saved in `config.json` is set in `configuration_falcon.py`, with the line `model_type = "falcon"`. If you load a model with model_type `RefinedWebModel` and save it, the output `config.json` has model_type `falcon`.<|||||>Closing because just changing the `model_name` field won't be enough anyway - we also need to revert all the `config.json` parameters.
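A minimal sketch of the config round-trip behaviour described in the last comment, using a hypothetical `ToyConfig` rather than the real `FalconConfig`: `model_type` is a class attribute, and `to_dict()` (used by `save_pretrained`) serializes that class attribute, so a stale `RefinedWebModel` value in a loaded `config.json` does not survive a save.

```python
# Illustration only: ToyConfig is a stand-in, not the actual FalconConfig.
from transformers import PretrainedConfig

class ToyConfig(PretrainedConfig):
    model_type = "falcon"  # class attribute, analogous to configuration_falcon.py

# Simulate loading a checkpoint whose config.json still says "RefinedWebModel"
cfg = ToyConfig.from_dict({"model_type": "RefinedWebModel", "hidden_size": 32})

# to_dict() writes the class attribute, so the saved config.json would say "falcon"
print(cfg.to_dict()["model_type"])  # -> "falcon"
```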
transformers
24,803
open
Lag llama
# What does this PR do? Implementation of a general time series forecaster and classifier using only the target values.
07-13-2023 10:55:13
07-13-2023 10:55:13
transformers
24,802
open
Bug when training BERT on multiple GPUs
### System Info - `transformers` version: 4.28.1 - Platform: Linux-3.10.0-957.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.8.16 - Huggingface_hub version: 0.15.1 - Safetensors version: not installed - PyTorch version (GPU?): 1.12.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @ArthurZucker @younesbelkada @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I had alter the BertEncoder defined in modeling_bert.py, like below: ``` class BertEncoder(nn.Module): def __init__(self, config, meta_layer_index=None, scale=1): super().__init__() self.config = config self.layer = nn.ModuleList([BertLayer(config) for _ in range(config.num_hidden_layers)]) self.gradient_checkpointing = False # added by me self.meta_layer = BertLayer(config) self.meta_layer_index = meta_layer_index self.scale = scale self.optimizer_for_meta_layer = torch.optim.SGD(self.meta_layer.parameters(), lr=1e-5, weight_decay=0.005) self.inputs_for_metalayer = None self.outputs_for_metalayer = None self.meta_layer_outputs = None self.st_loss = None def forward( self, hidden_states: torch.Tensor, attention_mask: Optional[torch.FloatTensor] = None, head_mask: Optional[torch.FloatTensor] = None, encoder_hidden_states: Optional[torch.FloatTensor] = None, encoder_attention_mask: Optional[torch.FloatTensor] = None, past_key_values: Optional[Tuple[Tuple[torch.FloatTensor]]] = None, use_cache: Optional[bool] = None, output_attentions: Optional[bool] = False, output_hidden_states: Optional[bool] = False, return_dict: Optional[bool] = True, ) -> Union[Tuple[torch.Tensor], BaseModelOutputWithPastAndCrossAttentions]: all_hidden_states = () if output_hidden_states else None all_self_attentions = () if output_attentions else None all_cross_attentions = () if output_attentions and self.config.add_cross_attention else None self.inputs_for_metalayer = (hidden_states.clone().detach(), head_mask[0] if head_mask is not None else None, past_key_values[0] if past_key_values is not None else None) if self.gradient_checkpointing and self.training: if use_cache: logger.warning_once( "`use_cache=True` is incompatible with gradient checkpointing. Setting `use_cache=False`..." 
) use_cache = False next_decoder_cache = () if use_cache else None # for i, layer_module in enumerate(self.layer): if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) layer_head_mask = head_mask[i] if head_mask is not None else None past_key_value = past_key_values[i] if past_key_values is not None else None if self.gradient_checkpointing and self.training: def create_custom_forward(module): def custom_forward(*inputs): return module(*inputs, past_key_value, output_attentions) return custom_forward layer_outputs = torch.utils.checkpoint.checkpoint( create_custom_forward(layer_module), hidden_states, attention_mask, layer_head_mask, encoder_hidden_states, encoder_attention_mask, ) else: layer_outputs = layer_module( hidden_states, attention_mask, layer_head_mask, encoder_hidden_states, encoder_attention_mask, past_key_value, output_attentions, ) # layer_outputs : Tuple[torch.Tensor] hidden_states = layer_outputs[0] #added by me if i == self.meta_layer_index - 1: self.inputs_for_metalayer = (hidden_states.clone().detach(), layer_head_mask, past_key_value) if i == self.meta_layer_index + self.scale - 1: self.outputs_for_metalayer = hidden_states.clone().detach() if use_cache: next_decoder_cache += (layer_outputs[-1],) if output_attentions: all_self_attentions = all_self_attentions + (layer_outputs[1],) if self.config.add_cross_attention: all_cross_attentions = all_cross_attentions + (layer_outputs[2],) if self.inputs_for_metalayer is not None and self.outputs_for_metalayer is not None: self.meta_layer_outputs = self.meta_layer( self.inputs_for_metalayer[0], attention_mask, self.inputs_for_metalayer[1], encoder_hidden_states, encoder_attention_mask, self.inputs_for_metalayer[2], output_attentions, )[0] self.st_loss = torch.mean((self.meta_layer_outputs - self.outputs_for_metalayer) ** 2) if self.st_loss.requires_grad is True: self.optimizer_for_meta_layer.zero_grad() self.st_loss.backward() self.optimizer_for_meta_layer.step() if output_hidden_states: all_hidden_states = all_hidden_states + (hidden_states,) if not return_dict: return tuple( v for v in [ hidden_states, next_decoder_cache, all_hidden_states, all_self_attentions, all_cross_attentions, ] if v is not None ) return BaseModelOutputWithPastAndCrossAttentions( last_hidden_state=hidden_states, past_key_values=next_decoder_cache, hidden_states=all_hidden_states, attentions=all_self_attentions, cross_attentions=all_cross_attentions, ) ``` then I just train a Bert Model on multi gpus, and that didn't work because optimizer_for_meta_layer is None. But it works using only one gpu. 
The train code like below: ``` from transformers import BertLayer,BertConfig,BertModel,BertForMaskedLM from transformers import BertForMaskedLM from transformers import BertConfig from transformers import BertTokenizer import datasets import json import sys import copy from datasets import load_dataset BertBaseconfig = BertConfig() BertBase = BertForMaskedLM(BertBaseconfig) layer = BertLayer(BertBaseconfig) MetaModel = BertForMaskedLM.from_pretrained('/home/wanzhipeng/deepincubation/MetaModel_bert_wiki/checkpoint-36500') MetaModelEncoderBertLayer = MetaModel.bert.encoder.layer BaseModelEncoderBertLayer = BertBase.bert.encoder.layer BaseLayerNums = BaseModelEncoderBertLayer.__len__() MetaLayerNums = MetaModelEncoderBertLayer.__len__() Submodules = [] # def initSubmodules(): global Submodules Submodules = [] scale = BaseLayerNums // MetaLayerNums for i in range(MetaLayerNums): layers = [BertLayer(BertBaseconfig) for _ in range(scale)] Submodule = copy.deepcopy(MetaModel) Submodule.bert.encoder.layer = Submodule.bert.encoder.layer[0:i+1] + layers + Submodule.bert.encoder.layer[i+1:] del Submodule.bert.encoder.layer[i] Submodules.append(Submodule) def tokenize_function(examples): return tokenizer(examples["text"]) initSubmodules() model=Submodules[0] model.config.num_hidden_layers = 6 model.bert.encoder.meta_layer_index = 0 tokenizer = BertTokenizer.from_pretrained('bert-base-uncased',use_fast=True) datasets = load_dataset('wikitext', 'wikitext-2-raw-v1') tokenized_datasets = datasets.map(tokenize_function, batched=True, num_proc=4, remove_columns=["text"]) block_size = 128 def group_texts(examples): # Concatenate all texts. concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()} total_length = len(concatenated_examples[list(examples.keys())[0]]) # We drop the small remainder, we could add padding if the model supported it instead of this drop, you can # customize this part to your needs. total_length = (total_length // block_size) * block_size # Split by chunks of max_len. 
result = { k: [t[i : i + block_size] for i in range(0, total_length, block_size)] for k, t in concatenated_examples.items() } result["labels"] = result["input_ids"].copy() return result lm_datasets = tokenized_datasets.map( group_texts, batched=True, batch_size=1000, num_proc=4, ) from transformers import Trainer, TrainingArguments from transformers import DataCollatorForLanguageModeling data_collator = DataCollatorForLanguageModeling(tokenizer=tokenizer, mlm_probability=0.15) training_args = TrainingArguments( output_dir="sub1", # output directory to where save model checkpoint evaluation_strategy="steps", # evaluate each `logging_steps` steps logging_strategy="steps", overwrite_output_dir=True, num_train_epochs=10, # number of training epochs, feel free to tweak logging_steps=10, # evaluate, log and save model checkpoints every 1000 step save_steps=10, load_best_model_at_end=True, # whether to load the best model (in terms of loss) at the end of training save_total_limit=3, # whether you don't have much space so you let only 3 model weights saved in the disk learning_rate=1e-5, weight_decay=0.01, warmup_steps=10000, per_device_train_batch_size=64, per_gpu_eval_batch_size=64, ) from transformers import Trainer, TrainingArguments,EarlyStoppingCallback,TrainerCallback from transformers import DataCollatorForLanguageModeling import torch ## 不同卡的情况下会出问题 class CallbackForMetaLayer(TrainerCallback): def __init__(self): super().__init__() self.meta_layer_outputs = None def on_step_begin(self, args, state, control, model=None, **kwargs): self.meta_layer_outputs = [model.bert.encoder.meta_layer_outputs] print("step_begin:") print(id(self.meta_layer_outputs[0])) def on_step_end(self, args, state, control, model=None, **kwargs): print("step_end:") print(id(self.meta_layer_outputs[0])) # print("*********************************************************") # print(model.bert.encoder.meta_layer_outputs) # print("*********************************************************") # print(model.bert.encoder.outputs_for_metalayer) # print("*********************************************************") # model.bert.encoder.st_loss = torch.mean((model.bert.encoder.meta_layer_outputs - model.bert.encoder.outputs_for_metalayer) ** 2) # model.bert.encoder.optimizer_for_meta_layer.zero_grad() # model.bert.encoder.st_loss.backward() # model.bert.encoder.optimizer_for_meta_layer.step() trainer = Trainer( model=model.to("cuda"), args=training_args, train_dataset=lm_datasets["train"], eval_dataset=lm_datasets["validation"], data_collator=data_collator, callbacks = [EarlyStoppingCallback(early_stopping_patience=5)], ) # trainer.train(resume_from_checkpoint=True) trainer.train() ``` error : Traceback (most recent call last): File "test2/sub1.py", line 130, in <module> trainer.train() File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/transformers/trainer.py", line 1662, in train return inner_training_loop( File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/transformers/trainer.py", line 1929, in _inner_training_loop tr_loss_step = self.training_step(model, inputs) File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/transformers/trainer.py", line 2699, in training_step loss = self.compute_loss(model, inputs) File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/transformers/trainer.py", line 2731, in compute_loss outputs = model(**inputs) File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl 
return forward_call(*input, **kwargs) File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 168, in forward outputs = self.parallel_apply(replicas, inputs, kwargs) File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/torch/nn/parallel/data_parallel.py", line 178, in parallel_apply return parallel_apply(replicas, inputs, kwargs, self.device_ids[:len(replicas)]) File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 86, in parallel_apply output.reraise() File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/torch/_utils.py", line 461, in reraise raise exception RuntimeError: Caught RuntimeError in replica 0 on device 0. Original Traceback (most recent call last): File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/torch/nn/parallel/parallel_apply.py", line 61, in _worker output = module(*input, **kwargs) File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py", line 1393, in forward outputs = self.bert( File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py", line 1055, in forward encoder_outputs = self.encoder( File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py", line 643, in forward self.meta_layer_outputs = self.meta_layer( File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py", line 495, in forward self_attention_outputs = self.attention( File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py", line 425, in forward self_outputs = self.self( File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/transformers/models/bert/modeling_bert.py", line 284, in forward mixed_query_layer = self.query(hidden_states) File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/torch/nn/modules/module.py", line 1130, in _call_impl return forward_call(*input, **kwargs) File "/home/wanzhipeng/miniconda3/envs/dl/lib/python3.8/site-packages/torch/nn/modules/linear.py", line 114, in forward return F.linear(input, self.weight, self.bias) RuntimeError: Output 101 of BroadcastBackward is a view and its base or another view of its base has been modified inplace. This view is the output of a function that returns multiple views. 
Such functions do not allow the output views to be modified inplace. You should replace the inplace operation by an out-of-place one. ### Expected behavior I think it should train on multiple GPUs.
07-13-2023 05:21:11
07-13-2023 05:21:11
The normal BERT model can train on multiple GPUs, so the bug is likely due to the modifications you made. You should ask on the [forums](https://discuss.huggingface.co/) for help debugging your code.
transformers
24,801
closed
Bug in compute_transition_scores: inconsistency between two ways of evaluating probabilities
### System Info Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.30.2 - Platform: Linux-5.15.109+-x86_64-with-glibc2.31 - Python version: 3.10.12 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): 2.12.0 (True) - Flax version (CPU?/GPU?/TPU?): 0.6.11 (gpu) - Jax version: 0.4.13 - JaxLib version: 0.4.13 - Using GPU in script?: y - Using distributed or parallel set-up in script?: n ### Who can help? @gante ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Colab Link: https://colab.research.google.com/drive/1ldLuHr2h4nSlTv5TGO06iORByJJNjCJ6?usp=sharing ```python import torch import sentencepiece import accelerate import transformers from transformers import GenerationConfig, LlamaForCausalLM, LlamaTokenizer if torch.cuda.is_available(): num_gpus = torch.cuda.device_count() device = "cuda" else: device = "cpu" print(device) model_path ="openlm-research/open_llama_3b" tokenizer = LlamaTokenizer.from_pretrained(model_path) model = LlamaForCausalLM.from_pretrained(model_path, device_map='auto') input_text = 'Hello, I am frustrated' n_seq = 1 max_new_tokens = 5 model.eval() input_ids = tokenizer(input_text, return_tensors="pt").input_ids.to(device) print(input_ids) output_sample = model.generate(input_ids=input_ids, max_new_tokens=max_new_tokens, do_sample=True, top_p = 0.9, top_k = 15, num_return_sequences=n_seq, return_dict_in_generate=True, output_scores=True) # Evaluate transition scores using official method transition_scores = model.compute_transition_scores(output_sample.sequences, output_sample.scores, normalize_logits=True) print(transition_scores) print('generated sequence: ', tokenizer.batch_decode(output_sample.sequences, skip_special_tokens= True)[0]) # Take the text generated and re-evaluate the probability text_generated = tokenizer.batch_decode(output_sample.sequences, skip_special_tokens= True)[0] generated_input_ids = tokenizer(text_generated, return_tensors="pt").input_ids.to(device) print(generated_input_ids) with torch.no_grad(): model_output = model(generated_input_ids) # collect the probability of the generated token -- probability at index 0 corresponds to the token at index 1 probs = torch.log_softmax(model_output.logits, dim=-1).detach() probs = probs[:, :-1, :] generated_input_ids_shifted = generated_input_ids[:, 1:] gen_probs = torch.gather(probs, 2, generated_input_ids_shifted[:, :, None]).squeeze(-1) print(gen_probs[:,-max_new_tokens:]) ``` ### Expected behavior I am comparing the transition matrix by using 1. The official implementation `model.compute_transition_scores(output_sample.sequences, output_sample.scores, normalize_logits=True)` 2. Another official suggestion by @gante explained in the Announcement of the probability generation https://discuss.huggingface.co/t/announcement-generation-get-probabilities-for-generated-output/30075/17?u=redpig-at-imo As they are both suggesting by Joao, I am expecting the two ways to return the exact same probability, however this is not the case, which seems to be weird to me. Am I missing anything or this is expected? 
<img width="1106" alt="Screen Shot 2023-07-12 at 8 28 09 PM" src="https://github.com/huggingface/transformers/assets/29802555/9f8d4b6f-d62a-4fb3-abe9-510c9f28b11e"> <img width="1069" alt="Screen Shot 2023-07-12 at 8 28 02 PM" src="https://github.com/huggingface/transformers/assets/29802555/ff089ce5-da5e-452a-b375-a19d3003a3d6"> Thanks in advance!
07-13-2023 03:28:56
07-13-2023 03:28:56
Usually, `model.generate` does some post-process on the logits. As you can see in your code snippet, there are arguments you passed ```python do_sample=True, top_p = 0.9, top_k = 15, ``` If we just go through the model outputs (your second way) and compute `log_softmax` on the model raw logits, this value doesn't go through the postprocess. So the value won't be the same. Still tag @gante to see if he has further comments.<|||||>> Usually, `model.generate` does some post-process on the logits. As you can see in your code snippet, there are arguments you passed > > ```python > do_sample=True, > top_p = 0.9, > top_k = 15, > ``` > > If we just go through the model outputs (your second way) and compute `log_softmax` on the model raw logits, this value doesn't go through the postprocess. So the value won't be the same. > > Still tag @gante to see if he has further comments. Thanks for the prompt response! It sounds like there are some postprocess steps affecting the logits/probability, I am curious what are those postprocess mechanism. Would you mind provide some more context on what the postprocess is trying to achieve, or maybe point me to the source code. From my understanding, the part on` do_sample, top_p, top_k ` is only affecting the sampling strategy, not the underlying probability, or maybe it re-normalize the conditional probability? Thanks! <|||||>You might be right regarding @hongzhoulin89 . It has been sometime I haven't deal with those arguments. Let me take a look, unless our generation super expert @gante faster than me for a comment! <|||||>@hongzhoulin89 One place is https://github.com/huggingface/transformers/blob/91d7df58b6537d385e90578dac40204cb550f706/src/transformers/generation/utils.py#L2372-L2375 or https://github.com/huggingface/transformers/blob/91d7df58b6537d385e90578dac40204cb550f706/src/transformers/generation/utils.py#L2652-L2656 And you can check inside `generate` what `logits_warper = self._get_logits_warper(generation_config)` gives in your case. https://github.com/huggingface/transformers/blob/91d7df58b6537d385e90578dac40204cb550f706/src/transformers/generation/utils.py#L1576-L1588<|||||>@hongzhoulin89 👋 @ydshieh said it all -- we often (almost ways, actually) manipulate the logits after the forward pass while generating. There are many reasons to do so, and each reason may add an additional post-processing step. Here are a few examples: - Whisper has special sequences at the beginning of the generation, to select its mode - We might want to block certain words from being generated - We might want to adjust the distribution to be more/less biased towards the most likely tokens They are applied in the places @ydshieh pointed out, and you can check these further docs: - List of possible manipulations, triggered through the config file: https://huggingface.co/docs/transformers/v4.30.0/en/main_classes/text_generation#transformers.GenerationConfig - Implementation of the logit manipulations: https://github.com/huggingface/transformers/blob/main/src/transformers/generation/logits_process.py<|||||>Thanks a lot, this is really helpful!
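To make the two numbers comparable, the raw logits from the second approach have to go through the same logits warpers that `generate` applied before sampling: top-k/top-p set the filtered tokens' scores to `-inf`, so the surviving tokens are renormalized over a truncated set, which is exactly why the values differ. A rough sketch, reusing the variables from the snippet above (`model_output`, `generated_input_ids`) and assuming the default warper ordering (top-k before top-p); the result should line up with `compute_transition_scores` up to numerical noise.

```python
import torch
from transformers import LogitsProcessorList, TopKLogitsWarper, TopPLogitsWarper

# Mirror the arguments passed to generate() in the snippet above
warpers = LogitsProcessorList([TopKLogitsWarper(top_k=15), TopPLogitsWarper(top_p=0.9)])

logits = model_output.logits[:, :-1, :]   # logits at position t predict token t+1
targets = generated_input_ids[:, 1:]

log_probs = []
for t in range(logits.shape[1]):
    # Apply the same post-processing generate() used, then normalize
    step_scores = warpers(generated_input_ids[:, : t + 1], logits[:, t, :])
    step_log_probs = torch.log_softmax(step_scores, dim=-1)
    log_probs.append(step_log_probs.gather(1, targets[:, t : t + 1]))
log_probs = torch.cat(log_probs, dim=1)

print(log_probs[:, -max_new_tokens:])  # compare against compute_transition_scores(...)
```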
transformers
24,800
closed
Revert "Unpin protobuf in docker file (for daily CI)"
Reverts huggingface/transformers#24761. ONNX unfortunately doesn't support protobuf v4, and our daily CI has many ONNX tests broken after #24761. See the [failing jobs](https://github.com/huggingface/transformers/actions/runs/5526985620)
07-13-2023 02:10:55
07-13-2023 02:10:55
Merge directly so we can have a better CI report in the next run.<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24800). All of your documentation changes will be reflected on that endpoint.<|||||>@sgugger Just to let you know I revert a merged PR #24761: There is no super easy way to move (all) the ONNX-related tests to another job. We get stuck with protobuf v3 on daily CI.<|||||>Shouldn't all the ONNX tests be done on the optimum side now?<|||||>Yes. I know you once mentioned to me that we will ignore failing ONNX tests (`tests/onnx`) - but I don't know if you are OK for us to completely remove it (as all of them are failing with protobuf v4). But the story is longer: we have tests like `TFGPT2ModelTest::test_onnx_runtime_optimize` that are defined in individual model test file, and not in `tests/onnx`. Of course, I am happy if the Optimum can take care of this on their side.<|||||>cc @michaelbenayoun Can you confirm it's okay for us to remove all ONNX tests? They all test the deprecated way of using ONNX as far as I know.<|||||>The issue here is that these tests rely on a pretty old release (v1.12.0) of `onnx`: https://github.com/huggingface/transformers/actions/runs/5526985620/jobs/10082374086#step:8:159 The last release should be compatible with `protobuf` v4. Other than this, it sounds good to me to remove all ONNX tests in Transformers :+1: @michaelbenayoun and @fxmarty will know better though.<|||||>@regisss Thank you for the heads up ❤️ - but yes it would be great if we can delegate the ONNX testing to Optimum CI.<|||||>@ydshieh @sgugger Yes I believe it is fine to remove the ONNX tests from transformers, as the export in Optimum is now mature, extended and well tested!<|||||>Thanks!
transformers
24,799
open
Add UnivNet Vocoder Model for Tortoise TTS Diffusers Integration
# What does this PR do? This PR adds the UnivNet GAN vocoder model ([paper](https://arxiv.org/pdf/2106.07889.pdf), [code](https://github.com/mindslab-ai/univnet)) to `transformers`, which is the vocoder used in the Tortoise TTS text-to-speech model ([paper](https://arxiv.org/pdf/2305.07243.pdf), [code](https://github.com/neonbjb/tortoise-tts)) which is currently being integrated into `diffusers`. See [this issue](https://github.com/huggingface/diffusers/issues/3891) in `diffusers`. ![univnet_model_architecture](https://github.com/huggingface/transformers/assets/58458699/8a33190a-cd52-4e81-a6ed-2fc921b9f86f) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sanchit-gandhi @susnato
07-13-2023 01:30:28
07-13-2023 01:30:28
Hi @dg845 if you are planning to add it to the `models` folder, then I think it should have a doc file(`univnet.md`) in the docs.<|||||>For now I've added the UnivNet code to the `/src/transformers/models/univnet/` directory. @sanchit-gandhi, since the UnivNet model isn't technically a transformer model (in that it doesn't use any attention mechanisms), is this the best place to put it? For example, the [`SpeechT5HifiGan`](https://huggingface.co/docs/transformers/main/model_doc/speecht5#transformers.SpeechT5HifiGan) vocoder is in `/src/transformers/models/speecht5/` along with the other SpeechT5 models, but I assume most of the other Tortoise TTS code will go into `diffusers` rather than `transformers`.<|||||>Nice start @dg845! Yep fine to have it as a standalone model - we have ResNet in transformers as well which is not strictly attention-based.<|||||>Hi @sanchit-gandhi, I think the PR is ready for review. The following are the differences between the [`SpeechT5HifiGan`](https://huggingface.co/docs/transformers/main/model_doc/speecht5#transformers.SpeechT5HifiGan) and the `UnivNetGan` model: - The `SpeechT5HifiGan` outer residual blocks* (that is, [`HifiGanResidualBlock`](https://github.com/huggingface/transformers/blob/main/src/transformers/models/speecht5/modeling_speecht5.py#L3074)) upsamples the number of hidden channels between each outer residual block, but the `UnivNetGan` outer residual blocks* (`UnivNetLVCBlock`) keep the number of hidden channels constant. - Although the structures of the inner residual blocks (for UnivNet, the `UnivNetLVCResidualBlock` module) are similar: `LReLU` => dilated `Conv1d` => `LReLU` => `Conv1d` => skip connection, the UnivNet model uses a [location variable convolutional layer](https://arxiv.org/pdf/2102.10815.pdf) followed by a [gated activation unit](https://proceedings.neurips.cc/paper_files/paper/2016/file/b1301141feffabac455e1f90a7de2054-Paper.pdf) in place of the second `Conv1d` layer. - Accordingly, each outer residual block (`UnivNetLVCBlock`) in UnivNet has a kernel predictor residual network (`UnivNetKernelPredictor`) to predict the kernels and biases for the location variable convolutional layer in each inner residual block in the main resnet. - In addition to a conditioning log-mel `spectrogram`, UnivNet takes in a noise sequence as input. The `noise_waveform` is the input to the "main" resnet (e.g. the stack of `UnivNetLVCResidualBlock`s), while the conditioning `spectrogram` is the input to the kernel predictor in each `UnivNetLVCBlock`. (*) "Outer residual block" is a bit of a misnomer, since for both blocks in question (`HifiGanResidualBlock`, `UnivNetLVCBlock`) there's no skip connection between the input to the block and the main computation in the block.<|||||>Also, I'm not sure why `utils/check_table.py` is failing. I ran `make fix-copies` to create a table entry for UnivNet in `docs/source/en/index.md`, and then added a checkmark for PyTorch support, but for some reason `check_table.py` doesn't seem to like that.<|||||>> Also, I'm not sure why `utils/check_table.py` is failing. `utils/check_table.py` is no longer failing after I merged `main` into the PR branch. Running `make fix-copies` adds an entry for UnivNet, but I'm not sure why it doesn't add a checkmark in the "PyTorch support" column, perhaps the model information is mis-configured.
transformers
24,797
open
Trn1 LoRA fine-tuning with HF raises RuntimeError: Invalid device format: cpu
### System Info optimum 1.9.0 optimum-neuron 0.0.7 transformers 4.30.2 DLAMI: https://aws.amazon.com/marketplace/pp/prodview-gr3e6yiscria2 AWS instance: trn1.2xl ` File "<string>", line 111, in __init__ File "/usr/local/lib/python3.10/dist-packages/transformers/training_args.py", line 1341, in __post_init__ and (get_xla_device_type(self.device) != "GPU") File "/usr/local/lib/python3.10/dist-packages/transformers/training_args.py", line 127, in get_xla_device_type return xm.xla_real_devices([device])[0].split(":")[0] File "/usr/local/lib/python3.10/dist-packages/torch_xla/core/xla_model.py", line 268, in xla_real_devices return [_xla_real_device(device) for device in devices] File "/usr/local/lib/python3.10/dist-packages/torch_xla/core/xla_model.py", line 268, in <listcomp> return [_xla_real_device(device) for device in devices] File "/usr/local/lib/python3.10/dist-packages/torch_xla/core/xla_model.py", line 263, in _xla_real_device raise RuntimeError('Invalid device format: {}'.format(device_str)) RuntimeError: Invalid device format: cpu ` Python training script attached as txt format [train-vit.txt](https://github.com/huggingface/transformers/files/12034074/train-vit.txt) ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Steps to reproduce: 1. Create an EC2 Trn1 instance with [Hugging Face DLAMI ](https://aws.amazon.com/marketplace/pp/prodview-gr3e6yiscria2) 2. `pip install peft` 3. install optimum-neuron form source: `pip install git+https://github.com/huggingface/optimum-neuron.git` 4. run python3 train-vit.py (script attached) ### Expected behavior To run training script to completion.
07-12-2023 23:31:07
07-12-2023 23:31:07
cc @pacman100 <|||||>Issue reported to https://github.com/huggingface/optimum-neuron/issues/134
transformers
24,796
open
new model: IDEFICS via HuggingFaceM4
**important: The following notes are for my team mates and they won't work for anybody else as the data isn't ready for the public yet. should be ready early in August** status: the modeling code integration is ready - awaiting the final review Meanwhile to try it out: ``` $ git clone https://github.com/huggingface/transformers -b add-model-idefics $ cd transformers $ cat generate.py import torch from transformers import IdeficsForVisionText2Text, AutoProcessor device = "cuda" if torch.cuda.is_available() else "cpu" checkpoint = "HuggingFaceM4/idefics-9b" #checkpoint = "HuggingFaceM4/tiny-random-idefics" model = IdeficsForVisionText2Text.from_pretrained(checkpoint, torch_dtype=torch.bfloat16).to(device) processor = AutoProcessor.from_pretrained(checkpoint) prompts = [ [ "User:", "https://hips.hearstapps.com/hmg-prod/images/cute-photos-of-cats-in-grass-1593184777.jpg", "Describe this image." "Assistant: An image of two kittens in grass.", "User:", "https://hips.hearstapps.com/hmg-prod/images/dog-puns-1581708208.jpg", "Describe this image". "Assistant:", ], [ "User:", "https://hips.hearstapps.com/hmg-prod/images/dog-puns-1581708208.jpg", "Describe this image." "Assistant: An image of a dog wearing funny glasses.", "User:", "https://hips.hearstapps.com/hmg-prod/images/cute-photos-of-cats-in-grass-1593184777.jpg", "Describe this image". "Assistant:", ], ] # batched mode inputs = processor(prompts, return_tensors="pt").to(device) # single sample mode #inputs = processor(prompts[0], return_tensors="pt").to(device) generated_ids = model.generate(**inputs, max_length=100) generated_text = processor.batch_decode(generated_ids, skip_special_tokens=True) for i,t in enumerate(generated_text): print(f"{i}:\n{t}\n") ``` and then run: ``` CUDA_VISIBLE_DEVICES=0 PYTHONPATH=src python generate.py ``` # Demos A PR with examples/demos, including finetuning, is here: https://github.com/huggingface/notebooks/pull/418 # TODOs before merging - [ ] make the models public - which coincides with the announcement/release
07-12-2023 23:01:59
07-12-2023 23:01:59
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24796). All of your documentation changes will be reflected on that endpoint.<|||||>Is it possible to be a private repo ? ;-) The m4 repo from huggingface organisation does not exist<|||||>Thank you for your interest, @flozi00 - please give us some time. It says WIP because it's not ready for a public consumption. I edited the OP to clarify that.<|||||>I'm not able to rebase as this recently merged PR https://github.com/huggingface/transformers/pull/25174 breaks `tests/models/idefics/test_image_processing_idefics.py::IdeficsImageProcessingTest::test_torchvision_numpy_transforms_equivalency` cc: @amyeroberts, if I need to adapt our image processing code please let me know - the function in question is called here: https://github.com/huggingface/transformers/pull/24796/files#diff-e1b90eb52340b91c2471bac7c6fd34c67c7cd530050c607852fd426f397b3b3fR162<|||||>@sgugger, I addressed your feedback and this PR is ready for a detailed review. Thank you!<|||||>Thank you, @sgugger, @HugoLaurencon and @leot13 for your reviews - I have addressed everything you have raised.
transformers
24,795
closed
Pop
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
07-12-2023 22:59:54
07-12-2023 22:59:54
transformers
24,794
open
BertGeneration misses **model_kwargs in prepare_inputs_for_generation()
### System Info https://github.com/huggingface/transformers/blob/main/src/transformers/models/bert_generation/modeling_bert_generation.py#L989 This function should clearly return ```**model_kwargs``` but it is not. This results in passed args such as ```encoder_hidden_states``` not being used for generation. ### Who can help? @gante @ArthurZucker @younesbelkada ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I used this model for image-captioning task, where the visual features should be used as ```encoder_hidden_states``` for the model.generate() method. The current implementation will simply neglect this input and generates the same texts for every image. Hope this information is sufficient to see why the current implementation is problematic. ### Expected behavior current implementation: def prepare_inputs_for_generation(self, input_ids, past_key_values=None, attention_mask=None, **model_kwargs): input_shape = input_ids.shape # if model is used as a decoder in encoder-decoder model, the decoder attention mask is created on the fly if attention_mask is None: attention_mask = input_ids.new_ones(input_shape) # cut decoder_input_ids if past is used if past_key_values is not None: input_ids = input_ids[:, -1:] return {"input_ids": input_ids, "attention_mask": attention_mask, "past_key_values": past_key_values} Correct implementation (simply change the last line): def prepare_inputs_for_generation(self, input_ids, past_key_values=None, attention_mask=None, **model_kwargs): input_shape = input_ids.shape # if model is used as a decoder in encoder-decoder model, the decoder attention mask is created on the fly if attention_mask is None: attention_mask = input_ids.new_ones(input_shape) # cut decoder_input_ids if past is used if past_key_values is not None: input_ids = input_ids[:, -1:] return {"input_ids": input_ids, "attention_mask": attention_mask, "past_key_values": past_key_values, **model_kwargs}
07-12-2023 22:50:42
07-12-2023 22:50:42
Hi @luyuzhe111 Thank you for raising the question! When we want to use generation with an encoder's output, a model of type `BertGeneration` is meant to be the (decoder) component of an encoder-decoder model (here `class EncoderDecoderModel`). `generate` takes care of creating the `encoder_outputs` https://github.com/huggingface/transformers/blob/906afa1d5c6054a641cb6abb009cdec732a5a094/src/transformers/generation/utils.py#L1342-L1347 and `EncoderDecoderModel.prepare_inputs_for_generation` passes them to the underlying decoder model. https://github.com/huggingface/transformers/blob/906afa1d5c6054a641cb6abb009cdec732a5a094/src/transformers/models/encoder_decoder/modeling_encoder_decoder.py#L668-L681 So everything works correctly 🤗. However, if you want to use `BertGenerationDecoder` on its own, without our `class EncoderDecoderModel`, then you will have to modify the code yourself.
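For reference, a minimal sketch of the `EncoderDecoderModel` path described above, adapted from the `BertGeneration` docs example (the checkpoint name and the `[CLS]`/`[SEP]` token ids 101/102 come from that example; the cross-attention weights are freshly initialized, so the generations are not meaningful). This is not the image-captioning setup from the issue, only the supported text-to-text route.

```python
# BertGenerationDecoder used inside EncoderDecoderModel, so that generate()
# creates encoder_outputs and routes them to the decoder's cross-attention.
from transformers import BertGenerationDecoder, BertGenerationEncoder, BertTokenizer, EncoderDecoderModel

encoder = BertGenerationEncoder.from_pretrained("bert-large-uncased", bos_token_id=101, eos_token_id=102)
decoder = BertGenerationDecoder.from_pretrained(
    "bert-large-uncased", add_cross_attention=True, is_decoder=True, bos_token_id=101, eos_token_id=102
)
bert2bert = EncoderDecoderModel(encoder=encoder, decoder=decoder)

tokenizer = BertTokenizer.from_pretrained("bert-large-uncased")
input_ids = tokenizer("This is a long article to summarize", return_tensors="pt").input_ids

# decoder_start_token_id = BERT's [CLS]; generate() builds encoder_outputs internally
outputs = bert2bert.generate(input_ids, decoder_start_token_id=101, max_new_tokens=20)
print(tokenizer.batch_decode(outputs, skip_special_tokens=True))
```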
transformers
24,793
closed
[🔗 Docs] Fixed Incorrect Migration Link
I couldn't find it in the transformers files. Can you check whether this is correct?
07-12-2023 22:23:37
07-12-2023 22:23:37
Hi @amyeroberts, I am new to the transformers library. I wanted to fix this error that I saw in the documentation, because the link is broken: "The documentation page MIGRATION doesn't exist in v4.30.0, but exists on the main version. Click here to redirect to the main version of the documentation." I looked in the transformers documentation (https://huggingface.co/docs) and couldn't find it. Can you help?<|||||>Thank you for the help💖
transformers
24,792
closed
AttributeError: 'Parameter' object has no attribute 'ds_numel'
### System Info Python 3.10 CUDA 11.8 torch 2.0.1 transfromers 4.30.2 bitsandbytes 0.39.1 datasets 2.13.0 einops 0.6.1 trl 0.4.4 accelerate 0.20.3 deepspeed 0.9.5 ### Who can help? @pacman100 ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Hi, I'm trying to reproduce the Falcon LLM fine-tuning by using a modified version of the [HF Collab script](https://colab.research.google.com/drive/1BiQiw31DT7-cDp1-0ySXvvhzqomTdI-o?usp=sharing). The Jupyter notebook runs well when DeepSpeed is not in the mix, but when I introduce the DeepSpeed ZeRO-3 in `TrainingArguments` (which gets fed into `SFTTrainer` the `trainer.train()` call fails with error `AttributeError: 'Parameter' object has no attribute 'ds_numel'.` **Here the DeepSpeed config `dict` I'm using:** ``` ds_config = { "fp16": { "enabled": "auto", "loss_scale": 0, "loss_scale_window": 1000, "initial_scale_power": 16, "hysteresis": 2, "min_loss_scale": 1 }, "optimizer": { "type": "AdamW", "params": { "lr": "auto", "betas": "auto", "eps": "auto", "weight_decay": "auto" } }, "scheduler": { "type": "WarmupLR", "params": { "warmup_min_lr": "auto", "warmup_max_lr": "auto", "warmup_num_steps": "auto" } }, "zero_optimization": { "stage": 3, "offload_optimizer": { "device": "none", "pin_memory": "true" }, "offload_param": { "device": "none", "pin_memory": "true" }, "overlap_comm": "true", "contiguous_gradients": "true", "sub_group_size": 1e9, "reduce_bucket_size": "auto", "stage3_prefetch_bucket_size": "auto", "stage3_param_persistence_threshold": "auto", "stage3_max_live_parameters": 1e9, "stage3_max_reuse_distance": 1e9, "stage3_gather_16bit_weights_on_model_save": "true" }, "gradient_accumulation_steps": GRADIENT_ACCUMULATION_STEPS, "gradient_clipping": "auto", "steps_per_print": 10, "train_batch_size": "auto", "train_micro_batch_size_per_gpu": "auto", "wall_clock_breakdown": "false" ``` **Stack trace: ``` File ~/miniconda3/envs/falcon/lib/python3.10/site-packages/transformers/trainer.py:1793, in Trainer._inner_training_loop(self, batch_size, args, resume_from_checkpoint, trial, ignore_keys_for_eval) 1791 logger.info(f" Gradient Accumulation steps = {args.gradient_accumulation_steps}") 1792 logger.info(f" Total optimization steps = {max_steps:,}") -> 1793 logger.info(f" Number of trainable parameters = {get_model_param_count(model, trainable_only=True):,}") 1795 self.state.epoch = 0 1796 start_time = time.time() File ~/miniconda3/envs/falcon/lib/python3.10/site-packages/transformers/trainer_pt_utils.py:1053, in get_model_param_count(model, trainable_only) 1050 def numel(p): 1051 return p.numel() -> 1053 return sum(numel(p) for p in model.parameters() if not trainable_only or p.requires_grad) File ~/miniconda3/envs/falcon/lib/python3.10/site-packages/transformers/trainer_pt_utils.py:1053, in <genexpr>(.0) 1050 def numel(p): 1051 return p.numel() -> 1053 return sum(numel(p) for p in model.parameters() if not trainable_only or p.requires_grad) File ~/miniconda3/envs/falcon/lib/python3.10/site-packages/transformers/trainer_pt_utils.py:1046, in get_model_param_count.<locals>.numel(p) 1045 def numel(p): -> 1046 return p.ds_numel AttributeError: 'Parameter' object has no attribute 'ds_numel' ``` **Here the core section of the code** ``` # Dataset loader DATASET_PATH = "timdettmers/openassistant-guanaco" # Params for AutoModelForCausalLM DEVICE_MAP = "auto" # 
Instructs Accelerate to use all GPUs available in the node. LOAD_IN_8BIT = True # 8-bit precision requires ~ 1.2-1.4GB memory per 1B parameters MODEL_NAME = "tiiuae/falcon-7b" # Could use "tiiuae/falcon-40b" or "tiiuae/falcon-7b" TRUST_REMOTE_CODE = True # Required when a model is not yet part of the Transformers library # LoRA configuration (see https://huggingface.co/docs/peft/conceptual_guides/lora) # LoRA allows efficient fine-tuning of LLMs by training low rank (small) matrices LORA_ALPHA = 16 # LoRA scaling factor. LORA_DROPOUT = 0.1 # Probability of a neuron link to get disabled during a step LORA_R = 32 # Rank of update matrices. Lower rank results in smaller update matrices with fewer trainable parameters. #List of modules apart from LoRA layers to be set as trainable and saved in the final checkpoint. LORA_TARGET_MODULES = ["query_key_value", "dense", "dense_h_to_4h", "dense_4h_to_h"] # Trainer configuration BF16 = True # Whether to use bf16 precision. Requires Ampere or higher NVIDIA architecture. EVAL_STEPS = 8 # Number of update steps between two evaluations if evaluation_strategy="steps" EVAL_STRATEGY = 'steps' # Evaluation is done (and logged) every eval_steps. FP16 = not BF16 # Whether to use fp16 16-bit (mixed) precision training instead of 32-bit training. GRADIENT_ACCUMULATION_STEPS = 4 # Accumulates gradients from 'n' batches before stepping the optimizer GROUP_BY_LENGTH = True # group samples of similar length to minimize padding and be more efficient. LOAD_BEST = True # Load the checkpoint with the lowest loss at the end. LOGGING_STEPS = 4 # Number of update steps between two logs if logging_strategy="steps". LOGGING_STRATEGY = 'steps' # Logging is done every logging_steps LR = 2e-4 # The initial learning rate. LR_SCHEDULER_TYPE = 'constant' # Other options are 'cosine' or 'linear' MAX_GRAD_NORM = 0.3 # Maximum gradient norm (for gradient clipping). MAX_STEPS = 184 # Start with a small test (64) then increase the number to multiple epochs OPTIMIZER = "paged_adamw_32bit" # Optimizer function OUTPUT_DIR = "./results" # Where checkpoints will be saved PER_DEV_TRAIN_BATCH_SIZE = 4 # Use a low number if getting out of memory errors REPORT_ENDPOINT = "wandb" # Comment out if don't want to use wandb. Ensure you had run 'wandb login' previously. SAVE_STEPS = 8 # Number of updates steps before two checkpoint saves if save_strategy="steps" SAVE_STRATEGY = 'steps' # Save is done every save_steps. SAVE_TOTAL_LIMIT = 2 # Only save the last and the best checkpoints USE_CACHE = False # Can't use cache with gradient check pointing WARMUP_RATIO = 0.03 # Ratio of total training steps used for a linear warmup from 0 to learning_rate. 
WEIGHT_DECAY = 0.001 # AdamW regularization parameter # SFTTrainer config (see https://huggingface.co/docs/trl/main/en/sft_trainer) MAX_SEQ_LENGTH = 512 # Max length is token sequence in an example model = AutoModelForCausalLM.from_pretrained( MODEL_NAME, load_in_8bit = LOAD_IN_8BIT, trust_remote_code = TRUST_REMOTE_CODE, device_map = DEVICE_MAP, ) model.config.use_cache = USE_CACHE tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME, trust_remote_code = TRUST_REMOTE_CODE) tokenizer.pad_token = tokenizer.eos_token # Setup LoRA peft_config = LoraConfig( lora_alpha = LORA_ALPHA, lora_dropout = LORA_DROPOUT, r = LORA_R, bias = "none", task_type = "CAUSAL_LM", target_modules = LORA_TARGET_MODULES ) # Setup training arguments training_arguments = TrainingArguments( output_dir = OUTPUT_DIR, per_device_train_batch_size = PER_DEV_TRAIN_BATCH_SIZE, gradient_accumulation_steps = GRADIENT_ACCUMULATION_STEPS, #optim = OPTIMIZER, save_steps = SAVE_STEPS, save_strategy = SAVE_STRATEGY, logging_steps = LOGGING_STEPS, logging_strategy = LOGGING_STRATEGY, learning_rate = LR, #lr_scheduler_type = LR_SCHEDULER_TYPE, fp16 = FP16, bf16 = BF16, max_grad_norm = MAX_GRAD_NORM, max_steps = MAX_STEPS, warmup_ratio = WARMUP_RATIO, group_by_length = GROUP_BY_LENGTH, report_to = REPORT_ENDPOINT, evaluation_strategy = EVAL_STRATEGY, eval_steps = EVAL_STEPS, load_best_model_at_end = LOAD_BEST, greater_is_better = False, save_total_limit = SAVE_TOTAL_LIMIT, deepspeed=ds_config, disable_tqdm=True, #log_level= "error", ) trainer = SFTTrainer( model = model, train_dataset = train_dataset, eval_dataset = eval_dataset, peft_config = peft_config, dataset_text_field = "text", max_seq_length = MAX_SEQ_LENGTH, tokenizer = tokenizer, args = training_arguments, ) for name, module in trainer.model.named_modules(): if "norm" in name: module = module.to(torch.float32) # Fine-tune the model trainer.train() ``` Thanks! ### Expected behavior I expected the training process to run with DeepSpeed in the mix as it was doing when it DS wasn't called. Thanks in advance for your help!
07-12-2023 22:16:14
07-12-2023 22:16:14
transformers
24,791
closed
Upgrade jax/jaxlib/flax pin versions
# What does this PR do? So we can have the latest TF versions.
07-12-2023 21:14:19
07-12-2023 21:14:19
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24791). All of your documentation changes will be reflected on that endpoint.
transformers
24,790
open
run_mlm is not working with TPU
### System Info I am using colab : python:3.10 for torch and xla : `pip install cloud-tpu-client==0.10 torch==2.0.0 torchvision==0.15.1 https://storage.googleapis.com/tpu-pytorch/wheels/colab/torch_xla-2.0-cp310-cp310-linux_x86_64.whl` transformers == 4.30.2 after using this command the training get this error : `2023-07-12 21:07:17.770888: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT 2023-07-12 21:08:09.577189: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT 2023-07-12 21:08:09.645535: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT 2023-07-12 21:08:09.848706: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT 2023-07-12 21:08:10.028873: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT 2023-07-12 21:08:10.122547: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT 2023-07-12 21:08:10.322647: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT 2023-07-12 21:08:10.612495: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT 2023-07-12 21:08:10.867921: W tensorflow/compiler/tf2tensorrt/utils/py_utils.cc:38] TF-TRT Warning: Could not find TensorRT Exception in device=TPU:0: Invalid device format: cpu Traceback (most recent call last): File "/usr/local/lib/python3.10/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 334, in _mp_start_fn _start_fn(index, pf_cfg, fn, args) File "/usr/local/lib/python3.10/dist-packages/torch_xla/distributed/xla_multiprocessing.py", line 328, in _start_fn fn(gindex, *args) File "/content/run_mlm.py", line 654, in _mp_fn main() File "/content/run_mlm.py", line 239, in main model_args, data_args, training_args = parser.parse_args_into_dataclasses() File "/usr/local/lib/python3.10/dist-packages/transformers/hf_argparser.py", line 346, in parse_args_into_dataclasses obj = dtype(**inputs) File "<string>", line 111, in __init__ File "/usr/local/lib/python3.10/dist-packages/transformers/training_args.py", line 1341, in __post_init__ and (get_xla_device_type(self.device) != "GPU") File "/usr/local/lib/python3.10/dist-packages/transformers/training_args.py", line 127, in get_xla_device_type return xm.xla_real_devices([device])[0].split(":")[0] File "/usr/local/lib/python3.10/dist-packages/torch_xla/core/xla_model.py", line 271, in xla_real_devices return [_xla_real_device(device) for device in devices] File "/usr/local/lib/python3.10/dist-packages/torch_xla/core/xla_model.py", line 271, in <listcomp> return [_xla_real_device(device) for device in devices] File "/usr/local/lib/python3.10/dist-packages/torch_xla/core/xla_model.py", line 266, in _xla_real_device raise RuntimeError('Invalid device format: {}'.format(device_str)) RuntimeError: Invalid device format: cpu ╭───────────────────── Traceback (most recent call last) ──────────────────────╮ │ /content/xla_spawn.py:83 in <module> │ │ │ │ 80 │ │ 81 │ │ 82 if __name__ == "__main__": │ │ ❱ 83 │ main() │ │ 84 │ │ │ │ /content/xla_spawn.py:79 in main │ │ │ │ 76 │ # Patch sys.argv │ │ 77 │ sys.argv = [args.training_script] + args.training_script_args + ["- │ │ 78 │ │ │ ❱ 79 │ xmp.spawn(mod._mp_fn, args=(), nprocs=args.num_cores) │ │ 80 │ │ 81 │ │ 82 if __name__ == "__main__": │ │ │ │ 
/usr/local/lib/python3.10/dist-packages/torch_xla/distributed/xla_multiproce │ │ ssing.py:397 in spawn │ │ │ │ 394 if pf_cfg.num_devices == 1: │ │ 395 │ _start_fn(0, pf_cfg, fn, args) │ │ 396 else: │ │ ❱ 397 │ result = torch.multiprocessing.start_processes( │ │ 398 │ │ _mp_start_fn, │ │ 399 │ │ args=(pf_cfg, fn, args), │ │ 400 │ │ nprocs=pf_cfg.num_devices, │ │ │ │ /usr/local/lib/python3.10/dist-packages/torch/multiprocessing/spawn.py:197 │ │ in start_processes │ │ │ │ 194 │ │ return context │ │ 195 │ │ │ 196 │ # Loop on join until it returns True or raises an exception. │ │ ❱ 197 │ while not context.join(): │ │ 198 │ │ pass │ │ 199 │ │ 200 │ │ │ │ /usr/local/lib/python3.10/dist-packages/torch/multiprocessing/spawn.py:149 │ │ in join │ │ │ │ 146 │ │ │ │ │ signal_name=name │ │ 147 │ │ │ │ ) │ │ 148 │ │ │ else: │ │ ❱ 149 │ │ │ │ raise ProcessExitedException( │ │ 150 │ │ │ │ │ "process %d terminated with exit code %d" % │ │ 151 │ │ │ │ │ (error_index, exitcode), │ │ 152 │ │ │ │ │ error_index=error_index, │ ╰──────────────────────────────────────────────────────────────────────────────╯ ProcessExitedException: process 0 terminated with exit code 17` ### Who can help? @sgugger , @ArthurZucker ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction 1. choose TPU platform on colab 2. ``` !python xla_spawn.py --num_cores 8 run_mlm.py \ --model_name_or_path roberta-base \ --tpu_num_cores 8 \ --train_file tr.txt \ --validation_file dev.txt \ --per_device_train_batch_size 8 \ --per_device_eval_batch_size 8 \ --do_train \ --do_eval \ --max_seq_len 200 \ --line_by_line True \ --pad_to_max_length True \ --output_dir mlm_tpu-v2 ``` ### Expected behavior The training should run.
07-12-2023 21:11:59
07-12-2023 21:11:59
It looks like PyTorch XLA cannot see your TPUs. Are you sure you properly set up your instance?<|||||>I suppose it is set up properly. I am using a Colab TPU and I have installed this package: `pip install cloud-tpu-client==0.10 torch==2.0.0 torchvision==0.15.1 https://storage.googleapis.com/tpu-pytorch/wheels/colab/torch_xla-2.0-cp310-cp310-linux_x86_64.whl`
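A quick sanity check (assuming `torch_xla` imports cleanly in the same runtime) that XLA actually sees a TPU before launching `xla_spawn.py`; if the device is backed by the CPU, `transformers` will fail with the same `Invalid device format: cpu` error.

```python
# Run this in the same Colab runtime before starting training.
import torch_xla.core.xla_model as xm

dev = xm.xla_device()
print(dev)                     # e.g. xla:0 or xla:1
print(xm.xla_device_hw(dev))   # expect "TPU"; "CPU" means XLA fell back to the CPU device
```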
transformers
24,789
closed
Update setup.py to be compatible with pipenv
# What does this PR do? This enables installing transformers from source using pipenv. Currently installing transformers through pipenv via a git source is blocked by this issue: https://github.com/pypa/pipenv/issues/5167#issuecomment-1349316531 Installation will fail with: ``` AttributeError: 'Subscript' object has no attribute 's' ``` ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
07-12-2023 20:55:52
07-12-2023 20:55:52
> I'm confused. `install_requires` is already a list as seen on line 415. Thanks for taking a look @sgugger! It seems to be a requirementslib bug that occurs when one of the items in the list of dependencies is declared with a string interpolation.. I'm not sure why using the list constructor fixes the bug. This is the behavior when installing transformers through pipenv without the bugfix: ``` $ pipenv install git+https://github.com/huggingface/transformers#egg=transformers Installing git+https://github.com/huggingface/transformers@v4.30.0#egg=transformers... Resolving git+https://github.com/huggingface/transformers@v4.30.0#egg=transformers... ✘ Locking Failed! Traceback (most recent call last): ... File "/home/gmathews/.local/lib/python3.8/site-packages/pipenv/vendor/requirementslib/models/setup_info.py", line 659, in _find_install_requires return [el.s for el in variable.elts] File "/home/gmathews/.local/lib/python3.8/site-packages/pipenv/vendor/requirementslib/models/setup_info.py", line 659, in <listcomp> return [el.s for el in variable.elts] AttributeError: 'Subscript' object has no attribute 's' ```<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24789). All of your documentation changes will be reflected on that endpoint.
transformers
24,788
closed
set correct model input names for gptsw3tokenizer
# What does this PR do? Makes it so the tokenizer doesn't output `token_type_ids`, as these break generation. Seems like a harmless change, but I'm not sure what these are used for so let me know if this is the wrong approach! ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [X] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [X] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker @amyeroberts @ekgren @Apsod
07-12-2023 20:09:18
07-12-2023 20:09:18
_The documentation is not available anymore as the PR was closed or merged._<|||||>LGTM We're not using token type ids during training, so there is no reason for the tokenizer to output them, and doing so just leads to unintended behaviour.
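A minimal sketch of what restricting `model_input_names` achieves, using `bert-base-uncased` purely as an illustration since its tokenizer returns `token_type_ids` by default:

```python
from transformers import AutoTokenizer

tok = AutoTokenizer.from_pretrained("bert-base-uncased")
print(tok("hello").keys())  # includes token_type_ids by default

# Once token_type_ids is dropped from model_input_names, __call__ stops returning it,
# so generate() no longer receives an input the model does not use.
tok.model_input_names = ["input_ids", "attention_mask"]
print(tok("hello").keys())  # dict_keys(['input_ids', 'attention_mask'])
```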
transformers
24,787
closed
Deprecate models
# What does this PR do? This PR creates the precedent of deprecating models in the library. By deprecating we indicate that we will stop maintaining such models, but there is no intention of actually removing those models and breaking support for them (they might one day move into a separate repo/on the Hub but we would still add the necessary imports to make sure backward compatibility stays). The main point is that we stop testing those models (to ease a bit the burden on our CI). Deprecated models are moved in models/deprecated so direct import of objects from their modeling files will break (though that's easily fixed by adding the `.deprecated` in the path). They are removed from the `tests` folder and a mention is added in the doc page of the model. The heuristic to pick the deprecated models in this PR is: models older than a year that got less than a cumulated 1,000 downloads (over all checkpoints) in the last 30 days (counting deduplicated downloads).
07-12-2023 19:41:00
07-12-2023 19:41:00
_The documentation is not available anymore as the PR was closed or merged._<|||||>FMI: (M=my) I probably need to do something with daily CI to take this PR's change into account!
transformers
24,786
closed
Added support for dtype in .to() method.
Issue #24068. The updated method now accepts both "device" and "dtype" as keyword arguments. When "dtype" is provided, the tensors within the object will be cast to the specified data type. # What does this PR do? This PR adds support for the `dtype` parameter in the `.to()` method of the `BatchEncoding` class. Previously, only the `device` parameter was supported. With this enhancement, users can now specify the desired data type for tensor casting and allocation. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @amyeroberts @NielsRogge
07-12-2023 19:32:42
07-12-2023 19:32:42
@amyeroberts hey, yes will do!<|||||>Thanks for your PR but what is the purpose of this? The tokenization results are all integers and changing their dtype will make then unusable by the model.<|||||>@sgugger I was using `BatchEncoding` in a Processor class (`InstructBlipProcessor`) and noticed that it didn't support `dtype` as `BatchFeature` does. However it's a bit unclear to me whether I need to use `BatchFeature` or `BatchEncoding` for multi-modal processors<|||||>Probably batch feature if you have float values. As the name indicates, `BatchEncoding` is for encoded values (so ints).<|||||>@amyeroberts passed all cases. Kindly check!<|||||>> Thanks for your PR but what is the purpose of this? The tokenization results are all integers and changing their dtype will make then unusable by the model. You did not answer that question though.<|||||>@amannagarkar thanks for your PR, but given the comment by @sgugger it probably makes sense to close this PR, and instead update multimodal processors in the library that return a`BatchEncoding` instead of a `BatchFeature`. This is because `BatchEncoding` is only used by text-only tokenizers, for which the `dtype` isn't relevant, since they always return LongTensors.<|||||>@sgugger sorry for not responding, I thought Niels answered your question. I will be more careful in the future! @NielsRogge okay, noted. I will take a look into it!
transformers
24,785
closed
Copy code when using local trust remote code
# What does this PR do? When using the `trust_remote_code=True` feature with local models (for instance using a clone of a repo with custom code) the custom code files are not copied over if the user does `save_pretrained` (as a result of #22814) but they should in this specific case. This PR fixes that. Fixes #24737
07-12-2023 18:39:05
07-12-2023 18:39:05
_The documentation is not available anymore as the PR was closed or merged._
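A minimal sketch of the workflow this PR fixes (the paths are illustrative): loading a local clone that ships custom code with `trust_remote_code=True` and re-saving it should copy the custom `.py` files next to the weights.

```python
from transformers import AutoModel

# local directory containing config.json, the weights and the custom modeling/configuration files
model = AutoModel.from_pretrained("./local-clone-with-custom-code", trust_remote_code=True)

# with this fix, save_pretrained also copies the custom code files into the target directory
model.save_pretrained("./resaved-model")
```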
transformers
24,784
closed
Link with accelerate
# What does this PR do? Replicates the logic in https://github.com/huggingface/accelerate/pull/1718 here on the trainer, to reduce the sync overhead as `get_scale` is a full-sync operation, meaning both GPU and CPU need to fully stop before continuing. This PR reduces it by half when not using the Accelerator. Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
07-12-2023 18:17:20
07-12-2023 18:17:20
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,783
open
Generate: have an example on each `LogitsProcessor` class docstring
# Context `.generate()` can be extensively manipulated through `LogitsProcessor` (and `LogitsWarper`) classes. Those classes are the code implementation behind flags like `temperature` or `top_k`. Most of our `LogitsProcessor` classes have a docstring that briefly describes their effect. However, unless you are an expert in text generation, it's hard to fully grasp the impact of using each class. In some cases, it is also non-trivial to prepare the arguments to initialize the `LogitsProcessor` class. As such, each class should have a clear usage example with `.generate()` in their docstring! 💪 Here is an example: [SequenceBiasLogitsProcessor docstring](https://github.com/huggingface/transformers/blob/f1732e1374a082bf8e43bd0e4aa8a2da21a32a21/src/transformers/generation/logits_process.py#L559). Contrarily to the other classes (at the time of writing), we can quickly learn how to use it just by reading its docstring. We are also immediately aware of a few caveats 🤓 Bonus points: our docstring examples are part of our CI, so we would be beefing up our tests to ensure we don't add regressions 🤗 This issue is part of the [text generation docs rework](https://github.com/huggingface/transformers/issues/24575). # How to participate? 1. Ensure you've read our contributing [guidelines](https://github.com/huggingface/transformers/blob/master/CONTRIBUTING.md) 📜 2. Claim your `LogitProcessor` class in this thread (confirm no one is working on it). You can check the full list of classes below, and you can find their implementation in [this file](https://github.com/huggingface/transformers/blob/main/src/transformers/generation/logits_process.py) 🎯 - You may need to do some detective work to fully understand the purpose of the class. For instance, some classes were created as part of a paper to be applied to any model, others are model-specific, and some exist to avoid weird bugs 🕵️ - Looking at the git history is a great way to understand how a `LogitsProcessor` came to be. 3. Implement your changes, taking the [SequenceBiasLogitsProcessor docstring](https://github.com/huggingface/transformers/blob/f1732e1374a082bf8e43bd0e4aa8a2da21a32a21/src/transformers/generation/logits_process.py#L559) as reference 💪 - Add a clear example that calls the processor through `.generate()`. Make sure the example's outputs are correct and that the model used in the test is a small model (anything larger than GPT2 needs explicit approval); - If you feel like the original docstring could be better, feel free to enhance it as well! - Don't forget to run `make fixup` before your final commit. 4. Open the PR and tag me in it 🎊 # Tracker - [ ] MinNewTokensLengthLogitsProcessor - [x] TemperatureLogitsWarper - [x] RepetitionPenaltyLogitsProcessor - [ ] EncoderRepetitionPenaltyLogitsProcessor - [ ] TopPLogitsWarper - [ ] TopKLogitsWarper - [ ] TypicalLogitsWarper - [ ] EpsilonLogitsWarper - [x] EtaLogitsWarper - [ ] NoRepeatNGramLogitsProcessor - [ ] EncoderNoRepeatNGramLogitsProcessor - [x] SequenceBiasLogitsProcessor - [x] NoBadWordsLogitsProcessor - [ ] PrefixConstrainedLogitsProcessor - [ ] HammingDiversityLogitsProcessor - [ ] ForcedBOSTokenLogitsProcessor - [ ] ForcedEOSTokenLogitsProcessor - [ ] InfNanRemoveLogitsProcessor - [ ] ExponentialDecayLengthPenalty - [ ] LogitNormalization - [ ] SuppressTokensAtBeginLogitsProcessor - [ ] SuppressTokensLogitsProcessor - [ ] ForceTokensLogitsProcessor - [ ] WhisperTimeStampLogitsProcessor - [ ] ClassifierFreeGuidanceLogitsProcessor
07-12-2023 17:30:16
07-12-2023 17:30:16
Hi @gante I'm happy to give one of these a go as it seems a nice learning experience... `TemperatureLogitsWarper` feels as good as anyone else, can you assign it to me if it's not taken yet?<|||||>@nablabits thank you for your interest. It's all yours! (The assignment is informal -- the first one to mention a certain class gets automatically assigned to it 🤗 )<|||||>Hello @gante 👋 Thanks for opening up this contrib! Would like to give it a try to [NoBadWordsLogitsProcessor ](https://github.com/huggingface/transformers/blob/f1732e1374a082bf8e43bd0e4aa8a2da21a32a21/src/transformers/generation/logits_process.py#L725). Alredy tried `bad_words_ids` argument and looking forward to digging and learning more into the impact of `eos_token_id` for this class. LKM if that works for you !<|||||>Hey @gante 👋 I would like to try [RepetitionPenaltyLogitsProcessor ](https://github.com/huggingface/transformers/blob/5bb4430edc7df9f9950d412d98bbe505cc4d328b/src/transformers/generation/logits_process.py#L194)to start with. I hope that works!<|||||>Hey Shauray (@shauray8), Nice one! I can see you have opened a PR for the RepetitionPenaltyLogitsProcessor. I was working on it. Just a note for next time, please go through the "How to Participate" and confirm no one is working on it. @gante I'll look at other Processor Classes soon and take up something else. 👍 <|||||>Hey @Rishab26, I didn't go through the comments and I appreciate your understanding. <|||||>Hey @Rishab26 I would like to highlight your level of empathy towards this OSS Governance issue. IMO, example of level 4 in [trust-level system](https://blog.discourse.org/2018/06/understanding-discourse-trust-levels/) HF relies upon . 🤗 Thanks for setting a positive example for the Open Source Community. 🤗 <|||||>Working on `TopKLogitsWarper`<|||||>Hey Folks! Still working on this, having fun though. Im opening the WIP in this [repo](https://github.com/SoyGema/contrib_schema/) in case someone wants to have a look before PR. Have already some things , but giving it a careful thought to the example and digging into some things. Found this gem #22168 ! <|||||>Hey @gante I would like to work on `SuppressTokensLogitsProcessor` 🙂<|||||>Hey @gante I am working on `NoRepeatNGramLogitsProcessor` 🙂<|||||>hey @gante I am working on **TypicalLogitsWarper**<|||||>hey @gante I would like to work on **EtaLogitsWarper** 😊<|||||>Hi! May I claim `TopPLogitsWrapper`?<|||||>Hey @gante, yes, me again :upside_down_face: , are you happy for me to pick `MinNewTokensLengthLogitsProcessor`?<|||||>Hi @gante I want to claim `ForcedEOSTokenLogitsProcessor` 😄 <|||||>Hi @gante can I try to work on `LogitNormalization`?<|||||>Hi @gante , can I try working on `ForceTokensLogitsProcessor`?<|||||>hi @gante i am working on `EncoderRepetitionPenaltyLogitsProcessor`, is there a way I can create a new issue named `EncoderRepetitionPenaltyLogitsProcessor` l then link that here and it shows active in the list? like this, you can put a checklist there ![image](https://github.com/huggingface/transformers/assets/64583161/6efd3f34-c7ec-4360-95c8-671f808c852f) <|||||>Hello @gante, I would like to work on `InfNanRemoveLogitsProcessor`. I see no one has referred to it in the comments section. But just wanted to make sure no one is working on it. Could you please confirm?<|||||>> hi @gante i am working on EncoderRepetitionPenaltyLogitsProcessor, is there a way I can create a new issue named EncoderRepetitionPenaltyLogitsProcessor l then link that here and it shows active in the list? 
like this, you can put a checklist there @rajveer43 We can, but I don't see the benefit of it (each issue would be very small and straightforward), so I'm not convinced about having this extra step :D <|||||>Hi @gante , I'd love to try writing examples for `HammingDiversityLogitsProcessor` !<|||||>Hi @gante can I work on ExponentialDecayLengthPenalty? Thanks!
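A minimal sketch of the kind of docstring example being requested, here for `TemperatureLogitsWarper` (the checkpoint and prompt are illustrative; `gpt2` is small enough per the guidelines above):

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("gpt2")
model = AutoModelForCausalLM.from_pretrained("gpt2")
inputs = tokenizer("Hugging Face is", return_tensors="pt")

# do_sample=True activates the sampling path that uses TemperatureLogitsWarper;
# temperature < 1.0 sharpens the next-token distribution, temperature > 1.0 flattens it.
outputs = model.generate(**inputs, do_sample=True, temperature=0.7, max_new_tokens=10)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```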
transformers
24,782
closed
Skip torchscript tests for `MusicgenForConditionalGeneration`
# What does this PR do? This model class requires the model tester to prepare `input_values` and `padding_mask` for torchscript tests. So far I think it is fine to skip it until we have high usage.
07-12-2023 16:31:27
07-12-2023 16:31:27
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,781
open
Add text-to-mesh models inside Hugging Face
### Feature request Text-to-3D models are really gaining traction in some industries, but the state-of-the-art techniques are currently very hard to integrate into production code. Some examples are: https://www.nasir.lol/clipmesh https://github.com/openai/shap-e It would be awesome for the community if HF had these integrated. ### Motivation Text-to-3D models can have a big impact in multiple types of industry ### Your contribution If I have some guidance I can help work on this. But I will need help from HF developers.
07-12-2023 15:50:53
07-12-2023 15:50:53
Hi @math-sasso, This would be a great addition to the library! I don't know the papers in depth, but I believe that both of these models - CLIPMesh and Shap-e - are diffusion models and so might be a better fit for the diffusers library: https://github.com/huggingface/diffusers
transformers
24,780
closed
Rm duplicate pad_across_processes
# What does this PR do? Accelerate now handles `pad_across_processes` directly, so removes code copied from Accelerate. As it's internal, no need for a deprecation cycle Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
07-12-2023 15:23:29
07-12-2023 15:23:29
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,779
open
Best aproach to fine tune a GPT model for feature extraction
Hi All, I am trying to use BioGPT as a feature encoder and I want to compare whether fine-tuning is going to improve the quality of the embeddings. So I have two options. The first is to fine-tune BioGPT without passing the labels and then use the last token of the last hidden state for classification with a separate machine-learning model. (Is it possible to fine-tune BioGPT as an encoder with the labels? Do the labels make any difference since the model is not attempting to classify?) The second option would be to use BioGptForSequenceClassification, which has a sequence classification head on top (a linear layer), and fine-tune it by passing the labels to the model; I can then use this fine-tuned model for the classification, or use the last token of the last hidden state for classification with a separate machine-learning classifier.
07-12-2023 14:26:46
07-12-2023 14:26:46
Hi! This is better suited for the [HF forums](https://huggingface.co/). This GitHub repository is mainly for issues and feature requests 🙏 Thank you for your understanding.
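A minimal sketch of the first option described above (the checkpoint name and pooling choice are illustrative, not a recommendation): pull the hidden state of the last token and feed it to a separate classifier.

```python
import torch
from transformers import AutoModel, AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("microsoft/biogpt")
model = AutoModel.from_pretrained("microsoft/biogpt")

inputs = tokenizer("The patient presented with a persistent cough.", return_tensors="pt")
with torch.no_grad():
    last_hidden_state = model(**inputs).last_hidden_state  # (batch, seq_len, hidden_size)

# embedding of the last (non-padded) token, to be used by an external classifier
embedding = last_hidden_state[:, -1, :]
print(embedding.shape)
```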
transformers
24,778
closed
save quantized model throws error.
### System Info ===================================BUG REPORT=================================== Welcome to bitsandbytes. For bug reports, please run python -m bitsandbytes ================================================================================ bin /opt/conda/envs/pytorch/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda118.so CUDA SETUP: CUDA runtime path found: /opt/conda/envs/pytorch/lib/libcudart.so.11.0 CUDA SETUP: Highest compute capability among GPUs detected: 7.5 CUDA SETUP: Detected CUDA version 118 CUDA SETUP: Loading binary /opt/conda/envs/pytorch/lib/python3.10/site-packages/bitsandbytes/libbitsandbytes_cuda118.so... [2023-07-12 13:52:54,626] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect) Copy-and-paste the text below in your GitHub issue and FILL OUT the two last points. - `transformers` version: 4.30.2 - Platform: Linux-5.15.0-1038-aws-x86_64-with-glibc2.31 - Python version: 3.10.12 - Huggingface_hub version: 0.16.2 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Hi, I'm trying to save quantized model. First attempt didn't work. (I also opened an issue, https://github.com/huggingface/accelerate/issues/1713, to clarify it). I opened this issue because I'm receiving an error message when I run following code. I'm not sure I'm following the right instructions written on https://huggingface.co/docs/transformers/main_classes/quantization. Because model is pushed to hub in documentation. But I expect to save it to local filesystem. Thanks for your help in advance. ``` ### load packages ### import transformers import textwrap from transformers import LlamaTokenizer, LlamaForCausalLM import os import sys from typing import List import accelerate from peft import ( LoraConfig, get_peft_model, get_peft_model_state_dict, prepare_model_for_int8_training, ) #import fire import torch from datasets import load_dataset import pandas as pd import deepspeed DEVICE = "cuda" if torch.cuda.is_available() else "cpu" DEVICE ### load model ### BASE_MODEL = "decapoda-research/llama-7b-hf" model = LlamaForCausalLM.from_pretrained( BASE_MODEL, load_in_8bit=True, torch_dtype=torch.float16, device_map="auto", ) model.save_pretrained(save_directory="quantized_decapoda-research_llama-7b-hf_v2") ``` Error Message: ``` /opt/conda/envs/pytorch/lib/python3.10/site-packages/transformers/modeling_utils.py:1709: UserWarning: You are calling `save_pretrained` to a 8-bit converted model you may likely encounter unexepected behaviors. If you want to save 8-bit models, make sure to have `bitsandbytes>0.37.2` installed. 
warnings.warn( --------------------------------------------------------------------------- AttributeError Traceback (most recent call last) Cell In[3], line 1 ----> 1 model.save_pretrained(save_directory="quantized_decapoda-research_llama-7b-hf_v2") File /opt/conda/envs/pytorch/lib/python3.10/site-packages/transformers/modeling_utils.py:1820, in PreTrainedModel.save_pretrained(self, save_directory, is_main_process, state_dict, save_function, push_to_hub, max_shard_size, safe_serialization, variant, **kwargs) 1817 weights_name = SAFE_WEIGHTS_NAME if safe_serialization else WEIGHTS_NAME 1818 weights_name = _add_variant(weights_name, variant) -> 1820 shards, index = shard_checkpoint(state_dict, max_shard_size=max_shard_size, weights_name=weights_name) 1822 # Clean the folder from a previous save 1823 for filename in os.listdir(save_directory): File /opt/conda/envs/pytorch/lib/python3.10/site-packages/transformers/modeling_utils.py:318, in shard_checkpoint(state_dict, max_shard_size, weights_name) 315 storage_id_to_block = {} 317 for key, weight in state_dict.items(): --> 318 storage_id = id_tensor_storage(weight) 320 # If a `weight` shares the same underlying storage as another tensor, we put `weight` in the same `block` 321 if storage_id in storage_id_to_block: File /opt/conda/envs/pytorch/lib/python3.10/site-packages/transformers/pytorch_utils.py:290, in id_tensor_storage(tensor) 283 def id_tensor_storage(tensor: torch.Tensor) -> Tuple[torch.device, int, int]: 284 """ 285 Unique identifier to a tensor storage. Multiple different tensors can share the same underlying storage. For 286 example, "meta" tensors all share the same storage, and thus their identifier will all be equal. This identifier is 287 guaranteed to be unique and constant for this tensor's storage during its lifetime. Two tensor storages with 288 non-overlapping lifetimes may have the same id. 289 """ --> 290 return tensor.device, storage_ptr(tensor), storage_size(tensor) AttributeError: 'str' object has no attribute 'device' ``` ### Expected behavior Save quantized model to local filesystem.
07-12-2023 14:03:21
07-12-2023 14:03:21
`load_in_8bit=True,` --> cc @younesbelkada as he knows much better 🙏 <|||||>Hi @nemesis00sam Thanks for the issue, https://github.com/huggingface/transformers/pull/24416 fixed the issue you mentioned please install transformers from source ``` pip uninstall transformers pip install git+https://github.com/huggingface/transformers.git ``` And it should be solved right after<|||||>Thanks for prompt answer. @younesbelkada <|||||>I still see the same issue while saving `meta-llama/Llama-2-13b-chat-hf` as safetensors My setup: ``` pip list | grep -E 'trans|accel|bits|safe' accelerate 0.21.0 bitsandbytes 0.41.0 safetensors 0.3.1 transformers 4.32.0.dev0 # uninstalled and installed from git on 7/28 ``` Script: ``` model_name = "meta-llama/Llama-2-13b-chat-hf" save_dir = "/home/abc/local_models/Llama-2-13b-chat-8bit" tokenizer = AutoTokenizer.from_pretrained(model_name) tokenizer.save_pretrained(save_dir, save_config=True) max_memory = {0: "22GIB", 1: "22GIB", 2: "22GIB", 3: "22GIB", 4: "22GIB", 5: "22GIB", 6: "22GIB", 7: "22GIB"} model = AutoModelForCausalLM.from_pretrained(model_name, load_in_8bit=True, max_memory=max_memory) model.save_pretrained(save_dir, save_config=True, safe_serialization=True) ``` Error: ``` Traceback (most recent call last): File "/home/hmohapa/search/llama/8bit_quantize.py", line 26, in <module> model.save_pretrained(save_dir, save_config=True, safe_serialization=True) File "/opt/conda/lib/python3.10/site-packages/transformers/modeling_utils.py", line 1803, in save_pretrained ptrs[id_tensor_storage(tensor)].append(name) File "/opt/conda/lib/python3.10/site-packages/transformers/pytorch_utils.py", line 287, in id_tensor_storage return tensor.device, storage_ptr(tensor), storage_size(tensor) AttributeError: 'str' object has no attribute 'device' ```
transformers
24,777
closed
Make CLIP model could use new added tokens with meaningful pooling
# What does this PR do? Fix #24650 This is to address feature request #24650. Although the default values of bos/eos have been corrected in #24773, the existing configs on the Hub still have the incorrect values `1` and `2`, which prevents the CLIP model from using newly added tokens when a user adds them. Although we can open mass PRs on the Hub, I want to decouple this (slightly) from the ability to support such a feature. With this PR, if a user wants to use newly added tokens, they have to specify/update the `eos_token_id`. **We don't need to wait for all Hub repos to be updated to merge this PR.**
07-12-2023 13:10:26
07-12-2023 13:10:26
I will fix the CI by using `fix-copies` later<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>I will merge once the branch is cut tonight.<|||||>Sorry for the spam: @sgugger said the branch cut would be on next Monday. I think it's safer to wait until then.
transformers
24,776
closed
To work out tokenization_utils_base.py:731 list to tensor so slow
tokenization_utils_base.py:731: UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.asarray() before converting to a tensor. (Triggered internally at ../torch/csrc/utils/tensor_new.cpp:230.) This PR solves that problem by accelerating the list-to-tensor conversion.
07-12-2023 12:37:00
07-12-2023 12:37:00
@askxiaozhang Thanks for opening this PR and contributing to improving the transformers library! There is another open PR, #24772, which addresses this, so this PR will not be merged in.
transformers
24,775
closed
Fix pad across processes dim in trainer and not being able to set the timeout
# What does this PR do? Reverts tiny regression where `dim=1` is needed during `pad_across_processes`, and `ddp_timeout` wasn't trickled down through `PartialState` Fixes # (issue) Solves https://github.com/huggingface/transformers/issues/24751 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
07-12-2023 12:22:54
07-12-2023 12:22:54
Yep, sorry 😅 Right one is used now<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24775). All of your documentation changes will be reflected on that endpoint.
transformers
24,774
open
torch_dtype='auto' is not working when using AutoModel.from_pretrained(...)
### System Info - `transformers` version: 4.29.2 - Platform: macOS-12.2.1-x86_64-i386-64bit - Python version: 3.11.3 - Huggingface_hub version: 0.15.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @younesbelkada @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction The `torch_dtype='auto'` argument is not forwarded correctly when using `AutoModelForCausalLM(model_name, torch_dtype='auto')`. For example, the opt model 'facebook/opt-125m' has `torch_dtype: float16` in its config file but the following happens: ```python from transformers import AutoModelForCausalLM, OPTForCausalLM model = OPTForCausalLM.from_pretrained('facebook/opt-125m', torch_dtype='auto') model.dtype # CORRECT dtype >>> torch.float16 model = AutoModelForCausalLM.from_pretrained('facebook/opt-125m', torch_dtype='auto') model.dtype # INCORRECT dtype >>> torch.float32 ``` ### Expected behavior Both outputs should be `torch.float16` as in the config file specification. From what I've looked into, this comes from ```python if kwargs_copy.get("torch_dtype", None) == "auto": _ = kwargs_copy.pop("torch_dtype") ``` in `transformers.models.auto.auto_factory.py`, line 441. The additional kwarg specifying dtype is poped and the dtype is only inferred by the dtype argument of the config file, which is then not given explicitly (only implicitly in the config) to `PretrainedModel.from_pretrained(model_name, config=config,...)`, which does not use it if the explicit `torch_dtype` argument is not provided. I would be happy to help solve the issue if needed. Also, I find it strange that ```python from transformers import AutoConfig, AutoModelForCausalLM config = AutoConfig.from_pretrained('facebook/opt-125m') config.torch_dtype >>> torch.float16 model = AutoModelForCausalLM.from_pretrained('facebook/opt-125m', config=config) model.dtype >>> torch.float32 ``` i.e. `AutoModelForCausalLM.from_pretrained(...)` does not respect the dtype of the config as I was saying before (this is the reason of the previous bug). But maybe this is to avoid errors when model configs specify `torch_dtype: bfloat16` and users try to instantiate on the cpu? Anyway, when specifying 'auto', i.e. `AutoModelForCausalLM.from_pretrained(...torch_dtype='auto')`, I think it is absolutely necessary for the model to be instantiated with the dtype specified on the config file if any, even if it may break code for `torch.bfloat16` models, because users using this feature are aware of what they are doing.
07-12-2023 12:10:10
07-12-2023 12:10:10
Pinging @younesbelkada as I think he looked into this previously!
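A minimal sketch of a workaround, assuming the checkpoint's config.json carries a `torch_dtype` field: read the dtype from the config and pass it explicitly instead of relying on `torch_dtype='auto'`.

```python
from transformers import AutoConfig, AutoModelForCausalLM

config = AutoConfig.from_pretrained("facebook/opt-125m")

# pass the dtype explicitly so the AutoModel path does not silently fall back to float32
model = AutoModelForCausalLM.from_pretrained("facebook/opt-125m", torch_dtype=config.torch_dtype)
print(model.dtype)  # torch.float16
```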
transformers
24,773
closed
Update default values of bos/eos token ids in `CLIPTextConfig`
# What does this PR do? Currently the default values are not the ones from the corresponding tokenizers. See the discussion in #24650. However, we can't use `config.eos_token_id` in the modeling file (which is the ultimate goal of #24650) with only the change in this PR. We will have to update all the Hub repos' config files first 😢 . (Probably there is something easier to do.)
07-12-2023 10:00:31
07-12-2023 10:00:31
_The documentation is not available anymore as the PR was closed or merged._<|||||>Regarding the padding token: (copy past from (partial) internal discussion given @patil-suraj) > When we added CLIP I tested for the text_projection , logits_per_image and logits_per_text. For the text_projection the model pulls the embeddings of the last token i.e the eos token. The rest of the tokens i.e the padding tokens are ignored. We can see in this [colab](https://colab.research.google.com/drive/1kgGMnFpkc4TP7otlhAOngp9Wlke4tKJw?usp=sharing) that text_projection , logits_per_image and logits_per_text match with the OAI model because we only take the pooled embeddings. And when CLIP was released it was intended for these features which are needed for contrastive tasks. Hence I didn't test against all token embeddings. > IMO the wrong padding token will only affect inference when using all token emebeddings i.e Stable Diffusion. For training even if the padding token is wrong it shouldn't affect because > - Because CLIP did not use attention_mask during training. > - CLIPTextEncoder uses casual mask, so the tokens to the right don't influence the hidden states of tokens to the left. > - CLIP is trained with contrastive loss which is computed using the projections, and as I said above the text_projection is computed by pooling the eos token embeddings, which will be always similar no matter what the padding token is, because CLIPTextEncoder is causal, so the eos embeddings won't be affected by tokens on the right. > - Hence, for downstream training (like SD) as long as a consistent token is used for padding it shouldn't severely affect the training. But for inference we will need to use the same token as Patrick explained. This could also be the reason that we didn't have any issue related to this. > As far as I can understand, it'll only affect the inference if a different token (compared to the padding token used for training) is used for padding. (edited)
transformers
24,772
closed
fix "UserWarning: Creating a tensor from a list of numpy.ndarrays is …
fix "UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor." # What does this PR do? reduce latency of codes below from 0.744675874710083s to 0.013312816619873047s. Fixes #24764 ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker
07-12-2023 09:42:07
07-12-2023 09:42:07
_The documentation is not available anymore as the PR was closed or merged._<|||||>@ArthurZucker Do I need to make any changes to this PR? What's the next step?<|||||>@ydshieh could review this pr?<|||||>Seems to have ~30x speed up. ```python import numpy as np import torch import time def measure(batch_size, seq_len): a = np.ones(shape=(batch_size, seq_len, 16)) # A list of numpy arrary b = [x for x in a] # directly to torch.tensor st = time.time() c = torch.tensor(b) t1 = time.time() - st # np -> tensor st = time.time() d = np.array(b) e = torch.tensor(d) t2 = time.time() - st print(f"batch_size: {batch_size} | seq_len: {seq_len} | main: {t1} sec. | PR: {t2} sec.") batch_size = 128 seq_len = 32 for idx in range(10): batch_size = batch_size * 2 measure(batch_size, seq_len) batch_size = 128 seq_len = 256 for idx in range(8): batch_size = batch_size * 2 measure(batch_size, seq_len) ``` results: ```bash batch_size: 256 | seq_len: 32 | main: 0.010269403457641602 sec. | PR: 0.002008676528930664 sec. batch_size: 512 | seq_len: 32 | main: 0.015998125076293945 sec. | PR: 0.0010027885437011719 sec. batch_size: 1024 | seq_len: 32 | main: 0.03223681449890137 sec. | PR: 0.0019538402557373047 sec. batch_size: 2048 | seq_len: 32 | main: 0.0663607120513916 sec. | PR: 0.004067182540893555 sec. batch_size: 4096 | seq_len: 32 | main: 0.13183259963989258 sec. | PR: 0.0060040950775146484 sec. batch_size: 8192 | seq_len: 32 | main: 0.26061558723449707 sec. | PR: 0.011055707931518555 sec. batch_size: 16384 | seq_len: 32 | main: 0.5237565040588379 sec. | PR: 0.02300405502319336 sec. batch_size: 32768 | seq_len: 32 | main: 1.0568530559539795 sec. | PR: 0.041966915130615234 sec. batch_size: 65536 | seq_len: 32 | main: 2.0813064575195312 sec. | PR: 0.0868995189666748 sec. batch_size: 131072 | seq_len: 32 | main: 4.243735074996948 sec. | PR: 0.17353129386901855 sec. ``` ```bash batch_size: 256 | seq_len: 256 | main: 0.06456398963928223 sec. | PR: 0.0034742355346679688 sec. batch_size: 512 | seq_len: 256 | main: 0.12811279296875 sec. | PR: 0.005001068115234375 sec. batch_size: 1024 | seq_len: 256 | main: 0.26175403594970703 sec. | PR: 0.010001659393310547 sec. batch_size: 2048 | seq_len: 256 | main: 0.5197086334228516 sec. | PR: 0.019011259078979492 sec. batch_size: 4096 | seq_len: 256 | main: 1.040560245513916 sec. | PR: 0.03655409812927246 sec. batch_size: 8192 | seq_len: 256 | main: 2.089771032333374 sec. | PR: 0.07351517677307129 sec. batch_size: 16384 | seq_len: 256 | main: 4.197775602340698 sec. | PR: 0.1453232765197754 sec. batch_size: 32768 | seq_len: 256 | main: 8.368194103240967 sec. | PR: 0.36582493782043457 sec. ```<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24772). All of your documentation changes will be reflected on that endpoint.
transformers
24,771
closed
Add MobileVitV2 to doctests
# What does this PR do? Adds MobileVitV2 to the doctests. The example snippet wasn't working because the model's config files pointed to an image processor that doesn't exist. This adds the models to the doctests so that this is caught. Also removes a duplicate line in image_processing_auto.py Fixes #24763 (partially) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
07-12-2023 09:30:14
07-12-2023 09:30:14
_The documentation is not available anymore as the PR was closed or merged._<|||||>> ValueError: Files in `utils/documentation_tests.txt` are not in alphabetical order. @amyeroberts You are the one creating this check 😆 <|||||>![image](https://github.com/huggingface/transformers/assets/22614925/6636d6fa-8da3-4305-b69f-a7925ecde3a8) <|||||>Oh, I am wrong! The PR doctest is not triggered as this PR doesn't change modeling file. Great!
transformers
24,770
closed
Add multi-label text classification support to pytorch example
# What does this PR do? The transformer config supports multi-label classification by setting config.problem_type = "multi_label_classification", but run_glue.py does not support it. This PR adds `run_classification.py` to support the multi-label classification task. Main changes compared to `run_glue.py`: - [x] Add support for multi-label classification tasks and datasets, e.g., [Reuters-21578](https://huggingface.co/datasets/reuters21578). - [x] Remove code related to glue tasks - [x] Update README.md for the multi-label classification task. - Add parameters and code to support single/multi-label classification and regression tasks - Add `shuffle_train_dataset` option to shuffle the train dataset. This is useful to avoid problems caused by ordered labels. - Add `metric_name` to specify the metric used to evaluate the model. - Add `remove_splits` to remove some unused splits from the dataset, e.g., the Reuters dataset has an "unused" split, the IMDB dataset has an "unsupervised" split. - Add `remove_columns` to remove some unused columns from the dataset - Add `text_column_names` to specify the (possibly multiple) columns containing the text. - Add `label_column_name` to specify the column containing the labels, e.g., "stars" for the amazon review dataset - Add train/validation/test_split_name to specify the split name for the train/validation/test dataset - Add do_regression to force treating the text-classification task as a regression task. This removes the need to change the label dtype of the dataset.
07-12-2023 09:18:28
07-12-2023 09:18:28
@ranchlai Thanks for opening this PR and contributing to the examples! Could you add to the README for this example a snippet for running on a multi-label classification task?<|||||>> @ranchlai Thanks for opening this PR and contributing to the examples! > > Could you add to the README for this example a snippet for running on a multi-label classification task? Sure^_^. Working on the [reuters21578](https://huggingface.co/datasets/reuters21578) dataset as a minimal example. Will update README accordingly. <|||||>@ranchlai Please note that the examples are kept simple to be more readable. This adds a lot of complexity to the original example for something that is not covered by the primary goal of that example (run GLUE benchmark) so I would keep it separate.<|||||>Thanks for commenting @sgugger. I understand, and that's why I am trying to keep the change minimal. Multi-label classification is indeed more complicated. Hence, adding a demo in the "text-classification" example could be helpful. Thank you~<|||||>Yes, but maybe it could go in a new file focused on text classification only (and not GLUE)?<|||||>That's a good idea. How about run_classification.py in parallel to run_glue.py? I can try to work it out. <|||||>Perfect!<|||||>@sgugger could you please leave more comments, although I am still running more tests<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>> @sgugger I think I have finished my tests. Scripts are [here](https://github.com/ranchlai/transformers/tree/add_test_scripts/examples/pytorch/text-classification/test) on another branch. Please merge if it looks good. <|||||>Thanks again for your contribution!<|||||>I think the added content in README should be placed at the bottom.
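A minimal sketch of the multi-label mechanism the new script builds on (the model name and labels are illustrative): setting `problem_type` makes the classification head use `BCEWithLogitsLoss` over float multi-hot labels.

```python
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained(
    "bert-base-uncased", num_labels=3, problem_type="multi_label_classification"
)
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")

inputs = tokenizer("wheat exports rose sharply this quarter", return_tensors="pt")
labels = torch.tensor([[1.0, 0.0, 1.0]])  # multi-hot float labels, one column per class

outputs = model(**inputs, labels=labels)
print(outputs.loss)  # computed with BCEWithLogitsLoss
```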
transformers
24,769
closed
[fix] Change the condition of ValueError in "convert_checkpoint_from_transformers_to_megatron"
The "target_tensor_model_parallel_size" is related to "num_attention_heads", and the "target_pipeline_model_parallel_size" is related to "num_hidden_layers". However, the old code had "target_tensor_model_parallel_size" related to "num_hidden_layers". So we modified the code and added the part about "target_tensor_model_parallel_size". Thanks!
07-12-2023 08:04:49
07-12-2023 08:04:49
cc @pacman100 <|||||>_The documentation is not available anymore as the PR was closed or merged._
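A minimal sketch of the corrected validation described above (the variable names are illustrative; the real script reads them from the config and the command-line arguments):

```python
num_attention_heads = 32
num_hidden_layers = 24
target_tensor_model_parallel_size = 4
target_pipeline_model_parallel_size = 2

# tensor parallelism splits the attention heads, so the head count must divide evenly
if num_attention_heads % target_tensor_model_parallel_size != 0:
    raise ValueError("num_attention_heads must be divisible by target_tensor_model_parallel_size")

# pipeline parallelism splits the layers, so the layer count must divide evenly
if num_hidden_layers % target_pipeline_model_parallel_size != 0:
    raise ValueError("num_hidden_layers must be divisible by target_pipeline_model_parallel_size")
```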
transformers
24,768
closed
🐛 torch baddbmm error fixed for BigCode models
Fixes # (issue) This was needed because of a bug in pytorch https://github.com/pytorch/pytorch/issues/80588. The bug was fixed in https://github.com/pytorch/pytorch/pull/96086 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? @ArthurZucker @younesbelkada @jlamypoirier
07-12-2023 07:58:22
07-12-2023 07:58:22
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24768). All of your documentation changes will be reflected on that endpoint.<|||||>> I think we don't use PT>2.0.0 that includes the fix you mentioned above, there should be a reason for that. cc @ydshieh Your prediction is 200% correct: we have torch `2.0.1`. The mentioned torch fix is not included in that minor bug release.<|||||>Ah, I see, thanks for double checking @ydshieh !<|||||>Closing this since it's not relevant at this point in time.
transformers
24,767
open
add aquila
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
07-12-2023 07:15:55
07-12-2023 07:15:55
Hi @shunxing1234, Thanks a lot for opening a PR and contributing to the HF ecosystem! 🤗 We have recently been trying to push for `model on the hub` and have as much support as we can there. It will also be easier to integrate it! Here is a [tutorial](https://huggingface.co/docs/transformers/custom_models) if that sound good to you!
transformers
24,766
open
Saving LLAMA 13B checkpoint with FSDP finetuning results in disk full error
### System Info transformers - installed from source accelerate - installed from source torch 2.0.1 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Finetune LLAMA 13B with accelerate launcher. Saving strategy "epoch". FSDP based training. ### Expected behavior I would be able to save the checkpoint. Now getting disk full error. Note that the disk initially had space. My code was working with transformers-4.28, accelerate 0.18 and torch 1.13. This error started after I moved to accelerate based launcher and upgraded the packages.
07-12-2023 07:01:43
07-12-2023 07:01:43
It seems saving a checkpoint requires more than 213GB (Free memory on my hard disk). Not sure if it is intended. ``` Filesystem Size Used Avail Use% Mounted on overlay 251G 27G 213G 11% / tmpfs 64M 0 64M 0% /dev tmpfs 434G 0 434G 0% /sys/fs/cgroup shm 2.0G 0 2.0G 0% /dev/shm /dev/sdb1 251G 27G 213G 11% /tmp tmpfs 434G 12K 434G 1% /proc/driver/nvidia /dev/root 124G 23G 102G 18% /usr/bin/nvidia-smi tmpfs 87G 2.4M 87G 1% /run/nvidia-persistenced/socket devtmpfs 434G 0 434G 0% /dev/nvidia0 ```<|||||>Hi @ari9dam, thanks for raising this issue. Could you a minimal code snippet we can use to reproduce the error? Specifically how accelerate launcher is being used, training arguments, and FDSP config. For the transformers and accelerate source installs, which commit are you running from? When you say saving a checkpoint - am I right in saying this is the memory requirement for saving a single checkpoint after 1 epoch of training is > 213 GB? <|||||>Yes, " the memory requirement for saving a single checkpoint after 1 epoch of training is > 213 GB?". `accelerate launch --config_file accelerate_config.yaml --num_machines 4 --num_processes 16 --machine_rank $NODE_RANK --main_process_ip $MASTER_ADDR --main_process_port $MASTER_PORT ./trainer.py --model_name_or_path ".." --data_path "..." --per_device_train_batch_size 16 --per_device_eval_batch_size 16 --do_train --evaluation_strategy no --output_dir outputs --learning_rate 2e-5 --num_train_epochs 4 --lr_scheduler_type cosine --warmup_ratio 0.03 --weight_decay 0.0 --logging_steps 1 --save_strategy epoch --bf16 true --tf32 true --load_best_model_at_end false --model_max_length 1024 --gradient_checkpointing true --save_total_limit 1 --model_resume_from_checkpoint false --torch_compile false` ### accelerate_config.yaml ``` compute_environment: LOCAL_MACHINE distributed_type: FSDP downcast_bf16: 'no' fsdp_config: fsdp_auto_wrap_policy: TRANSFORMER_BASED_WRAP fsdp_backward_prefetch_policy: BACKWARD_PRE fsdp_forward_prefetch: false fsdp_offload_params: false fsdp_sharding_strategy: 1 fsdp_state_dict_type: FULL_STATE_DICT fsdp_sync_module_states: true fsdp_transformer_layer_cls_to_wrap: LlamaDecoderLayer fsdp_use_orig_params: true main_training_function: main num_machines: 1 num_processes: 2 mixed_precision: bf16 rdzv_backend: static tpu_env: [] tpu_use_cluster: false tpu_use_sudo: false use_cpu: false ``` transformers==4.31.0.dev0 (https://github.com/huggingface/transformers/commit/45025d92f815675e483f32812caa28cce3a960e7) accelerate==0.21.0.dev0 (https://github.com/huggingface/accelerate/commit/7954a28a71d484c4182a6b1074c1b8cc51642fc9) <|||||>@ari9dam Thanks for the additional info cc @pacman100 @muellerzr <|||||>Hello @ari9dam, please show the contents of the checkpoint along with their sizes<|||||>total 201G 69 Jul 14 10:40 added_tokens.json 656 Jul 14 10:40 config.json 137 Jul 14 10:40 generation_config.json 97G Jul 14 10:46 optimizer.bin 6.1G Jul 14 10:41 optimizer.pt 9.3G Jul 14 10:42 pytorch_model-00001-of-00006.bin 9.3G Jul 14 10:42 pytorch_model-00002-of-00006.bin 9.3G Jul 14 10:42 pytorch_model-00003-of-00006.bin 9.2G Jul 14 10:42 pytorch_model-00004-of-00006.bin 9.2G Jul 14 10:42 pytorch_model-00005-of-00006.bin 2.4G Jul 14 10:41 pytorch_model-00006-of-00006.bin 49G Jul 14 10:44 pytorch_model.bin 33K Jul 14 10:40 pytorch_model.bin.index.json 18K Jul 14 10:40 rng_state_0.pth 18K Jul 14 10:40 rng_state_10.pth 18K Jul 14 10:40 rng_state_11.pth 18K Jul 14 10:40 rng_state_12.pth 18K Jul 14 10:40 rng_state_13.pth 18K Jul 14 10:40 rng_state_14.pth 18K Jul 
14 10:40 rng_state_15.pth 18K Jul 14 10:40 rng_state_1.pth 18K Jul 14 10:40 rng_state_2.pth 18K Jul 14 10:40 rng_state_3.pth 18K Jul 14 10:40 rng_state_4.pth 18K Jul 14 10:40 rng_state_5.pth 18K Jul 14 10:40 rng_state_6.pth 18K Jul 14 10:40 rng_state_7.pth 18K Jul 14 10:40 rng_state_8.pth 18K Jul 14 10:40 rng_state_9.pth 627 Jul 14 10:40 scheduler.pt 435 Jul 14 10:40 special_tokens_map.json 745 Jul 14 10:40 tokenizer_config.json 1.8M Jul 14 10:40 tokenizer.json 489K Jul 14 10:40 tokenizer.model 11K Jul 14 10:40 trainer_state.json 4.1K Jul 14 10:40 training_args.bin<|||||>I'm not sure about `49G pytorch_model.bin`. It looks to be a duplicate.<|||||>Hello, PR https://github.com/huggingface/transformers/pull/24926 should resolve the duplicate saving issue.
transformers
24,765
closed
fix: "UserWarning: Creating a tensor from a list of numpy.ndarrays is…
Fixes the "UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor."

# What does this PR do?

Reduces the latency of the code below from 0.744675874710083s to 0.013312816619873047s.
```
st = time.time()
inputs = tokenizer(query_list, return_tensors="pt", padding=True)
print(time.time() - st)
```
Fixes #24764

## Who can review?

Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @ArthurZucker
07-12-2023 06:25:39
07-12-2023 06:25:39
transformers
24,764
closed
UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow
### System Info
nvidia CUDA Version: 12.1, Driver Version: 525.105.17

transformers-cli env is
- `transformers` version: 4.29.2
- Platform: Linux-4.19.87-netease6-1-x86_64-with-glibc2.29
- Python version: 3.8.10
- Huggingface_hub version: 0.14.1
- Safetensors version: not installed
- PyTorch version (GPU?): 2.1.0a0+fe05266 (True)

### Who can help?
_No response_

### Information
- [X] The official example scripts
- [ ] My own modified scripts

### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction
When I use tokenizer(..., padding=True), I get the warning "UserWarning: Creating a tensor from a list of numpy.ndarrays is extremely slow. Please consider converting the list to a single numpy.ndarray with numpy.array() before converting to a tensor." and the call takes 0.744675874710083s if query_list contains over 800 words.
```
st = time.time()
inputs = tokenizer(query_list, return_tensors="pt", padding=True)
print(time.time() - st)
```

### Expected behavior
Reduce the latency and fix the UserWarning.
07-12-2023 06:19:14
07-12-2023 06:19:14
@liucw2012 Thank you for opening the issue and PR! I haven't checked the PR in detail, but I am wondering what the **total time** is if you convert to numpy first and then from numpy to a torch tensor. Also a remark: when giving a code snippet, please provide all the necessary variable values so it can run directly. (I know, the above one is simple enough, but it's a good thing to provide, thank you!)<|||||>@ydshieh the latency is 0.0147s if I convert it to numpy first.<|||||>The speed looks very good. But see my comment in the PR :-)<|||||>@ydshieh I have another PR. It has already gone through a lot of checks, but I don't know whether it is accepted or what I should do next. Could you give it a review, please? https://github.com/huggingface/transformers/pull/24772#issuecomment-1635168756<|||||>Thank you @liucw2012 for the PR ❤️ The CI in that PR is green 🚀 . However I would like to check a bit more deeply which (nested) inputs are possible for that method, whether every case works, and whether none of the cases slow down.<|||||>BTW, could you maybe provide the `query_list` you used? (It's always a nice thing to provide the actual definition of the variables in a code snippet 🙏 )<|||||>Sorry, I was a little bit busy recently. The query_list is just two chats, each one almost 800 Chinese characters. All other examples are fine if you use the tokenizer with padding=True.<|||||>It's fine. But see https://github.com/huggingface/transformers/pull/24772#discussion_r1265559341
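A minimal, self-contained sketch of the principle behind this fix, stacking the per-sequence arrays into one `numpy.ndarray` before handing them to PyTorch. This is not the code from the PR itself; the array shapes and counts below are made up for illustration.
```python
import time

import numpy as np
import torch

# stand-in for ~800 padded token-id sequences of equal length
rows = [np.random.randint(0, 50000, size=(512,)) for _ in range(800)]

start = time.time()
slow = torch.tensor(rows)  # list of ndarrays -> tensor: triggers the UserWarning and is slow
print(f"from a list of ndarrays: {time.time() - start:.4f}s")

start = time.time()
fast = torch.tensor(np.array(rows))  # stack into a single ndarray first, then convert
print(f"from a single ndarray:   {time.time() - start:.4f}s")

assert torch.equal(slow, fast)
```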
transformers
24,763
closed
sample code in for mobilevitv2 is not working
### System Info - `transformers` version: 4.31.0.dev0 - Platform: macOS-10.16-x86_64-i386-64bit - Python version: 3.10.11 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.0 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?:no ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction run sample code from [MobileViTV2ForImageClassification](https://huggingface.co/docs/transformers/v4.30.0/en/model_doc/mobilevitv2#transformers.MobileViTV2ForImageClassification) throws an error ValueError: Unrecognized image processor in apple/mobilevitv2-1.0-imagenet1k-256. Should have a `image_processor_type` key in its preprocessor_config.json of config.json, or one of the following `model_type` keys in its config.json: align, beit, bit, blip, blip-2, bridgetower, chinese_clip, clip, clipseg, conditional_detr, convnext, convnextv2, cvt, data2vec-vision, deformable_detr, deit, deta, detr, dinat, donut-swin, dpt, efficientformer, efficientnet, flava, focalnet, git, glpn, groupvit, imagegpt, instructblip, layoutlmv2, layoutlmv3, levit, mask2former, maskformer, mgp-str, mobilenet_v1, mobilenet_v2, mobilevit, mobilevitv2, nat, oneformer, owlvit, perceiver, pix2struct, poolformer, regnet, resnet, sam, segformer, swiftformer, swin, swin2sr, swinv2, table-transformer, timesformer, tvlt, upernet, van, videomae, vilt, vit, vit_hybrid, vit_mae, vit_msn, xclip, yolos ### Expected behavior sample code should not throw an error
07-12-2023 04:16:40
07-12-2023 04:16:40
Hi @darwinharianto Thank you for opening the issue. Could you specify which code sample in the link is the exact one that fails to run? Thanks a lot!<|||||>@darwinharianto Thanks for reporting this! It seems the issue is coming from the preprocessor config file on the hub for this checkpoint: [it points to a class which doesn't exist](https://huggingface.co/apple/mobilevitv2-1.0-imagenet1k-256/blob/6229cf24f57fe7210db6c6f1ad872a616b802679/preprocessor_config.json#L10). If I clone and modify the config file, the image processor loads correctly. I'll also add this modeling file to our doctests.
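As a stopgap while the hub config is broken, one possible workaround is to bypass `AutoImageProcessor` and instantiate a concrete image processor class directly. The sketch below is an assumption on my part (MobileViTV2 checkpoints reuse MobileViT-style preprocessing), so double-check it against the fixed hub config.
```python
from transformers import MobileViTImageProcessor, MobileViTV2ForImageClassification

checkpoint = "apple/mobilevitv2-1.0-imagenet1k-256"

# Concrete classes do not dispatch on the (broken) `image_processor_type` key,
# so this avoids the AutoImageProcessor error described above.
image_processor = MobileViTImageProcessor.from_pretrained(checkpoint)
model = MobileViTV2ForImageClassification.from_pretrained(checkpoint)
```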
transformers
24,762
open
Abnormally slow inference speed of quantized model?
### System Info To reproduce, I am running on CUDA 12.1/Driver 530 on an A100 with Ubuntu 20.04. I am running with the following packages accelerate 0.21.0.dev0 triton 2.0.0 transformers 4.31.0.dev0 torch 2.0.1 bitsandbytes 0.40.0.post3 Output from the transformers-cli env is - `transformers` version: 4.31.0.dev0 - Platform: Linux-5.15.0-1015-aws-x86_64-with-glibc2.17 - Python version: 3.8.12 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Hi guys, I am trying out the load_in_4bit/load_in_8bit options to see if it speed up model inference. My understanding is that by quantizing the model, the inference speed will improve. However that's not the case. I use the following simple script to test out speed on a T5 XXL model for 4bits/8bits/fp32, and actually fp32 model runs the fastest (0.07sec), and the 4bits/8bits run almost in the same speed (0.1sec). So I want to double check. The script I am using to test the speed is here ``` from transformers import T5Tokenizer, T5ForConditionalGeneration import torch import pdb import gc import time import os device_id = 3 tokenizer = T5Tokenizer.from_pretrained("google/flan-t5-xxl") input_text = "translate English to German: How old are you?" * 20 input_ids = tokenizer(input_text, return_tensors="pt").input_ids[:, :128].to(f"cuda:{device_id}") # Actual model loading max_memory = f'{int(torch.cuda.mem_get_info()[0]/1024**3)-2}GB' model_4bits = T5ForConditionalGeneration.from_pretrained( "google/flan-t5-xxl", load_in_4bit=True, max_memory=max_memory) # model_8bits = T5ForConditionalGeneration.from_pretrained( # "google/flan-t5-xxl", # load_in_8bit=True, # max_memory=max_memory) # model_fp32 = T5ForConditionalGeneration.from_pretrained( # "google/flan-t5-xxl").to(f"cuda:{device_id}") def benchmark(model_name, model, input_ids): # warmup for _ in range(200): model.encoder(input_ids) torch.cuda.synchronize() with torch.no_grad(): start = time.time() for i in range(200): model.encoder(input_ids) torch.cuda.synchronize() end = time.time() print(f"{model_name} inference time is {(end-start)/200} sec") benchmark("model_4bits", model_4bits, input_ids) # model_4bits inference time is 0.1036052393913269 sec # benchmark("model_8bits", model_8bits, input_ids) # model_8bits inference time is 0.1006016504764557 sec # benchmark("model_fp32", model_fp32, input_ids) # model_fp32 inference time is 0.0731453263759613 sec ``` ### Expected behavior I expect the 4 bits model runs faster than the 8bits model, which in turn runs faster than the fp32 model. That's not the case I observe.
07-11-2023 21:34:25
07-11-2023 21:34:25
Having the same issue here. I also want to ask whether it is true that an 8-bit model is slower than fp16 during inference.<|||||>cc @younesbelkada for `4-bit` 🙏 <|||||>Hi everyone, sadly 8-bit models are expected to be slower than fp16 models, see this for reference: https://huggingface.co/blog/hf-bitsandbytes-integration#faster-inference-speed-for-smaller-models bitsandbytes just released a new version for faster inference (batch_size=1): https://github.com/TimDettmers/bitsandbytes/releases/tag/0.40.0 Can you try to upgrade bitsandbytes and run the benchmark again?<|||||>> Hi everyone, sadly 8-bit models are expected to be slower than fp16 models, see this for reference: https://huggingface.co/blog/hf-bitsandbytes-integration#faster-inference-speed-for-smaller-models bitsandbytes just released a new version for faster inference (batch_size=1): https://github.com/TimDettmers/bitsandbytes/releases/tag/0.40.0 Can you try to upgrade bitsandbytes and run the benchmark again? Hi younesbelkada, I got my bitsandbytes by pulling the GitHub repository directly and compiling from source yesterday, so it is already the latest 0.40 version that includes 4-bit bs=1 inference. <|||||>This is strange... Can you report that to the bitsandbytes library? 🙏 <|||||>> This is strange... Can you report that to the bitsandbytes library? 🙏 Sure. I will post on the issues of the bitsandbytes library and link this issue.
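For completeness, a minimal sketch of the loading pattern suggested in the thread: an fp16 baseline via `torch_dtype` rather than `.half()`, next to the 4-bit variant. It assumes recent `transformers`, `accelerate`, and `bitsandbytes` installs plus a CUDA GPU, and it does not change the expected speed ranking discussed above.
```python
import torch
from transformers import T5ForConditionalGeneration

checkpoint = "google/flan-t5-xxl"

# fp16 baseline: pass the dtype at load time instead of calling .half() afterwards
model_fp16 = T5ForConditionalGeneration.from_pretrained(
    checkpoint, torch_dtype=torch.float16, device_map="auto"
)

# 4-bit quantized variant (requires bitsandbytes >= 0.40 for the faster bs=1 kernels)
model_4bit = T5ForConditionalGeneration.from_pretrained(
    checkpoint, load_in_4bit=True, device_map="auto"
)
```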
transformers
24,761
closed
Unpin protobuf in docker file (for daily CI)
# What does this PR do? I forgot to unpin protobuf (in the docker file) in my previous PR #24599. Currently, CircleCI is testing against with protobuf 4, but daily CI is still v3. Let's move on on daily CI too.
07-11-2023 21:25:33
07-11-2023 21:25:33
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,760
closed
Allow existing configs to be registered
# What does this PR do? If a model has a class defined both on the Hub and locally, a clash appears when loading it through the auto API with `trust_remote_code=True`, coming from [this line](https://github.com/huggingface/transformers/blob/253d43d46d1291633fb21116b737f2bd8799d3da/src/transformers/models/auto/auto_factory.py#L421). This PR fixes it.
07-11-2023 20:39:28
07-11-2023 20:39:28
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24760). All of your documentation changes will be reflected on that endpoint.
transformers
24,759
closed
:bug: Handle empty gen_kwargs for seq2seq trainer prediction_step function
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> All Trainers expose function `prediction_step`. For `Seq2SeqTrainer`, it seems like the `prediction_step` function is relying on availability of `_gen_kwargs` attribute, which will get set automatically, if `prediction_step` will get called from other functions like `predict` or `evaluate`. However, if someone calls `prediction_step` directly, then this field will not get set and currently will throw `AttributeError: 'Seq2SeqTrainer' object has no attribute '_gen_kwargs'`. In this PR, I am trying to resolve above issue by accepting `gen_kwargs` as an argument to `prediction_step` function, in addition to automatically get it from `self` if it has been set previously while falling back to empty `{}` in case its not set in either of those methods. ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. cc: @sgugger
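To make the precedence concrete, here is a standalone sketch of the fallback described above: the explicit argument first, then a previously set `self._gen_kwargs`, then an empty dict. It illustrates the idea and is not the exact patch; the helper name is mine.
```python
from typing import Any, Dict, Optional


def resolve_gen_kwargs(trainer: Any, gen_kwargs: Optional[Dict[str, Any]] = None) -> Dict[str, Any]:
    # 1) explicit kwargs passed to prediction_step win
    if gen_kwargs:
        return dict(gen_kwargs)
    # 2) otherwise fall back to whatever evaluate()/predict() stashed on the trainer,
    # 3) and finally to an empty dict so a direct call no longer raises AttributeError
    return dict(getattr(trainer, "_gen_kwargs", {}))
```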
07-11-2023 19:44:36
07-11-2023 19:44:36
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,758
closed
Fix lr scheduler not being reset on reruns
# What does this PR do? This PR ensures that a new learning rate scheduler is created each time we rerun the `inner_training_loop`, so that if we have an lr such as `linear`, a new LR is generated based on the new batch size and step count. I don't believe a new optimizer is needed here to be recreated, just the scheduler as adjusting the bs and lr *shouldn't* matter? But if we think it is we can go ahead and add a reset to the optimizer as well. Fixes # (issue) The true solution to https://github.com/huggingface/transformers/pull/24521 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
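As a rough illustration of the fix (not the Trainer internals), the idea is simply to build a fresh schedule whenever the total step count changes between reruns:
```python
import torch
from transformers import get_linear_schedule_with_warmup

model = torch.nn.Linear(10, 2)
optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5)


def fresh_scheduler(optimizer, num_training_steps, num_warmup_steps=0):
    # a linear schedule is only valid for the step count it was created with
    return get_linear_schedule_with_warmup(
        optimizer, num_warmup_steps=num_warmup_steps, num_training_steps=num_training_steps
    )


scheduler = fresh_scheduler(optimizer, num_training_steps=1000)
# ... if the inner training loop is rerun with a different batch size / step count,
# recreate the scheduler instead of reusing the stale one:
scheduler = fresh_scheduler(optimizer, num_training_steps=500)
```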
07-11-2023 19:43:46
07-11-2023 19:43:46
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24758). All of your documentation changes will be reflected on that endpoint.
transformers
24,757
closed
Replacement of 20 asserts with exceptions
# What does this PR do? Replaces 20 assertions with relevant errors, mostly ValueError. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes part of #12789 ## Before submitting - [x] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). ## Who can review? @JuheonChu @sgugger I saw both of you tagged in above issue. Please have a look when you have time! :) <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
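For readers skimming the diff, the pattern is roughly the one sketched below. The configuration check is a made-up example, not one of the specific asserts touched by this PR.
```python
# before: stripped out entirely when Python runs with -O
# assert hidden_size % num_attention_heads == 0, "hidden_size must be divisible by num_attention_heads"


# after: always enforced, with an actionable message
def check_divisible(hidden_size: int, num_attention_heads: int) -> None:
    if hidden_size % num_attention_heads != 0:
        raise ValueError(
            f"hidden_size ({hidden_size}) must be divisible by num_attention_heads ({num_attention_heads})"
        )
```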
07-11-2023 19:27:55
07-11-2023 19:27:55
You will need to put your PR out of draft mode for us to be able to merge it :-)<|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers
24,756
closed
Fix eval_accumulation_steps leading to incorrect metrics
# What does this PR do? Uses the logic in the `GradientState` to know when we've reached the end of training and should sync the gradients. Doing so relies on [this](https://github.com/huggingface/accelerate/blob/main/src/accelerate/accelerator.py#L862-L869) code, which already checks for the case of if a dataloader has no length and works properly Fixes # (issue) Solves https://github.com/huggingface/transformers/issues/24734 ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger
07-11-2023 18:50:04
07-11-2023 18:50:04
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,755
closed
gpt-bigcode: avoid `zero_` to support Core ML
# What does this PR do? In-place `zero_` is not supported by the Core ML conversion process. This PR replaces it with `zeros_like` so conversion can proceed. The change only affects a workaround for a PyTorch bug on the `cpu` device. <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Fixes # (issue) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @younesbelkada, @loubnabnl, @jlamypoirier
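In spirit the change is the small substitution sketched below, replacing an in-place fill with a functional op that tracing-based converters handle. It is shown on a made-up buffer rather than the actual gpt-bigcode attention code.
```python
import torch

reference = torch.randn(2, 4)

# in-place variant: problematic for the Core ML conversion path
buf = torch.empty_like(reference)
buf.zero_()

# functional variant used instead: same result, no in-place mutation
buf = torch.zeros_like(reference)

assert torch.equal(buf, torch.zeros(2, 4))
```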
07-11-2023 18:33:37
07-11-2023 18:33:37
Note: to fully test conversion of `gpt-bigcode` models, the following `coremltools` PRs (or equivalent workarounds) need to be applied as well: https://github.com/apple/coremltools/pull/1910, https://github.com/apple/coremltools/pull/1911.<|||||>_The documentation is not available anymore as the PR was closed or merged._<|||||>@younesbelkada I think this is already fixed in PT. Should we just drop this logic? Opened a PR: https://github.com/huggingface/transformers/pull/24768 which supercedes this one<|||||>We support versions of PyTorch from 1.10 and onward, so we need to keep the workaround for the bug.<|||||>Merging to unblock @pcuenca , let's maybe address @jlamypoirier 's comments in a follow up PR !
transformers
24,754
open
📝 Add parameter names to code examples in README
@sgugger, @stevhliu and @MKhalusova Updated the code examples in the README file to include parameter names for better clarity and readability. Previously, the examples were missing parameter names, which could lead to confusion. By adding the parameter names, it becomes easier for users to understand and utilize the code correctly. Thanks! 🙌
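A small example of the style this PR proposes, with keyword arguments spelled out rather than relying on positional order (the checkpoint is just an illustrative choice):
```python
from transformers import pipeline

classifier = pipeline(
    task="sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)
print(classifier("Transformers pipelines are easy to use."))
```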
07-11-2023 18:01:05
07-11-2023 18:01:05
> Thanks for your PR. I don't believe this makes those basic examples easier to understand, however, so I would leave things as is. Hi @sgugger, thank you for your feedback. It might be nice to specify the 'model' parameter of the pipeline function. This is how I will update all the tasks in the https://huggingface.co/tasks section. Example(task=depth-estimation): huggingface/hub-docs#890 If you want, I can close this pull request.
transformers
24,753
closed
Skip some slow tests for doctesting in PRs (Circle)CI
# What does this PR do? For doctests: each `.md` file is seen as a single test by pytest, and some (say `task_summary.md`) take more time than others. Let's allow a 5-minute timeout per test. The job step still has the total 1200s timeout.
07-11-2023 17:56:55
07-11-2023 17:56:55
_The documentation is not available anymore as the PR was closed or merged._<|||||>Hi!
- This file `.circleci/create_circleci_config.py` runs on CircleCI. See #23245
- If we don't accept the change in this PR, we can simply skip some (doc)test files that take longer to run.
- This PR addresses the per-test timeout issue while there is a global timeout: motivated by #23318.
- Usually I try to respect the 120s timeout (per test). But **since these are doctests on CircleCI (where we have a 1200s global timeout), I think overall it's fine (?)**
<|||||>Can we skip the longest tests? We are trying to rationalize the costs of CircleCI so want to make sure we don't run something too beefy on it, especially since all those tests are run on GPU nightly.<|||||>> Can we skip the longest tests? Sure! We will need to have two lists: the existing `utils/documentation_tests` and a new `slow_doctest_to_ignore`. (I am not sure how to mark a doctest file as slow, as we have done for usual tests.)<|||||>That works for me!<|||||>Me too! <|||||>The latest version should skip the slow doctests. (I don't actually run the full doctest suite on CircleCI to get all of them: we just update the list when we see some files being slow in the doctests.)<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24753). All of your documentation changes will be reflected on that endpoint.
transformers
24,752
open
Training stage error with batch mode on conditional generation for multimodal models
### System Info - `transformers` version: 4.29.2 - Platform: Linux-5.15.90.1-microsoft-standard-WSL2-x86_64-with-glibc2.17 - Python version: 3.8.16 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cpu (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @younesbelkada @gante @sgugger ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` import torch from transformers import VisionEncoderDecoderModel, TrOCRProcessor loc = "microsoft/trocr-small-handwritten" processor = TrOCRProcessor.from_pretrained(loc) model = VisionEncoderDecoderModel.from_pretrained(loc) decoder_input_ids = torch.tensor([[0, 7344, 2159, 12, 345], [0, 7344, 2159, 12, 346]]) # a batch_size of 2 Prefixes for each examples decoder_attention_mask = None encoder_hidden_states = torch.randn(2, 578, 384) # a random encoder input to the decoder encoder_attention_mask = None decoder_inputs_embeds = None output_attentions = None output_hidden_states = None use_cache = None past_key_values = None return_dict = True kwargs_decoder = {} decoder_outputs = model.decoder( input_ids=decoder_input_ids, attention_mask=decoder_attention_mask, encoder_hidden_states=encoder_hidden_states, encoder_attention_mask=encoder_attention_mask, inputs_embeds=decoder_inputs_embeds, output_attentions=output_attentions, output_hidden_states=output_hidden_states, use_cache=use_cache, past_key_values=past_key_values, return_dict=return_dict, **kwargs_decoder, ) decoder_outputs['logits'].shape, decoder_input_ids.shape ``` ### Expected behavior The decoder default max_target_length is set to 128. So I expect the logits with shape [2, 128, 64044], where batch_size is 2. But I only get shape of [2, 5, 64044], where prefix length is 5.
07-11-2023 17:17:04
07-11-2023 17:17:04
This is an issue for[ this merged PR](https://github.com/huggingface/transformers/pull/22424)<|||||>Hey @cramraj8 👋 The output shape is correct -- the sequence length of the logits is as long as the input sequence length. Perhaps you're interested in using the model to complete the prefix, in which case you'll need to use generative tools like `.generate()` or the `Seq2SeqTrainer`. Have a look at the documentation for these terms :)<|||||>Hi @gante , Thanks for the reply. Yes, I tried using `.generate() `and `Seq2SeqTrainer` to do the training. But if I provide prefix `input_ids` and parse them as `decoder_input_ids ` to the `.generate()` function, the Trainer throws error during the training stage claiming the loss calculation end up getting different shape for prediction and label. That is why I looked up `model.decode() `function. It is true that output sequence length must be same as input sequence length for the `decode() `function. But for the prefix completion task, how do we adapt it for `generate`() and `decode`() function. I do have this question - I am interested in conditional decoding where the a multimodal completes a prefix or a prompt. Do we train the model with complete text (prefix and completion text) during the training stage, and only provide prefix as an additional input during inference stage ? Or we can still provide prefix as additional input to both training and inference stages ?<|||||>@cramraj8 I see, now I understand what your goal is :) AFAIK We do not support passing a prefix at train time, I'm afraid you'll have to build a custom solution. In any case, it must be based on `.generate()` and `Seq2SeqTrainer`, as your task relies on auto-regressive text generation! You can also train without a prefix at all, even if you expect a certain prefix (or set of prefixes) at inference time. For instance, Whisper does this (see [section 2.3 of its paper](https://arxiv.org/pdf/2212.04356.pdf)). At train time, treat the prefixes as variables. At inference time, starts generating from the prefix.<|||||>Thank you! This is helpful. Looks like not doing anything during train time, and applying prefix during test time works better. <|||||>Hi @gante, during my implementation I found that I am getting an error of device mismatch at the following location. ``` File "/mnt/azureml/cr/.../exe/wd/trainer_seq2seq.py", line 296, in prediction_step generated_tokens = self.model.generate(**inputs, **gen_kwargs) File "/opt/conda/envs/ptca/lib/python3.8/site-packages/torch/autograd/grad_mode.py", line 27, in decorate_context return func(*args, **kwargs) File "/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/generation/utils.py", line 1328, in generate input_ids, model_kwargs = self._prepare_decoder_input_ids_for_generation( File "/opt/conda/envs/ptca/lib/python3.8/site-packages/transformers/generation/utils.py", line 676, in _prepare_decoder_input_ids_for_generation decoder_input_ids = torch.cat([decoder_input_ids_start, decoder_input_ids], dim=-1) RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:4 and cpu! (when checking argument for argument tensors in method wrapper_cat) ``` This happens when I do `Seq2SeqTrainer `evaluation with prefix (`decoder_input_ids`) provided. In CPU machine, the code works perfectly fine. But at the GPU presence, it was throwing this device error. 
After debugging I found that the `decoder_input_ids` were never placed in the corresponding cuda device IDs even though other input values were placed in GPU. I did the following change, and it worked fine now. I am not sure if it's a bug or not, but I am bringing this up to your attention if anyone face similar issues in future. In addition, I had to overwrite Seq2SeqTrainer to separate `decoder_input_ids` from `inputs `and assign it with `self._gen_kwargs` so that the code works. Otherwise, I was getting complex errors. ``` decoder_input_ids = inputs.pop("decoder_input_ids") self._gen_kwargs["decoder_input_ids"] = decoder_input_ids ``` (adding @wgx998877 for reference) <|||||>@cramraj8 would you be able to share a short reproducer? (like the one you shared at the top)
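A minimal sketch of the device fix discussed above, making sure the prefix ids live on the same device as the model before calling `generate`. The helper name and signature are mine, not from the thread.
```python
import torch


def generate_with_prefix(model, decoder_input_ids: torch.Tensor, **inputs):
    # move the prefix to the model's device so the concatenation inside generate()
    # does not mix CPU and CUDA tensors
    decoder_input_ids = decoder_input_ids.to(model.device)
    return model.generate(**inputs, decoder_input_ids=decoder_input_ids)
```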
transformers
24,751
closed
Stalled loop during prediction with deepspeed
### System Info
- `transformers` version: 4.31.0.dev0
- Platform: Linux-5.10.173-154.642.amzn2.x86_64-x86_64-with-glibc2.26
- Python version: 3.10.10
- Huggingface_hub version: 0.14.1
- Safetensors version: 0.3.1
- PyTorch version (GPU?): 2.0.0 (True)
- Tensorflow version (GPU?): not installed (NA)
- Flax version (CPU?/GPU?/TPU?): not installed (NA)
- Jax version: not installed
- JaxLib version: not installed
- Using GPU in script?: Yes
- Using distributed or parallel set-up in script?: Yes

### Who can help?
@pacman100 (b/c deepspeed-only problem) @sgugger (b/c this is a documentation example script)

### Information
- [X] The official example scripts
- [ ] My own modified scripts

### Tasks
- [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...)
- [ ] My own task or dataset (give details below)

### Reproduction
I am running `examples/pytorch/translation/run_translation.py` on a machine with 4 V100's. To replicate my issue, run `deepspeed examples/pytorch/translation/run_translation.py --deepspeed tests/deepspeed/ds_config_zero3.json --model_name_or_path t5-small --per_device_train_batch_size 1 --output_dir output_dir --overwrite_output_dir --fp16 --do_train --max_train_samples 64 --num_train_epochs 1 --dataset_name wmt16 --dataset_config "ro-en" --source_lang en --target_lang ro --do_predict --max_predict_samples 64 --predict_with_generate`

### Expected behavior
I would expect the script to fully run using `deepspeed`, not just without it. Right now, it outputs a warning message
```
Invalidate trace cache @ step 0: expected module 2, but got module 0
Invalidate trace cache @ step 1: expected module 116, but got module 2
```
and gets stuck during the `.evaluation_loop()` method. I added some printing steps to the code, and it appeared that the code was stalling after the first `.prediction_step()`.
07-11-2023 16:08:50
07-11-2023 16:08:50
Not really sure, but let me tag @pacman100 (?) and see if he knows this is more an issue in `transformers/accelerate` or should go to DeepSpeed repo issue page.<|||||>I wasn't sure which repo it belonged in either. I couldn't seem to locate the source of the bug. Based on all the print statements I added, it looked like it had to be in the `dataloader`, but then I added some code just to iterate through the `dataloader` without doing anything, and that worked without issue.<|||||>Hello @avivbrokman, so you mean the issue is not with dataloaders? <|||||>@pacman100 I couldn't figure it out—I think this is beyond my coding skill level. I spent a few days trying to locate the source of the bug, but failed. Normally, when I get an error, the traceback helps me find the problem. Here, there's no error message, so my (probably highly inadvisable) solution was to add print statements in between every single line in the source code so I can see the last line that was executed. When I add `print(f'finished step {step}')` at line 3179 of `trainer.py` with one less level of indentation than line 3178, it prints, but then the `print(f'beginning step {step}')` at line 3114 doesn't execute for a second batch. This led me to believe that the issue was with the `dataloader`. So I inserted the following code at line 3112: ``` for step, inputs in enumerate(dataloader): print(step) ``` This fully executed, which led me to to believe the problem is not the `dataloader`. At this point, I reached the limits of my understanding, and submitted my bug report.<|||||>Thank you. This isn't a deepspeed issue as this also happens on just using DDP<|||||>> for step, inputs in enumerate(dataloader): > print(step) > This fully executed, which led me to to believe the problem is not the dataloader. At this point, I reached the limits of my understanding, and submitted my bug report. This doesn't print for 2nd step for me<|||||>Seems to be related to dataloader, cc @muellerzr, post completion of 1 step, it hangs when using DDP: <img width="1502" alt="Screenshot 2023-07-12 at 5 02 43 PM" src="https://github.com/huggingface/transformers/assets/13534540/dd55966a-0bd0-4004-b72e-be9069a9535b"> Command: ``` accelerate launch --num_processes 2 examples/pytorch/translation/run_translation.py --model_name_or_path t5-small --per_device_train_batch_size 1 --output_dir output_dir --overwrite_output_dir --fp16 --do_train --max_train_samples 64 --num_train_epochs 1 --dataset_name wmt16 --dataset_config "ro-en" --source_lang en --target_lang ro --do_predict --max_predict_samples 64 --predict_with_generate ``` Main branch of transformers and accelerate. @muellerzr, any ideas about what might be going wrong? <|||||>Hmmm @pacman100 can you grab the absolutely latest version of main on Accelerate and try again? (Like within the last 5 minutes)<|||||>Hello, just updated the accelerate to main and still the issue persists<|||||>@muellerzr Is the #24775 PR just for the Trainer? I am hitting this issue in my eval loop, but with a custom loop, not the trainer. I am using zero-3.<|||||>@init-random yes, we'd need a reproducer to know what's going on with your custom loop, but in general that's the correct solution to do if you're mimicking what the trainer should be doing. (And it's not directly deepspeed related)<|||||>@muellerzr OK, thank you! I'll look into it and open a new issue, if need be.
transformers
24,750
open
Add PEFT support directly in transformers pipeline
# What does this PR do? Replaces https://github.com/huggingface/peft/pull/585 After discussing with @LysandreJik I made this PoC PR to see whether it is simpler to add PEFT support directly in transformers and centralize all sort of pipelines in transformers pipeline. In the future, we can concentrate the efforts on diffusers side to add PEFT support there as well. Do not merge before the next PEFT release Currently the API looks as follows: ```python from transformers import pipeline peft_model_id = "ybelkada/opt-350m-lora" pipe = pipeline("text-generation", peft_model_id) print(pipe("hello")) pipe = pipeline("text-generation", peft_model_id, peft_model_kwargs={"adapter_name": "default"}) print(pipe("hello")) local_peft_pipeline_path = "./test_lora_pipeline" pipe.model.save_pretrained(local_peft_pipeline_path) pipe = pipeline("text-generation", local_peft_pipeline_path) print(pipe("hello")) ``` ## TODOS: - [ ] seq2seq generation - [ ] seq classification - [ ] Add check with task_type - [ ] add clean tests - [ ] update docs
07-11-2023 15:50:06
07-11-2023 15:50:06
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24750). All of your documentation changes will be reflected on that endpoint.<|||||>Thanks ! If the PoC gets validated by @Narsil I can extend it to more tasks (seq2seq generation, seq-cls) and add nice tests in the current testing suite<|||||>So far `self.check_model_type` is removed, and I know @Narsil suggests it's ok. I think it could be simply skipped by checking if the model is an instance of PEFT model, and we don't really need to remove it. Leave @sgugger to make the final call though.<|||||>Sounds good to me!
transformers
24,749
closed
Skip keys not in the state dict when finding mismatched weights
# What does this PR do? When looping through the keys in `find_mismatched_weights`, we loop through all the `loaded_keys` which are all the keys in the checkpoint. If the checkpoint is sharded, the `state_dict` passed won't contain all those keys, only a subset of them, so we need to skip the keys not present in the `state_dict`. Fixes #24704
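A standalone sketch of the guard this PR describes (not the exact library code): with a sharded checkpoint, the state dict being inspected only holds a subset of `loaded_keys`, so keys belonging to other shards are skipped rather than compared.
```python
from typing import Dict, List, Tuple

import torch


def find_mismatched_shapes(
    loaded_keys: List[str],
    shard_state_dict: Dict[str, torch.Tensor],
    model_state_dict: Dict[str, torch.Tensor],
) -> List[Tuple[str, torch.Size, torch.Size]]:
    mismatched = []
    for key in loaded_keys:
        if key not in shard_state_dict:
            # the key lives in another shard of the checkpoint, nothing to compare here
            continue
        if key in model_state_dict and shard_state_dict[key].shape != model_state_dict[key].shape:
            mismatched.append((key, shard_state_dict[key].shape, model_state_dict[key].shape))
    return mismatched
```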
07-11-2023 15:29:27
07-11-2023 15:29:27
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,748
open
Docs: Added benchmarks for `torch.compile()` for vision models
As discussed with @amyeroberts & @sayakpaul, this PR adds `torch.compile()` benchmarks to our documentation. I mainly benchmarked latency; I can add throughput as well. I built the docs with doc-builder locally, and they look like below. <img width="928" alt="Screenshot 2023-07-11 at 18 05 42" src="https://github.com/huggingface/transformers/assets/53175384/fecf12e0-750b-4085-8224-2fe91705bbfd"> <img width="529" alt="Screenshot 2023-07-11 at 18 04 39" src="https://github.com/huggingface/transformers/assets/53175384/abe6a3f5-0e20-4d76-9e89-8724c7b1cb45">
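For context, a rough sketch of the kind of latency measurement behind such numbers; the model, input size, and timing loop are illustrative stand-ins, not the exact benchmark script used for the docs.
```python
import time

import torch
from transformers import AutoModelForImageClassification

model = AutoModelForImageClassification.from_pretrained("google/vit-base-patch16-224").eval()
compiled = torch.compile(model)

pixel_values = torch.randn(1, 3, 224, 224)

with torch.no_grad():
    for _ in range(5):  # warmup, also triggers compilation
        compiled(pixel_values=pixel_values)
    start = time.time()
    for _ in range(30):  # on GPU, wrap the timed loop with torch.cuda.synchronize()
        compiled(pixel_values=pixel_values)
print(f"mean latency: {(time.time() - start) / 30 * 1000:.1f} ms")
```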
07-11-2023 15:09:54
07-11-2023 15:09:54
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24748). All of your documentation changes will be reflected on that endpoint.<|||||>@sayakpaul can you check this out when you get time? After that, we can merge. <|||||>@sayakpaul I added more models.<|||||>I thought I had committed more models but apparently hadn't committed the work; I just did now.<|||||>@sayakpaul @amyeroberts @stevhliu I added more models and visualizations, can you give another round of review?<|||||>I made a mistake in the visualizations for the T4 batch=4 ViT. I replaced the image in HF documentation-images, but since it was already uploaded for the md preview on GitHub, GitHub doesn't update it (so everything's actually fine, it's just GitHub).<|||||>_Note_: this PR will be stale until the benchmarks are improved. <|||||>Hey @amyeroberts I added `nightly` + `nightly`/`reduce-overhead` comparisons.
transformers
24,747
open
`device_map="auto"` support multi-node
### Feature request `AutoModel.from_pretrained(model_dir, device_map="auto", trust_remote_code=True).half()` I want to load a huge model across multiple nodes for inference, for example 4 nodes with 1 GPU per node, but I do not know how to do it. `device_map="auto"` seems to work only for one node. ### Motivation I want to test long-context perplexity. When I increase the context, the GPU memory usage increases too, so I need more nodes to do the inference. ### Your contribution Not yet.
07-11-2023 13:15:12
07-11-2023 13:15:12
cc @younesbelkada Hi @guozhiyao, Thanks for raising this issue. At the moment, there isn't enough information for us to be able to help you. Could you specify what you mean by "doesn't work"? As a side note, I don't think you can do `device_map=auto"` and then `.half()`, instead you can pass in a flag to `from_pretrained` to specify the precision e.g. `torch_dtype=torch.float16` or `load_in_8bit=True`. @younesbelkada can confirm :) <|||||>hi @guozhiyao thanks for raising this up ! firstly as @amyeroberts suggested, the canonical way of loading a model with a specific dtype (in your case `half`=`torch.float16`) is by passing `torch_dtype=torch.float16` thus you avoid any unexpected issue you may encounter Regarding your second question, I don't think this is supported by `device_map="auto"` for inference. Usually the multi-node paradigm is useful for training, where you have an entire training process running independently on each node. I think accelerate supports multi-node training (you can select mutli node training when running `accelerate config` and we have made some training process work under multi-node regime using accelerate internally). However I doubt that you can run multi-node inference out of the box with `device_map='auto'` as this is intended only for single node (single / multi GPU or CPU only). In multi-node setting each process will run independently `AutoModel.from_pretrained(model_dir, device_map="auto", trust_remote_code=True).half()` thus the model will not be shared across both processes. I am also unsure about the benefits of such protocol - the only case it might be interesting to see this would be if someone wants to fit a model that can't fit in more than a node (more than 8xA100 80GB at most) Would like also to hear from @sgugger or @muellerzr , in case I missed something I am not aware of.<|||||>I can confirm that multi-node is not supported by `device_map="auto"` :-)
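For reference, a short sketch of what `device_map="auto"` does support today: sharding one model across the GPUs (and optionally CPU RAM) of a single node, with the dtype passed at load time instead of calling `.half()`. The path and memory caps below are placeholders.
```python
import torch
from transformers import AutoModelForCausalLM

model_dir = "path/to/your/model"  # placeholder

model = AutoModelForCausalLM.from_pretrained(
    model_dir,
    device_map="auto",          # spreads layers over the GPUs visible on this node
    torch_dtype=torch.float16,  # preferred over calling .half() afterwards
    max_memory={0: "20GiB", 1: "20GiB", "cpu": "60GiB"},  # optional per-device caps (illustrative values)
)
print(model.hf_device_map)      # shows which device each module ended up on
```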
transformers
24,746
open
CPM_BEE model should have local model_path to infer and don't use trust_remote_code=True
### System Info I want to use my local model path to run inference with openbmb/cpm-bee-10b, like model_path="/home/users/fanxingran/workspace/workspace/cpm_bee_cpu/cpm-bee-10b" tokenizer = AutoTokenizer.from_pretrained(model_path, cache_dir=model_path, subfolder="scheduler", trust_remote_code=False) model = AutoModelForCausalLM.from_pretrained(model_path, cache_dir=model_path, trust_remote_code=False).cpu() # but it couldn't work. If I use # tokenizer = AutoTokenizer.from_pretrained("openbmb/cpm-bee-10b", trust_remote_code=True) # model = AutoModelForCausalLM.from_pretrained("openbmb/cpm-bee-10b", trust_remote_code=True).cpu() it downloads something into .cache/huggingface every time. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction from transformers import AutoModelForCausalLM, AutoTokenizer model_path="/home/users/fanxingran/workspace/workspace/cpm_bee_cpu/cpm-bee-10b" cache_dir=model_path tokenizer = AutoTokenizer.from_pretrained(model_path, cache_dir=model_path, subfolder="scheduler", trust_remote_code=False) model = AutoModelForCausalLM.from_pretrained(model_path, cache_dir=model_path, trust_remote_code=False).cpu() # result = model.generate({"input": "今天天气不错,", "<ans>": ""}, tokenizer) print(result) ### Expected behavior I want to run inference with cpm-bee-10b from a local model path so that I can change the model code (changing the code is very important for me); if the code comes from .cache, an update will replace it and my changes will be deleted with the update.
07-11-2023 11:36:42
07-11-2023 11:36:42
Hi @fxrhhx, `cache_dir` is the [directory where the checkpoint is located](https://huggingface.co/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoModel.from_pretrained.cache_dir). `pretrained_model_name_or_path` is either [the checkpoint name, or the full path to the directory containing weights & configs](https://huggingface.co/docs/transformers/v4.30.0/en/model_doc/auto#transformers.AutoModel.from_pretrained.pretrained_model_name_or_path). In this case, following your example, this should work: ```python from transformers import AutoModelForCausalLM, AutoTokenizer cache_dir="/home/users/fanxingran/workspace/workspace/cpm_bee_cpu" checkpoint = "cpm-bee-10b" model = AutoModelForCausalLM.from_pretrained(checkpoint, cache_dir=cache_dir) # Pass in the full model path model_path = "/home/users/fanxingran/workspace/workspace/cpm_bee_cpu/cpm-bee-10b" model = AutoModelForCausalLM.from_pretrained(model_path) ``` <|||||>> Thank you for your answer!! I found use the code like this, is also dont't work, will still download something is .cache/huggface and use it, the model_path which i give containing the weights and configs, i couldn't understant why > `from transformers import AutoModelForCausalLM, AutoTokenizer` `model_path="/home/users/fanxingran/workspace/workspace/cpm_bee_cpu/cpm-bee-10b"` `tokenizer = AutoTokenizer.from_pretrained(model_path, trust_remote_code=True)` `model = AutoModelForCausalLM.from_pretrained(model_path, trust_remote_code=True).cpu() ` `result = model.generate({"input": "今天天气不错,", "<ans>": ""}, tokenizer)` `print(result)` <|||||> ![9f353864f24ba9a6688e9ba99280747e](https://github.com/huggingface/transformers/assets/42543089/97396528-6f7b-4739-99ec-82f45a6176fc) <|||||>![01b809b730e290bb1b536122829191c5](https://github.com/huggingface/transformers/assets/42543089/646f00c9-1a69-401c-92f4-e85799ff3fb8) <|||||>@fxrhhx Without knowing what's in the model path, it won't be possible to debug this. Could you run: ``` ls /home/users/fanxingran/workspace/workspace/cpm_bee_cpu/cpm-bee-10b ``` And ``` less /home/users/fanxingran/workspace/workspace/cpm_bee_cpu/cpm-bee-10b/config.json ``` ? Just from this, it looks like the modeling config is pointing to the model on the hub: https://huggingface.co/openbmb/cpm-bee-10b/blob/4b1905b3195203330c462ed367d97c3361288937/config.json#L3 <|||||>> @fxrhhx Without knowing what's in the model path, it won't be possible to debug this. Could you run: > > ``` > ls /home/users/fanxingran/workspace/workspace/cpm_bee_cpu/cpm-bee-10b > ``` > > And > > ``` > less /home/users/fanxingran/workspace/workspace/cpm_bee_cpu/cpm-bee-10b/config.json > ``` > > ? > > Just from this, it looks like the modeling config is pointing to the model on the hub: https://huggingface.co/openbmb/cpm-bee-10b/blob/4b1905b3195203330c462ed367d97c3361288937/config.json#L3 ![63e85e8609d966a4ac2704b71c79f710](https://github.com/huggingface/transformers/assets/42543089/eeef3212-6b4c-4c5d-854c-cc82c374c4f2) ![3c53e82442b8b7aa8b2139c5b9ba9156](https://github.com/huggingface/transformers/assets/42543089/ffecd4aa-f344-4f44-a71a-cd9a471a4bb3)
transformers
24,745
open
Add CLVP
# What does this PR do? <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> <!-- Remove if not applicable --> Adds CLVP, which is an integral part of `tortoise-tts`. Required for `tortoise-tts` integration in HF diffusers(Please see [this issue](https://github.com/huggingface/diffusers/issues/3891)). ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [x] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. ([link](https://github.com/huggingface/diffusers/issues/3891)) - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
07-11-2023 07:50:32
07-11-2023 07:50:32
cc: @sanchit-gandhi, @dg845<|||||>Very cool @susnato! Let me know if you have any questions / queries - more than happy to lend a hand here!<|||||>The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24745). All of your documentation changes will be reflected on that endpoint.<|||||>Hi @sanchit-gandhi, this PR is ready for review! Some notes related to this design I wanted to mention: 1. Although CLIP has both a tokenizer and an image processor, in tortoise the text is encoded and pushed into both the autoregressive model and the CLVP model, so I think it's better to have only one tokenizer (for tortoise), encoding the text once and pushing it to both models, rather than defining a separate tokenization_clvp.py. 2. Instead of processing images and checking which image fits the description (from text) best, CLVP compares speech token candidates and text. The speech tokens come from the output of the autoregressive model itself, so we don't need the image processor either! 3. CLVP uses rotary position embeddings.
transformers
24,744
open
Import error for relative import of module_name = 'testing_utils'
### System Info Error message: ` self = <module 'transformers' from '/home/user/miniconda3/envs/my_project/lib/python3.8/site-packages/transformers/__init__.py'> module_name = 'testing_utils' def _get_module(self, module_name: str): try: > return importlib.import_module("." + module_name, self.__name__) ../../miniconda3/envs/my_project/lib/python3.8/site-packages/transformers/utils/import_utils.py:1086: name = '.testing_utils', package = 'transformers' def import_module(name, package=None): """Import a module. The 'package' argument is required when performing a relative import. It specifies the package to use as the anchor point from which to resolve the relative import to an absolute import. """ level = 0 if name.startswith('.'): if not package: msg = ("the 'package' argument is required to perform a relative " "import for {!r}") raise TypeError(msg.format(name)) for character in name: if character != '.': break level += 1 > return _bootstrap._gcd_import(name[level:], package, level) ../../miniconda3/envs/my_project/lib/python3.8/importlib/__init__.py:127: name = 'transformers.testing_utils', package = 'transformers', level = 1 > ??? <frozen importlib._bootstrap>:1014: name = 'transformers.testing_utils' import_ = <function _gcd_import at 0x7fc0abbb0430> > ??? <frozen importlib._bootstrap>:991: name = 'transformers.testing_utils' import_ = <function _gcd_import at 0x7fc0abbb0430> > ??? <frozen importlib._bootstrap>:975: spec = ModuleSpec(name='transformers.testing_utils', loader=<_pytest.assertion.rewrite.AssertionRewritingHook object at 0x7fc0975af460>, origin='/home/user/miniconda3/envs/my_project/lib/python3.8/site-packages/transformers/testing_utils.py') > ??? <frozen importlib._bootstrap>:671: self = <_pytest.assertion.rewrite.AssertionRewritingHook object at 0x7fc0975af460> module = <module 'transformers.testing_utils' from '/home/user/miniconda3/envs/my_project/lib/python3.8/site-packages/transformers/testing_utils.py'> def exec_module(self, module: types.ModuleType) -> None: assert module.__spec__ is not None assert module.__spec__.origin is not None fn = Path(module.__spec__.origin) state = self.config.stash[assertstate_key] self._rewritten_names[module.__name__] = fn # The requested module looks like a test file, so rewrite it. This is # the most magical part of the process: load the source, rewrite the # asserts, and load the rewritten source. We also cache the rewritten # module code in a special pyc. We must be aware of the possibility of # concurrent pytest processes rewriting and loading pycs. To avoid # tricky race conditions, we maintain the following invariant: The # cached pyc is always a complete, valid pyc. Operations on it must be # atomic. POSIX's atomic rename comes in handy. write = not sys.dont_write_bytecode cache_dir = get_cache_dir(fn) if write: ok = try_makedirs(cache_dir) if not ok: write = False state.trace(f"read only directory: {cache_dir}") cache_name = fn.name[:-3] + PYC_TAIL pyc = cache_dir / cache_name # Notice that even if we're in a read-only directory, I'm going # to check for a cached pyc. This may not be optimal... 
co = _read_pyc(fn, pyc, state.trace) if co is None: state.trace(f"rewriting {fn!r}") source_stat, co = _rewrite_test(fn, self.config) if write: self._writing_pyc = True try: _write_pyc(state, co, source_stat, pyc) finally: self._writing_pyc = False else: state.trace(f"found cached rewritten pyc for {fn}") > exec(co, module.__dict__) ../../miniconda3/envs/my_project/lib/python3.8/site-packages/_pytest/assertion/rewrite.py:168: import collections import contextlib import doctest import functools import inspect import logging import multiprocessing import os import re import shlex import shutil import subprocess import sys import tempfile import time import unittest from collections.abc import Mapping from io import StringIO from pathlib import Path from typing import Iterable, Iterator, List, Optional, Union from unittest import mock import huggingface_hub import requests > from _pytest.doctest import ( Module, _get_checker, _get_continue_on_failure, _get_runner, _is_mocked, _patch_unwrap_mock_aware, get_optionflags, import_path, ) E ImportError: cannot import name 'Module' from '_pytest.doctest' (/home/user/miniconda3/envs/my_project/lib/python3.8/site-packages/_pytest/doctest.py) ../../miniconda3/envs/my_project/lib/python3.8/site-packages/transformers/testing_utils.py:39: ImportError The above exception was the direct cause of the following exception: args = () kwargs = {'end_date_str': '1990-02-13', 'is_valid': False, 'start_date_str': '1980-02-12'} def wrapper(*args, **kwargs): > with self as time_factory: ../../miniconda3/envs/my_project/lib/python3.8/site-packages/freezegun/api.py:800: self = <freezegun.api._freeze_time object at 0x7fbf5285f8e0> def __enter__(self): > return self.start() ../../miniconda3/envs/my_project/lib/python3.8/site-packages/freezegun/api.py:633: self = <freezegun.api._freeze_time object at 0x7fbf5285f8e0> def start(self): if self.auto_tick_seconds: freeze_factory = StepTickTimeFactory(self.time_to_freeze, self.auto_tick_seconds) elif self.tick: freeze_factory = TickingDateTimeFactory(self.time_to_freeze, real_datetime.now()) else: freeze_factory = FrozenDateTimeFactory(self.time_to_freeze) is_already_started = len(freeze_factories) > 0 freeze_factories.append(freeze_factory) tz_offsets.append(self.tz_offset) ignore_lists.append(self.ignore) tick_flags.append(self.tick) if is_already_started: return freeze_factory # Change the modules datetime.datetime = FakeDatetime datetime.date = FakeDate time.time = fake_time time.monotonic = fake_monotonic time.perf_counter = fake_perf_counter time.localtime = fake_localtime time.gmtime = fake_gmtime time.strftime = fake_strftime if uuid_generate_time_attr: setattr(uuid, uuid_generate_time_attr, None) uuid._UuidCreate = None uuid._last_timestamp = None copyreg.dispatch_table[real_datetime] = pickle_fake_datetime copyreg.dispatch_table[real_date] = pickle_fake_date # Change any place where the module had already been imported to_patch = [ ('real_date', real_date, FakeDate), ('real_datetime', real_datetime, FakeDatetime), ('real_gmtime', real_gmtime, fake_gmtime), ('real_localtime', real_localtime, fake_localtime), ('real_monotonic', real_monotonic, fake_monotonic), ('real_perf_counter', real_perf_counter, fake_perf_counter), ('real_strftime', real_strftime, fake_strftime), ('real_time', real_time, fake_time), ] if _TIME_NS_PRESENT: time.time_ns = fake_time_ns to_patch.append(('real_time_ns', real_time_ns, fake_time_ns)) if _MONOTONIC_NS_PRESENT: time.monotonic_ns = fake_monotonic_ns to_patch.append(('real_monotonic_ns', 
real_monotonic_ns, fake_monotonic_ns)) if _PERF_COUNTER_NS_PRESENT: time.perf_counter_ns = fake_perf_counter_ns to_patch.append(('real_perf_counter_ns', real_perf_counter_ns, fake_perf_counter_ns)) if real_clock is not None: # time.clock is deprecated and was removed in Python 3.8 time.clock = fake_clock to_patch.append(('real_clock', real_clock, fake_clock)) self.fake_names = tuple(fake.__name__ for real_name, real, fake in to_patch) self.reals = {id(fake): real for real_name, real, fake in to_patch} fakes = {id(real): fake for real_name, real, fake in to_patch} add_change = self.undo_changes.append # Save the current loaded modules self.modules_at_start = set(sys.modules.keys()) with warnings.catch_warnings(): warnings.filterwarnings('ignore') for mod_name, module in list(sys.modules.items()): if mod_name is None or module is None or mod_name == __name__: continue elif mod_name.startswith(self.ignore) or mod_name.endswith('.six.moves'): continue elif (not hasattr(module, "__name__") or module.__name__ in ('datetime', 'time')): continue > module_attrs = _get_cached_module_attributes(module) ../../miniconda3/envs/my_project/lib/python3.8/site-packages/freezegun/api.py:722: module = <module 'transformers' from '/home/user/miniconda3/envs/my_project/lib/python3.8/site-packages/transformers/__init__.py'> def _get_cached_module_attributes(module): module_hash, cached_attrs = _GLOBAL_MODULES_CACHE.get(module.__name__, ('0', [])) if _get_module_attributes_hash(module) == module_hash: return cached_attrs # cache miss: update the cache and return the refreshed value > _setup_module_cache(module) ../../miniconda3/envs/my_project/lib/python3.8/site-packages/freezegun/api.py:129: module = <module 'transformers' from '/home/user/miniconda3/envs/my_project/lib/python3.8/site-packages/transformers/__init__.py'> def _setup_module_cache(module): date_attrs = [] > all_module_attributes = _get_module_attributes(module) ../../miniconda3/envs/my_project/lib/python3.8/site-packages/freezegun/api.py:108: module = <module 'transformers' from '/home/user/miniconda3/envs/my_project/lib/python3.8/site-packages/transformers/__init__.py'> def _get_module_attributes(module): result = [] try: module_attributes = dir(module) except (ImportError, TypeError): return result for attribute_name in module_attributes: try: > attribute_value = getattr(module, attribute_name) ../../miniconda3/envs/my_project/lib/python3.8/site-packages/freezegun/api.py:97: self = <module 'transformers' from '/home/user/miniconda3/envs/my_project/lib/python3.8/site-packages/transformers/__init__.py'> name = 'testing_utils' def __getattr__(self, name: str) -> Any: if name in self._objects: return self._objects[name] if name in self._modules: > value = self._get_module(name) ../../miniconda3/envs/my_project/lib/python3.8/site-packages/transformers/utils/import_utils.py:1074: self = <module 'transformers' from '/home/user/miniconda3/envs/my_project/lib/python3.8/site-packages/transformers/__init__.py'> module_name = 'testing_utils' def _get_module(self, module_name: str): try: return importlib.import_module("." 
+ module_name, self.__name__) except Exception as e: > raise RuntimeError( f"Failed to import {self.__name__}.{module_name} because of the following error (look up to see its" f" traceback):\n{e}" ) from e E RuntimeError: Failed to import transformers.testing_utils because of the following error (look up to see its traceback): E cannot import name 'Module' from '_pytest.doctest' (/home/user/miniconda3/envs/my_project/lib/python3.8/site-packages/_pytest/doctest.py) ../../miniconda3/envs/my_project/lib/python3.8/site-packages/transformers/utils/import_utils.py:1088: RuntimeError ` ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction import transformers in a pytest ### Expected behavior No import error
07-11-2023 07:21:03
07-11-2023 07:21:03
Hi @teddius Could you show us the command or the Python script you run that gives this error? It's not super clear what ``` import transformers in a pytest ``` means. Thank you in advance!
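The traceback above reduces to `transformers.testing_utils` failing to import `Module` from `_pytest.doctest` in the installed pytest. A quick diagnostic sketch (not an official reproduction script) to check whether a given environment exposes that symbol:

```python
# Diagnostic sketch: mirrors the import that fails in the traceback above.
import pytest

try:
    from _pytest.doctest import Module  # what transformers.testing_utils tries to import
    print(f"pytest {pytest.__version__} exposes _pytest.doctest.Module, so the import should work")
except ImportError as err:
    print(f"pytest {pytest.__version__} does not expose it: {err}")
```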
transformers
24,743
closed
T5 Tokenizer Adds Space after Each Added (Extra) Token
### System Info - `transformers` version: 4.30.2 - Platform: Linux-5.4.0-146-generic-x86_64-with-glibc2.35 - Python version: 3.11.3 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: (NA) - Using distributed or parallel set-up in script?: (NA) ### Who can help? @Arthu ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ```ipython In [1]: from transformers import AutoTokenizer In [2]: tokenizer = AutoTokenizer.from_pretrained("./models/t5-base/") In [3]: tokenizer.add_tokens(["asdfg"], special_tokens=False) Out[3]: 1 In [4]: tokenizer.tokenize("asdfgwordtimeasdfgtime") Out[4]: ['asdfg', '▁word', 'time', 'asdfg', '▁time'] ``` ### Expected behavior tokenizer return `['asdfg', 'word', 'time', 'asdfg', 'time']`
07-11-2023 06:54:38
07-11-2023 06:54:38
I think a fix is in https://github.com/huggingface/transformers/pull/24622 <|||||>FYI: that PR is not merged yet into `main` branch<|||||>Let's wait until we merge to close! <|||||>@ArthurZucker Hi, this issue still exists after updating transformers to the latest 4.31.0 with #24622<|||||>Hey! It is adressed for slow tokenizer, which are part of transformers! Fast tokenizers will need to wait a bit. It is also linked to the conversion script and meta space that need to be used similar to Llama ```python In [6]: tokenizer = AutoTokenizer.from_pretrained("t5-base", legacy = False, use_fast = False) /fsx/arthur/miniconda3/envs/py10/lib/python3.10/site-packages/transformers/models/t5/tokenization_t5.py:199: FutureWarning: This tokenizer was incorrectly instantiated with a model max length of 512 which will be corrected in Transformers v5. For now, this behavior is kept to avoid breaking backwards compatibility when padding/encoding with `truncation is True`. - Be aware that you SHOULD NOT rely on t5-base automatically truncating your input to 512 when padding/encoding. - If you want to encode/pad to sequences longer than 512 you can either instantiate this tokenizer with `model_max_length` or pass `max_length` when encoding/padding. - To avoid this warning, please instantiate this tokenizer with `model_max_length` set to your preferred value. warnings.warn( In [7]: tokenizer.add_tokens(["asdfg"], special_tokens=False) Out[7]: 1 In [8]: tokenizer.tokenize("asdfgwordtimeasdfgtime") Out[8]: ['asdfg', 'word', 'time', 'asdfg', 'time'] ``` the key is that you need to set `legacy=False` and `use_fast = False` because fast tokenizer is not fixed yet 😉
transformers
24,742
closed
Problems when using PyTorch Class _Dataset_ in model fine-tuning
### System Info ```shell transformer 0.15.1 ``` ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I try to use PyTorch Class _Dataset_ to create my own training task, but it seems make the model worse. After 4 epochs training the model outputs null string. Only a little part of the input can get right answer. I apprecitate it if someone could save me!!! ``` from datasets import load_dataset from transformers import AutoTokenizer, AutoModelForSeq2SeqLM from transformers import DataCollatorForSeq2Seq, Seq2SeqTrainingArguments, Seq2SeqTrainer import evaluate import numpy as np import torch from torch.utils.data import Dataset def compute_metrics(eval_preds): metric = evaluate.load("sacrebleu") preds, labels = eval_preds # In case the model returns more than the prediction logits if isinstance(preds, tuple): preds = preds[0] decoded_preds = tokenizer.batch_decode(preds, skip_special_tokens=True) # Replace -100s in the labels as we can't decode them labels = np.where(labels != -100, labels, tokenizer.pad_token_id) decoded_labels = tokenizer.batch_decode(labels, skip_special_tokens=True) # Some simple post-processing to remove the "\n", "\t" and so on decoded_preds = [pred.strip() for pred in decoded_preds] decoded_labels = [[label.strip()] for label in decoded_labels] for i in range(10): print(decoded_preds[i]) print(decoded_labels[i]) result = metric.compute(predictions=decoded_preds, references=decoded_labels) return {"bleu": result["score"]} class MyDataset(Dataset): def __init__(self, file_name, tokenizer): self.text1 = [] self.text2 = [] self.read(file_name) self.read(file_name) self.encoding = tokenizer(self.text1, text_target=self.text2, truncation=True, max_length=128, padding=True, return_tensors="pt") def read(self, file_name): # Train data is like: "Go.\tVa !" with open(file_name, "r", encoding="utf-8") as file: while True: line = file.readline() if line == "": break self.text1.append(line.split("\t")[0]) self.text2.append(line.split("\t")[1]) def __getitem__(self, index): item = {k: v[index].clone().detach() for k, v in self.encoding.items()} return item def __len__(self): return len(self.text1) def train(): train_dataset = MyDataset(train_file, tokenizer) eval_dataset = MyDataset(eval_file, tokenizer) training_args = Seq2SeqTrainingArguments( output_dir="save_model", learning_rate=2e-5, per_device_train_batch_size=8, per_device_eval_batch_size=16, num_train_epochs=4, evaluation_strategy="no", save_strategy="epoch", save_total_limit=1, predict_with_generate=True, ) trainer = Seq2SeqTrainer( model=model, args=training_args, train_dataset=train_dataset, eval_dataset=eval_dataset, tokenizer=tokenizer, compute_metrics=compute_metrics ) print(trainer.evaluate()) trainer.train() print(trainer.evaluate()) if __name__ == "__main__": model_name = "t5-small" train_file = "fra-eng.txt" eval_file = "fra-eng.txt" model = AutoModelForSeq2SeqLM.from_pretrained(model_name).to("cuda:0") tokenizer = AutoTokenizer.from_pretrained(model_name) train() ``` ### Expected behavior ```shell I hope anyone could tell me if I use _Dataset_ Class in a wong way. ``` ### Checklist - [X] I have read the migration guide in the readme. 
([pytorch-transformers](https://github.com/huggingface/transformers#migrating-from-pytorch-transformers-to-transformers); [pytorch-pretrained-bert](https://github.com/huggingface/transformers#migrating-from-pytorch-pretrained-bert-to-transformers)) - [X] I checked if a related official extension example runs on my machine.
07-11-2023 06:29:16
07-11-2023 06:29:16
Hi @Cassius31 This kind of question is better asked on the [Hugging Face Forums](https://discuss.huggingface.co/). We reserve the `transformers` GitHub repository for issues and feature requests.<|||||>> Hi @Cassius31 > > This kind of question is better asked on the [Hugging Face Forums](https://discuss.huggingface.co/). > > We reserve the `transformers` GitHub repository for issues and feature requests. Thanks for the reminder! I am new to GitHub and sorry for the trouble. I will delete this in 3 hours.
transformers
24,741
closed
past_key_values supporting more-than-one-token inputs
### System Info - `transformers` version: 4.30.0 - Platform: Linux-5.4.0-113-generic-x86_64-with-glibc2.31 - Python version: 3.10.11 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction `past_key_values` does not work when my `input_ids` are more than 1 token. For example: ```python from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained("gpt2") tokenizer = AutoTokenizer.from_pretrained("gpt2") inputs = "I love hugging face. Me too" inputs = tokenizer(inputs, return_tensors="pt") inputs1 = {} inputs2 = {} for k, v in inputs.items(): inputs1[k] = v[:, :-3] inputs2[k] = v[:, -3:] outputs = model(**inputs) input1_outputs = model(**inputs1) # Error!! input2_outputs = model(**inputs2, past_key_values=input1_outputs.past_key_values) ``` ### Expected behavior I find the error is because you only extend the keys and values [here](https://github.com/huggingface/transformers/blob/fe861e578f50dc9c06de33cd361d2f625017e624/src/transformers/models/gpt2/modeling_gpt2.py#L319), while you forget to extend the `attention_mask` to the same size as the extended keys and values and hence the error. I also find this is common across different models, e.g. gpt-neo, gpt2. I think this is an urgent problem because many downstream applications like chatbots require this feature. I think you can extend the attention mask simply by concatenating `torch.ones((batch_size, past_length))` in front of the input attention mask to solve the problem. Here is my work around: ```python from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained("gpt2") tokenizer = AutoTokenizer.from_pretrained("gpt2") inputs = "I love hugging face. Me too" inputs = tokenizer(inputs, return_tensors="pt") inputs1 = {} inputs2 = {} for k, v in inputs.items(): inputs1[k] = v[:, :-3] inputs2[k] = v[:, -3:] outputs = model(**inputs) input1_outputs = model(**inputs1) # without the following line, will raise errors inputs2["attention_mask"] = inputs.attention_mask input2_outputs = model(**inputs2, past_key_values=input1_outputs.past_key_values) # check print(((input2_outputs.logits - outputs.logits[:, -3:]) < 1e-4).all()) ```
07-11-2023 05:02:57
07-11-2023 05:02:57
Hey! Thanks for opening an issue! 🤗 The reason why this is not working currently is that GPT2 is a pretty old model, thus it requires the attention mask to be passed when you want to generate. Now this is not an issue most of the time people use `gpt2_model.generate(input_ids, attention_mask)` and thus don't need to handle the past key values on their own! This is why, no it's not a very urgent problem and it's pretty much expected. Someone had a similar question see [here.](https://github.com/huggingface/transformers/issues/16811) The issue rather lies in the creation of the positional ids, see in #18104 <|||||>Hey! Thanks for replying so soon. I tried `.generate` method with `input_ids` and `past_key_values` but it does not work as expected when I have more than one token in the `input_ids`. To be specific, assume I'm building a QA system with GPT2. My input would be like ``` Q: a question\nA: an answer\nQ: a new question\nA: ``` After generating the first answer, I have `past_key_values` until the token `answer`. However, when I want to generate the second answer, due to the insertion of the new question and prompts, I have to input `\nQ: a new question\nA: ` together with the `past_key_values` to the model. I expect the model to compute the hidden states of the given input, then generate the next token. However, according to the [code](https://github.com/huggingface/transformers/blob/35eac0df75c692c5b93c12f7eaf3279cab8bd7ce/src/transformers/models/gpt2/modeling_gpt2.py#L1012), the model will automatically truncate the `input_ids` as long as there is `past_key_values` passed alongside. This leads to false generation results. Here is my snippet: ``` from transformers import AutoModel, AutoTokenizer model = AutoModel.from_pretrained("gpt2") tokenizer = AutoTokenizer.from_pretrained("gpt2") inputs = "I love you. Me too" inputs = tokenizer(inputs, return_tensors="pt") inputs1 = {} inputs2 = {} inputs3 = {} inputs4 = {} for k, v in inputs.items(): inputs1[k] = v[:, :-3] inputs2[k] = v[:, -3:] inputs3[k] = v[:, :-1] inputs4[k] = v[:, -1:] print(f"All inputs: {tokenizer.batch_decode(inputs['input_ids'])}") print(f"Inputs1: {tokenizer.batch_decode(inputs1['input_ids'])}") print(f"Inputs2: {tokenizer.batch_decode(inputs2['input_ids'])}") print(f"Inputs3: {tokenizer.batch_decode(inputs3['input_ids'])}") print(f"Inputs4: {tokenizer.batch_decode(inputs4['input_ids'])}") # 1. Generate without cache. This is the standard output. outputs = model.generate(**inputs, max_new_tokens=5) print(tokenizer.batch_decode(outputs[:, inputs['input_ids'].shape[1]:])) # 2. WRONG!!! Generate with partial past_key_values. Extend attention_mask by past_length because the model expect the input_ids of shape [B, 1] when past_key_values is not None outputs1 = model(**inputs1) past_length = outputs1.past_key_values[0][0].size(-2) inputs2["attention_mask"] = torch.cat([torch.ones(1, past_length), inputs2['attention_mask'][:, :1]], dim=-1) print(inputs2, past_length) outputs2 = model.generate(**inputs2, past_key_values=outputs1.past_key_values, max_new_tokens=5) print(tokenizer.batch_decode(outputs2[:, inputs2['input_ids'].shape[1]:])) # 3. CORRECT!!! Generate with past_key_values of all previous tokens except the most recent one. 
Extend attention_mask by past_length because the model expect the input_ids of shape [B, 1] when past_key_values is not None outputs3 = model(**inputs3) past_length = outputs3.past_key_values[0][0].size(-2) inputs4["attention_mask"] = torch.cat([torch.ones(1, past_length), inputs4['attention_mask'][:, :1]], dim=-1) print(inputs4, past_length) outputs4 = model.generate(**inputs4, past_key_values=outputs3.past_key_values, max_new_tokens=5) print(tokenizer.batch_decode(outputs4[:, inputs4['input_ids'].shape[1]:])) ```<|||||>What you are trying to do is very akin to the [`QuestionAnsweringPipeline](https://huggingface.co/docs/transformers/main/main_classes/pipelines#transformers.QuestionAnsweringPipeline)`, which implements all the pre-processing and post processing. The generate function is made for general generation, and uses indeed the `prepare_inputs_for_generation` function. What you are expecting is not a supported behaviour, but rather a specific usage. In the `generate` function, we expect the `new_tokens` to be a single token per batch: https://github.com/huggingface/transformers/blob/f092997ca669750d4f32ada127b2624bd450aee5/src/transformers/generation/utils.py#L2474 It should be possible to hack your way trough the code, probably by writing a logits processor that only returns the last prediction, so that when you compute the next token: https://github.com/huggingface/transformers/blob/f092997ca669750d4f32ada127b2624bd450aee5/src/transformers/generation/utils.py#L2465 the shape is correct. You also need to modify the `prepare_inputs_for_generation`. cc @gante for visibility <|||||>Got it. Thank you Arthur.
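The workaround discussed in this thread can be wrapped into a small helper. This is a sketch based only on the snippets above (the helper name is made up): it left-pads the attention mask by the cached sequence length so a multi-token chunk can be fed together with `past_key_values`.

```python
import torch

# Sketch of the attention-mask extension used in the workaround above.
def extend_attention_mask(attention_mask: torch.Tensor, past_key_values) -> torch.Tensor:
    # For GPT-2, past_key_values[0][0] has shape (batch, num_heads, past_seq_len, head_dim).
    past_length = past_key_values[0][0].size(-2)
    past_mask = torch.ones(
        attention_mask.size(0), past_length, dtype=attention_mask.dtype, device=attention_mask.device
    )
    return torch.cat([past_mask, attention_mask], dim=-1)
```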
transformers
24,740
open
docker/transformers-all-latest-gpu/Dockerfile Does Not Work
### System Info Docker: 20.10.12 ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction 1. change pip mirror to aliyun 2. `docker build -t huggingface .` ### Expected behavior an error happened: ``` Step 16/24 : RUN python3 -m pip install --no-cache-dir -e ./transformers[dev,onnxruntime] ---> Running in 1dc79ba974ba Looking in indexes: https://mirrors.aliyun.com/pypi/simple Obtaining file:///root/transformers Installing build dependencies: started Installing build dependencies: finished with status 'done' Checking if build backend supports build_editable: started Checking if build backend supports build_editable: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'done' Preparing metadata (pyproject.toml): started Preparing metadata (pyproject.toml): finished with status 'done' Collecting packaging>=20.0 Downloading https://mirrors.aliyun.com/pypi/packages/05/8e/8de486cbd03baba4deef4142bd643a3e7bbe954a784dc1bb17142572d127/packaging-21.3-py3-none-any.whl (40 kB) Collecting tokenizers!=0.11.3,<0.14,>=0.11.1 Downloading https://mirrors.aliyun.com/pypi/packages/29/9c/936ebad6dd963616189d6362f4c2c03a0314cf2a221ba15e48dd714d29cf/tokenizers-0.13.3.tar.gz (314 kB) Installing build dependencies: started Installing build dependencies: finished with status 'done' Getting requirements to build wheel: started Getting requirements to build wheel: finished with status 'done' Preparing metadata (pyproject.toml): started Preparing metadata (pyproject.toml): finished with status 'done' Collecting tqdm>=4.27 Downloading https://mirrors.aliyun.com/pypi/packages/47/bb/849011636c4da2e44f1253cd927cfb20ada4374d8b3a4e425416e84900cc/tqdm-4.64.1-py2.py3-none-any.whl (78 kB) Collecting requests Downloading https://mirrors.aliyun.com/pypi/packages/2d/61/08076519c80041bc0ffa1a8af0cbd3bf3e2b62af10435d269a9d0f40564d/requests-2.27.1-py2.py3-none-any.whl (63 kB) ERROR: Could not find a version that satisfies the requirement huggingface-hub<1.0,>=0.14.1 (from transformers[dev,onnxruntime]) (from versions: 0.0.1, 0.0.2, 0.0.3rc1, 0.0.3rc2, 0.0.5, 0.0.6, 0.0.7, 0.0.8, 0.0.9, 0.0.10, 0.0.11, 0.0.12, 0.0.13, 0.0.14, 0.0.15, 0.0.16, 0.0.17, 0.0.18, 0.0.19, 0.1.0, 0.1.1, 0.1.2, 0.2.0, 0.2.1, 0.4.0) ERROR: No matching distribution found for huggingface-hub<1.0,>=0.14.1 The command 'sh -lc python3 -m pip install --no-cache-dir -e ./transformers[dev,onnxruntime]' returned a non-zero code: 1 ``` Could you tell me the right ` huggingface-hub` version? thanks a lot
07-11-2023 04:10:21
07-11-2023 04:10:21
The problem is that the (pip) `index-url` being used is `https://mirrors.aliyun.com/pypi/simple`, where pip can't find a higher version of `huggingface_hub`. From the provided log, the highest version it has is `0.4.0`. It should work fine if you are using `https://pypi.org/simple` as the pip index.<|||||>> The problem is that the (pip) `index-url` being used is `https://mirrors.aliyun.com/pypi/simple`, where pip can't find a higher version of `huggingface_hub`. From the provided log, the highest version it has is `0.4.0`. > > It should work fine if you are using `https://pypi.org/simple` as the pip index. Thanks for your reply. I found that the reason is the Python version: the command `apt install python3` only installs Python 3.6, but huggingface-hub seems to need Python >= 3.7, so I changed it to `apt install python3.8` at [line19](https://github.com/huggingface/transformers/blob/main/docker/transformers-all-latest-gpu/Dockerfile#L19), and it worked. However, when installing `kenlm`, a compilation error happened; I think `kenlm` should be installed via a build rather than pip. Finally, I pulled this image from your Docker Hub since building it myself is really slow.<|||||>Yeah nice! If the pull from our Docker Hub works, it's definitely easier :-) BTW, may I ask why you need to use our docker image? We only use it for our CI testing.<|||||>> Yeah nice! If the pull from our Docker Hub works, it's definitely easier :-) > > BTW, may I ask why you need to use our docker image? We only use it for our CI testing. Because I need to install the newest PyTorch and TensorFlow, the easiest way is Docker; since TensorFlow 2.x has package compatibility problems, I found your Dockerfile.
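A quick sanity check for the Python-version explanation above (a diagnostic sketch only; the >= 3.7 requirement is taken from the discussion, not verified against every huggingface_hub release):

```python
import sys

# If the image's default python3 resolves to 3.6, recent huggingface_hub releases are not
# installable from any index, which matches the "highest version 0.4.0" symptom in the log above.
print(sys.version.split()[0])
if sys.version_info < (3, 7):
    print("Python is too old for recent huggingface_hub releases - rebuild the image with python3.8")
```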
transformers
24,739
open
Sorting FAISS scores for similarity search
### System Info - `transformers` version: 4.29.2 - Platform: Linux-5.10.157-139.675.amzn2.x86_64-x86_64-with-glibc2.26 - Python version: 3.9.15 - Huggingface_hub version: 0.15.1 - Safetensors version: not installed - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No - Using distributed or parallel set-up in script?: No ### Who can help? @sgugger, @stevhliu, @MKhalusova ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ``` import pandas as pd samples_df = pd.DataFrame.from_dict(samples) samples_df["scores"] = scores samples_df.sort_values("scores", ascending=False, inplace=True) ``` Please refer [https://huggingface.co/learn/nlp-course/chapter5/6?fw=pt#using-faiss-for-efficient-similarity-search](https://huggingface.co/learn/nlp-course/chapter5/6?fw=pt#using-faiss-for-efficient-similarity-search) ### Expected behavior I think the sorting of scores should be in ascending and not descending; because the default index is IndexFlatL2 which is L2/Euclidean distance. It will be great if these two changes are made to relevant documentation 1. Change `ascending=False` to `ascending=True` 2. There can be a reference that the default scores returned is the Euclidean distances (by digging sourcecode, I understood it's IndexFlatL2), but it will be easy to include this in documentation **Refer:** [https://discuss.huggingface.co/t/chapter-5-questions/11744/58?u=namburisrinath](https://discuss.huggingface.co/t/chapter-5-questions/11744/58?u=namburisrinath) **P.S:** I am sorry if this is the correct place to create the bug as the documentation needs to be changed accordingly. People consume Huggingface documentation a lot, so it needs to be fool-proof, so please correct if I am wrong!
07-10-2023 23:09:24
07-10-2023 23:09:24
Hi! Thank you for opening the issue. I have tagged someone from the team on the course chapter discussion page. Let's wait for a reply first 🤗.
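For reference, a sketch of the corrected snippet proposed in the issue above, with dummy stand-ins for the `samples` and `scores` that `get_nearest_examples` returns in the course chapter. Since the default FAISS index is `IndexFlatL2`, the scores are L2 distances and smaller means closer, so the sort is ascending:

```python
import pandas as pd

# Dummy stand-ins for the outputs of embeddings_dataset.get_nearest_examples(...).
samples = {"text": ["doc a", "doc b", "doc c"]}
scores = [0.7, 0.2, 1.3]  # L2 distances from IndexFlatL2: lower = closer

samples_df = pd.DataFrame.from_dict(samples)
samples_df["scores"] = scores
samples_df.sort_values("scores", ascending=True, inplace=True)  # ascending, not descending
print(samples_df)
```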
transformers
24,738
closed
Add missing attention mask in ASTFeatureExtractor
The ASTFeatureExtractor has a return_attention_mask attribute, but even if set to true, the feature extractor does not return it, because the code to do so is missing. I added the code which checks the lengths of the raw audio arrays before computing the spectrograms and then creates the attention mask for each element in the batch accordingly. @sanchit-gandhi - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests?
07-10-2023 21:36:05
07-10-2023 21:36:05
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24738). All of your documentation changes will be reflected on that endpoint.<|||||>Hmm I believe this was omitted because Audio Spectrogram Transformer doesn't take a padding mask as input: https://github.com/huggingface/transformers/blob/cfc8a05305b4c89c5393766161d89ef24e72fdfa/src/transformers/models/audio_spectrogram_transformer/modeling_audio_spectrogram_transformer.py#L567 (only a head mask which is for masking out entire heads, not elements of the input sequence) The way AST works is by padding / truncating all input audio samples to a fixed length, then computing the log-mel spectrogram on these adjusted inputs. Since we pad with zeros (silence), the model learns padding implicitly from the input features, and so doesn't require an attention mask. The same is done with the Whisper model, which also works directly on log-mel spectrograms: https://huggingface.co/blog/fine-tune-whisper#load-whisperfeatureextractor So I don't think it's necessary to return the attention mask in the feature extractor, since we'll just discard this immediately anyways. Probably instead we can remove the attribute `return_attention_mask` from the init? Also cc @NielsRogge <|||||>Thank you @sanchit-gandhi, that makes sense! The way I encountered the issue is related to your explanation: I am using the ASTFeatureExtractor for another model, where it would be nice to have the attention mask, therefore I got confused that I didn't obtain the mask even though setting return_attention_mask=True in the ASTFeatureExtractor's init method. I think it would be nice to either add this functionality such that one could use it with attention masks (as the current documentation promises and as I tried to use it), or, as you said, remove it from the init. What do you think @NielsRogge ?<|||||>Hey @lu-wo! Interesting use case! Unfortunately we can't maintain all classes in `transformers` to be compatible with every other combination of model, i.e. the `ASTFeatureExtractor` is designed to work with the `ASTModel`, but we can't guarantee that it works for every other model that takes a log-mel spectrogram as input. To do so would be a large maintenance burden, since we'd have to check that every combination works, and would probably complicate the code by introducing additional complexity. I would suggest trying one of two things here: 1. Use a similar log-mel feature extractor that does return an attention mask. The Whisper feature extractor also computes log-mel spectrograms, but we require the attention mask if we use SpecAug during fine-tuning, so it can return the attention mask if required. Note that you may have to change the spectrogram hyper parameters to get parity with the AST feature extractor 2. Copy the feature extractor code locally, and make the changes you require so that your use case works. If you subsequently train a new model is added to the `transformers` library, then the feature extractor we add for this model will use an attention mask since the model requires one! Let me know if the above two don't work - happy to brainstorm some more solutions with you!<|||||>Thanks @sanchit-gandhi , I guess I can adapt the code for my purposes :) <|||||>Thanks for understanding and looking forward to your next PR @lu-wo! 🤗
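For anyone adapting the extractor locally as suggested above, a minimal sketch of the general idea: derive a per-example mask from the raw audio lengths before padding. The function and shapes are illustrative assumptions and do not reproduce the actual `ASTFeatureExtractor` internals.

```python
import numpy as np

# Sketch: 1 for real samples, 0 for padding, based on each raw waveform's length.
def build_attention_mask(raw_speech: list, max_length: int) -> np.ndarray:
    mask = np.zeros((len(raw_speech), max_length), dtype=np.int32)
    for i, waveform in enumerate(raw_speech):
        mask[i, : min(len(waveform), max_length)] = 1
    return mask

print(build_attention_mask([np.zeros(3), np.zeros(8)], max_length=5))
```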
transformers
24,737
closed
Falcon Models saved with `save_pretrained` no longer get saved with python files
### System Info - `transformers` version: 4.30.2 - Platform: Linux-5.15.0-75-generic-x86_64-with-glibc2.35 - Python version: 3.10.3 - Huggingface_hub version: 0.16.4 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: No or N/A - Using distributed or parallel set-up in script?: No or N/A ### Who can help? @ArthurZucker @younesbelkada ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction When saving `tiiuae/falcon` models using ``` from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b-instruct") model.save_pretrained("/path/to/save") ``` the python files `configuration_RW.py` and `modelling_RW.py` are no longer saved. Loading the model with `from_pretrained(...)` results in the following error: ``` >>> model = AutoModelForCausalLM.from_pretrained("/data/test-models/falcon-40b-instruct", trust_remote_code=True) Could not locate the configuration_RW.py inside /data/test-models/falcon-40b-instruct. Traceback (most recent call last): File "<stdin>", line 1, in <module> File "/home/recoverx/.cache/pypoetry/virtualenvs/test-tgi-yWaeKVH5-py3.10/lib/python3.10/site-packages/transformers/models/auto/auto_factory.py", line 456, in from_pretrained config, kwargs = AutoConfig.from_pretrained( File "/home/recoverx/.cache/pypoetry/virtualenvs/test-tgi-yWaeKVH5-py3.10/lib/python3.10/site-packages/transformers/models/auto/configuration_auto.py", line 953, in from_pretrained config_class = get_class_from_dynamic_module(class_ref, pretrained_model_name_or_path, **kwargs) File "/home/recoverx/.cache/pypoetry/virtualenvs/test-tgi-yWaeKVH5-py3.10/lib/python3.10/site-packages/transformers/dynamic_module_utils.py", line 431, in get_class_from_dynamic_module final_module = get_cached_module_file( File "/home/recoverx/.cache/pypoetry/virtualenvs/test-tgi-yWaeKVH5-py3.10/lib/python3.10/site-packages/transformers/dynamic_module_utils.py", line 247, in get_cached_module_file resolved_module_file = cached_file( File "/home/recoverx/.cache/pypoetry/virtualenvs/test-tgi-yWaeKVH5-py3.10/lib/python3.10/site-packages/transformers/utils/hub.py", line 388, in cached_file raise EnvironmentError( OSError: /data/test-models/falcon-40b-instruct does not appear to have a file named configuration_RW.py. Checkout 'https://huggingface.co//data/test-models/falcon-40b-instruct/None' for available files. ``` ### Expected behavior To be able to load the model with `from_pretrained` after saving it with `save_pretrained` either by having the python files saved or pulling them from the hub. With transformers version = `4.27.4` using `save_pretrained()` does actually save the python files and the saved model can be loaded right away
07-10-2023 20:24:52
07-10-2023 20:24:52
Hi @sgugger I checked the code snippet and indeed only config and model bin files are saved. (tested on main branch of July 10th) I am more than happy to help and learn, but I would like to know if this behavior is expected before taking action. (and if you want to fix directly, ok for me) ``` total 27038084 -rw-r--r-- 1 root root 773 Jul 12 12:41 config.json -rw-r--r-- 1 root root 116 Jul 12 12:41 generation_config.json -rw-r--r-- 1 root root 9962615667 Jul 12 12:41 pytorch_model-00001-of-00003.bin -rw-r--r-- 1 root root 9939388767 Jul 12 12:42 pytorch_model-00002-of-00003.bin -rw-r--r-- 1 root root 7784945757 Jul 12 12:42 pytorch_model-00003-of-00003.bin -rw-r--r-- 1 root root 16924 Jul 12 12:42 pytorch_model.bin.index.json ```<|||||>This is expected as the config will keep references to where the code lives, you can see it has: ``` "auto_map": { "AutoConfig": "tiiuae/falcon-7b-instruct--configuration_RW.RWConfig", "AutoModelForCausalLM": "tiiuae/falcon-7b-instruct--modelling_RW.RWForCausalLM" }, ``` Saving then reloading with `from_pretrained` from the local dir works without issue on main. I don't know what exact code sample caused the issue but on my side: ```py from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b-instruct", trust_remote_code=True) model.save_pretrained("/path/to/save") new_model = AutoModelForCausalLM.from_pretrained("/path/to/save", trust_remote_code=True) ``` works.<|||||>Hey @sgugger apologies for the misunderstanding you're right I was mistaken and over simplified the code snippet causing the issue; after taking another look I've realized that the issue is how I've downloaded the model. Rather than using ``` AutoModelForCausalLM.from_pretrained("tiiuae/falcon-7b-instruct", trust_remote_code=True) ``` I first download the model locally with ``` git lfs install git clone git@hf.co:tiiuae/falcon-7b-instruct ``` if I inspect `config.json` I see this: ``` "auto_map": {   "AutoConfig": "configuration_RW.RWConfig",   "AutoModelForCausalLM": "modelling_RW.RWForCausalLM" }, ``` which matches what is in the hub here: https://huggingface.co/tiiuae/falcon-7b-instruct/blob/main/config.json. Then when running ``` from transformers import AutoModelForCausalLM model = AutoModelForCausalLM.from_pretrained("/local/falcon-7b-instruct", trust_remote_code=True) model.save_pretrained("/path/to/save") new_model = AutoModelForCausalLM.from_pretrained("/path/to/save", trust_remote_code=True) ``` I get the error above. It may be that this is the expected behavior but it works fine with version `4.27.4` as in that case `save_pretrained()` actually copies over `configuration_RW.py` and `modelling_RW.py`. My assumption is that this is issue is due to `RWConfig` and `RWModel` being defined within the model repo as opposed to within the transformers library but I may be mistaken.<|||||>That I can reproduce. 
This should be fixed by the PR mentioned above.<|||||>That's awesome thanks, just a question or two if that's alright so I can see if I understand what's going on here: ``` if os.path.isdir(pretrained_model_name_or_path): model_class.register_for_auto_class(cls.__name__) else: cls._model_mapping.register(config.__class__, model_class, exist_ok=True) ``` in case we are loading from a local trust remote code repo `model_class.register_for_auto_class()` sets `model_class._auto_class = cls.__name__` which I believe in the case of falcon results in `RWForCausalLM._auto_class = "RWForCausalLM"` Then in the call to `save_pretrained()` this block: ``` if self._auto_class is not None: custom_object_save(self, save_directory, config=self.config) ``` get's executed which results in the modelling files being saved along with the the weights and config files. Is that correct? Edit: one other question is there a reason why this `cls._model_mapping.register(config.__class__, model_class, exist_ok=True)` is used in stead of `cls.register(config.__class__, model_class, exist_ok=True)`?<|||||>That's completely correct! As for the second question, I haven't deep-dived to make sure the two do exactly the same thing, but it's probably the same yes. This line is only there so that `pipeline` does not complain that the model doesn't belong to the corresponding auto class when using remote code.<|||||>Thanks again for all your help really appreciate it! Tested this with your PR and works on my end for local falcon models! Also `cls.register()` just calls `cls._model_mapping.register()` with an additional check ``` @classmethod def register(cls, config_class, model_class, exist_ok=False): if hasattr(model_class, "config_class") and model_class.config_class != config_class: raise ValueError( "The model class you are passing has a `config_class` attribute that is not consistent with the " f"config class you passed (model has {model_class.config_class} and you passed {config_class}. Fix " "one of those so they match!" ) cls._model_mapping.register(config_class, model_class, exist_ok=exist_ok) ``` Switching that line out to `cls.register` doesn't cause the above value error at least when loading falcon with `from_pretrained` but not sure if there are cases where it would be benificial to not have the restriction that `model_class.config_class == config_class`<|||||>I think it would be fine if we add an `and not exist_ok` in the test. Would you like to make a PR with those changes?<|||||>Yeah would love to just want to make sure I understand the rationale behind adding `and not exist_ok`. Correct me if I'm wrong but I think the reason is that if `exists_ok = True` we will overwrite `_model_mapping` anyway so we don't want to enforce the restriction that `model_class.config_class == config_class`; is that the right idea?<|||||>Oh I completely misread your comment, thanks for asking a clarification. The test should be left as is, it is a consistency check, not an exist ok check. We can do the switch without adding anything.<|||||>Ok makes sense; more than happy to still make a PR for that switch if it would be helpful<|||||>Please go ahead!<|||||>PR is linked above! One of us will have to rebase/fix conflicts as I've made these changes on top of main which hasn't incorporated your PR yet
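A small way to check what the thread above describes is to inspect the `auto_map` entry of a saved checkpoint: a hub-style reference such as `tiiuae/falcon-7b-instruct--configuration_RW.RWConfig` resolves the code remotely, while a bare `configuration_RW.RWConfig` expects the `.py` files to sit next to `config.json`. The path below is a placeholder.

```python
import json

# Placeholder path to a directory produced by save_pretrained().
with open("/path/to/save/config.json") as f:
    config = json.load(f)

# Hub-style entries -> remote code; bare module names -> local configuration_RW.py / modelling_RW.py required.
print(config.get("auto_map"))
```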
transformers
24,736
closed
Fix typo in LocalAgent
# What does this PR do? This PR fixes a typo in LocalAgent. Crash Log: ``` Traceback (most recent call last): File "/gnu/git/hf-agent/./agent.py", line 18, in <module> agent.run(prompt) File "/gnu/git/hf-agent/venv/lib/python3.11/site-packages/transformers/tools/agents.py", line 335, in run result = self.generate_one(prompt, stop=["Task:"]) ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^ File "/gnu/git/hf-agent/venv/lib/python3.11/site-packages/transformers/tools/agents.py", line 731, in generate_one encoded_inputs = self.tokenizer(prompt, return_tensors="pt").to(self._model_device) ^^^^^^^^^^^^^^^^^^ File "/gnu/git/hf-agent/venv/lib/python3.11/site-packages/transformers/tools/agents.py", line 727, in _model_device for param in self.mode.parameters(): ^^^^^^^^^ AttributeError: 'LocalAgent' object has no attribute 'mode'. Did you mean: 'model'? ``` Code that triggered the above crash ``` #!/usr/bin/env python3 import torch from transformers import LocalAgent model = "bigcode/tiny_starcoder_py" agent = LocalAgent.from_pretrained(model, torch_dtype=torch.bfloat16) text = "Sally sold sea shells down by the seashore." prompt = "Summarize the text given in the variable `text` and read it out loud." agent.run(prompt, text=text) ``` <!-- Congratulations! You've made it this far! You're not quite done yet though. Once merged, your PR is going to appear in the release notes with the title you set, so make sure it's a great title that fully reflects the extent of your awesome contribution. Then, please replace this with a description of the change and which issue is fixed (if applicable). Please also include relevant motivation and context. List any dependencies (if any) that are required for this change. Once you're done, someone will review your PR shortly (see the section "Who can review?" below to tag some potential reviewers). They may suggest changes to make the code even better. If no one reviewed your PR after a week has passed, don't hesitate to post a new comment @-mentioning the same persons---sometimes notifications get lost. --> ## Before submitting - [X] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [ ] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. @sgugger <!-- Your PR will be replied to more quickly if you can figure out the right person to tag with @ If you know how to use git blame, that is the easiest way, otherwise, here is a rough guide of **who to tag**. Please tag fewer than 3 people. 
Models: - text models: @ArthurZucker and @younesbelkada - vision models: @amyeroberts - speech models: @sanchit-gandhi - graph models: @clefourrier Library: - flax: @sanchit-gandhi - generate: @gante - pipelines: @Narsil - tensorflow: @gante and @Rocketknight1 - tokenizers: @ArthurZucker - trainer: @sgugger Integrations: - deepspeed: HF Trainer/Accelerate: @pacman100 - ray/raytune: @richardliaw, @amogkam Documentation: @sgugger, @stevhliu and @MKhalusova HF projects: - accelerate: [different repo](https://github.com/huggingface/accelerate) - datasets: [different repo](https://github.com/huggingface/datasets) - diffusers: [different repo](https://github.com/huggingface/diffusers) - rust tokenizers: [different repo](https://github.com/huggingface/tokenizers) Maintained examples (not research project or legacy): - Flax: @sanchit-gandhi - PyTorch: @sgugger - TensorFlow: @Rocketknight1 -->
07-10-2023 20:11:12
07-10-2023 20:11:12
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24736). All of your documentation changes will be reflected on that endpoint.
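For clarity, a sketch of what the one-character fix amounts to, reconstructed only from the traceback in the description; the surrounding class is an illustrative stand-in, not the actual `agents.py` source.

```python
import torch

class LocalAgentSketch:
    """Illustrative stand-in for transformers.tools.agents.LocalAgent."""

    def __init__(self, model: torch.nn.Module):
        self.model = model

    @property
    def _model_device(self) -> torch.device:
        # was: for param in self.mode.parameters()  -> AttributeError: no attribute 'mode'
        for param in self.model.parameters():
            return param.device

print(LocalAgentSketch(torch.nn.Linear(2, 2))._model_device)
```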
transformers
24,735
open
Distil* hanging on torch.distributed.barrier()
### System Info - `transformers` version: 4.31.0.dev0 - Platform: Linux-5.14.21-150400.24.55-default-x86_64-with-glibc2.31 - Python version: 3.10.10 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 1.13.0+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: Yes ### Who can help? @VictorSanh @sgugger ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction I'm trying to run [Distil*](https://github.com/huggingface/transformers/tree/main/examples/research_projects/distillation) project with a custom dataset. After the preprocessing steps, I enter the following command to start the training (single-node multi-GPU): ``` CUDA_VISIBLE_DEVICES=0,1,2 N_GPU_NODE=3 N_NODES=1 NODE_RANK=0 python -m torch.distributed.launch \ --nproc_per_node=3 \ train.py \ --force \ --dump_path serialization_dir/my_first_training \ --data_file ./data/binarized_text.roberta-base.pickle \ --student_type roberta \ --student_config ./training_configs/distilroberta-base.json \ --student_pretrained_weights ~/higo/distilbert/serialization_dir/tf_roberta_048131723.pth \ --teacher_type roberta \ --teacher_name roberta-base \ --mlm \ --temperature 2.0 \ --alpha_ce 5.0 \ --alpha_mlm 2.0 \ --alpha_clm 0.0 \ --alpha_mse 0.0 \ --alpha_cos 1.0 \ --token_counts ./data/token_counts.binarized_text.roberta-base.pickle \ --freeze_pos_embs \ --freeze_token_type_embds \ --n_epoch 4 \ --batch_size 8 \ --gradient_accumulation_steps 256 \ --learning_rate 2e-4 \ --n_gpu 3 \ --seed 42 ``` When the script arrives [here](https://github.com/huggingface/transformers/blob/a074a5d34d6411fb00e83a2ed30acf23d8c976b5/examples/research_projects/distillation/distiller.py#L343), it get stuck, the train does not start and after a given span of time, I got a timeout error. I tried to set higher timeout values (up to 3 hours), with no result. At this given point, my `nvidia-smi` is shown like this: ``` +---------------------------------------------------------------------------------------+ | NVIDIA-SMI 530.30.02 Driver Version: 530.30.02 CUDA Version: 12.1 | |-----------------------------------------+----------------------+----------------------+ | GPU Name Persistence-M| Bus-Id Disp.A | Volatile Uncorr. ECC | | Fan Temp Perf Pwr:Usage/Cap| Memory-Usage | GPU-Util Compute M. | | | | MIG M. 
| |=========================================+======================+======================| | 0 NVIDIA A100 80GB PCIe Off| 00000000:4F:00.0 Off | 0 | | N/A 34C P0 71W / 300W| 2479MiB / 81920MiB | 100% Default | | | | Disabled | +-----------------------------------------+----------------------+----------------------+ | 1 NVIDIA A100 80GB PCIe Off| 00000000:52:00.0 Off | 0 | | N/A 34C P0 69W / 300W| 1851MiB / 81920MiB | 100% Default | | | | Disabled | +-----------------------------------------+----------------------+----------------------+ | 2 NVIDIA A100 80GB PCIe Off| 00000000:CE:00.0 Off | 0 | | N/A 36C P0 68W / 300W| 1827MiB / 81920MiB | 100% Default | | | | Disabled | +-----------------------------------------+----------------------+----------------------+ | 3 NVIDIA A100 80GB PCIe Off| 00000000:D1:00.0 Off | 0 | | N/A 42C P0 66W / 300W| 63151MiB / 81920MiB | 0% Default | | | | Disabled | +-----------------------------------------+----------------------+----------------------+ +---------------------------------------------------------------------------------------+ | Processes: | | GPU GI CI PID Type Process name GPU Memory | | ID ID Usage | |=======================================================================================| | 0 N/A N/A 69824 C .../home/u021274/higo/myenv/bin/python 2476MiB | | 1 N/A N/A 69825 C .../home/u021274/higo/myenv/bin/python 1848MiB | | 2 N/A N/A 69826 C .../home/u021274/higo/myenv/bin/python 1824MiB | | 3 N/A N/A 97463 C python3 63148MiB | +---------------------------------------------------------------------------------------+ ``` (Process at GPU 3 is from another researcher) To me, the little amount of allocated memory seems odd. I honestly don't have a clue of what can be happening. Checked some other threads, but nothing helped to make things clear. ### Expected behavior Start of single-node, multi-GPU distributed training.
07-10-2023 20:04:19
07-10-2023 20:04:19
This is probably a setup error in the environment, as it means that one of the processes does not properly ping the others at the barrier line, and then it hangs forever.<|||||>Is there a way to solve this setup error?<|||||>I have the same problem.
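One way to narrow down a hang like this is to run a standalone barrier test under the same launcher and GPU visibility as the failing job, outside of `train.py`. This is only a diagnostic sketch; if it also hangs, the problem is in the process-group setup rather than in the distillation code.

```python
# barrier_check.py - minimal distributed sanity check (diagnostic sketch).
# Launch with the same torch.distributed.launch / torchrun invocation as the failing run.
import os

import torch
import torch.distributed as dist

dist.init_process_group(backend="nccl")
local_rank = int(os.environ.get("LOCAL_RANK", 0))
torch.cuda.set_device(local_rank)
print(f"rank {dist.get_rank()}/{dist.get_world_size()} reached the barrier", flush=True)
dist.barrier()
print(f"rank {dist.get_rank()} passed the barrier", flush=True)
dist.destroy_process_group()
```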
transformers
24,734
closed
bug: eval_accumulation_steps can lead to incorrect metrics
### System Info - `transformers` version: 4.31.0.dev0 - Platform: Linux-5.15.0-76-generic-x86_64-with-glibc2.17 - Python version: 3.8.16 - Huggingface_hub version: 0.14.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): 2.11.1 (True) - Flax version (CPU?/GPU?/TPU?): 0.5.3 (cpu) - Jax version: 0.3.6 - JaxLib version: 0.3.5 - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? Hey @sgugger, I'm tagging you since this has to do with the trainer. ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Using the `run_qa.py` script in the `examples/pytorch/question-answering/` folder ```bash python run_qa.py \ --model_name_or_path "sjrhuschlee/flan-t5-base-squad2" \ --dataset_name squad_v2 \ --output_dir "tmp/eval_squad_v2/" \ --version_2_with_negative True \ --max_seq_length 512 \ --doc_stride 128 \ --do_eval \ --per_device_eval_batch_size 24 \ --tf32 True \ --dataloader_num_workers 6 \ --preprocessing_num_workers 6 \ --bf16_full_eval \ --eval_accumulation_steps 2 \ --overwrite_output_dir False ``` I found that the calculated metrics when using `eval_accumulation_steps` is not always correct. When not using `eval_accumulation_steps` with the above script I find that I get the expected metrics. However, I found that I needed to use `eval_accumulation_steps` for evaluation of the `flan-t5` models with the above parameters on my system otherwise the memory usage on the GPU would fluctuate from 4 - 8GB which could cause an OOM. I believe I found the cause for the inconsistency in the metrics. Specifically this line https://github.com/huggingface/transformers/blob/a074a5d34d6411fb00e83a2ed30acf23d8c976b5/src/transformers/trainer.py#L3150 does not cover the edge case where the total number of batches in the evaluation is not exactly divisible by `eval_accumulation_steps`. For example, if `eval_accumulation_steps = 2` and the total number of batches is 613, then only the last batch is used when calculating `all_preds`. I was able to partially fix this problem by adding a new variable called `total_steps` and updating the if statement ```python logger.info(f"***** Running {description} *****") if has_length(dataloader): total_steps = len(dataloader) logger.info(f" Num examples = {self.num_examples(dataloader)}") else: total_steps = None logger.info(" Num examples: Unknown") ... if args.eval_accumulation_steps is not None and ( (step + 1) % args.eval_accumulation_steps == 0 or (step + 1) == total_steps ): ``` However, this will still be a problem for dataloaders that don't have a defined length. ### Expected behavior Using `eval_accumulation_steps` should work in every case even when the number of batches is not divisible by `eval_accumulation_steps`.
07-10-2023 17:23:03
07-10-2023 17:23:03
Thanks for the report! I'll look into a solution for this today<|||||>@sjrl could you quickly verify that installing `transformers` via `pip install git+https://github.com/huggingface/transformers@fix-eval-accum-steps` solves this for you? Thanks!<|||||>Hey @muellerzr thanks for the quick fix! And my apologies I actually can't seem to reproduce the error on my end, but I did check that your change also works.
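The condition proposed in the issue body can be isolated into a tiny helper to make the edge case explicit. This is a sketch of the idea only; the helper name is made up and it is not the code that landed in the linked branch.

```python
from typing import Optional

def should_flush(step: int, eval_accumulation_steps: Optional[int], total_steps: Optional[int] = None) -> bool:
    """Flush accumulated host tensors every `eval_accumulation_steps` batches, and also on the
    final batch when the dataloader length is known, so a trailing partial group is not dropped."""
    if eval_accumulation_steps is None:
        return False
    return (step + 1) % eval_accumulation_steps == 0 or (
        total_steps is not None and step + 1 == total_steps
    )

# With 613 batches and eval_accumulation_steps=2, the last step (index 612) now flushes as well.
print(should_flush(612, 2, total_steps=613))  # True
```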
transformers
24,733
closed
Docs: add `kwargs` type to fix formatting
# What does this PR do? As the title indicates: in several places our docs `kwargs` did not include its type, which made our doc builder treat it like a continuation of the previous parameter 💔 Example of currently broken docs ([this function](https://huggingface.co/docs/transformers/v4.30.0/en/main_classes/processors#transformers.ProcessorMixin.save_pretrained)): <img width="915" alt="Screenshot 2023-07-10 at 18 20 47" src="https://github.com/huggingface/transformers/assets/12240844/d027950f-7082-416a-b2da-e4f3712bd27c"> This PR is a result of CMD+F on the broken pattern, and applying the fix :)
07-10-2023 17:22:15
07-10-2023 17:22:15
_The documentation is not available anymore as the PR was closed or merged._<|||||>After the PR (as seen in the doc preview): <img width="926" alt="Screenshot 2023-07-10 at 18 55 21" src="https://github.com/huggingface/transformers/assets/12240844/9329186d-86ee-4091-a5c5-bbee77ab83f7"> <|||||>Happy to push a `doc-builder` side change if needed -- and to do the opposite of this PR: remove `Dict[str, Any]` from `kwargs` whenever it is present. Just let me know your preference :) If it's neutral for you, I think having the explicit type is friendly for Python newbies.<|||||>@amyeroberts if you don't oppose, I'll merge this PR 🤗 <|||||>@gante Go for it!
transformers
24,732
open
GPT2 model training, loss is NaN
### System Info Sometimes I get this error. while following this article about fine-tuning based on question and answers. [https://discuss.huggingface.co/t/fine-tuning-gpt2-for-question-answering/31895](url) I have just updated the code to create batches because my dataset is more extensive. Here is the code that I am using: ### Who can help? _No response_ ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction import pandas as pd import torch from torch.utils.data import DataLoader, Dataset from transformers import GPT2Tokenizer, GPT2LMHeadModel class FeedbackEssentials(Dataset): def __init__(self, qa_pairs, tokenizer, max_length): self.qa_pairs = qa_pairs self.tokenizer = tokenizer self.max_length = max_length def __len__(self): return len(self.qa_pairs) def __getitem__(self, idx): question = self.qa_pairs[idx][0] text = f"{question} {self.tokenizer.eos_token}" input_ids = self.tokenizer.encode(text, add_special_tokens=True, max_length=self.max_length, padding='max_length', truncation=True) attention_mask = [1] * len(input_ids) # Assuming all tokens should be attended to return { 'input_ids': torch.tensor(input_ids), 'attention_mask': torch.tensor(attention_mask) } train_df = pd.read_csv('/Users/irfanyaqub/Downloads/Research Dataset/train_dataset.csv') val_df = pd.read_csv('/Users/irfanyaqub/Downloads/Research Dataset/val_dataset.csv') val_df=val_df[:10] def remove_anomalies(value): return value['Coding'].replace({'\*-': ''}, regex=True) train_df['Coding'] = remove_anomalies(train_df) val_df['Coding'] = remove_anomalies(val_df) tokenizer = GPT2Tokenizer.from_pretrained('gpt2') model = GPT2LMHeadModel.from_pretrained('gpt2') tokenizer.add_special_tokens({'pad_token': '[PAD]'}) device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') def text_manipulation(train_dataset): column1_values = train_dataset['Total Marks'].values column2_values = train_dataset['Coding'].values listOfLists = [[pair[0], pair[1]] for pair in zip(column1_values, column2_values)] text = "" for feedback in listOfLists: text += f"{feedback[0]} {feedback[1]} {tokenizer.eos_token}" return text training_dataset = text_manipulation(val_df) max_length_training = max(len(tokenizer.encode(qa_pair[0], add_special_tokens=True)) for qa_pair in training_dataset) dataset_training = FeedbackEssentials(training_dataset, tokenizer, max_length_training) batch_size = 4 dataloader = DataLoader(dataset_training, batch_size=batch_size, shuffle=True) optimizer = torch.optim.AdamW(model.parameters(), lr=5e-5) scheduler = torch.optim.lr_scheduler.StepLR(optimizer, step_size=1, gamma=0.9) device = torch.device('cuda' if torch.cuda.is_available() else 'cpu') model.to(device) model.train() for epoch in range(1): for batch in dataloader: input_ids = batch['input_ids'].to(device) attention_mask = batch['attention_mask'].to(device) optimizer.zero_grad() loss = model(input_ids.to(device), labels=input_ids.to(device))[0] loss.backward() optimizer.step() scheduler.step() if epoch % 100 == 0: print(f"Epoch {epoch}, Loss {loss.item()}") model.eval() def generate_response(question): input_ids = tokenizer.encode(question, add_special_tokens=True, return_tensors='pt').to(device) sample_output = model.generate(input_ids, do_sample=True, max_length=200, top_k=20, top_p=1.0) answer = tokenizer.decode(sample_output[0], skip_special_tokens=True) sentences = 
answer.split('. ') for sentence in sentences: if question in sentence: return sentence return answer ### Expected behavior Expected output will be something like that: question = “How to delete an account” response = generate_response(question) print(f"{question}\n {response}") Answer: How to delete an account How to delete an account <|question|> This tool allows you to:
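One detail worth double-checking in the script above, shown here as a hedged sketch rather than a confirmed diagnosis: after `add_special_tokens({'pad_token': '[PAD]'})`, the GPT-2 embedding matrix does not grow on its own, so the new pad id can fall outside the original vocabulary unless the embeddings are resized:

```python
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained('gpt2')
model = GPT2LMHeadModel.from_pretrained('gpt2')

# Either reuse the existing EOS token as padding...
# tokenizer.pad_token = tokenizer.eos_token
# ...or, if a brand-new [PAD] token is added, resize the embeddings to match.
tokenizer.add_special_tokens({'pad_token': '[PAD]'})
model.resize_token_embeddings(len(tokenizer))
```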
07-10-2023 15:12:01
07-10-2023 15:12:01
Hi @irfan767, thanks for raising an issue! Questions about debugging code or custom training behaviour are best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.
transformers
24,731
closed
LLAMA for sequence classification
### System Info @ArthurZucker and @younesbelkada I am trying to perform sequence classification for text using LLAMA 7B model leveraging LORA training. I have 2 classes. Tokeniser and models are loading fine. But loss is zero after the first batch; when I check the logits, of model outputs, they are nan. ``` tokenizer = LlamaTokenizer.from_pretrained(PATH_TO_CONVERTED_WEIGHTS) if getattr(tokenizer, "pad_token_id") is None: tokenizer.pad_token_id = tokenizer.eos_token_id print("Loading LLAMA model...") model = LlamaForSequenceClassification.from_pretrained( PATH_TO_CONVERTED_WEIGHTS, num_labels=2, output_attentions=False, output_hidden_states=False, # torch_dtype=getattr(torch, 'float16'), load_in_8bit=True, low_cpu_mem_usage=True, ) peft_config = LoraConfig( task_type=TaskType.SEQ_CLS, target_modules=target_modules, inference_mode=False, r=lora_rank, lora_alpha=lora_alpha, lora_dropout=lora_dropout, modules_to_save=mod_to_save) model = get_peft_model(model, peft_config) model.print_trainable_parameters() device = torch.device('cuda') model.cuda() ``` This is my training loop. ``` for e in range(epochs): train_loss = 0 train_acc = 0 model.train() optim = optimizer(model.parameters(), lr=lr, eps=1e-8) scheduler = get_linear_schedule_with_warmup( optim, num_warmup_steps=0, num_training_steps=len(train_dataloader) * epochs ) for batch in train_dataloader: b_input_ids = batch[0].to(device) b_input_mask = batch[1].to(device) b_labels = batch[2].to(device) model.zero_grad() output = model( b_input_ids, # token_type_ids=None, attention_mask=b_input_mask, labels=b_labels ) loss = output['loss'] if torch.isnan(loss): print("!!!! Loss is NaN") break preds = output['logits'].detach().cpu().numpy() labels = b_labels.to('cpu').numpy() print(f"Loss :{loss.item()}") train_loss += loss.item() train_acc += accuracy(preds, labels) loss.backward() torch.nn.utils.clip_grad_value_(model.parameters(), 5.0) optim.step() scheduler.step() avg_train_loss = train_loss / len(train_dataloader) avg_train_acc = train_acc / len(train_dataloader) print('average training loss for epoch: {}'.format(avg_train_loss)) print('average training accuracy for epoch: {}'.format(avg_train_acc)) ``` I am getting ‘NaN’ loss after the first batch. Experiments tried (but did not work): - Tried clip grad value and clip grad norm (values from 1.0 to 5.0) ```torch.nn.utils.clip_grad_value_(model.parameters(), 5.0)``` - Tried changing lr too - loading in 8bit and float16 Any help would be greatly appreciated. Thanks ### Who can help? _No response_ ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Starting seq classification training using the above code. ### Expected behavior I see the loss calculated for the first batch only. From the next batch the logits become NaN and in turn loss and everything else is NaN
07-10-2023 15:10:18
07-10-2023 15:10:18
Hi @Lathashree01, thanks for reporting. Can you try to load the model in `bfloat16`? Also, what GPU hardware are you using? <|||||>Hi @younesbelkada, I am using a Quadro RTX 6000 node with 8 GPUs of 24GB memory. When I run using bfloat16, I am getting the below error: `TypeError: Got unsupported ScalarType BFloat16` @ preds = output['logits'].detach().cpu().numpy() <|||||>Can you replace the lines that are causing that error with: ```python preds = output['logits'].detach().cpu().float().numpy() labels = b_labels.to('cpu').float().numpy() ``` From what I can tell they are only used to compute the accuracy, so it should be fine. Also, can you share your bitsandbytes version?<|||||>Hi @younesbelkada, I tried removing the above lines and ran with bfloat16; I see loss values normally now, so hopefully everything works as expected. Thank you so much. I was stuck on this and was trying out so many other things. Also, I changed the accuracy calculation to: ``` preds = output['logits'].detach().cpu().to(torch.float16) labels = b_labels.to('cpu').numpy() ``` My bitsandbytes version is bitsandbytes 0.39.0. However, I do see some errors when I run `python -m bitsandbytes`. UserWarning: /home/anaconda3/envs/finetuneenv did not contain ['libcudart.so', 'libcudart.so.11.0', 'libcudart.so.12.0'] as expected! Searching further paths... .. things related to posixpath .... raise RuntimeError('Something when wrong when trying to find file. Maybe you do not have a linux system?') RuntimeError: Something when wrong when trying to find file. Maybe you do not have a linux system?<|||||>Thanks! Does training work with PEFT + int8 as shown in the script you shared? I.e., do you get that error only if you do `python -m bitsandbytes`? Also, there should be no need to call `model.cuda()` after you have quantized the model<|||||>No, the training does not work with int8; the logits and loss go to NaN. The above-mentioned bitsandbytes error appears when I start training (even in bfloat16) and when I do `python -m bitsandbytes`. Since I am not using any bnb components and I am also not loading the model in int8, I ignored the error while training with bfloat16. Is that fine? Please let me know what I can do if that's a problem. <|||||>I see now, thanks! For the bnb issue, your CUDA + bnb installation is probably broken; you can post the issue on the bitsandbytes repository, stating what hardware and operating system you are using. Regarding your hotfix, which is to fine-tune in bf16, I think it is fine to do so; bfloat16 training is recommended over float16 training<|||||>> Regarding your hotfix, which is to fine-tune in bf16, I think it is fine to do so; bfloat16 training is recommended over float16 training Oh, that's a relief. Thank you, I will check on the bitsandbytes error.<|||||>I have the same errors.<|||||>Your email has been received.<|||||>> I have the same errors. Mine got resolved when I trained with bf16. If you could please elaborate on where your error is, or post your error trace, it will be helpful for the team or others to provide suggestions.
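For reference, a rough sketch of the configuration the thread converges on, loading the classifier in `bfloat16` instead of int8. The LoRA hyperparameters are placeholders, and `PATH_TO_CONVERTED_WEIGHTS` mirrors the variable used in the issue body:

```python
import torch
from transformers import LlamaForSequenceClassification, LlamaTokenizer
from peft import LoraConfig, TaskType, get_peft_model

PATH_TO_CONVERTED_WEIGHTS = "path/to/llama-7b"  # placeholder, as in the issue body

tokenizer = LlamaTokenizer.from_pretrained(PATH_TO_CONVERTED_WEIGHTS)
if tokenizer.pad_token_id is None:
    tokenizer.pad_token_id = tokenizer.eos_token_id

# bfloat16 instead of load_in_8bit, as suggested in the comments above.
model = LlamaForSequenceClassification.from_pretrained(
    PATH_TO_CONVERTED_WEIGHTS,
    num_labels=2,
    torch_dtype=torch.bfloat16,
)

peft_config = LoraConfig(
    task_type=TaskType.SEQ_CLS,
    r=8,                # placeholder rank
    lora_alpha=16,      # placeholder alpha
    lora_dropout=0.05,  # placeholder dropout
)
model = get_peft_model(model, peft_config)
model.print_trainable_parameters()
```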
transformers
24,730
closed
Docs: change some `input_ids` doc reference from `BertTokenizer` to `AutoTokenizer`
# What does this PR do? As the title indicates. We are doing it in most places, but there were a few places with the old pattern. (detected it as part of https://github.com/huggingface/transformers/issues/24575)
07-10-2023 15:06:47
07-10-2023 15:06:47
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,729
closed
Docs: Update logit processors __call__ docs
# What does this PR do? PR done as part of https://github.com/huggingface/transformers/issues/24575 This PR adds the previously nonexistent `__call__` method docs for the logit processors (before this PR, only the base classes had docs).
07-10-2023 15:05:17
07-10-2023 15:05:17
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,728
closed
Saving with Trainer missing config.json and tokenizer files.
### System Info - `transformers` version: 4.31.0.dev0 - Platform: Linux-5.4.119-19.0009.28-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> - - `Accelerate` version: 0.20.3 - Platform: Linux-5.4.119-19.0009.28-x86_64-with-glibc2.35 - Python version: 3.10.6 - Numpy version: 1.22.2 - PyTorch version (GPU?): 2.0.0 (True) - PyTorch XPU available: False - System RAM: 1877.62 GB - GPU type: NVIDIA H800 - `Accelerate` default config: Not found - [2023-07-10 14:40:30,136] [INFO] [real_accelerator.py:110:get_accelerator] Setting ds_accelerator to cuda (auto detect) -------------------------------------------------- DeepSpeed C++/CUDA extension op report -------------------------------------------------- NOTE: Ops not installed will be just-in-time (JIT) compiled at runtime if needed. Op compatibility means that your system meet the required dependencies to JIT install the op. -------------------------------------------------- JIT compiled ops requires ninja ninja .................. [OKAY] -------------------------------------------------- op name ................ installed .. compatible -------------------------------------------------- [WARNING] async_io requires the dev libaio .so object and headers but these were not found. [WARNING] async_io: please install the libaio-dev package with apt [WARNING] If libaio is already installed (perhaps from source), try setting the CFLAGS and LDFLAGS environment variables to where it can be found. async_io ............... [NO] ....... [NO] cpu_adagrad ............ [NO] ....... [OKAY] cpu_adam ............... [NO] ....... [OKAY] fused_adam ............. [NO] ....... [OKAY] fused_lamb ............. [NO] ....... [OKAY] quantizer .............. [NO] ....... [OKAY] random_ltd ............. [NO] ....... [OKAY] [WARNING] sparse_attn requires a torch version >= 1.5 and < 2.0 but detected 2.0 [WARNING] using untested triton version (2.0.0), only 1.0.0 is known to be compatible sparse_attn ............ [NO] ....... [NO] spatial_inference ...... [NO] ....... [OKAY] transformer ............ [NO] ....... [OKAY] stochastic_transformer . [NO] ....... [OKAY] transformer_inference .. [NO] ....... [OKAY] -------------------------------------------------- DeepSpeed general environment info: torch install path ............... ['/usr/local/lib/python3.10/dist-packages/torch'] torch version .................... 2.0.0 deepspeed install path ........... ['/usr/local/lib/python3.10/dist-packages/deepspeed'] deepspeed info ................... 0.9.5, unknown, unknown torch cuda version ............... 12.1 torch hip version ................ None nvcc version ..................... 12.1 deepspeed wheel compiled w. ...... torch 2.0, cuda 12.1 ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction run run_clm.py i have add one Callback as following to trainfer. with deepspeed zero3 enable. 
``` class CheckPointFinishCallBack(TrainerCallback): def on_save(self, args: TrainingArguments, state: TrainerState, control: TrainerControl, **kwargs): # Save model checkpoint checkpoint_folder = f"{PREFIX_CHECKPOINT_DIR}-{state.global_step}" log_file = os.path.join(args.output_dir,"checkpoint_saved") with open(log_file,"w") as writer: writer.write(checkpoint_folder) ``` ### Expected behavior Save the model config.json file and tokenizer files.
07-10-2023 14:42:20
07-10-2023 14:42:20
@pacman100 <|||||>Hi @dumpmemory! Could you verify whether any of the 3 `self._save` calls in the code snippet below is triggered? (lines 2742, 2753, 2762) https://github.com/huggingface/transformers/blob/25411085647a4dbcbd4e7ba6f381881a3e49c33e/src/transformers/trainer.py#L2734-L2762 <|||||>> Hi @dumpmemory! > > Could you verify whether any of the 3 `self._save` calls in the code snippet below is triggered? > > (lines 2742, 2753, 2762) > > https://github.com/huggingface/transformers/blob/25411085647a4dbcbd4e7ba6f381881a3e49c33e/src/transformers/trainer.py#L2734-L2762 I can check it later. Currently my training is using ZeRO-3 and a multi-GPU setting. <|||||>It would be nice if you could check the execution flow at this place, to see if any `self._save` is triggered 🙏 or not (and why). Thank you! No worries, we can wait :-)<|||||>> It would be nice if you could check the execution flow at this place, to see if any `self._save` is triggered 🙏 or not (and why). Thank you! No worries, we can wait :-) I'm checking now. My transformers code base may not be the newest one.<|||||>I am using save_16bit_model=true<|||||>I have almost found the reason, and I will update my code base to the current main branch and test again. <|||||>After updating to the main-branch code base for accelerate and transformers, it was fixed
transformers
24,727
open
Add "save_best_only" parameter in "transformers.PushToHubCallback" class
### Feature request When utilizing Keras callbacks, we have the ability to specify when the model should be saved during training. The **transformers.PushToHubCallback()** class already incorporates similar functionality through the use of the **"save_strategy"** parameter. This parameter accepts the following values: - "no": Saving is performed at the conclusion of training. - "epoch": Saving is performed at the end of each epoch. - "steps": Saving is performed every "save_steps" interval. However, these options do not take into account accuracy (or any other specified metric) improvement. In contrast, the Keras callback provides the **"save_best_only"** parameter, which exclusively saves the model when there is an improvement in accuracy or the specified metric. The code snippet below demonstrates its usage: ``` # Define the callback callbacks = [ keras.callbacks.ModelCheckpoint( filepath="directory_name/model.keras", monitor="val_loss", save_best_only=True, ) ] # Start the training history = model.fit( train_dataset, epochs=5, validation_data=validation_dataset, callbacks=callbacks) ``` The model mentioned above will undergo training for a total of 5 epochs. However, the model will only be saved when there is an improvement in the "validation loss" metric. The **transformers.PushToHubCallback()** class must incorporate this feature as well. ### Motivation This feature is indeed quite valuable, and it is readily accessible through [Keras callbacks](https://keras.io/api/callbacks/model_checkpoint/#:~:text=save_best_only%3A%20if%20save_best_only%3DTrue%20%2C,by%20each%20new%20better%20model.). By utilizing this feature, significant processing power and bandwidth can be saved, particularly when dealing with large transformers models. It ensures that only the best-performing models, based on the specified metric (such as validation loss), are saved, resulting in more efficient storage and reduced computational resources. ### Your contribution This [source code](https://github.com/keras-team/keras/blob/v2.12.0/keras/callbacks.py) can be helpful.
07-10-2023 11:18:33
07-10-2023 11:18:33
cc @sgugger to see if we want to support the **save/push only the best model during training**. It seems the trainer currently only support load the best model at the end (with a specified metric).<|||||>The whole goal of pushing the model to the hub is to be able to resume training from a different machine if there is a problem. If we push only the best model while checkpointing, this is not going to be possible anymore. Note that the best model will be pushed at the end of the training, so you will have the correct result once the training is finished.<|||||>Thank you for your consideration and feedback on my feature request. I understand the goal of pushing the model to the hub using **Transformers.PushToHubCallback()** is to enable resuming training from a different machine if necessary. I appreciate the point you made about the potential impact on that capability if only the best model is saved and pushed during training. Keeping this in mind, would it be possible to explore a solution that balances both requirements? - Perhaps a configuration option that allows users to choose between saving and pushing only the best model, or - Saving and pushing the best model at specific intervals during training? This way, users can have the flexibility to optimize storage and computational resources while still maintaining the ability to resume training from different machines if needed. Thank you for your time and consideration. I look forward to hearing your thoughts on this matter. _**A workaround**_ _I would like to state that even if the PushToHubCallback does not incorporate this feature, there is still a workaround if we are constrained by the bandwidth._ _We should train the model without **PushToHubCallback**. In order to save the best model locally, use the Keras **save_best_only** callback as shown above. Finally, at the end of the training, we can use **model.push_to_hub()** to save the best model stored on the local machine to the Hub._ _Nonetheless, it would be better if this feature is incorporated in the **Transformers.PushToHubCallback()**._
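A rough sketch of the workaround described in the last comment, assuming a TF/Keras `transformers` model; `model`, `train_dataset`, `validation_dataset` and the repo name are placeholders assumed to exist elsewhere:

```python
import tensorflow as tf

# `model`, `train_dataset` and `validation_dataset` are assumed to be defined already.
checkpoint_cb = tf.keras.callbacks.ModelCheckpoint(
    filepath="best_model/checkpoint",
    monitor="val_loss",
    save_best_only=True,
    save_weights_only=True,
)

model.fit(
    train_dataset,
    validation_data=validation_dataset,
    epochs=5,
    callbacks=[checkpoint_cb],
)

# Restore the best weights saved locally, then push a single, best checkpoint.
model.load_weights("best_model/checkpoint")
model.push_to_hub("my-username/my-best-model")
```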
transformers
24,726
closed
[`T5`, `MT5`, `UMT5`] Add [T5, MT5, UMT5]ForSequenceClassification
# What does this PR do? This adds a sequence classification head to the PyTorch implementation of T5 and MT5, following the pattern of BartForSequenceClassification since it is also an encoder-decoder sequence classification model. I have trained and uploaded a flan-t5-base for MNLI [here](https://huggingface.co/sjrhuschlee/flan-t5-base-mnli) which has shown promising results on the dataset. I've updated the model tests to include the new model and I believe I found hopefully most of the additional imports and compatibility with the text-classification and zero-shot classification pipelines. **NOTE:** - [x] Help with failing tests - I found a number of tests are failing and I have linked it to the fact that `T5ForSequenceClassification` (and also `BartForSequenceClassification`) expect the `input_ids` and `decoder_input_ids` to have the same sequence length which they do not for the T5 tests (shown below) https://github.com/huggingface/transformers/blob/abaca9f9432a84cfaa95531de4c72334f38a42f2/tests/models/t5/test_modeling_t5.py#L104-L106 where `encoder_seq_length != decoder_seq_length` - Whereas they do have the same sequence length for the `BartModelTest` (shown below) https://github.com/huggingface/transformers/blob/abaca9f9432a84cfaa95531de4c72334f38a42f2/tests/models/bart/test_modeling_bart.py#L126-L133 - Here are the lines of code in `BartForSequenceClassification` (and the `T5` versions) that cause an error when the encoder and decoder sequence lengths are different https://github.com/huggingface/transformers/blob/caf5e369fc7b4755d9f98568cbe5e36a0898c96c/src/transformers/models/bart/modeling_bart.py#L1546-L1554 The `eos_mask` has the wrong shape to be properly cast onto the `hidden_states` since the `eos_mask` shape is linked to the encoder sequence length and the `hidden_states` shape is linked to the decoder sequence length. Would it be okay to change the T5 tests such that the decoder and encoder input_ids have the same sequence length to get the tests to pass? ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? Anyone in the community is free to review the PR once the tests have passed. Feel free to tag members/contributors who may be interested in your PR. Hey @ArthurZucker and @younesbelkada I would greatly appreciate a review on this when you have a chance.
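To illustrate the shape mismatch described above, here is a tiny standalone tensor example with made-up sizes (not the actual model code):

```python
import torch

batch, enc_len, dec_len, hidden = 2, 7, 5, 4

# eos_mask is built from the encoder input_ids, so it has the encoder length...
eos_mask = torch.zeros(batch, enc_len, dtype=torch.bool)
eos_mask[:, -1] = True

# ...while hidden_states come from the decoder and have the decoder length.
hidden_states = torch.randn(batch, dec_len, hidden)

# This raises an IndexError because the mask shape (2, 7) does not match (2, 5, 4).
sentence_representation = hidden_states[eos_mask, :]
```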
07-10-2023 07:23:29
07-10-2023 07:23:29
_The documentation is not available anymore as the PR was closed or merged._<|||||>> Quick comment, we can probably add this to UMT5 too (you don't have to if it take too much time) For sure, I'd be happy to!<|||||>Hey @sgugger thanks for the feedback! > Hi, thanks for your PR! This does not follow the pattern of BartForSequenceClassification, or any other classification model of the library: I agree! I ran into the same problem when implementing `T5ForQuestionAnswering`. I opted to follow the implementation used for `T5ForConditionalGeneration` which also does not use the `BaseModel` and instead reimplements the encoder and decoder. I can go ahead and try and use the BaseModel for the SequenceClassification model, but its probably worth doing a refactor of T5 to use the BaseModel for the other models as well (e.g. ConditionalGeneration and QuestionAnswering). What do you think? <|||||>We can't change existing models without risking massive breaking changes (users wouldn't be able to re-use their checkpoints directly, I think `from_pretrained` would still work though). But that doesn't mean we shouldn't do the right thing for new models! So if you could try it the usual way, that would be great!<|||||>And thanks for the feedback!
transformers
24,725
open
Sum loss instead of mean loss should be used if gradient accumulation step is larger than 1 when training a language model
### System Info Not applicable, because this is a design issue, not a runtime error. ### Who can help? @sgugger, @ArthurZucker and @younesbelkada ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Given gradient accumulation step 2, batch size 1, and a training set of 2 samples, where sample 1 contains 11 tokens and sample 2 contains 101 tokens, train a decoder-only model in unsupervised learning (the first token in each sample is untrainable); the gradient will then be different from training on the same dataset and model at gradient accumulation step 1, batch size 2. The reason is that `transformers` currently uses the mean loss for most models (if not all); as a result, each token in sample 1 would produce a 10 times larger gradient than each token in sample 2. ### Expected behavior Settings of accumulation step 2 / batch size 1 should produce the same gradient as settings of accumulation step 1 / batch size 2.
07-10-2023 02:59:09
07-10-2023 02:59:09
Hi @Atry Your description is correct. However, the loss logic is implemented in each model class, and therefore it cannot see multiple batches in a single model forward pass (and that's probably the main reason why we just simply use `mean`). The best and easiest way to get a correct computation is to modify the trainer class so that, given the loss from the model output, it computes back the sum of losses in a batch (by considering the sequence length, or the total number of tokens that are meaningful, i.e. not padding tokens etc.), and sends this new custom loss value to compute the gradients, then accumulates it.<|||||>Computing back the gradient would damage the precision if the gradient is in `fp16`.<|||||>An idea is to switch all models to `sum` loss and create a custom `GradientScaler` to count the number of trainable tokens.<|||||>By the way, there is another example of the issue with `mean` loss. Suppose you have batch size 33, 1 epoch, and a dataset of 100 samples; then the last iteration will have only 1 sample, and the gradient produced by the last sample is 33 times larger than the other samples'.<|||||>> switch all models to sum loss This would be a big breaking change, and would not be an option. > Computing back the gradient would damage the precision if the gradient is in fp16 I would not think it will produce a big difference if, at the end, we still use some form of mean after we accumulate (sum) all the gradients (say, divided by the total number of non-padding tokens appearing in all the batches in a gradient accumulation). When the loss is computed by sum in a batch, it actually requires specific work to get back to the usual definition of that loss (say the average non-padding token loss) when we sum over all batches. (Here I only say non-padding token. But the loss definition could get very complex depending on the tasks and the specific models.) <|||||>As studied in https://arxiv.org/abs/1711.00489, changing the batch size has the side effect of also changing the learning rate per sample (and the learning rate per token) even when the learning rate per iteration is unchanged. However, their analysis of their experimental results is nonsense. The actual explanation is that the side effect is just due to the mean loss. Sum loss would not lead to the side effect. <|||||>If you are not happy with the loss computation inside the model, you can just not pass the `labels` to the model and compute it yourself outside of the forward pass. Note that all of our examples account for gradient accumulation by dividing the final loss by the number of gradient accumulation steps. As @ydshieh mentioned, a breaking change across all models of this magnitude is not possible.<|||||>Good idea! I wonder if the `Trainer` can fix this loss issue by not passing `labels`, too.<|||||>The Trainer already divides the loss by the number of gradient accumulation steps, and there are tests in the CI to ensure that training with batch size X and with batch size X/g and g gradient accumulation steps yield the same results.<|||||>Suppose you have a dataset of two samples used in unsupervised learning against a decoder-only language model, where sample 1 contains 11 tokens and sample 2 contains 101 tokens. When training at batch size 1 without padding, the `mean` loss of sample 1 is 0.1 and the `mean` loss of sample 2 is 0.9; then mathematically, what's your expected loss when the batch size is 2?
In current `transformers` implementation: - when gradient accumulation step is 1 and batch size is 2, padding to sequence length 101, the loss would be `(0.1*10+0.9*100)/(10+100)=0.82727` - when gradient accumulation step is 2 and batch size is 1, no padding, the loss would be `(0.1+0.9)/2=0.5`. IMHO ideally the loss should be 0.82727<|||||>> when gradient accumulation step is 1 and batch size is 2, padding to sequence length 101, the loss would be (0.1*10+0.9*100)/(100*2)=0.455 where does `100*2` come from in the denominator?<|||||>I believe in `transformers` we do take care of the padding token. If you find a HF causal LM model that has a loss computation (in the model forward) that doesn't take care of the padding token, please let us know. 🙏 <|||||>You are right. I misunderstood the implementation. I just updated my previous comments. Thank you!<|||||>Thanks! As mentioned earlier: - you can either compute back the sum from the mean - but as you don't like the precision loss in fp16 if using the above way, you can choose not to pass the labels to the model forward, and compute the actual sum. But - (*) you need to modify a bit the code `to not to divide by the accumulation step 2`, but the total number of non-padding tokens seen in all the batches during that gradient accumulation - this necessary change (*) is not possible to be done in the model forward, no matter if we return `mean` or `sum` in forward pass.<|||||>I confronted the same issue. The gradient accumulation's result is much worse than using a large batch size (per device). The main reason that I assume is probably that the gradient accumulation macro-averages the loss scores, but they should be micro-averaged. I think this problem is so critical that it affects the result a lot for LMs (variable lengths across batches). Otherwise, the training result must be suboptimal.
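A small numerical sketch of the two averaging schemes discussed above, using the same toy numbers (10 and 100 trainable tokens with per-sample mean losses 0.1 and 0.9):

```python
token_counts = [10, 100]
mean_losses = [0.1, 0.9]

# Batch size 2, single step: every token contributes equally (micro-average).
micro = sum(n * l for n, l in zip(token_counts, mean_losses)) / sum(token_counts)

# Batch size 1 with 2 accumulation steps: the per-sample means are averaged (macro-average).
macro = sum(mean_losses) / len(mean_losses)

print(round(micro, 5))  # 0.82727
print(round(macro, 5))  # 0.5
```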
transformers
24,724
closed
New Version Usage Issue
### System Info - `transformers` version: 4.29.0 - Platform: Linux-3.10.0-1160.92.1.el7.x86_64-x86_64-with-glibc2.31 - Python version: 3.10.9 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction ##Here is my code. ``` import os import logging from dataclasses import dataclass, field from typing import Dict, Optional, Sequence import torch import transformers from datasets import load_dataset, load_from_disk from transformers import ( AutoModelForCausalLM, AutoTokenizer, Trainer, DataCollatorForSeq2Seq, ) IGNORE_INDEX = -100 PROMPT_DICT = { "prompt_input": ( "### 指令:\n{instruction}\n\n### 输入:\n{input}\n\n### 回答:" ), "prompt_no_input": ( "### 指令:\n{instruction}\n\n### 回答:" ), } @dataclass class TrainingArguments(transformers.TrainingArguments): model_name_or_path: Optional[str] = field(default=None, metadata={"help": "模型名称"}) cache_dir: Optional[str] = field(default=None, metadata={"help": "模型地址"}) data_path: str = field(default=None, metadata={"help": "数据地址"}) mask_input: bool = field(default=True, metadata={"help": "是否遮掉指令,只计算回答的损失"}) model_max_length: int = field(default=512, metadata={"help": "最大序列长度"}) optim: str = field(default="adamw_torch", metadata={"help": "优化器"}) @dataclass class DataCollatorForSupervisedDataset(object): """Collate examples for supervised fine-tuning.""" tokenizer: transformers.PreTrainedTokenizer def __call__(self, instances: Sequence[Dict]) -> Dict[str, torch.Tensor]: input_ids, labels = tuple([torch.tensor(instance[key]) for instance in instances] for key in ("input_ids", "labels")) input_ids = torch.nn.utils.rnn.pad_sequence( input_ids, batch_first=True, padding_value=self.tokenizer.pad_token_id ) labels = torch.nn.utils.rnn.pad_sequence(labels, batch_first=True, padding_value=IGNORE_INDEX) return dict( input_ids=input_ids, labels=labels, attention_mask=input_ids.ne(self.tokenizer.pad_token_id), ) def train(): local_rank = int(os.environ["LOCAL_RANK"]) parser = transformers.HfArgumentParser(TrainingArguments) training_args, = parser.parse_args_into_dataclasses() if local_rank == 0: print(training_args) tokenizer = AutoTokenizer.from_pretrained( training_args.model_name_or_path, cache_dir=training_args.cache_dir, model_max_length=training_args.model_max_length, padding_side="right" ) model = AutoModelForCausalLM.from_pretrained( training_args.model_name_or_path, cache_dir=training_args.cache_dir, # torch_dtype=torch.float16 ) def generate_and_tokenize(sample): prompt_input, prompt_no_input = PROMPT_DICT["prompt_input"], PROMPT_DICT["prompt_no_input"] source = prompt_input.format_map(sample) if sample.get("input", "") != "" \ else prompt_no_input.format_map(sample) target = f"\n{sample['output']}{tokenizer.eos_token}" complete = source + target # </s> 1 2 3 : a b </s> complete_tokenized = tokenizer(complete, truncation=True, max_length=training_args.model_max_length) # </s> 1 2 3 : source_tokenized = tokenizer(source, truncation=True, max_length=training_args.model_max_length) if 
training_args.mask_input: source_len = len(source_tokenized['input_ids']) complete_tokenized['labels'] = [IGNORE_INDEX] * source_len + complete_tokenized['input_ids'][source_len:] else: complete_tokenized['labels'] = complete_tokenized['input_ids'].copy() return complete_tokenized tokenized_path = os.path.join(os.path.dirname(training_args.data_path), f"{training_args.model_name_or_path.split('/')[-1]}_tokenized") if not os.path.exists(tokenized_path): logging.warning("tokenized data not existed, tokenize data...") data = load_dataset("json", data_files=training_args.data_path) train_dataset = data['train'].shuffle().map(generate_and_tokenize, batched=False, remove_columns=["instruction", "input", "output"]) if local_rank == 0: train_dataset.save_to_disk(tokenized_path) else: logging.warning("tokenized data existed, load data...") train_dataset = load_from_disk(tokenized_path) # data_collator = DataCollatorForSupervisedDataset(tokenizer=tokenizer) data_collator = DataCollatorForSeq2Seq(tokenizer=tokenizer, label_pad_token_id=IGNORE_INDEX, pad_to_multiple_of=8) logging.warning("training...") trainer = Trainer(model=model, tokenizer=tokenizer, args=training_args, train_dataset=train_dataset, eval_dataset=None, data_collator=data_collator) trainer.train() trainer.save_state() trainer.save_model(output_dir=training_args.output_dir) tokenizer.save_pretrained(save_directory=training_args.output_dir) if __name__ == '__main__': train() ``` ### Expected behavior Has anyone encountered this problem? I used the same instruction fine-tuning code. It runs successfully with transformers package version 4.29.0, but when I upgrade to version 4.30.2, it fails to run and throws an OOM (Out of Memory) error. Does anyone know the reason behind this? Below is the GPU status during my successful run. ![image](https://github.com/huggingface/transformers/assets/22993056/47653653-0ec4-4d98-beab-101665dde0d1)
07-10-2023 01:46:27
07-10-2023 01:46:27
Here's another question, in the new version of the Transformers package, the default loaded model by from_pretrained has become safeTensors. How can I change it to pytorch.bin? Is there any parameter I can specify?<|||||>Hi @Excuses123, thanks for raising this issue. Without knowing the model or dataset, we're unable to reproduce and won't be able to debug this issue. Is there a minimal reproducible snippet with a public dataset and model checkpoint where this issue (increase memory footprint) still occurs and you could share? To force the model to not load safetensor weights you can pass `use_safetensors=False` in the `from_pretrained` call<|||||>@amyeroberts Thank you for your response. I am using the model: [bigscience/bloomz-1b1](https://huggingface.co/bigscience/bloomz-1b1) The data can be found at: https://huggingface.co/datasets/BelleGroup/train_0.5M_CN/blob/main/Belle_open_source_0.5M.json Below is the execution script: ``` torchrun --nproc_per_node=4 --master_port=12345 train.py \ --model_name_or_path bigscience/bloomz-1b1 \ --cache_dir /workspace/pretrain_model/bloomz \ --output_dir /workspace/finetune_model/bloomz/bloomz_1b1_sft \ --data_path /workspace/datasets/Belle_train_0.5M_CN/Belle_open_source_0.5M.json \ --fp16 True \ --num_train_epochs 1 \ --per_device_train_batch_size 1 \ --per_device_eval_batch_size 1 \ --gradient_accumulation_steps 32 \ --model_max_length 512 \ --evaluation_strategy "no" \ --save_strategy "steps" \ --save_steps 2000 \ --save_total_limit 1 \ --learning_rate 2e-5 \ --weight_decay 0. \ --warmup_ratio 0.03 \ --lr_scheduler_type "cosine" \ --logging_steps 1 \ --fsdp "full_shard auto_wrap" \ --fsdp_transformer_layer_cls_to_wrap 'BloomBlock' \ --report_to "tensorboard" ``` After testing, The maximum version that can currently run is 4.29.2, and all versions after that cannot run.<|||||>I guess it might be caused by FSDP (Fully Sharded Data Parallelism), but I'm not sure.<|||||>@Excuses123 Have you tried running without FDSP? Which version of accelerate are you running?<|||||>@amyeroberts I have tried it, and without FSDP, both the new and old versions of transformers throw an OOM error. My accelerate version is 0.20.3.<|||||>> both the new and old versions of transformers throw an OOM error. @Excuses123 Is this including versions <= 4.29.2 ? <|||||>@amyeroberts I have tried version 4.29.0 and it works<|||||>@Excuses123 OK, thanks for confirming. Could you: * Format the code example so that all of the code is in markdown code blocks: ` ``` code goes here ``` ` * Try on the most recent version of transformers, [installing from source](https://huggingface.co/docs/transformers/installation#install-from-source)? * Share the versions of datasets being used? <|||||>@amyeroberts I have fixed the code formatting, and the version of my datasets is 2.11.0. My machine is currently running a task, and as soon as it is finished, I will try the latest version.<|||||>Facing the same issue. Code ran smoothly with transformers==4.28.1 but OOM with transformers==4.30.2<|||||>@Excuses123 @larrylawl OK, thanks for the information and updates. I'm going to cc @pacman100 and @younesbelkada who know more about training in fp16 and torchrun <|||||>I can confirm this. It is a bug introduced recently. It can be reproduced by the Vicuna training [example](https://github.com/lm-sys/FastChat#fine-tuning-vicuna-7b-with-local-gpus). The script works well for 4.28.1 but hits OOM with 4.31.0. 
With 4.31.0, the warning is ``` FSDP Warning: When using FSDP, it is efficient and recommended to call prepare for the model before creating the optimizer FSDP Warning: When using FSDP, several parameter groups will be conflated into a single one due to nested module wrapping and parameter flattening. ``` To fix it, I followed the [guide](https://huggingface.co/docs/accelerate/usage_guides/fsdp#a-few-caveats-to-be-aware-of) and changed these lines (https://github.com/huggingface/transformers/blob/e42587f596181396e1c4b63660abf0c736b10dae/src/transformers/trainer.py#L1646-L1661) to ```python3 model = self.accelerator.prepare(model) if delay_optimizer_creation: self.create_optimizer_and_scheduler(num_training_steps=max_steps) self.optimizer = self.accelerator.prepare(self.optimizer) ``` Then the warnings and OOM disappeared. @pacman100 @younesbelkada I think my fix is a hack that only works for my case. Could you do a more complete fix in the main branch?<|||||>Hello @Ying1123, Thank you for the detailed info, very helpful. Could you please try out the above PRs for accelerate and transformers and see if it fixes the OOM? <|||||>> Hello @Ying1123, Thank you for the detailed info, very helpful. Could you please try out the above PRs for accelerate and transformers and see if it fixes the OOM? Thanks @pacman100, cherry-pick the PRs for transformers v4.31.0 and accelerate v0.21.0 works for me.<|||||>@pacman100 Hi, I am still getting out-of-memory issues with the latest main. With transformer==4.28.1, the vicuna-7b [example](https://github.com/lm-sys/FastChat#fine-tuning-vicuna-7b-with-local-gpus) can run on 4xA100 (40GB) without any issues. After accelerate is used for FSDP (from v4.30 - the current main), the example hits OOM. Before your fix, the example hits OOM immediately. After your fix, the example hits OOM after a few batches. From these observations, I can confirm that the recent refactoring makes the memory usage higher than the older version but I do not know how to debug because I am not familiar with Accelerate. Could you do more testing and help us fix it? This blocks us from updating transformers to the latest version.<|||||>Hello @merrymercy, can you post the vram usage with the 4.28 version?<|||||>Hi @pacman100 @Ying1123 , I meet the same issus: OOM ; And I revised my tranfomers to 4.31.0 or 4.30.0 and accelerate=0.21.0, all these are not worked ! On 2 x A6000 48G, fine-tuning LLaMA 7B With transformer=4.31.0, accelerate=0.22.0.dev0 (latest main), the warning is: ``` FutureWarning: using `--fsdp_transformer_layer_cls_to_wrap` is deprecated. Use fsdp_config instead FSDP Warning: When using FSDP, it is efficient and recommended to call prepare for the model before creating the optimizer. FSDP Warning: When using FSDP, several parameter groups will be conflated into a single one due to nested module wrapping and parameter flattening. ``` And my fsdp are: ``` --fsdp "full_shard auto_wrap" \ --fsdp_transformer_layer_cls_to_wrap 'LlamaDecoderLayer' \ ```<|||||>@pacman100 @Ying1123 And I found another way to add the fsdp_config.json can disappear the all follow warning : ``` FutureWarning: using `--fsdp_transformer_layer_cls_to_wrap` is deprecated. Use fsdp_config instead ``` And [hacking method](https://github.com/huggingface/transformers/issues/24724#issuecomment-1645189539) can disappear: ``` FSDP Warning: When using FSDP, it is efficient and recommended to call prepare for the model before creating the optimizer. 
FSDP Warning: When using FSDP, several parameter groups will be conflated into a single one due to nested module wrapping and parameter flattening. ``` But all of these still hit OOM! My fsdp_config.json is: ``` { "fsdp_auto_wrap_policy": "FULL_SHARD", "fsdp_transformer_layer_cls_to_wrap": "LlamaDecoderLayer" } ``` I think there is a better way to fix this. <|||||>I see the same memory usage across versions for the following example: ``` cd transformers export TASK_NAME=mrpc torchrun --nnodes 1 --nproc-per-node 2 ./examples/pytorch/text-classification/run_glue.py --model_name_or_path bert-base-cased --task_name $TASK_NAME --do_train --do_eval --max_seq_length 128 --per_device_train_batch_size 16 --learning_rate 5e-5 --num_train_epochs 3 --output_dir /tmp/$TASK_NAME/ --overwrite_output_dir --fsdp "full_shard auto_wrap" --fsdp_transformer_layer_cls_to_wrap BertLayer --bf16 ``` version 4.28.1 - 5.4GB vram latest main branch - 4.8GB vram Please provide a minimal example that I can directly run without having to spend time getting it to work. <|||||>You mean transformers = the latest main branch and accelerate = 0.21.0?<|||||>Both the Accelerate and Transformers main branches<|||||>With both the Accelerate and Transformers main branches, it works for me
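As a one-line illustration of the `use_safetensors=False` suggestion made earlier in the thread (model name taken from the reproduction script):

```python
from transformers import AutoModelForCausalLM

model = AutoModelForCausalLM.from_pretrained("bigscience/bloomz-1b1", use_safetensors=False)
```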
transformers
24,723
open
install from source doesn't work
### System Info ms server 2019 PS C:\Users\a_ital> pip install git+https://github.com/huggingface/transformers Collecting git+https://github.com/huggingface/transformers Cloning https://github.com/huggingface/transformers to c:\users\a_ital\appdata\local\temp\6\pip-req-build-no6t74od ERROR: Error [WinError 2] The system cannot find the file specified while executing command git version ERROR: Cannot find command 'git' - do you have 'git' installed and in your PATH? ### Who can help? _No response_ ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction PS C:\Users\a_ital> pip install git+https://github.com/huggingface/transformers Collecting git+https://github.com/huggingface/transformers Cloning https://github.com/huggingface/transformers to c:\users\a_ital\appdata\local\temp\6\pip-req-build-no6t74od ERROR: Error [WinError 2] The system cannot find the file specified while executing command git version ERROR: Cannot find command 'git' - do you have 'git' installed and in your PATH? PS C:\Users\a_ital> ### Expected behavior I need to install transformers
07-09-2023 13:47:19
07-09-2023 13:47:19
Hello @IdoTal120 ! Welcome to Github, 👋 Taking this into account ``` ERROR: Error [WinError 2] The system cannot find the file specified while executing command git version ERROR: Cannot find command 'git' - do you have 'git' installed and in your PATH? ``` The error message is telling you that `pip `doesn't know where you have installed `git`. Can you confirm that you have `git` installed? It can be installed for Windows [here](https://git-scm.com/download/win). Then you need to add it to the PATH environment variable. Currently, `pip` is checking for `git.exe `in all the locations listed in PATH, but the location for the` git `executable is not there. I believe during installation of git you have the option to update PATH automatically, but you can do it at any time: find the filepath to git.exe on your local machine (for example, `C:\Program Files\...\git\bin`) and then add it to PATH ([instructions here for your operating system I think](https://www.opensourceforu.com/2021/01/how-to-install-and-configure-git-on-a-windows-server/)). Taken from [here](https://github.com/stefmolin/Hands-On-Data-Analysis-with-Pandas-2nd-edition/issues/3) Let me know how it goes and welcome to the Open!
transformers
24,722
open
Feature Request: To add nested hierarchy retrieval from Donut response
### Feature request ### Donut for hierarchy extraction (Document Parsing) While preprocessing the ground truth json to the tokens for Donut the processor function (json2token) handles nested hierarchy but the same doesn't hold true for token2json. Below is an example json: ` { "header": "This is 1st header", "elements": [ { "text_block": "This is a textblock" }, { "header": "1st nested header", "elements": [ { "text_block": "This is a sentence" }, { "text_block": "Another sentence...." }, { "itallic_header": "This is an itallic header", "elements": [ { "text_block": "Text 1 inside itallic header.." }, { "text_block": "Text 2 inside itallic header.." } ] } ] } ] } ` Consider the above json. Applying the json2token function gives the following token sequence. Function Call: `output = json2token(temp_test)` > <s_header>This is 1st header</s_header><s_elements><s_text_block>This is a textblock</s_text_block><sep/><s_header>1st nested header</s_header><s_elements><s_text_block>This is a sentence</s_text_block><sep/><s_text_block>Another sentence....</s_text_block><sep/><s_itallic_header>This is an itallic header</s_itallic_header><s_elements><s_text_block>Text 1 inside itallic header..</s_text_block><sep/><s_text_block>Text 2 inside itallic header..</s_text_block></s_elements></s_elements></s_elements> This maintains the hierarchy (like parenthesis matching). So, if donut is trained on such data it will give response which parses the information & also retains the hierarchy but the token2json function doesn't handle the conversion properly. Below is the output of the function id passed the token sequence present above. Function Call: `processor.token2json(output)` Output ` [ { 'header': 'This is 1st header', 'elements': [ { 'text_block': 'This is a textblock' }, { 'header': '1st nested header', 'text_block': 'This is a sentence' }, { 'text_block': 'Another sentence....' }, { 'itallic_header': 'This is an itallic header', 'text_block': 'Text 1 inside itallic header..' }, { 'text_block': 'Text 2 inside itallic header..' } ] } ] ` Updated Function Results (Preserving the hierarchy): ` [ { 'header': 'This is 1st header', 'elements': [ { 'text_block': 'This is a textblock' }, { 'header': '1st nested header', 'elements': [ { 'text_block': 'This is a sentence' }, { 'text_block': 'Another sentence....' }, { 'itallic_header': 'This is an itallic header', 'elements': [ { 'text_block': 'Text 1 inside itallic header..' }, { 'text_block': 'Text 2 inside itallic header..' } ] } ] } ] } ] ` Example from CORD: > temp_test = { "company": "ADVANCO COMPANY", "date": "17/01/2018", "address": "NO 1&3, JALAN WANGSA DELIMA 12, WANGSA LINK, WANGSA MAJU, 53300 KUALA LUMPUR", "total": "7.00" } Updated Function Output: ` [ { 'company': 'ADVANCO COMPANY', 'date': '17/01/2018', 'address': 'NO 1&3, JALAN WANGSA DELIMA 12, WANGSA LINK, WANGSA MAJU, 53300 KUALA LUMPUR', 'total': '7.00' } ] ` ### Motivation Found out about this while working on a project to extract information from images also maintaining the hierarchy/structure of it. Going through the CORD dataset made me realize that the data itself is not nested in nature. So, thought of testing on a sample the postprocessing logics json -> token & token -> json conversion. Updated the token2json to get the hierarchy as it is from the token but wasn't sure about the model performance on nested jsons but long story short Donut predicts the hierarchy pretty good. 
### Your contribution ` def token2json(tokens, is_inner_value=False, nested_key = 'elements'): """ Convert a (generated) token seuqnce into an ordered JSON format """ output = dict() while tokens: start_token = re.search(r"<s_(.*?)>", tokens, re.IGNORECASE) if start_token is None: break key = start_token.group(1) start_matches = re.finditer(fr"<s_{key}>", tokens) end_matches = re.finditer(fr"</s_{key}>", tokens) start_tups = [(match.group(), match.start(), match.end()) for match in start_matches] end_tups = [(match.group(), match.start(), match.end()) for match in end_matches] mergeTups = start_tups + end_tups sortedMergeTups = sorted(mergeTups, key=lambda x: x[1]) # remove any unattended close tag for the key present before the current focus start key updatedIdx = -1 for idx in range(len(sortedMergeTups)): if start_token.span()[0] == sortedMergeTups[idx][1]: updatedIdx = idx break sortedMergeTups = sortedMergeTups[updatedIdx:] start_main = sortedMergeTups[0] match_tracker = 0 end_token = None if key == nested_key : if start_main[0] == f'<s_{key}>': for tup in sortedMergeTups[1:]: if tup[0] == f'</s_{key}>': if match_tracker == 0: end_token = tup break else: match_tracker -= 1 elif tup[0] == f'<s_{key}>': match_tracker += 1 elif len(sortedMergeTups) > 1: nextTup = sortedMergeTups[1] if nextTup[0] == f'</s_{key}>': end_token = nextTup if end_token is None: tokens = tokens.replace(start_token[0], "", 1) else: start_token_word = start_main[0] start_token_id = start_main[2] end_token_word = end_token[0] end_token_id = end_token[1] content = tokens[start_token_id: end_token_id] if content is not None: if r"<s_" in content and r"</s_" in content: # non-leaf node value = token2json(content, is_inner_value=True) if value: if len(value) == 1: value = value[0] output[key] = value else: # leaf nodes if key in output.keys(): if isinstance(output[key], str): tempVal = output[key] output[key] = [tempVal] else: output[key] = [] for leaf in content.split(r"<sep/>"): leaf = leaf.strip() if ( leaf in processor.tokenizer.get_added_vocab() and leaf[0] == "<" and leaf[-2:] == "/>" ): leaf = leaf[1:-2] # for categorical special tokens output[key].append(leaf) if len(output[key]) == 1: output[key] = output[key][0] tokens = tokens[end_token[2]:] if tokens[:6] == r"<sep/>": # non-leaf nodes return [output] + token2json(tokens[6:], is_inner_value=True) if len(output): return [output] if is_inner_value else output else: return [] if is_inner_value else {"text_sequence": tokens} `
07-09-2023 07:40:15
07-09-2023 07:40:15
Hi @sam99dave, Thanks for raising this issue! Would you like to open a PR with your suggestion? This way you get the contribution on git. One thing to note when adding this feature is that we have to consider backwards compatibility with our models, so the default behaviour would still need to be preserved.<|||||>> Hi @sam99dave, > > Thanks for raising this issue! Would you like to open a PR with your suggestion? This way you get the contribution on git. One thing to note when adding this feature is that we have to consider backwards compatibility with our models, so the default behaviour would still need to be preserved. Hey hi, I would like to open a PR for this. Regarding the backward compatibility, I agree with that, I think it can be handled by having a check for some nested key. If it's present then only we will use the logic to handle it. If not present then it will return what it should be returning by default. Will be doing some test on this to be sure of it.
transformers
24,721
closed
[WIP] Gradient Checkpointing: use_reentrant=False
# What does this PR do? As per PyTorch's [recommendation](https://github.com/pytorch/pytorch/blob/main/torch/utils/checkpoint.py#L418): when using gradient checkpointing for models that allow it, `torch.utils.checkpoint` recommends using `use_reentrant=False`, as per [this note](https://github.com/pytorch/pytorch/blob/main/torch/utils/checkpoint.py#L336). ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [ ] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [ ] Did you write any new necessary tests? @ArthurZucker @sgugger
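A minimal sketch of the non-reentrant checkpoint call this PR is about (toy module, not the actual transformers integration):

```python
import torch
from torch.utils.checkpoint import checkpoint

layer = torch.nn.Linear(8, 8)
x = torch.randn(2, 8, requires_grad=True)

# Non-reentrant variant recommended by PyTorch.
out = checkpoint(layer, x, use_reentrant=False)
out.sum().backward()
```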
07-08-2023 14:19:03
07-08-2023 14:19:03
transformers
24,720
closed
Pvt model
# Add PVT(Pyramid Vision Transformer) Partially fixes: [issue](https://github.com/huggingface/transformers/issues/17596), [Closed PR](https://github.com/huggingface/transformers/pull/22445) ## Before submitting - [ ] This PR fixes a typo or improves the docs (you can dismiss the other checks if that's the case). - [x] Did you read the [contributor guideline](https://github.com/huggingface/transformers/blob/main/CONTRIBUTING.md#start-contributing-pull-requests), Pull Request section? - [ ] Was this discussed/approved via a Github issue or the [forum](https://discuss.huggingface.co/)? Please add a link to it if that's the case. - [x] Did you make sure to update the documentation with your changes? Here are the [documentation guidelines](https://github.com/huggingface/transformers/tree/main/docs), and [here are tips on formatting docstrings](https://github.com/huggingface/transformers/tree/main/docs#writing-source-documentation). - [x] Did you write any new necessary tests? ## Who can review? @amyeroberts From previous PR: > PvtBlock which contains a PvtPatchEmbeddings layer and the subsequent PvtLayer layers. PvtLayer has depth, while PvtPatchEmbeddings only at the beginning of the each encoder block and it would be trailed along the whole depth without using or it would require extra logic to make it None and also would require to trail height and width.
07-08-2023 13:34:56
07-08-2023 13:34:56
_The documentation is not available anymore as the PR was closed or merged._<|||||>@Xrenya Thanks again for adding and iterating. Merging now :)
transformers
24,719
closed
add gradient checkpointing for distilbert
# What does this PR do? Fixes #9113 and #23219 I just added the gradient checkpointing feature for DistilBert, following the implementation in BERT. This should be useful if one wants to train a relatively small model with an extremely large batch size for better performance in application scenarios such as text retrieval or embeddings. @ArthurZucker @sgugger
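For anyone who wants to try this out once it is merged, a minimal usage sketch (the checkpoint name and batch size are just examples):

```python
from transformers import AutoModelForMaskedLM, TrainingArguments

model = AutoModelForMaskedLM.from_pretrained("distilbert-base-uncased")
model.gradient_checkpointing_enable()  # trade extra compute for a much smaller activation memory footprint

# or let the Trainer enable it for you
args = TrainingArguments(output_dir="out", gradient_checkpointing=True, per_device_train_batch_size=256)
```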
07-08-2023 13:16:43
07-08-2023 13:16:43
Hi @jordane95, thanks for opening this PR. Overall changes look OK to me. I can see from the issue discussion that gradient checkpointing was deliberately not added to DistilBert. In general, we try to avoid adding complexity to existing models, in particular their forward pass. Let's get @sgugger's second opinion on whether this should be merged into main. For the quality checks, you'll need to run `make style` and push any changes made to this branch.<|||||>_The documentation is not available anymore as the PR was closed or merged._
transformers
24,718
open
Speech recognition with CTC runs not reproducible
### System Info - `transformers` version: 4.30.1 - Platform: Linux-3.10.0-1160.71.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.8.16 - Huggingface_hub version: 0.16.4 - PyTorch version (GPU?): 1.9.0+cu102 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: Yes - Using distributed or parallel set-up in script?: No ### Who can help? @sg ### Information - [X] The official example scripts - [ ] My own modified scripts ### Tasks - [X] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I'm using official `run_speech_recognition_ctc.py` on a single GPU. I ran this for pretrained `hubert` twice with same seed but every time I get different WER on test set. ### Expected behavior Should return same WER when running with same seed.
07-08-2023 11:36:06
07-08-2023 11:36:06
Hi @bhavitvyamalik, If running the `run_speech_recognition_ctc.py` script, could you share the command being run, including all argument settings? Is the script being run on a single or multiple GPUs? cc @sanchit-gandhi <|||||>The script is being run on single GPU. I'm training on Multilingual Librispeech dataset (English version). ``` DOMAINS="mls_en" PYTHON_FILE=${PROJECT_ROOT}/"dom_finetune/run_speech_recognition_ctc.py" CUDA_VISIBLE_DEVICES=0 python ${PYTHON_FILE} \ --model_name_or_path="facebook/hubert-base-ls960" \ --domains=${DOMAINS} \ --num_train_epochs="20" \ --per_device_train_batch_size="4" \ --per_device_eval_batch_size="8" \ --gradient_accumulation_steps="2" \ --preprocessing_num_workers="16" \ --learning_rate="3e-5" \ --lr_scheduler_type="constant" \ --logging_steps="25" \ --evaluation_strategy="epoch" --save_strategy="epoch" \ --load_best_model_at_end=true \ --metric_for_best_model="wer" \ --greater_is_better=false \ --text_column_name="transcription" \ --length_column_name="input_length" \ --layerdrop="0.0" \ --save_total_limit="1" \ --freeze_feature_encoder \ --chars_to_ignore , ? . ! \ --output_dir "/disk/scratch1/" \ --group_by_length \ --do_train --do_eval --do_predict ```<|||||>@bhavitvyamalik Thanks for the additional information. Could you also share the WER results seen after different runs? i.e. how different are they typically? <|||||>WER: 0.6475 and 0.651 using same seed 42. The loss numbers remain very similar initially but after a point (roughly after 2nd epoch) they start differing at 3rd decimal place<|||||>Hey @bhavitvyamalik - could you also share the script you're using to fine-tune? Since it differs from the original example script, it's not possible to say whether the non-determinism comes from the 🤗 Trainer, or the data pre-processing. I see that the data arguments in your script differ from those in the example, so would be interested in checking what data pre-processing strategy is employed! It would also be super helpful to have the dataset as well so that we can run it locally as well for reproducibility<|||||>Hi @sanchit-gandhi, I'm using similar data pre-processing given in the official script. Line 412-421 is the only change I've done to the official script to use `audiofolder` functionality of `datasets`. I'm using 10h English data of MLS for training and full dev, test data for validation and testing respectively. Here is the link to the script: https://gist.github.com/bhavitvyamalik/948d6ca9f42e6c4d70fb8a2f037b4c88 I will upload the dataset in a while to dataset hub. Thank you!<|||||>Link to dataset: https://huggingface.co/datasets/bhavitvyamalik/mls_english_10h <|||||>Thanks @bhavitvyamalik - running two runs concurrently now: 1. Run 1: https://wandb.ai/sanchit-gandhi/huggingface/runs/0auf9oue?workspace=user-sanchit-gandhi 2. Run 2: https://wandb.ai/sanchit-gandhi/huggingface/runs/jsbzkm4o?workspace=user-sanchit-gandhi<|||||>The runs are indeed not identical, e.g. 
comparing the eval loss: ![Screenshot 2023-07-13 at 09 26 40](https://github.com/huggingface/transformers/assets/93869735/5f7603d8-df90-4c22-b786-f9fa7f9dcb8b) This is pretty strange behaviour considering we fix the same seed in both cases and use the same training arguments.<|||||>cc'ing @muellerzr and @pacman100 here - for context, we're fine-tuning a CTC model for ASR using the examples script [run_speech_recognition_ctc.py](https://github.com/huggingface/transformers/blob/main/examples/pytorch/speech-recognition/run_speech_recognition_ctc.py) and the arguments: <details> <summary> run_mls.sh </summary> ```bash #!/usr/bin/env bash python run_speech_recognition_ctc.py \ --model_name_or_path="facebook/hubert-base-ls960" \ --dataset_name "bhavitvyamalik/mls_english_10h" \ --num_train_epochs="20" \ --per_device_train_batch_size="4" \ --per_device_eval_batch_size="8" \ --gradient_accumulation_steps="2" \ --preprocessing_num_workers="16" \ --train_split_name "train" \ --eval_split_name "eval" \ --learning_rate="3e-5" \ --lr_scheduler_type="constant" \ --logging_steps="25" \ --evaluation_strategy="epoch" \ --save_strategy="epoch" \ --load_best_model_at_end True \ --metric_for_best_model="wer" \ --greater_is_better False \ --text_column_name="transcription" \ --length_column_name="input_length" \ --layerdrop="0.0" \ --save_total_limit="1" \ --freeze_feature_encoder \ --chars_to_ignore , ? . ! \ --output_dir "./" \ --group_by_length \ --overwrite_output_dir \ --do_train \ --do_eval ``` </details> However, the training runs are not reproducible, even when we use the same seed. The runs do not give the same eval loss and eval WER between training runs (see above plot). The training loss also diverges after approx 800 training steps (see [logs](https://wandb.ai/sanchit-gandhi/huggingface?workspace=user-sanchit-gandhi)). Wondering whether there's any non-determinism that we can try and investigate with the new `accelerate` powered trainer? Or whether we put this down to numerical differences?<|||||>Alright after leaving the runs to continue for the full length of training, we see that the run 1 and run 2 are to within 0.01 of each other on pretty much all metrics: https://wandb.ai/sanchit-gandhi/huggingface?workspace=user-sanchit-gandhi So I think we can conclude the seed is set correctly (the differences would be much larger if this wasn't the case). So probably what we're seeing is the effect of numerical differences accumulated over many thousands of ops? I still would have thought the two runs would be exactly the same since I've run them on the same hardware, same env, same seed etc. Would be interested in hearing whether you agree here both!<|||||>@sanchit-gandhi Thanks for digging into this 🕵️‍♂️ ! Yes, I agree, it looks there's just some small numerical differences creeping in. Given how tricky these things are to investigate and how small the differences are, it's not something I think is worth investigating further. If someone from the community is interested and wants to dig into this more, then we will still welcome links to relevant write-ups or results in this issue. 
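For anyone who does pick this up, a natural first step might be to force full determinism and compare the runs again; this trades training speed for reproducibility, and the snippet below is only a sketch:

```python
from transformers import TrainingArguments
from transformers.trainer_utils import enable_full_determinism

# Either call this once before building the Trainer ...
enable_full_determinism(seed=42)

# ... or let the Trainer do it via the training arguments.
args = TrainingArguments(output_dir="out", seed=42, full_determinism=True)
```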
<|||||>Doing a quick run with Transformers v4.27.4 to see whether the Trainer was reproducible to within 0.01 prior to the `accelerate` integration: https://wandb.ai/sanchit-gandhi/huggingface?workspace=user-sanchit-gandhi If the behaviour is the same as it is on `main` with the `accelerate` back-end, I think we can safely conclude this is an accumulation of numerical errors.<|||||>We're looking into it on the accelerate side for fixing. Thanks for the flag
transformers
24,717
open
Possibly a bug in Pix2Struct outputs
I'm sorry if I'm wrong, as I don't have much experience with transformers internals. I was playing with Pix2Struct and trying to visualise attention on the input image. The `output.cross_attentions` shape didn't make much sense, as it didn't have `patch_count` as any of its dimensions. After inspecting `modeling_pix2struct.py` I noticed the following
```
# layer_outputs = hidden-states, key-value-states (self-attention position bias), (self-attention weights),
# (cross-attention position bias), (cross-attention weights)
```
And then later
```
if output_attentions:
    all_attentions = all_attentions + (layer_outputs[2],)
    all_cross_attentions = all_cross_attentions + (layer_outputs[3],)
```
As I understand it, `layer_outputs[3]` is the `(self-attention weights)`, so in the cross-attention line it should be replaced with `layer_outputs[5]`, which is the `(cross-attention weights)`. The same goes for `(layer_outputs[2],) => (layer_outputs[3],)`. Does it make sense, or am I getting something wrong? I tried to patch it locally and the output + visualisation make sense (they highlight the image patch containing the information in the token). https://github.com/huggingface/transformers/blob/fe861e578f50dc9c06de33cd361d2f625017e624/src/transformers/models/pix2struct/modeling_pix2struct.py#L1550C74-L1550C74
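Concretely, the change I have in mind is just the indexing below (a sketch only; I haven't checked whether `len(layer_outputs)` is always large enough when some optional outputs are disabled):

```python
if output_attentions:
    # index 3 = self-attention weights, index 5 = cross-attention weights
    all_attentions = all_attentions + (layer_outputs[3],)
    all_cross_attentions = all_cross_attentions + (layer_outputs[5],)
```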
07-08-2023 10:12:10
07-08-2023 10:12:10
Hi @artyomxyz, thanks for reporting this issue. Yes, you're right, there is a current issue with indexing in the Pix2Struct model. Related PR: #23985 cc @younesbelkada <|||||>> Hi @loveisp ! Again thanks for your contribution on this Can you share with us why this PR got closed? The PR should also fix #24717 so it would be great to merge it :D I initially made the change like this, but it didn't pass all the tests. The len(layer_outputs) here seemed a bit strange, so I changed it to what came later. Even though it can pass all the tests, there are still issues with the underlying logic. Regarding this matter, you can take a look at my discussion with @amyeroberts . I realized that I cannot fix this bug in a short amount of time, so I closed it. Will it be sufficient to make this change as he suggested, so that it passes all the tests? If so, then go ahead and make the change.
transformers
24,716
open
Loading pretrained RobertaModel,size missmatch error
My code is as follows: `config = RobertaConfig.from_pretrained("roberta-base", max_position_embeddings=2048)` `model = RobertaModel.from_pretrained('roberta-base', config=config)` Then I get the following error: `size mismatch for roberta.embeddings.position_embeddings.weight: copying a param with shape torch.Size([514, 768]) from checkpoint, the shape in current model is torch.Size([2048, 768]).` How can I solve this problem? If I want to expand the length of the input sentences, what should I do?
07-08-2023 08:19:04
07-08-2023 08:19:04
@aixiaobaikyh Please follow the issue template and fill out all the requested information, such as the transformers version being run. If running on a version of transformers released in the past year, the error message shared here is not the full error message that is printed out. The final part instructs on how to resolve it: ``` You may consider adding `ignore_mismatched_sizes=True` in the model `from_pretrained` method. ```
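Putting both parts together, something like the following should load (note that the enlarged position-embedding matrix is randomly initialised for the extra positions, so the model needs further training before the longer context is useful):

```python
from transformers import RobertaConfig, RobertaModel

config = RobertaConfig.from_pretrained("roberta-base", max_position_embeddings=2048)
model = RobertaModel.from_pretrained(
    "roberta-base",
    config=config,
    ignore_mismatched_sizes=True,  # skip copying the 514-long pretrained position embeddings
)
```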
transformers
24,715
open
Generate function
### System Info The model is llama and the tokenizer is also llama (decapoda-research/llama-7b-hf): tokenizer = LlamaTokenizer.from_pretrained('decapoda-research/llama-7b-hf', add_special_tokens=False, add_bos_token=False). I have a question about the model generation.
```python
prompt = """Tell me some things about NBA"""
input_tokenized_info = tokenizer(prompt, return_tensors="pt")
input_ids, attention_mask = input_tokenized_info['input_ids'], input_tokenized_info['attention_mask']
input_ids = input_ids.to('cuda')
attention_mask = attention_mask.to('cuda')
outputs = model.generate(input_ids=input_ids, attention_mask=attention_mask, num_beams=10, no_repeat_ngram_size=1, max_length=200,
                         return_dict_in_generate=True, output_scores=True, length_penalty=0.9)

print(len(outputs[0][0]))
# 18
print(len(outputs.scores))
# 194
print(outputs[0][0])
# tensor([24948, 592, 777, 2712, 1048, 21517, 29871, 29906, 29968, 29896, 29929, 341, 29911, 3189, 1144, 29889, 2, 1], device='cuda:0')
print(tokenizer.decode(outputs[0][0], skip_special_tokens=True))
# 'Tell me some things about NBA 2K19 MT Coins.'
```
I think the `scores` size should be the same as the (output - input) size. ### Who can help? @ArthurZucker ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction I expect the (output - input) size to equal the `scores` size. Also, why does the generated sequence end with token id 1, and why does generation stop there? ### Expected behavior How can I generate a good output? For example, if I set max_length=100, the generation should stop naturally and contain some "," punctuation, not print 100 tokens or stop at the BOS token. Thank you
07-07-2023 22:03:42
07-07-2023 22:03:42
Hi @Dongximing, The `decapoda-research/llama-7b-hf` checkpoint shouldn't be used. The tokenizer and weights were released before the Llama PR was merged and are not compatible with the Llama implementation in transformers. There are other checkpoints (e.g. [this one](https://huggingface.co/huggyllama/llama-7b)) which are compatible.
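For example (a sketch only; `compute_transition_scores` is available in recent `transformers` releases and is the supported way to align `outputs.scores` with the tokens of the returned beam):

```python
from transformers import LlamaForCausalLM, LlamaTokenizer

tokenizer = LlamaTokenizer.from_pretrained("huggyllama/llama-7b")
model = LlamaForCausalLM.from_pretrained("huggyllama/llama-7b")

inputs = tokenizer("Tell me some things about NBA", return_tensors="pt")
outputs = model.generate(**inputs, num_beams=10, max_length=200, return_dict_in_generate=True, output_scores=True)

# With beam search, `outputs.scores` has one entry per decoding step across all
# beams, so its length is not expected to match the length of the best beam.
token_scores = model.compute_transition_scores(
    outputs.sequences, outputs.scores, outputs.beam_indices, normalize_logits=True
)
```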
transformers
24,714
open
find_unused_parameters is not passed from Trainer to Sagemaker DistributedModel
### System Info When trying to launch a model on SageMaker with the Huggingface Estimator and the `transformers.Trainer` class, I discovered the the trainer argument `ddp_find_unused_parameters` is not passed to the Sagemaker DistributedModel, see https://github.com/huggingface/transformers/blob/495729427045c7a58e040fa9bf6df81c16f54208/src/transformers/trainer.py#L1336 It is possible to work around this by wrapping the model with DistributedModel before passing it to the Trainer, so that I can pass any arguments I want, but really the trainer argument ought to just work. I'm running fine tuning of a MaskedLM model, using a stripped down version of the example run_mlm.py script. ### Who can help? @sgugger ### Information - [X] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction Pass the argument `ddp_find_unused_parameters = True` to the Trainer class in a SageMaker Model Parallel environment, when training one of the ESM models (e.g., Facebook/esm2_t33_650M_UR50D) loaded using on sequence data, e.g., agemagician/uniref30. 1. In the training script, e.g., `run_mlm.py`, load a masked LM model, e.g. `AutoModelForMaskedLM("Facebook/esm2_t33_650M_UR50D")` 2. Use a dataset like agemagician/uniref30, by downloading the files to Sagemaker via its "data channels". Load it in the training script using `datasets.load_dataset` with the data_files argument. Then use a Huggingface estimator in a SageMaker notebook, e.g. this example from AWS: https://github.com/PacktPublishing/Applied-Machine-Learning-and-High-Performance-Computing-on-AWS/blob/main/Chapter12/protein-secondary-structure-model-parallel.ipynb, the model training should crash indicating that there are unreduced parameters, telling you to use `find_unused_parameters`: ``` [1,mpirank:0,algo-1]<stderr>:RuntimeError: Expected to have finished reduction in the prior iteration before [1,mpirank:0,algo-1]<stderr>:starting a new one. This error indicates that your module has parameters that [1,mpirank:0,algo-1]<stderr>:were not used in producing loss. You can enable unused parameter detection by [1,mpirank:0,algo-1]<stderr>:passing the keyword argument `find_unused_parameters=True` to [1,mpirank:0,algo-1]<stderr>:`torch.nn.parallel.DistributedDataParallel`, and by [1,mpirank:0,algo-1]<stderr>:making sure all `forward` function outputs participate in calculating loss. [1,mpirank:0,algo-1]<stderr>:If you already have done the above, then the distributed data parallel module [1,mpirank:0,algo-1]<stderr>:wasn't able to locate the output tensors in the return value of your module's [1,mpirank:0,algo-1]<stderr>:`forward` function. Please include the loss function and the structure of the [1,mpirank:0,algo-1]<stderr>:return value of `forward` of your module when reporting this issue (e.g. list, [1,mpirank:0,algo-1]<stderr>:dict, iterable). [1,mpirank:0,algo-1]<stderr>:Parameter indices which did not receive grad for rank 0: 1 132 133 [1,mpirank:0,algo-1]<stderr>: In addition, you can set the environment variable TORCH_DISTRIBUTED_DEBUG to [1,mpirank:0,algo-1]<stderr>:either INFO or DETAIL to print out information about which particular parameters [1,mpirank:0,algo-1]<stderr>:did not receive gradient on this rank [1,mpirank:0,algo-1]<stderr>:as part of this error ``` 3. 
Now, using a sagemaker Huggingface estimator, pass the hyper-parameter `{"ddp_find_unused_parameters": True,...}` to the estimator. If using the script `run_mlm.py`, this will be parsed and passed to the `Trainer` class as part of `training_args`. However, you should still see the same error. This is because, as noted above, the argument is not passed to `smp.DistributedModel`. ### Expected behavior The argument should be passed to `DistributedModel`, so that it can resolve the error noted above.
07-07-2023 20:29:34
07-07-2023 20:29:34
Update: the workaround does not appear to solve the problem either. We need some way to pass this argument into SageMaker's version of parallelism (which I think still uses torch's DistributedDataParallel under the hood.)<|||||>Possibly @pacman100 might know about this? <|||||>Are we sure this SageMaker class actually supports this argument with the same name as PyTorch?<|||||>I just wanted to add that this is a problem with [run.plm](https://github.com/huggingface/transformers/blob/main/examples/pytorch/language-modeling/run_plm.py) as well, but the example for [question-answer](https://github.com/huggingface/notebooks/blob/main/sagemaker/03_distributed_training_data_parallelism/sagemaker-notebook.ipynb) does work.
transformers
24,713
open
Amazon Sagemaker - huggingface-textgeneration1-gpt-j-6b-fp16
### System Info Running on an Amazon SageMaker notebook (https://us-west-2.console.aws.amazon.com/sagemaker/playground?region=us-west-2#/foundation-models/playground/prod-000000021), ml.m5.xlarge | 4 vCPU | 16 GiB memory. ### Who can help? _No response_ ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction (https://huggingface.co/docs/transformers/main/model_doc/gptj) The Hugging Face docs claim that the fp16 model should be able to run on 16 GB of GPU memory for inference. I am using ml.m5.xlarge | 4 vCPU | 16 GiB memory in Amazon SageMaker to deploy the model. Why am I getting a data load error in Amazon SageMaker? Can someone please share the steps to deploy a model in SageMaker? Moreover, how do we request that AWS allocate us larger resources if models won't work on these instances? It requires an organization for instance allocation; are individuals not able to test models on AWS SageMaker? ### Expected behavior Error hosting endpoint jumpstart-example-huggingface-textgener-2023-07-07-13-55-56-291: Failed. Reason: Failed to extract model data archive from URL "s3://jumpstart-cache-prod-us-west-2/huggingface-infer/prepack/v1.1.2/infer-prepack-huggingface-textgeneration1-gpt-j-6b-fp16.tar.gz". The model data archive is too large. Please reduce the size of the model data archive or move to an instance type with more memory.
07-07-2023 15:21:13
07-07-2023 15:21:13
Hi @Mrin7, thanks for raising an issue! When creating issues on github, please create a separate issue for each individual question. With regards to your first question, the docs state that the model can fit into 16 GB of RAM for interference, however there may be other processes or objects which have memory requirements resulting in the total amount of GPU RAM needed being above 16 GB. Without knowing exactly what you're running it's not possible to know. For questions on how to deploy on sagemaker, please refer to the docs: https://huggingface.co/docs/sagemaker/inference. If you still have questions, then it's best asked in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports.
transformers
24,712
open
adding dynamic categorical feature option
# What does this PR do? Adding dynamic categorical feature to time_series_transformer // Have not tested yet!
07-07-2023 15:05:07
07-07-2023 15:05:07
awesome @guyko81 will have a look! this will be great! <|||||>@kashif can you help me please? when I try to add the variables based on the example I struggle getting the dynamic_categorical_features split into past and future values. When I add it to the time features ``` def create_instance_splitter( config: PretrainedConfig, mode: str, train_sampler: Optional[InstanceSampler] = None, validation_sampler: Optional[InstanceSampler] = None, ) -> Transformation: assert mode in ["train", "validation", "test"] instance_sampler = { "train": train_sampler or ExpectedNumInstanceSampler( num_instances=1.0, min_future=config.prediction_length ), "validation": validation_sampler or ValidationSplitSampler(min_future=config.prediction_length), "test": TestSplitSampler(), }[mode] return InstanceSplitter( target_field="values", is_pad_field=FieldName.IS_PAD, start_field=FieldName.START, forecast_start_field=FieldName.FORECAST_START, instance_sampler=instance_sampler, past_length=config.context_length + max(config.lags_sequence), future_length=config.prediction_length, time_series_fields=["time_features", "observed_mask", "dynamic_categorical_features"], ) ``` and use it like this: ``` if config.num_dynamic_categorical_features > 0: PREDICTION_INPUT_NAMES.append("past_dynamic_categorical_features") PREDICTION_INPUT_NAMES.append("future_dynamic_categorical_features") ``` I got a shape error: ``` RuntimeError: stack expects each tensor to be equal size, but got [89, 1568] at entry 0 and [161, 1568] at entry 1 ``` But when I try to simply add it like this: ``` if config.num_dynamic_categorical_features > 0: PREDICTION_INPUT_NAMES.append("dynamic_categorical_features") ``` I got an error like this: ``` RuntimeError: stack expects each tensor to be equal size, but got [1568, 2] at entry 0 and [1566, 2] at entry 16 ``` So the second version goes longer, however when a time series is shorter (1566 long vs 1568) it throws an error. I'm just not familiar with gluons enough to feel how to create past and future dynamic_categorical_features.
transformers
24,711
open
Initialize Flax model params on CPU
### Feature request Currently, the `from_pretrained` method of Flax models automatically puts model parameters on a single GPU device, if available. For very large models, this is not great, as the model parameters may just not fit on GPU memory. In contrast, when passing `_do_init=False` to `from_pretrained`, the parameters are returned on CPU, outside the model. I would love to have a feature that allows me to initialize model parameters on the device I want - in this case, on CPU - but at the same time initialize the model parameters within the model. Right now I have to call `_do_init=False` to avoid out-of-memory, but this causes inconsistencies with my API. The feature could be either implemented as just another type (if we detect a numpy type, we initialize on CPU; otherwise on GPU) or as an additional argument, e.g. `initialize_on_cpu: bool = False`. ### Motivation Described above. Another reason is to be more consistent with the PyTorch behaviour, where parameters are initialized (as a generator) on CPU. ### Your contribution If we agree on on the design, I am happy to add this myself.
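In the meantime, the workaround I am using is to force JAX's default device to the CPU while calling `from_pretrained` (a sketch only; the model name is just an example, and I have not verified that every loading path respects the context):

```python
import jax
from transformers import FlaxAutoModelForCausalLM

# Arrays created while this context is active (including the randomly
# initialised parameters) are placed on the host CPU rather than the accelerator.
with jax.default_device(jax.devices("cpu")[0]):
    model = FlaxAutoModelForCausalLM.from_pretrained("gpt2")
```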
07-07-2023 13:18:57
07-07-2023 13:18:57
Related to this, the `init_weights` method should not initialize all random parameters on GPU when `params` are actually passed (see, for example, [Flax GPT-J](https://github.com/huggingface/transformers/blob/495729427045c7a58e040fa9bf6df81c16f54208/src/transformers/models/gptj/modeling_flax_gptj.py#L401)). This makes me go out-of-memory even if I pass all the parameters to the method, initialized on CPU using `_do_init=False`.<|||||>cc @sanchit-gandhi <|||||>Hey @gianlucadetommaso! For reference, the PR to add the `_do_init` flag was added in this PR: #16148. Feel free to have a read through on what the motivations behind this PR and design were. Think you'd find it interesting! I think it would be a nice design to have the params loaded on CPU by default. There's an open PR for this here: #15295. You're more than welcome to pick-up where Boris left off here and finish the PR! The comments detailing the proposed design are quite thorough, but feel free to ping me if you have any other questions or want to clarify something<|||||>@sanchit-gandhi thanks for the links! As soon as I have time, I can try and do this. By the way, to make sure I am not just doing something wrong, it would help me a lot if you could have a look and comment on [this](https://github.com/google/jax/discussions/16659) discussion. It regards memory consumption of initializing a sharded state using pjit. I think you had a discussion related to it in [this](https://github.com/huggingface/transformers/issues/22224) thread before, thus it'd be great hearing your thoughts.<|||||>Awesome, sounds great! Had a look at the linked discussion - not entirely sure why we see this behaviour (think it's one for the JAX team to answer), but what you can do is use a few helper functions from the [T5x codebase](https://github.com/google-research/t5x/tree/main) to assist you here. You can load your model into a T5x `Checkpointer` with `use_gda=True` (use global device arrays) on the CPU: https://github.com/huggingface/bloom-jax-inference/blob/2a04aa519d262729d54adef3d19d63879f81ea89/bloom_inference/generator.py#L86 And then save this `Checkpointer` state to a Google Cloud bucket (use the built in save function). When you then come to loading your state, you can load each shard of your weights onto the mapped devices (so if shard 1 goes on device 1, it'll be loaded straight there, so you won't blow up your memory trying to load a sharded model onto your accelerator device): https://github.com/huggingface/bloom-jax-inference/blob/2a04aa519d262729d54adef3d19d63879f81ea89/bloom_inference/generator.py#L95
transformers
24,710
open
Inheritance issue with _LazyConfigMapping
### System Info - `transformers` version: 4.30.2 - Platform: Linux-3.10.0-1160.90.1.el7.x86_64-x86_64-with-glibc2.17 - Python version: 3.10.11 - Huggingface_hub version: 0.16.2 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.0.post200 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): 0.6.8 (gpu) - Jax version: 0.4.8 - JaxLib version: 0.4.7 ### Who can help? @ArthurZucker @Narsil ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction Not Applicable ### Expected behavior `_LazyConfigMapping` and `_LazyLoadAllMappings` inherits from `OrderedDict`, but they do not use any feature of `OrderedDict`. It's probably a good idea to merge `self._mapping` into `self` so that the inheritance is meaningful.
07-07-2023 12:47:15
07-07-2023 12:47:15
Hi, thank you for pointing this out. It looks like you are right. However, this is just a simple utility to make things easier internally, and there is no need to over-engineer it 😅<|||||>> Hi, thank you for pointing this out. > > It looks like you are right. However, this is just a simple utility to make things easier internally, and there is no need to over-engineer it 😅

Thank you for your quick reply. I noticed this while trying to read the model list so that an error is raised if an invalid model name is specified, since it takes quite some time to set up the environment and build the dataset before the model is built and the error appears. I wonder if there are better ways to validate arguments beforehand? Reading `CONFIG_MAPPING_NAMES` is probably not a good idea, as users may register their own models. Also, I noticed there are hundreds of lines in `CONFIG_MAPPING_NAMES`, which could be a bit redundant and has to be modified manually when new models are added. May I try to add some code to find them and construct `CONFIG_MAPPING_NAMES` automatically? <|||||>Hello!

> I wonder if there are better ways to validate arguments beforehand?

It's not clear to me what your use case is here. If you specify an invalid model name, an error must be given. Do you mean a better error message instead of just a simple key error?

> Also, I noticed there are hundreds of lines in CONFIG_MAPPING_NAMES, which could be a bit redundant and has to be modified manually when new models are added.

It's actually not edited that frequently 😅. No need to over-engineer in this case (but see the comments below too).

> May I try to add some code to find them and construct CONFIG_MAPPING_NAMES automatically?

This list should be very explicit so we know what model types (configs) are available in `transformers`. Of course we could try to detect the modules (or, easier, check the Python files), but that could potentially give wrong results too (and in that case be difficult to reason about / figure out). <|||||>> It's not clear to me what your use case is here. If you specify an invalid model name, an error must be given. Do you mean a better error message instead of just a simple key error?

Our model needs some information from the dataset to build, so we can only build the model after building the dataset.
```python
dataset = Dataset(*args, **kwargs)
model = Model(pretrained=xxx, num_outputs=dataset.num_outputs)
```
For some tasks, it takes hours to build a dataset, and hence hours to fail. So it's better to validate the model before building.
```python
if xxx not in XXX_LIST:
    raise RuntimeError("Invalid model specified.")
dataset = Dataset(*args, **kwargs)
model = Model(pretrained=xxx, num_outputs=dataset.num_outputs)
```
<|||||>Hi! We need more specific info about the failure you encounter. I assume you are saying
```
model = some_hf_model_class.from_pretrained(pretrained_model_name_or_path="xxx")
```
where `xxx` is a model repo name on the Hub (or a local path). In your example, you are passing a model type name from `CONFIG_MAPPING_NAMES`. However, if `Model` is **your custom class** which can be initialized with a model type name, then you have to implement the argument check in your own codebase.
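For example, something along these lines could serve as the early check (the helper name is just for illustration; for Hub repo names, a cheap `AutoConfig.from_pretrained(name)` call plays a similar role since it only fetches the config):

```python
from transformers import CONFIG_MAPPING


def validate_model_type(model_type: str) -> None:
    # CONFIG_MAPPING also contains anything registered via AutoConfig.register,
    # so custom models are covered as well.
    if model_type not in CONFIG_MAPPING:
        raise ValueError(
            f"Unknown model type {model_type!r}; known types include {sorted(CONFIG_MAPPING.keys())[:5]} ..."
        )


validate_model_type("roberta")        # passes
# validate_model_type("not-a-model")  # would raise immediately, before any dataset work
```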
transformers
24,709
open
[WIP] Add `kosmos-2`
# What does this PR do? [WIP] Add `kosmos-2`
07-07-2023 10:59:09
07-07-2023 10:59:09
The docs for this PR live [here](https://moon-ci-docs.huggingface.co/docs/transformers/pr_24709). All of your documentation changes will be reflected on that endpoint.<|||||>any updates?<|||||>WIP, but a bit slow pace
transformers
24,708
closed
Resume from checkpoint on fused AdamW raises device errors
### System Info Hi, I'm trying to resume a very minimal T5 model after some pre-training. * transformers==4.30.1 * accelerate==0.20.3 * datasets==2.12.0 I'm using `Trainer`'s built-in `resume_from_checkpoint` argument, and I get the following error message: ```RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument state_steps in method wrapper_CUDA___fused_adamw_)``` The model trains fine, so I don't think there's anything wrong with the model training code per se. ### Who can help? @sgugger ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction ``` model = T5ForConditionalGeneration(hf_model_config) tokenizer = load_tokenizer(tokenizer_path, max_length=max_position_embeddings) training_args = TrainingArguments( ... bf16=True, optim='adamw_torch_fused', ... ) trainer = Trainer( model=model, ... ) trainer.train(resume_from_checkpoint=True) ``` ### Expected behavior Resuming from my latest checkpoint
07-07-2023 09:59:43
07-07-2023 09:59:43
Hi @ideasbyjin, thanks for raising this issue. In order for us to be able to help, we'll need a minimal code snippet for us to be able to reproduce the error. Could you provide us with some more information on the running environment: run `transformers-cli env` in the terminal and copy-paste the output? Have you run tried training and resuming from checkpoint with a different optimizer than `adamw_torch_fused`? Was it successful?<|||||>Hi @amyeroberts! Yep, if I use `adamw_torch` then it seems to train & resume perfectly OK, so I take it it's the fused implementation that's raising issues. ``` - `transformers` version: 4.30.1 - Platform: Linux-5.15.0-1019-aws-x86_64-with-glibc2.35 - Python version: 3.10.6 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.0 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: True - Using distributed or parallel set-up in script?: DDP ```<|||||>Also, I can't provide a more comprehensive example as it's private at this stage, but the gist is that it's running the `T5ForConditionalGeneration` model. Again, with standard `adamw_torch` it seems to train/resume with 0 issues. When running with `adamw_torch_fused` , it clearly detects the correct checkpoint and loads the optimizer state (see below) but I can't really pin down where there might be device discrepancies that's upsetting the optimizer ``` Currently training with a batch size of: 256 ***** Running training ***** Num examples = 14,805,069 Num Epochs = 1 Instantaneous batch size per device = 256 Total train batch size (w. parallel, distributed & accumulation) = 256 Gradient Accumulation steps = 1 Total optimization steps = 100 Number of trainable parameters = 8,311,296 Continuing training from checkpoint, will skip to saved global_step Continuing training from epoch 0 Continuing training from global step 20 Will skip the first 0 epochs then the first 20 batches in the first epoch. 0%| | 0/100 [00:00<?, ?it/s] ```<|||||>@ideasbyjin OK, interesting. Could you try updating the pytorch version? There were some know issues with fused AdamW and fp16 - #22144 - which _should_ have been resolved in 2.0.1.<|||||>Thanks @amyeroberts , no avail on using PyTorch 2.0.1 though. ``` - `transformers` version: 4.30.1 - Platform: Linux-5.15.0-1019-aws-x86_64-with-glibc2.31 - Python version: 3.10.12 - Huggingface_hub version: 0.16.3 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu118 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: True - Using distributed or parallel set-up in script?: DDP ``` ``` Will skip the first 0 epochs then the first 20 batches in the first epoch. 
0%| | 0/100 [00:00<?, ?it/s]Traceback (most recent call last): trainer.train(resume_from_checkpoint=True) File "/.../.conda/envs/pytorch2.0.1/lib/python3.10/site-packages/transformers/trainer.py", line 1645, in train return inner_training_loop( File "/.../.conda/envs/pytorch2.0.1/lib/python3.10/site-packages/transformers/trainer.py", line 2007, in _inner_training_loop self.optimizer.step() File "/.../.conda/envs/pytorch2.0.1/lib/python3.10/site-packages/accelerate/optimizer.py", line 140, in step self.optimizer.step(closure) File "/.../conda/envs/pytorch2.0.1/lib/python3.10/site-packages/torch/optim/lr_scheduler.py", line 69, in wrapper return wrapped(*args, **kwargs) File "/.../.conda/envs/pytorch2.0.1/lib/python3.10/site-packages/torch/optim/optimizer.py", line 280, in wrapper out = func(*args, **kwargs) File "/.../.conda/envs/pytorch2.0.1/lib/python3.10/site-packages/torch/optim/optimizer.py", line 33, in _use_grad ret = func(self, *args, **kwargs) File "/.../.conda/envs/pytorch2.0.1/lib/python3.10/site-packages/torch/optim/adamw.py", line 171, in step adamw( File "/.../.conda/envs/pytorch2.0.1/lib/python3.10/site-packages/torch/optim/adamw.py", line 321, in adamw func( File "/.../.conda/envs/pytorch2.0.1/lib/python3.10/site-packages/torch/optim/adamw.py", line 615, in _fused_adamw torch._fused_adamw_( RuntimeError: Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cpu! (when checking argument for argument state_steps in method wrapper_CUDA___fused_adamw_) ```<|||||>Right, I think I discovered the issue, and it's a bit to do with both PyTorch and HF. I'll explain: This first starts with how the HF Trainer reloads an optimizer from a checkpoint, https://github.com/huggingface/transformers/blob/v4.30.1/src/transformers/trainer.py#L2542 Since the optimizer's states are loaded onto `cpu` (in the case of a single worker, even with multiple GPUs I think?) then when you come to spinning up the fused AdamW optimizer, https://github.com/pytorch/pytorch/blob/v2.0.1/torch/optim/adamw.py#L614 `device_state_dict` has its Tensors in `cpu` even though all others are in `cuda`, so it raises the error! I found that either deliberately loading the optimizer states into `cuda` from the `Trainer`, or modifying the `torch.optim.AdamW` code to shift everything to `cuda` did the trick, though I feel like the fix on HF's end is a bit more elegant. Perhaps there's argument for changing the `map_location` of the optimizer states, especially in a scenario where we have multiple GPUs on one worker? I'll leave this to your judgment on how to navigate/fix though.
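For completeness, the local workaround I ended up with looks roughly like this (moving the freshly loaded optimizer state onto the model's device before the first `step()`):

```python
import torch


def move_optimizer_state_to(optimizer: torch.optim.Optimizer, device: torch.device) -> None:
    # After optimizer.load_state_dict(...), the per-parameter state tensors
    # (exp_avg, exp_avg_sq, step, ...) sit on CPU; the fused kernel expects CUDA.
    for state in optimizer.state.values():
        for key, value in state.items():
            if torch.is_tensor(value):
                state[key] = value.to(device)
```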
transformers
24,707
open
decoder_kwargs are not passed over to AutomaticSpeechRecognitionPipeline.tokenizer.decode
### Feature request The `postprocess()` function here https://github.com/huggingface/transformers/blob/fb78769b9c053876ed7ae152ee995b0439a4462a/src/transformers/pipelines/automatic_speech_recognition.py#L492-L494 should pass down the `decoder_kwargs` it receives down to the decoder https://github.com/huggingface/transformers/blob/fb78769b9c053876ed7ae152ee995b0439a4462a/src/transformers/pipelines/automatic_speech_recognition.py#L563 ### Motivation Sometimes the decoder will output special tokens - https://github.com/huggingface/transformers/issues/15275 - and there's no way to pass `skip_special_tokens=True` to the decoder https://github.com/huggingface/transformers/blob/fb78769b9c053876ed7ae152ee995b0439a4462a/src/transformers/models/wav2vec2/tokenization_wav2vec2.py#L407-L416 ### Your contribution I can submit a PR
07-07-2023 07:47:12
07-07-2023 07:47:12
From https://github.com/huggingface/transformers/blob/fb78769b9c053876ed7ae152ee995b0439a4462a/src/transformers/pipelines/automatic_speech_recognition.py#L549-L552, that `decoder_kwargs` is for an instance of `BeamSearchDecoderCTC`, not a tokenizer. But cc @ArthurZucker to see if he has more to say. <|||||>I guess we can have `tokenizer_decoder_kwargs` too :)<|||||>Yep, the pipeline does not support `**tokenizer_kwargs` yet. This has been talked about in #22995 and #12039. I am in for `tokenizer_kwargs`, not `tokenizer_decoder_kwargs`; we need to stay as generic as possible. A con is that having lots of kwargs is hard to maintain, and we are trying to get away from this. The tokenizer class saves the `_init_kwargs`, which contain the last parameters with which the tokenizer was called. You can set them and pass the tokenizer to the pipeline.
transformers
24,706
closed
Fix flaky `test_for_warning_if_padding_and_no_attention_mask`
# What does this PR do? This concerns the test added in #24510. It has failed a few times (on my PRs, and once on someone else's PR), and now it fails on the latest daily CI. This test is flaky because it tests the functionality of `warn_if_padding_and_no_attention_mask`, which uses `logger.warning_once(warn_string)` - and `warning_once` uses `@functools.lru_cache(None)`. If any test triggers this warning before `test_for_warning_if_padding_and_no_attention_mask`, the cache is already populated, we won't get the expected warning in `test_for_warning_if_padding_and_no_attention_mask`, and the test fails.
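One way to make the test independent of test ordering is to reset that cache at the start of the test, assuming we keep `warning_once` backed by `lru_cache` (a sketch):

```python
from transformers.utils import logging

logger = logging.get_logger("transformers.modeling_utils")
# `warning_once` is wrapped in functools.lru_cache, so a warning emitted by an
# earlier test suppresses it here; clear the cache before asserting on it.
logger.warning_once.cache_clear()
```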
07-07-2023 07:45:11
07-07-2023 07:45:11
_The documentation is not available anymore as the PR was closed or merged._
transformers
24,705
open
elif self.fsdp is not None and self.args.fsdp_config["xla"]:
### System Info - `transformers` version: 4.24.0 - Platform: Linux-5.4.143.bsk.8-amd64-x86_64-with-glibc2.28 - Python version: 3.10.9 - Huggingface_hub version: 0.10.1 - PyTorch version (GPU?): 1.12.1 (False) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: <fill in> - Using distributed or parallel set-up in script?: <fill in> ### Who can help? @sgugger ### Information - [ ] The official example scripts - [ ] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [ ] My own task or dataset (give details below) ### Reproduction In the trainer, if `self.args.fsdp_config["xla"]` is true, the trainer will wrap the model layers. Does this mean that if I want to use FSDP to shard the model, I must install torch-xla > 2.0? ### Expected behavior I want to know whether I must install torch-xla for FSDP training.
07-07-2023 05:23:11
07-07-2023 05:23:11
Hi @duanzhenyu001, thanks for raising an issue! This is a question best placed in our [forums](https://discuss.huggingface.co/). We try to reserve the github issues for feature requests and bug reports. That being said, checking the docs it's possible to see that [XLA supports FDSP](https://huggingface.co/docs/transformers/main/main_classes/trainer#pytorchxla-fully-sharded-data-parallel), but [FDSP can be used separately](https://huggingface.co/docs/transformers/main/main_classes/trainer#pytorch-fully-sharded-data-parallel). If you wish to use XLA with FDSP, it's necessary to install torch-xla >= 2.0. <|||||>@amyeroberts thanks a lot for your reply. when I use huggingface trainer fsdp mode finetune a 6B model,I got oom even 8 V100-32G gpu used. I didn't find where trainer shard my model when I use fsdp without xla. could you please point it out in the trainer code. thanks very much. <|||||>@amyeroberts and here is my TrainingArguments when I finetune glm2 a 6B model. TrainingArguments( _n_gpu=1, adafactor=False, adam_beta1=0.9, adam_beta2=0.999, adam_epsilon=1e-08, auto_find_batch_size=False, bf16=False, bf16_full_eval=False, data_seed=None, dataloader_drop_last=False, dataloader_num_workers=0, dataloader_pin_memory=True, ddp_backend=None, ddp_bucket_cap_mb=None, ddp_find_unused_parameters=None, ddp_timeout=1800, debug=[], deepspeed=None, disable_tqdm=False, do_eval=True, do_predict=False, do_train=True, eval_accumulation_steps=64, eval_delay=0, eval_steps=None, evaluation_strategy=epoch, fp16=True, fp16_backend=auto, fp16_full_eval=False, fp16_opt_level=O1, fsdp=full_shard auto_wrap, fsdp_config={'fsdp_forward_prefetch': True, 'fsdp_sync_module_states': True, 'fsdp_use_orig_params': True, 'xla': False, 'fsdp_transformer_layer_cls_to_wrap': 'GLMBlock'}, fsdp_min_num_params=0, fsdp_transformer_layer_cls_to_wrap=GLMBlock, full_determinism=False, gradient_accumulation_steps=128, gradient_checkpointing=False, greater_is_better=None, group_by_length=False, half_precision_backend=auto, hub_model_id=None, hub_private_repo=False, hub_strategy=every_save, hub_token=<HUB_TOKEN>, ignore_data_skip=False, include_inputs_for_metrics=False, jit_mode_eval=False, label_names=None, label_smoothing_factor=0.0, learning_rate=2e-05, length_column_name=length, load_best_model_at_end=False, local_rank=1, log_level=debug, log_level_replica=warning, log_on_each_node=True, logging_dir=/mnt/bn/mods-llm/duanzhenyu/llm/ds_llm_sft/data/models/finetune/runs/Jul14_06-57-19_mlxlab25ta1apm6482c909-20230609063905-cqjk4l-u3z93n-worker, logging_first_step=False, logging_nan_inf_filter=True, logging_steps=1, logging_strategy=steps, lr_scheduler_type=cosine, max_grad_norm=None, max_steps=-1, metric_for_best_model=None, mp_parameters=, no_cuda=False, num_train_epochs=2, optim=adamw_torch, optim_args=None, output_dir=/mnt/bn/mods-llm/duanzhenyu/llm/ds_llm_sft/data/models/finetune, overwrite_output_dir=False, past_index=-1, per_device_eval_batch_size=1, per_device_train_batch_size=1, prediction_loss_only=False, push_to_hub=False, push_to_hub_model_id=None, push_to_hub_organization=None, push_to_hub_token=<PUSH_TO_HUB_TOKEN>, ray_scope=last, remove_unused_columns=True, report_to=['tensorboard'], resume_from_checkpoint=None, run_name=/mnt/bn/mods-llm/duanzhenyu/llm/ds_llm_sft/data/models/finetune, save_on_each_node=False, save_safetensors=False, save_steps=50, save_strategy=steps, save_total_limit=1, seed=42, sharded_ddp=[], skip_memory_metrics=True, tf32=None, torch_compile=False, torch_compile_backend=None, 
torch_compile_mode=None, torchdynamo=None, tpu_metrics_debug=False, tpu_num_cores=None, use_ipex=False, use_legacy_prediction_loop=False, use_mps_device=False, warmup_ratio=0.0, warmup_steps=1000, weight_decay=0.1, xpu_backend=None, )<|||||>@amyeroberts and in the function "create_accelerator_and_postprocess", `self.accelerator = Accelerator(deepspeed_plugin=self.args.deepspeed_plugin, gradient_accumulation_steps=self.args.gradient_accumulation_steps)` is always initialized. If I don't set the env var ACCELERATE_USE_FSDP to true, then in "self.is_fsdp_enabled = getattr(self.accelerator.state, "fsdp_plugin", None) is not None" self.is_fsdp_enabled will always be false. Is this what you really wanted? I'm also confused about the intention of this function.<|||||>cc @pacman100 who will be able to comment on the OOM memory issues and whether this is expected. Please note that recently there was a large update with Trainer, and it now uses accelerate in the background. In the issue information, I see that you're using v4.24. I suggest updating to a more recent release to benefit from this update and any bug resolutions in between. <|||||>> I'm now using v4.30.2, but still get OOM errors when training on 8 V100-32G GPUs. @pacman100 can someone give any suggestions? Thanks a lot<|||||>Hello @duanzhenyu001, how are you launching the training script? A minimal training script is required for a deep dive.<|||||>The following works on 4 A100 80GB GPUs with GPT-J (6B model), although the entire VRAM is occupied on each: ``` cd transformers/examples/pytorch/language-modeling torchrun --nnodes 1 --nproc-per-node 4 run_clm.py --model_name_or_path EleutherAI/gpt-j-6b --dataset_name wikitext --dataset_config_name wikitext-2-raw-v1 --per_device_train_batch_size 1 --per_device_eval_batch_size 1 --do_train --do_eval --output_dir /tmp/test-clm --gradient_accumulation_steps 8 --overwrite_output_dir --fsdp "full_shard auto_wrap" --fsdp_transformer_layer_cls_to_wrap "GPTJBlock" --bf16 ``` The reason for the OOM at your end is plausibly large sequence lengths, such as >= 1024. In such cases, gradient/activation checkpointing is recommended with `--gradient_checkpointing`.
transformers
24,704
closed
bug for from_pretrained method with ignore_mismatched_sizes=True
### System Info - `transformers` version: 4.31.0.dev0 - Platform: Linux-5.15.0-1040-azure-x86_64-with-glibc2.17 - Python version: 3.8.16 - Huggingface_hub version: 0.15.1 - Safetensors version: 0.3.1 - PyTorch version (GPU?): 2.0.1+cu117 (True) - Tensorflow version (GPU?): not installed (NA) - Flax version (CPU?/GPU?/TPU?): not installed (NA) - Jax version: not installed - JaxLib version: not installed - Using GPU in script?: no - Using distributed or parallel set-up in script?: no ### Who can help? @ArthurZucker @sgugg ### Information - [ ] The official example scripts - [X] My own modified scripts ### Tasks - [ ] An officially supported task in the `examples` folder (such as GLUE/SQuAD, ...) - [X] My own task or dataset (give details below) ### Reproduction When setting `ignore_mismatched_sizes=True` in `from_pretrained` method, it would give errors. ```python from transformers import AutoModelForCausalLM model_type = "facebook/opt-6.7b" model = AutoModelForCausalLM.from_pretrained(model_type,max_position_embeddings=4096, ignore_mismatched_sizes=True) ``` The tracebacks are: <img width="987" alt="image" src="https://github.com/huggingface/transformers/assets/38466901/ce67a66b-ed9a-42fe-871d-eb1ffe348da0"> However, this wouldn't happen in smaller model. Simply changing `model_type` to `facebook/opt-2.7b` is fine. And this bug is not specific for `OptModel`, `EleutherAI/pythia-6.9b` would also trigger error. The trackback for `EleutherAI/pythia-6.9b` is: <img width="882" alt="image" src="https://github.com/huggingface/transformers/assets/38466901/7b3b3bea-0a03-4e5c-aef1-b386bff11be3"> ### Expected behavior loading weights successfully without error.
07-07-2023 05:22:47
07-07-2023 05:22:47
Hey 🤗 ! Thanks for reporting, this is indeed a bug! I tracked this down to #24505 (this [commit](https://github.com/huggingface/transformers/commit/8e5d1619b3e57367701d74647e87b95f8dba5409)). Will probably let @sgugger handle it, I don't have bandwidth to solve it right now! <|||||>No, this has no link to #24505, the code sample also fails on v4.30.2 and, looking at the fix, it never worked before. The PR linked above should fix it.<|||||>I tested with `pip install -q transformers==4.30.2` and it worked fine (no error, but maybe no resize?), same for previous versions. When checking out commit after commit, this was the failing one, but I think I missed something!