
New posts in huggingface-transformers

How to truncate input in the Huggingface pipeline?
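
A minimal sketch of one common answer to this: text pipelines forward tokenizer arguments such as `truncation` and `max_length` at call time, so over-long inputs get clipped to the model's limit. The checkpoint name is only an example.

```python
from transformers import pipeline

# Example checkpoint; any text-classification model behaves the same way.
classifier = pipeline(
    "sentiment-analysis",
    model="distilbert-base-uncased-finetuned-sst-2-english",
)

long_text = "a very long review ... " * 500

# truncation/max_length are passed through to the tokenizer,
# so the input is cut to 512 tokens instead of raising a length error.
result = classifier(long_text, truncation=True, max_length=512)
print(result)
```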

How can I solve ImportError: Using the `Trainer` with `PyTorch` requires `accelerate>=0.20.1` when using Huggingface's TrainingArguments?

How to apply a pretrained transformer model from huggingface?
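
A minimal sketch of the usual pattern with the Auto classes, loading a checkpoint and running one inference step; the model name is an assumed example.

```python
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "distilbert-base-uncased-finetuned-sst-2-english"  # assumed example checkpoint
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

inputs = tokenizer("I really enjoyed this movie.", return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

predicted_id = logits.argmax(dim=-1).item()
print(model.config.id2label[predicted_id])
```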

Saving a fine-tuned model locally
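
A sketch of the usual pattern, assuming the fine-tuned model and tokenizer are already in memory; `./my-finetuned-model` is a placeholder path.

```python
# Save weights, config and tokenizer files to a local directory.
model.save_pretrained("./my-finetuned-model")
tokenizer.save_pretrained("./my-finetuned-model")

# Reload later from the same directory instead of the Hub.
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model = AutoModelForSequenceClassification.from_pretrained("./my-finetuned-model")
tokenizer = AutoTokenizer.from_pretrained("./my-finetuned-model")
```

If training was done with `Trainer`, `trainer.save_model("./my-finetuned-model")` writes the model files to the same kind of directory.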

Training loss is not decreasing for roberta-large, but training works fine for roberta-base and bert-base-uncased

AttributeError: 'GPT2TokenizerFast' object has no attribute 'max_len'
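
This error typically comes from older example scripts: the `max_len` attribute was deprecated and removed from tokenizers in newer transformers releases, with `model_max_length` as the replacement. A short sketch:

```python
from transformers import GPT2TokenizerFast

tokenizer = GPT2TokenizerFast.from_pretrained("gpt2")

# Old code: tokenizer.max_len  -> AttributeError on recent transformers versions
# Replacement attribute:
print(tokenizer.model_max_length)  # 1024 for GPT-2
```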

Cannot import BertModel from transformers

How to download hugging face sentiment-analysis pipeline to use it offline?
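
A sketch, assuming you can download once on a connected machine and then point the pipeline at the saved directory (the path is a placeholder):

```python
from transformers import pipeline

# 1) On a machine with internet access: download and save everything locally.
nlp = pipeline("sentiment-analysis")
nlp.model.save_pretrained("./sentiment-model")
nlp.tokenizer.save_pretrained("./sentiment-model")

# 2) Offline: load the pipeline from the local directory only.
offline_nlp = pipeline(
    "sentiment-analysis",
    model="./sentiment-model",
    tokenizer="./sentiment-model",
)
print(offline_nlp("Works without a network connection."))
```

Setting the environment variable `TRANSFORMERS_OFFLINE=1` additionally prevents any accidental network calls.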

Passing two evaluation datasets to HuggingFace Trainer objects
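
A hedged sketch: recent transformers versions accept a dict of datasets for `eval_dataset`, and each metric is reported with the dict key as a prefix. The model and dataset variables are placeholders.

```python
from transformers import Trainer, TrainingArguments

args = TrainingArguments(output_dir="out", evaluation_strategy="epoch")

trainer = Trainer(
    model=model,                # fine-tuned or pretrained model, created elsewhere
    args=args,
    train_dataset=train_ds,
    # Newer transformers versions accept a dict here; metrics are logged
    # as eval_in_domain_* and eval_out_of_domain_*.
    eval_dataset={"in_domain": eval_ds_a, "out_of_domain": eval_ds_b},
)
trainer.train()
```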

Facing SSL Error with Huggingface pretrained models

Mapping text data through huggingface tokenizer
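
A sketch using `datasets.map` with a batched tokenize function; the dataset and column names are assumptions.

```python
from datasets import load_dataset
from transformers import AutoTokenizer

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
dataset = load_dataset("imdb")  # assumed example dataset with a "text" column

def tokenize_batch(batch):
    return tokenizer(batch["text"], truncation=True, max_length=256)

# batched=True lets the fast tokenizer process many examples per call.
tokenized = dataset.map(tokenize_batch, batched=True, remove_columns=["text"])
print(tokenized["train"][0].keys())
```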

How do I translate from Chinese to English using HuggingFace?
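
A sketch using the translation pipeline; `Helsinki-NLP/opus-mt-zh-en` is one commonly used Chinese-to-English checkpoint, named here as an assumption rather than the only option.

```python
from transformers import pipeline

translator = pipeline("translation", model="Helsinki-NLP/opus-mt-zh-en")
print(translator("今天天气很好。", max_length=64))
```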

Passing multiple sentences to BERT?
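
A sketch: the tokenizer accepts a list of sentences and returns one padded batch, and sentence pairs (e.g. for NLI) are passed as two arguments so they are joined with `[SEP]`.

```python
import torch
from transformers import AutoTokenizer, AutoModel

tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
model = AutoModel.from_pretrained("bert-base-uncased")

sentences = ["The cat sat on the mat.", "Transformers batch inputs easily."]

# A list of strings becomes one padded batch of shape (batch_size, seq_len).
batch = tokenizer(sentences, padding=True, truncation=True, return_tensors="pt")
with torch.no_grad():
    outputs = model(**batch)
print(outputs.last_hidden_state.shape)

# A sentence pair is encoded as a single sequence: [CLS] A [SEP] B [SEP]
pair = tokenizer("A man is eating.", "Someone is having a meal.", return_tensors="pt")
```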

AutoTokenizer.from_pretrained fails to load locally saved pretrained tokenizer (PyTorch)
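
Loading from a local path works when the argument is the directory written by `save_pretrained` (not a single file inside it). A sketch with a placeholder path:

```python
from transformers import AutoTokenizer

# Save: writes tokenizer_config.json, vocab files, special_tokens_map.json, etc.
tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
tokenizer.save_pretrained("./local-tokenizer")

# Load: pass the directory that holds those files.
reloaded = AutoTokenizer.from_pretrained("./local-tokenizer")
print(reloaded("hello world"))
```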

How to truncate a Bert tokenizer in Transformers library
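
A sketch of truncating at tokenization time with `truncation` and `max_length`:

```python
from transformers import BertTokenizerFast

tokenizer = BertTokenizerFast.from_pretrained("bert-base-uncased")

encoded = tokenizer(
    "a very long sentence " * 200,
    truncation=True,       # cut anything beyond max_length
    max_length=128,        # hard limit, including [CLS] and [SEP]
    padding="max_length",  # optional: pad shorter inputs up to 128
)
print(len(encoded["input_ids"]))  # 128
```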

Remove downloaded TensorFlow and PyTorch (Hugging Face) models
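
A hedged sketch using the cache-scanning utilities available in recent huggingface_hub releases; on older installs the downloaded files live under `~/.cache/huggingface/` and the per-model directories can simply be deleted.

```python
from huggingface_hub import scan_cache_dir

cache_info = scan_cache_dir()
for repo in cache_info.repos:
    print(repo.repo_id, repo.size_on_disk_str, repo.repo_path)
# Deleting a repo's directory (repo.repo_path) frees the space;
# the model is simply re-downloaded the next time it is requested.
```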

How to free GPU memory in PyTorch
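
The common pattern: drop the Python references holding GPU tensors, run the garbage collector, then release PyTorch's cached blocks back to the driver. A sketch:

```python
import gc
import torch

# Assume `model` and `batch` currently hold GPU tensors.
del model, batch            # drop the Python references
gc.collect()                # collect anything only reachable through cycles
torch.cuda.empty_cache()    # return cached allocator blocks to the driver

print(torch.cuda.memory_allocated(), torch.cuda.memory_reserved())
```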

Target modules for applying PEFT / LoRA on different models
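
`target_modules` names the submodules that receive LoRA adapters, and the right names depend on the architecture (e.g. `q_proj`/`v_proj` for LLaMA-style models, `query`/`value` for BERT). A hedged sketch assuming a causal LM loaded elsewhere as `base_model`:

```python
from peft import LoraConfig, get_peft_model

lora_config = LoraConfig(
    r=8,
    lora_alpha=16,
    lora_dropout=0.05,
    # Module names differ per architecture; print(base_model) shows them.
    target_modules=["q_proj", "v_proj"],
    task_type="CAUSAL_LM",
)

peft_model = get_peft_model(base_model, lora_config)
peft_model.print_trainable_parameters()
```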

How to use forward() method instead of model.generate() for T5 model
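
The key difference: `forward()` is a single teacher-forced pass and needs decoder inputs (or `labels`, from which they are shifted internally), whereas `generate()` decodes autoregressively. A sketch with `t5-small`:

```python
import torch
from transformers import T5TokenizerFast, T5ForConditionalGeneration

tokenizer = T5TokenizerFast.from_pretrained("t5-small")
model = T5ForConditionalGeneration.from_pretrained("t5-small")

enc = tokenizer("translate English to German: The house is wonderful.", return_tensors="pt")
labels = tokenizer("Das Haus ist wunderbar.", return_tensors="pt").input_ids

# forward(): decoder inputs are derived from `labels`; returns loss and per-step logits.
outputs = model(input_ids=enc.input_ids, attention_mask=enc.attention_mask, labels=labels)
print(outputs.loss, outputs.logits.shape)  # logits: (batch, target_len, vocab_size)

# generate(): decodes step by step from the encoder output.
generated = model.generate(**enc, max_new_tokens=20)
print(tokenizer.decode(generated[0], skip_special_tokens=True))
```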

ModuleNotFoundError: No module named 'transformers'