
New posts in huggingface-transformers

How can I make sentence-BERT throw an exception if the text exceeds max_seq_length, and what is the max possible max_seq_length for all-MiniLM-L6-v2?

Huggingface MarianMT translators lose content, depending on the model

How to efficiently mean-pool BERT embeddings while excluding padding?

HuggingFace's linear scheduler with warmup parameters

HuggingFace Tokenizer.from_file(): Exception: data did not match any variant of untagged enum ModelWrapper

Input type into Linear4bit is torch.float16, but bnb_4bit_compute_type=torch.float32 (default). This will lead to slow inference or training speed

How to Load a 4-bit Quantized VLM Model from Hugging Face with Transformers?

Loading checkpoint shards takes too long

Huggingface AutoTokenizer can't load from local path

What is so special about special tokens?

Transformers pretrained model with dropout setting

"Unsupported number of image dimensions" while using image_utils from Transformers

Asking to truncate to max_length but no maximum length is provided and the model has no predefined maximum length. Default to no truncation

How to load transformers pipeline from folder?

T5Tokenizer requires the SentencePiece library but it was not found in your environment
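For the mean-pooling question above, the usual approach is to weight each token embedding by the attention mask so padding positions contribute nothing to the average. A minimal sketch of that arithmetic, written in NumPy for clarity (in practice you would apply the same operations to the model's `last_hidden_state` and `attention_mask` tensors in PyTorch):

```python
import numpy as np

def mean_pool(last_hidden_state: np.ndarray, attention_mask: np.ndarray) -> np.ndarray:
    """Average token embeddings, counting only non-padding positions.

    last_hidden_state: (batch, seq_len, hidden)
    attention_mask:    (batch, seq_len), 1 = real token, 0 = padding
    """
    # Expand the mask to (batch, seq_len, 1) so it broadcasts over hidden dim.
    mask = attention_mask[..., None].astype(float)
    summed = (last_hidden_state * mask).sum(axis=1)          # (batch, hidden)
    counts = np.clip(mask.sum(axis=1), 1e-9, None)           # avoid div-by-zero
    return summed / counts

# Toy check: two real tokens, one padding token whose values must be ignored.
hidden = np.array([[[1.0, 2.0], [3.0, 4.0], [100.0, 100.0]]])
mask = np.array([[1, 1, 0]])
print(mean_pool(hidden, mask))  # [[2. 3.]]
```

The same pattern vectorizes over a whole batch, which is why it is preferred over slicing each sequence to its true length in a Python loop.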