New posts in huggingface

Transformers gets killed for no reason on Linux

ValueError: Tokenizer class MarianTokenizer does not exist or is not currently imported

python huggingface nmt
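
A minimal sketch of the usual fix, assuming the error comes from the missing sentencepiece dependency that MarianTokenizer needs (install it with `pip install sentencepiece sacremoses`); the checkpoint name is only an example:

    # Requires: pip install transformers sentencepiece sacremoses
    from transformers import MarianMTModel, MarianTokenizer

    model_name = "Helsinki-NLP/opus-mt-en-de"  # example checkpoint
    tokenizer = MarianTokenizer.from_pretrained(model_name)
    model = MarianMTModel.from_pretrained(model_name)

    batch = tokenizer(["Hello world"], return_tensors="pt", padding=True)
    print(tokenizer.batch_decode(model.generate(**batch), skip_special_tokens=True))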

Changing the Default Cache Path for All HuggingFace Data

python nlp huggingface
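
A minimal sketch of one common approach: point the Hugging Face cache environment variables at a new location before importing any of the libraries (the paths below are only examples):

    import os

    # Must be set before transformers/datasets are imported; paths are examples.
    os.environ["HF_HOME"] = "/data/hf_cache"                          # umbrella directory for hub downloads
    os.environ["TRANSFORMERS_CACHE"] = "/data/hf_cache/transformers"  # older variable, still honoured
    os.environ["HF_DATASETS_CACHE"] = "/data/hf_cache/datasets"

    from transformers import AutoModel
    model = AutoModel.from_pretrained("bert-base-uncased")  # now cached under /data/hf_cache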

How to specify number of target classes for TFRobertaSequenceClassification?
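
A minimal sketch, assuming the class meant is TFRobertaForSequenceClassification; num_labels sizes the classification head (3 is only an example value):

    from transformers import TFRobertaForSequenceClassification

    # num_labels controls the output dimension of the classification head.
    model = TFRobertaForSequenceClassification.from_pretrained("roberta-base", num_labels=3)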

Setting Huggingface cache in Google Colab notebook to Google Drive
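
A minimal sketch of one way to do this in Colab, assuming the cache should live under MyDrive (the folder name is an example); set the variable before importing transformers or datasets:

    from google.colab import drive
    drive.mount("/content/drive")

    import os
    # Downloads now land on Drive and survive runtime resets.
    os.environ["HF_HOME"] = "/content/drive/MyDrive/hf_cache"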

Model not calculating loss during training returning ValueError (Huggingface/BERT)

How to save a SetFit trainer locally after training
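
A minimal sketch, assuming a setfit release where the fitted model is exposed as trainer.model and supports save_pretrained/from_pretrained; the toy data, checkpoint, and folder name are only examples:

    from datasets import Dataset
    from setfit import SetFitModel, SetFitTrainer

    train_ds = Dataset.from_dict({"text": ["great", "awful"], "label": [1, 0]})  # toy data
    model = SetFitModel.from_pretrained("sentence-transformers/paraphrase-MiniLM-L3-v2")
    trainer = SetFitTrainer(model=model, train_dataset=train_ds)
    trainer.train()

    # Persist the fitted model to a local folder, then reload it without retraining.
    trainer.model.save_pretrained("my_setfit_model")
    reloaded = SetFitModel.from_pretrained("my_setfit_model")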

How to use diffusers with custom ckpt file
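
A minimal sketch, assuming a diffusers version that provides from_single_file (roughly 0.18+) and a Stable Diffusion style checkpoint; the file path and prompt are examples:

    from diffusers import StableDiffusionPipeline

    # Loads a monolithic .ckpt/.safetensors file instead of a diffusers model folder.
    pipe = StableDiffusionPipeline.from_single_file("./my_custom_model.ckpt")
    pipe = pipe.to("cuda")  # optional, if a GPU is available
    image = pipe("a photo of an astronaut riding a horse").images[0]
    image.save("out.png")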

TheBloke/Llama-2-7b does not appear to have a file named pytorch_model.bin, tf_model.h5, model.ckpt or flax_model.msgpack

NameError: name 'tokenize_and_split_data' is not defined in Python code

How to handle sequences longer than 512 tokens in LayoutLMv3?

AttributeError: 'AcceleratorState' object has no attribute 'distributed_type'

huggingface accelerate

OSError: meta-llama/Llama-2-7b-chat-hf is not a local folder
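
This error often just means the checkpoint could not be downloaded: the meta-llama repos are gated. A sketch of the usual workflow, assuming access to the repo has already been granted on the Hub:

    from huggingface_hub import login
    login()  # paste a token from https://huggingface.co/settings/tokens

    from transformers import AutoModelForCausalLM, AutoTokenizer
    model_id = "meta-llama/Llama-2-7b-chat-hf"
    tokenizer = AutoTokenizer.from_pretrained(model_id)
    model = AutoModelForCausalLM.from_pretrained(model_id)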

How do I save a Huggingface dataset?
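
A minimal sketch using the datasets library's save_to_disk / load_from_disk pair; the dataset name, slice, and folder are only examples:

    from datasets import load_dataset, load_from_disk

    ds = load_dataset("imdb", split="train[:100]")  # small example slice
    ds.save_to_disk("./imdb_train_sample")          # writes Arrow files plus metadata
    reloaded = load_from_disk("./imdb_train_sample")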

How can I solve ImportError: Using the `Trainer` with `PyTorch` requires `accelerate>=0.20.1` when using Huggingface's TrainingArguments?
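
A sketch of the usual remedy, assuming a pip-managed environment: upgrade accelerate and restart the kernel so the new version is actually imported:

    # In a terminal or notebook cell first:
    #   pip install -U "accelerate>=0.20.1" transformers
    # then restart the runtime/kernel and verify:
    import accelerate
    print(accelerate.__version__)  # should be 0.20.1 or newer before building TrainingArguments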

How does huggingface/sentence-transformers figure out a model's input/output shapes?
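
A short sketch of where sentence-transformers exposes this information; the model name is only an example:

    from sentence_transformers import SentenceTransformer

    model = SentenceTransformer("all-MiniLM-L6-v2")
    print(model.max_seq_length)                      # maximum input length in tokens
    print(model.get_sentence_embedding_dimension())  # size of the output embedding vector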

Target modules for applying PEFT / LoRA on different models
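
A minimal sketch of a LoraConfig; which target_modules names exist depends on the architecture (q_proj/v_proj fit Llama-style models, query/value fit BERT-style ones), so the values below are only examples:

    from peft import LoraConfig

    config = LoraConfig(
        r=8,
        lora_alpha=16,
        target_modules=["q_proj", "v_proj"],  # inspect model.named_modules() to find the right names
        lora_dropout=0.05,
    )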

I don't understand how the prompts work in llama_index

Why do we use return_tensors="pt" during tokenization?
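
A short sketch: return_tensors="pt" makes the tokenizer return PyTorch tensors instead of plain Python lists, so the encodings can be fed straight into a PyTorch model; the checkpoint is only an example:

    from transformers import AutoTokenizer

    tokenizer = AutoTokenizer.from_pretrained("bert-base-uncased")
    enc = tokenizer("hello world", return_tensors="pt")
    print(type(enc["input_ids"]))  # torch.Tensor, batched with shape (1, seq_len)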