Tags: PEFT, llama2, llama2-7b, code generation, code-generation, code, instruct, instruct-code, code-alpaca, alpaca-instruct, alpaca, llama7b, gpt2
Instructions to use monsterapi/llama2-code-generation with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- PEFT
How to use monsterapi/llama2-code-generation with PEFT:
```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model, then apply the adapter weights on top of it
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base_model, "monsterapi/llama2-code-generation")
```
- Notebooks
- Google Colab
- Kaggle
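
The adapter's tags (code-alpaca, alpaca-instruct) suggest it was fine-tuned on Alpaca-style instruction data. The exact prompt template is not stated on this card, so the helper below is a hypothetical sketch of the standard Alpaca format; `build_prompt` is a name introduced here for illustration, not part of any library.

```python
# Hypothetical Alpaca-style prompt builder -- the adapter's actual training
# template is not documented on this card, so treat this as a sketch.
def build_prompt(instruction: str, inp: str = "") -> str:
    if inp:
        return (
            "Below is an instruction that describes a task, paired with an input "
            "that provides further context. Write a response that appropriately "
            "completes the request.\n\n"
            f"### Instruction:\n{instruction}\n\n"
            f"### Input:\n{inp}\n\n"
            "### Response:\n"
        )
    return (
        "Below is an instruction that describes a task. Write a response that "
        "appropriately completes the request.\n\n"
        f"### Instruction:\n{instruction}\n\n"
        "### Response:\n"
    )

prompt = build_prompt("Write a Python function that reverses a string.")
```

The resulting string would then be tokenized and passed to `model.generate(...)` as with any causal LM.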
- Xet hash: 9d74d146efbf25f9c56af4e8908da37c259b14a6b6f56d7811064d0d457fce59
- Size of remote file: 33.6 MB
- SHA256: c17cdbe9d8c91a101e199f90a60d8d441cbaadea5cde6ccb1ce6e249e4a8cad3