Tags:
- PEFT
- llama2
- llama2-7b
- code generation
- code-generation
- code
- instruct
- instruct-code
- code-alpaca
- alpaca-instruct
- alpaca
- llama7b
- gpt2
Instructions for using monsterapi/llama2-7b-tiny-codes-code-generation with libraries, inference providers, notebooks, and local apps. Follow the sections below to get started.
- Libraries
- PEFT
How to use monsterapi/llama2-7b-tiny-codes-code-generation with PEFT:

```python
from peft import PeftModel
from transformers import AutoModelForCausalLM

# Load the base model, then apply the LoRA adapter on top of it.
base_model = AutoModelForCausalLM.from_pretrained("meta-llama/Llama-2-7b-hf")
model = PeftModel.from_pretrained(base_model, "monsterapi/llama2-7b-tiny-codes-code-generation")
```

- Notebooks
- Google Colab
- Kaggle
- Xet hash: 2e14b5eefa9e3e3e72aec2246ab6b9a8a3b8bf06805cdf2a0a2c58c621b61880
- Size of remote file: 33.6 MB
- SHA256: c90d4653bbb4f230f6ca267ab7e8b451e66ca1ddd822ce56b12ae7575305edf2
Xet efficiently stores large files inside Git by splitting them into unique chunks, which deduplicates storage and accelerates uploads and downloads.