How to use qgyd2021/sft_llama2_stack_exchange with Adapters:

```python
from adapters import AutoAdapterModel

# Base model is an assumption here: this adapter was trained on top of
# NousResearch/Llama-2-7b-hf (see the training notes below).
model = AutoAdapterModel.from_pretrained("NousResearch/Llama-2-7b-hf")
model.load_adapter("qgyd2021/sft_llama2_stack_exchange", set_active=True)
```
I followed this script to train this model. Instead of the official meta-llama/Llama-2-7b-hf model, I used NousResearch/Llama-2-7b-hf, a mirror that does not require gated access. The model was trained on the lvwerra/stack-exchange-paired dataset with the following settings:
- seq_length: 1024
- steps: 1600
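The SFT stage trains on each question paired with its preferred answer, concatenated into one text string that is then tokenized into seq_length-sized chunks. A minimal sketch of that formatting step, assuming the prompt template from the TRL stack_llama example (the field names `question` and `response_j` are how lvwerra/stack-exchange-paired labels the question and the preferred answer):

```python
def prepare_sample_text(example: dict) -> str:
    """Format a stack-exchange-paired sample into one SFT training string.

    Assumes the TRL stack_llama prompt template: the question followed by
    the preferred ("response_j") answer.
    """
    return f"Question: {example['question']}\n\nAnswer: {example['response_j']}"


# At inference time, prompt the fine-tuned model with the same template and
# let it complete the answer:
prompt = "Question: How do I reverse a list in Python?\n\nAnswer: "
```

Keeping the inference prompt in the same `Question: … Answer:` shape as the training text is what makes the fine-tuned model complete it as an answer.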