How to use PracticeLLM/Custom-KoLLM-13B-v5 with Transformers:

```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="PracticeLLM/Custom-KoLLM-13B-v5")

# Or load the model and tokenizer directly
from transformers import AutoTokenizer, AutoModelForCausalLM

tokenizer = AutoTokenizer.from_pretrained("PracticeLLM/Custom-KoLLM-13B-v5")
model = AutoModelForCausalLM.from_pretrained("PracticeLLM/Custom-KoLLM-13B-v5")
```

How to use PracticeLLM/Custom-KoLLM-13B-v5 with vLLM:
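A `text-generation` pipeline returns a list of dicts with a `generated_text` key; a small helper for pulling the completion string out can make that explicit (the helper name and generation parameters are illustrative, not part of this card):

```python
def generate_text(pipe, prompt, **gen_kwargs):
    """Run a text-generation pipeline and return only the generated string.

    Works with any callable that follows the transformers pipeline
    convention of returning [{"generated_text": ...}].
    """
    outputs = pipe(prompt, **gen_kwargs)
    return outputs[0]["generated_text"]

# Usage with the pipeline loaded above (parameters are illustrative):
# text = generate_text(pipe, "Once upon a time,", max_new_tokens=128, do_sample=True)
```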
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "PracticeLLM/Custom-KoLLM-13B-v5"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "PracticeLLM/Custom-KoLLM-13B-v5",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```
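The same completions endpoint can be called from Python with only the standard library. This sketch builds the identical JSON payload to the curl example; the endpoint URL and field names mirror the OpenAI-compatible API above, and nothing here is specific to this model:

```python
import json
from urllib import request

def build_completion_request(model, prompt, max_tokens=512, temperature=0.5):
    """Build an OpenAI-style /v1/completions payload as UTF-8 bytes."""
    payload = {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }
    return json.dumps(payload).encode("utf-8")

body = build_completion_request("PracticeLLM/Custom-KoLLM-13B-v5", "Once upon a time,")

# Sending it requires a running server, so the call is left commented out:
# req = request.Request(
#     "http://localhost:8000/v1/completions",
#     data=body,
#     headers={"Content-Type": "application/json"},
# )
# print(request.urlopen(req).read().decode())
```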
How to use PracticeLLM/Custom-KoLLM-13B-v5 with SGLang:
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "PracticeLLM/Custom-KoLLM-13B-v5" \
  --host 0.0.0.0 \
  --port 30000

# Or start the server in Docker instead:
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "PracticeLLM/Custom-KoLLM-13B-v5" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "PracticeLLM/Custom-KoLLM-13B-v5",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

How to use PracticeLLM/Custom-KoLLM-13B-v5 with Docker Model Runner:
```shell
docker model run hf.co/PracticeLLM/Custom-KoLLM-13B-v5
```
Model Developers
Model Architecture
Base Model
Training Dataset: kyujinpy/KOR-gugugu-platypus-set
Ko-LLM leaderboard (11/27; link)
| Model | Average | Ko-ARC | Ko-HellaSwag | Ko-MMLU | Ko-TruthfulQA | Ko-CommonGen V2 |
|---|---|---|---|---|---|---|
| ⭐My custom LLM 13B-v1⭐ | 50.19 | 45.99 | 56.93 | 41.78 | 41.66 | 64.58 |
| ⭐My custom LLM 13B-v4⭐ | 49.89 | 45.05 | 57.06 | 41.83 | 42.93 | 62.57 |
| ⭐My custom LLM 13B-v5⭐ | 49.50 | 44.88 | 56.74 | 42.23 | 42.82 | 60.80 |
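The Average column is the arithmetic mean of the five Ko benchmark scores, which makes a transcribed row easy to sanity-check. Note the leaderboard computes the mean from higher-precision scores, so recomputing from the two-decimal table values can drift by about ±0.01:

```python
def ko_llm_average(scores):
    """Mean of the five Ko-LLM benchmark scores, rounded to two decimals."""
    return round(sum(scores) / len(scores), 2)

# Rows from the table above, in column order:
# Ko-ARC, Ko-HellaSwag, Ko-MMLU, Ko-TruthfulQA, Ko-CommonGen V2
v1 = [45.99, 56.93, 41.78, 41.66, 64.58]
v4 = [45.05, 57.06, 41.83, 42.93, 62.57]
v5 = [44.88, 56.74, 42.23, 42.82, 60.80]

print(ko_llm_average(v1))  # → 50.19, matching the reported Average
```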
AI-Harness evaluation (link)
| Model | Copa (0-shot) | Copa (5-shot) | HellaSwag (0-shot) | HellaSwag (5-shot) | BoolQ (0-shot) | BoolQ (5-shot) | Sentineg (0-shot) | Sentineg (5-shot) |
|---|---|---|---|---|---|---|---|---|
| ⭐My custom LLM 13B-v1⭐ | 0.7987 | 0.8269 | 0.4994 | 0.5660 | 0.3343 | 0.5060 | 0.6984 | 0.9723 |
| ⭐My custom LLM 13B-v4⭐ | 0.7988 | 0.8279 | 0.4995 | 0.4953 | 0.3343 | 0.3558 | 0.7825 | 0.9698 |
| ⭐My custom LLM 13B-v5⭐ | 0.8028 | 0.8329 | 0.5082 | 0.5136 | 0.8647 | 0.8500 | 0.5524 | 0.9723 |
| beomi/llama-2-koen-13b | 0.7768 | 0.8128 | 0.4999 | 0.5127 | 0.3988 | 0.7038 | 0.5870 | 0.9748 |
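Reading the AI-Harness rows against the beomi/llama-2-koen-13b baseline: v5 leads on Copa, HellaSwag, and BoolQ at 0-shot but trails on Sentineg. A tiny script computing the 0-shot deltas (the dict literals just transcribe the table above):

```python
# 0-shot scores transcribed from the table above
v5 = {"Copa": 0.8028, "HellaSwag": 0.5082, "BoolQ": 0.8647, "Sentineg": 0.5524}
base = {"Copa": 0.7768, "HellaSwag": 0.4999, "BoolQ": 0.3988, "Sentineg": 0.5870}

# Positive delta = v5 beats the beomi/llama-2-koen-13b baseline on that task
deltas = {task: round(v5[task] - base[task], 4) for task in v5}
```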
### KO-Platypus
```python
from transformers import AutoModelForCausalLM, AutoTokenizer
import torch

repo = "PracticeLLM/Custom-KoLLM-13B-v5"

# Load the model in fp16, sharded across available devices
OpenOrca = AutoModelForCausalLM.from_pretrained(
    repo,
    return_dict=True,
    torch_dtype=torch.float16,
    device_map='auto'
)
OpenOrca_tokenizer = AutoTokenizer.from_pretrained(repo)
```
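The card does not document a prompt template. Platypus-family instruction models commonly use an Alpaca-style layout, so the helper below is an assumption to verify against the training data formatting, not this model's confirmed format:

```python
def build_prompt(instruction, input_text=""):
    """Alpaca-style prompt layout (ASSUMED template -- check the
    kyujinpy/KOR-gugugu-platypus-set formatting before relying on it)."""
    if input_text:
        return (
            "### Instruction:\n" + instruction + "\n\n"
            "### Input:\n" + input_text + "\n\n"
            "### Response:\n"
        )
    return "### Instruction:\n" + instruction + "\n\n### Response:\n"

# Usage with the model loaded above (generation parameters are illustrative):
# inputs = OpenOrca_tokenizer(build_prompt("..."), return_tensors="pt").to(OpenOrca.device)
# output = OpenOrca.generate(**inputs, max_new_tokens=256)
# print(OpenOrca_tokenizer.decode(output[0], skip_special_tokens=True))
```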