Instructions to use diff-interpretation-tuning/loras with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Diff Interpretation Tuning
How to use diff-interpretation-tuning/loras with Diff Interpretation Tuning:
# No code snippets available yet for this library.
# To use this model, check the repository files and the library's documentation.
# Want to help? PRs adding snippets are welcome at:
# https://github.com/huggingface/huggingface.js
- Notebooks
- Google Colab
- Kaggle
Adapters available in the repository:
- gemma3-4b-all-linear-layers-with-no-biases
- gemma3-4b-all-params
- gemma3-4b-rank-002
- gemma3-4b-rank-004
- gemma3-4b-rank-008
- gemma3-4b-rank-016
- gemma3-4b-rank-032
- gemma3-4b-rank-064
- gemma3-4b-rank-128
- qwen3-4b-all-linear-layers-with-no-biases
- qwen3-4b-all-params
- qwen3-4b-rank-002
- qwen3-4b-rank-004
- qwen3-4b-rank-008
- qwen3-4b-rank-016
- qwen3-4b-rank-032
- qwen3-4b-rank-064
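Since the repository does not yet ship code snippets, here is a minimal sketch of how the adapter names above can be enumerated and how one of them might be loaded. The name-building helper mirrors the `<model>-rank-<NNN>` / `<model>-all-*` pattern in the list; the commented-out loading code assumes these are standard PEFT LoRA checkpoints stored in per-adapter subfolders and that Gemma 3 4B / Qwen3 4B are the matching base models, which you should verify against the repository files.

```python
# Repository hosting the adapters (from the list above).
REPO_ID = "diff-interpretation-tuning/loras"

def adapter_names(model: str, ranks=(2, 4, 8, 16, 32, 64)):
    """Build the adapter subfolder names listed in the repo for one base model.

    Rank variants are zero-padded to three digits (e.g. rank-002);
    gemma3-4b additionally has a rank-128 variant.
    """
    names = [f"{model}-rank-{r:03d}" for r in ranks]
    names += [f"{model}-all-params", f"{model}-all-linear-layers-with-no-biases"]
    return names

# Hypothetical loading sketch (assumption: standard PEFT LoRA checkpoints,
# one per subfolder; base-model IDs are guesses and must be checked):
# from transformers import AutoModelForCausalLM
# from peft import PeftModel
# base = AutoModelForCausalLM.from_pretrained("Qwen/Qwen3-4B")
# model = PeftModel.from_pretrained(base, REPO_ID, subfolder="qwen3-4b-rank-008")
```

The helper only reconstructs the names shown in the sidebar list; it does not query the Hub. To list the subfolders authoritatively, `huggingface_hub.list_repo_files(REPO_ID)` can be used instead.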