nielsr HF Staff committed on
Commit
cdb3861
·
verified ·
1 Parent(s): 8e74169

Add model metadata and links to paper and code


Hi! I'm Niels from the community science team at Hugging Face.

This PR improves the model card by adding relevant metadata and links:
- Added `pipeline_tag: text-generation`.
- Added `library_name: transformers` as the repository follows the standard Transformers structure.
- Added `license: apache-2.0` based on the project documentation.
- Linked the research paper to its arXiv page.
- Linked the official GitHub repository for the project.
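Taken together, the metadata changes above land in the card's YAML front matter. A sketch of the resulting block (the three new fields come from this PR; the `language` entry is from the existing card):

```yaml
---
language:
- en
license: apache-2.0
library_name: transformers
pipeline_tag: text-generation
---
```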

Files changed (1): README.md (+11 -5)

````diff
@@ -1,8 +1,11 @@
 ---
 language:
 - en
-...
+license: apache-2.0
+library_name: transformers
+pipeline_tag: text-generation
 ---
+
 # AscendKernelGen/KernelGen-LM-8B
 
 ![License](https://img.shields.io/badge/License-Apache-yellow)
@@ -11,8 +14,9 @@ language:
 KernelGen-LM-8B is a state-of-the-art domain-adaptive large language model specialized for low-level NPU kernel generation, specifically for the Huawei Ascend architecture using the AscendC programming language. Built upon the Qwen3-8B backbone, it is trained on the Ascend-CoT dataset and refined via reinforcement learning with execution feedback.
 
 **Other artifacts:**
-* The **AscendKernelGen Technical Report** is published at https://arxiv.org/abs/2601.07160.
-* The **NPUKernelBench** evaluation framework is published at https://git.openi.org.cn/PCL-Benchmark/NPUKernelBench.
+* The **AscendKernelGen Technical Report** is published at [https://arxiv.org/abs/2601.07160](https://arxiv.org/abs/2601.07160).
+* The **Official Code** and **NPUKernelBench** framework are available on GitHub: [https://github.com/weich97/NPUKernelBench](https://github.com/weich97/NPUKernelBench).
+* The **NPUKernelBench** evaluation framework is also published at [https://git.openi.org.cn/PCL-Benchmark/NPUKernelBench](https://git.openi.org.cn/PCL-Benchmark/NPUKernelBench).
 
 ## Introduction
 
@@ -24,10 +28,12 @@ Our framework, **AscendKernelGen (AKGen)**, bridges the gap between general-purp
 * **Performance:** The model demonstrates significant improvement on complex Level-2 kernels compared to baselines, effectively solving tasks where general-purpose models (like Qwen3, Llama3.1) fail completely.
 
 ## Citation
+```bibtex
 @article{cao2026ascendkernelgen,
   title={AscendKernelGen: A Systematic Study of LLM-Based Kernel Generation for Neural Processing Units},
   author={Xinzi Cao and Jianyang Zhai and Pengfei Li and Zhiheng Hu and Cen Yan and Bingxu Mu and Guanghuan Fang and Bin She and Jiayu Li and Yihan Su and Dongyang Tao and Xiansong Huang and Fan Xu and Feidiao Yang and Yao Lu and Chang-Dong Wang and Yutong Lu and Weicheng Xue and Bin Zhou and Yonghong Tian},
   journal={arXiv preprint arXiv:2601.07160},
   year={2026},
-  url=https://arxiv.org/abs/2601.07160
-}
+  url={https://arxiv.org/abs/2601.07160}
+}
+```
````
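Since the card now declares `library_name: transformers` and `pipeline_tag: text-generation`, a minimal usage sketch could look like the following. The repo id is assumed from the card's heading, and the prompt is purely illustrative; loading the 8B checkpoint is left commented out because it requires substantial memory and a network download.

```python
# Hypothetical usage sketch for the Transformers text-generation pipeline.
# MODEL_ID is assumed from the model card's heading; the prompt is illustrative.

MODEL_ID = "AscendKernelGen/KernelGen-LM-8B"
prompt = (
    "Write an AscendC kernel for the Huawei Ascend NPU that performs "
    "an elementwise add of two float16 tensors."
)

# The 8B checkpoint needs significant memory; run when resources allow:
# from transformers import pipeline
# generator = pipeline("text-generation", model=MODEL_ID)
# print(generator(prompt, max_new_tokens=512)[0]["generated_text"])
```

The `pipeline("text-generation", model=...)` call is the standard entry point implied by the `pipeline_tag` added in this PR; any sampling parameters (e.g. `max_new_tokens`) are a matter of taste.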