Instructions to use Continuous-Rivals-Discrete/langflow-owt with libraries, inference providers, notebooks, and local apps. Follow these links to get started.
- Libraries
- Transformers
How to use Continuous-Rivals-Discrete/langflow-owt with Transformers:
```python
# Use a pipeline as a high-level helper
from transformers import pipeline

pipe = pipeline("text-generation", model="Continuous-Rivals-Discrete/langflow-owt", trust_remote_code=True)

# Load model directly
from transformers import AutoModelForMaskedLM

model = AutoModelForMaskedLM.from_pretrained("Continuous-Rivals-Discrete/langflow-owt", trust_remote_code=True, dtype="auto")
```
- Notebooks
- Google Colab
- Kaggle
- Local Apps
- vLLM
How to use Continuous-Rivals-Discrete/langflow-owt with vLLM:
Install from pip and serve the model
```shell
# Install vLLM from pip:
pip install vllm

# Start the vLLM server:
vllm serve "Continuous-Rivals-Discrete/langflow-owt"

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:8000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Continuous-Rivals-Discrete/langflow-owt",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker
```shell
docker model run hf.co/Continuous-Rivals-Discrete/langflow-owt
```
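The curl call above can also be issued from Python. A minimal sketch that builds the same request body; the `completion_payload` helper is hypothetical, introduced here only for illustration — sending the resulting JSON with `requests.post` or the `openai` client against the same `/v1/completions` endpoint should work equivalently:

```python
import json

# Hypothetical helper (not part of vLLM): builds the same JSON body as
# the curl example for the OpenAI-compatible /v1/completions endpoint.
def completion_payload(model, prompt, max_tokens=512, temperature=0.5):
    return {
        "model": model,
        "prompt": prompt,
        "max_tokens": max_tokens,
        "temperature": temperature,
    }

payload = completion_payload(
    "Continuous-Rivals-Discrete/langflow-owt", "Once upon a time,"
)
print(json.dumps(payload, indent=2))
```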
- SGLang
How to use Continuous-Rivals-Discrete/langflow-owt with SGLang:
Install from pip and serve the model
```shell
# Install SGLang from pip:
pip install sglang

# Start the SGLang server:
python3 -m sglang.launch_server \
  --model-path "Continuous-Rivals-Discrete/langflow-owt" \
  --host 0.0.0.0 \
  --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Continuous-Rivals-Discrete/langflow-owt",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

Use Docker images
```shell
docker run --gpus all \
  --shm-size 32g \
  -p 30000:30000 \
  -v ~/.cache/huggingface:/root/.cache/huggingface \
  --env "HF_TOKEN=<secret>" \
  --ipc=host \
  lmsysorg/sglang:latest \
  python3 -m sglang.launch_server \
    --model-path "Continuous-Rivals-Discrete/langflow-owt" \
    --host 0.0.0.0 \
    --port 30000

# Call the server using curl (OpenAI-compatible API):
curl -X POST "http://localhost:30000/v1/completions" \
  -H "Content-Type: application/json" \
  --data '{
    "model": "Continuous-Rivals-Discrete/langflow-owt",
    "prompt": "Once upon a time,",
    "max_tokens": 512,
    "temperature": 0.5
  }'
```

- Docker Model Runner
How to use Continuous-Rivals-Discrete/langflow-owt with Docker Model Runner:
```shell
docker model run hf.co/Continuous-Rivals-Discrete/langflow-owt
```
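The vLLM and SGLang servers above both speak the OpenAI completions API, so their JSON responses share the same shape. A minimal sketch of extracting the generated text; the response literal below is made up for illustration:

```python
import json

# Made-up example response in the standard OpenAI completions shape
# returned by the vLLM/SGLang servers started above.
raw = """
{
  "id": "cmpl-123",
  "object": "text_completion",
  "model": "Continuous-Rivals-Discrete/langflow-owt",
  "choices": [
    {"index": 0, "text": " there was a language model.", "finish_reason": "length"}
  ]
}
"""

response = json.loads(raw)
completion = response["choices"][0]["text"]
print(completion)  # -> " there was a language model."
```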
Add paper and code links to model card
#1, opened by nielsr (HF Staff)
README.md
CHANGED

````diff
@@ -1,44 +1,48 @@
 ---
 datasets:
 - Skylion007/openwebtext
 language:
 - en
 library_name: transformers
 license: apache-2.0
 metrics:
 - perplexity
 pipeline_tag: text-generation
 ---
 
 # LangFlow
 
-LangFlow is a continuous diffusion language model that operates in embedding space. Unlike discrete diffusion models (MDLM, SEDD, DUO), LangFlow performs diffusion directly on continuous token embeddings, enabling smoother denoising dynamics.
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-
-- **
-
-
-
-
+LangFlow is a continuous diffusion language model that operates in embedding space. Unlike discrete diffusion models (MDLM, SEDD, DUO), LangFlow performs diffusion directly on continuous token embeddings, enabling smoother denoising dynamics. It is the first continuous DLM to rival discrete diffusion models on standard language modeling benchmarks like LM1B and OpenWebText.
+
+- **Paper:** [LangFlow: Continuous Diffusion Rivals Discrete in Language Modeling](https://huggingface.co/papers/2604.11748)
+- **Code:** [GitHub Repository](https://github.com/nealchen2003/LangFlow)
+- **Project Blog:** [LangFlow Blog Post](https://caradryanl.github.io/blog/2026/langflow/)
+
+## Using LangFlow
+
+To use the pre-trained model for text generation, use the following snippet:
+
+```python
+from transformers import AutoModelForMaskedLM, AutoTokenizer
+
+tokenizer = AutoTokenizer.from_pretrained('gpt2')
+model = AutoModelForMaskedLM.from_pretrained('chumengl/langflow-owt', trust_remote_code=True)
+
+# Generate samples
+samples = model.generate_samples(num_samples=5, num_steps=128)
+texts = tokenizer.batch_decode(samples, skip_special_tokens=True)
+for text in texts:
+    print(text)
+```
+
+## Model Details
+
+- **Architecture**: DiT (Diffusion Transformer) backbone with adaptive layer normalization
+- **Context Length**: 1024 tokens
+- **Parameters**: ~130M non-embedding parameters (similar to GPT-2 medium)
+- **Training**: 1M steps on OpenWebText corpus
+- **Tokenizer**: GPT-2 tokenizer (50,257 vocab size)
+
+## Model Card Contact
+
+Chumeng Liang (chumengl@illinois.edu)
````