Qwen3.5-2B
qwen3.5-2b
Compact model for edge and mobile with native multimodal support.
Context Window: 262.1K tokens
Max Output: 8.2K tokens
Input Price: — per 1M tokens
Output Price: — per 1M tokens
Quick Access
curl pikaainews.com/api/models/qwen-qwen3-5-2b
npx pika-models info qwen-qwen3-5-2b
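The curl command above suggests a simple per-model metadata endpoint. A minimal Python sketch of calling it is below; the HTTPS scheme, the URL pattern, and the assumption that the endpoint returns JSON are inferred from the Quick Access example, not from documented API behavior.

```python
# Sketch of fetching model metadata from the endpoint shown above.
# Assumptions: HTTPS is accepted, the path pattern is /api/models/<slug>,
# and the response body is JSON. None of this is a documented contract.
import json
import urllib.request

BASE = "https://pikaainews.com/api/models"

def model_url(slug: str) -> str:
    """Build the metadata URL for a model slug (pattern assumed from the curl example)."""
    return f"{BASE}/{slug}"

def fetch_model(slug: str) -> dict:
    """Fetch and decode the JSON metadata for a model (performs a network call)."""
    with urllib.request.urlopen(model_url(slug)) as resp:
        return json.loads(resp.read().decode("utf-8"))

if __name__ == "__main__":
    print(model_url("qwen-qwen3-5-2b"))
```

For scripted use, the `npx pika-models info` command shown above covers the same lookup from the command line.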
Third-Party Providers & Aggregators
Cerebras
Wafer-scale inference. 1000+ tokens/sec for select models.
DeepInfra
Lowest per-token rates for open-source models.
Fireworks AI
Fastest inference engine. Multimodal support, HIPAA/SOC2.
Groq
Ultra-fast LPU inference. Best latency for real-time apps.
OpenRouter
500+ models, one API key. Pay-per-token, no minimums.
SiliconFlow
China-optimized inference. Strong Qwen/DeepSeek support.
Together AI
Fast open-source model inference. Sub-100ms latency.
Other qwen3.5 models
Qwen3.5-0.8B
qwen3.5-0.8b
Qwen3.5-4B
qwen3.5-4b
Qwen3.5-9B
qwen3.5-9b
Qwen3.5-27B
qwen3.5-27b
Qwen3.5-122B-A10B
qwen3.5-122b-a10b
Qwen3.5-35B-A3B
qwen3.5-35b-a3b
Qwen3.5-397B-A17B
qwen3.5-397b-a17b
Qwen3.5-Plus
qwen3.5-plus
