DeepSeek · Active · Open Source
DeepSeek-Coder-V3
deepseek-coder-v3
Specialized code generation model with state-of-the-art SWE-bench performance.
Context Window: 131.1K tokens
Max Output: 8.2K tokens
Input Price: $0.27 per 1M tokens
Output Price: $1.10 per 1M tokens
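At the posted rates, the cost of a single request is input_tokens × $0.27/1M plus output_tokens × $1.10/1M. A minimal sketch of that arithmetic (the token counts below are hypothetical, not from this card):

```shell
# Estimate request cost from the per-1M-token prices above:
# input $0.27/1M, output $1.10/1M. Token counts are made up.
input_tokens=120000
output_tokens=4000
cost=$(awk -v i="$input_tokens" -v o="$output_tokens" \
  'BEGIN { printf "%.4f", i * 0.27 / 1e6 + o * 1.10 / 1e6 }')
echo "Estimated cost: \$$cost"
```

A near-full-context request like this one lands well under a cent of output cost; input tokens dominate the bill at these rates.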
Details
Family: deepseek-coder
Parameters: 236B MoE
Training Cutoff: 2025-04-01
Released: June 1, 2025
Capabilities
Functions, Streaming, JSON Mode, Code, Tool Use
Documentation
https://api.deepseek.com/chat/completions
Evaluation Scores (2 benchmarks)
HumanEval: 92.8%
SWE-bench Verified: 48.5%
Quick Access
curl pikaainews.com/api/models/deepseek-coder-v3
npx pika-models info deepseek-coder-v3
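The documentation endpoint above is an OpenAI-style chat-completions URL. A hedged sketch of a request body for it; the model id `deepseek-coder-v3` is taken from this card, while the `DEEPSEEK_API_KEY` variable name and the body fields are assumptions to verify against the provider's docs (the network call itself is left commented out):

```shell
# Build an OpenAI-style chat-completions request body for this model.
# Field names (model, messages, max_tokens, stream) are assumed, not
# confirmed by this card; check the DeepSeek API docs before relying on them.
payload='{"model":"deepseek-coder-v3","messages":[{"role":"user","content":"Write a binary search in Python."}],"max_tokens":1024,"stream":false}'
echo "$payload"

# To actually send it (requires a valid key in DEEPSEEK_API_KEY):
# curl -s https://api.deepseek.com/chat/completions \
#   -H "Authorization: Bearer $DEEPSEEK_API_KEY" \
#   -H "Content-Type: application/json" \
#   -d "$payload"
```

The same body shape should work unchanged against the OpenAI-compatible aggregators listed below, with only the base URL and key swapped.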
Third-Party Providers & Aggregators
Cerebras
Wafer-scale inference. 1000+ tokens/sec for select models.
DeepInfra
Lowest per-token rates for open-source models.
Fireworks AI
Fastest inference engine. Multimodal support, HIPAA/SOC2.
Groq
Ultra-fast LPU inference. Best latency for real-time apps.
OpenRouter
500+ models, one API key. Pay-per-token, no minimums.
SiliconFlow
China-optimized inference. Strong Qwen/DeepSeek support.
Together AI
Fast open-source model inference. Sub-100ms latency.
