OBxChat

Providers Supporting This Model

PPIO
DeepSeek: deepseek/deepseek-v3/community
Max Context Length: 62K
Max Response Length: --
Input Cost: $0.14
Output Cost: $0.28
Fireworks
DeepSeek: deepseek/deepseek-v3/community
Max Context Length: --
Max Response Length: --
Input Cost: --
Output Cost: --
Qwen
DeepSeek: deepseek/deepseek-v3/community
Max Context Length: --
Max Response Length: --
Input Cost: --
Output Cost: --
BaiduCloud (Wenxin Qianfan)
DeepSeek: deepseek/deepseek-v3/community
Max Context Length: --
Max Response Length: --
Input Cost: --
Output Cost: --
Infinigence
DeepSeek: deepseek/deepseek-v3/community
Max Context Length: --
Max Response Length: --
Input Cost: --
Output Cost: --
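
All of the providers above expose the model under the same ID, so a call typically differs only in the base URL and API key. The sketch below assumes an OpenAI-compatible chat completions endpoint; the base URL, key, and prompt are placeholders rather than values taken from this page.

```python
# Minimal sketch of calling deepseek/deepseek-v3/community through an
# OpenAI-compatible endpoint. base_url and api_key are placeholders: replace
# them with the values for whichever provider above you actually use.
from openai import OpenAI

client = OpenAI(
    base_url="https://api.example-provider.com/v1",  # placeholder endpoint
    api_key="YOUR_API_KEY",                          # placeholder key
)

response = client.chat.completions.create(
    model="deepseek/deepseek-v3/community",
    messages=[
        {"role": "system", "content": "You are a concise assistant."},
        {"role": "user", "content": "Explain what 'max context length' means in one sentence."},
    ],
)
print(response.choices[0].message.content)
```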

Model Settings

Creativity Level (temperature)
Controls creativity. 0 = predictable responses, higher = more varied and unexpected results.
Type: FLOAT
Default: 1.00
Range: 0.00 ~ 2.00

Response Diversity (top_p)
Filters responses to the most likely words within a probability range. Lower values keep answers more focused.
Type: FLOAT
Default: 1.00
Range: 0.00 ~ 1.00

New Idea Encouragement (presence_penalty)
Encourages or discourages using new words. Higher values promote fresh ideas, while lower values allow more reuse.
Type: FLOAT
Default: 0.00
Range: -2.00 ~ 2.00

Repetition Control (frequency_penalty)
Adjusts how often the model repeats words. Higher values mean fewer repeats, while lower values allow more repetition.
Type: FLOAT
Default: 0.00
Range: -2.00 ~ 2.00

Response Length Limit (max_tokens)
Sets the max length of responses. Increase for longer replies, decrease for shorter ones.
Type: INT
Default: --

Reasoning Depth (reasoning_effort)
Determines how much effort the model puts into reasoning. Higher settings generate more thoughtful responses but take longer.
Type: STRING
Default: --
Range: low ~ high
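
These settings map onto fields of an OpenAI-style chat completion request. The sketch below is illustrative only: the client setup reuses the same placeholder endpoint as the earlier sketch, the values shown are the defaults and ranges listed above, and reasoning_effort is commented out because it only applies where the provider and model actually expose a reasoning mode (an assumption worth checking for this model).

```python
# Illustrative request that sets the parameters documented above.
# base_url and api_key are placeholders; parameter values are the listed defaults.
from openai import OpenAI

client = OpenAI(base_url="https://api.example-provider.com/v1", api_key="YOUR_API_KEY")

response = client.chat.completions.create(
    model="deepseek/deepseek-v3/community",
    messages=[{"role": "user", "content": "Write a two-sentence summary of Mixture-of-Experts models."}],
    temperature=1.0,        # Creativity Level, range 0.00 ~ 2.00
    top_p=1.0,              # Response Diversity, range 0.00 ~ 1.00
    presence_penalty=0.0,   # New Idea Encouragement, range -2.00 ~ 2.00
    frequency_penalty=0.0,  # Repetition Control, range -2.00 ~ 2.00
    max_tokens=1024,        # Response Length Limit (no default listed above)
    # reasoning_effort="medium",  # Reasoning Depth, "low" ~ "high"; reasoning models only
)
print(response.choices[0].message.content)
```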

Related Models

DeepSeek

DeepSeek: DeepSeek R1 (community)

deepseek/deepseek-r1/community
DeepSeek R1 is the latest open-source model released by the DeepSeek team. It delivers strong inference performance, particularly in mathematics, programming, and reasoning tasks, reaching levels comparable to OpenAI's o1 model.
Max Context Length: 62K
DeepSeek

DeepSeek R1

deepseek/deepseek-r1
DeepSeek-R1 significantly enhances model reasoning capabilities with minimal labeled data. Before outputting the final answer, the model first produces a chain of thought to improve the accuracy of the final response; the sketch at the end of this section shows how that trace can be read back over an API.
Max Context Length: 62K
DeepSeek

DeepSeek V3

deepseek/deepseek-v3
DeepSeek-V3 achieves a significant breakthrough in inference speed over previous models. It ranks first among open-source models and can compete with the world's most advanced closed-source models. DeepSeek-V3 employs the Multi-Head Latent Attention (MLA) and DeepSeekMoE architectures, which were thoroughly validated in DeepSeek-V2. It also introduces an auxiliary-loss-free strategy for load balancing and a multi-token prediction training objective for stronger performance.
Max Context Length: 62K
Meta

DeepSeek R1 Distill Llama 70B

deepseek/deepseek-r1-distill-llama-70b
DeepSeek R1 Distill Llama 70B is a large language model based on Llama 3.3 70B. Fine-tuned on outputs from DeepSeek R1, it achieves performance competitive with large cutting-edge models.
Max Context Length: 32K
Qwen

DeepSeek: DeepSeek R1 Distill Qwen 32B

deepseek/deepseek-r1-distill-qwen-32b
DeepSeek R1 Distill Qwen 32B is a distilled large language model based on Qwen 2.5 32B, trained on outputs from DeepSeek R1. It has surpassed OpenAI's o1-mini on several benchmarks, achieving state-of-the-art results among dense models. Selected benchmark results:
AIME 2024 pass@1: 72.6
MATH-500 pass@1: 94.3
CodeForces rating: 1691
Through fine-tuning on DeepSeek R1 outputs, the model demonstrates performance competitive with larger cutting-edge models.
Max Context Length: 62K
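
The R1-series models above (DeepSeek R1 and its distills) emit a chain of thought before the final answer. How that trace is surfaced varies by deployment: DeepSeek's own API returns it as a separate reasoning_content field on the message, and some OpenAI-compatible gateways mirror that, but this is an assumption for the community endpoints listed here. The sketch below reads the trace if it is present and falls back gracefully when it is not; the model ID, base URL, and key are placeholders.

```python
# Hedged sketch: reading both the chain of thought and the final answer from an
# R1-style model. `reasoning_content` is how DeepSeek's own API exposes the
# trace; other providers may use a different field or omit it, hence the fallback.
from openai import OpenAI

client = OpenAI(base_url="https://api.example-provider.com/v1", api_key="YOUR_API_KEY")

resp = client.chat.completions.create(
    model="deepseek/deepseek-r1/community",  # ID as listed above; availability varies by provider
    messages=[{"role": "user", "content": "Is 391 prime? Answer yes or no with a short justification."}],
)

message = resp.choices[0].message
thinking = getattr(message, "reasoning_content", None)  # None if the provider does not expose it
if thinking:
    print("--- chain of thought ---")
    print(thinking)
print("--- final answer ---")
print(message.content)
```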