Qwen2-7B-Instruct

Providers Supporting This Model

Infinigence
qwen2-7b-instruct
Max Context Length
32K
Max Response Length
--
Input Cost
--
Output Cost
--
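
The table above advertises a 32K-token context window but no fixed maximum response length, so prompt and completion tokens share that window. Below is a minimal Python sketch of budgeting a request; the 4-characters-per-token ratio is a heuristic assumption, not the model's actual tokenizer.

    # Rough token budgeting against qwen2-7b-instruct's 32K context window.
    # The 4-characters-per-token ratio is a crude heuristic; use a real
    # tokenizer for exact counts.
    MAX_CONTEXT_TOKENS = 32_000

    def estimate_tokens(text: str) -> int:
        """Crude estimate: roughly 4 characters per token for English text."""
        return max(1, len(text) // 4)

    def completion_budget(prompt: str, minimum_reply: int = 256) -> int:
        """Tokens left for the reply after the prompt fills part of the window."""
        remaining = MAX_CONTEXT_TOKENS - estimate_tokens(prompt)
        if remaining < minimum_reply:
            raise ValueError(f"Prompt too long: only {remaining} tokens remain.")
        return remaining

Passing the returned budget (or less) as max_tokens keeps the whole request inside the window.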

Model Settings

Creativity Level
temperature

Controls how creative the model is: 0 yields predictable responses, while higher values produce more varied and unexpected results.

Type
FLOAT
Default
1.00
Range
0.00 ~ 2.00
Response Diversity
top_p

Restricts sampling to the most likely tokens whose cumulative probability stays within this threshold (nucleus sampling). Lower values keep answers more focused.

Type
FLOAT
Default
1.00
Range
0.00 ~ 1.00
New Idea Encouragement
presence_penalty

Penalizes tokens that have already appeared, nudging the model toward new words and topics. Higher values promote fresh ideas, while lower values allow more reuse.

Type
FLOAT
Default
0.00
Range
-2.00 ~ 2.00
Repetition Control
frequency_penalty

Adjusts how strongly repeated words are penalized. Higher values mean fewer repeats, while lower values allow more repetition.

Type
FLOAT
Default
0.00
Range
-2.00 ~ 2.00
Response Length Limit
max_tokens

Sets the maximum length of a response, measured in tokens. Increase for longer replies, decrease for shorter ones.

Type
INT
Default
--
Reasoning Depth
reasoning_effort

Determines how much effort the model puts into reasoning before answering. Higher settings produce more thorough responses but take longer.

Type
STRING
Default
--
Range
low ~ high
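
The settings above map directly onto standard chat-completion request parameters. The sketch below shows one way to pass them, assuming Infinigence exposes an OpenAI-compatible endpoint; the base_url, the INFINIGENCE_API_KEY variable, and the prompt are placeholders rather than confirmed values, and reasoning_effort is left out since its support depends on the provider.

    # A hedged sketch of one request using the settings above. Assumes an
    # OpenAI-compatible endpoint; base_url and the API-key variable are
    # placeholders, not confirmed Infinigence values.
    import os
    from openai import OpenAI

    client = OpenAI(
        api_key=os.environ["INFINIGENCE_API_KEY"],           # placeholder env var
        base_url="https://api.example-infinigence.com/v1",   # placeholder URL
    )

    response = client.chat.completions.create(
        model="qwen2-7b-instruct",
        messages=[{"role": "user", "content": "Explain top_p sampling in one paragraph."}],
        temperature=1.0,        # Creativity Level, 0.00 ~ 2.00 (default 1.00)
        top_p=1.0,              # Response Diversity, 0.00 ~ 1.00 (default 1.00)
        presence_penalty=0.0,   # New Idea Encouragement, -2.00 ~ 2.00 (default 0.00)
        frequency_penalty=0.0,  # Repetition Control, -2.00 ~ 2.00 (default 0.00)
        max_tokens=512,         # Response Length Limit; the model sets no default
    )
    print(response.choices[0].message.content)

Any parameter left out of the request falls back to the defaults listed in the table above.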

Related Models

DeepSeek

DeepSeek R1

deepseek-r1
DeepSeek-R1 is a reasoning model driven by reinforcement learning (RL) that addresses problems of repetition and readability in its output. Before the RL stage it introduces cold-start data to further optimize reasoning performance. It performs comparably to OpenAI o1 on mathematics, coding, and reasoning tasks, and its carefully designed training methods improve overall effectiveness.
64K
DeepSeek

DeepSeek V3

deepseek-v3
DeepSeek-V3 is a mixture-of-experts (MoE) model developed by Hangzhou DeepSeek Artificial Intelligence Technology Research Co., Ltd. It achieves outstanding results across multiple evaluations and ranks first among open-source models on mainstream leaderboards. Compared to V2.5, V3 generates text three times faster, giving users a faster and smoother experience.
64K
Qwen

QwQ

qwq-32b
QwQ is a reasoning model trained on top of Qwen2.5-32B, with its reasoning ability substantially strengthened through reinforcement learning. Its core metrics on math and code (AIME 24/25, LiveCodeBench) and several general benchmarks (IFEval, LiveBench, etc.) reach the level of the full DeepSeek-R1, and all of them clearly surpass DeepSeek-R1-Distill-Qwen-32B, which is likewise based on Qwen2.5-32B.
64K
Qwen

DeepSeek R1 Distill Qwen 32B

deepseek-r1-distill-qwen-32b
The DeepSeek-R1-Distill series applies knowledge distillation: open-source models such as Qwen and Llama are fine-tuned on samples generated by DeepSeek-R1 to transfer its reasoning ability.
32K
Qwen

Qwen2.5 72B Instruct

qwen2.5-72b-instruct
Qwen2.5-72B-Instruct is the instruction-tuned 72-billion-parameter model in the Qwen2.5 series.
32K