OBxChat

Providers Supporting This Model

Qwen
ModelScope
Qwen/Qwen3-32B
Max Context Length: 128K
Max Response Length: --
Input Cost: --
Output Cost: --

Model Settings

Creativity Level (temperature)

Controls creativity. 0 = predictable responses; higher values produce more varied and unexpected results.

Type: FLOAT
Default: 1.00
Range: 0.00 ~ 2.00
Response Diversity (top_p)

Restricts sampling to the most likely words within a cumulative probability threshold. Lower values keep answers more focused.

Type: FLOAT
Default: 1.00
Range: 0.00 ~ 1.00
New Idea Encouragement (presence_penalty)

Encourages or discourages the use of new words. Higher values promote fresh ideas, while lower values allow more reuse.

Type: FLOAT
Default: 0.00
Range: -2.00 ~ 2.00
Repetition Control (frequency_penalty)

Adjusts how often the model repeats words. Higher values mean fewer repeats, while lower values allow more repetition.

Type: FLOAT
Default: 0.00
Range: -2.00 ~ 2.00
Response Length Limit (max_tokens)

Sets the maximum length of responses. Increase for longer replies, decrease for shorter ones.

Type: INT
Default: --
Reasoning Depth (reasoning_effort)

Determines how much effort the model puts into reasoning. Higher settings produce more thorough responses but take longer.

Type: STRING
Default: --
Range: low ~ high

Related Models

DeepSeek

DeepSeek-R1-0528

deepseek-ai/DeepSeek-R1-0528
DeepSeek-R1-0528 significantly deepens its reasoning and inference by leveraging increased computational resources and introducing algorithmic optimizations during post-training. The model performs excellently across benchmarks in mathematics, programming, and general logic, with overall performance now approaching leading models such as o3 and Gemini 2.5 Pro.
128K
DeepSeek

DeepSeek-V3

deepseek-ai/DeepSeek-V3
DeepSeek-V3 is a 671-billion-parameter mixture-of-experts (MoE) language model. It combines multi-head latent attention (MLA) and the DeepSeekMoE architecture with an auxiliary-loss-free load-balancing strategy, optimizing both inference and training efficiency. Pre-trained on 14.8 trillion high-quality tokens and refined with supervised fine-tuning and reinforcement learning, DeepSeek-V3 outperforms other open-source models and approaches leading closed-source models in performance.
128K
DeepSeek

DeepSeek-R1

deepseek-ai/DeepSeek-R1
DeepSeek-R1 is a reinforcement-learning (RL) driven reasoning model that addresses issues of repetition and readability in model output. Before the RL stage, DeepSeek-R1 introduced cold-start data to further optimize reasoning performance. It performs comparably to OpenAI-o1 on mathematical, coding, and reasoning tasks, and improves overall effectiveness through carefully designed training methods.
128K
Qwen

Qwen3-235B-A22B

Qwen/Qwen3-235B-A22B
Qwen3 is a next-generation model with significantly enhanced capabilities, achieving industry-leading levels in reasoning, general tasks, agent functions, and multilingual support, with a switchable thinking mode.
128K