OBxChat

Providers Supporting This Model

CodeGeeX
GiteeAI
CodeGeeX codegeex4-all-9b
Max Context Length: 32K
Max Response Length: --
Input Cost: --
Output Cost: --

Model Settings

Creativity Level (temperature)

Controls creativity: 0 gives predictable responses; higher values give more varied, unexpected results.

Type: FLOAT
Default: 1.00
Range: 0.00 ~ 2.00
Response Diversity (top_p)

Filters responses to the most likely words within a probability range. Lower values keep answers more focused.

Type: FLOAT
Default: 1.00
Range: 0.00 ~ 1.00
New Idea Encouragement (presence_penalty)

Encourages or discourages using new words. Higher values promote fresh ideas; lower values allow more reuse.

Type: FLOAT
Default: 0.00
Range: -2.00 ~ 2.00
Repetition Control (frequency_penalty)

Adjusts how often the model repeats words. Higher values mean fewer repeats; lower values allow more repetition.

Type: FLOAT
Default: 0.00
Range: -2.00 ~ 2.00
Response Length Limit (max_tokens)

Sets the maximum length of responses. Increase for longer replies; decrease for shorter ones.

Type: INT
Default: --
Reasoning Depth (reasoning_effort)

Determines how much effort the model puts into reasoning. Higher settings produce more thoughtful responses but take longer.

Type: STRING
Default: --
Range: low ~ high

Related Models

Qwen: Qwen2.5 72B Instruct (Qwen2.5-72B-Instruct)
Supports a 16K context and can generate long texts exceeding 8K tokens. It enables seamless interaction with external systems through function calls, greatly enhancing flexibility and scalability. The model's knowledge has significantly increased, and its coding and mathematical abilities have been greatly improved, with multilingual support for over 29 languages.
Context: 16K
Qwen: Qwen2.5 32B Instruct (Qwen2.5-32B-Instruct)
A large language model with 32 billion parameters, offering balanced performance, optimized for Chinese and multilingual scenarios, and supporting applications such as intelligent Q&A and content generation.
Context: 32K
Qwen: Qwen2.5 14B Instruct (Qwen2.5-14B-Instruct)
A large language model with 14 billion parameters, delivering excellent performance, optimized for Chinese and multilingual scenarios, and supporting applications such as intelligent Q&A and content generation.
Context: 24K
Qwen: Qwen2.5 7B Instruct (Qwen2.5-7B-Instruct)
A large language model with 7 billion parameters, supporting function calls and seamless interaction with external systems, greatly enhancing flexibility and scalability. It is optimized for Chinese and multilingual scenarios, supporting applications such as intelligent Q&A and content generation.
Context: 32K
Qwen: Qwen2 72B Instruct (Qwen2-72B-Instruct)
A Qwen2-series model supporting a 128K context. Qwen2-72B significantly surpasses leading open-source models in natural language understanding, knowledge, coding, mathematics, and multilingual capabilities.
Context: 32K