langchain-ai / langchain

fix(openai): support reasoning_content in streaming delta chunks #34152

Comparing codeonbush:fix/openai-reasoning-content-streaming (82e6e5c) with master (12df938)
Overall performance change: 0% (6 benchmarks untouched, 28 skipped)

Benchmarks

Skipped (28)

Passed

test_init_time (libs/partners/openai/tests/unit_tests/chat_models/test_responses_standard.py::TestOpenAIResponses): 0%, 12.3 ms → 12.3 ms
test_stream_time (libs/partners/openai/tests/integration_tests/chat_models/test_base_standard.py::TestOpenAIStandard): 0%, 1.2 s → 1.2 s
test_init_time (libs/partners/openai/tests/unit_tests/chat_models/test_base_standard.py::TestOpenAIStandard): 0%, 12.3 ms → 12.2 ms
test_stream_time (libs/partners/openai/tests/integration_tests/chat_models/test_responses_standard.py::TestOpenAIStandard): 0%, 1.2 s → 1.2 s
test_stream_time (libs/partners/openai/tests/integration_tests/chat_models/test_responses_standard.py::TestOpenAIResponses): 0%, 857.8 ms → 857.3 ms
test_init_time (libs/partners/openai/tests/unit_tests/chat_models/test_azure_standard.py::TestOpenAIStandard): 0%, 1.7 s → 1.7 s

Commits

Base: master (12df938)

82e6e5c (+0.14%, 4 days ago)

openai: support reasoning_content in streaming delta chunks

This change adds support for the reasoning_content field in streaming delta chunks from the Chat Completions API. This field is used by:
- LiteLLM when proxying Gemini models with reasoning_effort enabled
- DeepSeek reasoning models (deepseek-reasoner)
- vLLM with reasoning parser enabled
- Other OpenAI-compatible providers that support reasoning/thinking output

The reasoning_content is stored in additional_kwargs, consistent with how the Responses API handles reasoning content.

Fixes #29513, #31326, #32981, #30580
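The commit message describes carrying a non-standard reasoning_content field from Chat Completions streaming deltas into additional_kwargs. A minimal sketch of that idea, using a hypothetical helper on plain dicts (not the actual langchain-openai implementation, which builds AIMessageChunk objects):

```python
def delta_to_chunk_fields(delta: dict) -> dict:
    """Map a streaming `delta` dict to message-chunk fields.

    Hypothetical helper illustrating the PR's behavior: visible text goes
    to `content`, and any `reasoning_content` (emitted by DeepSeek, vLLM
    with a reasoning parser, or LiteLLM-proxied Gemini) is preserved in
    `additional_kwargs` instead of being dropped.
    """
    content = delta.get("content") or ""
    additional_kwargs: dict = {}
    if delta.get("reasoning_content") is not None:
        additional_kwargs["reasoning_content"] = delta["reasoning_content"]
    return {"content": content, "additional_kwargs": additional_kwargs}


# A reasoning model typically streams thinking tokens with empty content:
chunk = delta_to_chunk_fields({"content": None, "reasoning_content": "Let me think..."})
print(chunk)
# {'content': '', 'additional_kwargs': {'reasoning_content': 'Let me think...'}}
```

Storing the field in additional_kwargs (rather than a new top-level attribute) matches how the Responses API path already surfaces reasoning output, so downstream code can treat both APIs uniformly.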
© 2025 CodSpeed Technology