langchain-ai / langchain

fix(core): include llm_output in streaming LLMResult (#34060)

Comparing zhangzhefang-github:fix/streaming-llm-output (b2abc0d) with master (525d5c0)

Performance gauge: -1% · Untouched: 13 · Skipped: 21

Benchmarks

Skipped (21)

Passed

test_import_time[Document] (libs/core/tests/benchmarks/test_imports.py): +2%, 174.9 ms → 171.1 ms
test_import_time[RunnableLambda] (libs/core/tests/benchmarks/test_imports.py): +1%, 447.5 ms → 444.5 ms
test_import_time[ChatPromptTemplate] (libs/core/tests/benchmarks/test_imports.py): +1%, 534.9 ms → 532 ms
test_import_time[Runnable] (libs/core/tests/benchmarks/test_imports.py): 0%, 444.2 ms → 444.9 ms
test_import_time[InMemoryVectorStore] (libs/core/tests/benchmarks/test_imports.py): -1%, 559.9 ms → 563.3 ms
test_import_time[InMemoryRateLimiter] (libs/core/tests/benchmarks/test_imports.py): -1%, 160.3 ms → 162.2 ms
test_import_time[LangChainTracer] (libs/core/tests/benchmarks/test_imports.py): -1%, 395.2 ms → 400.6 ms
test_import_time[CallbackManager] (libs/core/tests/benchmarks/test_imports.py): -2%, 407.1 ms → 413.8 ms
test_import_time[BaseChatModel] (libs/core/tests/benchmarks/test_imports.py): -2%, 468.8 ms → 479.4 ms
test_import_time[PydanticOutputParser] (libs/core/tests/benchmarks/test_imports.py): -3%, 465.9 ms → 477.9 ms
test_import_time[tool] (libs/core/tests/benchmarks/test_imports.py): -3%, 451.6 ms → 467.8 ms
test_async_callbacks_in_sync (libs/core/tests/benchmarks/test_async_callbacks.py): -4%, 18.4 ms → 19.1 ms
test_import_time[HumanMessage] (libs/core/tests/benchmarks/test_imports.py): -5%, 236.9 ms → 248.1 ms

Commits

Base: master (525d5c0)
-25.18%
fix(core): include llm_output in streaming LLMResult

Fixes #34057

Previously, streaming mode did not include the `llm_output` field in the `LLMResult` object passed to `on_llm_end` callbacks. This broke integrations such as Langfuse that rely on this field to extract metadata like the model name.

This commit ensures that `llm_output` is always present in streaming mode by passing an empty dict (`{}`) in all streaming methods (`stream` and `astream`) for both `BaseLLM` and `BaseChatModel`.

Changes:
- Updated `BaseLLM.stream()` to include `llm_output={}` in `LLMResult`
- Updated `BaseLLM.astream()` to include `llm_output={}` in `LLMResult`
- Updated `BaseChatModel.stream()` to include `llm_output={}` in `LLMResult`
- Updated `BaseChatModel.astream()` to include `llm_output={}` in `LLMResult`
- Added a test to verify `llm_output` is present in streaming callbacks
4813a9a · 2 days ago · by zhangzhefang-github
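The failure mode this commit fixes can be sketched without langchain installed. The classes below are simplified stand-ins that mirror the shape of `langchain_core`'s `LLMResult` and an integration's `on_llm_end` handler, not the actual implementation:

```python
# Self-contained sketch of why llm_output=None breaks callbacks.
# LLMResult here is a simplified stand-in for langchain_core.outputs.LLMResult.
from dataclasses import dataclass
from typing import Optional

@dataclass
class LLMResult:
    generations: list
    llm_output: Optional[dict] = None  # pre-fix: streaming paths left this None

def extract_model_name(result: LLMResult) -> Optional[str]:
    # What an integration such as Langfuse effectively does in on_llm_end:
    return result.llm_output.get("model_name")  # raises if llm_output is None

before = LLMResult(generations=[])                # pre-fix streaming result
after = LLMResult(generations=[], llm_output={})  # post-fix streaming result

try:
    extract_model_name(before)
except AttributeError:
    print("pre-fix: AttributeError on llm_output.get()")

print("post-fix:", extract_model_name(after))  # no crash; key is simply absent
```

An empty dict is a safe neutral value here: providers with real metadata overwrite it, and consumers can call `.get()` without a `None` check.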
+2.41%
test(core): update test expectations for llm_output in streaming mode

Update `test_event_stream_with_simple_chain` to expect `llm_output={}` instead of `llm_output=None` in streaming mode, consistent with the fix for issue #34057.
b904b4a · 2 days ago · by zhangzhefang-github
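The expectation change can be illustrated with a minimal sketch; `CapturingHandler` and `FakeStreamingModel` below are hypothetical stand-ins, not the real test or `BaseChatModel`:

```python
# Minimal sketch of the updated expectation: a callback capturing llm_output
# from a streaming run should now see {} rather than None.

class CapturingHandler:
    def __init__(self):
        self.llm_outputs = []

    def on_llm_end(self, response):
        self.llm_outputs.append(response.llm_output)

class FakeStreamingModel:
    # Stands in for a streaming chat model; fires on_llm_end with llm_output={}.
    def stream(self, prompt, handler):
        for token in ["Hel", "lo"]:
            yield token
        result = type("Result", (), {"llm_output": {}})()
        handler.on_llm_end(result)

handler = CapturingHandler()
tokens = list(FakeStreamingModel().stream("hi", handler))

assert tokens == ["Hel", "lo"]
assert handler.llm_outputs == [{}]  # pre-fix, this would have been [None]
print("expectation holds: llm_output == {}")
```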
+21.39%
fix(core): ensure llm_output is always a dict in LLMResult, never None

This commit comprehensively fixes issue #34057, where streaming mode was returning `LLMResult` with `llm_output: None` instead of `llm_output: {}`.

Root cause: multiple code paths created `ChatResult`/`LLMResult` without explicitly setting `llm_output={}`, causing it to default to `None`.

Changes:
- `chat_models.py`: added `llm_output={}` to the cache retrieval paths (sync/async), `generate_from_stream()`, and `SimpleChatModel._generate()`
- `llms.py`: added `llm_output={}` to `SimpleLLM._generate()` and `_agenerate()`
- `fake_chat_models.py`: fixed all 4 fake model `_generate()` methods
- `event_stream.py`: improved `llm_output` serialization in `on_llm_end()`
- `test_runnable_events_v1.py`: updated test expectations

Tests:
- `test_astream_events_from_model`: passed
- `test_event_stream_with_simple_chain`: passed
- All linting checks: passed
b2abc0d · 2 days ago · by zhangzhefang-github
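One of the touched paths, `generate_from_stream()`, folds streamed chunks into a single result. A hypothetical minimal sketch of that pattern (simplified types, not the real `langchain_core` code) shows the explicit empty dict:

```python
# Sketch of the generate_from_stream() pattern: merge streamed chunks into a
# final result and set llm_output to {} explicitly instead of leaving it None.
from dataclasses import dataclass, field

@dataclass
class ChatGenerationChunk:
    text: str

@dataclass
class ChatResult:
    text: str
    llm_output: dict = field(default_factory=dict)  # always a dict, never None

def generate_from_stream(chunks):
    merged = "".join(c.text for c in chunks)
    # Explicit {}: a real provider may replace it with metadata, and
    # downstream callbacks can always call .get() safely.
    return ChatResult(text=merged, llm_output={})

result = generate_from_stream([ChatGenerationChunk("Hel"), ChatGenerationChunk("lo")])
print(result.text, result.llm_output)  # Hello {}
```

Note the `field(default_factory=dict)` idiom: a mutable default like `{}` cannot be used directly as a dataclass (or Pydantic) field default, which is one reason such fields often end up defaulting to `None`.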
© 2025 CodSpeed Technology