langchain-ai/langchain
CodSpeed Instrumentation report (Wall Time)

fix: resolve OutputParserException when using Ollama instead of Gemini (#33140)

Comparing its-shashankY:fix/33016-ollama-output-parser-exception (0bc6982) with master (9863023)

Overall performance change: 0%

Untouched: 1
Skipped: 20

Benchmarks

Skipped (20)

Passed

test_init_time
libs/partners/ollama/tests/unit_tests/test_chat_models.py::TestChatOllama::test_init_time
Change: 0% (1.7 s → 1.7 s)

Commits

Click on a commit to change the comparison range.

Base: master (9863023)

fe21326 · 2 days ago · 0%
fix: resolve OutputParserException when using Ollama instead of Gemini
- Fixed output parsing compatibility issues between Ollama and Gemini models
- Updated the output parser to handle different response formats from Ollama
- Added proper error handling for malformed responses
- Ensured consistent behavior across different LLM providers
Fixes #33016
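The commit description above mentions handling different response formats from Ollama and raising a parser-level error on malformed responses. A minimal sketch of that general pattern (hypothetical names, not the PR's actual code; the local `OutputParserException` class here is a stand-in for LangChain's exception, and the fence-stripping behavior is an assumption based on Ollama models commonly wrapping JSON in markdown code fences):

```python
import json
import re


class OutputParserException(ValueError):
    """Stand-in for langchain_core.exceptions.OutputParserException."""


def parse_json_output(text: str) -> dict:
    """Parse model output that may or may not be wrapped in a markdown fence.

    Some providers return JSON inside a fenced code block while others return
    bare JSON; accepting both keeps behavior consistent across providers.
    """
    # Strip an optional ```json ... ``` (or plain ```) fence around the payload.
    match = re.search(r"```(?:json)?\s*(.*?)\s*```", text, re.DOTALL)
    payload = match.group(1) if match else text.strip()
    try:
        return json.loads(payload)
    except json.JSONDecodeError as e:
        # Surface a parser-level error instead of a raw JSONDecodeError.
        raise OutputParserException(
            f"Failed to parse model output as JSON: {e}\nGot: {text!r}"
        ) from e


# Both provider styles parse to the same result:
bare = parse_json_output('{"answer": 42}')
fenced = parse_json_output('```json\n{"answer": 42}\n```')
assert bare == fenced == {"answer": 42}
```

This is only an illustration of tolerating provider-specific formatting while still raising a descriptive parsing error, as the commit message describes.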
cd86da1 · 2 days ago · 0%
additional fixes or improvements

d8fa7b5 · 2 days ago · 0%
fix: break long conditional statement to meet line length limit

5464b92 · 2 days ago · +0.01%
fix: apply code formatting to meet project standards

e1addeb · 2 days ago · -0.01%
fix: apply code formatting to meet project standards on line 955

448d34c · 2 days ago · +0.01%
fix: Removed Print and added all the imports on top of file stack

b3ab257 · 2 days ago · -0.01%
fix: Removed Print and added all the imports on top of file stack

8598c3a · 2 days ago · 0%
fix: Fixed Exception with multiple similar exception

a0fe0c5 · 2 days ago · 0%
fix: Fixed Exception with multiple similar exception

89a9688 · 2 days ago · 0%
fix: Fixed incompatible type str

0bc6982 · 2 days ago · 0%
fix: Fixed Linting Issue
© 2025 CodSpeed Technology