BerriAI / litellm

fix: expose reasoning effort fields in get_model_info + add together_ai/gpt-oss-120b

#25263 · Merged
Comparing avarga1:fix/model-info-reasoning-fields-and-gpt-oss-120b (374eabb) with litellm_oss_staging_04_08_2026 (62757ff).
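The PR title says the fix exposes reasoning effort fields in get_model_info and adds a together_ai/gpt-oss-120b entry. A minimal sketch of the shape of such a fix, assuming the common pattern where a get_model_info-style helper returns only an explicit allow-list of keys (the map entry, field names, and values below are illustrative, not litellm's actual code):

```python
# Hypothetical sketch, NOT litellm's actual implementation: a helper that
# builds its return dict from an explicit key list will silently drop any
# cost-map field missing from that list.

MODEL_COST_MAP = {
    # Entry shape loosely modeled on a model cost map; values illustrative.
    "together_ai/gpt-oss-120b": {
        "litellm_provider": "together_ai",
        "supports_reasoning": True,
        # Assumed field name for illustration only:
        "supported_reasoning_efforts": ["low", "medium", "high"],
    },
}

EXPOSED_KEYS = [
    "litellm_provider",
    "supports_reasoning",
    "supported_reasoning_efforts",  # the fix: add the missing key here
]

def get_model_info(model: str) -> dict:
    """Return only the explicitly exposed keys for a model entry."""
    entry = MODEL_COST_MAP[model]
    return {k: entry[k] for k in EXPOSED_KEYS if k in entry}

info = get_model_info("together_ai/gpt-oss-120b")
print(info)
```

Under this reading, "expose reasoning effort fields" is a one-line allow-list change plus the new model entry, which matches the near-zero benchmark deltas below.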
Overall: 0% performance change · Untouched: 16

Benchmarks (16 total)
| Benchmark | File | Change | Head | Base |
| --- | --- | --- | --- | --- |
| test_get_model_info_anthropic | tests/benchmarks/test_benchmarks.py | +1% | 84.7 µs | 83.8 µs |
| test_token_counter_multi_turn | tests/benchmarks/test_benchmarks.py | +1% | 573.7 µs | 569.4 µs |
| test_token_counter_raw_text | tests/benchmarks/test_benchmarks.py | 0% | 187.2 µs | 186.8 µs |
| test_cost_per_token_openai | tests/benchmarks/test_benchmarks.py | 0% | 544.8 µs | 543.8 µs |
| test_get_model_cost_key_case_insensitive | tests/benchmarks/test_benchmarks.py | 0% | 87.4 µs | 87.4 µs |
| test_cost_per_token_anthropic | tests/benchmarks/test_benchmarks.py | 0% | 544.9 µs | 545.1 µs |
| test_get_model_cost_key_exact_match | tests/benchmarks/test_benchmarks.py | 0% | 85.1 µs | 85.3 µs |
| test_get_model_info_with_provider | tests/benchmarks/test_benchmarks.py | 0% | 85.2 µs | 85.5 µs |
| test_get_model_info_openai | tests/benchmarks/test_benchmarks.py | 0% | 87.2 µs | 87.6 µs |
| test_token_counter_with_tools | tests/benchmarks/test_benchmarks.py | -1% | 415.9 µs | 418.3 µs |
| test_token_counter_long_content | tests/benchmarks/test_benchmarks.py | -1% | 1.7 ms | 1.7 ms |
| test_token_counter_simple_message | tests/benchmarks/test_benchmarks.py | -1% | 239.2 µs | 241.2 µs |
| test_get_llm_provider_azure | tests/benchmarks/test_benchmarks.py | -1% | 148.8 µs | 150.1 µs |
| test_get_llm_provider_with_prefix | tests/benchmarks/test_benchmarks.py | -1% | 141.7 µs | 143.1 µs |
| test_get_llm_provider_openai | tests/benchmarks/test_benchmarks.py | -1% | 142.8 µs | 144.6 µs |
| test_get_llm_provider_anthropic | tests/benchmarks/test_benchmarks.py | -2% | 146.2 µs | 148.9 µs |
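The Change column is consistent with the percent change of the head time relative to the base time, rounded to the nearest percent (an inference from the reported pairs, not CodSpeed's documented formula):

```python
# Reproduce the Change column from the Head/Base timings, assuming it is
# (head - base) / base, expressed in percent and rounded to the nearest
# whole percent.
def pct_change(head_us: float, base_us: float) -> int:
    """Percent change of head vs. base, rounded to the nearest percent."""
    return round((head_us - base_us) / base_us * 100)

print(pct_change(84.7, 83.8))    # test_get_model_info_anthropic -> 1
print(pct_change(146.2, 148.9))  # test_get_llm_provider_anthropic -> -2
```

All sixteen deltas fall within ±2%, i.e. well inside typical run-to-run noise, which is why the report classifies every benchmark as untouched.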

Commits

- Base: main (62757ff)
- 20b7e9e · 10 days ago · -0.49% · fix: expose reasoning effort fields in get_model_info and add together_ai/gpt-oss-120b
- 9169f8c · 10 days ago · +0.67% · fix: consolidate duplicate together_ai/openai/gpt-oss-120b entry and sync backup file
- 374eabb · 9 days ago · by avarga1 · -0.54% · fix: link commit to GitHub account for CLA verification
© 2026 CodSpeed Technology