[issue-4270] [P SDK] Improve GEval compatibility with DashScope Qwen judge model #4271
Details
This PR is a follow-up to #4229 and makes DashScope Qwen more robust as a GEval judge model when used via `LiteLLMChatModel`.

Currently, when a model advertises `logprobs` and `top_logprobs` support, GEval enables the logprobs-aware scoring path. For DashScope Qwen this can occasionally raise `MetricComputationError("Failed to calculate g-eval score")`, because the returned logprobs do not always match the OpenAI-style format expected by the parser.

This PR treats DashScope Qwen as not supporting logprobs in this context, so GEval falls back to the standard text/JSON-based parsing path instead of relying on logprobs.
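The gating idea, as a minimal sketch (it assumes LiteLLM's `get_supported_openai_params` helper; the function name `supports_logprobs_for_geval` and the exact prefix check are illustrative, not the PR's actual diff):

```python
from litellm import get_supported_openai_params


def supports_logprobs_for_geval(model_name: str) -> bool:
    """Decide whether GEval may use the logprobs-aware scoring path."""
    # DashScope Qwen advertises logprobs/top_logprobs support, but the
    # payload it returns does not always match the OpenAI-style format
    # the GEval parser expects, so treat it as unsupported and let GEval
    # take the text/JSON fallback path instead.
    if model_name.startswith("dashscope/"):
        return False

    supported = get_supported_openai_params(model=model_name) or []
    return "logprobs" in supported and "top_logprobs" in supported
```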
Change checklist
Issues
Testing
Locally:

- Ran `pytest tests/unit/evaluation/models/test_litellm_chat_model.py`.
- Manually used `dashscope/qwen-flash` as the judge model with code snippets that previously failed with `Failed to calculate g-eval score` (a sketch follows below).
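A hedged reproduction snippet along those lines (class and parameter names follow the Opik SDK docs — `LiteLLMChatModel`, `GEval`, `score` — but the exact prompt text and call shape used during testing are assumptions):

```python
from opik.evaluation.metrics import GEval
from opik.evaluation.models import LiteLLMChatModel

# Use DashScope Qwen as the judge model. With this PR, GEval should score
# via the text/JSON parsing path instead of occasionally raising
# MetricComputationError("Failed to calculate g-eval score").
judge_model = LiteLLMChatModel(model_name="dashscope/qwen-flash")

metric = GEval(
    task_introduction="You are an expert judge evaluating answer quality.",
    evaluation_criteria="The OUTPUT must be factually accurate and concise.",
    model=judge_model,
)

result = metric.score(output="Paris is the capital of France.")
print(result.value, result.reason)
```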