LM Studio
#450
Replies: 2 comments
Here is the next version of the code:

```ruby
# Parses error response from provider API.
#
# Supports two error formats:
# 1. OpenAI standard: {"error": {"message": "...", "type": "...", "code": "..."}}
# 2. Simple format: {"error": "error message"}
#
# @param response [Faraday::Response] The HTTP response
# @return [String, nil] The error message or nil if parsing fails
#
# @example OpenAI format
#   response = double(body: '{"error": {"message": "Rate limit exceeded"}}')
#   parse_error(response) #=> "Rate limit exceeded"
#
# @example Simple format (LM Studio, some local providers)
#   response = double(body: '{"error": "Token limit exceeded"}')
#   parse_error(response) #=> "Token limit exceeded"
def parse_error(response)
  return if response.body.empty?

  body = try_parse_json(response.body)
  case body
  when Hash
    # Handle both formats:
    # - {"error": "message"}          (LM Studio, some providers)
    # - {"error": {"message": "..."}} (OpenAI standard)
    error_value = body['error']
    return nil unless error_value

    case error_value
    when Hash then error_value['message']
    when String then error_value
    else error_value.to_s
    end
  when Array
    body.filter_map do |part|
      next unless part.is_a?(Hash)

      error_value = part['error']
      next unless error_value

      case error_value
      when Hash then error_value['message']
      when String then error_value
      else error_value.to_s
      end
    end.join('. ')
  else
    body.to_s
  end
rescue StandardError => e
  RubyLLM.logger.debug "Error parsing response: #{e.message}"
  nil
end
```
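For reference, here is a self-contained sketch of the same two-format handling, runnable outside the library. Assumptions: plain `JSON.parse` stands in for RubyLLM's `try_parse_json` helper, and `extract_error_message` is a hypothetical name introduced for the demo, not part of RubyLLM.

```ruby
require 'json'

# Hypothetical minimal re-implementation for illustration only;
# JSON.parse stands in for RubyLLM's try_parse_json helper.
def extract_error_message(raw_body)
  body = JSON.parse(raw_body)
  error = body['error'] if body.is_a?(Hash)
  case error
  when Hash   then error['message'] # OpenAI standard: nested object
  when String then error            # LM Studio / simple format: flat string
  end
rescue JSON::ParserError
  nil
end

puts extract_error_message('{"error": {"message": "Rate limit exceeded"}}')
#=> Rate limit exceeded
puts extract_error_message('{"error": "Token limit exceeded"}')
#=> Token limit exceeded

# Note: naive OpenAI-only parsing silently yields nil on the flat format,
# because String#[] with a string argument does a substring lookup:
p JSON.parse('{"error": "Token limit exceeded"}')['error']['message']
#=> nil
```

The `case`/`when Hash`/`when String` dispatch is what makes the method tolerant of both providers; the silent `nil` in the last line is why parsing only the nested shape fails quietly rather than raising.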
@crmne I'm sitting on a PR within my fork of your library. Would you like a bug issue and a PR for this fix?
I have been slowly switching from Ollama over to LM Studio. I've had problems using RubyLLM::Provider with LM Studio because its responses are not 100% OpenAI compliant. I finally did this: