feat(rubric-auto-grading): enhance usage experience #8009
base: master
Conversation
Pull Request Overview
This PR enhances the rubric auto-grading flow by introducing dynamic schema generation, retry logic, draft-post updates, and prompt formatting improvements to better handle blank or multiple AI messages.
- Refactor `RubricLlmService` to build a dynamic JSON schema (a sketch follows this list), parse LLM responses with retries, and process category grades
- Update the auto-grading service to find and update existing AI-generated draft posts instead of always creating new ones
- Add tags in the user prompt, adjust system prompt wording, and tweak stub behavior for testing
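A minimal sketch of the dynamic-schema idea, assuming hypothetical attribute and key names (`id`, `criterion_id`, `explanation`, `overall_feedback`); it is not the actual `RubricLlmService` implementation:

```ruby
# Sketch: build a JSON schema whose category_grades keys mirror the rubric's
# categories, so the LLM must return exactly one grade object per category.
# Attribute and key names here are illustrative, not the real schema.
def build_dynamic_schema(categories)
  category_properties = categories.to_h do |category|
    [category.id.to_s, {
      'type' => 'object',
      'properties' => {
        'criterion_id' => { 'type' => 'integer' },
        'explanation' => { 'type' => 'string' }
      },
      'required' => %w[criterion_id explanation]
    }]
  end

  {
    'type' => 'object',
    'properties' => {
      'category_grades' => {
        'type' => 'object',
        'properties' => category_properties,
        'required' => category_properties.keys
      },
      'overall_feedback' => { 'type' => 'string' }
    },
    'required' => %w[category_grades overall_feedback]
  }
end
```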
Reviewed Changes
Copilot reviewed 10 out of 10 changed files in this pull request and generated 2 comments.
File | Description
---|---
spec/support/stubs/langchain/llm_stubs.rb | Fix stub to parse dynamic schema and handle grading output
spec/services/.../rubric_llm_service_spec.rb | Update symbol-key expectations and introduce output_parser
spec/services/.../rubric_auto_grading_service_spec.rb | Adjust tests for selection updates and add draft-post specs
client/app/.../reducers/topics.js | Prevent duplicate post IDs in Redux state
app/views/.../_rubric_based_response.json.jbuilder | Filter for AI-generated draft posts only
app/services/.../rubric_llm_service.rb | Generate dynamic schema, add retry logic, and process grades
app/services/.../rubric_auto_grading_service.rb | Fix class inheritance, update draft-post creation logic
app/services/.../prompts/rubric_auto_grading_user_prompt.json | Add XML-style tags around prompt sections
app/services/.../prompts/rubric_auto_grading_system_prompt.json | Remove unused format_instructions variable
app/services/.../prompts/rubric_auto_grading_output_format.json | Change category_grades from an array to an object map (see the sketch after this table)
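For illustration, the output-format change in the last row amounts to keying grades by category instead of listing them; the field names and values below are made up:

```ruby
# Before (array): each grade carries its own category reference, and a category
# could appear zero or several times.
old_format = {
  'category_grades' => [
    { 'category_id' => 12, 'criterion_id' => 34, 'explanation' => 'Meets the criterion.' }
  ]
}

# After (object map): one entry per category id, so every rubric category
# appears exactly once in the LLM's response.
new_format = {
  'category_grades' => {
    '12' => { 'criterion_id' => 34, 'explanation' => 'Meets the criterion.' }
  }
}
```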
Comments suppressed due to low confidence (3)
app/services/course/assessment/answer/rubric_llm_service.rb:41
- There's a typo in the variable name `llm_reponse`; it should be `llm_response` for consistency and clarity.
llm_reponse = call_llm_with_retries(messages, dynamic_schema, output_parser)
spec/services/course/assessment/answer/rubric_auto_grading_service_spec.rb:75
- The tests for `process_llm_grading_response` return values (correct status, total grade, messages, feedback) were removed. Re-introduce tests to ensure that the service returns the expected tuple.
end
app/services/course/assessment/answer/rubric_auto_grading_service.rb:2
- The superclass has been removed from the class definition, causing a syntax error. It should remain `< Course::Assessment::Answer::AutoGradingService` before the rubocop disable comment.
class Course::Assessment::Answer::RubricAutoGradingService < # rubocop:disable Metrics/ClassLength
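A corrected definition along the lines the comment suggests, keeping the superclass before the rubocop directive, would look roughly like:

```ruby
# The superclass stays on the class line; the rubocop directive follows the full definition.
class Course::Assessment::Answer::RubricAutoGradingService < Course::Assessment::Answer::AutoGradingService # rubocop:disable Metrics/ClassLength
  # ...
end
```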
Description
Changes made
`OutputFixingParser` already has a retry mechanism that sends a separate message asking the LLM to fix the output according to the schema. This fix-up is now attempted only after retrying with the OpenAI LLM fails, since the fix-up message would not include the question and rubric content.
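A rough sketch of that ordering, assuming langchainrb's `OutputFixingParser#parse`, a `chat_completion` reader on the OpenAI chat response, and hypothetical names (`llm`, `messages`, `strict_parser`, `fixing_parser`); the real `call_llm_with_retries` may differ:

```ruby
# Sketch: re-send the full prompt (question + rubric) a few times first, and only
# fall back to OutputFixingParser's fix-up round trip once those retries are
# exhausted, since the fix-up message does not carry the rubric context.
def call_llm_with_retries(llm, messages, strict_parser, fixing_parser, max_retries: 3)
  attempts = 0
  begin
    attempts += 1
    raw_output = llm.chat(messages: messages).chat_completion
    strict_parser.parse(raw_output)        # strict parse against the dynamic schema
  rescue Langchain::OutputParsers::OutputParserException
    retry if attempts < max_retries        # retry with the full question + rubric prompt
    fixing_parser.parse(raw_output)        # last resort: ask the LLM to repair the output
  end
end
```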