Fix: model_type may not read properly when it's not provided in training args #5078


Open · wants to merge 2 commits into main

Conversation


@kiritoxkiriko kiritoxkiriko commented Jul 23, 2025

PR type

  • Bug Fix
  • New Feature
  • Document Updates
  • More Models or Datasets Support

PR information

Originally, model_type was resolved from the CLI arguments, then the training config, then arg.json, and finally fell back to Hugging Face's model config.

This order seems reasonable, but the final fallback reads architectures from the model's config.json instead of model_type. Nowadays most models provide model_type natively, so we don't have to infer model_type from architectures.

For example, Qwen3's config.json provides a model_type field directly:

[Screenshot: Qwen3's config.json showing the model_type field]
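For reference, reading that field is trivial. The excerpt below is an illustrative, Qwen3-style config fragment (field names follow the Hugging Face convention; the values are placeholders, not copied from the real Qwen3 config):

```python
import json

# Illustrative excerpt of a Qwen3-style config.json (values are placeholders).
config_text = """
{
  "architectures": ["Qwen3ForCausalLM"],
  "model_type": "qwen3",
  "torch_dtype": "bfloat16"
}
"""

config = json.loads(config_text)

# Modern configs expose model_type directly, no inference needed.
print(config["model_type"])  # qwen3
```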

So I modified _get_model_info to read model_type first, and fall back to architectures only if model_type is empty or not set.
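The new fallback order can be sketched roughly as follows. This is a simplified illustration: the real _get_model_info in ms-swift works on a parsed HF config object and maps to ms-swift's own model_type registry, so the infer_model_type helper and the suffix-stripping heuristic below are hypothetical:

```python
def infer_model_type(hf_config: dict) -> "str | None":
    """Prefer the explicit model_type field; fall back to the
    architectures list only when model_type is empty or missing.
    (Sketch only; not ms-swift's actual implementation.)"""
    model_type = hf_config.get("model_type")
    if model_type:
        return model_type
    # Hypothetical fallback heuristic: derive a type from the first
    # architecture name, e.g. "Qwen3ForCausalLM" -> "qwen3".
    architectures = hf_config.get("architectures") or []
    if architectures:
        return architectures[0].removesuffix("ForCausalLM").lower()
    return None

print(infer_model_type({"model_type": "qwen3"}))                  # qwen3
print(infer_model_type({"architectures": ["Qwen3ForCausalLM"]}))  # qwen3
```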

Experiment results

Minor change, no experiments needed.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Summary of Changes

Hello @kiritoxkiriko, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a bug in how model types are determined by adjusting the lookup order. It ensures that the model_type field, if explicitly present in a model's configuration, is used first, leading to more accurate and direct model identification without relying on inference from architectures.

Highlights

  • Model Type Resolution: Prioritized reading the model_type directly from the Hugging Face model configuration (config.json) when determining a model's type. This change improves accuracy for modern models that explicitly define model_type.
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point in your pull request via creating an issue comment (i.e. comment on the pull request page) using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in issue comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments or fill out our survey to provide feedback.

You can also get AI-powered code generation, chat, as well as code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double check it and use code with caution.


Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request modifies the logic for determining the model_type by prioritizing the value from the model's config.json before falling back to inferring it from the architectures field. This is a sensible change that aligns with modern Hugging Face model configurations. The implementation is correct, and I've added one suggestion to make the new code block more concise.


Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
@kiritoxkiriko changed the title from "fix(model): get model_type from model config first" to "Fix model_type may not read properly when it's not provided in training args" on Jul 23, 2025
@kiritoxkiriko changed the title from "Fix model_type may not read properly when it's not provided in training args" to "Fix: model_type may not read properly when it's not provided in training args" on Jul 23, 2025
@Jintao-Huang
Collaborator

hello!
These two model_types have different meanings. The concept of model_type in ms-swift is different from that in transformers.

@kiritoxkiriko
Author

> hello! These two model_types have different meanings. The concept of model_type in ms-swift is different from that in transformers.

Oh, so it may have a different value from HF's model_type under certain conditions? Could you give an example of this, since they share the same value most of the time?

My original goal was to let swift fill in model_type automatically when the value is not provided. For example, if I use qwen2.5-instruct, it returns an error asking me to pass model_type explicitly; I hoped it could be derived from the model's config.json.
