
Add generic export_to_transformers to the base classes #2346


Open · wants to merge 12 commits into master

Conversation

@Bond099 (Contributor) commented Jul 26, 2025

Description of the change

Reference

Colab Notebook

https://colab.research.google.com/drive/1CNUkqbRTBPirTaU1-2UrWuK2dTZhgOtn?usp=sharing

Checklist

  • I have added all the necessary unit tests for my change.
  • I have verified that my change does not break existing code and works with all backends (TensorFlow, JAX, and PyTorch).
  • My PR is based on the latest changes of the main branch (if unsure, rebase the code).
  • I have followed the Keras Hub Model contribution guidelines in making these changes.
  • I have followed the Keras Hub API design guidelines in making these changes.
  • I have signed the Contributor License Agreement.

@gemini-code-assist (bot) left a comment

Summary of Changes

Hello @Bond099, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces a significant enhancement by providing direct export capabilities from Keras Hub models and tokenizers to the Hugging Face Transformers format. This change streamlines the process of converting Keras-based components for use within the Hugging Face ecosystem, improving interoperability and ease of use for developers working with both frameworks.

Highlights

  • Direct Hugging Face Export for Models: I've added a new export_to_transformers method to the Backbone and CausalLM base classes. This allows users to directly export their Keras models (or just the backbone) into a format compatible with Hugging Face Transformers, including configuration and weights, simplifying interoperability (see the usage sketch after this list).
  • Direct Hugging Face Export for Tokenizers: A corresponding export_to_transformers method has been added to the Tokenizer base class. This enables the direct export of Keras tokenizers, saving their assets in a format that Hugging Face Transformers can readily consume.
  • Refactored Export Logic: The core export logic in keras_hub/src/utils/transformers/export/hf_exporter.py has been modularized. The existing export_to_safetensors function now accepts a verbose argument, and two new functions, export_backbone and export_tokenizer, have been introduced to handle the specific export needs of model backbones and tokenizers, respectively.
  • Gemma Model Export Improvements: For Gemma models, I've updated the Hugging Face configuration to explicitly set tie_word_embeddings to True. Additionally, the explicit weight tying for lm_head.weight in the Gemma exporter has been removed, as this is now implicitly handled by the tie_word_embeddings configuration in Hugging Face.
  • Updated Export Tests: The gemma_test.py file has been updated to reflect and validate the new export_to_transformers methods. Tests now separately export and verify the backbone, tokenizer, and the full CausalLM model, ensuring that all components are correctly converted and loadable by Hugging Face.
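
To make the new surface concrete, here is a minimal usage sketch based on the highlights above; the preset name and output directories are placeholders, and the exact on-disk layout is whatever the exporter writes.

import keras_hub
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder preset and paths; any supported Gemma preset should behave the same.
causal_lm = keras_hub.models.GemmaCausalLM.from_preset("gemma_2b_en")

# Export the full model (backbone config/weights plus tokenizer assets)
# to a Hugging Face Transformers compatible directory.
causal_lm.export_to_transformers("./gemma_hf")

# Or export the pieces individually.
causal_lm.backbone.export_to_transformers("./gemma_hf_backbone")
causal_lm.preprocessor.tokenizer.export_to_transformers("./gemma_hf_tokenizer")

# The exported directory should then be loadable from the Hugging Face side.
hf_model = AutoModelForCausalLM.from_pretrained("./gemma_hf")
hf_tokenizer = AutoTokenizer.from_pretrained("./gemma_hf")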

@gemini-code-assist (bot) left a comment

Code Review

This pull request introduces a generic export_to_transformers method to the Backbone, CausalLM, and Tokenizer base classes, enabling models to be exported to the Hugging Face Transformers format. The implementation adds new exporter functions and updates tests accordingly. I've provided feedback to reduce code duplication, improve error message clarity, and enhance code documentation.

@divyashreepathihalli (Collaborator): Can you add a Colab example demo?

@mattdangerw mattdangerw added the kokoro:force-run Runs Tests on GPU label Jul 31, 2025
@kokoro-team kokoro-team removed the kokoro:force-run Runs Tests on GPU label Jul 31, 2025
@mattdangerw (Member) left a comment

Thanks! Overall looks good! Just had some questions on the API and a request for more testing.

    missing.append("tokenizer")
if missing:
    raise ValueError(
        "CausalLM must have a preprocessor and a tokenizer for export. "
@mattdangerw (Member):

would it work to support saving with preprocessing either attached or unattached?

# Attached (most common).
causal_lm.export_to_transformers(path)

preprocessor = causal_lm.preprocessor
causal_lm.preprocessor = None
# Detached (for preprocessing separately).
causal_lm.export_to_transformers(path)
preprocessor.export_to_transformers(path)

That would be most similar to what we have in save_to_preset currently.

@Bond099 (Contributor, Author):

I've updated the implementation accordingly in the latest commit (a rough sketch follows below):

  • causal_lm.export_to_transformers(path) now exports the backbone unconditionally, and the tokenizer only if a preprocessor (with its tokenizer) is attached. If the preprocessor is None, it skips the tokenizer export without error.

  • Added export_to_transformers(path) to CausalLMPreprocessor, which exports just the tokenizer (raising an error if the tokenizer is missing).
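
A rough sketch of that dispatch, assuming the helper names from hf_exporter.py; the exact signatures and error message in the PR may differ:

def export_to_transformers(self, path, verbose=True):
    from keras_hub.src.utils.transformers.export.hf_exporter import (
        export_backbone,
        export_tokenizer,
    )

    # The backbone is always exported.
    export_backbone(self.backbone, path, verbose=verbose)

    # The tokenizer is exported only when an attached preprocessor carries one;
    # a detached preprocessor can be exported separately via
    # CausalLMPreprocessor.export_to_transformers(path).
    if self.preprocessor is not None:
        if self.preprocessor.tokenizer is None:
            raise ValueError(
                "CausalLM's preprocessor must have a tokenizer for export."
            )
        export_tokenizer(self.preprocessor.tokenizer, path)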

    export_backbone,
)

export_backbone(self, path, verbose=verbose)
@mattdangerw (Member):

let's add some high level testing of the exposed API in backbone_test.py, task_test.py, tokenizer_test.py, next to save to preset. Doesn't need to be as thorough as the gemma_test below. We just want to test the high level usage.

E.g. that you can call this function without error for a supported model, and that it errors for models that do not yet support this.
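
Something along these lines could cover that (a pytest-style sketch; the tiny model configurations are placeholders and the exact exception raised for unsupported architectures is an assumption, not the error the PR actually raises):

import os

import pytest

from keras_hub.src.models.bert.bert_backbone import BertBackbone
from keras_hub.src.models.gemma.gemma_backbone import GemmaBackbone


def test_export_supported_backbone(tmp_path):
    backbone = GemmaBackbone(
        vocabulary_size=100,
        num_layers=2,
        num_query_heads=2,
        num_key_value_heads=1,
        hidden_dim=16,
        intermediate_dim=32,
        head_dim=8,
    )
    # Should complete without error for an architecture that has an exporter.
    backbone.export_to_transformers(os.path.join(tmp_path, "export"))


def test_export_unsupported_backbone(tmp_path):
    backbone = BertBackbone(
        vocabulary_size=100,
        num_layers=2,
        num_heads=2,
        hidden_dim=16,
        intermediate_dim=32,
        max_sequence_length=32,
    )
    # Exact error type may differ in the actual implementation.
    with pytest.raises(Exception):
        backbone.export_to_transformers(os.path.join(tmp_path, "export"))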

@Bond099 (Contributor, Author):

Sure, I've added the tests to the test classes in backbone_test.py, task_test.py, tokenizer_test.py, and causal_lm_preprocessor_test.py. I also factored common setup code (e.g. dummy tokenizer proto creation and model instantiation) into setUp methods to avoid redundancy. Please let me know if you'd like any adjustments!
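
For illustration, the shared setUp could look roughly like this; the SentencePiece training parameters and class name are placeholders, not the exact values used in the PR's tests:

import io

import sentencepiece

from keras_hub.src.models.gemma.gemma_tokenizer import GemmaTokenizer
from keras_hub.src.tests.test_case import TestCase


class TokenizerExportTest(TestCase):
    def setUp(self):
        # Train a tiny SentencePiece model in memory to act as the dummy proto.
        bytes_io = io.BytesIO()
        sentencepiece.SentencePieceTrainer.train(
            sentence_iterator=iter(["the quick brown fox", "the earth is round"]),
            model_writer=bytes_io,
            vocab_size=11,
            model_type="WORD",
            pad_id=0,
            bos_id=1,
            eos_id=2,
            unk_id=3,
            pad_piece="<pad>",
            bos_piece="<bos>",
            eos_piece="<eos>",
            unk_piece="<unk>",
        )
        self.proto = bytes_io.getvalue()
        self.tokenizer = GemmaTokenizer(proto=self.proto)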

@abheesht17 abheesht17 added the kokoro:force-run Runs Tests on GPU label Aug 7, 2025
@kokoro-team kokoro-team removed the kokoro:force-run Runs Tests on GPU label Aug 7, 2025
@abheesht17 (Collaborator) left a comment

Thanks! LGTM, mostly. Left one important question and a few nits.

Comment on lines 413 to 414:

if self.preprocessor is not None:
    if self.preprocessor.tokenizer is None:
@abheesht17 (Collaborator):

Can we move this to one line?

if self.preprocessor is not None and self.preprocessor.tokenizer is None:


def export_backbone(backbone, path):
@abheesht17 (Collaborator) commented Aug 7, 2025:

One question I have is:

In this case, just having export_backbone works because the head is a transpose of the input embedding in the case of Gemma. But we have models for which the head is not tied. We need to take care of those too.

@Bond099 (Contributor, Author):

Yeah, I had the same thought; it should be handled by the latest commit.
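
For reference, one way an exporter can cover both the tied and untied cases (a hedged sketch; the helper name, weight accessors, and config handling are illustrative, not the exact code in the commit):

import numpy as np


def export_lm_head(backbone, hf_weights, hf_config):
    # Sketch: decide between tied and untied output heads at export time.
    embedding = backbone.token_embedding
    if getattr(embedding, "tie_weights", True):
        # Tied case (e.g. Gemma): no separate lm_head tensor is written;
        # Transformers rebuilds it from the input embedding.
        hf_config["tie_word_embeddings"] = True
    else:
        # Untied case: write the output projection explicitly.
        # reverse_embeddings is (hidden_dim, vocab_size); Hugging Face expects
        # lm_head.weight shaped (vocab_size, hidden_dim).
        hf_config["tie_word_embeddings"] = False
        hf_weights["lm_head.weight"] = np.transpose(
            np.asarray(embedding.reverse_embeddings)
        )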
