Add generic export_to_transformers to the base classes #2346
Conversation
Summary of Changes
Hello @Bond099, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
This pull request introduces a significant enhancement by providing direct export capabilities from Keras Hub models and tokenizers to the Hugging Face Transformers format. This change streamlines the process of converting Keras-based components for use within the Hugging Face ecosystem, improving interoperability and ease of use for developers working with both frameworks.
Highlights
- Direct Hugging Face Export for Models: I've added a new `export_to_transformers` method to the `Backbone` and `CausalLM` base classes. This allows users to directly export their Keras models (or just the backbone) into a format compatible with Hugging Face Transformers, including configuration and weights, simplifying interoperability (see the usage sketch after this list).
- Direct Hugging Face Export for Tokenizers: A corresponding `export_to_transformers` method has been added to the `Tokenizer` base class. This enables the direct export of Keras tokenizers, saving their assets in a format that Hugging Face Transformers can readily consume.
- Refactored Export Logic: The core export logic in `keras_hub/src/utils/transformers/export/hf_exporter.py` has been modularized. The existing `export_to_safetensors` function now accepts a `verbose` argument, and two new functions, `export_backbone` and `export_tokenizer`, have been introduced to handle the specific export needs of model backbones and tokenizers, respectively.
- Gemma Model Export Improvements: For Gemma models, I've updated the Hugging Face configuration to explicitly set `tie_word_embeddings` to `True`. Additionally, the explicit weight tying for `lm_head.weight` in the Gemma exporter has been removed, as this is now implicitly handled by the `tie_word_embeddings` configuration in Hugging Face.
- Updated Export Tests: The `gemma_test.py` file has been updated to reflect and validate the new `export_to_transformers` methods. Tests now separately export and verify the backbone, tokenizer, and the full CausalLM model, ensuring that all components are correctly converted and loadable by Hugging Face.
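For context, here is a rough sketch of how the new API would be used. This is an illustration under assumptions, not code from the PR: the preset name and output directories are placeholders, and only architectures with a registered exporter (Gemma, per this PR) are expected to work.

```python
import keras_hub

# Placeholder preset; any architecture with a registered HF exporter applies.
causal_lm = keras_hub.models.GemmaCausalLM.from_preset("gemma_2b_en")

# Export the full model: backbone config + weights, plus tokenizer assets
# (the tokenizer is exported only when a preprocessor is attached).
causal_lm.export_to_transformers("./gemma_hf")

# Or export the individual components separately.
causal_lm.backbone.export_to_transformers("./gemma_hf_backbone")
causal_lm.preprocessor.tokenizer.export_to_transformers("./gemma_hf_tokenizer")
```

The exported directories are intended to be consumable by the usual Hugging Face `from_pretrained` entry points (e.g. `AutoModelForCausalLM`, `AutoTokenizer`), which is what the updated Gemma tests verify.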
Code Review
This pull request introduces a generic `export_to_transformers` method to the `Backbone`, `CausalLM`, and `Tokenizer` base classes, enabling models to be exported to the Hugging Face Transformers format. The implementation adds new exporter functions and updates tests accordingly. I've provided feedback to reduce code duplication, improve error message clarity, and enhance code documentation.
Can you add a Colab example demo?
Thanks! Overall looks good! Just had some questions on the API and a request for more testing.
keras_hub/src/models/causal_lm.py (outdated)

    missing.append("tokenizer")
if missing:
    raise ValueError(
        "CausalLM must have a preprocessor and a tokenizer for export. "
would it work to support saving with preprocessing either attached or unattached?
# Attached (most common).
causal_lm.export_to_transformers(path)
preprocessor = causal_lm.preprocessor
causal_lm.preprocessor = None
# Detached (for preprocessing separately).
causal_lm.export_to_transformers(path)
preprocessor.export_to_transformers(path)
That would be most similar to what we have in `save_to_preset` currently.
I've updated the implementation accordingly in the latest commit (see the sketch below):

- `causal_lm.export_to_transformers(path)` now exports the backbone unconditionally, and the tokenizer only if a `preprocessor` (with its tokenizer) is attached. If the preprocessor is None, the tokenizer export is skipped without error.
- Added `export_to_transformers(path)` to `CausalLMPreprocessor`, which exports just the tokenizer (raising an error if the tokenizer is missing).
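A minimal sketch of the control flow just described, assuming the `export_backbone`/`export_tokenizer` helpers from `hf_exporter.py` mentioned above (their signatures are assumed); this is an illustration of the described behavior, not the actual diff.

```python
from keras_hub.src.utils.transformers.export.hf_exporter import (
    export_backbone,
    export_tokenizer,
)


def export_to_transformers(self, path, verbose=True):
    """Sketch of CausalLM.export_to_transformers per the discussion above."""
    # The backbone is always exported (config + safetensors weights).
    export_backbone(self.backbone, path, verbose=verbose)
    # No preprocessor attached: skip the tokenizer export silently.
    if self.preprocessor is None:
        return
    # A preprocessor without a tokenizer is treated as an error.
    if self.preprocessor.tokenizer is None:
        raise ValueError(
            "Cannot export a tokenizer: the attached preprocessor has none."
        )
    export_tokenizer(self.preprocessor.tokenizer, path)
```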
keras_hub/src/models/backbone.py (outdated)

    export_backbone,
)

export_backbone(self, path, verbose=verbose)
Let's add some high-level testing of the exposed API in backbone_test.py, task_test.py, and tokenizer_test.py, next to the save-to-preset tests. It doesn't need to be as thorough as the gemma_test below; we just want to test the high-level usage, e.g. that you can call this function without error for a supported model, and that it errors for models that do not yet support this.
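A hedged sketch of what such a smoke test could look like. The class name, fixture attributes, expected output file, and exception type are illustrative placeholders, not taken from the PR; the real tests would use Keras Hub's own test utilities and tiny dummy models built in setUp.

```python
import os
import tempfile
import unittest


class ExportToTransformersSmokeTest(unittest.TestCase):
    # self.gemma_backbone / self.unsupported_backbone stand in for small dummy
    # models that the real tests build in setUp (omitted here).
    def test_supported_backbone_exports(self):
        path = os.path.join(tempfile.mkdtemp(), "hf_export")
        self.gemma_backbone.export_to_transformers(path)
        # The export should produce a Hugging Face style config file.
        self.assertTrue(os.path.exists(os.path.join(path, "config.json")))

    def test_unsupported_backbone_errors(self):
        # Exact exception type is an assumption; the point is that models
        # without a registered exporter fail loudly instead of silently.
        with self.assertRaises(ValueError):
            self.unsupported_backbone.export_to_transformers(
                tempfile.mkdtemp()
            )
```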
Sure, I've added the tests to the test classes in backbone_test.py, task_test.py, tokenizer_test.py, and causal_lm_preprocessor_test.py. I also factored common setup code (e.g. dummy tokenizer proto creation and model instantiation) into setUp methods to prevent redundancy. Please let me know if you'd like any adjustments!
Thanks! LGTM, mostly. Left one important question and a few nits.
keras_hub/src/models/causal_lm.py (outdated)

if self.preprocessor is not None:
    if self.preprocessor.tokenizer is None:
Can we move this to one line:

if self.preprocessor is not None and self.preprocessor.tokenizer is None:
def export_backbone(backbone, path):
One question I have is: in this case, just having `export_backbone` works because the head is a transpose of the input embedding in the case of Gemma. But we have models for which the head is not tied. We need to take care of those too.
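For models where the head is not tied, the exporter would presumably need to write `lm_head.weight` explicitly instead of relying on `tie_word_embeddings`. A hedged sketch of that branch; the helper `get_output_projection_kernel` and the weight layout are assumptions, not the PR's implementation.

```python
import numpy as np


def export_lm_head(causal_lm, hf_config, weights_dict):
    # Sketch only. For tied models (e.g. Gemma), Hugging Face rebuilds
    # lm_head from the input embedding via tie_word_embeddings=True,
    # so no extra tensor needs to be written.
    if hf_config.get("tie_word_embeddings", False):
        return
    # Untied case: save the output projection explicitly. HF expects
    # lm_head.weight with shape (vocab_size, hidden_dim), hence the transpose
    # of a Keras (hidden_dim, vocab_size) kernel.
    kernel = get_output_projection_kernel(causal_lm)  # hypothetical accessor
    weights_dict["lm_head.weight"] = np.transpose(kernel)
```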
Yeah, I had the same thought; it should be handled by the latest commit.
Description of the change
Reference
Colab Notebook
https://colab.research.google.com/drive/1CNUkqbRTBPirTaU1-2UrWuK2dTZhgOtn?usp=sharing
Checklist