
849 quantum vision transformer #967


Open
wants to merge 4 commits into base: main

Conversation

@neogyk neogyk commented Apr 23, 2025

Quantum Vision Transformer. Paper Implementation

The purpose of this PR is to bring the implementation of the Quantum Vision Transformer to the Classiq community.
The related issue is here


@TomerGoldfriend TomerGoldfriend self-assigned this Apr 23, 2025
Member

@TomerGoldfriend TomerGoldfriend Apr 24, 2025
Show convergence results, etc.

If running is too slow, you can train it once and load pretrained weights in the notebook.
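The "train once, load pretrained weights" workflow could look like the sketch below. This is a hedged sketch, not the PR's actual code: the weights are stored as plain JSON so the notebook can skip the slow training loop, and `weights.json` is a hypothetical filename.

```python
import json

def save_weights(weights, path="weights.json"):
    """Persist trained weights (nested lists of floats) as JSON."""
    with open(path, "w") as f:
        json.dump(weights, f)

def load_weights(path="weights.json"):
    """Reload previously saved weights in the notebook."""
    with open(path) as f:
        return json.load(f)

# usage sketch: save a trained 4x4 weight matrix, then reload it
trained = [[0.1 * i + 0.01 * j for j in range(4)] for i in range(4)]
save_weights(trained)
restored = load_weights()
assert restored == trained
```

For real PyTorch models, `torch.save(model.state_dict(), path)` and `model.load_state_dict(torch.load(path))` would be the idiomatic equivalent.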



Author

@TomerGoldfriend, the training procedure is very slow due to the API call; it takes more than one minute to get the result of a single batch iteration.
Also, I am getting this error when I try to train the Quantum Vision Transformer:

ClassiqAPIError: Call to API failed with code 400: Apologies for the inconvenience. We're currently experiencing an overwhelming surge in user activity, causing our system to be temporarily overloaded. Our team is actively addressing this and working on resolving the issue to provide you with a smoother experience. Please bear with us as we work to accommodate the high interest and requests. Thank you for your patience.
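Since the quoted error describes a transient overload, one mitigation is to wrap the API call in a retry loop with exponential backoff. The sketch below is a hypothetical helper, not part of the Classiq SDK: `TransientAPIError` and `flaky` stand in for the real `ClassiqAPIError` and the real API call.

```python
import time

class TransientAPIError(Exception):
    """Stand-in for the overload error raised by the API client."""

def with_retries(call, max_attempts=5, base_delay=1.0):
    """Retry `call` on transient errors, doubling the delay each attempt."""
    for attempt in range(max_attempts):
        try:
            return call()
        except TransientAPIError:
            if attempt == max_attempts - 1:
                raise  # give up after the last attempt
            time.sleep(base_delay * 2 ** attempt)

# usage sketch: a call that fails twice, then succeeds on the third try
attempts = {"n": 0}
def flaky():
    attempts["n"] += 1
    if attempts["n"] < 3:
        raise TransientAPIError("overloaded")
    return "ok"

print(with_retries(flaky, base_delay=0.001))  # prints "ok"
```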

Member

@neogyk OK, I see. Could you please clarify the size of your quantum layer: how many input parameters and how many weights, as well as the batch size you are using?
Is it possible to reduce the problem by treating a smaller use case?

Author

@TomerGoldfriend, the quantum layer has 4 qubits. I am using a batch size of 1. The dimensionality of the weights is 4*4.
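For reference, the shapes described here (input dimension 4, a 4*4 weight matrix, batch size 1) can be sanity-checked with a purely classical stand-in for the layer. This is a hypothetical shape-checking sketch, not the Classiq quantum layer itself:

```python
def forward(batch, weights):
    """Apply a 4x4 weight matrix to each 4-dim input vector in the batch."""
    out = []
    for x in batch:  # iterate over the batch (here: 1 data point)
        out.append([sum(w * xi for w, xi in zip(row, x)) for row in weights])
    return out

weights = [[0.0] * 4 for _ in range(4)]  # 4*4 = 16 trainable parameters
batch = [[1.0, 0.0, 0.0, 0.0]]           # batch size 1, input dimension 4
y = forward(batch, weights)
assert len(y) == 1 and len(y[0]) == 4    # output shape matches the layer
```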

Author

@TomerGoldfriend, I can try to use a smaller dataset; the problem appears during backpropagation.

Author

@TomerGoldfriend

  1. The number of parameters is not large.
  2. I will try to run it in the studio; before, I used local execution.

Member

@neogyk any update on this?

Author

@TomerGoldfriend, no update; I can't run the full training with the Classiq studio.

Member

@TomerGoldfriend TomerGoldfriend Jun 16, 2025

@neogyk OK, let me try to collect all the relevant info here so we can understand where the problem is. You have a hybrid classical-quantum neural network with one quantum layer on 4 qubits. The input size of the layer is 4, and the weights are 4*4. With a batch size of 1 data point, it takes 6 minutes for forward evaluation and backward propagation. Is that correct?

What happens if you take a batch size of 2? 5? 10?

Can you decrease the weight size from 16 to 4? Even just to see that everything runs end-to-end, regardless of convergence.
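The batch-size sweep suggested above could be timed with a small harness like the following. This is a hedged sketch: `run_batch` is a placeholder for one forward+backward pass of the hybrid network and would be replaced by the real training step.

```python
import time

def run_batch(batch_size):
    """Placeholder for forward + backward on `batch_size` data points."""
    time.sleep(0.001 * batch_size)

def benchmark(batch_sizes, repeats=3):
    """Return the mean wall-clock time per iteration for each batch size."""
    results = {}
    for bs in batch_sizes:
        start = time.perf_counter()
        for _ in range(repeats):
            run_batch(bs)
        results[bs] = (time.perf_counter() - start) / repeats
    return results

timings = benchmark([2, 5, 10])
for bs, secs in sorted(timings.items()):
    print(f"batch_size={bs}: {secs:.4f} s per iteration")
```

Comparing the per-iteration time across batch sizes should show whether the API-call overhead is fixed per batch or scales with the number of data points.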

Author

@TomerGoldfriend,

  1. Yes, that's correct.
  2. I will send you the benchmark ASAP.

@NadavClassiq NadavClassiq added the Paper Implementation Project Implement a paper using Classiq label May 22, 2025

3 participants