849 quantum vision transformer #967
base: main
Conversation
Show convergence results, etc.
If running is too slow, you can train the model once and load the pretrained weights in the notebook.
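A minimal sketch of that pretrained-weights approach, assuming the model is a standard `torch.nn.Module`; the class name `QuantumVisionTransformer`, the `train` helper, and the file name are placeholders rather than names taken from this notebook:

```python
import os

import torch

WEIGHTS_PATH = "qvt_pretrained_weights.pt"  # placeholder file name

model = QuantumVisionTransformer()  # assumed to be defined earlier in the notebook

if os.path.exists(WEIGHTS_PATH):
    # Reuse weights trained offline instead of re-running the slow training loop.
    model.load_state_dict(torch.load(WEIGHTS_PATH, map_location="cpu"))
    model.eval()
else:
    train(model)  # placeholder for the notebook's training loop
    torch.save(model.state_dict(), WEIGHTS_PATH)
```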
@TomerGoldfriend, the training procedure is very slow due to the API calls; it takes more than 1 minute to get the result of one batch iteration.
Also, I am getting this error when I try to train the Quantum Vision Transformer:
ClassiqAPIError: Call to API failed with code 400: Apologies for the inconvenience. We're currently experiencing an overwhelming surge in user activity, causing our system to be temporarily overloaded. Our team is actively addressing this and working on resolving the issue to provide you with a smoother experience. Please bear with us as we work to accommodate the high interest and requests. Thank you for your patience.
@neogyk OK, I see. Could you please clarify the size of your quantum layer? How many input parameters and how many weights, and what batch size are you using?
Is it possible to reduce the problem by treating a smaller use case?
@TomerGoldfriend, the quantum layer has 4 qubits. I am using a batch size of 1. The weights have dimensionality 4*4.
@TomerGoldfriend, I can try to use a smaller dataset; the problem appears during backpropagation.
- The number of parameters is not large.
- I will try to run it in the studio; before, I used local execution.
@neogyk any update on this?
@TomerGoldfriend, no update; I can't run the full training with the Classiq studio.
@neogyk OK, let me try to collect all the relevant info here so we can understand where the problem is. You have a hybrid classical-quantum neural network with one quantum layer on 4 qubits. The input size of the layer is 4, and the weights are 4*4. If you take a batch size of 1 data point, it takes 6 minutes for the forward evaluation and backward propagation. Is that correct?
What happens if you take a batch size of 2? 5? 10?
Can you decrease the weight size from 16 to 4? Even just to see that everything runs end-to-end, regardless of convergence.
- Yes, that's correct.
- I will send you the benchmark ASAP.
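For reference, a minimal sketch of how such a benchmark could be collected, assuming a PyTorch setup; `model` stands for either the full hybrid network or just the quantum layer, and the dummy inputs (a flat 4-feature vector, following the layer's input size discussed above) and cross-entropy loss are assumptions to adjust to the notebook:

```python
import time

import torch


def time_one_step(model, loss_fn, batch_size, in_features=4):
    """Time a single forward + backward pass for a given batch size."""
    x = torch.rand(batch_size, in_features)          # dummy inputs
    y = torch.zeros(batch_size, dtype=torch.long)    # dummy class-0 targets
    start = time.perf_counter()
    loss = loss_fn(model(x), y)
    loss.backward()
    return time.perf_counter() - start


for bs in (1, 2, 5, 10):
    elapsed = time_one_step(model, torch.nn.CrossEntropyLoss(), bs)
    print(f"batch size {bs}: {elapsed:.1f} s per forward+backward step")
```

If the time per step grows much more slowly than the batch size, that would indicate the per-batch API call overhead, rather than the circuit size, dominates the runtime.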
Quantum Vision Transformer: Paper Implementation
The purpose of this PR is to bring an implementation of the Quantum Vision Transformer to the Classiq community.
The related issue is here (#849).