removing unused RoPE parameters #590
Conversation
Ah thanks! I believe this was a leftover from my implementation where I computed the RoPE on the fly, which saves even more memory if you stay under the max context length. But I found that this was 4x slower, so I must have forgotten to clean that part up. Anyway, it seems like all unit tests fail for some unrelated reason, maybe a change on the GitHub Actions platform. I will look into it later today.
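For context, here is a minimal sketch of the two approaches being compared: precomputing the RoPE cos/sin tables once for the maximum context length versus recomputing them on the fly for the current sequence length. This is not the repository's actual code; the helper and class names are made up for illustration.

```python
import torch

def rope_params(head_dim, context_length, theta_base=10_000.0):
    # Illustrative helper: returns cos/sin tables of shape (context_length, head_dim)
    inv_freq = 1.0 / (theta_base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    angles = torch.arange(context_length).float()[:, None] * inv_freq[None, :]
    angles = torch.cat([angles, angles], dim=-1)
    return torch.cos(angles), torch.sin(angles)

def apply_rope(x, cos, sin):
    # Standard rotate-half formulation; x: (batch, heads, num_tokens, head_dim)
    d = x.shape[-1]
    x1, x2 = x[..., : d // 2], x[..., d // 2:]
    rotated = torch.cat([-x2, x1], dim=-1)
    return x * cos + rotated * sin

class PrecomputedRoPE(torch.nn.Module):
    """Option A: build cos/sin once for the max context length (faster, more memory)."""
    def __init__(self, head_dim, context_length):
        super().__init__()
        cos, sin = rope_params(head_dim, context_length)
        self.register_buffer("cos", cos, persistent=False)
        self.register_buffer("sin", sin, persistent=False)

    def forward(self, x):
        num_tokens = x.shape[2]
        return apply_rope(x, self.cos[:num_tokens], self.sin[:num_tokens])

class OnTheFlyRoPE(torch.nn.Module):
    """Option B: recompute cos/sin per call up to the current sequence length
    (saves memory below the max context length, but measured ~4x slower above)."""
    def __init__(self, head_dim):
        super().__init__()
        self.head_dim = head_dim

    def forward(self, x):
        num_tokens = x.shape[2]
        cos, sin = rope_params(self.head_dim, num_tokens)
        return apply_rope(x, cos.to(x.device), sin.to(x.device))
```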
Works now! (The UCI server for the chapter 6 dataset is down, which initially caused the issue.)
commit 9df572f  Sebastian Raschka  Sun Apr 6 18:29:22 2025 -0500  Improve ModernBERT comments (rasbt#606)
    * Improve modernbert comments
    * bash code formatting
commit 3654571  Sebastian Raschka  Sun Apr 6 16:46:53 2025 -0500  align formulas in notes with code (rasbt#605)
commit 67e0680  Sebastian Raschka  Sun Apr 6 09:33:36 2025 -0500  Disable mask saving as weight in Llama 3 model (rasbt#604)
    * Disable mask saving as weight
    * update pixi
    * update pixi
commit f143465  Sebastian Raschka  Sat Apr 5 16:18:27 2025 -0500  reformat nbs (rasbt#602)
commit 371ab9e  Sebastian Raschka  Sat Apr 5 10:05:15 2025 -0500  Correct BERT experiments (rasbt#600)
commit 4a96541  Sebastian Raschka  Sat Apr 5 09:13:30 2025 -0500  Add ModernBERT (rasbt#598)
commit d4c8d8f  Sebastian Raschka  Wed Apr 2 21:41:36 2025 -0500  Fix Llama language typo in bonus materials (rasbt#597)
commit 49330d0  Sebastian Raschka  Wed Apr 2 09:47:07 2025 -0500  Fix link (rasbt#596)
commit 43e25a5  Sebastian Raschka  Tue Apr 1 12:56:11 2025 -0500  Llama3Fast (rasbt#593)
    * Llama3Fast
    * Update pkg/llms_from_scratch/tests/test_llama3.py
commit aedad7e  Sebastian Raschka  Mon Mar 31 18:59:47 2025 -0500  Add Llama 3.2 to pkg (rasbt#591)
    * Add Llama 3.2 to pkg
    * remove redundant attributes
    * update tests
    * updates
    * updates
    * updates
    * fix link
    * fix link
commit 152a087  casinca  Tue Apr 1 00:10:39 2025 +0200  removing unused RoPE parameters (rasbt#590)
    * removing unused RoPE parameters
    * remove redundant context_length in GQA
    Co-authored-by: Sebastian Raschka <[email protected]>
commit 2228037  Sebastian Raschka  Mon Mar 31 16:25:53 2025 -0500  Fix data download if UCI is temporarily down (rasbt#592)
commit 6ea4dd3  Sebastian Raschka  Sun Mar 30 16:01:37 2025 -0500  Clarify dataset length in chapter 2 (rasbt#589)
commit 0f6894f  Sebastian Raschka  Sun Mar 30 15:18:12 2025 -0500  Memory optimized Llama (rasbt#588)
    * Memory optimized Llama
    * re-ad login
commit 3f93d73  Sebastian Raschka  Thu Mar 27 20:10:23 2025 -0500  Alt weight loading code via PyTorch (rasbt#585)
    * Alt weight loading code via PyTorch
    * commit additional files
commit ffd4035  Sebastian Raschka  Thu Mar 27 14:00:25 2025 -0500  Add GPTModelFast (rasbt#584)
    * Add GPTModelFast
    * update
commit 2e143f1  Sebastian Raschka  Thu Mar 27 10:43:45 2025 -0500  Adjust comment to save compiled model (rasbt#583)
commit f01e163  Daniel Kleine  Wed Mar 26 19:21:14 2025 +0100  updated .gitignore (rasbt#581)
commit 92f1313  Sebastian Raschka  Wed Mar 26 13:19:55 2025 -0500  Vocab padding clarification (rasbt#582)
    * vocab padding clarification
    * Update ch05/10_llm-training-speed/README.md
commit b789345  Sebastian Raschka  Mon Mar 24 12:01:03 2025 -0500  More explicit torchrun usage doc (rasbt#578)
commit feb1e9a  Sebastian Raschka  Sun Mar 23 19:35:12 2025 -0500  Add readme (rasbt#577)
commit c21bfe4  Sebastian Raschka  Sun Mar 23 19:28:49 2025 -0500  Add PyPI package (rasbt#576)
    * Add PyPI package
    * fixes
    * fixes
commit 7757c3d  Sebastian Raschka  Fri Mar 21 11:29:49 2025 -0500  Speed comparison figure (rasbt#575)
* removing unused RoPE parameters
* remove redundant context_length in GQA

Co-authored-by: Sebastian Raschka <[email protected]>
Since constant params/buffers are now initialized inside the model's __init__ method, this PR removes the unused RoPE parameters from the GroupedQueryAttention class and the TransformerBlock class, in case they could confuse readers.
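As a rough illustration of the cleanup (a sketch with made-up names and simplified layers, not the PR's actual diff): once the model registers the constant cos/sin and mask buffers in its own __init__, the block and attention classes no longer need RoPE- or context-length-related constructor arguments; they simply receive the shared tensors at call time.

```python
import torch
import torch.nn as nn

def rope_params(head_dim, context_length, theta_base=10_000.0):
    # Illustrative RoPE helper: cos/sin tables of shape (context_length, head_dim)
    inv_freq = 1.0 / (theta_base ** (torch.arange(0, head_dim, 2).float() / head_dim))
    angles = torch.arange(context_length).float()[:, None] * inv_freq[None, :]
    angles = torch.cat([angles, angles], dim=-1)
    return torch.cos(angles), torch.sin(angles)

class TransformerBlockSketch(nn.Module):
    # After the cleanup: no context_length / RoPE constants in the constructor;
    # the shared tensors arrive as forward() arguments instead.
    def __init__(self, emb_dim):
        super().__init__()
        self.norm = nn.LayerNorm(emb_dim)
        self.ff = nn.Linear(emb_dim, emb_dim)

    def forward(self, x, mask, cos, sin):
        # A real block would run grouped-query attention with mask/cos/sin here;
        # this stand-in just keeps the example runnable.
        return x + self.ff(self.norm(x))

class ModelSketch(nn.Module):
    def __init__(self, emb_dim=64, head_dim=64, context_length=256, n_layers=2):
        super().__init__()
        # Constant tensors are created once at the model level and shared by all
        # blocks, rather than being rebuilt in (or passed to) every layer's __init__.
        cos, sin = rope_params(head_dim, context_length)
        self.register_buffer("cos", cos, persistent=False)
        self.register_buffer("sin", sin, persistent=False)
        mask = torch.triu(
            torch.ones(context_length, context_length, dtype=torch.bool), diagonal=1
        )
        self.register_buffer("mask", mask, persistent=False)
        self.blocks = nn.ModuleList(
            TransformerBlockSketch(emb_dim) for _ in range(n_layers)
        )

    def forward(self, x):
        num_tokens = x.shape[1]
        for block in self.blocks:
            x = block(
                x,
                self.mask[:num_tokens, :num_tokens],
                self.cos[:num_tokens],
                self.sin[:num_tokens],
            )
        return x
```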
On a side note, is there a specific reason you're not masking in place in

attn_scores = attn_scores.masked_fill(mask[:num_tokens, :num_tokens], -torch.inf)

? I'm getting the same losses during training (no autograd interference) and a ~3% boost in average tokens/s.
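For reference (this snippet is not from the thread; shapes and variable names are illustrative), the in-place variant uses PyTorch's masked_fill_, which mutates attn_scores instead of allocating a new tensor. It stays autograd-safe here because the matmul that produced attn_scores does not need its own output for the backward pass; PyTorch would raise a runtime error if the in-place write invalidated a saved tensor.

```python
import torch

# Illustrative shapes only
num_tokens, head_dim = 6, 8
queries = torch.randn(num_tokens, head_dim, requires_grad=True)
keys = torch.randn(num_tokens, head_dim, requires_grad=True)

# Causal mask: True above the diagonal marks positions to hide
mask = torch.triu(torch.ones(num_tokens, num_tokens, dtype=torch.bool), diagonal=1)

attn_scores = queries @ keys.T

# Out-of-place (as in the quoted line): allocates a new tensor
masked = attn_scores.masked_fill(mask[:num_tokens, :num_tokens], -torch.inf)

# In-place variant from the comment: mutates attn_scores, no extra allocation
attn_scores.masked_fill_(mask[:num_tokens, :num_tokens], -torch.inf)

# Both paths produce identical attention weights
attn_weights = torch.softmax(masked / head_dim**0.5, dim=-1)
attn_weights_inplace = torch.softmax(attn_scores / head_dim**0.5, dim=-1)
assert torch.allclose(attn_weights, attn_weights_inplace)

# Backward still completes: the in-place fill did not overwrite anything that
# autograd saved for the matmul's backward pass (it only needs queries/keys).
attn_weights_inplace.sum().backward()
```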