TinyGPT Banner

TinyGPT 🤖


TinyGPT is a compact 50M-parameter GPT model trained on a dataset of tiny stories, designed to generate coherent and creative text based on user input. ✨

HuggingFace Repository: https://huggingface.co/NotShrirang/tinygpt

Hosted Streamlit Application: https://tinygpt.streamlit.app/

Overview 🔍

TinyGPT is a lightweight GPT implementation trained on a comprehensive dataset of short stories. With 50M parameters, it strikes a balance between computational efficiency and generative capability. The model was trained using a transformer architecture with self-attention mechanisms to capture contextual relationships in text.

Model Architecture ๐Ÿ—๏ธ

TinyGPT uses a standard GPT decoder-only transformer architecture with:

  • 8 transformer blocks 🧱
  • 8 attention heads 👁️
  • 512 embedding dimensions 📊
  • Vocabulary size of 50,304 tokens 📚
  • Context window of 512 tokens 🪟
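
For reference, here is a minimal sketch of how these hyperparameters could be expressed as a PyTorch-style configuration object. The class and field names below are illustrative assumptions, not the repository's actual code:

from dataclasses import dataclass

@dataclass
class TinyGPTConfig:
    n_layer: int = 8          # transformer blocks
    n_head: int = 8           # attention heads per block
    n_embd: int = 512         # embedding dimension
    vocab_size: int = 50304   # tokenizer vocabulary size
    block_size: int = 512     # context window in tokens

config = TinyGPTConfig()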

Dataset 📖

The model was trained on the TinyStories dataset, a collection of short stories designed for training language models. This dataset provides simple narratives that help the model learn coherent story generation while maintaining a smaller size compared to larger language models.

Training Data Improvements 📈

  • Scale: TinyGPT was trained on approximately 300M tokens, significantly enhancing its language understanding capabilities.
  • Data Processing: Early training runs were hampered by issues in the data preprocessing pipeline that affected how data was passed to the model. These issues have since been resolved, leading to more consistent, higher-quality training.

Installation 💿

To install TinyGPT, follow these steps:

# Clone the repository
git clone https://github.com/NotShrirang/tinygpt.git

# Navigate to the project directory
cd tinygpt

# Install the required packages
pip install -r requirements.txt

# Create a directory for the model weights (see the HuggingFace repository above for the checkpoint files)
mkdir -p tinygpt/weights
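
If the checkpoint files are published in the HuggingFace repository linked above (an assumption; check that repository for the exact file names), one way to fetch them into the weights directory is with the huggingface_hub client:

from huggingface_hub import snapshot_download

# Download all files from the TinyGPT HuggingFace repo into tinygpt/weights/
snapshot_download(repo_id="NotShrirang/tinygpt", local_dir="tinygpt/weights")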

Usage 🚀

Streamlit Interface 🖥️

The easiest way to interact with TinyGPT is through its Streamlit interface:

streamlit run main.py

This will launch a web application where you can input text and see the model's generated responses.

Training ⚙️

TinyGPT was trained using PyTorch on the TinyStories dataset. The training process involved:

  1. Tokenizing the input text
  2. Creating sliding windows of fixed block size
  3. Training the model with cross-entropy loss
  4. Applying learning rate scheduling with warmup and cosine decay
Loss Curve
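
As an illustration of steps 2 and 4, the sketch below shows fixed-size sliding windows and a warmup-plus-cosine learning-rate schedule in PyTorch. It is a simplified stand-in with assumed hyperparameters (warmup_steps, max_steps, learning-rate bounds), not the repository's actual training loop:

import math
import torch

block_size = 512                        # fixed context window
warmup_steps, max_steps = 100, 10_000   # assumed schedule lengths
max_lr, min_lr = 3e-4, 3e-5             # assumed learning-rate bounds

def get_batch(tokens, batch_size):
    # Sample random windows of block_size tokens; targets are the inputs shifted by one.
    ix = torch.randint(len(tokens) - block_size - 1, (batch_size,))
    x = torch.stack([tokens[i : i + block_size] for i in ix])
    y = torch.stack([tokens[i + 1 : i + 1 + block_size] for i in ix])
    return x, y

def lr_at(step):
    # Linear warmup, then cosine decay from max_lr down to min_lr.
    if step < warmup_steps:
        return max_lr * (step + 1) / warmup_steps
    progress = (step - warmup_steps) / (max_steps - warmup_steps)
    return min_lr + 0.5 * (max_lr - min_lr) * (1 + math.cos(math.pi * progress))

Inside the training loop (step 3), the model's logits of shape (batch, block_size, vocab_size) are flattened and scored against the shifted targets with torch.nn.functional.cross_entropy.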

Training Optimizations 🚀

TinyGPT's training process leverages several optimization techniques to enhance speed, stability, and performance:

  • Kernel Fusion: Implemented to reduce memory bandwidth bottlenecks and speed up training operations
  • Mixed Precision Training: Utilizes bfloat16 format for significantly faster training while maintaining numerical stability
  • Gradient Accumulation: Applied to improve training stability and allow effective training with larger batch sizes
  • Cosine Scheduler: Implements variable learning rate throughout training for better convergence
  • PyTorch's Multi-Head Attention: Uses standard PyTorch implementations for Multi-Head Attention layers to boost training speed

While using PyTorch's native attention implementation deviates from the "from scratch" philosophy, it enables more rapid model iteration and training with available resources.
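
As a rough illustration of how bfloat16 mixed precision and gradient accumulation typically combine in a PyTorch training step, here is a self-contained sketch; the tiny stand-in model, optimizer, and batch shapes are assumptions for demonstration only:

import torch
import torch.nn as nn
import torch.nn.functional as F

vocab_size, block_size, accum_steps = 50304, 512, 8
model = nn.Sequential(nn.Embedding(vocab_size, 64), nn.Linear(64, vocab_size))  # stand-in for the GPT
optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
xb = torch.randint(0, vocab_size, (2, block_size))   # dummy input token ids
yb = torch.randint(0, vocab_size, (2, block_size))   # dummy shifted targets
device_type = "cuda" if torch.cuda.is_available() else "cpu"

optimizer.zero_grad(set_to_none=True)
for micro_step in range(accum_steps):
    # bfloat16 autocast speeds up matmuls while keeping fp32 master weights;
    # unlike float16, it does not require a GradScaler.
    with torch.autocast(device_type=device_type, dtype=torch.bfloat16):
        logits = model(xb)
        loss = F.cross_entropy(logits.view(-1, vocab_size), yb.view(-1))
    # Dividing by accum_steps makes the summed gradient match one large effective batch.
    (loss / accum_steps).backward()
optimizer.step()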

For details on the training process, see the training notebook in the notebooks/ directory.

Sample Outputs 📝

Example 1

Prompt: One day, a dragon

Output:
One day, a dragon named Bobo was walking in the forest when he saw a little bunny. The bunny was sad because he had no friends. Bobo wanted to help the bunny, so he asked the bunny to give him a hug. The bunny said yes, and the bunny gave the bunny a hug.

Bobo was very happy and thanked the bunny. He named the bunny, and they became good friends. The bunny was always grateful for Bobo's help. They became good friends, and they always shared their toys and treats!

Example 2

Prompt: A dog named

Output:
A dog named Max went for a walk. He saw a big tree and wanted to climb it. Max was very excited and started to climb the tree. He was very careful and did not fall.

Max saw a little girl named Sue. Sue was sad because she lost her toy. Max wanted to help Sue. He said, "Don't worry, Sue. I will help you find your toy."

Max and Sue looked for the toy together. They looked under the tree, behind the tree, and behind the tree. Finally, they found the toy under a big tree. Max was so happy and said, "Thank you, Sue! You are a good friend."

Sue and Max played with the toy all day. They were very happy and had a fun day!

Inference 🔮

During inference, TinyGPT uses several techniques to produce high-quality text:

  • Temperature scaling for controlling randomness
  • Top-k and top-p sampling to balance focus and diversity
  • Autoregressive generation, one token at a time
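
The sketch below shows one common way temperature, top-k, and top-p (nucleus) filtering are applied to a single step of next-token logits before sampling; it is illustrative rather than the repository's exact sampler:

import torch
import torch.nn.functional as F

def sample_next_token(logits, temperature=0.8, top_k=50, top_p=0.9):
    # logits: 1-D tensor of scores over the vocabulary for the next token
    logits = logits / temperature                   # <1 sharpens, >1 flattens the distribution
    kth_best = torch.topk(logits, top_k).values[-1]
    logits[logits < kth_best] = float("-inf")       # top-k: drop everything below the k-th score
    probs = F.softmax(logits, dim=-1)
    sorted_probs, sorted_idx = torch.sort(probs, descending=True)
    remove = torch.cumsum(sorted_probs, dim=-1) > top_p
    remove[1:] = remove[:-1].clone()                # keep the token that crosses the threshold
    remove[0] = False                               # always keep the single most likely token
    probs[sorted_idx[remove]] = 0.0                 # top-p: zero out the low-probability tail
    probs = probs / probs.sum()
    return torch.multinomial(probs, num_samples=1)  # sample one token id

next_id = sample_next_token(torch.randn(50304))     # example over a dummy 50,304-way vocabulary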

License 📜

This project is licensed under the GPL-3.0 license - see the LICENSE file for details.

Contributing 👥

Contributions are welcome! Feel free to submit pull requests, create issues, or suggest improvements to the model or codebase.

Support โค๏ธ

If you find TinyGPT useful, please consider starring the repository ⭐
