
TinyGPT is a compact 50M-parameter GPT model trained on a dataset of tiny stories, designed to generate coherent and creative text based on user input. ✨
HuggingFace Repository: https://huggingface.co/NotShrirang/tinygpt
Hosted Streamlit Application: https://tinygpt.streamlit.app/
TinyGPT is a lightweight GPT implementation trained on a comprehensive dataset of short stories. With 50M parameters, it strikes a balance between computational efficiency and generative capability. The model was trained using a transformer architecture with self-attention mechanisms to capture contextual relationships in text.
TinyGPT uses a standard GPT decoder-only transformer architecture (summarized in the configuration sketch after this list) with:
- 8 transformer blocks
- 8 attention heads
- 512 embedding dimensions
- Vocabulary size of 50,304 tokens
- Context window of 512 tokens
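For reference, the architecture above can be expressed as a small settings object. This is a minimal sketch; the class and field names below are illustrative assumptions, not the identifiers used in the repository.

```python
from dataclasses import dataclass

@dataclass
class TinyGPTConfig:
    # Hypothetical names; the values mirror the architecture listed above.
    n_layers: int = 8         # transformer blocks
    n_heads: int = 8          # attention heads
    n_embd: int = 512         # embedding dimension
    vocab_size: int = 50_304  # tokenizer vocabulary
    block_size: int = 512     # context window in tokens
```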
The model was trained on the TinyStories dataset, a collection of short stories designed for training language models. This dataset provides simple narratives that help the model learn coherent story generation while maintaining a smaller size compared to larger language models.
- Scale: TinyGPT was trained on approximately 300M tokens, significantly enhancing its language understanding capabilities.
- Data Processing: Early training runs were affected by issues in the data preprocessing pipeline that changed how data was passed to the model. These issues have since been resolved, leading to more consistent, higher-quality training.
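If you want to inspect the training data yourself, TinyStories is available on the Hugging Face Hub. The sketch below assumes the commonly used roneneldan/TinyStories dataset id and a `text` column; the project itself may use a different copy or preprocessing.

```python
from datasets import load_dataset

# Assumed dataset id and column name; adjust if the project uses another source.
stories = load_dataset("roneneldan/TinyStories", split="train")
print(stories[0]["text"][:200])  # preview the first story
```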
To install TinyGPT, follow these steps:
# Clone the repository
git clone https://github.com/NotShrirang/tinygpt.git
# Navigate to the project directory
cd tinygpt
# Install the required packages
pip install -r requirements.txt
# Create the weights directory and download the model weights into it
# (weights are hosted at https://huggingface.co/NotShrirang/tinygpt)
mkdir -p tinygpt/weights
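One convenient way to fetch the published weights is the huggingface_hub client. This is a sketch under the assumption that the checkpoint files from the model repository can simply be placed in tinygpt/weights; it is not the project's documented download procedure.

```python
from huggingface_hub import snapshot_download

# Download all files from the model repository into the local weights folder.
snapshot_download(repo_id="NotShrirang/tinygpt", local_dir="tinygpt/weights")
```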
The easiest way to interact with TinyGPT is through its Streamlit interface:
streamlit run main.py
This will launch a web application where you can input text and see the model's generated responses.
TinyGPT was trained using PyTorch on the TinyStories dataset. The training process involved the following steps (a simplified sketch follows the list):
- Tokenizing the input text
- Creating sliding windows of fixed block size
- Training the model with cross-entropy loss
- Applying learning rate scheduling with warmup and cosine decay
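The sketch below puts these steps together. It is illustrative only: the batch size, learning rate, and the `model` and `tokens` objects are assumptions rather than the values used in the actual training notebook.

```python
import torch
import torch.nn.functional as F

block_size = 512  # context window, matching the architecture above

def get_batch(tokens: torch.Tensor, batch_size: int = 32):
    # Sample random sliding windows of fixed block size from the token stream.
    ix = torch.randint(0, len(tokens) - block_size - 1, (batch_size,))
    x = torch.stack([tokens[i : i + block_size] for i in ix])
    y = torch.stack([tokens[i + 1 : i + block_size + 1] for i in ix])  # next-token targets
    return x, y

# Assumed objects: `model` is the TinyGPT module, `tokens` the tokenized corpus.
# optimizer = torch.optim.AdamW(model.parameters(), lr=3e-4)
# x, y = get_batch(tokens)
# logits = model(x)                                    # (batch, block, vocab)
# loss = F.cross_entropy(logits.view(-1, logits.size(-1)), y.view(-1))
# loss.backward(); optimizer.step(); optimizer.zero_grad()
```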

TinyGPT's training process leverages several optimization techniques to improve speed, stability, and performance; a combined sketch follows this list:
- Kernel Fusion: Implemented to reduce memory bandwidth bottlenecks and speed up training operations
- Mixed Precision Training: Utilizes bfloat16 format for significantly faster training while maintaining numerical stability
- Gradient Accumulation: Applied to improve training stability and allow training with larger effective batch sizes
- Cosine Scheduler: Implements variable learning rate throughout training for better convergence
- PyTorch's Multi-Head Attention: Uses standard PyTorch implementations for Multi-Head Attention layers to boost training speed
While using PyTorch's native attention implementation deviates from the "from scratch" philosophy, it enables more rapid model iteration and training with available resources.
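The sketch below shows how these techniques typically fit together in a PyTorch training step. The names, warmup length, accumulation steps, and the use of torch.compile for kernel fusion are assumptions for illustration, not the repository's exact implementation.

```python
import math
import torch

accum_steps = 4  # gradient accumulation: effective batch = micro-batch size * 4
warmup_iters, max_iters, max_lr, min_lr = 200, 10_000, 3e-4, 3e-5

def lr_at(it: int) -> float:
    # Linear warmup followed by cosine decay down to min_lr.
    if it < warmup_iters:
        return max_lr * (it + 1) / warmup_iters
    progress = (it - warmup_iters) / (max_iters - warmup_iters)
    return min_lr + 0.5 * (max_lr - min_lr) * (1 + math.cos(math.pi * progress))

# model = torch.compile(model)  # one way to get kernel fusion (assumed, not confirmed)
# for it in range(max_iters):
#     for group in optimizer.param_groups:
#         group["lr"] = lr_at(it)
#     for _ in range(accum_steps):
#         x, y = get_batch(tokens)
#         with torch.autocast(device_type="cuda", dtype=torch.bfloat16):  # mixed precision
#             loss = compute_loss(model, x, y) / accum_steps
#         loss.backward()
#     optimizer.step()
#     optimizer.zero_grad(set_to_none=True)
```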
For details on the training process, see the training notebook in the notebooks/ directory.
Prompt: One day, a dragon
Output:
One day, a dragon named Bobo was walking in the forest when he saw a little bunny. The bunny was sad because he had no friends. Bobo wanted to help the bunny, so he asked the bunny to give him a hug. The bunny said yes, and the bunny gave the bunny a hug.
Bobo was very happy and thanked the bunny. He named the bunny, and they became good friends. The bunny was always grateful for Bobo's help. They became good friends, and they always shared their toys and treats!
Prompt: A dog named
Output:
A dog named Max went for a walk. He saw a big tree and wanted to climb it. Max was very excited and started to climb the tree. He was very careful and did not fall.
Max saw a little girl named Sue. Sue was sad because she lost her toy. Max wanted to help Sue. He said, "Don't worry, Sue. I will help you find your toy."
Max and Sue looked for the toy together. They looked under the tree, behind the tree, and behind the tree. Finally, they found the toy under a big tree. Max was so happy and said, "Thank you, Sue! You are a good friend."
Sue and Max played with the toy all day. They were very happy and had a fun day!
During inference, TinyGPT uses several techniques to produce high-quality text (a minimal sampling sketch follows the list):
- Temperature scaling for controlling randomness
- Top-k and top-p sampling for focus and diversity
- Efficient autoregressive generation, one token at a time
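The function below is a minimal sampling sketch combining temperature scaling and top-k filtering; its name and defaults are illustrative assumptions (top-p filtering is omitted for brevity) rather than the project's actual generation code.

```python
import torch
import torch.nn.functional as F

@torch.no_grad()
def sample_next_token(logits: torch.Tensor, temperature: float = 0.8, top_k: int = 50) -> int:
    # `logits` is the model's output for the last position, shape (vocab_size,).
    logits = logits / temperature                     # temperature scaling controls randomness
    topk_vals, topk_idx = torch.topk(logits, top_k)   # keep only the k most likely tokens
    probs = F.softmax(topk_vals, dim=-1)
    choice = torch.multinomial(probs, num_samples=1)  # sample from the filtered distribution
    return topk_idx[choice].item()                    # id of the next token
```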
This project is licensed under the GPL-3.0 license - see the LICENSE file for details.
Contributions are welcome! Feel free to submit pull requests, create issues, or suggest improvements to the model or codebase.
If you find TinyGPT useful, please consider starring the repository ⭐