SafeScroll is a system that detects potentially harmful or inappropriate images and blurs them.
Demo videos: Extension_Signup_Login.mp4, Extension.mp4
- Detects potentially harmful or inappropriate content in images
- Uses deep learning models for image and text analysis
- Provides real-time image processing
- Easy to integrate into existing applications
- Python 3.8 or higher
- Node.js (for frontend development)
- Git
- Clone the repository:
git clone https://github.com/afaq-ahmed07/SafeScroll.git
cd SafeScroll
- Install Python dependencies:
# Navigate to python-model directory
cd python-model
pip install -r requirements.txt
- Set up environment variables (see the sketch after these installation steps):
cd ..
# Create a .env file (example provided in .env.example)
touch .env
- Build the frontend:
# Navigate to public directory
cd public
npm install
npm run build
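As noted in the environment-variables step above, the backend can read its settings from .env at startup. A minimal sketch, assuming the file is loaded with python-dotenv; MODEL_PATH and PORT are hypothetical keys, and the real ones are listed in .env.example:

```python
# Minimal sketch of reading .env values; assumes python-dotenv is installed.
# MODEL_PATH and PORT are hypothetical keys (see .env.example for the real ones).
import os

from dotenv import load_dotenv

load_dotenv()  # copies key=value pairs from .env into os.environ
model_path = os.getenv("MODEL_PATH", "model.h5")
port = int(os.getenv("PORT", "5000"))
print(f"Serving {model_path} on port {port}")
```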
- Start the Python backend:
cd ../python-model
python app.py
The backend will run on http://localhost:5000
- Start the frontend (optional):
cd ../public
npm start
The frontend will run on http://localhost:3000
POST /predict

Request body:

{
    "image_url": "URL_TO_IMAGE"
}

Response ("prediction" is 0 for safe, 1 for unsafe):

{
    "prediction": 0,
    "confidence": 0.95
}
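For example, the endpoint can be called from Python with the requests library. A minimal sketch, assuming the backend is running locally on the default port from the steps above; the image URL is illustrative:

```python
import requests

# Ask the backend to classify an image by URL.
# http://localhost:5000 assumes the default local setup described above.
resp = requests.post(
    "http://localhost:5000/predict",
    json={"image_url": "https://example.com/photo.jpg"},  # illustrative URL
)
resp.raise_for_status()
result = resp.json()

# prediction: 0 for safe, 1 for unsafe (per the response format above)
label = "unsafe" if result["prediction"] == 1 else "safe"
print(f"{label} (confidence {result['confidence']:.2f})")
```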
SafeScroll/
├── python-model/ # Backend Python code
│ ├── app.py # Flask server
│ ├── predict_DualBranch.py # Prediction model
│ ├── fe.py # Feature extraction
│ └── requirements.txt # Python dependencies
├── public/ # Frontend code
│ ├── background.js # Browser extension background script
│ └── content.js # Browser extension content script
└── .env # Environment variables
- The system can be used as a standalone API service
- It can be integrated into browser extensions
- It can be used to process images in batch mode (see the sketch below)
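A minimal batch-mode sketch, assuming the backend above is running locally; it sends each URL to /predict and blurs the unsafe ones with Pillow. The input list, blur radius, and output naming are all illustrative:

```python
import io

import requests
from PIL import Image, ImageFilter

PREDICT_URL = "http://localhost:5000/predict"  # default local backend

image_urls = [  # illustrative inputs
    "https://example.com/a.jpg",
    "https://example.com/b.jpg",
]

for url in image_urls:
    # Classify the image via the /predict endpoint.
    verdict = requests.post(PREDICT_URL, json={"image_url": url}).json()
    if verdict["prediction"] == 1:  # 1 means unsafe
        # Download the original and blur it before any further use.
        raw = requests.get(url, timeout=30).content
        img = Image.open(io.BytesIO(raw))
        blurred = img.filter(ImageFilter.GaussianBlur(radius=12))
        blurred.save("blurred_" + url.rsplit("/", 1)[-1])
        print(f"{url}: unsafe, saved blurred copy")
    else:
        print(f"{url}: safe")
```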
- Fork the repository
- Create your feature branch
- Commit your changes
- Push to the branch
- Create a new Pull Request
This project is licensed under the MIT License - see the LICENSE file for details.