Proxy Scraper 🌐

Welcome to the Proxy Scraper repository! This project focuses on scraping proxy lists from a variety of websites asynchronously. It is designed to be efficient and effective, providing users with a robust tool for gathering proxies.

Table of Contents

  1. Introduction
  2. Features
  3. Installation
  4. Usage
  5. Supported Websites
  6. Contributing
  7. License
  8. Releases
  9. Contact
  10. Conclusion

Introduction

The Proxy Scraper is a powerful tool that allows users to collect proxies from up to 75 different websites. By utilizing asynchronous scraping techniques, it ensures that users can gather data quickly and efficiently. This tool is particularly useful for developers, data scientists, and anyone who needs reliable proxy lists for their projects.
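
To make the asynchronous idea concrete, here is a minimal sketch using asyncio and aiohttp. This is an illustration under our own assumptions, not the repository's actual code; the source URLs are placeholders standing in for the project's list of sites.

    import asyncio
    import aiohttp

    # Placeholder source pages; the real project maintains its own list of sites.
    SOURCES = [
        "https://free-proxy-list.net/",
        "https://www.sslproxies.org/",
    ]

    async def fetch(session, url):
        # Fetch one source page; a failure yields an empty string so one
        # bad site does not abort the whole run.
        try:
            async with session.get(url, timeout=aiohttp.ClientTimeout(total=10)) as resp:
                return await resp.text()
        except Exception:
            return ""

    async def scrape_all(urls):
        # Issue all requests concurrently and wait for every page.
        async with aiohttp.ClientSession() as session:
            return await asyncio.gather(*(fetch(session, u) for u in urls))

    if __name__ == "__main__":
        pages = asyncio.run(scrape_all(SOURCES))
        print(f"Fetched {sum(1 for p in pages if p)} of {len(SOURCES)} pages")

Because every request is awaited concurrently rather than one at a time, total runtime is bounded by the slowest site rather than the sum of all sites.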

Features

  • Asynchronous Scraping: Collect proxies from multiple sources concurrently.
  • Multiple Proxy Types: Gather HTTP, HTTPS, SOCKS4, and SOCKS5 proxies.
  • Regex Support: Filter and customize your proxy lists (see the sketch after this list).
  • User-Friendly: Simple setup and easy-to-follow instructions.
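
As an illustration of the regex support, the pattern below matches the common ip:port form. The project's own per-site patterns may differ; this is only a sketch of the technique.

    import re

    # Loose ip:port pattern: any dotted quad followed by a port number.
    PROXY_RE = re.compile(r"\b(?:\d{1,3}\.){3}\d{1,3}:\d{2,5}\b")

    html = "<td>203.0.113.7:8080</td><td>198.51.100.2:3128</td>"
    print(PROXY_RE.findall(html))  # ['203.0.113.7:8080', '198.51.100.2:3128']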

Installation

To install the Proxy Scraper, follow these steps:

  1. Clone the repository:

    git clone https://github.com/ZyanMath/proxy-scraper.git

  2. Navigate to the project directory:

    cd proxy-scraper

  3. Install the required packages:

    pip install -r requirements.txt

  4. Run the scraper:

    python scraper.py

Usage

To use the Proxy Scraper, simply execute the script after installation. You can customize the parameters to suit your needs. Here’s a basic command to start scraping:

    python scraper.py --output proxies.txt

This command will save the scraped proxies into a file named proxies.txt.
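
Once proxies.txt exists, it can help to sanity-check the entries before relying on them. The sketch below assumes one ip:port entry per line and uses the requests library (a separate install, not part of this project) to test a few proxies against httpbin.org:

    import requests

    # Read one ip:port entry per line from the scraper's output file.
    with open("proxies.txt") as f:
        proxies = [line.strip() for line in f if line.strip()]

    for proxy in proxies[:10]:  # test only the first few entries
        try:
            resp = requests.get(
                "https://httpbin.org/ip",
                proxies={"http": f"http://{proxy}", "https": f"http://{proxy}"},
                timeout=5,
            )
            print(proxy, "OK", resp.status_code)
        except requests.RequestException:
            print(proxy, "failed")

Scraped proxy lists go stale quickly, so a quick liveness pass like this usually pays for itself.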

Supported Websites

The Proxy Scraper supports scraping from a wide range of websites. Here are some of the key sites included:

  • Free Proxy List
  • ProxyNova
  • SSL Proxy
  • Spys.one
  • and many more...

For a complete list of supported websites, please check the code or documentation within the repository.
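
Scrapers of this kind are typically organized around a registry mapping each source to a URL and an extraction pattern. The entries below are hypothetical examples of that structure, not the project's actual configuration:

    # Hypothetical source registry; names, URLs, and patterns are examples only.
    SOURCES = {
        "free-proxy-list": {
            "url": "https://free-proxy-list.net/",
            "pattern": r"\b(?:\d{1,3}\.){3}\d{1,3}:\d{2,5}\b",
        },
        "sslproxies": {
            "url": "https://www.sslproxies.org/",
            "pattern": r"\b(?:\d{1,3}\.){3}\d{1,3}:\d{2,5}\b",
        },
    }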

Contributing

We welcome contributions to improve the Proxy Scraper. If you have ideas or enhancements, please follow these steps:

  1. Fork the repository.
  2. Create a new branch (git checkout -b feature/YourFeature).
  3. Make your changes and commit them (git commit -m 'Add new feature').
  4. Push to the branch (git push origin feature/YourFeature).
  5. Open a pull request.

Your contributions help make this tool better for everyone!

License

This project is licensed under the MIT License. See the LICENSE file for details.

Releases

To download the latest version of the Proxy Scraper, visit our Releases section. Download the appropriate file and execute it to get started with scraping.

Contact

For any questions or feedback, feel free to open an issue on the repository.

Conclusion

The Proxy Scraper is a valuable tool for anyone needing reliable proxies. With its asynchronous capabilities and support for various proxy types, it stands out as a comprehensive solution. We encourage you to explore the repository, contribute, and make the most of this tool.

For more details, please check the Releases section for the latest updates and downloads.
