Important
- You can check out the other version, which has a Remediation Track and simpler features: Vulnerability Management Platform
- The date range filter on the left side works as "from this date to that date"; you have to click a calendar date twice to set a range
An interactive web dashboard for monitoring, analyzing, and searching cybersecurity vulnerabilities (CVEs) from the NIST National Vulnerability Database (NVD). This tool provides a user-friendly interface to visualize vulnerability trends, filter data, and manage asset-specific security risks.
- Key Features
- How It Works
- Getting Started
- Usage
- Project Structure
- Contributing
- License
- Code Explanation
- AI Transparency
- About The Author
- Ways to Contribute
- Interactive Dashboard: A comprehensive overview of CVE data with key metrics like total vulnerabilities, counts by severity, and average CVSS scores.
- Advanced Filtering: Dynamically filter vulnerabilities by severity, CVSS score range, and publication date.
- Powerful Search: Search for specific CVEs by their ID or by keywords in their description.
- Data Visualization: Rich, interactive charts powered by Plotly for visualizing:
- Vulnerability distribution by severity (Pie Chart)
- Vulnerability trends over time (Line Chart)
- CVSS score distribution (Histogram)
- Top 10 most affected vendors/products (Bar Chart)
- Asset Management: Create custom "Asset Groups" to monitor vulnerabilities relevant to your specific software, products, or technologies.
- Detailed Views: Click on any CVE to see a detailed card with its description, score, publication dates, and other technical information.
- Data Export: Download the filtered vulnerability data as a CSV file for offline analysis or reporting.
The project consists of two main components:
- `fetch_nvd_data.py`: A Python script that connects to the NIST NVD API 2.0, fetches all available CVE data, and saves it into a structured CSV file (`nvd_cve_data.csv`).
- `dashboard.py`: A Streamlit application that reads the generated CSV file and builds the interactive web dashboard. It handles all the filtering, searching, and visualization logic.
Follow these instructions to get the CVE Monitor dashboard running on your local machine.

- Python 3.8 or higher
- A NIST NVD API key. You can request one for free from the NVD website.

1. Clone the repository:

   ```bash
   git clone https://github.com/ThiagoMaria-SecurityIT/cve-monitor.git
   cd cve-monitor
   ```

2. Create a virtual environment (recommended):

   ```bash
   python -m venv venv
   source venv/bin/activate  # On Windows, use `venv\Scripts\activate`
   ```

3. Install the required Python libraries:

   ```bash
   pip install -r requirements.txt
   ```

   (Note: You will need to create a `requirements.txt` file. See the content for it below.)

4. Set up your NVD API key: The `fetch_nvd_data.py` script requires your NVD API key to be set as an environment variable.

   - On Linux/macOS:

     ```bash
     export NVD_API_KEY="your_api_key_here"
     ```

   - On Windows (Command Prompt):

     ```cmd
     set NVD_API_KEY="your_api_key_here"
     ```

   - On Windows (PowerShell):

     ```powershell
     $env:NVD_API_KEY="your_api_key_here"
     ```

5. Fetch the CVE data: Run the script to download the vulnerability data from the NVD. This may take some time, as it fetches a large dataset.

   ```bash
   python fetch_nvd_data.py
   ```

   This will create a file named `nvd_cve_data.csv` in the project directory.
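The exact contents of `requirements.txt` were not included in this copy of the README. A plausible minimal set, based on the libraries the two scripts are described as using (Streamlit, pandas, Plotly, and an HTTP client for the NVD API), would be:

```
streamlit
pandas
plotly
requests
```

Pinning versions (e.g. `pandas>=2.0`) is optional but helps reproducibility.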
Once the setup is complete and the `nvd_cve_data.csv` file is present, you can launch the Streamlit dashboard:

```bash
streamlit run dashboard.py
```

To stop the dashboard, press `Ctrl+C` in your terminal. To exit the virtual environment, simply type:

```bash
deactivate
```
Your web browser will automatically open a new tab with the dashboard. You can now:
- Use the sidebar filters to narrow down the data.
- Explore the different tabs: Overview, Search, Analytics, Data, and Asset Groups.
- Create asset groups to track vulnerabilities for technologies you care about (e.g., 'Apache', 'Windows Server', 'TensorFlow').
```
cve-monitor/
├── .gitignore
├── dashboard.py          # The main Streamlit dashboard application
├── fetch_nvd_data.py     # Script to fetch data from the NVD API
├── nvd_cve_data.csv      # (Generated by fetch_nvd_data.py) The CVE dataset
├── requirements.txt      # Project dependencies
└── README.md             # This file
```
Contributions are welcome! If you have ideas for new features, improvements, or bug fixes, please feel free to:
1. Fork the repository.
2. Create a new branch (`git checkout -b feature/your-feature-name`).
3. Make your changes.
4. Commit your changes (`git commit -m 'Add some feature'`).
5. Push to the branch (`git push origin feature/your-feature-name`).
6. Open a Pull Request.
This project is licensed under the MIT License. See the LICENSE file for details.
Warning
Protect Your API Key
The most critical security aspect of this project is handling your NVD API key.
- Do Not Hardcode Your API Key: Never write your API key directly into the source code (`.py` files). If you commit this code to a public repository like GitHub, your key will be exposed and can be abused by anyone.
- Use Environment Variables: The provided code is designed to read the API key from an environment variable (`NVD_API_KEY`). This is a standard practice for keeping secrets separate from code.
- Use `.gitignore`: If you decide to use a `.env` file to store your key locally (a common alternative), make sure to add `.env` to your `.gitignore` file to prevent it from ever being tracked by Git.
Exposing your API key could lead to rate-limiting or revocation of your key by NIST.
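A minimal sketch of this practice in Python (the helper name `get_api_key` is illustrative, not an identifier from the project's code):

```python
import os

def get_api_key() -> str:
    """Read the NVD API key from the environment.

    Fails fast with a clear message instead of silently sending
    unauthenticated (and more heavily rate-limited) requests.
    """
    key = os.environ.get("NVD_API_KEY")
    if not key:
        raise RuntimeError(
            "NVD_API_KEY is not set. Export it before running this script."
        )
    return key
```

Because the key never appears in the source, committing the code to a public repository is safe.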
This script is responsible for the entire data extraction process.
- Configuration: Sets the base API URL, the output filename, and retrieves the API key from the environment.
- API Communication: It sends `GET` requests to the NVD API endpoint, using a `while` loop and pagination (`startIndex`) to fetch all available records, since the API returns at most 2,000 results per request.
- Rate Limiting: A `time.sleep(6)` pause is included between API calls. This keeps the script well within the NVD's rate limits (50 requests per rolling 30-second window with an API key) and prevents your key from being blocked.
- Data Parsing: It processes the incoming JSON response, extracts the relevant fields for each CVE (ID, description, dates, CVSS v3.1 score, and severity), and structures them.
- CSV Export: It uses the `pandas` library to convert the list of extracted CVEs into a DataFrame and saves it as `nvd_cve_data.csv`.
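The pagination and parsing steps above can be sketched as follows. This is an illustrative outline, not the script's actual code: function names, the flattened field names, and error handling are assumptions.

```python
import os
import time

import requests

NVD_API_URL = "https://services.nvd.nist.gov/rest/json/cves/2.0"
PAGE_SIZE = 2000  # the API caps results per request at 2,000


def parse_cve(item):
    """Flatten one CVE entry from the API's JSON into a simple record."""
    cve = item["cve"]
    # Prefer the CVSS v3.1 metric when present; older CVEs may lack it.
    metrics = cve.get("metrics", {}).get("cvssMetricV31", [])
    cvss = metrics[0]["cvssData"] if metrics else {}
    return {
        "id": cve["id"],
        "description": next(
            (d["value"] for d in cve.get("descriptions", []) if d["lang"] == "en"),
            "",
        ),
        "published": cve.get("published"),
        "score": cvss.get("baseScore"),
        "severity": cvss.get("baseSeverity"),
    }


def fetch_all_cves():
    """Page through the NVD API until every record has been collected."""
    headers = {"apiKey": os.environ["NVD_API_KEY"]}
    records, start = [], 0
    while True:
        resp = requests.get(
            NVD_API_URL,
            params={"resultsPerPage": PAGE_SIZE, "startIndex": start},
            headers=headers,
            timeout=60,
        )
        resp.raise_for_status()
        data = resp.json()
        records.extend(parse_cve(v) for v in data["vulnerabilities"])
        start += PAGE_SIZE
        if start >= data["totalResults"]:
            break
        time.sleep(6)  # stay well under the NVD rate limit
    return records
```

The records returned by `fetch_all_cves` map directly onto the rows of the CSV that `pandas` writes in the final step.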
This script creates the user-facing web application.
- Page Setup: Configures the browser tab title, icon, and layout using `st.set_page_config()`.
- Data Loading: It loads the `nvd_cve_data.csv` file into a pandas DataFrame. The `@st.cache_data` decorator caches the data in memory, so the file isn't re-read every time a filter changes, making the dashboard much faster.
- Sidebar Filters: It uses `st.sidebar` to create interactive widgets (a multi-select for severity, a slider for the score, and a date range input).
- Data Filtering: The main DataFrame is filtered based on the user's selections in the sidebar.
- Display: The filtered data is displayed in a clean, sortable table using `st.dataframe()`. Key metrics (like total CVEs found) are shown using `st.metric()`.
- CVE Details Section: Added a dedicated section with dual search functionality:
- Search by CVE ID: Users can search for specific vulnerabilities by CVE ID
- Search by Description: Users can search for vulnerabilities using keywords in descriptions with full pagination support (10/20/50/100 results per page)
- Form-based search implementation with Search and Clear buttons for better user control
- Detailed CVE information display with severity color coding
- Interactive Features:
- Copy-to-clipboard functionality for CVE IDs
- Clear selection button to reset CVE details view
- Session state management to preserve tab selection and search results
- Form-based input handling to prevent session state conflicts
- Download Button: A `st.download_button` is provided to allow the user to download the currently filtered view as a new CSV file.
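The data filtering step can be sketched with plain pandas, independent of Streamlit. The column names here are assumptions for illustration; the real CSV produced by the fetch script may use different headers.

```python
import pandas as pd

# Toy stand-in for nvd_cve_data.csv.
df = pd.DataFrame(
    {
        "cve_id": ["CVE-2024-0001", "CVE-2024-0002", "CVE-2024-0003"],
        "severity": ["CRITICAL", "LOW", "HIGH"],
        "cvss_score": [9.8, 3.1, 7.5],
    }
)


def apply_filters(df, severities, score_range):
    """Mirror the sidebar: keep rows matching the multi-select and slider."""
    low, high = score_range
    mask = df["severity"].isin(severities) & df["cvss_score"].between(low, high)
    return df[mask]


filtered = apply_filters(df, ["CRITICAL", "HIGH"], (7.0, 10.0))
```

In the dashboard, the same filtered DataFrame feeds `st.dataframe()`, the Plotly charts, and the CSV download button.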
This project is open-source and available under the MIT License.
In the spirit of transparency, it is important to note that the code and documentation for this project were generated with the assistance of AI. The collaborators chosen for this task were Manus and DeepSeek. The process involved defining the project goals, specifying the required components, and refining the generated output to ensure accuracy, security, and adherence to best practices.
Thiago Maria - From Brazil to the World π
Senior Security Information Professional | Passionate Programmer | AI Developer
With a professional background in security analysis and a deep passion for programming, I created this GitHub account to share knowledge about information security, cybersecurity, Python, and AI development practices. Most of my work here focuses on implementing security-first practices in companies and developer tools while maintaining usability and productivity.
Let's Connect: