Commit
add skeleton for other modules + refactor readme + reporting
ElNiak committed Jun 20, 2024
1 parent 8da788f commit 1bc8a7e
Showing 22 changed files with 819 additions and 375 deletions.
8 changes: 8 additions & 0 deletions INSTALL.md
@@ -1,5 +1,13 @@
# Installation

## Package

```bash
# For reCAPTCHA
sudo apt-get install portaudio19-dev

```

## Pre-Commit

```bash
python3 -m pip install pre-commit
pre-commit install
```
136 changes: 91 additions & 45 deletions README.md
@@ -4,72 +4,118 @@
<img src="https://forthebadge.com/images/badges/made-with-python.svg" >
</div>

## Introduction

BountyDrive is a comprehensive tool designed for penetration testers and cybersecurity researchers. It integrates various modules for performing attacks, reporting, and managing VPN/proxy settings, making it an indispensable asset for any security professional.

## Features
- **Automation**: Automate the process of finding vulnerabilities.
- **Dorking**: Automate Google, GitHub, and Shodan dorking to find vulnerabilities.
- **Web Crawling**: Crawl web pages to collect data.
- **Scanning**: Perform different types of vulnerability scans.
- **SQL Injection**: Execute SQL injection attacks.
- **XSS**: Perform Cross-Site Scripting attacks.
- **WAF Bypassing**: Techniques to bypass Web Application Firewalls.
- **Reporting**: Generate detailed reports of findings.
- **VPN/Proxies Management**: Seamlessly switch between different VPN services and proxies to anonymize your activities.
- **pypy3 Support**: Use pypy3 to speed up the execution of the tool.

## Installation

### Packages

```bash
make
# For reCAPTCHA
sudo apt-get install portaudio19-dev

```
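
The PortAudio dependency exists because the reCAPTCHA handling leans on speech-to-text for audio challenges; below is a rough sketch of that flow, assuming the `SpeechRecognition` package (which pulls in PyAudio) and a hypothetical pre-downloaded challenge file:

```python
# Rough sketch of the reCAPTCHA audio flow that motivates portaudio19-dev.
# "captcha_challenge.wav" is a hypothetical pre-downloaded audio challenge.
import speech_recognition as sr

recognizer = sr.Recognizer()
with sr.AudioFile("captcha_challenge.wav") as source:
    audio = recognizer.record(source)  # read the whole file
print(recognizer.recognize_google(audio))  # best-effort transcription
```
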
### Pre-Commit

```bash
python3 -m pip install pre-commit
pre-commit install
# pre-commit installed at .git/hooks/pre-commit
```

### Classical

```bash
sudo apt-get install python3 python3-dev python3-venv
python3 --version
# Python 3.10.12
```

```bash
python3 -m venv python3-venv
source python3-venv/bin/activate
python3 -m pip install -U pip wheel
python3 -m pip install -r requirements.txt
```

### PyPy

Not ready yet: some libraries segfault under PyPy (urllib3 and cryptography had to be downgraded).

Install PyPy from [here](https://doc.pypy.org/en/latest/install.html).

Packages compatible with PyPy are listed in `requirements_pypy.txt`:

* http://packages.pypy.org/
* https://doc.pypy.org/en/latest/cpython_differences.html

```bash
sudo apt-get install pypy3 pypy3-dev pypy3-venv
# For reCAPTCHA
sudo apt-get install portaudio19-dev
pypy3 --version
# Python 3.9.19 (7.3.16+dfsg-2~ppa1~ubuntu20.04, Apr 26 2024, 13:32:24)
# [PyPy 7.3.16 with GCC 9.4.0]
```

```bash
pypy3 -m venv pypy3-venv
source pypy3-venv/bin/activate
pypy3 -m pip install -U pip wheel
pypy3 -m pip install -r requirements_pypy.txt
```

Update `config.ini`, then run with `pypy3 bounty_drive.py`.

## Usage

```bash
# update configs/config.ini
python3 bounty_drive.py [config_file]
pypy3 bounty_drive.py [config_file]
```

The tool then prompts for the scan parameters interactively, for example:

```bash
Please specify the website extension(eg- .in,.com,.pk) [default: ] ----->
Do you want to restrict search to subdomain present in target.txt ? [default: true (vs false)] -----> true
Please specify the total no. of websites you want [default: 10] ---->
From which Google page you want to start(eg- 1,2,3) [default: 1] ---->
Do you want to do the Google dorking scan phase ? [default: true (vs false)] ---->
Do you want to do the Github dorking scan phase ? [default: true (vs false)] ----> false
Do you want to test for XSS vulnerability ? [default: true (vs false)] ----> true
Do you want to encode XSS payload ? [default: true (vs false)] ----> false
Do you want to fuzz XSS payload ? [default: true (vs false)] ----> true
Do you want to test blind XSS payload ? [default: true (vs false)] ----> false
Do you want to test for SQLi vulnerability ? [default: true (vs false)] ----> false
Extension: , Total Output: 10, Page No: 1, Do Google Dorking: True, Do Github Dorking False
```
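
For reference, a minimal sketch of how an INI file like `configs/config.ini` can be consumed; the section and key names here are hypothetical, check the shipped file for the real ones:

```python
# Minimal sketch of loading the INI config; section/key names are
# hypothetical -- the real ones live in configs/config.ini.
import configparser
import sys

config = configparser.ConfigParser()
config.read(sys.argv[1] if len(sys.argv) > 1 else "configs/config.ini")

do_google_dorking = config.getboolean("Dorking", "do_dorking_google", fallback=True)
total_output = config.getint("Dorking", "total_output", fallback=10)
print(do_google_dorking, total_output)
```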

## VPN/Proxies Management

* NordVPN: Switch between NordVPN servers.
* Proxies: Route your traffic through different proxy lists (e.g. free proxies from free-proxy-list.net); see the sketch below.
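
As a rough illustration of how tasks get anonymized (the real logic lives in `vpn_proxies/proxies_manager.py`), each request is paired with the next proxy from a round-robin cycle; a minimal sketch, assuming plain `host:port` proxy strings:

```python
# Minimal sketch of round-robin proxy rotation; the proxy entries are
# placeholders -- in practice they come from a proxy list or NordVPN.
from itertools import cycle

import requests

proxies = ["127.0.0.1:8080", "127.0.0.1:8081"]
proxy_cycle = cycle(proxies)

for url in ["https://example.com", "https://example.org"]:
    proxy = next(proxy_cycle)  # one proxy per task, as in the attack modules
    response = requests.get(
        url,
        proxies={"http": f"http://{proxy}", "https": f"http://{proxy}"},
        timeout=10,
    )
    print(url, response.status_code)
```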

## Contributing

We welcome contributions from the community. To contribute:

* Fork the repository.
* Create a new branch for your feature or bugfix.
* Commit your changes and push the branch.
* Create a pull request detailing your changes.

## Resources

* https://raw.githubusercontent.com/darklotuskdb/SSTI-XSS-Finder/main/Payloads.txt
* https://github.com/nu11secur1ty/nu11secur1ty/blob/master/kaylogger/nu11secur1ty.py
* https://github.com/anmolksachan/CrossInjector
* https://github.com/Gualty/asqlmap
* https://github.com/bambish/ScanQLi/blob/master/scanqli.py
* https://github.com/Karmaz95/crimson/blob/master/words/exp/special_chars.txt
* https://github.com/hahwul/dalfox
* https://github.com/Raghavd3v/CRLFsuite/blob/main/crlfsuite/db/wafsignatures.json

## TODOs

See the module READMEs for more specific TODOs.

* add a vulnerable WordPress plugin and then dork to find vulnerable WordPress sites
* use singletons for config (see the sketch below)
* create a class for each attack
* change the colors used
* chain proxies (proxy through proxy)
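
For the config-singleton TODO, one possible shape (a sketch, not the current implementation):

```python
# Sketch for the "use singletons for config" TODO: one shared
# ConfigParser instance instead of passing config dicts around.
import configparser


class ConfigSingleton:
    _instance = None

    def __new__(cls, path="configs/config.ini"):
        if cls._instance is None:
            cls._instance = super().__new__(cls)
            cls._instance.parser = configparser.ConfigParser()
            cls._instance.parser.read(path)
        return cls._instance


# Every call returns the same instance, so settings stay consistent.
assert ConfigSingleton() is ConfigSingleton()
```
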
7 changes: 7 additions & 0 deletions bounty_drive/attacks/crawl/README.md
@@ -0,0 +1,7 @@
# Crawl


## Useful links

* https://github.com/0MeMo07/URL-Seeker
* https://github.com/RevoltSecurities/Subdominator
Empty file.
164 changes: 164 additions & 0 deletions bounty_drive/attacks/crawl/crawling.py
@@ -0,0 +1,164 @@
import sys
import threading
import concurrent.futures
from urllib.parse import urlparse
from termcolor import cprint
from tqdm import tqdm  # progress bars over as_completed futures

from attacks.xss.xss_striker import photon_crawler
from reporting.results_manager import (
get_processed_crawled,
save_crawling_query,
crawling_results,
)
from vpn_proxies.proxies_manager import get_proxies_and_cycle
from scraping.web_scraper import scrape_links_from_url


def launch_crawling_attack(config, website_to_test):
    """Scrape and/or photon-crawl each target website, pairing every task
    with a proxy from the rotation, and record discovered forms/DOM URLs."""
    try:
proxies, proxy_cycle = get_proxies_and_cycle(config)

if config["do_web_scrap"]:
# todo MERGE WITH CRAWL
new_urls = []

lock = threading.Lock()

# Now, append a proxy to each task
number_of_worker = len(proxies)
search_tasks_with_proxy = []
for website in website_to_test:
proxy = next(proxy_cycle)
search_tasks_with_proxy.append({"website": website, "proxy": proxy})

with concurrent.futures.ThreadPoolExecutor(
max_workers=number_of_worker
) as executor:
future_to_search = {
executor.submit(
scrape_links_from_url, task["website"], task["proxy"]
): task
for task in search_tasks_with_proxy
}
                for future in tqdm(
                    concurrent.futures.as_completed(future_to_search),
                    desc="Updating links DB for xss website",
unit="site",
total=len(future_to_search),
):
with lock:
                        new_urls_temps = future.result()
new_urls += new_urls_temps

cprint(f"Found {len(new_urls)} new links", color="green", file=sys.stderr)

# crawl the website for more links TODO

website_to_test += new_urls

website_to_test = list(set(website_to_test))
elif config["do_crawl"]:
lock = threading.Lock()
number_of_worker = len(proxies)
search_tasks_with_proxy = []

for website in website_to_test:
cprint(
f"Testing {website} for crawling", color="yellow", file=sys.stderr
)
scheme = urlparse(website).scheme
cprint(
"Target scheme: {}".format(scheme),
color="yellow",
file=sys.stderr,
)
host = urlparse(website).netloc

main_url = scheme + "://" + host

cprint("Target host: {}".format(host), color="yellow", file=sys.stderr)

proxy = next(proxy_cycle)
search_tasks_with_proxy.append({"website": website, "proxy": proxy})

forms = []
domURLs = []
processed_xss_photon_crawl = get_processed_crawled(config)

with concurrent.futures.ThreadPoolExecutor(
max_workers=number_of_worker
) as executor:
future_to_search = {
executor.submit(
photon_crawler,
task["website"],
config,
task["proxy"],
processed_xss_photon_crawl,
): task
for task in search_tasks_with_proxy
}
                for future in tqdm(
                    concurrent.futures.as_completed(future_to_search),
                    desc="Photon crawling links DB for xss website",
unit="site",
total=len(future_to_search),
):
with lock:
                        crawling_result = future.result()
                        seedUrl = future_to_search[future]["website"]

cprint(
f"Forms: {crawling_result[0]}",
color="green",
file=sys.stderr,
)
cprint(
f"DOM URLs: {crawling_result[1]}",
color="green",
file=sys.stderr,
)
forms_temps = list(set(crawling_result[0]))

                        domURLs_temps = list(set(crawling_result[1]))

                        # Pad the shorter list with zeros so forms and DOM URLs
                        # can be consumed pairwise for this seed URL.
                        difference = abs(len(domURLs_temps) - len(forms_temps))

                        if len(domURLs_temps) > len(forms_temps):
                            forms_temps.extend([0] * difference)
                        elif len(forms_temps) > len(domURLs_temps):
                            domURLs_temps.extend([0] * difference)

result = (seedUrl, forms_temps, domURLs_temps)

crawling_results.append((result, config))

domURLs += domURLs_temps
forms += forms_temps
cprint(
f"Total domURLs links: {len(domURLs)}",
color="green",
file=sys.stderr,
)
cprint(
f"Total forms links: {len(forms)}",
color="green",
file=sys.stderr,
)
except KeyboardInterrupt:
cprint(
"Process interrupted by user during crawling attack phase ... Saving results",
"red",
file=sys.stderr,
)
concurrent.futures.thread._threads_queues.clear()
# https://stackoverflow.com/questions/49992329/the-workers-in-threadpoolexecutor-is-not-really-daemon
for result, config in crawling_results:
save_crawling_query(result, config)
# TODO with attacks
        sys.exit(1)
except Exception as e:
cprint(f"Error: {e}", color="red", file=sys.stderr)
18 changes: 18 additions & 0 deletions bounty_drive/attacks/dorks/README.md
@@ -0,0 +1,18 @@
# Dorking

## Useful links

* https://github.com/Ishanoshada/GDorks/blob/main/dorks.txt
* https://github.com/BullsEye0/google_dork_list/tree/master
* https://github.com/Ishanoshada/GDorks/tree/main
* https://github.com/obheda12/GitDorker/blob/master/GitDorker.py
* https://medium.com/@dub-flow/the-easiest-way-to-find-cves-at-the-moment-github-dorks-29d18b0c6900
* https://book.hacktricks.xyz/generic-methodologies-and-resources/external-recon-methodology/github-leaked-secrets
* https://github.com/gwen001/github-search
* https://obheda12.medium.com/gitdorker-a-new-tool-for-manual-github-dorking-and-easy-bug-bounty-wins-92a0a0a6b8d5
* https://github.com/spekulatius/infosec-dorks
* Use the Google Hacking Database (https://www.exploit-db.com/google-hacking-database) for good SQLi dorks.

## TODOs

* implement other search engine queries (https://github.com/epsylon/xsser/blob/master/core/dork.py); see the sketch below
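
For the other-search-engines TODO, a minimal sketch of pushing a dork through DuckDuckGo's HTML endpoint; the endpoint, the `s` offset parameter, and the `result__a` selector are assumptions to verify before relying on them:

```python
# Sketch for the "other search engines" TODO: query DuckDuckGo's HTML
# endpoint with a dork. Endpoint and result markup are assumptions.
import requests
from bs4 import BeautifulSoup


def duckduckgo_dork(dork, pages=1):
    links = []
    for page in range(pages):
        resp = requests.post(
            "https://html.duckduckgo.com/html/",
            data={"q": dork, "s": page * 30},  # "s" = result offset
            headers={"User-Agent": "Mozilla/5.0"},
            timeout=10,
        )
        soup = BeautifulSoup(resp.text, "html.parser")
        links += [a.get("href") for a in soup.select("a.result__a")]
    return links


print(duckduckgo_dork('inurl:"?id=" site:example.com'))
```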
1 change: 0 additions & 1 deletion bounty_drive/attacks/dorks/__init__.py
@@ -1 +0,0 @@
from tqdm import tqdm