diff --git a/INSTALL.md b/INSTALL.md index f8ef76e..d4a1249 100644 --- a/INSTALL.md +++ b/INSTALL.md @@ -1,5 +1,13 @@ # Installation +## Packages + +```bash +# For reCAPTCHA +sudo apt-get install portaudio19-dev + +``` + ## Pre-Commit ```bash diff --git a/README.md b/README.md index 02f071d..2dc94c4 100644 --- a/README.md +++ b/README.md @@ -4,72 +4,118 @@ +## Introduction + +BountyDrive is a comprehensive tool designed for penetration testers and cybersecurity researchers. It integrates various modules for performing attacks, reporting, and managing VPN/proxy settings, making it an indispensable asset for any security professional. + +## Features +- **Automation**: Automate the process of finding vulnerabilities. +- **Dorking**: Automate Google, GitHub, and Shodan dorking to find vulnerabilities. +- **Web Crawling**: Crawl web pages to collect data. +- **Scanning**: Perform different types of vulnerability scans. +- **SQL Injection**: Execute SQL injection attacks. +- **XSS**: Perform Cross-Site Scripting attacks. +- **WAF Bypassing**: Techniques to bypass Web Application Firewalls. +- **Reporting**: Generate detailed reports of findings. +- **VPN/Proxies Management**: Seamlessly switch between different VPN services and proxies to anonymize your activities. +- **pypy3 Support**: Use pypy3 to speed up the execution of the tool. + ## Installation + +### Packages + ```bash -make +# For reCAPTCHA +sudo apt-get install portaudio19-dev ``` -## Usage + +### Pre-Commit + ```bash -python3 py +python3 -m pip install pre-commit +pre-commit install  # installs the hook at .git/hooks/pre-commit ``` +### Classical + ```bash -Please specify the website extension(eg- .in,.com,.pk) [default: ] -----> -Do you want to restrict search to subdomain present in target.txt ? [default: true (vs false)] -----> true -Please specify the total no. of websites you want [default: 10] ----> -From which Google page you want to start(eg- 1,2,3) [default: 1] ----> -Do you want to do the Google dorking scan phase ? [default: true (vs false)] ----> -Do you want to do the Github dorking scan phase ? [default: true (vs false)] ----> false -Do you want to test for XSS vulnerability ? [default: true (vs false)] ----> true -Do you want to encode XSS payload ? [default: true (vs false)] ----> false -Do you want to fuzz XSS payload ? [default: true (vs false)] ----> true -Do you want to test blind XSS payload ? [default: true (vs false)] ----> false -Do you want to test for SQLi vulnerability ? [default: true (vs false)] ----> false -Extension: , Total Output: 10, Page No: 1, Do Google Dorking: True, Do Github Dorking False +sudo apt-get install python3 python3-dev python3-venv +python3 --version +# Python 3.10.12 ``` -## Tips -Use Google hacking database(https://www.exploit-db.com/google-hacking-database) for good sqli dorks. +```bash +python3 -m venv python3-venv +source python3-venv/bin/activate +python3 -m pip install -U pip wheel +python3 -m pip install -r requirements.txt +``` -## Proxies +Update `config.ini` +Run with `python3 bounty_drive.py` -Free proxies from free-proxy-list.net -Updated at 2024-02-18 15:32:02 UTC. +### PyPy -TODO: we should proxy proxy chains +Not ready yet - SEGFAULTs in some libs (urllib3, cryptography downgraded). -## TODO +Install PyPy from [here](https://doc.pypy.org/en/latest/install.html) -- use singletons for config !!!
+Packages compatible with PyPy are listed in `requirements_pypy.txt` +* http://packages.pypy.org/ +* https://doc.pypy.org/en/latest/cpython_differences.html -# HAPPY HUNTING +```bash +sudo apt-get install pypy3 pypy3-dev pypy3-venv +pypy3 --version +# Python 3.9.19 (7.3.16+dfsg-2~ppa1~ubuntu20.04, Apr 26 2024, 13:32:24) +# [PyPy 7.3.16 with GCC 9.4.0] +``` -sudo apt-get install portaudio19-dev +```bash +pypy3 -m venv pypy3-venv +source pypy3-venv/bin/activate +pypy3 -m pip install -U pip wheel +pypy3 -m pip install -r requirements_pypy.txt +``` + +Update `config.ini` + +Run with `pypy3 bounty_drive.py` + + +## Usage + +```bash +# update configs/config.ini +python3 bounty_drive.py [config_file] +pypy3 bounty_drive.py [config_file] +``` + +## VPN/Proxies Management + +* NordVPN: Switch between NordVPN servers. +* Proxies: Use different proxy lists to route your traffic. -# Ressource: -https://raw.githubusercontent.com/darklotuskdb/SSTI-XSS-Finder/main/Payloads.txt -https://github.com/nu11secur1ty/nu11secur1ty/blob/master/kaylogger/nu11secur1ty.py -https://github.com/Ishanoshada/GDorks/blob/main/dorks.txt -https://github.com/BullsEye0/google_dork_list/tree/master -https://github.com/Ishanoshada/GDorks/tree/main -https://github.com/anmolksachan/CrossInjector/tree/main?tab=readme-ov-file -https://github.com/Gualty/asqlmap -https://github.com/bambish/ScanQLi/blob/master/scanqli.py +## Contributing -https://github.com/0MeMo07/URL-Seeker +We welcome contributions from the community. To contribute: -https://github.com/obheda12/GitDorker/blob/master/GitDorker.py -https://medium.com/@dub-flow/the-easiest-way-to-find-cves-at-the-moment-github-dorks-29d18b0c6900 -https://book.hacktricks.xyz/generic-methodologies-and-resources/external-recon-methodology/github-leaked-secrets -https://github.com/gwen001/github-search -https://obheda12.medium.com/gitdorker-a-new-tool-for-manual-github-dorking-and-easy-bug-bounty-wins-92a0a0a6b8d5 -https://github.com/spekulatius/infosec-dorks +* Fork the repository. +* Create a new branch for your feature or bugfix. +* Commit your changes and push the branch. +* Create a pull request detailing your changes. -https://github.com/RevoltSecurities/Subdominator +## Resources -https://github.com/Raghavd3v/CRLFsuite/blob/main/crlfsuite/db/wafsignatures.json +* https://github.com/Karmaz95/crimson/blob/master/words/exp/special_chars.txt +* https://github.com/hahwul/dalfox +* https://github.com/Raghavd3v/CRLFsuite/blob/main/crlfsuite/db/wafsignatures.json -# TODO -add a vulnerable wordpress plugin and then dork to find vulnerable wordpress sites \ No newline at end of file +## TODOs +Also check each module for more specific TODOs +* add a vulnerable wordpress plugin and then dork to find vulnerable wordpress sites +* use singletons for config !!!
+* create class for each attack +* change the color used diff --git a/bounty_drive/attacks/crawl/README.md b/bounty_drive/attacks/crawl/README.md new file mode 100644 index 0000000..dedc330 --- /dev/null +++ b/bounty_drive/attacks/crawl/README.md @@ -0,0 +1,7 @@ +# Crawl + + +## Usefull links + +* https://github.com/0MeMo07/URL-Seeker +* https://github.com/RevoltSecurities/Subdominator diff --git a/bounty_drive/attacks/crawl/__init__.py b/bounty_drive/attacks/crawl/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/bounty_drive/attacks/crawl/crawling.py b/bounty_drive/attacks/crawl/crawling.py new file mode 100644 index 0000000..6bd74a2 --- /dev/null +++ b/bounty_drive/attacks/crawl/crawling.py @@ -0,0 +1,164 @@ +import sys +import threading +import concurrent.futures +from urllib.parse import urlparse +from termcolor import cprint +import tqdm + +from attacks.xss.xss_striker import photon_crawler +from reporting.results_manager import ( + get_processed_crawled, + save_crawling_query, + crawling_results, +) +from vpn_proxies.proxies_manager import get_proxies_and_cycle +from scraping.web_scraper import scrape_links_from_url + + +def launch_crawling_attack(config, website_to_test): + try: + proxies, proxy_cycle = get_proxies_and_cycle(config) + + if config["do_web_scrap"]: + # todo MERGE WITH CRAWL + new_urls = [] + + lock = threading.Lock() + + # Now, append a proxy to each task + number_of_worker = len(proxies) + search_tasks_with_proxy = [] + for website in website_to_test: + proxy = next(proxy_cycle) + search_tasks_with_proxy.append({"website": website, "proxy": proxy}) + + with concurrent.futures.ThreadPoolExecutor( + max_workers=number_of_worker + ) as executor: + future_to_search = { + executor.submit( + scrape_links_from_url, task["website"], task["proxy"] + ): task + for task in search_tasks_with_proxy + } + for website in tqdm( + concurrent.futures.as_completed(future_to_search), + desc=f"Upating links DB for xss website", + unit="site", + total=len(future_to_search), + ): + with lock: + new_urls_temps = website.result() + new_urls += new_urls_temps + + cprint(f"Found {len(new_urls)} new links", color="green", file=sys.stderr) + + # crawl the website for more links TODO + + website_to_test += new_urls + + website_to_test = list(set(website_to_test)) + elif config["do_crawl"]: + lock = threading.Lock() + number_of_worker = len(proxies) + search_tasks_with_proxy = [] + + for website in website_to_test: + cprint( + f"Testing {website} for crawling", color="yellow", file=sys.stderr + ) + scheme = urlparse(website).scheme + cprint( + "Target scheme: {}".format(scheme), + color="yellow", + file=sys.stderr, + ) + host = urlparse(website).netloc + + main_url = scheme + "://" + host + + cprint("Target host: {}".format(host), color="yellow", file=sys.stderr) + + proxy = next(proxy_cycle) + search_tasks_with_proxy.append({"website": website, "proxy": proxy}) + + forms = [] + domURLs = [] + processed_xss_photon_crawl = get_processed_crawled(config) + + with concurrent.futures.ThreadPoolExecutor( + max_workers=number_of_worker + ) as executor: + future_to_search = { + executor.submit( + photon_crawler, + task["website"], + config, + task["proxy"], + processed_xss_photon_crawl, + ): task + for task in search_tasks_with_proxy + } + for website in tqdm( + concurrent.futures.as_completed(future_to_search), + desc=f"Photon Crawling links DB for xss website", + unit="site", + total=len(future_to_search), + ): + with lock: + crawling_result = website.result() + seedUrl = 
website["website"] + + cprint( + f"Forms: {crawling_result[0]}", + color="green", + file=sys.stderr, + ) + cprint( + f"DOM URLs: {crawling_result[1]}", + color="green", + file=sys.stderr, + ) + forms_temps = list(set(crawling_result[0])) + + domURLs_temps = list(set(list(crawling_result[1]))) + + difference = abs(len(domURLs) - len(forms)) + + if len(domURLs_temps) > len(forms_temps): + for i in range(difference): + forms_temps.append(0) + elif len(forms_temps) > len(domURLs_temps): + for i in range(difference): + domURLs_temps.append(0) + + result = (seedUrl, forms_temps, domURLs_temps) + + crawling_results.append((result, config)) + + domURLs += domURLs_temps + forms += forms_temps + cprint( + f"Total domURLs links: {len(domURLs)}", + color="green", + file=sys.stderr, + ) + cprint( + f"Total forms links: {len(forms)}", + color="green", + file=sys.stderr, + ) + except KeyboardInterrupt: + cprint( + "Process interrupted by user during crawling attack phase ... Saving results", + "red", + file=sys.stderr, + ) + concurrent.futures.thread._threads_queues.clear() + # https://stackoverflow.com/questions/49992329/the-workers-in-threadpoolexecutor-is-not-really-daemon + for result, config in crawling_results: + save_crawling_query(result, config) + # TODO with attacks + exit(1) + except Exception as e: + cprint(f"Error: {e}", color="red", file=sys.stderr) diff --git a/bounty_drive/attacks/dorks/README.md b/bounty_drive/attacks/dorks/README.md new file mode 100644 index 0000000..ca5354a --- /dev/null +++ b/bounty_drive/attacks/dorks/README.md @@ -0,0 +1,18 @@ +# Dorking + +## Usefull links + +* https://github.com/Ishanoshada/GDorks/blob/main/dorks.txt +* https://github.com/BullsEye0/google_dork_list/tree/master +* https://github.com/Ishanoshada/GDorks/tree/main +* https://github.com/obheda12/GitDorker/blob/master/GitDorker.py +* https://medium.com/@dub-flow/the-easiest-way-to-find-cves-at-the-moment-github-dorks-29d18b0c6900 +* https://book.hacktricks.xyz/generic-methodologies-and-resources/external-recon-methodology/github-leaked-secrets +* https://github.com/gwen001/github-search +* https://obheda12.medium.com/gitdorker-a-new-tool-for-manual-github-dorking-and-easy-bug-bounty-wins-92a0a0a6b8d5 +* https://github.com/spekulatius/infosec-dorks +* Use Google hacking database(https://www.exploit-db.com/google-hacking-database) for good sqli dorks. 
+ +## TODOs + +* implement other search engine queries (https://github.com/epsylon/xsser/blob/master/core/dork.py) \ No newline at end of file diff --git a/bounty_drive/attacks/dorks/__init__.py b/bounty_drive/attacks/dorks/__init__.py index fae04a2..e69de29 100644 --- a/bounty_drive/attacks/dorks/__init__.py +++ b/bounty_drive/attacks/dorks/__init__.py @@ -1 +0,0 @@ -from tqdm import tqdm diff --git a/bounty_drive/attacks/dorks/google_dorking.py b/bounty_drive/attacks/dorks/search_engine_dorking.py similarity index 98% rename from bounty_drive/attacks/dorks/google_dorking.py rename to bounty_drive/attacks/dorks/search_engine_dorking.py index 47d1b59..55aeff9 100644 --- a/bounty_drive/attacks/dorks/google_dorking.py +++ b/bounty_drive/attacks/dorks/search_engine_dorking.py @@ -488,17 +488,25 @@ def launch_google_dorks_and_search_attack(config, categories): unit="site", ): future.result() + + cprint( + f"Saving dorks - Total number of dorks processed: {len(google_dorking_results)}", + "green", + file=sys.stderr, + ) + for result, config in google_dorking_results: + save_dorking_query(result, config) except KeyboardInterrupt: cprint( "Process interrupted by user during google dorking phase ... Saving results", "red", file=sys.stderr, ) - concurrent.futures.thread._threads_queues.clear() + # concurrent.futures.thread._threads_queues.clear() # https://stackoverflow.com/questions/49992329/the-workers-in-threadpoolexecutor-is-not-really-daemon for result, config in google_dorking_results: save_dorking_query(result, config) - quit() + exit() except Exception as e: cprint(f"Error searching for dorks: {e}", "red", file=sys.stderr) raise e diff --git a/bounty_drive/attacks/scanning/README.md b/bounty_drive/attacks/scanning/README.md new file mode 100644 index 0000000..b75adf0 --- /dev/null +++ b/bounty_drive/attacks/scanning/README.md @@ -0,0 +1,6 @@ +# Scanning + +## Usefull links + +* https://github.com/blacklanternsecurity/bbot +* https://github.com/shmilylty/OneForAll diff --git a/bounty_drive/attacks/scanning/__init__.py b/bounty_drive/attacks/scanning/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/bounty_drive/attacks/scanning/scanning.py b/bounty_drive/attacks/scanning/scanning.py new file mode 100644 index 0000000..1961a3f --- /dev/null +++ b/bounty_drive/attacks/scanning/scanning.py @@ -0,0 +1,9 @@ +web_service_port = [80, 443] + + +def enumerate_subdomains(): + raise NotImplementedError() + + +def is_web_app(url): + raise NotImplementedError() diff --git a/bounty_drive/attacks/sqli/README.md b/bounty_drive/attacks/sqli/README.md new file mode 100644 index 0000000..d6d8c6f --- /dev/null +++ b/bounty_drive/attacks/sqli/README.md @@ -0,0 +1,8 @@ +# SQLi + +## Usefull links + +* https://github.com/anmolksachan/CrossInjector/tree/main?tab=readme-ov-file +* https://github.com/Gualty/asqlmap +* https://github.com/bambish/ScanQLi/blob/master/scanqli.py +* https://github.com/sqlmapproject/sqlmap/blob/6ae0d0f54e37cc8c2d56a4bfa9ec34bd3e72766a/lib/core/common.py#L5053 \ No newline at end of file diff --git a/bounty_drive/attacks/ssti/__init__.py b/bounty_drive/attacks/ssti/__init__.py new file mode 100644 index 0000000..e69de29 diff --git a/bounty_drive/attacks/xss/README.md b/bounty_drive/attacks/xss/README.md new file mode 100644 index 0000000..2a41586 --- /dev/null +++ b/bounty_drive/attacks/xss/README.md @@ -0,0 +1,12 @@ +# XSS + +## usefull links + +* https://raw.githubusercontent.com/darklotuskdb/SSTI-XSS-Finder/main/Payloads.txt +* 
https://github.com/nu11secur1ty/nu11secur1ty/blob/master/kaylogger/nu11secur1ty.py +* https://github.com/s0md3v/XSStrike +* https://github.com/PortSwigger/xss-validator/blob/master/xss-detector/xss.js +* https://github.com/ItsIgnacioPortal/XSStrike-Reborn +* https://github.com/Ekultek/Pybelt +* https://github.com/pwn0sec/PwnXSS +* https://github.com/epsylon/xsser \ No newline at end of file diff --git a/bounty_drive/attacks/xss/xss.py b/bounty_drive/attacks/xss/xss.py index ac32c95..18ba467 100644 --- a/bounty_drive/attacks/xss/xss.py +++ b/bounty_drive/attacks/xss/xss.py @@ -12,10 +12,9 @@ from termcolor import cprint from tqdm import tqdm -from attacks.dorks.google_dorking import get_proxies_and_cycle -from attacks.xss.xss_striker import attacker_crawler, photon_crawler -from reporting.results_manager import save_dorking_query, update_attack_result -from scraping.web_scraper import scrape_links_from_url +from attacks.dorks.search_engine_dorking import get_proxies_and_cycle +from attacks.xss.xss_striker import attacker_crawler +from reporting.results_manager import update_attack_result from vpn_proxies.proxies_manager import prepare_proxies from bypasser.waf_mitigation import waf_detector from utils.app_config import ( @@ -162,165 +161,32 @@ def launch_xss_attack(config, website_to_test): """ if len(website_to_test) > 0: try: - proxies, proxy_cycle = get_proxies_and_cycle(config) - - vuln_path = [] - - if config["do_web_scrap"]: - # todo MERGE WITH CRAWL - new_urls = [] - - lock = threading.Lock() - - # Now, append a proxy to each task - number_of_worker = len(proxies) - search_tasks_with_proxy = [] - for website in website_to_test: - proxy = next(proxy_cycle) - search_tasks_with_proxy.append({"website": website, "proxy": proxy}) - - with concurrent.futures.ThreadPoolExecutor( - max_workers=number_of_worker - ) as executor: - future_to_search = { - executor.submit( - scrape_links_from_url, task["website"], task["proxy"] - ): task - for task in search_tasks_with_proxy - } - for website in tqdm( - concurrent.futures.as_completed(future_to_search), - desc=f"Upating links DB for xss website", - unit="site", - total=len(future_to_search), - ): - with lock: - new_urls_temps = website.result() - new_urls += new_urls_temps - - cprint( - f"Found {len(new_urls)} new links", color="green", file=sys.stderr - ) - - # crawl the website for more links TODO - - website_to_test += new_urls - - website_to_test = list(set(website_to_test)) + number_of_worker = len(proxies) + # TODO: use blind-xss-payload-list.txt + # configure a domain for the attacks if config["fuzz_xss"]: raise NotImplementedError("Fuzzing is not implemented yet") - elif config["do_crawl"]: - lock = threading.Lock() - number_of_worker = len(proxies) - search_tasks_with_proxy = [] - for website in website_to_test: - - cprint( - f"Testing {website} for XSS", color="yellow", file=sys.stderr - ) - scheme = urlparse(website).scheme - cprint( - "Target scheme: {}".format(scheme), - color="yellow", - file=sys.stderr, - ) - host = urlparse(website).netloc - - main_url = scheme + "://" + host - - cprint( - "Target host: {}".format(host), color="yellow", file=sys.stderr - ) - - proxy = next(proxy_cycle) - search_tasks_with_proxy.append({"website": website, "proxy": proxy}) - - forms = [] - domURLs = [] - processed_xss_photon_crawl = get_processed_xss_crawled(config) - - with concurrent.futures.ThreadPoolExecutor( - max_workers=number_of_worker - ) as executor: - future_to_search = { - executor.submit( - photon_crawler, - task["website"], - config, 
- task["proxy"], - processed_xss_photon_crawl, - ): task - for task in search_tasks_with_proxy - } - for website in tqdm( - concurrent.futures.as_completed(future_to_search), - desc=f"Photon Crawling links DB for xss website", - unit="site", - total=len(future_to_search), - ): - with lock: - crawling_result = website.result() - - cprint( - f"Forms: {crawling_result[0]}", - color="green", - file=sys.stderr, - ) - cprint( - f"DOM URLs: {crawling_result[1]}", - color="green", - file=sys.stderr, - ) - forms_temps = list(set(crawling_result[0])) - - domURLs_temps = list(set(list(crawling_result[1]))) - - difference = abs(len(domURLs) - len(forms)) - - if len(domURLs_temps) > len(forms_temps): - for i in range(difference): - forms_temps.append(0) - elif len(forms_temps) > len(domURLs_temps): - for i in range(difference): - domURLs_temps.append(0) - domURLs += domURLs_temps - forms += forms_temps - - # TODO: use blind-xss-payload-list.txt - # configure a domain for the attacks - - blindPayload = "alert(1)" # TODO + else: + blindPayloads = "alert(1)" # TODO encoding = base64 if config["encode_xss"] else False - - cprint( - f"Total domURLs links: {len(domURLs)}", - color="green", - file=sys.stderr, - ) - cprint( - f"Total forms links: {len(forms)}", - color="green", - file=sys.stderr, - ) - with concurrent.futures.ThreadPoolExecutor( max_workers=number_of_worker ) as executor: future_to_search = { executor.submit( attacker_crawler, - scheme, - host, - main_url, + # scheme, + # host, + # main_url, form, - blindPayload, + blindPayloads, encoding, config, - proxy, + next(proxy_cycle), ): form - for form, domURL in zip(forms, domURLs) # TODO use domURL + for form, domURL in zip([], []) # TODO use domURL } for website in tqdm( concurrent.futures.as_completed(future_to_search), @@ -330,65 +196,65 @@ def launch_xss_attack(config, website_to_test): ): website.result() - # lock = threading.Lock() - - # # Now, append a proxy to each task - # number_of_worker = len(proxies) - # search_tasks_with_proxy = [] - # for website in website_to_test: - # total_parsed_targets = [] - # try: - # cprint( - # f"Intializing Payload Generator for url {website}", - # color="yellow", - # file=sys.stderr, - # ) - # parsed_target = generate_xss_urls(website) - # cprint( - # f"Generated {parsed_target[1]} payloads", - # color="yellow", - # file=sys.stderr, - # ) - # for each in parsed_target[0]: - # total_parsed_targets.append(each) - - # cprint( - # f"Total Parsed Targets: {len(total_parsed_targets)}", - # color="yellow", - # file=sys.stderr, - # ) - # for url in total_parsed_targets: - # proxy = next(proxy_cycle) - # search_tasks_with_proxy.append({"website": url, "proxy": proxy}) - # except Exception as e: - # cprint( - # f"Error generating payloads for {website}: {e}", - # "red", - # file=sys.stderr, - # ) - - # with concurrent.futures.ThreadPoolExecutor( - # max_workers=number_of_worker - # ) as executor: - # future_to_search = { - # executor.submit( - # test_xss_target, task["website"], task["proxy"], config - # ): task - # for task in search_tasks_with_proxy - # } - # for website in tqdm( - # concurrent.futures.as_completed(future_to_search), - # desc=f"Testing for XSS", - # unit="site", - # total=len(future_to_search), - # ): - # result, payload_url = website.result() - - # if vuln_path: - # driver.execute_script("window.open('');") - # driver.switch_to.window(driver.window_handles[-1]) - # for vulnerable_url in vuln_path: - # driver.get(vulnerable_url) + # lock = threading.Lock() + + # # Now, append a proxy to each task 
+ # number_of_worker = len(proxies) + # search_tasks_with_proxy = [] + # for website in website_to_test: + # total_parsed_targets = [] + # try: + # cprint( + # f"Intializing Payload Generator for url {website}", + # color="yellow", + # file=sys.stderr, + # ) + # parsed_target = generate_xss_urls(website) + # cprint( + # f"Generated {parsed_target[1]} payloads", + # color="yellow", + # file=sys.stderr, + # ) + # for each in parsed_target[0]: + # total_parsed_targets.append(each) + + # cprint( + # f"Total Parsed Targets: {len(total_parsed_targets)}", + # color="yellow", + # file=sys.stderr, + # ) + # for url in total_parsed_targets: + # proxy = next(proxy_cycle) + # search_tasks_with_proxy.append({"website": url, "proxy": proxy}) + # except Exception as e: + # cprint( + # f"Error generating payloads for {website}: {e}", + # "red", + # file=sys.stderr, + # ) + + # with concurrent.futures.ThreadPoolExecutor( + # max_workers=number_of_worker + # ) as executor: + # future_to_search = { + # executor.submit( + # test_xss_target, task["website"], task["proxy"], config + # ): task + # for task in search_tasks_with_proxy + # } + # for website in tqdm( + # concurrent.futures.as_completed(future_to_search), + # desc=f"Testing for XSS", + # unit="site", + # total=len(future_to_search), + # ): + # result, payload_url = website.result() + + # if vuln_path: + # driver.execute_script("window.open('');") + # driver.switch_to.window(driver.window_handles[-1]) + # for vulnerable_url in vuln_path: + # driver.get(vulnerable_url) except KeyboardInterrupt: cprint( "Process interrupted by user during xss attack phase ... Saving results (TODO)", diff --git a/bounty_drive/attacks/xss/xss_striker.py b/bounty_drive/attacks/xss/xss_striker.py index 229d490..e0b7ee7 100644 --- a/bounty_drive/attacks/xss/xss_striker.py +++ b/bounty_drive/attacks/xss/xss_striker.py @@ -1,5 +1,6 @@ import concurrent.futures import copy +import os import random import re import sys @@ -7,8 +8,8 @@ from urllib.parse import unquote, urlparse import bs4 from termcolor import cprint -from tqdm import tqdm from fuzzywuzzy import fuzz +from bypasser.waf_mitigation import waf_detector from reporting.results_manager import write_xss_vectors from vpn_proxies.proxies_manager import prepare_proxies @@ -259,6 +260,7 @@ checkedForms = {} # Forms that have been checked + # Load proxies from file def load_xss_payload(): """_summary_ @@ -480,6 +482,7 @@ def d(string): def photon_crawler(seedUrl, config, proxy, processed_xss_photon_crawl): """Crawls a website to find forms and links for XSS vulnerability testing. + # TODO update to crawl also for sqli Args: seedUrl (str): The starting URL for crawling. @@ -533,6 +536,7 @@ def recursive_crawl(target): "DNT": "1", "Upgrade-Insecure-Requests": "1", } + # TODO add session proxies = prepare_proxies(proxy, config) cprint( @@ -1319,7 +1323,7 @@ def generator(occurences, response): def attacker_crawler( - scheme, host, main_url, form, blindPayload, encoding, config, proxy + scheme, host, main_url, form, blindPayloads, encoding, config, proxy ): """Attacks a web application by crawling and testing XSS vulnerabilities. @@ -1328,7 +1332,7 @@ def attacker_crawler( host (str): The host of the target web application. main_url (str): The main URL of the target web application. form (dict): The form data of the target web application. - blindPayload (str): The blind payload to test for blind XSS vulnerabilities. + blindPayloads (str): The blind payload to test for blind XSS vulnerabilities. 
encoding (str): The encoding to use for the payloads. config (dict): The configuration settings for the attack. proxy (str): The proxy server to use for the attack. @@ -1359,6 +1363,20 @@ def attacker_crawler( paramsCopy = copy.deepcopy(paramData) paramsCopy[paramName] = xsschecker + is_waffed = waf_detector(proxy, url, config) + if is_waffed: + cprint( + "WAF detected: %s%s%s" % (green, is_waffed, end), + "red", + file=sys.stderr, + ) + else: + cprint( + "WAF Status: %sOffline%s" % (green, end), + "green", + file=sys.stderr, + ) + headers = { "User-Agent": random.choice(USER_AGENTS), "X-HackerOne-Research": "elniak", @@ -1388,6 +1406,21 @@ def attacker_crawler( ) occurences = html_xss_parser(response, encoding) + cprint( + "Scan occurences: {}".format(occurences), + "green", + file=sys.stderr, + ) + if not occurences: + cprint("No reflection found", "yellow", file=sys.stderr) + continue + else: + cprint( + "Reflections found: %i" % len(occurences), + "green", + file=sys.stderr, + ) + cprint("Analysing reflections:", "green", file=sys.stderr) positions = occurences.keys() occurences = check_filter_efficiency( config, @@ -1398,44 +1431,118 @@ def attacker_crawler( occurences, encoding, ) + + cprint( + "Scan efficiencies: {}".format(occurences), + "green", + file=sys.stderr, + ) + cprint("Generating payloads:", "green", file=sys.stderr) + vectors = generator(occurences, response.text) + write_xss_vectors( + vectors, + os.path.join( + config["experiment_folder"], "xss_vectors.txt" + ), + ) if vectors: for confidence, vects in vectors.items(): try: payload = list(vects)[0] cprint( - "[Vulnerable webpage] - %s%s%s" + "[Potential Vulnerable Webpage] - %s%s%s" % (green, url, end), color="green", file=sys.stderr, ) cprint( - "Vector for %s%s%s: %s" + "\tVector for %s%s%s: %s" % (green, paramName, end, payload), color="green", file=sys.stderr, ) - # TODO add to report more cleanly - write_xss_vectors( - vectors, "outputs/xss_vectors.txt" + cprint( + "\tConfidence: %s%s%s" + % (green, confidence, end), + color="green", + file=sys.stderr, ) - # TODO open selenium and perform attack to take screenshot + # Only test most confident payloads ? 
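+                            # Note (assumption): `vectors` returned by generator() appears to map
+                            # a confidence score to a set of candidate payloads; as written, only
+                            # the first vector of the first non-empty set is logged before `break`.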
+ # TODO perform the attacks break except IndexError: pass - if config["blind_xss"] and blindPayload: - paramsCopy[paramName] = blindPayload - cprint( - f"Testing attack for GET with blind payload - Session (n° 0): {url} \n\t - parameters {paramsCopy} \n\t - headers {headers} \n\t - xss - with proxy {proxies} ...", - "yellow", - file=sys.stderr, - ) - proxies = prepare_proxies(proxy, config) - response = start_request( - proxies=proxies, - config=config, - base_url=url, - params=paramsCopy, - headers=headers, - GET=GET, - ) + if config["blind_xss"] and blindPayloads: + for blindPayload in blindPayloads: + paramsCopy[paramName] = blindPayload + cprint( + f"Testing attack for GET with blind payload - Session (n° 0): {url} \n\t - parameters {paramsCopy} \n\t - headers {headers} \n\t - xss - with proxy {proxies} ...", + "yellow", + file=sys.stderr, + ) + proxies = prepare_proxies(proxy, config) + response = start_request( + proxies=proxies, + config=config, + base_url=url, + params=paramsCopy, + headers=headers, + GET=GET, + ) + + +# def xss_attack( +# config, +# proxy, +# vects, +# url, +# params, +# GET, +# environment, +# positions, +# encoding, +# skip=False, +# minEfficiency=95, +# confidence=10, +# ): +# for vec in vects: +# if config["update_path"]: +# vect = vect.replace("/", "%2F") +# loggerVector = vect +# progress += 1 +# logger.run("Progress: %i/%i\r" % (progress, total)) +# if not GET: +# vect = unquote(vect) + +# efficiencies = checker( +# config, +# proxy, +# url, +# params, +# GET, +# environment, +# positions, +# encoding, +# ) +# if not efficiencies: +# for i in range(len(occurences)): +# efficiencies.append(0) + +# bestEfficiency = max(efficiencies) +# if bestEfficiency == 100 or (vect[0] == "\\" and bestEfficiency >= 95): +# logger.red_line() +# logger.good("Payload: %s" % loggerVector) +# logger.info("Efficiency: %i" % bestEfficiency) +# logger.info("Confidence: %i" % confidence) +# if not skip: +# choice = input( +# "%s Would you like to continue scanning? 
[y/N] " % que +# ).lower() +# if skip or choice != "y": +# return target, loggerVector +# elif bestEfficiency > minEfficiency: +# logger.red_line() +# logger.good("Payload: %s" % loggerVector) +# logger.info("Efficiency: %i" % bestEfficiency) +# logger.info("Confidence: %i" % confidence) diff --git a/bounty_drive/bounty_drive.py b/bounty_drive/bounty_drive.py index 7b9727b..fa046d5 100755 --- a/bounty_drive/bounty_drive.py +++ b/bounty_drive/bounty_drive.py @@ -17,7 +17,7 @@ from tqdm import tqdm -from attacks.dorks.google_dorking import ( +from attacks.dorks.search_engine_dorking import ( google_search_with_proxy, launch_google_dorks_and_search_attack, ) @@ -26,6 +26,7 @@ from reporting.results_manager import ( get_last_processed_ids, + get_links, get_processed_dorks, get_xss_links, ) @@ -34,6 +35,8 @@ from attacks.xss.xss import launch_xss_attack +from attacks.crawl.crawling import launch_crawling_attack + from attacks.sqli.sqli_scan_config import * from attacks.sqli.sqli import launch_sqli_attack @@ -63,7 +66,6 @@ def read_config(file_path): "subdomain": config["Settings"].getboolean("subdomain"), "do_web_scrap": config["Settings"].getboolean("do_web_scrap"), "target_file": config["Settings"].get("target_file"), - "experiment_file_path": config["Settings"].get("experiment_file_path"), "max_thread": config["Settings"].getint("max_thread", 30), "logging": config["Settings"].get("logging", "DEBUG"), "runtime_save": config["Settings"].getboolean("runtime_save"), @@ -155,7 +157,7 @@ def get_user_input(config_file="configs/config.ini"): categories = [] # Define headers based on enabled parameters - setup_csv(config, categories) + setup_experiment_folder(config, categories) cprint( f"-Extension: {config['extension']}\n-Total Output: {config['total_output']}\n-Page No: {config['page_no']}\n-Do Google Dorking: {config['do_dorking_google']}\n-Do Github Dorking {config['do_dorking_github']}\n-Do XSS: {config['do_xss']}\n-Do SQLi: {config['do_sqli']},\n Domain: {config['subdomain']}\n-Use Proxy: {config['use_proxy']}", @@ -200,9 +202,7 @@ def get_user_input(config_file="configs/config.ini"): for domain in subdomain_list: processed = False for category in categories: - with open( - config["experiment_file_path"], mode="r", newline="" - ) as file: + with open(config["dorking_csv"], mode="r", newline="") as file: reader = csv.DictReader(file) for row in reader: if domain in row["dork"]: @@ -265,7 +265,18 @@ def get_user_input(config_file="configs/config.ini"): return config, last_dork_id, last_link_id, last_attack_id, categories -def setup_csv(config, categories): +def setup_experiment_folder(config, categories): + folder_name = os.path.join( + "outputs/reports/", config["target_file"].split("_")[-1].replace(".txt", "") + ) + config["experiment_folder"] = folder_name + if not os.path.exists(folder_name): + cprint(f"Creating folder {folder_name}", "yellow", file=sys.stderr) + os.makedirs(folder_name) + setup_csv(config, categories, folder_name) + + +def setup_csv(config, categories, folder_name): """Set up the CSV file for storing experiment results. This function creates the necessary CSV file and writes the headers based on the provided configuration. 
@@ -285,6 +296,9 @@ def setup_csv(config, categories): "success", "payload", ] + config["dorking_csv"] = os.path.join( + folder_name, folder_name.split("/")[-1] + "_dorking.csv" + ) if config["do_dorking_github"]: csv_headers.append("github_success") if config["do_sqli"]: @@ -297,7 +311,7 @@ def setup_csv(config, categories): "success", "payload", ] - sqli_csv = config["experiment_file_path"].replace(".csv", "_sqli.csv") + sqli_csv = os.path.join(folder_name, folder_name.split("/")[-1] + "_sqli.csv") config["sqli_csv"] = sqli_csv if not os.path.exists(sqli_csv) or os.path.getsize(sqli_csv) == 0: with open(sqli_csv, mode="a", newline="") as file: @@ -321,7 +335,7 @@ def setup_csv(config, categories): "is_unknown", "already_attacked", ] - xss_csv = config["experiment_file_path"].replace(".csv", "_xss.csv") + xss_csv = os.path.join(folder_name, folder_name.split("/")[-1] + "_xss.csv") config["xss_csv"] = xss_csv if not os.path.exists(xss_csv) or os.path.getsize(xss_csv) == 0: with open(xss_csv, mode="a", newline="") as file: @@ -330,12 +344,28 @@ def setup_csv(config, categories): csv_headers.append("xss_success") categories.append("xss") + if config["do_crawl"]: + crawl_csv_headers = [ + "crawl_id", + "seedUrl", + "success", + "doms", + "forms", + ] + crawl_csv = os.path.join(folder_name, folder_name.split("/")[-1] + "_crawl.csv") + config["crawl_csv"] = crawl_csv + if not os.path.exists(crawl_csv) or os.path.getsize(crawl_csv) == 0: + with open(crawl_csv, mode="a", newline="") as file: + writer = csv.writer(file) + writer.writerow(crawl_csv_headers) + csv_headers.append("crawl_success") + categories.append("crawl") if ( - not os.path.exists(config["experiment_file_path"]) - or os.path.getsize(config["experiment_file_path"]) == 0 + not os.path.exists(config["dorking_csv"]) + or os.path.getsize(config["dorking_csv"]) == 0 ): - with open(config["experiment_file_path"], mode="a", newline="") as file: + with open(config["dorking_csv"], mode="a", newline="") as file: writer = csv.writer(file) writer.writerow(csv_headers) @@ -390,6 +420,21 @@ def setup_csv(config, categories): raise NotImplementedError("Shodan dorking scan phase not implemented yet") launch_shodan_dorks_and_search_attack(config, categories) + if config["do_crawl"]: + website_to_test = get_links(config) + cprint( + "\nTesting websites for XSS vulnerability...\n", + "yellow", + file=sys.stderr, + ) + if not website_to_test: + cprint( + "No websites found matching the dorks. 
Please adjust your search criteria.", + "red", + file=sys.stderr, + ) + launch_crawling_attack(config, website_to_test) + if config["do_xss"]: website_to_test = get_xss_links(config) cprint( diff --git a/bounty_drive/bypasser/waf_mitigation.py b/bounty_drive/bypasser/waf_mitigation.py index 72b3e36..1024b72 100644 --- a/bounty_drive/bypasser/waf_mitigation.py +++ b/bounty_drive/bypasser/waf_mitigation.py @@ -7,10 +7,12 @@ import json import random import re +import sys from urllib.parse import urlparse import eventlet, requests from termcolor import cprint +from vpn_proxies.proxies_manager import prepare_proxies from utils.app_config import USER_AGENTS from requester.request_manager import start_request @@ -40,7 +42,16 @@ def waf_detector(proxies, url, config, mode="xss"): headers = { "User-Agent": random.choice(USER_AGENTS), "X-HackerOne-Research": "elniak", + "Accept": "text/html,application/xhtml+xml,application/xml;q=0.9,*/*;q=0.8", + "Accept-Language": "en-US,en;q=0.5", + "Accept-Encoding": "gzip,deflate", + "accept-language": "en-US,en;q=0.9", + "cache-control": "max-age=0", + "Connection": "close", + "DNT": "1", + "Upgrade-Insecure-Requests": "1", } + proxies = prepare_proxies(proxies, config) # Opens the noise injected payload response = start_request( proxies=proxies, @@ -52,12 +63,13 @@ def waf_detector(proxies, url, config, mode="xss"): else False, GET=True, config=config, + bypassed_403=True, ) page = response.text code = str(response.status_code) headers = str(response.headers) - cprint("Waf Detector code: {}".format(code)) - cprint("Waf Detector headers:", response.headers) + cprint("Waf Detector code: {}".format(code), "blue", file=sys.stderr) + cprint("Waf Detector headers:", response.headers, "blue", file=sys.stderr) waf_signatures_files = glob.glob("bypasser/waf_signature/*.json", recursive=True) bestMatch = [0, None] @@ -92,49 +104,49 @@ def waf_detector(proxies, url, config, mode="xss"): return None -def heuristic_scanner( - url, - payload, - method, - cookie, - headers, - timeout, - ssl, - data, - verbose, - silent, - stable, - delay, -): - """ - A basic scan to check if the URL is vulnerable or not - """ - url = url.strip() - scheme, host = urlparse(url).scheme, urlparse(url).netloc - url = scheme + "://" + host - if not url.endswith("/"): - url = url + "/" - final_url = url + payload - response = start_request.do( - final_url, - method, - cookie, - headers, - timeout, - ssl, - data, - verbose, - silent, - stable, - delay, - ) - try: - code, rheaders = response[1], str(response[2]) - if not int(code) >= 400: - if "nefcore" and "crlfsuite" in rheaders: - heuristic_result.add(final_url) - except TypeError: - pass +# def heuristic_scanner( +# url, +# payload, +# method, +# cookie, +# headers, +# timeout, +# ssl, +# data, +# verbose, +# silent, +# stable, +# delay, +# ): +# """ +# A basic scan to check if the URL is vulnerable or not +# """ +# url = url.strip() +# scheme, host = urlparse(url).scheme, urlparse(url).netloc +# url = scheme + "://" + host +# if not url.endswith("/"): +# url = url + "/" +# final_url = url + payload +# response = start_request.do( +# final_url, +# method, +# cookie, +# headers, +# timeout, +# ssl, +# data, +# verbose, +# silent, +# stable, +# delay, +# ) +# try: +# code, rheaders = response[1], str(response[2]) +# if not int(code) >= 400: +# if "nefcore" and "crlfsuite" in rheaders: +# heuristic_result.add(final_url) +# except TypeError: +# pass # https://github.com/MichaelStott/CRLF-Injection-Scanner/blob/master/scanner.py#L28 diff --git 
a/bounty_drive/configs/config.ini b/bounty_drive/configs/config.ini index 721d5ef..4e73714 100644 --- a/bounty_drive/configs/config.ini +++ b/bounty_drive/configs/config.ini @@ -4,7 +4,6 @@ subdomain = true do_web_scap = true target_file = configs/target_rei.txt target_login = [] -experiment_file_path = outputs/reports/experiment_results_rei.csv logging=DEBUG max_thread = 30 runtime_save = true @@ -41,6 +40,7 @@ dom_xss = true do_crawl = true skip_dom = false level = 1 +update_path = false [SQLi] do_sqli = false diff --git a/bounty_drive/reporting/results_manager.py b/bounty_drive/reporting/results_manager.py index bcd9af0..6391300 100644 --- a/bounty_drive/reporting/results_manager.py +++ b/bounty_drive/reporting/results_manager.py @@ -7,10 +7,12 @@ import threading google_dorking_results = [] +crawling_results = [] xss_attack_results = [] LOCKS = { - "experiment": threading.Lock(), + "dorking": threading.Lock(), + "crawl": threading.Lock(), "sqli": threading.Lock(), "xss": threading.Lock(), } @@ -38,8 +40,8 @@ def get_processed_dorks(settings): """ processed_dorks = set() - if os.path.exists(settings["experiment_file_path"]): - with open(settings["experiment_file_path"], mode="r", newline="") as file: + if os.path.exists(settings["dorking_csv"]): + with open(settings["dorking_csv"], mode="r", newline="") as file: reader = csv.DictReader(file) for row in reader: processed_dorks.add(row["dork"]) @@ -47,15 +49,15 @@ def get_processed_dorks(settings): return processed_dorks -def get_processed_xss_crawled(settings): +def get_processed_crawled(settings): """ TODO: Implement this function Reads the experiment CSV file to get the list of processed dorks. """ processed_dorks = set() - if os.path.exists(settings["xss_csv"]): - with open(settings["xss_csv"], mode="r", newline="") as file: + if os.path.exists(settings["crawl_csv"]): + with open(settings["crawl_csv"], mode="r", newline="") as file: reader = csv.DictReader(file) for row in reader: processed_dorks.add(row["seedUrl"]) @@ -63,6 +65,20 @@ def get_processed_xss_crawled(settings): return processed_dorks +def get_processed_crawled_form_dom(settings): + """ + TODO: Implement this function + Reads the experiment CSV file to get the list of processed dorks. + """ + if os.path.exists(settings["crawl_csv"]): + with open(settings["crawl_csv"], mode="r", newline="") as file: + reader = csv.DictReader(file) + for row in reader: + processed_dorks.add((row["seedUrl"])) + + return processed_dorks + + def get_processed_xss(settings): """ Reads the experiment CSV file to get the list of processed dorks. @@ -93,14 +109,27 @@ def get_attacked_xss(settings): return processed_dorks +def get_links(settings): + links = set() + if os.path.exists(settings["dorking_csv"]): + with open(settings["dorking_csv"], mode="r", newline="") as file: + reader = csv.DictReader(file) + for row in reader: + if row["category"] == "xss" and settings["do_xss"]: + links.add(row["url"]) + elif row["category"] == "sqli" and settings["do_sqli"]: + links.add(row["url"]) + return links + + def get_xss_links(settings): """ Reads the experiment CSV file to get the list of XSS-related links. 
""" xss_links = set() - if os.path.exists(settings["experiment_file_path"]): - with open(settings["experiment_file_path"], mode="r", newline="") as file: + if os.path.exists(settings["dorking_csv"]): + with open(settings["dorking_csv"], mode="r", newline="") as file: reader = csv.DictReader(file) for row in reader: if row["category"] == "xss": @@ -117,8 +146,8 @@ def get_last_processed_ids(settings): last_link_id = 0 last_attack_id = 0 - if os.path.exists(settings["experiment_file_path"]): - with open(settings["experiment_file_path"], mode="r", newline="") as file: + if os.path.exists(settings["dorking_csv"]): + with open(settings["dorking_csv"], mode="r", newline="") as file: reader = csv.DictReader(file) for row in reader: last_dork_id = int(row["dork_id"]) @@ -128,14 +157,28 @@ def get_last_processed_ids(settings): return last_dork_id, last_link_id, last_attack_id +def get_last_processed_crawl_ids(settings): + """ + Get the last processed dork_id, link_id, and attack_id from the CSV file. + """ + last_crawl_id = 0 + + if os.path.exists(settings["crawl_csv"]): + with open(settings["crawl_csv"], mode="r", newline="") as file: + reader = csv.DictReader(file) + for row in reader: + last_crawl_id = int(row["crawl_id"]) + return last_crawl_id + + # Thread-safe addition to results lists def save_dorking_query(result, settings): """ Safely adds results to the single experiment CSV file with tracking IDs. """ dork_id, category, urls, dork = result - with LOCKS["experiment"]: - with open(settings["experiment_file_path"], mode="a", newline="") as file: + with LOCKS["dorking"]: + with open(settings["dorking_csv"], mode="a", newline="") as file: writer = csv.writer(file) _, link_id, last_attack_id = get_last_processed_ids(settings) link_id += 1 # Increment link_id for next link @@ -160,12 +203,12 @@ def save_dorking_query(result, settings): "yes", "", ] # Success and payload columns are initially empty - if settings["do_dorking_github"] and category == "github": - row.append("no") - if settings["do_sqli"] and category == "sqli": - row.append("no") - if settings["do_xss"] and category == "xss": - row.append("no") + # if settings["do_dorking_github"] and category == "github": + # row.append("no") + # if settings["do_sqli"] and category == "sqli": + # row.append("no") + # if settings["do_xss"] and category == "xss": + # row.append("no") writer.writerow(row) cprint( f"Added {url} to experiment list under category {category}", @@ -179,6 +222,7 @@ def save_dorking_query(result, settings): "red", file=sys.stderr, ) + link_id += 1 # Increment link_id for next link else: # Write a row indicating no URLs found for this dork row = [ @@ -191,19 +235,63 @@ def save_dorking_query(result, settings): "no", "", ] # No URLs found - if settings["do_dorking_github"]: - row.append("no") - if settings["do_sqli"]: - row.append("no") - if settings["do_xss"]: - row.append("no") + # if settings["do_dorking_github"]: + # row.append("no") + # if settings["do_sqli"]: + # row.append("no") + # if settings["do_xss"]: + # row.append("no") writer.writerow(row) cprint(f"No URLs found for {category} dorks...", "red", file=sys.stderr) - if settings["do_xss"] and category == "xss": - update_xss_csv(dork_id, link_id, last_attack_id, urls, dork, settings) - if settings["do_sqli"] and category == "sqli": - update_sqli_csv(dork_id, link_id, last_attack_id, urls, dork, settings) + # if settings["do_xss"] and category == "xss": + # update_xss_csv(dork_id, link_id, last_attack_id, urls, dork, settings) + # if settings["do_sqli"] and category 
== "sqli": + # update_sqli_csv(dork_id, link_id, last_attack_id, urls, dork, settings) + + +def save_crawling_query(result, settings): + """ + Safely adds results to the single experiment CSV file with tracking IDs. + """ + seedUrl, forms_temps, domURLs_temps = result + with LOCKS["crawl"]: + with open(settings["crawl_csv"], mode="a", newline="") as file: + writer = csv.writer(file) + crawl_id = get_last_processed_crawl_ids(settings) + crawl_id += 1 # Increment link_id for next link + if seedUrl: + cprint( + f"Adding {len(seedUrl)} URLs to experiment list...", + "blue", + file=sys.stderr, + ) + row = [ + crawl_id, + seedUrl, + "yes", + domURLs_temps, + forms_temps, + ] # Success and payload columns are initially empty + writer.writerow(row) + cprint( + f"Added {domURLs_temps} & {forms_temps} to experiment list under category crawl DOM", + "blue", + file=sys.stderr, + ) + else: + # Write a row indicating no URLs found for this dork + row = [ + crawl_id, + seedUrl, + "no", + "no", + "no", + ] # No URLs found + writer.writerow(row) + cprint( + f"No URLs found for {seedUrl} crawling...", "red", file=sys.stderr + ) def update_xss_csv(dork_id, link_id, attack_id, urls, dork, settings): @@ -266,8 +354,8 @@ def update_attack_result( if settings["do_xss"]: csv_headers.append("xss_success") - with LOCKS["experiment"]: - with open(settings["experiment_file_path"], mode="r", newline="") as file: + with LOCKS["dorking"]: + with open(settings["dorking_csv"], mode="r", newline="") as file: reader = csv.DictReader(file) for row in reader: if ( @@ -285,7 +373,7 @@ def update_attack_result( row["xss_success"] = "yes" if success else "no" rows.append(row) - with open(settings["experiment_file_path"], mode="w", newline="") as file: + with open(settings["dorking_csv"], mode="w", newline="") as file: writer = csv.DictWriter(file, fieldnames=csv_headers) writer.writeheader() writer.writerows(rows) diff --git a/bounty_drive/requester/request_manager.py b/bounty_drive/requester/request_manager.py index 388becd..f332718 100644 --- a/bounty_drive/requester/request_manager.py +++ b/bounty_drive/requester/request_manager.py @@ -294,6 +294,7 @@ def start_request( secured=False, cookies=None, session=None, + bypassed_403=False, ): """ Send a HTTP request to the specified URL. @@ -371,12 +372,33 @@ def start_request( time.sleep(retry_after) elif response.status_code == 403: # TODO with headers_403_bypass() - cprint( - "WAF is dropping suspicious requests. Scanning will continue after 10 minutes.", - color="red", - file=sys.stderr, - ) - time.sleep(config["waf_delay"]) + if not bypassed_403: + cprint( + "403 Forbidden - Trying to bypass ...", + "yellow", + file=sys.stderr, + ) + delay = random.uniform( + config["current_delay"] - 5, config["current_delay"] + 5 + ) + time.sleep(delay) # Wait before retrying + response = start_request( + proxies=proxies, + base_url=base_url, + params=params, + headers=headers_403_bypass(), + secured=secured, + GET=GET, + config=config, + bypassed_403=True, + ) + else: + cprint( + "WAF is dropping suspicious requests. Scanning will continue after 10 minutes.", + color="red", + file=sys.stderr, + ) + time.sleep(config["waf_delay"]) else: cprint( f"Error in request ... 
- status code = {response.status_code}", diff --git a/bounty_drive/utils/app_config.py b/bounty_drive/utils/app_config.py index b1c53c0..4c59938 100644 --- a/bounty_drive/utils/app_config.py +++ b/bounty_drive/utils/app_config.py @@ -2,6 +2,25 @@ # Global variables ######################################################################################### +from termcolor import cprint +import urllib3 + +# TODO +# def integrity_check(url=MD5_CHECKSUM_URL): +# """ Check the integrity of the application """ +# if open("{}/docs/checksum.md5".format(PATH)).read() == urllib3.urlopen(url).read(): +# pass +# else: +# checksum_fail = "MD5 sums did not match from origin master, " +# checksum_fail += "integrity check has failed, this could be because " +# checksum_fail += "there is a new version available." +# cprint(checksum_fail) +# update = prompt("Would you like to update to the latest version[y/N]: ") +# if update.upper().startswith("Y"): +# update_pybelt() +# else: +# pass + USER_AGENTS = [ "Mozilla/5.0 (Windows NT 10.0; Win64; x64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/58.0.3029.110 Safari/537.3", "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/14.0 Safari/605.1.15",