Automated Scanners

- Nessus

https://limberduck.github.io/nessus-cheat-sheet/nessus-cheat-sheet.pdf
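Nessus can also be driven headlessly through its REST API (it listens on port 8834). A minimal sketch, assuming API keys generated in the web UI and an already configured scan; the scan id 42 and the key values are placeholders:

curl -k -H "X-ApiKeys: accessKey=ACCESS_KEY; secretKey=SECRET_KEY" -X POST https://localhost:8834/scans/42/launch

curl -k -H "X-ApiKeys: accessKey=ACCESS_KEY; secretKey=SECRET_KEY" https://localhost:8834/scans/42

The first call launches the scan; the second polls its status.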

- Burp Suite Pro Website Scanner

- Burp Suite Pro Automatic Crawling

Target --> Site map --> Right-click --> Spider this host (in Burp 2.x the Spider was replaced by the crawler: Right-click --> Scan and select the crawl option)

This can also be done manually: go to Target --> Site map and browse recursively to visit the identified resources that have not yet been requested (they are shown grayed out).

Pay special attention to client-side objects (Java applets, Flash, Silverlight, ...), functionality implemented in JavaScript, and functionality protected by CAPTCHAs or other mechanisms that prevent automation.

Interesting add-ons to complement scans (other, more specific ones are described in their own sections):

- Sumrecon

https://github.com/Gr1mmie/sumrecon
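An assumed invocation (the script name and the bare domain argument follow the repo's README and may differ between forks):

git clone https://github.com/Gr1mmie/sumrecon

chmod +x sumrecon/sumrecon.sh && ./sumrecon/sumrecon.sh example.com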

- Crawleet

python crawleet.py -u <URL> -b -d 3 -e jpg,png,css -f -m -s -x php,txt -y --threads 20

- autorecon

https://github.com/Tib3rius/AutoRecon
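A basic run against a single target; -o (a documented option) sets the output directory, which otherwise defaults to results/:

autorecon 10.10.10.10 -o results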

- nuclei

https://github.com/projectdiscovery/nuclei

nuclei -u https://example.com

nuclei -l list-of-hosts.txt
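Keep the templates current before a run, and narrow scans to a template subset or severity when needed; both flags below are standard nuclei options:

nuclei -update-templates

nuclei -u https://example.com -t cves/ -severity critical,high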

For a list of parameters previously obtained with gau or katana (-c sets template concurrency, -rl caps requests per second and -es info drops informational findings):

nuclei -list params.txt -c 70 -rl 200 -fhr -lfa -o nuclei-results.txt -es info
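One way to build such a params.txt, assuming gau is installed (the grep for '=' is a simple heuristic to keep only URLs carrying query strings):

gau example.com | grep '=' | sort -u > params.txt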

To fuzz for query parameters:

git clone https://github.com/projectdiscovery/fuzzing-templates.git

katana -u https://example.com -f qurl > fuzz_endpoints.txt

paramspider -d example.com --subs >> fuzz_endpoints.txt

nuclei -t fuzzing-templates -list fuzz_endpoints.txt
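Note that the fuzzing-templates repository has since been archived and merged into the main nuclei-templates project; on recent nuclei v3 builds the roughly equivalent run is:

nuclei -list fuzz_endpoints.txt -dast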

To obtain HTTP exposures and misconfigurations:

nuclei -nh --list subdomains_alive.txt -t http/exposures/configs -json-export output.json -markdown-export nuclei_report/

nuclei -nh --list subdomains_alive.txt -t http -json-export nuclei-output.json -markdown-export nuclei_report/
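The JSON export can then be triaged with jq. A sketch assuming the default export format, a JSON array whose entries carry template-id, host and info.severity (field names may vary across nuclei versions):

jq -r '.[] | [."template-id", .host, .info.severity] | @tsv' nuclei-output.json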

To hunt for LFI:

cat urls-params.txt | gf lfi >> urls-lfi.txt

nuclei -l urls-lfi.txt -tags lfi

nuclei -u 'https://example.com/home.php?page=about.php' -tags lfi
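The gf lfi filter above depends on community pattern files; a typical setup, assuming Go is installed and using tomnomnom's gf with the 1ndianl33t pattern set (gf reads its patterns from ~/.gf):

go install github.com/tomnomnom/gf@latest

git clone https://github.com/1ndianl33t/Gf-Patterns && mkdir -p ~/.gf && cp Gf-Patterns/*.json ~/.gf/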
