Wildcrawl
Here's a summary of what the script does:
Crawls the target URL using Hakrawler.
Removes duplicate links.
Extracts all files of certain types (e.g., PDF, DOC, ZIP, JPG) and saves them to a separate file.
Extracts all domains and removes duplicates.
Filters out certain domains (e.g., Facebook, Twitter, LinkedIn).
Extracts the DNS records of each domain.
Gets all the IPs associated with the filtered domains.
Extracts the page title for each domain and IP.
Filters out any results that have a title starting with "404".
At the end, all of these files are saved in a numbered output directory (scan_1, scan_2, etc.).
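The crawl, dedupe, file-extraction, and domain-filtering steps above can be sketched with standard Unix tools. This is a minimal illustration, not the script itself: the filenames, the sample link list, and the excluded-domain pattern are assumptions, and the Hakrawler call is shown only as a comment because it needs a live target.

```shell
# In the real script, the link list would come from Hakrawler, e.g.:
#   echo "https://example.com" | hakrawler -d 2 > links.txt
# A small sample stands in for it here so the pipeline is self-contained.
cat > links.txt <<'EOF'
https://example.com/report.pdf
https://example.com/report.pdf
https://cdn.example.com/logo.jpg
https://facebook.com/sharer
https://sub.example.com/page
EOF

# Remove duplicate links.
sort -u links.txt > links_unique.txt

# Extract files of certain types (PDF, DOC, ZIP, JPG) into a separate file.
grep -Ei '\.(pdf|docx?|zip|jpe?g)$' links_unique.txt > files.txt

# Extract the domain portion of each URL and deduplicate.
awk -F/ '{print $3}' links_unique.txt | sort -u > domains.txt

# Filter out social-media domains (Facebook, Twitter, LinkedIn).
grep -Ev '(facebook|twitter|linkedin)\.com' domains.txt > domains_filtered.txt
```

The same idea scales to any extension or exclusion list by editing the two regular expressions.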
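The later steps (resolving IPs, grabbing titles, dropping "404" results) can be sketched the same way. In a real run the data would come from DNS and HTTP probes, e.g. `dig +short A <domain>` for IPs and something like `httpx -l domains_filtered.txt -title` for titles; those need a network, so a sample title list is used below and only the "starts with 404" filter actually executes. The file names and sample hosts are illustrative assumptions.

```shell
# Real script would probe each filtered domain, e.g. (illustrative):
#   while read -r d; do dig +short A "$d"; done < domains_filtered.txt > ips.txt
#   httpx -l domains_filtered.txt -title -o titles.txt
# Sample probe output, one "<host> <title>" pair per line:
cat > titles.txt <<'EOF'
example.com Welcome Page
old.example.com 404 Not Found
cdn.example.com Asset Server
EOF

# Drop any result whose title starts with "404" (dead or removed pages).
awk '$2 !~ /^404/' titles.txt > titles_filtered.txt
```

Filtering on the title rather than the status code catches soft-404s, where a server returns 200 but renders a "404 Not Found" page.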