Crawl URLs and Discover JavaScript URLs & Endpoints
Gathering wordlists from JavaScript code paths can lead to finding more vulnerabilities in a web application. JavaScript code often includes strings that are used as input to the application, such as URLs and file paths. These strings reveal details about the application's internal structure and functionality, and attackers can use them to gain unauthorized access or to uncover other vulnerabilities.
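As a quick illustration (not part of the workflow itself), the minimal Python sketch below shows the kind of URL and path strings that can be pulled out of JavaScript source with simple regular expressions; the snippet and patterns are hypothetical examples.

```python
import re

# Hypothetical JavaScript snippet, similar to what a crawler might download.
js_source = '''
fetch("https://api.example.com/v1/users");
const avatar = "/static/img/avatar.png";
window.location = "https://example.com/admin/login";
'''

# Naive patterns for absolute URLs and for relative paths kept in string literals.
url_pattern = re.compile(r'''https?://[^\s'"<>]+''')
path_pattern = re.compile(r'''['"](/[\w./-]+)['"]''')

print("URLs: ", url_pattern.findall(js_source))   # API and admin endpoints
print("Paths:", path_pattern.findall(js_source))  # static file path
```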
Complexity: basic
Category: Web Discovery
Setup
Set up this workflow by changing the initial input value:
- TARGETS - provide a file containing a list of web servers as the target
In the example below, we're providing a sample list as the target. The workflow will then crawl all of the endpoints, extract JavaScript files, and create two files at the end (a minimal sketch of this flow follows the list below):
- urls.txt - with all URLs contained in JavaScript code
- paths.txt - all file and folder paths contained in JavaScript code
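The actual crawling and extraction is handled by the workflow's nodes; the sketch below only approximates the same data flow in plain Python, assuming a hypothetical targets.txt input (one web server URL per line) and producing the two output files named above.

```python
import re
import requests
from urllib.parse import urljoin

URL_RE = re.compile(r'''https?://[^\s'"<>]+''')
PATH_RE = re.compile(r'''['"](/[\w./-]+)['"]''')
SCRIPT_SRC_RE = re.compile(r'''<script[^>]+src=["']([^"']+)["']''', re.IGNORECASE)

urls, paths = set(), set()

# Hypothetical TARGETS file: one webserver URL per line, e.g. https://example.com
with open("targets.txt") as f:
    targets = [line.strip() for line in f if line.strip()]

for target in targets:
    try:
        html = requests.get(target, timeout=10).text
    except requests.RequestException:
        continue
    # Resolve every discovered <script src=...> against the target and fetch it.
    for src in SCRIPT_SRC_RE.findall(html):
        js_url = urljoin(target, src)
        try:
            js = requests.get(js_url, timeout=10).text
        except requests.RequestException:
            continue
        urls.update(URL_RE.findall(js))
        paths.update(PATH_RE.findall(js))

# The same two artifacts the workflow produces.
with open("urls.txt", "w") as f:
    f.write("\n".join(sorted(urls)) + "\n")
with open("paths.txt", "w") as f:
    f.write("\n".join(sorted(paths)) + "\n")
```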
Execution and results
After setup, the workflow is ready to be executed. Once the workflow's last node, the links-paths script, has finished, the results can be viewed and downloaded. links-paths will contain the resulting URLs and paths.
Try it out!
This workflow is available in the Library; you can copy it and execute it immediately!
Improve this workflow
- Better parsing of the paths and URLs
- Verify that JavaScript URLs respond with status code 200 (see the sketch after this list)
- Use notify to send newly found results, deduplicated via anew
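As a rough illustration of these improvements, the Python sketch below keeps only URLs that answer with status code 200 and appends only previously unseen entries to a results file (roughly what piping through anew does on the command line); the file names are assumptions, and sending the new entries would still be done with notify.

```python
import requests

def is_alive(url):
    """Return True only if the URL responds with HTTP 200."""
    try:
        return requests.get(url, timeout=10).status_code == 200
    except requests.RequestException:
        return False

def append_new(path, candidates):
    """Append only lines not already present in `path` (anew-style) and return them."""
    try:
        with open(path) as f:
            known = {line.strip() for line in f}
    except FileNotFoundError:
        known = set()
    fresh = [c for c in candidates if c not in known]
    with open(path, "a") as f:
        for line in fresh:
            f.write(line + "\n")
    return fresh

live = [u for u in open("urls.txt").read().splitlines() if u and is_alive(u)]
new_results = append_new("verified-urls.txt", live)
# new_results now holds only the newly discovered, live URLs worth sending via notify.
```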
Check out another Web Discovery workflow from the Trickest library - Custom Parameter Discovery Wordlist!