Tool Performance Optimizations
How to fix a tool’s “Not enough resources” failure?
Insufficient capacity of the machine assigned to a tool’s execution can cause the tool to fail within a Run. Solution:
- Go back to your workflow’s Builder tab and define a larger machine type for the given tool. Learn more in How to assign appropriate machine type to a tool.
- Execute the workflow again.
What to do when too many requests are given as a tool’s input value?
If the number of requests (IPs, URLs, or something else) given as a tool’s input is too high, stability problems can appear. Try the following solutions:
Process requests in batches
Use Trickest Splitter nodes to process requests in batches and get maximum performance. Splitters allow you to split a file into smaller chunks (by line or by line ranges) and pass them to multiple duplicates of the target tool. This has multiple benefits:
- You’d avoid passing too large a file to a single node, which could make some tools crash.
- If there are any rogue entries in your targets file that would make the tool hang (e.g. a server timing out, or responding unexpectedly), it will only crash one iteration of the tool without affecting the others.
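Outside the workflow Builder, the same batching idea can be sketched with standard coreutils (file names and chunk size here are assumptions, standing in for what a Trickest Splitter node does inside a workflow):

```shell
# Generate a sample target list of 2500 entries (placeholder hostnames)
printf 'host%s.example.com\n' $(seq 1 2500) > targets.txt

# Split it into 1000-line chunks named batch_aa, batch_ab, batch_ac
split -l 1000 targets.txt batch_

# Each chunk can now be fed to a separate instance of the tool
wc -l batch_*
```

A crash caused by a rogue entry then only loses one chunk's worth of results instead of the whole run.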
Process batches in parallel
When executing a workflow, increase the number of machines of the type defined for the given tool, and enable parallelization of the tool’s execution. The prerequisite for this improvement is having more than one available machine of the given machine type. You can find all available machines on your Fleet page.
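Within Trickest this parallelism comes from running the duplicated tool nodes on multiple machines. As a local sketch of the same idea, batches can be processed concurrently with `xargs -P` (the batch files and the `tr` processing step are assumptions, standing in for the actual tool):

```shell
# Create a couple of sample batch files
printf 'a\nb\n' > batch_aa
printf 'c\nd\n' > batch_ab

# Process each batch in parallel, two jobs at a time (-P 2),
# writing one output file per batch
ls batch_* | xargs -P 2 -I {} sh -c 'tr "a-z" "A-Z" < "{}" > "{}.out"'

cat batch_aa.out batch_ab.out
```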
What to do when the masscan tool runs slow?
A few solutions can be tried here:
- Increase the rate parameter (the default is 100; you can go as high as 10000 or even 25000 in some cases).
- If the rate is already set, the machine could be the bottleneck; use a larger machine for masscan execution. Learn how to assign appropriate machine type to a tool.
- If the number of IPs is too high, this could cause some stability problems. Try processing them in batches using the special Trickest Splitter nodes.
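For reference, a higher rate corresponds to masscan’s --rate flag on the command line (the target list and ports below are assumptions; inside Trickest you would set the rate parameter on the node instead):

```shell
# Sample target list (TEST-NET placeholder addresses)
printf '203.0.113.1\n203.0.113.2\n' > targets.txt

# Raise the packet rate from the default 100 to 10000 packets/second
# (guarded so the sketch is a no-op where masscan isn't installed)
if command -v masscan >/dev/null 2>&1; then
    masscan -p80,443 -iL targets.txt --rate 10000 -oL results.txt
fi
```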
What to do when the nuclei tool runs slow?
Try some of the following solutions to improve nuclei execution performance:
Tool options related to speed
Nuclei has some options that will make it faster depending on your list of targets. Take a look at:
- threads (the default value of 500 might be a bit too high)
Tool options related to filtering
It might be a good idea to filter the templates that are passed to nuclei, using one or more of the following options:
- severity (running low-severity templates may or may not be worth it for you)
- automatic-scan (this will let nuclei run templates based on the discovered technologies)
Tool options related to debugging
It’s always good to have more debug info. Use the stats flag so that you can analyze the tool’s stdout and figure out ways to make it go faster.
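Put together, a tuned nuclei invocation might look like the sketch below (the target file is an assumption; -l, -c, -severity, and -stats are nuclei’s own flags):

```shell
# Sample target list (placeholder URL)
printf 'https://example.com\n' > targets.txt

# -c        lower the template concurrency if the default is too aggressive
# -severity run only higher-severity templates
# -stats    print periodic statistics to stdout for throughput debugging
# (guarded so the sketch is a no-op where nuclei isn't installed)
if command -v nuclei >/dev/null 2>&1; then
    nuclei -l targets.txt -c 25 -severity critical,high -stats
fi
```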
What to do when the ffuf tool runs slow?
The target-url input value can cause slowness in ffuf execution when the given host is timing out. To get around this issue, use one of the following solutions:
- Set the max-run-time-for-process parameter as mentioned here.
- Run an instance of httprobe on your hosts (in case of using splitters) with a timeout of around 5 seconds before passing them to the ffuf node. This way, it will filter out the inaccessible host(s) before executing ffuf and save you some runtime.
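The httprobe pre-filtering step can be sketched as follows (the host list and wordlist are assumptions; httprobe’s -t flag takes a timeout in milliseconds, and ffuf’s -maxtime caps the total run time in seconds):

```shell
# Sample inputs (placeholder hosts and wordlist)
printf 'example.com\nexample.org\n' > hosts.txt
printf 'admin\nbackup\n' > wordlist.txt

# Keep only hosts that answer within ~5 seconds, then fuzz the survivors
# (guarded so the sketch is a no-op where the tools aren't installed)
if command -v httprobe >/dev/null 2>&1 && command -v ffuf >/dev/null 2>&1; then
    httprobe -t 5000 < hosts.txt > alive.txt
    while read -r url; do
        ffuf -u "$url/FUZZ" -w wordlist.txt -maxtime 300
    done < alive.txt
fi
```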
What to do when the arjun tool runs slow?
If the number of URLs to be processed is too high, stability problems can appear due to a higher chance of invalid URLs. If arjun’s input contains some un-processable URLs, arjun can go into a validation loop. Possible solutions:
- Instead of arjun, use x8, which has a check for this kind of behavior and exits immediately if it’s detected.
- grep -v these URLs out of the list of hosts before passing them to arjun (either hardcode them if the list isn’t going to change, or write some shell code to check if the response is dynamic regardless of the passed parameters).
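The grep -v filtering step might look like this (the file names and the problematic host pattern are assumptions):

```shell
# Sample URL list containing one known-problematic host
printf 'https://good.example.com/page\nhttps://flaky.example.net/page\n' > urls.txt

# Drop the URLs that send arjun into a validation loop
grep -v 'flaky.example.net' urls.txt > clean-urls.txt

cat clean-urls.txt
```

The cleaned list can then be passed to the arjun node as usual.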
What to do when gobuster-dir fails with the error “Error on parsing arguments: status-codes and status-codes-blacklist”?
gobuster-dir fails with the error “Error: error on parsing arguments: status-codes and status-codes-blacklist are both set, please set only one” even when only status-codes is set, not status-codes-blacklist. Solution:
The issue seems to be related to the gobuster-dir behavior mentioned here.
gobuster-dir has a default value for blacklist-status-codes that’s always there unless you nullify it. To get around this, try one of the following solutions:
- Set blacklist-status-codes to an empty string.
- Instead of positive-status-codes, use blacklist-status-codes to do your filtering (which might be better in some cases, to avoid accidentally filtering an unknown “positive” status code that’s not included in the list).
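On the command line, the two workarounds correspond to gobuster’s -s (positive status codes) and -b (blacklist) flags (the target URL and wordlist are assumptions):

```shell
# Sample wordlist (placeholder entries)
printf 'admin\nlogin\n' > wordlist.txt

# (guarded so the sketch is a no-op where gobuster isn't installed)
if command -v gobuster >/dev/null 2>&1; then
    # Option 1: nullify the default blacklist so -s can be used
    gobuster dir -u https://example.com -w wordlist.txt -s 200,301 -b ""

    # Option 2: leave -s unset and filter with the blacklist instead
    gobuster dir -u https://example.com -w wordlist.txt -b 404
fi
```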