Tool Performance Optimizations

How to fix a tool's "No enough resources" failure?

Insufficient capacity of the machine assigned to a tool's execution can cause the tool to fail within a Run. Solution:

  1. Go back to your workflow's Builder tab and define a larger machine type for the given tool. Learn more in How to assign appropriate machine type to a tool.
  2. Execute the workflow again.

What to do when too many requests are given as a tool's input value?

If the number of requests (IPs, URLs or something else) given as a tool's input is too high, stability problems can appear. Try the following solutions:

  1. Process requests in batches

    Use Trickest Splitter nodes to process requests in batches and get maximum performance. Splitters allow you to split a file into smaller chunks (by line or by line ranges) and pass them to multiple duplicates of the target tool. This has multiple benefits:

  • You'd avoid passing too large a file to a single node, which could make some tools crash.
  • If there are any rogue entries in your targets file that would make the tool hang (e.g. a server timing out or responding unexpectedly), it will only crash one iteration of the tool without affecting the others.
  2. Process batches in parallel

    When executing the workflow, increase the number of machines of the type defined for the given tool and enable parallelization of the tool's execution. A prerequisite for this improvement is having more than one available machine of the given machine type. You can find all available machines on your Fleet page. A rough local shell analogue of this batching-and-parallelism idea is sketched below.
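
In the platform, the batching and parallelism above are handled by Splitter nodes and the machine count, but as a rough local analogue of the same idea (the file names, chunk size, and the scan-tool placeholder are illustrative assumptions, not part of Trickest itself):

```bash
# Split targets.txt into chunks of 1000 lines each (the chunk size is an arbitrary example).
split -l 1000 targets.txt chunk_

# Run one tool instance per chunk, at most 4 in parallel;
# "scan-tool" stands in for whichever tool the workflow node wraps.
ls chunk_* | xargs -P 4 -I {} scan-tool -l {} -o "results_{}.txt"
```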

What to do when the masscan tool is slow?

A few solutions can be tried here:

  • Increase the rate parameter (default is 100 - you can go as high as 10000 or even 25000 in some cases).
  • If the rate is already set, the machine could be the bottleneck - use a larger machine for masscan execution. Learn how to assign appropriate machine type to a tool.
  • If the number of IPs is too high, this could cause some stability problems. You should try to process them in batches by using special Trickest Splitter nodes.
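
For illustration, a masscan run with an increased rate could look roughly like this (the port range and file names are placeholders):

```bash
# Scan all ports of the hosts in ips.txt at a higher packet rate;
# 10000 is the upper value suggested above - tune it to what the machine and network can handle.
masscan -iL ips.txt -p1-65535 --rate 10000 -oL masscan-results.txt
```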

What to do when the nuclei tool is slow?

Try some of the following solutions to improve nuclei's execution performance:

  1. Tool options related to speed

    Nuclei has some options that will make it faster depending on your list of targets. It's worth taking a look at:

  • bulk-size
  • rate-limit
  • threads (the default value of 500 might be a bit too high)
  2. Tool options related to filtering

    It might be a good idea to filter the templates that are passed to nuclei, using one or more of the following options:

  • severity and exclude-severity (info and low templates may or may not be worth it for you)
  • automatic-scan: this will let nuclei run templates based on the discovered technologies
  3. Tool options related to debugging

    It's always good to have more debug info using the verbose and stats flags so that you can analyze the tool's stdout and figure out ways to make it go faster; an example combining these options is sketched below.
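
Putting the options above together, a sketch of a nuclei invocation could look like the following; the target list and the exact values are illustrative rather than recommended settings, and -c corresponds to the concurrency/threads option:

```bash
# Speed: a smaller bulk size and a request rate limit keep a single slow target from stalling the run.
# Filtering: skip info-severity templates to cut down the total number of requests.
# Debugging: -stats and -verbose expose progress information on stdout for later analysis.
nuclei -l targets.txt \
  -bulk-size 25 -rate-limit 150 -c 25 \
  -exclude-severity info \
  -stats -verbose \
  -o nuclei-results.txt
```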

What to do when the ffuf tool is slow?

Sometimes the target-url input value can cause slowness in ffuf's execution when the given host is timing out. To get around this issue, use one of the following solutions:

  1. Set the max-run-time-for-process parameter of the ffuf tool.
  2. Run an instance of httpx or httprobe on your hosts (in case of using a Splitter, on the split hosts) with a timeout of around 5 seconds before passing them to the ffuf node. This way, the inaccessible host(s) are filtered out before executing ffuf, saving you some runtime.
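
A sketch of the second approach on the command line (the file names, wordlist, and timeout/maxtime values are illustrative; the max-run-time-for-process node parameter is assumed to map to ffuf's -maxtime flag):

```bash
# Probe the candidate hosts first and keep only those that answer within ~5 seconds.
httpx -l hosts.txt -timeout 5 -silent > live-hosts.txt

# Fuzz only the reachable hosts; -maxtime caps how long a single ffuf run may take.
while read -r host; do
  ffuf -u "${host}/FUZZ" -w wordlist.txt -maxtime 600
done < live-hosts.txt
```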

What to do when the arjun tool is slow?

If the number of URLs to be processed is too high, stability problems can appear due to a higher chance of invalid URLs. If arjun's input contains unprocessable URLs, the tool can go into a validation loop. Possible solutions:

  1. Instead of arjun, use x8, which has a check for this kind of behavior and exits immediately if it's detected.
  2. grep -v these URLs out of the list of hosts before passing them to arjun (either hardcode them if the list isn't going to change or write some shell code to check if the response is dynamic regardless of the passed parameters).
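
A minimal sketch of the second option (the file names are hypothetical, and the arjun flags shown are the CLI ones - check the node's parameters for the exact names):

```bash
# Drop known-problematic URLs before they ever reach arjun;
# bad-urls.txt holds the URLs or patterns to exclude.
grep -v -f bad-urls.txt urls.txt > clean-urls.txt

# Feed only the cleaned list to arjun (-i imports targets from a file, -oT writes text output).
arjun -i clean-urls.txt -oT arjun-params.txt
```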

What to do when gobuster-dir fails with the error "Error on parsing arguments: status-codes and status-codes-blacklist"?

The gobuster-dir tool fails with the error "Error: error on parsing arguments: status-codes and status-codes-blacklist are both set, please set only one" even when only status-codes is set, not status-codes-blacklist. Solution:

This issue seems to be related to gobuster-dir's default behavior.

Gobuster-dir has a default value for blacklist-status-codes that's always there unless you nullify it. To get around this, try one of the following solutions:

  1. Set blacklist-status-codes to an empty string "".
  2. Use blacklist-status-codes instead of positive-status-codes to do your filtering (which might be better in some cases to avoid accidentally filtering an unknown "positive" status code that's not included in the list).
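
For illustration, the first workaround on the gobuster CLI (the URL and wordlist are placeholders) would look roughly like this:

```bash
# Nullify the default blacklist (-b) so it no longer conflicts with the positive status codes (-s).
gobuster dir -u https://example.com -w wordlist.txt -s "200,204,301,302" -b ""
```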

Is my tool updated?

Trickest takes care of keeping tools up to date with their latest versions.

Find out how to optimize the performance of the whole workflow!