Add Targets to Nmap
-------------------

Table:
------
1) Add targets.
2) Check for duplicate scanned IPs (delayed).

Add targets:
------------
o prerule, portrule and hostrule scripts can add new targets to Nmap.

o New targets are given as strings: IP addresses or hostnames.

o Adding new targets to the Nmap scan queue is designed for NSE.
  Adding traceroute hops to the scan queue should be done by a new
  script: this information is saved in the Target class, so scripts
  can have access to it, but we must code a new API to permit that.
  (The script can have its own arguments to adjust the behaviour,
  e.g. add only the three last hops before our target to the Nmap
  scan queue.)

o Adding new targets to the Nmap scan queue should be activated by a
  new argument:

  * Since this feature is designed for NSE, we could define a general
    script argument that activates this behaviour, like:
      --script-args="allow-new-targets=1"

  * Or add a new Nmap core option "--script-newtargets" or "-iA".

  * Each script must also have its own argument to activate this
    feature, since a user may want only script X to add targets and
    not the other scripts. So to activate this feature the user must
    specify two arguments: the global adding-targets option and
    another script argument. This is why an Nmap core option
    "--script-newtargets" might be better: it avoids specifying two
    "--script-args" values for the same feature. However, even with
    the core option "--script-newtargets" we must set a global
    variable "nmap.registry.allow-new-targets" that scripts will
    check to see if the feature was requested.

Other considerations:

o The newly added targets must be counted and included in the host
  group calculation to honor the --min-hostgroup and --max-hostgroup
  options.

o Nmap must check that the new targets are not in the exclude list
  (--exclude option). To achieve this we must provide the new targets
  to the nexthost() call, which will do the filtering for us.
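The two-level activation described above (a core option mirrored into a registry flag that scripts consult) could be sketched as follows. All names here (`Options`, `script_newtargets`, `allow_new_targets()`) are assumptions for illustration, not actual Nmap symbols:

```cpp
#include <cassert>

// Hypothetical option state: "--script-newtargets" would set this
// flag, and NSE would mirror it into nmap.registry["allow-new-targets"]
// so that scripts can check whether the feature was requested before
// calling add_targets().
struct Options {
  bool script_newtargets;
  Options() : script_newtargets(false) {}
};

// A script may push new targets if either the core option or the
// global script argument (seen here as the registry flag) enabled
// the feature.
bool allow_new_targets(const Options &o, bool registry_flag) {
  return o.script_newtargets || registry_flag;
}
```

Either path ends up setting the same registry flag, so scripts only ever need to check one place.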
For the moment the following option is omitted:

"--max-new-targets"
  Set the maximum number of newly discovered targets that are pushed
  onto the Nmap scan queue. We should set a default value to avoid
  runaway scans. Perhaps we should find a way to combine this with
  the already used o.max_ips_to_scan variable.

Implementation details:
-----------------------
* Add support for a global NSE script argument that activates the
  feature: --script-args="allow-new-targets=1"
  (e.g. nmap.registry.allow-new-targets=1)

* A new NSE function "add_targets()" should live in its own NSE
  library or in an existing one. The NSE function can take a single
  target or a list of targets, but the C/C++ code will only take a
  single target and push it onto the "new_targets_cache". Targets
  are IPs or hostnames.

* The function will return the number of targets waiting to be
  scanned, or 0 on failure. e.g.:
    local status = add_targets("hostname.com")
    local status = add_targets("127.0.0.1")
    local targets_list = { "10.0.0.0", "localhost", ... }
    local status = add_targets(targets_list)

Decision to make:
-----------------
Since checking all Nmap targets for duplicates is delayed, perhaps
instead of making the "new_targets_cache" a vector, make it a tree
(std::map).

1) vector solution: store the new targets in a vector
   (std::vector new_targets_cache). With this solution every target
   popped from the 'new_targets_cache' can be erased (good for
   memory), but scripts can add the same newly discovered target
   twice.

2) std::map solution: store the new targets (strings here) in a tree
   (std::map new_targets_cache). Targets are std::string values (IPs
   or hostnames) saved in the std::map, so with this solution we can
   be sure that scripts do not add the same target twice, and we can
   avoid 'runaway scans'. But the filtering is done on the NSE side
   only, so targets already scanned by Nmap itself are not counted.
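The std::map solution in (2) might look like the sketch below. The class and method names are illustrative, not actual Nmap code; the point is only that keying the cache on the target string makes a second add_targets() of the same string fail with 0, matching the return contract above:

```cpp
#include <cassert>
#include <map>
#include <string>

// Hypothetical new_targets_cache as a tree keyed by the target string
// (IP or hostname).  The bool value records whether the target has
// already been handed to the Nmap scan queue.
class NewTargetsCache {
  std::map<std::string, bool> cache;
public:
  // Mirrors the add_targets() contract: returns the number of targets
  // waiting to be scanned, or 0 on failure (duplicate string).
  size_t push(const std::string &target) {
    if (!cache.insert(std::make_pair(target, false)).second)
      return 0;  // a script tried to add the same target twice
    return waiting();
  }
  // Number of cached targets not yet handed to the scan queue.
  size_t waiting() const {
    size_t n = 0;
    std::map<std::string, bool>::const_iterator it;
    for (it = cache.begin(); it != cache.end(); ++it)
      if (!it->second)
        n++;
    return n;
  }
};
```

Note the limitation discussed in this section: "hostname.com" and its resolved IP are different keys here, so duplicates across name/IP, or against targets Nmap already scanned, are not caught by this cache alone.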
   If scripts add a lot of targets this will consume a lot of memory
   (though I think that saving all scanned Nmap targets would consume
   more memory than saving all the newly added targets). With this
   solution add_targets() can fail if the same target string is
   already present in the std::map. If implemented, it would be easy
   to update this solution to use a vector later (to free memory).
   Filtering all Nmap targets was delayed.

********* This part was delayed and should be done separately *********

Check for duplicate scanned IPs
-------------------------------
First we must note that the current version of Nmap does not remember
scanned IPs, so we can already run into a situation where we scan the
same IP twice or more. With the new feature of adding targets to Nmap
we can scan the same IP multiple times if we do not filter it. Imagine
a simple network scan where all the scanned hosts show us new IPs, and
these IPs are the already scanned ones.

Current proposed solutions:
---------------------------
(We should note that there will be a limit on the number of newly
added targets.)

Note: For the moment I've omitted the iterator solution, but we should
discuss it at the meeting.

1) Do not filter scanned IPs (use the current implementation of Nmap)
   and count on the maximum number of new targets; we avoid runaway
   scans, but we can scan the same IP multiple times.

2) Save all the scanned IPs in a hash table (a lot of memory trouble).

3) Save the scanned IPs in a hash table, but provide another option to
   the user so he can turn off the target-adding feature.
   We save all the scanned IPs and use them to filter for duplicates,
   and we create a new option that lets the user specify the maximum
   memory consumption. If we reach that limit, we turn off the
   feature, dump the list of newly added/discovered targets that were
   not scanned (so the user can save them), clear the vector of new
   targets and the hash table of already scanned IPs, and let Nmap
   continue its scan with the classic targets. I've seen that there
   are other options which change the current behaviour of Nmap, so
   perhaps we can provide another one.

4) Be sure not to do more than what a user expects.

5) ...

Implementation details:
-----------------------
o We should consider making duplicate-IP detection an option, since
  duplicate IPs can be useful for HTTP vhosts.

o We should note that any duplicate filtering must be based on the IP
  address, so we must get the IP address of our target and then check
  a hash table or a tree structure to see if this address has already
  been scanned. IPv6 can be supported later.
  e.g.: std::map scanned_ips_cache;

Examples coming soon.