Web Crawler used for Geographical Mapping
The content of the visited websites is irrelevant to the process. Only
properly formatted links pointing to domains external to that website
are collected to generate results on the map.
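A minimal sketch of this filtering step, using only the Python standard library. The definition of "properly formatted" is assumed here to mean an absolute http(s) URL with a hostname; the URLs `example.com` and `other.org` are hypothetical.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse, urljoin

class ExternalLinkParser(HTMLParser):
    """Collects links whose domain differs from the page's own domain."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.base_domain = urlparse(base_url).netloc
        self.external_links = []

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        href = dict(attrs).get("href")
        if not href:
            return
        absolute = urljoin(self.base_url, href)
        parsed = urlparse(absolute)
        # Keep only well-formed http(s) links to a different domain
        if parsed.scheme in ("http", "https") and parsed.netloc \
                and parsed.netloc != self.base_domain:
            self.external_links.append(absolute)

parser = ExternalLinkParser("https://example.com/page")
parser.feed('<a href="https://other.org/x">out</a><a href="/internal">in</a>')
# parser.external_links -> ["https://other.org/x"]; the internal link is dropped
```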
Acting as a polite bot
The web crawler does not follow redirects and never requests the same URL twice.
This means no bandwidth or server overload is caused on the visited domains' hosts.
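The "never twice" guarantee can be sketched with a visited set checked before every request (redirects are simply not followed, e.g. by disabling them in the HTTP client). The normalization shown here, dropping the URL fragment, is an assumption about what counts as "the same URL":

```python
from urllib.parse import urlparse

def should_fetch(url, visited):
    """Return True only the first time a URL is seen; record it as visited."""
    # Strip the fragment so "page" and "page#section" count as one URL
    normalized = urlparse(url)._replace(fragment="").geturl()
    if normalized in visited:
        return False
    visited.add(normalized)
    return True

visited = set()
should_fetch("https://example.com/a", visited)      # True, first visit
should_fetch("https://example.com/a#top", visited)  # False, same page
```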
This is not a continuously running process.
You create a crawler run by entering a URL in a form field.
The maximum number of domains checked during a crawler run, the maximum number of concurrent requests, and the crawl depth in the website tree are set to fixed values:
Maximum domains: 100
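These limits could be grouped into a single immutable configuration object. Only the 100-domain cap is stated above; the concurrency and depth values below are hypothetical placeholders:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class CrawlerRunLimits:
    """Fixed limits applied to every crawler run."""
    max_domains: int = 100             # stated in the text
    max_concurrent_requests: int = 10  # hypothetical placeholder
    max_depth: int = 3                 # hypothetical placeholder

limits = CrawlerRunLimits()
# limits.max_domains -> 100
```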