Web Crawler used for Geographical Mapping

The content of the visited pages is irrelevant to the process. Only properly formatted links pointing to domains external to the visited website are collected and used to generate results on the map.
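Extracting only external links could be sketched as follows. This is a minimal illustration, not the service's actual implementation: it assumes the standard library's `html.parser` and compares each link's host against the host of the page being crawled.

```python
from html.parser import HTMLParser
from urllib.parse import urljoin, urlparse

class ExternalLinkParser(HTMLParser):
    """Collects href targets that point outside the page's own domain."""

    def __init__(self, base_url):
        super().__init__()
        self.base_url = base_url
        self.base_host = urlparse(base_url).netloc
        self.external_links = set()

    def handle_starttag(self, tag, attrs):
        if tag != "a":
            return
        for name, value in attrs:
            if name == "href" and value:
                target = urljoin(self.base_url, value)
                parsed = urlparse(target)
                # Keep only well-formed absolute links to other domains.
                if (parsed.scheme in ("http", "https")
                        and parsed.netloc
                        and parsed.netloc != self.base_host):
                    self.external_links.add(target)

parser = ExternalLinkParser("https://example.com/page")
parser.feed('<a href="/internal">in</a> <a href="https://other.org/x">out</a>')
# parser.external_links now holds only the link to other.org
```

Relative links such as `/internal` resolve to the page's own host and are discarded; only the cross-domain link survives.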

Acting as a polite bot

The web crawler does not follow redirects and never fetches the same URL twice. This means it causes no bandwidth or server overload on the visited domains' hosts.
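Both politeness rules are easy to express in code. The sketch below is a hypothetical illustration, not the crawler's own source: a visited-URL set guards against duplicate fetches, and a `urllib` redirect handler that returns `None` refuses to follow any redirect.

```python
import urllib.request

class NoRedirectHandler(urllib.request.HTTPRedirectHandler):
    """Refuse every redirect: returning None makes urllib raise HTTPError instead."""

    def redirect_request(self, req, fp, code, msg, headers, newurl):
        return None

def should_fetch(url, visited):
    """Return True at most once per URL; records the URL in `visited`."""
    if url in visited:
        return False
    visited.add(url)
    return True

# An opener built with this handler would never follow 3xx responses.
opener = urllib.request.build_opener(NoRedirectHandler())
```

With `should_fetch` consulted before every request, each URL is downloaded at most once per crawler run.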

Web Application

This website can be used freely within reasonable limits. Any abuse or improper use of the application will result in the requesting IP address being banned for an indeterminate period of time; an IP address banned for abuse cannot be whitelisted.


This is not a continuously running process.
You create a crawler run by entering a URL in a form field.
The maximum number of domains checked during a crawler run, the maximum number of concurrent requests, and the crawl depth within the website tree are fixed:
Maximum domains: 100
Concurrency: 4
Depth: 5
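The domain and depth caps could be enforced with a bounded breadth-first crawl like the sketch below. This is an assumed illustration (the real service's code is not shown here); `get_links` stands in for the actual fetch-and-parse step, and the run stops once the domain cap or depth limit is reached.

```python
from collections import deque
from urllib.parse import urlparse

MAX_DOMAINS = 100   # distinct domains checked per run
CONCURRENCY = 4     # workers a real run would use (sequential here for clarity)
MAX_DEPTH = 5       # how far to descend from the seed URL

def crawl(seed, get_links, max_domains=MAX_DOMAINS, max_depth=MAX_DEPTH):
    """Breadth-first crawl bounded by max_depth and max_domains.

    `get_links(url)` must return the outgoing links of a page;
    returns the set of distinct domains encountered.
    """
    seen_domains = {urlparse(seed).netloc}
    visited = set()
    queue = deque([(seed, 0)])
    while queue:
        url, depth = queue.popleft()
        if url in visited or depth > max_depth:
            continue
        visited.add(url)
        for link in get_links(url):
            domain = urlparse(link).netloc
            if domain not in seen_domains:
                if len(seen_domains) >= max_domains:
                    continue  # domain cap reached: ignore new domains
                seen_domains.add(domain)
            queue.append((link, depth + 1))
    return seen_domains

# Tiny in-memory link graph standing in for real pages.
graph = {
    "https://a.com/": ["https://b.com/", "https://c.com/"],
    "https://b.com/": ["https://d.com/"],
}
domains = crawl("https://a.com/", graph.get_links if False else (lambda u: graph.get(u, [])),
                max_domains=3)
```

With the cap lowered to 3, the run records `a.com`, `b.com`, and `c.com` and ignores `d.com`, mirroring how a production run would stop at 100 domains.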