The crawler generates geographical graphs from an origin website (submitted by the user) to the domains found on that website as href links.
Visited sites are treated as a single domain.
The geographical information is collected and then displayed on a world map, with websites represented as markers linked to each other in a node-graph layout. Only properly formatted links pointing to domains external to the current website are collected and used to generate results on the map.
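The external-link collection step described above can be sketched as follows, using only the Python standard library. The function and class names here are illustrative, not the project's actual API; the rule implemented is the one stated above: keep only absolute, well-formed http(s) links whose host differs from the origin's.

```python
from html.parser import HTMLParser
from urllib.parse import urlparse


class HrefCollector(HTMLParser):
    """Collect href attribute values from <a> tags in an HTML page."""

    def __init__(self):
        super().__init__()
        self.hrefs = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            for name, value in attrs:
                if name == "href" and value:
                    self.hrefs.append(value)


def external_domains(html, origin_url):
    """Return the set of external domains linked from a page.

    Relative links (no host) and malformed or non-http(s) links are
    discarded; links back to the origin's own host are ignored.
    """
    origin_host = urlparse(origin_url).netloc.lower()
    collector = HrefCollector()
    collector.feed(html)
    domains = set()
    for href in collector.hrefs:
        parsed = urlparse(href)
        if parsed.scheme in ("http", "https") and parsed.netloc:
            host = parsed.netloc.lower()
            if host != origin_host:
                domains.add(host)
    return domains
```

Reducing every collected link to its host is what lets the crawler treat each visited site as a single domain when building the graph.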
A nice bot
The web crawler does not follow redirects and never requests the same URL twice, so it causes no bandwidth or server overload on the visited domains' hosts. The URLs found on each website are filtered, and several regional TLDs are restricted for security reasons.
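A minimal sketch of the two filtering rules above: never request the same URL twice, and skip restricted regional TLDs. Which TLDs the project actually restricts is not stated, so the set below is a placeholder, and the class name is a hypothetical helper rather than the real implementation.

```python
from urllib.parse import urlparse

# Placeholder values only; the project's real restricted list is not public.
RESTRICTED_TLDS = {"invalid", "test"}


class UrlFilter:
    """Decide whether the crawler should fetch a given URL."""

    def __init__(self):
        self.visited = set()

    def should_visit(self, url):
        host = urlparse(url).netloc.lower()
        tld = host.rsplit(".", 1)[-1] if "." in host else ""
        if tld in RESTRICTED_TLDS:
            return False  # restricted regional TLD
        if url in self.visited:
            return False  # never hit the same URL twice
        self.visited.add(url)
        return True
```

Tracking visited URLs in a set is what keeps the crawler from re-hitting hosts, which is the basis of the no-overload claim above.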
Most well-known web services, such as Google or Twitter, are filtered out so the crawler avoids the general-purpose information and social network sites present on most websites.
"Login with" and share links are used extensively by websites; for that reason they will not appear in the results.
Explicit content websites are skipped by the crawler, and no sexual or violent results are taken into consideration.
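The domain and link filtering described above can be sketched as a simple blocklist check. Both lists below are illustrative guesses at the kind of entries involved, not the project's real configuration.

```python
from urllib.parse import urlparse

# Illustrative entries only: well-known services the crawler skips,
# and path fragments typical of "login with" and share links.
BLOCKED_DOMAINS = {"google.com", "twitter.com", "facebook.com"}
BLOCKED_PATH_HINTS = ("login", "share", "oauth")


def keep_result(url):
    """Return True if a collected URL should appear in the results."""
    parsed = urlparse(url)
    host = parsed.netloc.lower()
    # Strip a leading "www." so www.google.com matches google.com.
    if host.startswith("www."):
        host = host[4:]
    if host in BLOCKED_DOMAINS:
        return False
    path = parsed.path.lower()
    return not any(hint in path for hint in BLOCKED_PATH_HINTS)
```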
This website can be used freely within reasonable limits. Any abuse or improper use of the application will result in a ban of the requesting IP address for an indeterminate period of time; an IP address banned for abuse cannot be whitelisted.