Google has updated its list of official crawlers, adding the name and details of a relatively little-known one. The new documentation covers the Google-Safety user agent, which is not itself new, but documenting it officially is a meaningful step toward greater transparency.
Until now, publishers often saw this user agent in their logs but had no official documentation to explain it.
What is a crawler?
Crawling is the process by which Google discovers new and updated content and builds its search index. The automated programs that perform crawling, called crawlers, are also known as bots or spiders.
Adding official documentation for this crawler gives publishers clearer guidance on what to expect.
Kinds of crawlers
Google's documentation distinguishes several types of crawlers:
- Common crawlers: These are chiefly responsible for indexing content. They are also used by search testing tools, internal Google product teams, and AI-related products.
- User-triggered fetchers: These bots act on a user's request, for example fetching feeds or verifying site ownership.
- Special-case crawlers: Bots in this category handle special cases, such as checking the quality of mobile ad pages or delivering push notification messages via Google APIs. Notably, these crawlers do not follow the global user-agent directives in robots.txt (those grouped under the asterisk, *).
Google-Safety is documented under the special-case crawlers; it is used by Google's processes that look for malware. Unlike the other special-case crawlers, Google-Safety ignores all robots.txt directives entirely.
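To see what "ignoring the global user-agent directives" means in practice, here is a minimal sketch using Python's standard-library robots.txt parser. The crawler names "SomeBot" and the example rules are illustrative assumptions, not taken from Google's documentation; and note that a crawler like Google-Safety would skip robots.txt altogether, so no rule shown here would affect it.

```python
# Sketch: how robots.txt groups apply to different user agents.
# Standard robots.txt semantics: a crawler with its own named group
# follows only that group and ignores the global (*) rules.
import urllib.robotparser

ROBOTS_TXT = """\
User-agent: *
Disallow: /private/

User-agent: AdsBot-Google
Disallow: /ads-test/
"""

parser = urllib.robotparser.RobotFileParser()
parser.parse(ROBOTS_TXT.splitlines())

# A generic crawler (hypothetical "SomeBot") falls under the global group:
print(parser.can_fetch("SomeBot", "https://example.com/private/page"))        # False

# AdsBot-Google has its own group, so the global Disallow does not apply:
print(parser.can_fetch("AdsBot-Google", "https://example.com/private/page"))  # True
print(parser.can_fetch("AdsBot-Google", "https://example.com/ads-test/"))     # False
```

This is why publishers cannot block a special-case crawler through the usual `User-agent: *` block: it must be named explicitly, and in Google-Safety's case even that has no effect.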
What Google said on the new documentation
According to Google, "The Google-Safety user agent handles abuse-specific crawling, such as malware discovery for publicly posted links on Google properties. This user agent ignores robots.txt rules."
The full agent string for the crawler is listed in the documentation.
Read the new documentation for the Google-Safety user agent on the Google Search Central crawlers page, in the section devoted to special-case crawlers.