Googlebot
Googlebot is Google's web crawling bot (sometimes also called a "spider"). Crawling is the process by which Googlebot discovers new and updated pages to be added to the Google index.
We use a huge set of computers to fetch (or "crawl") billions of pages on the web. Googlebot uses an algorithmic process: computer programs determine which sites to crawl, how often, and how many pages to fetch from each site.
How Googlebot accesses your site
For most sites, Googlebot shouldn't access your site more than once every few seconds on average. However, due to network delays, it's possible that the rate will appear to be slightly higher over short periods.
Googlebot was designed to be distributed across many machines to improve performance and scale as the web grows. Also, to cut down on bandwidth usage, we run many crawlers on machines located near the sites they index. Therefore, your logs may show visits from several machines at google.com, all with the user-agent Googlebot. Our goal is to crawl as many pages from your site as we can on each visit without overwhelming your server's bandwidth. You can request a change in the crawl rate.
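For illustration, a Googlebot request in an Apache-style access log might look like the following. The IP address, timestamp, and path here are hypothetical; the user-agent string is the one Googlebot sends:

    66.249.66.1 - - [10/Oct/2024:13:55:36 +0000] "GET /products/index.html HTTP/1.1" 200 5120 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)"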
Blocking Googlebot from content on your site
It's almost impossible to keep a web server secret by not publishing links to it. As soon as someone follows a link from your "secret" server to another web server, your "secret" URL may appear in the Referer header and can be stored and published by the other web server in its referrer log. Similarly, the web has many outdated and broken links. Whenever someone publishes an incorrect link to your site or fails to update links to reflect changes on your server, Googlebot will try to crawl that incorrect URL from your site.
If you want to prevent Googlebot from crawling content on your site, you have several options. Be aware of the difference between preventing Googlebot from crawling a page, preventing Googlebot from indexing a page it has crawled, and preventing a page from being accessible at all to both crawlers and users.
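To make the distinction concrete, here is a minimal sketch (the /private/ path is a hypothetical example). A robots.txt rule like this prevents Googlebot from crawling a directory:

    User-agent: Googlebot
    Disallow: /private/

whereas a robots meta tag on a page that Googlebot is allowed to crawl prevents that page from being indexed:

    <meta name="robots" content="noindex">

Note that a page blocked by robots.txt can still end up in the index if other sites link to it, because Googlebot never fetches the page and so never sees a noindex directive. To keep a page out of the index, allow it to be crawled and serve the noindex tag, or put it behind authentication.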
Problems with spammers and other user-agents
The IP addresses used by Googlebot change from time to time. Googlebot identifies itself with a user-agent string, but that string can be spoofed; the most reliable way to verify that a request really comes from Googlebot is a reverse DNS lookup on the accessing IP address, confirmed by a forward lookup.
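As a minimal sketch, the verification takes two steps: a reverse DNS lookup on the IP address, which should return a hostname in the googlebot.com or google.com domain, followed by a forward lookup on that hostname, which should return the original IP address. In Python (the function name is our own):

    import socket

    def is_googlebot(ip_address):
        """Verify a claimed Googlebot request by reverse-then-forward DNS lookup."""
        # Reverse DNS: a genuine Googlebot IP resolves to a host in
        # googlebot.com or google.com.
        try:
            hostname = socket.gethostbyaddr(ip_address)[0]
        except socket.herror:
            return False  # no reverse DNS record
        if not hostname.endswith(('.googlebot.com', '.google.com')):
            return False
        # Forward DNS: the hostname must resolve back to the original IP,
        # otherwise the reverse record itself could be spoofed.
        try:
            return socket.gethostbyname(hostname) == ip_address
        except socket.gaierror:
            return False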
Googlebot and all respectable search engine bots will respect the directives in robots.txt, but some nogoodniks and spammers do not. Report spam to Google.
Google has several other user-agents, including Feedfetcher (user-agent Feedfetcher-Google). Since Feedfetcher requests come from explicit actions by human users who have added the feeds to their Google home page, rather than from automated crawlers, Feedfetcher does not follow robots.txt guidelines. You can prevent Feedfetcher from crawling your site by configuring your server to serve a 404, 410, or other error status code to the user-agent Feedfetcher-Google. More information about Feedfetcher.
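As an illustration, one way to do this in a Python WSGI application is a small middleware that checks the User-Agent header. This sketch is ours, not a configuration Google prescribes:

    def block_feedfetcher(app):
        """Wrap a WSGI app so requests from Feedfetcher-Google receive a 404."""
        def middleware(environ, start_response):
            if 'Feedfetcher-Google' in environ.get('HTTP_USER_AGENT', ''):
                start_response('404 Not Found', [('Content-Type', 'text/plain')])
                return [b'Not Found']
            return app(environ, start_response)
        return middleware

    # Usage: wrap your existing WSGI application.
    # application = block_feedfetcher(application)

The same effect can be achieved with a rewrite or access rule in your web server's own configuration; the middleware is just a compact way to show the logic.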
* Source: Google Search Console