Updated on June 6, 2017

When it comes to the world wide web, there are both bad bots and good bots. The bad bots you definitely want to avoid, as they consume your CDN bandwidth, take up server resources, and steal your content. Good bots (also known as web crawlers), on the other hand, should be handled with care, as they are a vital part of getting your content indexed by search engines such as Google, Bing, and Yahoo. Read on below to learn about 10 of the most popular web crawlers and their user agents, and how to ensure you are handling them correctly.

By definition, web scraping refers to the process of extracting a significant amount of information from a website using scripts or programs. Such scripts or programs allow one to extract data from a website, store it, and present it as designed by the creator. The data collected can also be part of a larger project that uses the extracted data as input.

Web crawlers

Web crawlers, also known as web spiders or internet bots, are programs that browse the web in an automated manner for the purpose of indexing content. Crawlers can look at all sorts of data such as content, links on a page, broken links, sitemaps, and HTML code validation.

Search engines like Google, Bing, and Yahoo use crawlers to properly index downloaded pages so that users can find them faster and more efficiently when searching. Without web crawlers, there would be nothing to tell search engines that your website has new and fresh content. Sitemaps can also play a part in that process. So web crawlers, for the most part, are a good thing. However, there can sometimes be issues with scheduling and load, as a crawler might be constantly polling your site. This is where a robots.txt file comes into play: it can help control crawl traffic and ensure that it doesn't overwhelm your server.

Web crawlers identify themselves to a web server by using the User-Agent request header in an HTTP request, and each crawler has its own unique identifier. Most of the time you will need to examine your web server access logs to see web crawler traffic.
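To make this concrete, here is a minimal Python sketch of spotting crawler traffic in an access log by matching known User-Agent tokens. The log line below is a made-up example in the common combined log format:

```python
# Identify crawler traffic in a web server access log by matching known
# User-Agent tokens. LOG_LINE is an invented example for illustration.
KNOWN_BOT_TOKENS = ["Googlebot", "bingbot", "Slurp", "DuckDuckBot",
                    "Baiduspider", "YandexBot", "Sogou web spider",
                    "Exabot", "facebookexternalhit", "ia_archiver"]

LOG_LINE = ('66.249.66.1 - - [06/Jun/2017:10:15:32 +0000] "GET /blog/ HTTP/1.1" '
            '200 5120 "-" "Mozilla/5.0 (compatible; Googlebot/2.1; '
            '+http://www.google.com/bot.html)"')

def identify_bot(log_line):
    """Return the first known bot token found in the line, or None."""
    for token in KNOWN_BOT_TOKENS:
        if token in log_line:
            return token
    return None

print(identify_bot(LOG_LINE))  # Googlebot
```

In practice you would run every line of the log file through such a check, or simply grep the log for the tokens you care about.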

Robots.txt

By placing a robots.txt file at the root of your web server, you can define rules for web crawlers, such as allowing or disallowing certain assets from being crawled. Well-behaved web crawlers follow the rules defined in this file. You can apply generic rules that apply to all bots, or get more granular and target a specific User-Agent string.

Example 1

This example instructs all search engine robots not to crawl any of the website's content. This is defined by disallowing the root / of your website.
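A robots.txt that does this looks like the following (the asterisk matches every user agent):

```
User-agent: *
Disallow: /
```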

Example 2

This example achieves the opposite of the previous one. The instructions still apply to all user agents; however, nothing is defined within the Disallow directive, meaning that everything can be crawled.
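In robots.txt form:

```
User-agent: *
Disallow:
```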

To see more examples make sure to check out our in-depth post on how to use a robots.txt file.

Top 10 web crawlers and bots

There are hundreds of web crawlers and bots scouring the internet, but below is a list of 10 popular web crawlers and bots that we have collected based on the ones we see regularly in our web server logs.

1. GoogleBot

Googlebot is obviously one of the most popular web crawlers on the internet today, as it is used to index content for Google's search engine. Patrick Sexton wrote a great article about what Googlebot is and how it pertains to your website's indexing. One great thing about Google's web crawler is that Google gives us a lot of tools and control over the process.

User-Agent

Googlebot

Full User-Agent string

Mozilla/5.0 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)

Googlebot example in robots.txt

This example displays a little more granularity in the instructions defined. Here, the instructions apply only to Googlebot. More specifically, it tells Google not to crawl a specific page (/no-index/your-page.html).
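The corresponding robots.txt rule would be:

```
User-agent: Googlebot
Disallow: /no-index/your-page.html
```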

Besides Google's main web search crawler, Google actually runs 9 additional web crawlers:

  • Googlebot News: Googlebot-News
  • Googlebot Images: Googlebot-Image/1.0
  • Googlebot Video: Googlebot-Video/1.0
  • Google Mobile (feature phone): SAMSUNG-SGH-E250/1.0 Profile/MIDP-2.0 Configuration/CLDC-1.1 UP.Browser/6.2.3.3.c.1.101 (GUI) MMP/2.0 (compatible; Googlebot-Mobile/2.1; +http://www.google.com/bot.html)
  • Google Smartphone: Mozilla/5.0 (Linux; Android 6.0.1; Nexus 5X Build/MMB29P) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/41.0.2272.96 Mobile Safari/537.36 (compatible; Googlebot/2.1; +http://www.google.com/bot.html)
  • Google Mobile AdSense: (compatible; Mediapartners-Google/2.1; +http://www.google.com/bot.html)
  • Google AdSense: Mediapartners-Google
  • Google AdsBot (PPC landing page quality): AdsBot-Google (+http://www.google.com/adsbot.html)
  • Google app crawler (fetches resources for mobile): AdsBot-Google-Mobile-Apps

You can use the Fetch tool in Google Search Console to test how Google crawls or renders a URL on your site. See whether Googlebot can access a page on your site, how it renders the page, and whether any page resources (such as images or scripts) are blocked to Googlebot.

You can also see the Googlebot crawl stats per day, the amount of kilobytes downloaded, and time spent downloading a page.

See Googlebot robots.txt documentation.

2. Bingbot

Bingbot is a web crawler deployed by Microsoft in 2010 to supply information to the Bing search engine. It replaced what used to be MSNbot.

User-Agent

bingbot

Full User-Agent string

Mozilla/5.0 (compatible; bingbot/2.0; +http://www.bing.com/bingbot.htm)

Bing also has a tool very similar to Google's, called Fetch as Bingbot, within Bing Webmaster Tools. Fetch as Bingbot allows you to request that a page be crawled and shown to you as Bingbot would see it. You will see the page code as Bingbot sees it, helping you understand whether Bing is seeing your page as you intended.

See Bingbot robots.txt documentation.

3. Slurp Bot

Yahoo Search results come from the Yahoo web crawler Slurp and from Bing's web crawler, as much of Yahoo Search is now powered by Bing. Sites should allow Yahoo Slurp access in order to appear in Yahoo Mobile Search results.

Additionally, Slurp does the following:

  • Collects content from partner sites for inclusion within sites like Yahoo News, Yahoo Finance and Yahoo Sports.
  • Accesses pages from sites across the Web to confirm accuracy and improve Yahoo's personalized content for its users.

User-Agent

Slurp

Full User-Agent string

Mozilla/5.0 (compatible; Yahoo! Slurp; http://help.yahoo.com/help/us/ysearch/slurp)

See Slurp robots.txt documentation.

4. DuckDuckBot

DuckDuckBot is the web crawler for DuckDuckGo, a search engine that has become quite popular lately thanks to its focus on privacy and not tracking its users. It now handles over 12 million queries per day. DuckDuckGo gets its results from over four hundred sources, including hundreds of vertical sources delivering niche Instant Answers, DuckDuckBot (its crawler), and crowd-sourced sites such as Wikipedia. It also shows more traditional links in the search results, which it sources from Yahoo!, Yandex, and Bing.

User-Agent

DuckDuckBot

Full User-Agent string

DuckDuckBot/1.0; (+http://duckduckgo.com/duckduckbot.html)

It respects WWW::RobotRules and originates from these IP addresses:

  • 72.94.249.34
  • 72.94.249.35
  • 72.94.249.36
  • 72.94.249.37
  • 72.94.249.38
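As a hedged sketch of how you might use this list, the following Python snippet checks whether a request claiming to be DuckDuckBot really originates from one of the published addresses. The function name and structure are illustrative, not part of any official API:

```python
# Verify a visitor that claims to be DuckDuckBot by checking both the
# User-Agent header and the set of published source IP addresses.
DUCKDUCKBOT_IPS = {"72.94.249.34", "72.94.249.35", "72.94.249.36",
                   "72.94.249.37", "72.94.249.38"}

def is_genuine_duckduckbot(user_agent, remote_ip):
    """Return True only if the UA claims DuckDuckBot AND the IP is published."""
    return "DuckDuckBot" in user_agent and remote_ip in DUCKDUCKBOT_IPS

print(is_genuine_duckduckbot(
    "DuckDuckBot/1.0; (+http://duckduckgo.com/duckduckbot.html)",
    "72.94.249.34"))  # True
print(is_genuine_duckduckbot("DuckDuckBot/1.0", "203.0.113.7"))  # False
```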

5. Baiduspider

Baiduspider is the official name of the Chinese search engine Baidu's web crawling spider. It crawls web pages and returns updates to the Baidu index. Baidu is the leading Chinese search engine, taking about an 80% share of the overall search engine market in Mainland China.

User-Agent

Baiduspider

Full User-Agent string

Mozilla/5.0 (compatible; Baiduspider/2.0; +http://www.baidu.com/search/spider.html)

Besides Baidu's web search crawler, they actually have 6 additional web crawlers:

  • Image Search: Baiduspider-image
  • Video Search: Baiduspider-video
  • News Search: Baiduspider-news
  • Baidu wishlists: Baiduspider-favo
  • Baidu Union: Baiduspider-cpro
  • Business Search: Baiduspider-ads
  • Other search pages: Baiduspider

See Baidu robots.txt documentation.

6. Yandex Bot

YandexBot is the web crawler for one of the largest Russian search engines, Yandex. According to LiveInternet, for the three months ended December 31, 2015, Yandex generated 57.3% of all search traffic in Russia.

User-Agent

YandexBot

Full User-Agent string

Mozilla/5.0 (compatible; YandexBot/3.0; +http://yandex.com/bots)

There are many different User-Agent strings that the YandexBot can show up as in your server logs. See the full list of Yandex robots and Yandex robots.txt documentation.

7. Sogou Spider

Sogou Spider is the web crawler for Sogou.com, a leading Chinese search engine that was launched in 2004. As of April 2016 it has a rank of 103 in Alexa's internet rankings.

Note: The Sogou web spider does not respect the robots exclusion standard, and is therefore banned from many websites because of excessive crawling.

User-Agent

Sogou web spider/4.0(+http://www.sogou.com/docs/help/webmasters.htm#07)

8. Exabot

Exabot is the web crawler for Exalead, a search engine based in France. The company was founded in 2000 and now has more than 16 billion pages indexed.

User-Agent

Mozilla/5.0 (compatible; Exabot/3.0; +http://www.exabot.com/go/robot)

See Exabot robots.txt documentation.

9. Facebook external hit

Facebook allows its users to send links to interesting web content to other Facebook users. Part of how this works on the Facebook system involves the temporary display of certain images or details related to the web content, such as the title of the webpage or the embed tag of a video. The Facebook system retrieves this information only after a user provides a link.

One of their main crawling bots is Facebot, which is designed to help improve advertising performance.

User-Agent

facebookexternalhit/1.1 (+http://www.facebook.com/externalhit_uatext.php)

Facebot

See Facebot robots.txt documentation.

10. Alexa crawler

ia_archiver is the web crawler for Amazon's Alexa internet rankings. As you probably know, Alexa collects information to produce rankings for both local and international sites.

User-Agent

ia_archiver

Full User-Agent string

ia_archiver (+http://www.alexa.com/site/help/webmasters; crawler@alexa.com)

See Ia_archiver robots.txt documentation.

Bad bots

As we mentioned above, most of these are actually good web crawlers. You generally don't want to block Google or Bing from indexing your site unless you have a good reason. But what about the thousands of bad bots? KeyCDN released a feature back in February 2016 that you can enable in your dashboard called Block Bad Bots. KeyCDN uses a comprehensive list of known bad bots and blocks them based on their User-Agent string.

When a new Zone is added, the Block Bad Bots feature is disabled by default. You can enable it if you want bad bots to be blocked automatically.
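The idea behind such a feature can be sketched in a few lines of Python. The pattern list here is a tiny hypothetical sample, not KeyCDN's actual blocklist:

```python
# User-Agent based blocking: match incoming UA strings against a list of
# known bad bot patterns. The patterns below are invented examples.
import re

BAD_BOT_PATTERNS = ["BadBot", "EvilScraper", "ContentThief"]
BAD_BOT_RE = re.compile("|".join(BAD_BOT_PATTERNS), re.IGNORECASE)

def should_block(user_agent):
    """Return True if the User-Agent matches a known bad bot pattern."""
    return BAD_BOT_RE.search(user_agent) is not None

print(should_block("Mozilla/5.0 (compatible; badbot/1.3)"))  # True
print(should_block("Mozilla/5.0 (compatible; Googlebot/2.1; "
                   "+http://www.google.com/bot.html)"))      # False
```

A real deployment would apply such a check at the CDN or web server edge and return a 403 for matching requests.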

Bot resources

Perhaps you are seeing some User-Agent strings in your logs that have you concerned. Here are a couple of good resources where you can look up popular bad bots, crawlers, and scrapers.

Caio Almeida also has a pretty good list on his crawler-user-agents GitHub project.


Summary

There are hundreds of different web crawlers out there, but hopefully you are now familiar with a couple of the more popular ones. Again, you want to be careful when blocking any of these, as doing so could cause indexing issues. It is always good to check your web server logs to see how often they are actually crawling your site.

Did we miss any important ones? If so please let us know below and we will add them.

Some websites can contain a very large amount of invaluable data.

Stock prices, product details, sports stats, company contacts, you name it.

If you wanted to access this information, you’d either have to use whatever format the website uses or copy-paste the information manually into a new document. Here’s where web scraping can help.

What is Web Scraping?

Web scraping refers to the extraction of data from a website. This information is collected and then exported into a format that is more useful for the user, be it a spreadsheet or an API.

Although web scraping can be done manually, in most cases, automated tools are preferred when scraping web data as they can be less costly and work at a faster rate.

But in most cases, web scraping is not a simple task. Websites come in many shapes and forms; as a result, web scrapers vary in functionality and features.

If you want to find the best web scraper for your project, make sure to read on.

How do Web Scrapers Work?

Automated web scrapers work in a way that is simple in principle but complex in detail. After all, websites are built for humans to understand, not machines.

First, the web scraper is given one or more URLs to load before scraping. The scraper then loads the entire HTML code for the page in question. More advanced scrapers will render the entire website, including CSS and JavaScript elements.

Then the scraper will either extract all the data on the page or specific data selected by the user before the project is run.

Ideally, the user will go through the process of selecting the specific data they want from the page. For example, you might want to scrape an Amazon product page for prices and models but are not necessarily interested in product reviews.

Lastly, the web scraper will output all the data that has been collected into a format that is more useful to the user.

Most web scrapers will output data to a CSV or Excel spreadsheet, while more advanced scrapers will support other formats such as JSON which can be used for an API.
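The steps above can be sketched in a few lines of Python using only the standard library. The HTML, class names, and fields here are made up for illustration; a real scraper would first load the page from a URL (e.g. with urllib.request) rather than from an inline string:

```python
# A minimal sketch of the scraper pipeline: load HTML, extract the fields
# the user selected (model and price), and export the result as CSV.
import csv
import io
from html.parser import HTMLParser

PAGE = ('<html><body>'
        '<div class="product"><span class="model">Widget A</span>'
        '<span class="price">$9.99</span></div>'
        '<div class="product"><span class="model">Widget B</span>'
        '<span class="price">$19.99</span></div>'
        '</body></html>')

class ProductParser(HTMLParser):
    """Collects text inside <span class="model"> and <span class="price">."""
    def __init__(self):
        super().__init__()
        self.field = None
        self.rows = []
    def handle_starttag(self, tag, attrs):
        cls = dict(attrs).get("class")
        if tag == "span" and cls in ("model", "price"):
            self.field = cls
    def handle_data(self, data):
        if self.field == "model":
            self.rows.append({"model": data, "price": None})
        elif self.field == "price":
            self.rows[-1]["price"] = data
        self.field = None

parser = ProductParser()
parser.feed(PAGE)

# Export the selected data to CSV, the most common output format.
out = io.StringIO()
writer = csv.DictWriter(out, fieldnames=["model", "price"])
writer.writeheader()
writer.writerows(parser.rows)
print(out.getvalue().strip())
```

Production scrapers typically use dedicated parsing libraries and handle rendering, retries, and pagination, but the load, extract, and export pipeline remains the same.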

What Kind of Web Scrapers are There?

Web scrapers can drastically differ from each other on a case-by-case basis.

For simplicity’s sake, we will break down some of these aspects into 4 categories. Of course, there are more intricacies at play when comparing web scrapers.

  • Self-built or pre-built
  • Browser extension vs software
  • User interface
  • Cloud vs local

Self-built or Pre-built

Just like how anyone can build a website, anyone can build their own web scraper.

However, the tools available to build your own web scraper still require some advanced programming knowledge. The scope of this knowledge also increases with the number of features you’d like your scraper to have.

On the other hand, there are numerous pre-built web scrapers that you can download and run right away. Some of these will also have advanced options added such as scrape scheduling, JSON and Google Sheets exports and more.

Browser extension vs Software

In general terms, web scrapers come in two forms: browser extensions or computer software.

Browser extensions are app-like programs that can be added onto your browser such as Google Chrome or Firefox. Some popular browser extensions include themes, ad blockers, messaging extensions and more.

Web scraping extensions have the benefit of being simpler to run and being integrated right into your browser.

However, these extensions are usually limited by living in your browser, meaning that any advanced feature that would have to occur outside of the browser would be impossible to implement. For example, IP rotation is not possible in this kind of extension.

On the other hand, you will have actual web scraping software that can be downloaded and installed on your computer. While these are a bit less convenient than browser extensions, they make up for it in advanced features that are not limited by what your browser can and cannot do.

User Interface

User interfaces can vary quite drastically between web scrapers.

For example, some web scraping tools will run with a minimal UI and a command line. Some users might find this unintuitive or confusing.

On the other hand, some web scrapers will have a full-fledged UI where the website is fully rendered for the user to just click on the data they want to scrape. These web scrapers are usually easier to work with for most people with limited technical knowledge.

Some scrapers will go as far as integrating help tips and suggestions through their UI to make sure the user understands each feature that the software offers.

Cloud vs Local

From where does your web scraper actually do its job?

Local web scrapers will run on your computer using its resources and internet connection. This means that if your web scraper has a high usage of CPU or RAM, your computer might become quite slow while your scrape runs. With long scraping tasks, this could put your computer out of commission for hours.

Additionally, if your scraper is set to run on a large number of URLs (such as product pages), it can have an impact on your ISP’s data caps.

Cloud-based web scrapers run on an off-site server which is usually provided by the company who developed the scraper itself. This means that your computer’s resources are freed up while your scraper runs and gathers data. You can then work on other tasks and be notified later once your scrape is ready to be exported.

This also allows for very easy integration of advanced features such as IP rotation, which can prevent your scraper from getting blocked from major websites due to their scraping activity.
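A minimal sketch of the IP-rotation idea, assuming a pool of placeholder proxy addresses; a real implementation would route each HTTP request through the chosen proxy (e.g. via urllib.request.ProxyHandler) instead of just recording the pairing:

```python
# Rotate each outgoing request through the next proxy in a pool, so no
# single IP address makes every call. Proxy addresses are placeholders.
from itertools import cycle

PROXY_POOL = cycle([
    "http://proxy-1.example.com:8080",
    "http://proxy-2.example.com:8080",
    "http://proxy-3.example.com:8080",
])

def fetch(url, proxy):
    # A real implementation would issue the HTTP request through `proxy`;
    # here we just record which proxy each URL was paired with.
    return (url, proxy)

requests_made = [fetch(f"https://example.com/page/{i}", next(PROXY_POOL))
                 for i in range(5)]
for url, proxy in requests_made:
    print(url, "via", proxy)
```

Note how the pool wraps around: the fourth request reuses the first proxy, spreading traffic evenly across the pool.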

What are Web Scrapers Used For?

By this point, you can probably think of several different ways in which web scrapers can be used. We’ve put some of the most common ones below (plus a few unique ones).

  • Scraping site data before a website migration
  • Scraping financial data for market research and insights

The list of things you can do with web scraping is almost endless. After all, it is all about what you can do with the data you’ve collected and how valuable you can make it.

Read our Beginner's guide to web scraping to start learning how to scrape any website!

The Best Web Scraper

So, now that you know the basics of web scraping, you’re probably wondering what is the best web scraper for you?

The obvious answer is that it depends.

The more you know about your scraping needs, the better idea you will have about what's the best web scraper for you. However, that did not stop us from writing our guide on what makes the Best Web Scraper.

Of course, we would always recommend ParseHub. Not only can it be downloaded for FREE, but it comes with an incredibly powerful suite of features, which we reviewed in this article, including a friendly UI, cloud-based scraping, awesome customer support and more.


Want to become an expert on Web Scraping for Free? Take our free web scraping courses and become Certified in Web Scraping today!