What are HTTP Headers?

Every internet user comes into contact with HTTP headers constantly, whether they know it or not. Web developers and other IT professionals know these headers well and interact with them daily.


Yet even if you have nothing to do with them professionally, HTTP headers travel with every website request you make online. Thus, it is worth finding out what HTTP headers are and how they work, as that knowledge helps in various web-related procedures.

Defining HTTP headers

The definition of the HTTP header starts with the first word. HTTP, or Hypertext Transfer Protocol, is defined as an application-level protocol for distributed, collaborative, hypermedia information systems in RFC 2616, published by the Internet Engineering Task Force in 1999. The protocol itself was initiated a decade earlier and developed throughout the first years of the World Wide Web.


Essentially, it is the part of the internet protocol suite that defines how information can be shared and accessed. It enables easy movement from one document to another via hyperlinks, which makes all of our lives so much easier when surfing the net.


As established by the protocol, the HTTP header is a structural part of the information received with every request and response made. In other words, every time a user requests data via the internet, there is also metadata being sent and received. 


The first line of this metadata is the request line, which specifies the URL of the request and the HTTP version being used. The remaining metadata is organized under HTTP headers that define its type and use. Let’s look at it a bit closer.


How are HTTP headers used?

The main function of HTTP headers is to ensure an informationally richer communication between the client and the server. On the client side, this is done by providing information that explains to the server what type of response is expected and can be handled. And on the server side, it is done by adding metadata that shows what happened with the request and why.


The information that is shared between the client and the server is structured and displayed in individual lines known as header fields. Each header field contains some metadata expressed as a name-value pair separated by a colon. An example of a single header field might look like this:


  • accept-encoding: gzip


This particular header field tells us that the client supports a particular method of compression, namely the “gzip” type. Similarly, each individual header field tells us something about the client, the user agent, the web server, or the handling of the request.
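To make the name-value structure concrete, here is a minimal Python sketch (the header values are illustrative) that renders a dictionary of header fields into the raw lines a client would send:

```python
# Header fields are name-value pairs separated by a colon.
# Sketch: serializing a dict of headers into the raw header lines
# as they appear on the wire; the values are illustrative.
headers = {
    "Accept-Encoding": "gzip",
    "Accept-Language": "en-us, en;q=0.5",
}

def to_header_lines(fields):
    """Render each name-value pair as a 'Name: value' line."""
    return [f"{name}: {value}" for name, value in fields.items()]

for line in to_header_lines(headers):
    print(line)
```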


Which header fields appear in a specific message depends on the type of information that is being shared. Depending on the types of data conveyed, the headers are usually grouped into four major categories. These groups are as follows:


  • Request headers. These are the headers that contain the relevant information about the client-side and define what is being requested.
  • Response headers. These HTTP headers, in turn, are filled with server-side information and the metadata about the handling of the request.
  • General headers hold the information that is relevant for both the requester side and the response.
  • Entity headers provide the metadata about the content of the message, for example, content length. These headers may be present in both request and response messages, but don’t have to be in either.

General headers contain such information as the URL of the request and the server or the IP address. The HTTP status code, which arrives in the status line alongside the response headers, also tells you how the request went. The status code you want to get when making a request is 200 OK, which means that everything went just as it should. Similarly, the request line specifies the request method. The three most common HTTP request methods are GET, POST, and HEAD.
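As a small illustration, the status line of a raw HTTP response can be split into its parts with plain Python; this is only a sketch, since real HTTP clients parse it for you:

```python
# Sketch: pulling the protocol version, status code, and reason
# phrase out of a status line such as the one a server returns.
status_line = "HTTP/1.1 200 OK"

version, code, reason = status_line.split(" ", 2)
print(version, code, reason)  # HTTP/1.1 200 OK
```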


The way HTTP headers are used for data collection depends highly on the particular header fields. Let’s look at the most common header types and what they mean.

Common HTTP header types

HTTP Request headers


Accept-language

This header informs the server about the preferred language of the returned response. An example of it could read like this: accept-language: en-us, en;q=0.5. Multiple languages can be specified, separated by commas.
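The q values in such a header express relative preference, defaulting to 1.0 when omitted. Here is a small Python sketch of parsing them, assuming only the simple comma-separated form shown above:

```python
# Sketch: parsing an accept-language value into (language, weight)
# pairs, sorted by preference. Assumes the simple ";q=" form only.
def parse_accept_language(value):
    langs = []
    for part in value.split(","):
        lang, _, q = part.strip().partition(";q=")
        langs.append((lang, float(q) if q else 1.0))
    return sorted(langs, key=lambda pair: pair[1], reverse=True)

print(parse_accept_language("en-us, en;q=0.5"))
# [('en-us', 1.0), ('en', 0.5)]
```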


Accept-encoding

An example of this header was given above as accept-encoding: gzip. More than one accepted type of encoding can be given, with each kind separated by commas. In that case, the header would read, for example, accept-encoding: gzip, deflate.


User-agent

This HTTP header informs the server about the characteristics of the user agent, meaning the application used to access the server. It can also specify the operating system, its publisher, and the versions used.


Accept-charset

To make any document readable for a device, it first has to have a predefined set of characters that it can understand. The accept-charset header informs the server which character sets can be accepted by the device on which the user agent runs. An example of this header could read: accept-charset: utf-8. As UTF-8 is the most common character encoding, you are likely to see it often in the accept-charset header field.


Authorization

Authentication credentials are provided under this HTTP request header. In the common Basic scheme, the credentials are encoded with the base64 encoding scheme.
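For the Basic scheme, the header value is simply the word "Basic" followed by the base64 encoding of "user:password". A sketch in Python, with placeholder credentials:

```python
import base64

# Sketch of building a Basic authorization header value.
# "alice"/"secret" are placeholder credentials, not real data.
def basic_auth_header(user, password):
    token = base64.b64encode(f"{user}:{password}".encode()).decode()
    return f"Basic {token}"

print(basic_auth_header("alice", "secret"))
# Basic YWxpY2U6c2VjcmV0
```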


Proxy-authorization

This HTTP request header is used to authenticate the client to a proxy server. After the header name, the syntax of this header is the same as that of the authorization header.


Cookie

This header is used to send previously received HTTP cookies back to the server. The browser can, however, block cookies from being sent.
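A client that does send cookies joins the stored name-value pairs with semicolons. A minimal sketch with made-up cookie values:

```python
# Sketch: serializing stored cookies into a Cookie request header,
# as a browser would when sending them back. Values are made up.
cookies = {"session_id": "abc123", "theme": "dark"}

cookie_header = "; ".join(f"{k}={v}" for k, v in cookies.items())
print(f"Cookie: {cookie_header}")
# Cookie: session_id=abc123; theme=dark
```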


HTTP Response headers


Content-encoding

This HTTP response header specifies the compression method or methods used for the response body. Similar to the accept-encoding request header mentioned earlier, it can look like this: content-encoding: gzip, deflate.


Content-length

The web server indicates the size of the response body with the content-length header. The size is given in bytes and can read like this: content-length: 728.


Cache-control

The cache-control header is used to give directives that have to be obeyed by any caching mechanism used to cache information within the request-response chain. Proxies are included among these caching mechanisms.


The header reads, for example, cache-control: max-age=6000, public. This means that the cached response is valid for 6000 seconds and the data is public, thus it can be cached by anyone. Caching can also be restricted: cache-control: no-cache forces caches to revalidate with the server before reusing a response, while no-store prevents the response from being stored at all.
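Cache-control directives are comma-separated, with some carrying an "=value" part. A small Python sketch of splitting such a value into its individual directives:

```python
# Sketch: splitting a cache-control value into individual
# directives; bare directives like "public" map to True.
def parse_cache_control(value):
    directives = {}
    for part in value.split(","):
        part = part.strip()
        name, sep, val = part.partition("=")
        directives[name] = val if sep else True
    return directives

print(parse_cache_control("max-age=6000, public"))
# {'max-age': '6000', 'public': True}
```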


Date

This header specifies the exact time the response was generated, including the time zone. For example: date: Mon, 09 May 2022 15:43:12 GMT.


Expires

This header sets the date after which the content is no longer considered fresh. The syntax is similar to that of the date header: expires: Fri, 13 May 2022 12:00:00 GMT.


Content-type

This HTTP response header is used to specify the media or MIME type of the document returned. This allows the browser to interpret the document correctly.


An example of this HTTP header could be: content-type: text/html. Here, the part before the slash tells us that it is a text document, while the part after it specifies the type further as HTML.
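Splitting such a value into its media type and any optional parameters (a charset, for example) takes only a few lines; this sketch assumes the simple "type/subtype; params" form:

```python
# Sketch: splitting a content-type value into its main type,
# subtype, and any trailing parameters such as charset.
def parse_content_type(value):
    media_type, _, params = value.partition(";")
    main, _, sub = media_type.strip().partition("/")
    return main, sub, params.strip()

print(parse_content_type("text/html; charset=utf-8"))
# ('text', 'html', 'charset=utf-8')
```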


Vary

This header informs which header fields can cause varying results when the information is cached. For example: vary: user-agent. In this case, we know that the server holds a few different versions of the resource, and the version returned depends on the user agent used.

HTTP Headers and web scraping

One of the most important areas where HTTP headers can be used to your advantage is web scraping. Companies utilize web scraping to collect various business-relevant data from publicly available online sources. It makes data collection and business intelligence much easier and more efficient. Therefore, any way to improve and optimize web scraping is considered worthwhile by data-gathering businesses.


There are two major ways in which headers matter for web scraping. Firstly, they help define the kind of information to be scraped. Specifying in your request headers what you are after lets you find it faster and ensures the relevance of the scraping results. And when the headers are optimized for particular HTTP requests, the returned data will be structured as close to your preferred form as possible.


The second way HTTP headers help web scraping might be even more important: they hide your tracks and can prevent your web scraper from getting banned. Companies and other owners of the content you want to scrape might want to prevent scraping due to competition and other reasons. Thus, they try to recognize scraping activity and block the IP addresses they find suspicious.


There are, of course, efficient ways to work around these blocks, like using rotating proxies. However, you can also increase the security of your web scraping and avoid detection by setting up your request headers in a particular way.


One of the ways servers recognize web scraping is by spotting suspiciously repetitive headers across multiple HTTP requests. Thus, to make sure that your activity stays hidden, you need to know which headers have to be altered to keep you undercover. Here are some of the most important headers to work on for more optimal web scraping:


  • The user-agent header lets the server recognize what type of application is used. The server might deem it suspicious that a lot of requests are coming from the exact same user agent and identify it as a web scraper. Altering the information in the user-agent header can help disguise it from the web server.
  • An important HTTP header for web scraping that we have not mentioned before is referer. This header informs the server about the URL of the previous website you visited, from which the request is made. It can look like this: referer: http://www.google.com. Since an organic internet user visits many different websites every day, to mimic such a user you need to keep changing your referer header when web scraping. This presents your browsing history as that of a real person instead of always coming from the same previous webpage.
  • The accept HTTP request header is used to specify the type of content that is accepted by the requester, making it the counterpart of the content-type response header. Configuring the accept header to match the server’s offered content type, and to inform it what types of information can be processed, ensures faster and more organic communication between the server and the web scraper acting as a client.
  • The accept-language HTTP header should be made to match the location of the client’s IP address. If multiple languages are used for different requests from the same location, the web server might suspect bot-like activity and ban the address. The accept-language header matters even more when using proxies, to ensure that the accepted language and the proxy location are not in suspicious contrast.
  • The accept-encoding header can be used to reduce the traffic on the server. When the web scraper specifies that it will accept the data compressed in one or all of the ways the server offers, it gets the same data faster. Additionally, it does not overload the server with traffic, thus reducing the chances of getting blocked.
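The advice above can be sketched as a small header-building helper that varies its values between requests. This is a hedged illustration: the user-agent strings and referer URLs are sample values, not recommendations for any particular target.

```python
import random

# Sketch: rotating header values so consecutive requests do not
# share an identical fingerprint. The user-agent strings and
# referer URLs below are illustrative samples.
USER_AGENTS = [
    "Mozilla/5.0 (Windows NT 10.0; Win64; x64; rv:110.0) Gecko/20100101 Firefox/110.0",
    "Mozilla/5.0 (Macintosh; Intel Mac OS X 10_15_7) AppleWebKit/605.1.15 (KHTML, like Gecko) Version/16.3 Safari/605.1.15",
]
REFERERS = ["https://www.google.com", "https://www.bing.com"]

def build_headers():
    """Pick a fresh user-agent/referer pair for each request."""
    return {
        "User-Agent": random.choice(USER_AGENTS),
        "Referer": random.choice(REFERERS),
        "Accept-Language": "en-us, en;q=0.5",
        "Accept-Encoding": "gzip, deflate",
    }

print(build_headers())
```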


You can not only use the above examples directly but also adapt them to figure out how to configure other request headers for more efficient web scraping. This will make the scraping procedure seem more like organic traffic, meaning you are less likely to get banned and more likely to get the data you need without avoidable hiccups.

Summing up

With every online request for data, there also travels metadata arranged under HTTP headers. These headers help to ensure smooth and organic communication between the client and the server.


Knowing what HTTP headers mean and how they work allows you to make use of them for various data collection and inspection purposes. Web scraping is among the most important of these purposes, and it is made better by correctly configuring the HTTP headers.
