Saving some traffic on our server in Amazon EC2

Let's continue experimenting with our server in the Amazon EC2 cloud. If you are not yet sure what this is all about, first read the two previous articles (the first and the second). Points described in detail earlier will be omitted here; if in doubt, do not be lazy to reread the material already published. This note will show how to save a bit of traffic, and with it a certain amount of nerves, when working with our server in the Amazon EC2 cloud. But first you need to decide whether these additional settings are really necessary. The situation is twofold. On the one hand, Amazon EC2, like any decent cloud service, counts literally every move a user makes: it closely meters the consumption of every server resource, which in our case is by no means unlimited. The tightest restrictions concern the amount of traffic transferred and the number of disk operations; the remaining resources are allotted with a much larger margin. On the other hand, it is hard to predict how much traffic these measures will actually save, and it may easily turn out that the effort spent exceeds the potential benefit. So it's up to you.

How can we reduce traffic consumption? In our case, the simplest option is to install a proxy server, or rather two: a caching one and a compressing one. We will work only with data transmitted over HTTP on port 80. First, this kind of traffic dominates ordinary web browsing. Second, caching or compressing other data (multimedia, for example) is in our case either pointless or too costly. Third, not all programs work correctly even through transparent proxies. The downside of this approach is a slight increase in page-load delay, plus the fact that some particularly picky web services refuse to serve visitors who come through a proxy.

However, in our case the whole system can be configured flexibly. For example, you can drop the caching proxy entirely. Or you can arrange things so that traffic over an IP-over-DNS connection passes through the proxy chain (at least through the compressing one), while over an ordinary VPN (PPTP) it goes directly. A proxy chain can also be useful if you have a slow connection to the Web or pay for traffic by the megabyte. As the caching proxy we will use a classic of the genre, namely Squid; for compression, Ziproxy is quite suitable. The scheme for connecting to the Web will thus look like this: web browser ↔ Ziproxy ↔ Squid ↔ Internet. All other traffic will bypass the proxy chain.

⇡ # Configuring Squid

First, install and configure Squid. To do this, open the remote server's console in PuTTY and enter the command

 sudo apt-get install squid 

and immediately, just in case, stop its daemon:

 sudo /etc/init.d/squid stop 

Now we need to make a backup copy of the original configuration file with the command

 sudo cp /etc/squid/squid.conf /home/ubuntu/squid.conf.backup

That done, we can proceed directly to configuring the proxy server:

 sudo nano /etc/squid/squid.conf

The Squid configuration file is a huge slab of solid text weighing about 200 kilobytes. One could write an entire thesis on configuring this proxy server, and covering every aspect of fine-tuning Squid would take a scientific work in three volumes. Seriously though, there is nothing terrifying in this voluminous file: about ninety percent of it consists of detailed comments with examples for every parameter described. In our case it is enough to edit the standard (recommended) Squid configuration slightly, adding a couple of changes to it.
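To see only the lines that actually take effect in such a heavily commented file, a common trick is to filter out comments and blank lines with grep. A small sketch, shown on a hypothetical sample file rather than the real squid.conf:

```shell
# Build a tiny sample config in the style of squid.conf ...
cat > /tmp/sample.conf <<'EOF'
# TAG: http_port
#  Usage: port [options]
#Default: 3128

http_port 3128 transparent
EOF

# ... and print only the active (non-comment, non-blank) lines.
grep -vE '^[[:space:]]*(#|$)' /tmp/sample.conf
# -> http_port 3128 transparent
```

The same filter applied to the full /etc/squid/squid.conf is a quick way to review what your finished configuration really says.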

 acl all src all
acl manager proto cache_object
acl localhost src 127.0.0.1/32
acl to_localhost dst 127.0.0.0/8
acl localnet src 10.0.0.0/8 # RFC1918 possible internal network
acl localnet src 172.16.0.0/12 # RFC1918 possible internal network
acl localnet src 192.168.0.0/16 # RFC1918 possible internal network
acl SSL_ports port 443 # https
acl SSL_ports port 563 # snews
acl SSL_ports port 873 # rsync
acl Safe_ports port 80 # http
acl Safe_ports port 21 # ftp
acl Safe_ports port 443 # https
acl Safe_ports port 70 # gopher
acl Safe_ports port 210 # wais
acl Safe_ports port 1025-65535 # unregistered ports
acl Safe_ports port 280 # http-mgmt
acl Safe_ports port 488 # gss-http
acl Safe_ports port 591 # filemaker
acl Safe_ports port 777 # multiling http
acl Safe_ports port 631 # cups
acl Safe_ports port 873 # rsync
acl Safe_ports port 901 # SWAT
acl purge method PURGE
http_access allow manager localhost
http_access deny manager
http_access allow purge localhost
http_access deny purge
http_access deny !Safe_ports
http_access deny CONNECT !SSL_ports
http_access allow localhost
http_access allow localnet
http_access deny all
icp_access allow localnet
icp_access deny all
http_port 127.0.0.1:8080 transparent
hierarchy_stoplist cgi-bin ?
access_log /var/log/squid/access.log squid
refresh_pattern ^ftp: 1440 20% 10080
refresh_pattern ^gopher: 1440 0% 1440
refresh_pattern -i (/cgi-bin/|\?) 0 0% 0
refresh_pattern (Release|Package(.gz)*)$ 0 20% 2880
refresh_pattern . 0 20% 4320
acl shoutcast rep_header X-HTTP09-First-Line ^ICY.[0-9]
upgrade_http0.9 deny shoutcast
acl apache rep_header Server ^Apache
broken_vary_encoding allow apache
hosts_file /etc/hosts
coredump_dir /var/spool/squid

Squid is built around so-called access control lists (ACLs). The description of each list begins with the acl directive, followed by the list name, its type and a value. Access rules, such as http_access, are then applied to the lists. Each rule starts with its type name, followed by an action (allow or deny, for example), its parameters and the object (a list or something else) it applies to. There are also simple parameters, such as http_port. All lists, rules and parameters are read and applied in the order in which they are written: first come the permissive rules, then everything else is forbidden. The ideology looks complicated at first glance but is in fact fairly simple.
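As a minimal sketch of this ideology (the list name and subnet below are invented for illustration, not taken from the stock file):

```
# Define a list: acl <name> <type> <value>
acl office src 192.0.2.0/24      # hypothetical client subnet
# Rules are applied top-down and the first match wins,
# so the permissive rule comes before the final deny.
http_access allow office
http_access deny all
```

Swap the two http_access lines and every request is denied, because the blanket deny matches first — this is exactly why ordering matters in squid.conf.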

By default, Squid allows HTTP access only from the machine on which it is installed. To fix this, add the line "http_access allow localnet" before the line "http_access deny all". This allows access to the server from IPs on the localnet list. Since we know we will connect to the server via PPTP or IP-over-DNS, we can edit this list, adding only the subnets we need and commenting out (or removing) the unnecessary ones:

 #acl localnet src 10.0.0.0/8 # disable access 
 acl localnet src <iodine-subnet> # iodine subnet from our example 
 acl localnet src <vpn-subnet> # vpn subnet from our example 

This step is optional — the standard configuration will also do. But the following parameter must be corrected:

 http_port 127.0.0.1:8080 transparent 

This tells the server which interface and port to accept requests on, and of what type. In our case that is IP 127.0.0.1 (access only from the local machine), port 8080, type transparent. If you do not plan to put a Ziproxy server in front of the caching proxy, this parameter is better brought to the following form:

 http_port 3128 transparent 

Also, after installing and debugging Squid, it is useful to replace the line

 access_log /var/log/squid/access.log squid 

with

 access_log none 

This disables Squid logging, which means the I/O limit will not be spent on it, although from a security standpoint this is not the best option. At the end of the article you will find a link to an archive with the ready-made configuration files from our example and instructions for copying them to the server. If you still decide to edit the file yourself in nano, here is a little tip: to find the desired directive, press Ctrl+W, type TAG: parameter_name (for example, TAG: http_port) and press Enter, then scroll down past the description of the corresponding parameter and make your changes. Do not forget to save the file after editing. If you have the desire and confidence in your abilities, you can search the Web for instructions on fine-tuning the Squid cache and try it out. Just do it carefully.

⇡ # Setting up Ziproxy

Ziproxy receives a request from the client, fetches the data from the requested site, compresses it on the fly with gzip and hands it back in compressed form. Virtually all modern browsers support gzip compression. Install Ziproxy, immediately stop its daemon, and save the original configuration file while you are at it:
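To get a feel for what this gzip pass can save on text content, here is a quick local sketch (file names are arbitrary, and real savings depend on how repetitive the page is):

```shell
# Generate a repetitive HTML-like sample and compare raw vs gzipped size.
yes '<p>Hello from the EC2 proxy!</p>' | head -n 200 > /tmp/sample.html
raw=$(wc -c < /tmp/sample.html)
gzip -9 -c /tmp/sample.html > /tmp/sample.html.gz
packed=$(wc -c < /tmp/sample.html.gz)
echo "raw=${raw} bytes, gzipped=${packed} bytes"
```

On markup-heavy pages the ratio is typically several-fold; already-compressed media such as JPEG or video gains nothing, which is why only text-like types are routed through this path.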

 sudo apt-get install ziproxy 
 sudo /etc/init.d/ziproxy stop 
 sudo cp /etc/ziproxy/ziproxy.conf /home/ubuntu/ziproxy.conf.backup

You can edit the configuration file with the usual command

 sudo nano /etc/ziproxy/ziproxy.conf

The settings should be brought to roughly the following form (as before, a shortened version of the file is given, without comments and other clutter):

 Port = 3128
NextProxy = "127.0.0.1"
NextPort = 8080
TransparentProxy = true
OverrideAcceptEncoding = true
ZiproxyTimeout = 90
MaxSize = 1048576
UseContentLength = false
Gzip = true
Compressible = {
"shockwave", "msword", "msexcel", "mspowerpoint", "rtf", "postscript",
"java", "javascript", "staroffice", "vnd", "futuresplash",
"asp", "class", "font", "truetype-font", "php", "cgi", "executable",
"shellscript", "perl", "python", "awk", "dvi", "css",
"xhtml+xml", "rss+xml", "xml", "pdf", "tar" }
ProcessJPG = false
ProcessPNG = false
ProcessGIF = true
ProcessHTML = true
ProcessHTML_CSS = true
ProcessHTML_JS = false
ProcessHTML_tags = true
ProcessHTML_text = true
ProcessHTML_PRE = true
ProcessHTML_NoComments = true
ProcessHTML_TEXTAREA = true
ImageQuality = {90,75,60,45} 

Let's go over some of the parameters. Port specifies the port on which requests will be received. NextProxy and NextPort take the IP address and port of the upstream proxy server — in our example that is Squid (see its http_port option). If you do not plan to put Squid or any other proxy after Ziproxy, these lines should be commented out (put # at the beginning). The next two options are best left alone. ZiproxyTimeout is the time, in seconds, to wait for a response from a remote server. This parameter can be left as is, or reduced — say, to 60 seconds: a server that has not responded within a minute is unlikely to respond within a minute and a half. MaxSize sets the maximum size, in bytes, of a file to be processed. You can experiment with its value, but there is one catch. If you reduce it too much, the server will start passing files such as images through without compression; if you overdo it in the other direction, waiting times will grow, because Ziproxy first has to download the entire file, compress it, and only then hand it to the client.
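The default MaxSize is easier to reason about in round units: 1048576 bytes is exactly 1 MiB. A quick sanity check of the arithmetic (the 512 KiB figure is just an example of a lower alternative, not a recommendation):

```shell
echo $((1024 * 1024))   # 1 MiB in bytes -> 1048576
echo $((512 * 1024))    # a lower alternative, 512 KiB -> 524288
```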

The next few parameters we leave alone and move on to the Process* group. Their names make clear what the server is and is not allowed to compress. ProcessHTML* is an experimental set of options: if page layout ends up "broken", they should be disabled. ImageQuality specifies the quality, in percent, to which JPG and PNG images will be compressed, from best to worst. This file will also be attached to the article, but you can edit it by hand. Only the basic traffic-compression options are given above; Ziproxy also has a number of experimental features you can enable, but they do not always work correctly and reliably.

Save the file after editing and start the proxies:

 sudo /etc/init.d/squid start 
 sudo /etc/init.d/ziproxy start 

If later you need to fix something in the settings, then do not forget to restart the daemons after that:

 sudo /etc/init.d/service_name restart 

⇡ # Routing configuration

We need to make sure that all requests to port 80 go through the proxy servers, while the rest of the data passes by:

 sudo iptables -t nat -A PREROUTING -s <vpn-subnet> -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3128
 sudo iptables -t nat -A POSTROUTING -s <vpn-subnet> -j MASQUERADE 
 sudo iptables -t nat -A PREROUTING -s <iodine-subnet> -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3128
 sudo iptables -t nat -A POSTROUTING -s <iodine-subnet> -j MASQUERADE 

The routing can be configured differently. If you do not want the traffic compressed, specify port 8080 instead of 3128 (see the Squid configuration above for that case). The two subnets here are the VPN and IP-over-DNS subnets from the previous articles. If, for example, you have no desire to run the proxy chain when connecting via PPTP, simply skip the first two lines. Also, do not forget to add and save these lines in /etc/rc.local with the command

 sudo nano /etc/rc.local
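Inside the file, the rules must go before the closing exit 0 line. A sketch of what /etc/rc.local might end up looking like (the angle-bracket subnets are placeholders for the VPN and iodine subnets from your own setup; sudo is unnecessary here because rc.local already runs as root):

```
#!/bin/sh -e
# Send port-80 traffic from both tunnel subnets into the proxy chain
# and NAT everything else out directly.
iptables -t nat -A PREROUTING -s <vpn-subnet> -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3128
iptables -t nat -A POSTROUTING -s <vpn-subnet> -j MASQUERADE
iptables -t nat -A PREROUTING -s <iodine-subnet> -p tcp -m tcp --dport 80 -j REDIRECT --to-ports 3128
iptables -t nat -A POSTROUTING -s <iodine-subnet> -j MASQUERADE
exit 0
```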

If later you need to correct the routing settings, you will have to remove the corresponding lines from /etc/rc.local and execute the following commands:

 sudo iptables -F 
 sudo iptables -F -t nat 
 sudo /etc/rc.local

⇡ # Uploading files to the server

Download WinSCP, install and run it. Click the New button on the right to open the new-connection dialog. In the Host name field, specify the DynDNS name of the server (in our example it was amazec2.dyndns-ip.com), leave Port number equal to 22, enter ubuntu in User name, and select our ppk key file in Private key file. Protocol is configured as in the screenshot below. Now click Save, pick any name for the connection, save it and click Login.

Download the archive with the sample configuration files, unpack it and copy them to the /home/ubuntu/ folder on the remote server. Then execute the following commands in the console:

 sudo /etc/init.d/ziproxy stop 
 sudo /etc/init.d/squid stop 
 sudo chown root:root /home/ubuntu/squid.conf
 sudo chown root:root /home/ubuntu/ziproxy.conf
 sudo cp /home/ubuntu/squid.conf /etc/squid/squid.conf
 sudo cp /home/ubuntu/ziproxy.conf /etc/ziproxy/ziproxy.conf
 sudo /etc/init.d/squid start 
 sudo /etc/init.d/ziproxy start 

Go back to "Routing configuration" (see above) and follow the instructions. All the proxy servers are now ready to go. Try working in this mode; if something does not suit you, tweak the settings as in the examples above. As a last resort, you can get rid of Squid and Ziproxy with the commands

 sudo apt-get remove ziproxy 
 sudo apt-get remove squid 

⇡ # Instead of a postscript

In the course of working with the server, one small nuisance came to light: in a couple of days the authorization log (/var/log/auth.log) grew to an incredible size. A cursory look at the log revealed the following picture:

Some cheerful comrades were trying, by banal brute force (enumerating logins and passwords), to gain access via SSH. Not that this is a rare situation — on the contrary, it happens all the time — but the persistence and, let us say, short-sightedness of these comrades was surprising. We are not particularly threatened, since SSH authorization here is done with a key, but the fact is unpleasant all the same. They did not eat up much traffic, yet every failed authorization added entries to the log file, and with them disk operations. To avoid such situations in the future, a couple of simple actions is enough:

 sudo apt-get install fail2ban 
 sudo cp /etc/fail2ban/jail.conf /etc/fail2ban/jail.local
 sudo /etc/init.d/fail2ban start 

Fail2ban is a daemon that tracks authorization attempts for various system services. If the allotted number of attempts is exceeded, it automatically bans the offender. The default settings are sufficient, but you can also change the parameters in the /etc/fail2ban/jail.local file; the syntax there is very simple.
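For orientation, the SSH jail in that file looks roughly like this on Ubuntu installs of this vintage (treat the exact values as an assumption and check your own jail.local before relying on them):

```
[ssh]
enabled  = true
port     = ssh
filter   = sshd
logpath  = /var/log/auth.log
maxretry = 6
```

Lowering maxretry tightens the screws; logpath points at the very auth.log whose growth prompted this whole section.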
