NGINX trickery
Learning objectives
- You know how to use NGINX as a cache.
- You know how to enable compression on NGINX.
When we consider our application, there's still some room for improvement. First, when we request static content, each request goes through two NGINX servers. The first acts as a reverse proxy and directs traffic to the appropriate services, while the second serves the static files generated with Astro. This is visible in the server logs, where each request produces log entries for both nginx and items-ui.
On the positive side, when we request content, we observe that the ETag and Last-Modified headers are present in the response. This means that the static files are cached on the client, and there's no need to re-download them on subsequent requests.
curl -v http://localhost:7800/
// ..
< HTTP/1.1 200 OK
// ..
< Content-Type: text/html
< Content-Length: 3363
< Connection: keep-alive
< Last-Modified: Thu, 16 Feb 2023 19:00:30 GMT
< ETag: "63f7b7ce-d23"
< Accept-Ranges: bytes
// ..
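We can verify the client-side caching behavior with a conditional request. In the sketch below, we pass the ETag value from the response above in an If-None-Match request header; when the ETag still matches, NGINX responds with 304 Not Modified and an empty body, and the client uses its cached copy. (The ETag value below is the one from the earlier response; yours may differ.)
curl -v http://localhost:7800/ -H 'If-None-Match: "63f7b7ce-d23"'
// ..
< HTTP/1.1 304 Not Modified
// ..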
The second place for improvement is hidden in plain sight in the first curl response above. The content type of the response is text/html, and there is no Content-Encoding header; that is, the response body is sent uncompressed. Most clients support working with compressed resources, so to save traffic, and to improve response times thanks to the saved traffic, we should also compress the responses.
Let's take our existing nginx.conf file in the nginx folder and use it as the starting point for a new nginx.prod.conf file. Presently, the file looks as follows.
worker_processes 1;

events {
  worker_connections 1024;
}

http {
  upstream items-api {
    server items-api:7777;
  }

  upstream items-ui {
    server items-ui:3000;
  }

  server {
    listen 7800;

    location /api/ {
      proxy_pass http://items-api/;
    }

    location / {
      proxy_pass http://items-ui;
      proxy_http_version 1.1;
      proxy_set_header Upgrade $http_upgrade;
      proxy_set_header Connection "Upgrade";
      proxy_set_header Host $host;
    }
  }
}
Adding cache directives
To add caching to NGINX, we first define a cache for our HTTP server. The proxy_cache_path directive defines the location of the cache on the NGINX server, as well as additional rules for the cache. Below, we create a cache that resides in /var/cache/nginx, has the name astro_ssg_cache, and uses up to 1 megabyte of shared memory for storing cache keys. Files are purged from the cache if they are not accessed within 5 minutes, and the maximum size of the cache is 512 megabytes.
# ..
http {
  proxy_cache_path /var/cache/nginx keys_zone=astro_ssg_cache:1m inactive=5m max_size=512m;
  # ..
Let's start by defining what not to cache. Calls to the path /api/ go to items-api and should not be cached. To avoid caching those requests, we add a Cache-Control header with the value no-store to every response, indicating that the response should not be stored in any cache. As the requests to /api/ are directed to items-api, and items-api does not add caching-related headers such as ETag, adding the extra no-store directive to every response should suffice. To accomplish this, the block for location /api/ is modified as follows.
location /api/ {
  proxy_pass http://items-api/;
  add_header Cache-Control 'no-store';
}
Even without the above no-store directive, the requests would not be cached on NGINX. However, when defining caches, thinking of what not to cache is important.
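Once the configuration is reloaded, we can check that the header is indeed present in API responses. The path /api/items below is just an example path; any path served by the items-api works for the check.
curl -v http://localhost:7800/api/items
# ..
< Cache-Control: no-store
# ..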
Once we've defined what not to cache, the next step is to define what to cache. In our case, the contents under / are static and should be cached. When adding caching, we use proxy_cache to define which cache to use (here, astro_ssg_cache), proxy_cache_key to define the key for the cache (here, $uri), and proxy_cache_valid to define how long the cached items should stay in the cache (here, 5 minutes).
With this, the block for location / would look as follows. As you may notice, we've also removed the configuration related to the WebSocket connection, as the static file server does not use WebSockets.
location / {
  proxy_pass http://items-ui;
  proxy_cache astro_ssg_cache;
  proxy_cache_key $uri;
  proxy_cache_valid 5m;
}
By default, as per proxy_cache_methods, only requests using GET and HEAD as the method are cached.
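If we want to see whether a given response came from the cache, one option is to expose NGINX's built-in $upstream_cache_status variable as a response header. This is a debugging aid rather than a part of the final configuration; the header name X-Cache-Status below is just a convention.
location / {
  proxy_pass http://items-ui;
  proxy_cache astro_ssg_cache;
  proxy_cache_key $uri;
  proxy_cache_valid 5m;
  # MISS on the first request, HIT once the response is in the cache
  add_header X-Cache-Status $upstream_cache_status;
}
With this in place, the first request to a page would show X-Cache-Status: MISS, while subsequent requests within the five-minute window would show X-Cache-Status: HIT.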
Now, when we make requests to the server and look at the server logs, we observe that the requests create log entries primarily from the nginx service but not from items-ui. This is exactly what we wanted.
Compressing contents
Although the statically generated content is now cached on the nginx service that acts as the frontend to all of our applications, the contents are still sent uncompressed. To enable compressing the response contents, we add the gzip directive to our server configuration and outline the types of contents that should be compressed using gzip_types. In addition, we define a minimum content size for compression using gzip_min_length to avoid compressing very small files (and potentially even creating compressed files that are larger than the originals).
With these configurations in place, the server block would start as follows.
server {
  gzip on;
  gzip_types text/css application/javascript application/json image/svg+xml;
  gzip_min_length 1000;
  # ...
Now, when restarting the application, we notice that the response contents are compressed, provided that the request includes an Accept-Encoding header that indicates the client can interpret compressed content. Note that responses with the text/html content type are compressed by default whenever gzip is on, which is why text/html does not need to be listed in gzip_types.
curl -v http://localhost:7800/ -H "Accept-Encoding: gzip"
# ..
< Content-Type: text/html
< Transfer-Encoding: chunked
< Connection: keep-alive
# ..
< Content-Encoding: gzip
# ..
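To get a rough idea of the savings, we can compare the number of bytes transferred with and without compression. The commands below are a quick sketch; the exact byte counts depend on the contents of the page.
curl -s http://localhost:7800/ | wc -c
curl -s http://localhost:7800/ -H "Accept-Encoding: gzip" | wc -c
The second command should print a clearly smaller number, as curl receives (and counts) the gzip-compressed body.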
Brotli and so on
Although we've used gzip in the above example, the Brotli compression algorithm would in general be a better choice. It is, however, not included in the default NGINX configuration, and for the purposes of these materials, we omit configuring it. To add Brotli support to NGINX, you would have to build the module yourself, as outlined e.g. at https://docs.nginx.com/nginx/admin-guide/dynamic-modules/brotli/.
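As a rough sketch, if the ngx_brotli modules were built and installed as dynamic modules, enabling Brotli would look roughly as follows. The directive names come from the ngx_brotli module; the module paths depend on how and where the modules were built.
# at the top of nginx.prod.conf, outside the http block
load_module modules/ngx_http_brotli_filter_module.so;
load_module modules/ngx_http_brotli_static_module.so;
# inside the server block, analogous to the gzip configuration
brotli on;
brotli_types text/css application/javascript application/json image/svg+xml;
brotli_min_length 1000;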