Astro production build
Learning objectives
- You know how to create a production configuration for Astro.
- You know the principle of multi-stage builds in Docker.
Let's next look into our Dockerfile for items-ui that uses Astro. The Dockerfile defines how the Astro application should be run. As we see from the Dockerfile, it is run using Astro's dev command.
FROM node:lts-alpine3.17
EXPOSE 3000
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY *.json ./
RUN npm install
COPY . .
CMD [ "astro", "dev" ]
When we study the network traffic of a running application, we observe quite a few requests. Network traffic of the items application, recorded using Chrome's Network tab, is shown in Figure 1.

As we recall, Astro can be used as a static site generator, although it also works in a hybrid mode. For the application that is used for listing and working with items, there is no need for server-side rendering, and Astro can be used directly as a static site generator. To accomplish this, the production configuration should use Astro's static site generation capabilities and then run a server that serves the static files.
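As a sketch, the static output mode can also be made explicit in the project's astro.config.mjs. Note that static output is Astro's default, so this setting is optional rather than required:

```javascript
// astro.config.mjs -- making the static output mode explicit
// ('static' is Astro's default output mode, so this is optional)
import { defineConfig } from 'astro/config';

export default defineConfig({
  output: 'static',
});
```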
Docker multi-stage builds
Multi-stage builds in Docker allow creating a Dockerfile that builds the application in multiple parts, pulling in images during the build process to start new stages, and copying content to the new stages from prior stages.
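As a minimal sketch of the idea (the stage names and paths here are illustrative), each FROM starts a new stage, and COPY --from pulls artifacts out of an earlier one:

```dockerfile
# Stage 1: named "build"; its filesystem is discarded from the final image
FROM node:lts-alpine3.17 as build
# ... build steps that produce artifacts, e.g. under /app/dist ...

# Stage 2: starts fresh from another base image
FROM nginx:latest
# Copy artifacts from the earlier stage, referring to it by name
COPY --from=build /app/dist /usr/share/nginx/html
```

Only the final stage ends up in the resulting image, which keeps build-time dependencies such as node_modules out of the production image.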
Docker's multi-stage builds fit our present problem nicely. We first wish to build the items-ui with Astro's build command that is used to create the contents intended for production. To achieve this, we take our previous Dockerfile for the items-ui and modify it a bit. First, we label the first build stage starting from retrieving the node:lts-alpine3.17 image as build. This is followed by the same steps that we have taken previously, but instead of starting a development server, we use the RUN command to run Astro's build command.
FROM node:lts-alpine3.17 as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY *.json ./
RUN npm install
COPY . .
RUN astro build
Running the above Dockerfile leads to the user interface being created as a static site under a folder dist under /app, i.e. into /app/dist.
In the next stage of the build process, we retrieve an image for a web server that can be used for serving the static files, copy the files into the server, and provide the commands needed to start the server. As we already use NGINX to direct traffic to the services in our application, we could rely on NGINX for serving the static files as well. However, as NGINX by default listens on port 80, we need to adjust its configuration to use the port 3000.
One possibility is to create a separate NGINX configuration file, say items-ui.nginx.prod.conf, and override the default NGINX configuration with it during the build process. The configuration would be used to start an HTTP server on port 3000 that would be used to serve files. The configuration would be as follows.
worker_processes 1;

events {
  worker_connections 128;
}

http {
  include mime.types;

  server {
    root /usr/share/nginx/html;
    listen 3000;

    sendfile on;
    tcp_nopush on;

    location / {
    }
  }
}
If we were to place the above configuration into items-ui.nginx.prod.conf under the folder items-ui, the configuration for the second stage would be as follows. Here, we would import the image nginx:latest and call the build stage server. We would expose the port 3000 from the server, and copy our NGINX configuration on top of the default NGINX configuration. Finally, we would copy the contents from the /app/dist folder of the build stage to the folder /usr/share/nginx/html in the present image, and provide the commands used to start the server.
FROM nginx:latest as server
EXPOSE 3000
COPY items-ui.nginx.prod.conf /etc/nginx/nginx.conf
COPY --from=build /app/dist /usr/share/nginx/html
CMD ["nginx", "-g", "daemon off;"]
The above approach would work nicely.
Another possibility would be to modify the existing configuration within the NGINX image. When NGINX is started, the default configuration at /etc/nginx/nginx.conf loads server details from a configuration file at /etc/nginx/conf.d/default.conf. Among other things, the file at /etc/nginx/conf.d/default.conf specifies that NGINX should listen on port 80.
Using sed, a command-line stream editor, we could change the contents of /etc/nginx/conf.d/default.conf to use the port 3000 instead of the port 80. In this case, the second build stage would look as follows.
FROM nginx:latest as server
EXPOSE 3000
RUN sed -i "s/80/3000/g" /etc/nginx/conf.d/default.conf
COPY --from=build /app/dist /usr/share/nginx/html
CMD ["nginx", "-g", "daemon off;"]
Altogether, if we pick the latter option, our Dockerfile.prod under the items-ui folder would look as follows.
FROM node:lts-alpine3.17 as build
WORKDIR /app
ENV PATH /app/node_modules/.bin:$PATH
COPY *.json ./
RUN npm install
COPY . .
RUN astro build
FROM nginx:latest as server
EXPOSE 3000
RUN sed -i "s/80/3000/g" /etc/nginx/conf.d/default.conf
COPY --from=build /app/dist /usr/share/nginx/html
CMD ["nginx", "-g", "daemon off;"]
Finally, we would also need to modify the Docker Compose file intended for production to account for the new changes in the items-ui. First, we would need to change the image name and point it to the Dockerfile.prod. In addition, we would likely wish to have a restart policy -- the restart on-failure that we used for the items-api would work here nicely.
With the modifications, the items-ui configuration in the docker-compose.prod.yml would look as follows.
# ...
  items-ui:
    build:
      context: items-ui
      dockerfile: Dockerfile.prod
    image: items-ui-prod
    restart: "on-failure"
    ports:
      - 3000:3000
    depends_on:
      - items-api
    deploy:
      restart_policy:
        condition: on-failure
        delay: "5s"
        max_attempts: 5
        window: "30s"
# ...
Now, when we restart our application, we see that the traffic from the server looks quite a bit different, as shown in Figure 2.
