Walking Skeleton
Learning Objectives
- You know how to set up a walking skeleton project with server-side and client-side functionality.
- You know how to add performance testing to a project using k6.
- You know how to interpret performance metrics from k6.
Here, we set up the starting point of a walking skeleton that we use in the subsequent parts. The walking skeleton consists of a Docker project with a server-side service using Deno, a client-side application using Astro, and a setup for measuring application performance.
For this part, you need: (1) Docker and Docker Compose installed on your computer, (2) up-to-date Deno, and (3) some familiarity with setting up a development environment. For Windows users, WSL is recommended.
This course assumes that you are somewhat familiar with setting up a development environment and using Docker. If you are not, work through the part on Setting up a Walking Skeleton from the Web Software Development course before continuing.
Setup project and add server-side functionality
First, follow the guidelines at Deno Server-side Application to create a Deno server-side application that can be run with Docker. Use e.g. dab-walking-skeleton as the name of the project folder.
After following the steps in the guidelines, the project structure should be as follows.
tree --dirsfirst
.
├── server
│ ├── app.js
│ ├── app-run.js
│ ├── deno.json
│ └── Dockerfile
└── compose.yaml
Furthermore, running the application with docker compose up --build should start a web application that responds to requests at port 8000.
docker compose up --build
(lots of stuff)
✔ Container dab-walking-skeleton-server-1 Created
Attaching to server-1
server-1 | Watcher Process started.
server-1 | Listening on http://localhost:8000/
Then, making a request to the server should return a JSON message.
curl localhost:8000
{"message":"Hello world!"}%
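For reference, the server code produced by the guidelines might look roughly like the following sketch. The exact contents come from the Deno Server-side Application guidelines; the handler name and structure here are assumptions for illustration.

```javascript
// Hypothetical sketch of server/app.js (names and structure are
// assumptions, not the exact code from the guidelines): a request
// handler that returns the JSON greeting seen above.
const app = (request) => {
  return new Response(JSON.stringify({ message: "Hello world!" }), {
    headers: { "content-type": "application/json; charset=utf-8" },
  });
};
// In this sketch, app-run.js would import the handler and start the
// server with Deno.serve(app), listening on port 8000.
```

Splitting the handler (app.js) from the startup code (app-run.js) keeps the handler easy to test without opening a network port.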
Client-side functionality
Next, we add a client-side application using Astro. Go to the folder dab-walking-skeleton and use Deno to run the Astro project generator with the command deno -A npm:create-astro@latest client. This creates a folder called client for the client-side application and places the relevant files in it.
The project generator asks for a few options.
- When asked how you would like to start your new project, select “A basic, minimal starter”.
- When asked whether dependencies should be installed, select “No”.
- When asked whether a new git repository should be initialized, select “No”.
At this point, the structure of the project should be as follows.
.
├── client
│ ├── public
│ │ └── favicon.svg
│ ├── src
│ │ ├── assets
│ │ │ ├── astro.svg
│ │ │ └── background.svg
│ │ ├── components
│ │ │ └── Welcome.astro
│ │ ├── layouts
│ │ │ └── Layout.astro
│ │ └── pages
│ │ └── index.astro
│ ├── astro.config.mjs
│ ├── package.json
│ ├── README.md
│ └── tsconfig.json
├── server
│ ├── app.js
│ ├── app-run.js
│ ├── deno.json
│ └── Dockerfile
└── compose.yaml
Now, move to the folder client and, within it, run the command deno install --allow-scripts. This installs the necessary dependencies for the client-side application.
Note that unlike with npm, the dependencies are installed in a Deno-specific folder, from which they are linked to the node_modules folder of the project.
After installing the dependencies, add a Dockerfile to the client folder with the following content.
FROM denoland/deno:alpine-2.0.2
WORKDIR /app
COPY package.json .
RUN DENO_FUTURE=1 deno install --allow-scripts
COPY . .
CMD [ "task", "dev", "--host" ]
Then, modify the compose.yaml file to include the client-side application.
services:
  server:
    build: server
    restart: unless-stopped
    volumes:
      - ./server:/app
    ports:
      - 8000:8000
  client:
    build: client
    restart: unless-stopped
    volumes:
      - ./client:/app
    ports:
      - 4321:4321
    depends_on:
      - server
Now, when you run docker compose up --build, the client-side application should be available at port 4321 and the server-side application at port 8000.
Then, delete the following folders:
client/src/components/
client/src/assets/
After that, modify the file client/src/pages/index.astro to match the following and save the file.
---
import Layout from "../layouts/Layout.astro";
---
<Layout>
  <p>DAB!</p>
</Layout>
Now, when you reload the application in the browser, you should see a page similar to the one in Figure 1. If you are using Windows and are not running the project from WSL, you might need to restart the application in Docker to see the changes.
Starting with performance testing
Next, we add k6 to the project, which lets us do performance and load testing on the application. To add k6, create a folder called k6-tests in the root of the project and add a Dockerfile to the folder. Place the following in the file.
FROM grafana/k6:latest-with-browser
WORKDIR /tests
COPY tests /tests/
CMD [ "run", "/tests/hello-k6.js" ]
This downloads a k6 image with browser support and sets the working directory to /tests. The tests are copied to the folder /tests in the image, and by default the image runs the test file /tests/hello-k6.js when started. Later, in compose.yaml, we override the entrypoint with /bin/true so that the service does nothing when started together with the other services.
Then, create a folder called tests in the k6-tests folder, and create a file called hello-k6.js in the folder. Place the following in the file.
import { browser } from "k6/browser";
import http from "k6/http";
import { check } from "k6";

export const options = {
  scenarios: {
    client: {
      vus: 2,
      duration: "10s",
      executor: "constant-vus",
      exec: "loadPage",
      options: {
        browser: {
          type: "chromium",
        },
      },
    },
    server: {
      vus: 2,
      duration: "10s",
      executor: "constant-vus",
      exec: "getServerRoot",
    },
  },
};

export const loadPage = async () => {
  const page = await browser.newPage();
  try {
    await page.goto("http://client:4321");
  } finally {
    await page.close();
  }
};

export const getServerRoot = async () => {
  const url = "http://server:8000/";
  const res = http.get(url);
  check(res, {
    "status is 200": (r) => r.status === 200,
  });
};
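Beyond checks, k6 can also fail a run automatically when metrics exceed given limits. As a sketch of how the options object above could be extended with thresholds (the specific limit values here are arbitrary assumptions, not recommendations):

```javascript
// Sketch: thresholds make k6 exit with a non-zero status when a metric
// crosses a limit. The limit values below are arbitrary examples.
const thresholdOptions = {
  thresholds: {
    // 95% of plain HTTP requests should complete in under 200 ms
    http_req_duration: ["p(95)<200"],
    // fewer than 1% of requests may fail
    http_req_failed: ["rate<0.01"],
  },
};
// In a real test file, these entries would be merged into the exported
// options object alongside the scenarios.
```

Thresholds become useful once the tests run in a pipeline, where a non-zero exit status can fail a build.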
At this point, the structure of the project should be as follows.
tree --dirsfirst
.
├── client
│ ├── node_modules
│ │ └── astro -> .deno/astro@5.1.5/node_modules/astro
│ ├── public
│ │ └── favicon.svg
│ ├── src
│ │ ├── layouts
│ │ │ └── Layout.astro
│ │ └── pages
│ │ └── index.astro
│ ├── astro.config.mjs
│ ├── deno.lock
│ ├── Dockerfile
│ ├── package.json
│ ├── README.md
│ └── tsconfig.json
├── k6-tests
│ ├── tests
│ │ └── hello-k6.js
│ └── Dockerfile
├── server
│ ├── app.js
│ ├── app-run.js
│ ├── deno.json
│ ├── deno.lock
│ └── Dockerfile
└── compose.yaml
Finally, modify compose.yaml to include the k6 service.
services:
  server:
    build: server
    restart: unless-stopped
    volumes:
      - ./server:/app
    ports:
      - 8000:8000
  client:
    build: client
    restart: unless-stopped
    volumes:
      - ./client:/app
    ports:
      - 4321:4321
    depends_on:
      - server
  k6-tests:
    entrypoint: "/bin/true"
    build: k6-tests
    volumes:
      - ./k6-tests/tests:/tests
    depends_on:
      - client
Now, when you run docker compose up --build, the k6 service is created, but it is not started.
This is similar to what we did when setting up end-to-end tests in the Web Software Development course.
Next, with the server and the client running, you can run the k6 tests with the command docker compose run --rm --entrypoint=k6 k6-tests run /tests/hello-k6.js in a separate terminal. This runs the tests and outputs the results in the terminal.
As an example, the output might look like the following.
execution: local
script: /tests/hello-k6.js
output: -
scenarios: (100.00%) 2 scenarios, 4 max VUs, 40s max duration (incl. graceful stop):
* client: 2 looping VUs for 10s (exec: loadPage, gracefulStop: 30s)
* server: 2 looping VUs for 10s (exec: getServerRoot, gracefulStop: 30s)
✓ status is 200
browser_data_received..........: 30 MB 2.9 MB/s
browser_data_sent..............: 437 kB 42 kB/s
browser_http_req_duration......: avg=19.91ms min=3.27ms med=19.96ms max=66.51ms p(90)=30.68ms p(95)=34ms
browser_http_req_failed........: 0.00% 0 out of 636
browser_web_vital_cls..........: avg=0 min=0 med=0 max=0 p(90)=0 p(95)=0
browser_web_vital_fcp..........: avg=43.44ms min=30.1ms med=40.39ms max=88.09ms p(90)=54.19ms p(95)=57.67ms
browser_web_vital_lcp..........: avg=43.44ms min=30.1ms med=40.39ms max=88.09ms p(90)=54.19ms p(95)=57.67ms
browser_web_vital_ttfb.........: avg=8.88ms min=5.1ms med=7.94ms max=17.2ms p(90)=12.58ms p(95)=14.56ms
checks.........................: 100.00% 45084 out of 45084
data_received..................: 9.2 MB 891 kB/s
data_sent......................: 3.5 MB 336 kB/s
http_req_blocked...............: avg=3.48µs min=592ns med=2.92µs max=1.29ms p(90)=4.47µs p(95)=5.37µs
http_req_connecting............: avg=12ns min=0s med=0s max=292.48µs p(90)=0s p(95)=0s
http_req_duration..............: avg=347.62µs min=46.17µs med=287.49µs max=7.46ms p(90)=570.56µs p(95)=717.44µs
{ expected_response:true }...: avg=347.62µs min=46.17µs med=287.49µs max=7.46ms p(90)=570.56µs p(95)=717.44µs
http_req_failed................: 0.00% 0 out of 45084
http_req_receiving.............: avg=38.52µs min=4.88µs med=31.11µs max=5.23ms p(90)=53.9µs p(95)=69.57µs
http_req_sending...............: avg=10.53µs min=1.55µs med=8.17µs max=4.53ms p(90)=14.25µs p(95)=18.46µs
http_req_tls_handshaking.......: avg=0s min=0s med=0s max=0s p(90)=0s p(95)=0s
http_req_waiting...............: avg=298.57µs min=38.91µs med=244.55µs max=7.27ms p(90)=498.09µs p(95)=630.95µs
http_reqs......................: 45084 4368.886393/s
iteration_duration.............: avg=599.77µs min=66.25µs med=363.74µs max=320.47ms p(90)=696.43µs p(95)=862.99µs
iterations.....................: 45116 4371.987368/s
vus............................: 4 min=4 max=4
vus_max........................: 4 min=4 max=4
The output has many of the performance metrics that we discussed in the last chapter on Quantifying Performance. These include:
- browser_web_vital_lcp: Largest Contentful Paint, on average 43.4 milliseconds.
- browser_web_vital_cls: Cumulative Layout Shift, on average 0.
- http_reqs: Total number of requests, 45084 requests, or 4368.9 requests per second.
- http_req_duration: Average request duration, 347.6 microseconds.
- http_req_failed: 0% of the requests failed.
The p(x) values show the xth percentile of the data; e.g., p(90) shows the 90th percentile, the value below which 90% of the data falls. For example, 90% of the requests made to the server had a duration of 570.6 microseconds or less, while 90% of the requests made from the browser had a duration of 30.7 milliseconds or less.
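To make the percentile idea concrete, here is a small sketch of how a p(90) value could be computed from raw durations. This uses simple linear interpolation between the two nearest ranks; k6's internal aggregation may differ in detail.

```javascript
// Compute the p-th percentile of a list of values using linear
// interpolation between the two nearest ranks (an illustrative sketch;
// k6's exact computation may differ).
const percentile = (values, p) => {
  const sorted = [...values].sort((a, b) => a - b);
  const rank = (p / 100) * (sorted.length - 1);
  const low = Math.floor(rank);
  const high = Math.ceil(rank);
  if (low === high) return sorted[low];
  return sorted[low] + (rank - low) * (sorted[high] - sorted[low]);
};

// With durations of 1..10 ms, 90% of the values fall at or below ~9.1 ms.
const p90 = percentile([1, 2, 3, 4, 5, 6, 7, 8, 9, 10], 90);
```

Note that averages and percentiles tell different stories: a low average can hide a long tail, which is why the p(90) and p(95) columns are often more informative than avg.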
These results are from a simple test, and the values can vary depending on the complexity of the application and the load on the server. The local setup, including hardware and so on, also affects the results.
We’ll start building on this in the future parts, where we’ll add more functionality to the server and the client, and do more extensive performance and load testing.
Finally, when you are done, you can stop the services by running docker compose down in the terminal.