Deno Server-side Application
Learning Objectives
- You know how to add a Deno-based server-side application to a project.
- You know how to access the server-side application using a web browser.
Now that you have Docker and Docker Compose (or an alternative like Podman) installed, we can start working on the walking skeleton.
Follow along with the example, creating the files and folders locally as instructed. At the end of the example, you are expected to submit the walking skeleton as a zip file to the course platform.
If you encounter problems during the process, reach out for help.
Folder structure
To start working on the example, create a new folder (called e.g. wsd-walking-skeleton). Then, create a folder called server inside the newly created folder. Create the files app.js, app-run.js, deno.json, and Dockerfile in the server folder.
The structure should be as follows.
$ tree --dirsfirst
.
└── server
├── app.js
├── app-run.js
├── deno.json
└── Dockerfile
The Dockerfile will be used for defining how the server-side application will be containerized, while the rest of the files will be used for the actual server-side application.
Now, copy the following code to the app.js file.
import { Hono } from "@hono/hono";
import { cors } from "@hono/hono/cors";
import { logger } from "@hono/hono/logger";
const app = new Hono();
app.use("/*", cors());
app.use("/*", logger());
app.get("/", (c) => c.json({ message: "Hello world!" }));
export default app;
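The app.get call above registers a handler for GET requests to the root path. As an illustration of the routing idea only (not how Hono is actually implemented), the registration-and-lookup pattern can be sketched with a plain table from method and path to handler:

```javascript
// A minimal sketch of the idea behind app.get(...): a table keyed by
// HTTP method and path, consulted when a request arrives. This is an
// illustration only, not Hono's internal implementation.
const routes = new Map();

// Register a handler for GET requests to the given path.
const get = (path, handler) => routes.set(`GET ${path}`, handler);

get("/", () => ({ message: "Hello world!" }));

// Look up and invoke the handler for a method and path, if one exists.
const dispatch = (method, path) => {
  const handler = routes.get(`${method} ${path}`);
  return handler ? handler() : undefined;
};
```

With this sketch, dispatch("GET", "/") returns the same object that the real application serializes as JSON, while an unregistered path yields no handler.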
Copy the following code to the app-run.js file.
import app from "./app.js";
Deno.serve(app.fetch);
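Deno.serve expects a handler function that takes a web-standard Request and returns a Response; app.fetch is exactly such a function, which is why it can be passed in directly. As a sketch, a hand-written handler of the same shape (without Hono) could look like this:

```javascript
// A handler with the same shape as app.fetch: Request in, Response out.
// It uses only web-standard APIs (Request, Response, URL), so it could be
// passed to Deno.serve in place of app.fetch.
const handler = (req) => {
  const url = new URL(req.url);
  if (req.method === "GET" && url.pathname === "/") {
    return new Response(JSON.stringify({ message: "Hello world!" }), {
      headers: { "content-type": "application/json" },
    });
  }
  return new Response("Not found", { status: 404 });
};
```

In Deno, Deno.serve(handler) would start a server with this handler, just as Deno.serve(app.fetch) does with the Hono application.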
Copy the following code to the deno.json file.
{
  "imports": {
    "@hono/hono": "jsr:@hono/hono@4.6.5"
  }
}
The deno.json file is a configuration file for Deno projects.
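The imports map in deno.json tells Deno how to resolve the bare specifier @hono/hono used in app.js. If the project later needed another dependency, it would be added as another entry in the same map; for example (a hypothetical addition with an illustrative version, not needed for this walkthrough):

```json
{
  "imports": {
    "@hono/hono": "jsr:@hono/hono@4.6.5",
    "@std/assert": "jsr:@std/assert@1.0.0"
  }
}
```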
And, finally, copy the following code to the Dockerfile.
FROM denoland/deno:alpine-2.0.2
WORKDIR /app
COPY deno.json .
RUN DENO_FUTURE=1 deno install
COPY . .
CMD [ "run", "--allow-env", "--allow-net", "--watch", "app-run.js" ]
The above Dockerfile uses an Alpine Linux based Deno image as the starting point and creates a folder app, setting it as the working directory for the image. Then, the file deno.json that is used to define dependencies is copied to the image, after which the dependencies are cached with the DENO_FUTURE=1 deno install command. This is followed by copying the remaining files to the image, and stating the command for running the image.
The base image includes deno as the entrypoint, so the instructions given after CMD are passed to the deno executable. Effectively, the last line of the Dockerfile translates to the following command:
deno run --allow-env --allow-net --watch app-run.js
If you have Deno installed, you can try running the application locally with the above command. Running the command starts a server that listens on port 8000; you can access the server by visiting http://localhost:8000 in your browser or by using a tool like curl in the terminal.
To stop the server, you can press Ctrl + C (or the cancel command specific to your operating system) in the terminal where you started the server.
Compose configuration
Next, create a file called compose.yaml, and place it in the root folder of the project (one folder above server). With the new file in place, the folder structure should be as follows.
$ tree --dirsfirst
.
├── server
│ ├── app.js
│ ├── app-run.js
│ ├── deno.json
│ └── Dockerfile
└── compose.yaml
Copy the following contents to the file compose.yaml.
services:
  server:
    build: server
    restart: unless-stopped
    volumes:
      - ./server:/app
    ports:
      - 8000:8000
The above configuration states that we have one container: a service called server. The service is built from the folder server, and it is restarted unless it is explicitly stopped (restart: unless-stopped). The volumes section states that the folder server is mounted to the folder /app in the container; in effect, this means that any changes that we make in the folder server in our local file system are reflected in the contents of the container. The ports section states that port 8000 of the container is exposed as port 8000 in the operating system.
The port 8000 is the default port on which the Deno application starts. As the application is started in the container, the port is available for requests within the container, but not to the wider world. The port mapping in compose.yaml exposes the port to the operating system.
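The mapping is written host_port:container_port, so the host-side port can be changed without touching the application. As a hypothetical example (not part of this walkthrough), mapping host port 3000 to container port 8000 would look like this:

```yaml
services:
  server:
    build: server
    restart: unless-stopped
    volumes:
      - ./server:/app
    ports:
      # host port 3000 maps to container port 8000; the application still
      # listens on 8000 inside the container, but would be reached at
      # http://localhost:3000
      - 3000:8000
```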
Once the file is saved, we can run the application with Docker Compose (using the command docker compose). This is done as follows. We first check that we are in the correct directory and that the compose.yaml file is in the current directory. Then, we run docker compose up --build.
tree --dirsfirst
.
├── server
│ ├── app.js
│ ├── app-run.js
│ ├── deno.json
│ └── Dockerfile
└── compose.yaml
1 directory, 5 files
docker compose up --build
(lots of stuff)
✔ Container containerization-server-1 Created
Attaching to server-1
server-1 | Watcher Process started.
server-1 | Listening on http://localhost:8000/
Now, after waiting for a few moments, the application is running and you can access it locally by visiting http://localhost:8000 in your browser or by using a tool like curl in the terminal. If the application does not start, or if you are not able to access the application in the browser, check the terminal output for any error messages; also, possibly check your firewall configuration.
curl localhost:8000
{"message":"Hello world!"}%
We can stop the application using Ctrl + C (or the cancel command specific to your operating system) in the terminal where we called the docker compose command. Running the command docker compose stop in the project folder explicitly stops the containers created by the command docker compose up.
docker compose stop
At this point, when running the application, a new file called deno.lock has also been created in the server folder. It is used to keep track of the dependencies. The file is created by the deno install and deno run commands.
When we ran the command docker compose up --build, we stated that the containers outlined in the compose.yaml file should be built and started. If we had omitted the --build flag, the previously built images (if any existed) would have been used to launch the container; if none exist, Docker Compose builds them first.