Cloud Computing

Service and Deployment Models


Learning Objectives

  • You know what the term cloud computing means and are familiar with commonly used service and deployment models in cloud computing.

Cloud computing

The term cloud computing refers to on-demand services, accessed over the internet, that offer computing resources such as servers, which can be used without the need to actively manage or maintain the underlying hardware.

Cloud computing is often categorized into four service models: infrastructure as a service (IaaS), platform as a service (PaaS), software as a service (SaaS), and function as a service (FaaS). In addition to the service models, cloud computing can also be categorized based on the deployment model, which can be public, private, or hybrid.

Service models

Cloud computing services are typically categorized into infrastructure, platform, software, and functions, each of which is offered as a service:

  • Infrastructure as a service (IaaS) providers offer virtual machines, storage, and networking resources, providing the possibility to customize infrastructure by, e.g., requesting specific hardware resources, choosing the operating system, and installing specific software.

  • Platform as a service (PaaS) providers offer a software platform for developing, running, and managing applications, removing the need to maintain or worry about the underlying infrastructure; PaaS providers typically also offer additional support, such as tools for making deployment easier, tools for monitoring applications, and so on.

  • Software as a service (SaaS) providers offer (web) applications as a service, removing the need to maintain the infrastructure or the platform, as well as the need to install the applications locally.

  • Function as a service (FaaS) providers offer a platform for running functions, which are small pieces of code that are executed in response to events, such as HTTP requests or database changes.
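To make the FaaS model concrete, the sketch below shows what such a function might look like. The event and result shapes here are hypothetical; real providers each define their own event types and handler signatures.

```typescript
// A minimal function-as-a-service handler sketch. The FnEvent and FnResult
// shapes are illustrative; actual providers define their own event types.
type FnEvent = { method: string; path: string };
type FnResult = { status: number; body: string };

// The platform invokes this function once per incoming event, such as an
// HTTP request, and may run any number of copies of it in parallel.
function handler(event: FnEvent): FnResult {
  if (event.method === "GET" && event.path === "/hello") {
    return { status: 200, body: "Hello from a serverless function!" };
  }
  return { status: 404, body: "Not found" };
}
```

The platform, not the application, decides when and where this code runs; the developer only supplies the function body.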

The first three service models are often referred to as the cloud computing stack, as they can be used together to build new services on top of the existing services. They are also often built as a hierarchy, where IaaS is the lowest level, PaaS is built on top of IaaS, and SaaS is built on top of PaaS.

As an example, Heroku, which offers a hosting platform for applications, is built on top of Amazon Web Services, an IaaS provider that also offers PaaS services.

The function as a service model can run on top of a PaaS offering and can likewise be used to build new services on top of existing ones. The function as a service model is often referred to as serverless computing, as the responsibility for the infrastructure and the configuration of the platform is handed to the cloud provider.

Serverless computing

Serverless computing abstracts away infrastructure management, allowing developers to focus on writing code while the platform automatically scales resources, with a pay-as-you-go pricing model. Whenever a function is called (e.g., when an HTTP request is made to a certain path), the cloud provider launches a container that handles the call, shutting the container down when the execution has finished.

As an example, Deno Deploy, which we’ve used in the Web Software Development course, is a serverless platform that can be used to host Deno applications.

Although serverless computing is often discussed in terms of calling a function, one should not interpret this as meaning that the application can handle just a single function call. The term function refers to an entry point to the application: the application can be large and consist of multiple files and services, which can in turn call other services. Further, the term serverless does not mean that there are no servers; they are just abstracted away.
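As a sketch of this entry-point idea, a single serverless function can dispatch to many parts of a larger application. The route handlers below are hypothetical stand-ins for code that could live in separate files or call other services.

```typescript
// Sketch: one serverless entry point in front of a larger application.
type Req = { path: string };

// Hypothetical internal handlers; in a real app these could live in
// separate modules and call databases or other services.
function listOrders(): string {
  return "orders";
}

function listUsers(): string {
  return "users";
}

// Maps request paths to the internal handlers that serve them.
const routes: Record<string, () => string> = {
  "/orders": listOrders,
  "/users": listUsers,
};

// The single entry point ("the function") that the platform invokes.
function entryPoint(req: Req): string {
  const route = routes[req.path];
  return route ? route() : "not found";
}
```

The "function" the platform sees is just `entryPoint`; everything behind it can be arbitrarily large.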

In terms of scaling, and in contrast with e.g. the autoscaling functionality of Kubernetes, there is no need to configure scaling rules when using serverless computing: the serverless computing provider automatically scales the service based on demand.

See also the article “What Serverless Computing Is and Should Become: The Next Phase of Cloud Computing”.

When considering serverless computing, there are plenty of upsides and a handful of downsides. Serverless computing provides opportunities for high scalability and availability, as containers are launched on demand, while also providing cost efficiency, as costs are incurred only for the resources that are used. Similarly, as there is no need to manage (or even configure, depending on the provider) the underlying infrastructure and platform, the development and deployment of serverless applications can be faster and easier.

On the other hand, the lack of control over the underlying infrastructure and platform can be a challenge, and monitoring serverless applications can be difficult as containers come and go. Similarly, as the containers are launched on demand, there is no persistent state between invocations. In addition, launching a container on demand incurs a cold start time, which depends on the application, and serverless computing providers also often limit the time and resources that each individual container has at its disposal.

Serverless providers also often use warm containers, reusing containers that have already been started and keeping them running. As the container is already running (or is kept running for a while instead of being shut down immediately), the effect of the cold start problem is reduced mainly to cases where the service has not been used in a while or where new containers are created due to increased demand.
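Warm containers are also why module-level state can act as a best-effort cache between invocations. The sketch below is illustrative and assumes nothing about any particular provider; the key point is that the cache may or may not survive, so correctness must never depend on it.

```typescript
// Sketch: module-level state as a best-effort cache in a warm container.
// In a warm container the module stays loaded between invocations, so the
// cache may still be populated; in a freshly cold-started container it is
// empty. The function must produce a correct result either way.
const cache = new Map<string, string>();

function handleLookup(key: string): { value: string; cached: boolean } {
  const hit = cache.get(key);
  if (hit !== undefined) {
    return { value: hit, cached: true }; // warm container, cache survived
  }
  const value = `computed-${key}`; // stand-in for an expensive computation
  cache.set(key, value);
  return { value, cached: false };
}
```

Two calls within the same warm container would hit the cache on the second call; after a cold start, the computation runs again.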

As an example, Google’s serverless platform Cloud Run can be configured to keep a certain number of containers always up and running. See also Fluid Compute from Vercel.

Deployment models

Cloud computing is often associated with well-known providers such as Amazon Web Services, Google Cloud, and Microsoft Azure that offer services to the general public. In addition to such public cloud providers, there is also the possibility of using a private cloud or a hybrid cloud.

In the private cloud deployment model, the cloud computing resources are dedicated to a small group of users, typically within an organization. The private cloud can be hosted and managed on-premises or by a third-party provider. A private cloud can be used, e.g., to comply with privacy regulations, to have more control over the infrastructure, or to have more control over the security of the services.

In the hybrid cloud deployment model, the cloud computing resources are a mix of public and private clouds. A hybrid cloud can be used, e.g., to extend the capacity of the private cloud to the public cloud, to have more control over the infrastructure, or to have more control over the security of the services. As an example, a hybrid cloud can be used to store sensitive data in a private cloud, while using the public cloud for less sensitive data.

In addition to public, private, and hybrid clouds, there is also the possibility of using a multi-cloud deployment model, where the cloud computing resources are a mix of multiple public clouds. A multi-cloud can be used, e.g., to avoid vendor lock-in, to have more control over the infrastructure, or to have more control over the security of the services.

Cloud providers also often offer edge computing services, where the computing resources are located closer to the users. While edge computing is not a deployment model as such, it is a complementary approach that can be used, e.g., to reduce latency, to reduce bandwidth usage, or to reduce the environmental impact of the services.
