Cloud-Native Computing
Learning Objectives
- You know the term cloud-native computing and its key characteristics.
- You know of common cloud architecture patterns.
Cloud-native computing
Cloud-native computing is an approach to designing and building applications that are scalable, resilient, and manageable in dynamic environments. Cloud-native applications are typically built using modern technologies such as containers, microservices, and continuous integration/continuous deployment (CI/CD) pipelines.
If you think about the technologies that we’ve emphasized from the beginning of the Web Software Development course, such as containers, stateless services, and splitting the application into separate services (for example, one for the user interface and one for the backend), you’re already thinking about approaches related to cloud-native computing.
Further, in this course, we’ve discussed how to scale the application up and down based on the demand, which is also a core principle of cloud-native computing. Tooling such as monitoring and the ability to measure the performance of the application are also essential parts of cloud-native computing.
That is, from the get-go, we’ve intentionally emphasized technologies and approaches related to cloud-native computing.
The key characteristics of cloud-native computing include:
- Microservices architecture: Cloud-native applications are built as a collection of small, loosely coupled services that communicate with each other via well-defined APIs. This modularity allows teams to update or scale parts of the application independently.
- Containers: Packaging applications in containers (with, e.g., Docker) ensures that they run reliably across development, testing, and production environments.
- Orchestration: Tools like Kubernetes are used to manage containerized applications, handling deployment, scaling, and management automatically.
- Observability: Cloud-native applications are designed to be observable, with monitoring, logging, and tracing built into the application from the start. Tools like the LGTM stack (Loki, Grafana, Tempo, Mimir) are used to collect and analyze data from the application.
- Infrastructure as code: Cloud-native applications are defined and managed using code instead of manual processes. By defining resources in configuration files, such as Kubernetes manifests, deployments can be automated, which improves consistency and reduces the risk of human error.
- Automation: In part due to infrastructure as code, cloud-native environments are highly automated. Continuous integration and continuous deployment (CI/CD) pipelines are used to test and deliver code quickly.
- Scalability and resilience: Cloud-native applications are designed to scale horizontally, tolerate failures, and recover quickly. Common practices include designing services to be stateless and using message queues and event-driven architectures.
These characteristics enable cloud-native applications to be highly available, scalable, and resilient.
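As a small illustration of infrastructure as code, the following is a minimal sketch of a Kubernetes Deployment manifest; the application name and container image are hypothetical placeholders, not part of any specific project:

```yaml
# A minimal Kubernetes Deployment, declared as code rather than
# created manually. Applying it (kubectl apply -f deployment.yaml)
# asks the orchestrator to keep three replicas of the container running.
apiVersion: apps/v1
kind: Deployment
metadata:
  name: my-app                # hypothetical application name
spec:
  replicas: 3                 # horizontal scaling: three identical pods
  selector:
    matchLabels:
      app: my-app
  template:
    metadata:
      labels:
        app: my-app
    spec:
      containers:
        - name: my-app
          image: my-registry/my-app:1.0.0   # hypothetical container image
          ports:
            - containerPort: 8080
```

Because the desired state lives in a file, it can be versioned, reviewed, and applied repeatedly with the same result.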
There is also the Cloud Native Computing Foundation (CNCF), which hosts open-source projects related to cloud-native computing, including Kubernetes and Prometheus. The foundation’s objective is to help the community adopt cloud-native technologies and practices; they also offer training on the topic, with some of the courses available for free.
Cloud architecture patterns
When building cloud applications, there are a handful of architecture patterns commonly used to solve recurring problems. Here are some patterns relevant to the contents of this course:
- Problem: The application has variable workloads and unpredictable demand.
  Solution: Auto-scale the application using solutions such as Kubernetes-based auto-scaling or serverless computing.
- Problem: Static resources are slow to load.
  Solution: Use content delivery networks (CDNs) for storing and distributing static assets.
- Problem: Infrastructure components can fail.
  Solution: Use multi-region and multi-cloud deployments, including distributing database data and running clusters in multiple regions.
- Problem: Individual server instances are overloaded.
  Solution: Distribute incoming requests across servers using load balancing and scale up the number of server instances that handle the specific functionality.
- Problem: The quantity of data is growing and loading it takes time.
  Solution: Store frequently accessed data in a cache and distribute the data across multiple servers.
- Problem: The application is tightly coupled, so changes to one component require changes to other components.
  Solution: Decouple the application into services and use event-driven architecture and messaging patterns for communication between the services.
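As a minimal sketch of the caching pattern above, the cache-aside approach checks a fast in-memory store before falling back to the slower data source. Here the "database" and cache are plain dictionaries standing in for real systems (e.g., a shared cache such as Redis in front of an actual database):

```python
# Cache-aside: check the cache first, fall back to the slow data
# source on a miss, and populate the cache for subsequent reads.
# Both stores are stand-in dicts for illustration only.

database = {"user:1": "Alice", "user:2": "Bob"}  # slow source of truth
cache = {}                                       # fast in-memory cache

def get_user(key: str):
    if key in cache:              # cache hit: no database access needed
        return cache[key]
    value = database.get(key)     # cache miss: read from the database
    if value is not None:
        cache[key] = value        # populate the cache for next time
    return value

print(get_user("user:1"))  # miss: read from the database, then cached
print(get_user("user:1"))  # hit: served from the cache
```

The first read pays the full cost of the database lookup; repeated reads of the same key are served from the cache, which is what makes the pattern effective for data that is read far more often than it is written.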
There are also good books on the topic, such as Cloud Architecture Patterns.