Polaris SLO Cloud
The Polaris SLO Cloud project aims to provide Service Level Objectives (SLOs) for next-generation cloud computing.
Service Level Agreements (SLAs) are very common in cloud computing. Each SLA consists of one or more Service Level Objectives, which are measurable capacity guarantees. Most SLOs in today’s cloud environments are very low-level (e.g., average CPU or memory usage, throughput).
Elasticity is a fundamental property of cloud computing. It is commonly understood as provisioning more resources for an application as the load grows and deprovisioning resources as the demand drops. However, this resource elasticity is only one of three possible elasticity dimensions. The other two are cost elasticity (i.e., how much is a customer willing to pay for a service) and quality elasticity (e.g., the desired accuracy of the prediction of a machine learning model).
The goal of the Polaris project is to bring high-level SLOs to the cloud and enable customers to leverage all three dimensions of elasticity.
The main documentation can be found here.
Videos and Demos
This video provides an introduction to the Polaris Framework and the concepts behind it. Specifically, it discusses:
- What are the relationships between metrics, SLOs, and elasticity strategies and what are shortcomings of many existing elasticity approaches?
- How is Polaris different?
- How do composed metrics work and how do they enable proactive scaling?
- What are SLO controllers?
- What are elasticity strategies?
- What is the Polaris CLI?
The following videos showcase the demos of the Polaris Framework and its CLI:
- End-to-end demo with reactive scaling of a workload
  - Setup of a Polaris workspace using the Polaris CLI.
  - Generation of a composed metric and a corresponding controller.
  - Generation of an elasticity strategy and its corresponding controller.
  - Generation of an SLO mapping type and an SLO controller.
  - Application of an SLO mapping and scaling of a workload.
- Proactive scaling concepts and demo
  - Overview of how predicted metric controllers work.
  - Generation of a predicted metric controller.
  - Replacement of a reactive metric controller with the predicted metric controller.
  - Proactive scaling of a workload.
Additional demos can be found in this repository.
For more details and background, please see our scientific publications:
- SLOC: Service Level Objectives for Next Generation Cloud Computing in IEEE Internet Computing 24(3).
- SLO Script: A Novel Language for Implementing Complex Cloud-Native Elasticity-Driven SLOs in 2021 IEEE International Conference on Web Services (ICWS).
- A Novel Middleware for Efficiently Implementing Complex Cloud-Native SLOs in 2021 IEEE 14th International Conference on Cloud Computing (CLOUD).
- Polaris Scheduler: Edge Sensitive and SLO Aware Workload Scheduling in Cloud-Edge-IoT Clusters in 2021 IEEE 14th International Conference on Cloud Computing (CLOUD).
Suppose a service provider (i.e., a cloud provider) wants to offer a Content Management System as-a-Service to its customers. The CMS-as-a-Service that is being offered consists of the following services: a database, a headless backend (REST API only), and a frontend user interface. Each service may expose one or more metrics, which can be simple ones like CPU usage or complex ones. These metrics can be used by the service provider to set up SLOs.
Service consumers, i.e., customers who deploy the CMS-as-a-Service, are not really interested in having an average CPU utilization of 80%; instead, they want "good performance at an acceptable cost". Ideally, they would like to specify a simple configuration that guarantees good performance without an unpleasant surprise when the bill arrives at the end of the month.
To this end, the provider could offer a cost efficiency SLO. Based on this article, we define the cost efficiency of a cloud application as the number of requests per second faster than N milliseconds, divided by the total cost of the service. The cost efficiency can be exported by one or more services of the deployment as a custom metric, and the service provider can use it as the basis for an SLO that service consumers configure. Polaris can then automatically take care of scaling all services within the CMS-as-a-Service workload based on this cost efficiency SLO.
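To make the definition concrete, the computation could be sketched as follows. This is an illustrative sketch, not part of the Polaris API; the function name, sample format, and parameters are assumptions.

```typescript
// Sketch: cost efficiency = (requests per second faster than N ms) / total cost.
// The sample shape and all names here are illustrative assumptions.

interface RequestSample {
  durationMs: number; // observed response time of one request
}

function costEfficiency(
  samples: RequestSample[],   // requests observed in the measurement window
  windowSeconds: number,      // length of the measurement window
  thresholdMs: number,        // the "N milliseconds" from the definition
  totalCost: number,          // total cost of the service in the window
): number {
  // Count the requests that finished faster than the threshold.
  const fastRequests = samples.filter((s) => s.durationMs < thresholdMs).length;
  const fastRequestsPerSecond = fastRequests / windowSeconds;
  return fastRequestsPerSecond / totalCost;
}
```

For example, with four requests taking 50, 120, 80, and 300 ms in a 2-second window, a 100 ms threshold, and a cost of 0.5, two requests are "fast", giving 1 fast request per second and a cost efficiency of 2.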
A service consumer can now deploy the CMS-as-a-Service as a workload in their cloud subscription. To configure the SLO, the consumer creates an SLO Mapping, which associates the cost efficiency SLO with the particular workload and supplies the desired cost efficiency as configuration values.
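Conceptually, such an SLO Mapping binds together an SLO type, a target workload, an elasticity strategy, and configuration values. The following sketch only illustrates this idea; all type and field names are hypothetical and do not correspond to the actual Polaris types.

```typescript
// Illustrative shape of an SLO mapping; every name below is hypothetical and
// only demonstrates the concept of binding an SLO to a workload.

interface SloMappingSketch {
  sloType: string;                 // which SLO to apply
  targetWorkload: {                // the workload that the SLO governs
    kind: string;
    name: string;
  };
  elasticityStrategy: string;      // how to react when the SLO is violated
  config: Record<string, number>;  // consumer-supplied SLO parameters
}

const costEfficiencyMapping: SloMappingSketch = {
  sloType: 'CostEfficiencySlo',
  targetWorkload: { kind: 'Deployment', name: 'cms-backend' },
  elasticityStrategy: 'HorizontalElasticityStrategy',
  config: { targetCostEfficiency: 1000, responseTimeThresholdMs: 100 },
};
```

In Polaris, such a mapping is applied to the orchestrator like any other resource, and the corresponding SLO controller picks it up and starts evaluating the SLO for the referenced workload.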
Beyond Simple Scaling
Polaris not only allows the development and configuration of complex SLOs, it also allows service consumers to choose the exact elasticity strategy they want to use when their SLO is violated. The most common form of scaling in today's clouds (see here) is horizontal scaling, i.e., adding additional instances of a service (scaling out) or removing unneeded instances (scaling in).
The service provider can offer multiple elasticity strategies, e.g.,
- Horizontal scaling (adding and removing instances)
- Vertical scaling (adding and removing resources, e.g., CPU and memory, to/from a single instance)
- A combination of horizontal and vertical scaling
- An elasticity strategy specifically tailored for a certain application
SLO Mappings allow service consumers to choose which elasticity strategy they want to use with their SLO, as long as the SLO's output data type matches the elasticity strategy's input data type.
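This type-matching constraint can be pictured as follows. The sketch uses made-up names rather than the SLO Script API: an SLO produces an output of some type, and it can only be paired with an elasticity strategy whose input is of that same type.

```typescript
// Sketch: an SLO and an elasticity strategy are compatible if the SLO's output
// type equals the strategy's input type. All names are illustrative only.

interface ComplianceValue {
  // 100 means the SLO is exactly met; values above 100 indicate a violation.
  compliancePercentage: number;
}

interface Slo<O> {
  evaluate(): O;
}

interface ElasticityStrategy<I> {
  execute(input: I): void;
}

// Both use ComplianceValue, so this SLO can be paired with this strategy.
class CostEfficiencySloSketch implements Slo<ComplianceValue> {
  evaluate(): ComplianceValue {
    return { compliancePercentage: 120 }; // e.g., the SLO is currently violated
  }
}

class HorizontalScalerSketch implements ElasticityStrategy<ComplianceValue> {
  lastInput?: ComplianceValue;
  execute(input: ComplianceValue): void {
    this.lastInput = input; // a real strategy would scale the workload here
  }
}
```

A strategy expecting a different input type would simply not be assignable here, which is how the compiler enforces the compatibility rule.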
The Polaris project offers the following features:
- SLO Script, a language and framework for
  - developing complex SLOs, based on one or more metrics
  - configuring these SLOs using SLO Mappings
  - developing composed metrics by aggregating other metrics
  - using predictions in metrics and SLOs to employ proactive scaling
  - developing custom elasticity strategies
- Generic elasticity strategies that can be used with multiple SLOs
- Generic SLOs that can be used with multiple elasticity strategies
- AI-based prediction models for metrics (see polaris-ai), usable as a composed metrics library
- SLO-aware Kubernetes pod scheduling (see polaris-scheduler)
SLO Script consists of an orchestrator-independent core library and connector libraries for specific orchestrators. Currently, there is a connector for Kubernetes.
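The split between the core and the connectors can be sketched as the core library programming against a small orchestrator interface that each connector implements. The names below are illustrative assumptions, not the actual SLO Script API.

```typescript
// Sketch: the core talks to the orchestrator only through an interface;
// a connector (e.g., for Kubernetes) supplies the implementation.
// All names here are illustrative assumptions.

interface OrchestratorConnector {
  readonly name: string;
  scaleWorkload(workload: string, replicas: number): void;
}

// A trivial connector that just records replica counts in memory.
class InMemoryConnector implements OrchestratorConnector {
  readonly name = 'in-memory';
  readonly replicaCounts = new Map<string, number>();

  scaleWorkload(workload: string, replicas: number): void {
    this.replicaCounts.set(workload, replicas);
  }
}

// Orchestrator-independent core logic that works with any connector.
function scaleOut(
  connector: OrchestratorConnector,
  workload: string,
  currentReplicas: number,
): void {
  connector.scaleWorkload(workload, currentReplicas + 1);
}
```

Swapping the in-memory connector for a Kubernetes one would leave the core logic untouched, which is the point of the connector design.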
This is a polyglot monorepo: all code for this project is contained in this repository.
| Directory | Contents |
|-----------|----------|
| | YAML files for quickly deploying Polaris CRDs and controllers |
| | Python code, e.g., the predicted metric controller base |
| | Configurations and demo applications that we use for testing |
| | TypeScript code, i.e., all SLO Script libraries, SLO controllers, etc. |
Building, Running, and Debugging
For details on how to build, run, and debug the Polaris components, please see the README of the TypeScript components.
You need a running Kubernetes cluster with a Prometheus installation to use Polaris. Instructions on setting up such a cluster can be found here.
To quickly deploy the default set of elasticity strategies shipped with Polaris, open a terminal in the root folder of the repository and execute the following command:
kubectl apply -f ./deployment
This will deploy the VerticalElasticityStrategy CRDs and controllers.
More detailed build and deployment instructions can be found here.
The project’s website is hosted using GitHub pages.
```sh
# Clone the polaris repository once more, this time into the ./gh-pages folder
# and check out the gh-pages branch.
git clone -b gh-pages firstname.lastname@example.org:polaris-slo-cloud/polaris.git ./gh-pages

# Run the update script to copy the current state from the docs folder to
# gh-pages and to regenerate the typedoc documentation.
./update-gh-pages.sh

# Go into the gh-pages directory and push the branch.
cd gh-pages
git add .
git commit -m "Update gh-pages"
git push origin
```
The Polaris SLO Cloud project is actively used by the orchestration layer of the RAINBOW project to bring complex SLOs to Fog Computing.