OpenEBS CI uses Litmus to drive its e2e pipelines, from test-bed creation (cluster-creation playbooks) through the e2e tests (Litmus experiments). The e2e pipeline involves several stages, with one or more GitLab jobs scheduled to run in a given stage. The sequence of stages, the grouping of jobs within a stage, intra-stage and inter-stage job dependencies, and the high-level test tunables are all specified in the .gitlab-ci.yml file of the respective e2e repositories.
Each GitLab job is associated with a "runner script" or an "e2e test", which in turn invokes a Litmus experiment (or a Litmus Ansible playbook, as in the case of the cluster create/destroy jobs).
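The layout described above can be sketched as a minimal .gitlab-ci.yml fragment. The stage names, script paths, and job names below are illustrative, not the actual OpenEBS CI definitions:

```yaml
stages:
  - cluster-setup
  - openebs-deploy
  - app-deploy

# Illustrative job: runs a "runner script" that wraps a Litmus playbook.
cluster-create:
  stage: cluster-setup
  script:
    - ./scripts/cluster-create.sh     # hypothetical runner script
  artifacts:
    paths:
      - .kube/admin.conf              # kubeconfig handed to later stages

app-deploy-percona-jiva:
  stage: app-deploy
  dependencies:
    - cluster-create                  # consumes the kubeconfig artifact
  script:
    - ./scripts/app-deploy.sh percona jiva
```

Jobs in the same stage run in parallel; `dependencies` pulls artifacts from jobs in earlier stages, which is how the cluster config flows through the pipeline.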
The various stages in the e2e pipeline are discussed in the sections below.
Cluster Creation
Brings up the Kubernetes cluster by executing the platform-specific playbooks. Several cluster parameters, such as the number of nodes, the Kubernetes version, compute instance types (which control resources), regions, availability zones, and CIDR ranges, can be controlled via runtime arguments (extra_vars). The artifacts generated by this job (the cluster config, i.e., kubeconfig, and cluster resource names) are passed to subsequent stages as dependencies.
The Litmus prerequisites (RBAC, the kubeconfig ConfigMap, and the Litmus result CRD) are also installed as part of this stage, once the cluster is created.
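A cluster-creation job of this kind might be invoked as follows; the playbook name and variable names are placeholders, not the actual OpenEBS playbook interface:

```shell
# Hypothetical invocation of a platform-specific cluster-creation playbook.
# Cluster parameters are passed as runtime arguments via --extra-vars.
ansible-playbook create-k8s-cluster.yml --extra-vars \
  "nodes=3 k8s_version=1.16.0 instance_type=n1-standard-4 region=us-central1"
```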
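The Litmus prerequisite setup could look roughly like the following; the manifest file names are illustrative (the actual manifests live in the Litmus repository):

```shell
# Hypothetical file names for the Litmus prerequisites.
kubectl apply -f litmus-rbac.yaml                         # serviceaccount + roles
kubectl create configmap kubeconfig \
  --from-file=admin.conf -n litmus                        # cluster config for experiments
kubectl apply -f litmus-result-crd.yaml                   # Litmus result CRD
```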
Disk Provisioning
Equips the cluster with additional disk resources native to the specific platform (GPD, EBS, Packet block storage, Azure block devices), which are used by the storage engines as physical storage resources.
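On GCP, for example, provisioning and attaching a GPD to a node might look like this (disk name, node name, and zone are placeholders):

```shell
# Illustrative GPD provisioning; resource names are placeholders.
gcloud compute disks create openebs-disk-1 --size=100GB --zone=us-central1-a
gcloud compute instances attach-disk gke-node-1 \
  --disk=openebs-disk-1 --zone=us-central1-a
```

The equivalent EBS (awscli), Packet, or Azure CLI calls would be used on the other platforms.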
OpenEBS Deployment
Deploys the customized/preconditioned OpenEBS Operator manifest (based on the baseline commit) on the cluster, thereby setting up the control plane and preparing the default storage pool resources. The logging infrastructure (fluentd) is also set up on the newly created cluster.
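This stage reduces to a few kubectl applies; the manifest file names below are hypothetical:

```shell
# Assumes a preconditioned operator manifest checked into the e2e repo.
kubectl apply -f openebs-operator.yaml       # control plane + default storage pools
kubectl apply -f fluentd-daemonset.yaml      # logging infrastructure (hypothetical name)
kubectl get pods -n openebs                  # verify the control-plane components are up
```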
Stateful Application Deployment
The OpenEBS e2e suite verifies interoperability with several standard stateful applications, such as Percona-MySQL, MongoDB, Cassandra, PostgreSQL, Jupyter, Prometheus, Jenkins, and Redis. These applications are deployed with OpenEBS StorageClasses (tuned for each app's storage requirements). Typically, two versions of most apps are deployed, one each on the Jiva and cStor storage engines. Each application is accompanied by a load-generator job that simulates client operations/real-world workloads.
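An app-tuned StorageClass and a claim against it might be sketched as below. The annotation key, provisioner name, and resource names are illustrative assumptions, not the exact OpenEBS manifests:

```yaml
# Illustrative app-tuned StorageClass for a Jiva-backed Percona deployment;
# annotation/provisioner names are placeholders.
apiVersion: storage.k8s.io/v1
kind: StorageClass
metadata:
  name: openebs-percona
  annotations:
    openebs.io/jiva-replica-count: "3"
provisioner: openebs.io/provisioner-iscsi
---
apiVersion: v1
kind: PersistentVolumeClaim
metadata:
  name: percona-claim
spec:
  storageClassName: openebs-percona
  accessModes: ["ReadWriteOnce"]
  resources:
    requests:
      storage: 5G
```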
App Functionality Tests
Each deployed application is subjected to specific behavioural tests, such as replica scaling, upgrades, storage resize, app replica re-deployment, and storage affinity, most of which are common day-1/day-2 operations.
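Two of these operations, sketched with kubectl against hypothetical resource names:

```shell
# Illustrative day-2 operations exercised by the functionality tests.
kubectl scale statefulset cassandra --replicas=5            # app replica scale
kubectl patch pvc percona-claim -p \
  '{"spec":{"resources":{"requests":{"storage":"10G"}}}}'   # storage resize
```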
Storage/Persistent Volume Chaos Tests
The PV components, such as the controller/replica pods, are subjected to chaos (pod crash/kill, lossy networks, disconnects) using tools such as chaoskube and pumba, as well as Kubernetes APIs (via kubectl), to verify data availability and application liveness.
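A kubectl-driven variant of such an experiment might kill a replica pod and then probe the application; the label selector and liveness probe below are hypothetical:

```shell
# Kill a (hypothetically labeled) Jiva replica pod with no grace period,
# then confirm the application still serves requests.
kubectl delete pod -n openebs -l openebs.io/replica=jiva --grace-period=0
kubectl exec percona-0 -- mysqladmin ping    # liveness check (placeholder creds/pod)
```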
Infrastructure Chaos Tests
The cluster components, such as storage pools, nodes, and disks, are subjected to failures using Kubernetes APIs (forced eviction, cordon, drain) as well as platform/provider-specific APIs (gcloud, awscli, packet) to verify data persistence and application liveness.
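For example, a node-level failure can be induced via kubectl and a disk-level failure via the provider CLI; node and disk names below are placeholders:

```shell
# Evict workloads from the node hosting a storage pool pod.
kubectl cordon gke-node-1
kubectl drain gke-node-1 --ignore-daemonsets --delete-local-data

# Platform-level disk failure, e.g. on GCP (resource names are placeholders).
gcloud compute instances detach-disk gke-node-1 --disk=openebs-disk-1
```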
Stateful Application Cleanup
The deployed applications are deleted in this stage, thereby verifying the de-provisioning and cleanup functionality of the OpenEBS control plane.
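The cleanup check amounts to deleting the app and its claim, then confirming the backing PV is gone; manifest and resource names are hypothetical:

```shell
# Delete the app and its claim, then confirm the PV is de-provisioned.
kubectl delete -f percona-deployment.yaml
kubectl delete pvc percona-claim
kubectl get pv | grep percona    # should return nothing once cleanup completes
```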
Cluster Cleanup
The cluster resources (nodes, disks, VPCs) are deleted by the platform-specific destroy playbooks. With this step, the e2e pipeline ends.
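The teardown mirrors the create step; as before, the playbook name and variable are illustrative:

```shell
# Hypothetical teardown invocation, mirroring the cluster-creation playbook.
ansible-playbook delete-k8s-cluster.yml --extra-vars "cluster_name=e2e-ci-01"
```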