Version: 3.8.0

ChaosCenter Cluster Scope Installation


Before deploying LitmusChaos, make sure the following prerequisites are met:

  • Kubernetes 1.17 or later

  • A Persistent Volume of 20GB


    We recommend a Persistent Volume (PV) of 20GB, though you can start with 1GB for test purposes as well. This PV is used as persistent storage for the chaos config and chaos metrics in the Portal. By default, the Litmus install uses the default storage class to allocate the PV.

  • Helm3 or kubectl
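As a quick pre-flight check, the prerequisites above can be verified from the command line. A minimal sketch, assuming kubectl and helm are already on your PATH:

```shell
# Parse the server version reported by kubectl and check the 1.17+ requirement
version="$(kubectl version 2>/dev/null | grep -o 'v1\.[0-9]*' | head -n1)"
minor="${version#v1.}"
if [ -n "$minor" ] && [ "$minor" -ge 17 ]; then
  echo "Kubernetes $version meets the 1.17+ requirement"
fi

# A default StorageClass must exist so Litmus can allocate its persistent volume
kubectl get storageclass

# Helm 3 is needed for the chart-based install
helm version --short
```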


Users looking to use Litmus for the first time have two options available to them today. One way is to use a hosted Litmus service like Harness Chaos Engineering SaaS. Alternatively, users looking for some more flexibility can install Litmus into their own Kubernetes cluster.

Users choosing the self-hosted option can refer to our Install and Configure docs for installing alternate versions and more detailed instructions.

Installation of Self-Hosted Litmus can be done using either of the below methods:
  • Helm3 chart
  • Kubectl yaml spec file

    Refer to the below details for Self-Hosted Litmus installation.

    Install Litmus using Helm

    The Helm chart installs all the required service account configuration along with the ChaosCenter.

    The following steps will help you install Litmus ChaosCenter via helm.

    Step-1: Add the litmus helm repository

    helm repo add litmuschaos https://litmuschaos.github.io/litmus-helm/
    helm repo list

    Step-2: Create the namespace on which you want to install Litmus ChaosCenter

    • The ChaosCenter can be placed in any namespace, but for this scenario we choose litmus as the namespace.
    kubectl create ns litmus

    Step-3: Install Litmus ChaosCenter

    helm install chaos litmuschaos/litmus --namespace=litmus --set portal.frontend.service.type=NodePort

    Note: If your Kubernetes cluster isn't local, you may not want to expose Litmus via NodePort. If so, remove the --set portal.frontend.service.type=NodePort option. To connect to the Litmus UI from your laptop, you can run kubectl port-forward -n litmus svc/chaos-litmus-frontend-service 9091:9091 and then open http://localhost:9091 in your browser.

    • Litmus helm chart depends on bitnami/mongodb helm chart, which uses a mongodb image not supported on ARM. If you want to install Litmus on an ARM-based server, please replace the default one with your custom mongodb arm image as shown below.

      helm install chaos litmuschaos/litmus --namespace=litmus \
      --set portal.frontend.service.type=NodePort \
      --set mongodb.image.registry=<put_registry> \
      --set mongodb.image.repository=<put_image_repository> \
      --set mongodb.image.tag=<put_image_tag>
    Expected Output
    NAME: chaos
    LAST DEPLOYED: Tue Jun 15 19:20:09 2021
    NAMESPACE: litmus
    STATUS: deployed
    TEST SUITE: None
    Thank you for installing litmus 😀

    Your release is named chaos and it's installed to namespace: litmus.

    Visit to find more info.

    Note: Litmus uses Kubernetes CRDs to define chaos intent. Helm3 handles CRDs better than Helm2. Before you start running a chaos experiment, verify if Litmus is installed correctly.
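One quick way to verify the install is to confirm that the Litmus CRDs were registered in the cluster; a sketch, assuming the chaosengines, chaosexperiments, and chaosresults definitions shipped with this chart version:

```shell
# List the CRDs that carry the chaos intent; you should see entries such as
# chaosengines.litmuschaos.io, chaosexperiments.litmuschaos.io and
# chaosresults.litmuschaos.io
kubectl get crds | grep litmuschaos.io
```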

    Install Litmus using kubectl

    In this method, users need to install MongoDB first via Helm and then apply the installation manifest. Follow the instructions here.

    Verify your installation

    Verify if the frontend, server, and database pods are running

    • Check the pods in the namespace where you installed Litmus:

      kubectl get pods -n litmus
      Expected Output
      NAME                                       READY   STATUS    RESTARTS   AGE
      litmusportal-server-6fd57cc89-6w5pn        1/1     Running   0          57s
      litmusportal-auth-server-7b596fff9-5s6g5   1/1     Running   0          57s
      litmusportal-frontend-55974fcf59-cxxrf     1/1     Running   0          58s
      my-release-mongodb-0                       1/1     Running   0          63s
      my-release-mongodb-1                       1/1     Running   0          63s
      my-release-mongodb-2                       1/1     Running   0          62s
      my-release-mongodb-arbiter-0               1/1     Running   0          64s

    • Check the services running in the namespace where you installed Litmus:

      kubectl get svc -n litmus
      Expected Output
      NAME                                  TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                         AGE
      chaos-exporter                        ClusterIP                <none>        8080/TCP                        23h
      litmusportal-auth-server-service      NodePort                 <none>        9003:32368/TCP,3030:31051/TCP   23h
      litmusportal-frontend-service         NodePort                 <none>        9091:30070/TCP                  23h
      litmusportal-server-service           NodePort                 <none>        9002:32455/TCP,8000:30722/TCP   23h
      my-release-mongodb-arbiter-headless   ClusterIP   None         <none>        27017/TCP                       23h
      my-release-mongodb-headless           ClusterIP   None         <none>        27017/TCP                       23h
      workflow-controller-metrics           ClusterIP                <none>        9090/TCP                        23h
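The pod check above can also be scripted rather than read by eye. A small sketch, assuming the litmus namespace, that counts pods not yet in the Running state (0 means the install has settled):

```shell
# Count pods whose STATUS column is not "Running" (expects 0 once ready)
not_running="$(kubectl get pods -n litmus --no-headers \
  | awk '$3 != "Running" {count++} END {print count+0}')"
echo "pods not Running: $not_running"

# Alternatively, block until every pod reports Ready
kubectl wait --for=condition=Ready pods --all -n litmus --timeout=180s
```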

    Accessing the ChaosCenter

    To set up and log in to ChaosCenter, expand the available services just created and copy the PORT of the litmusportal-frontend-service service:

    kubectl get svc -n litmus
    Expected Output
    NAME                               TYPE        CLUSTER-IP   EXTERNAL-IP   PORT(S)                         AGE
    litmusportal-frontend-service      NodePort                 <none>        9091:31846/TCP                  102s
    litmusportal-server-service        NodePort                 <none>        9002:31245/TCP,8000:32714/TCP   101s
    litmusportal-auth-server-service   NodePort                 <none>        9003:32618/TCP,3030:31899/TCP   101s
    mongo-service                      ClusterIP                <none>        27017/TCP                       101s
    mongo-headless-service             ClusterIP   None         <none>        27017/TCP                       101s

    Note: In this case, the PORT for litmusportal-frontend-service is 31846. Yours will be different.

    Once you have the PORT copied to your clipboard, use your node IP and the PORT in the form <NODEIP>:<PORT> to access the Litmus ChaosCenter.
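The PORT(S) value has the form <port>:<nodePort>/<protocol>, so the node port can also be pulled out with plain shell instead of copied by hand; a sketch using the sample value from the output above:

```shell
# Sample PORT(S) value for litmusportal-frontend-service (yours will differ)
ports="9091:31846/TCP"
nodeport="${ports#*:}"     # drop the service port and ':' -> 31846/TCP
nodeport="${nodeport%%/*}" # drop the '/TCP' suffix        -> 31846
echo "http://<NODEIP>:${nodeport}"
```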

    For example, <NODEIP>:31846, where <NODEIP> is your node's IP and 31846 is the frontend service PORT. If using a LoadBalancer, the only change would be to provide <LoadBalancerIP>:<PORT>. Learn more about how to access ChaosCenter with LoadBalancer.
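With a LoadBalancer service type, the external address can be read with a jsonpath query rather than copied from the console output; a sketch, assuming the frontend service name and port shown above:

```shell
# Read the LoadBalancer's external IP for the frontend service
lb_ip="$(kubectl get svc litmusportal-frontend-service -n litmus \
  -o jsonpath='{.status.loadBalancer.ingress[0].ip}')"
echo "http://${lb_ip}:9091"
```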

    You should now see the Login Page of Litmus ChaosCenter. The default credentials are:

    Username: admin
    Password: litmus

    By default, you are assigned a default project with Owner permissions.

    Learn more