Deploying GCXI using Helm-3
Beginning with release 9.0.016, Genesys CX Insights supports deployment using Helm Charts on Kubernetes clusters. Helm Charts provide an alternative to Kubernetes descriptors.
Prerequisites for deploying using Helm
This page provides an outline of one scenario, to help you visualize the overall process. Genesys does not provide support for Helm or other third-party products, so you must have knowledge of Helm and related products to complete this installation. This example is suitable for a small deployment using Kubernetes clusters on Red Hat Enterprise Linux; the steps for CentOS are similar. The following prerequisites apply:
- Your Kubernetes cluster must be configured and running in a suitable environment, with nodes in Ready state, as described in Kubernetes documentation.
- Helm-3 is installed on the Control plane node, as described in the Helm documentation.
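For example, one quick way to confirm that Helm 3 is available on the Control plane node:
helm version --short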
- The images gcxi and gcxi_control are loaded and tagged on each worker node.
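For example, if Docker is the container runtime and the images were delivered as tar archives (the archive names below are illustrative), you might load them on each worker node and then verify that they carry the expected tags:
docker load -i gcxi.tar.gz
docker load -i gcxi_control.tar.gz
docker images | grep gcxi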
- On each worker node, ensure that values are set for kernel.sem and vm.max_map_count, as required by MicroStrategy. For example:
echo "kernel.sem = 250 1024000 250 4096" >> /etc/sysctl.conf echo "vm.max_map_count = 5242880" >> /etc/sysctl.conf sysctl -p
Procedure: 1. Preparing for installation
Purpose: Prepare the environment, and gather files needed for deployment.
Steps
- On the Control plane node, create a folder: helm.
- Download the tar.gz archive of Helm charts into the helm folder, and extract the archive into a subfolder called helm/gcxi.
- View the file helm/gcxi/Chart.yaml, and ensure that the appVersion is set to the desired GCXI version.
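For example, one quick way to check the value:
grep appVersion helm/gcxi/Chart.yaml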
- Open the file helm/gcxi/values.yaml, and follow the instructions it provides to guide you in creating a new file, values-test.yaml with appropriate settings. Save the new file in the helm folder.
- For example, the following content in the values-test.yaml file is appropriate for a simple deployment using PostgreSQL inside the container and the local PersistentVolume type. Create content appropriate for your environment in the values-test.yaml file:
gcxi:
  env:
    GCXI_GIM_DB:
      DSNDEF: DSN_NAME=GCXI_GIM_DB;DB_TYPE=POSTGRESQL;DB_TYPE_EX=PostgreSQL;HOST=gim_db_host;PORT=5432;DB_NAME=gim_login;LOGIN=;PASSWORD=;DRV_TYPE=JDBC;GCXI_QENGINE=ON
      LOGIN: gim_login
      PASSWORD: gim_password
    IWD_DB:
      DSNDEF: DSN_NAME=IWD_DB;DB_TYPE=POSTGRESQL;DB_TYPE_EX=PostgreSQL;HOST=iwd_db_host;PORT=5432;DB_NAME=dm_gcxi;LOGIN=;PASSWORD=;DRV_TYPE=JDBC;GCXI_QENGINE=ON
      LOGIN: iwd_login
      PASSWORD: iwd_password
  deployment:
    deployPostgres: true
    deployLocalPV: true
    useDynamicLogPV: false
  imagePullPolicy:
    worker: IfNotPresent
    control: IfNotPresent
  replicas:
    worker: 1
  images:
    postgres:
      version: 11
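Optionally, you can run a basic validation of the chart together with your overrides before deploying (assuming the helm folder is your working directory):
helm lint -f values-test.yaml gcxi/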
Procedure: 2. Label Nodes
Purpose: Label nodes to allocate PersistentVolumes (PVs). GCXI deployment requires three PVs, plus one for PostgreSQL deployment. All four PVs are linked to GCXI pods by means of PersistentVolumeClaims (PVCs). You can create PVs in advance (set deployLocalPV: false) or during GCXI installation (set deployLocalPV: true). In this example, we allow PVs to be created on all nodes.
Steps
- Prepare nodes to keep backups — label all worker nodes, and create the folder /genesys/gcxi/backup/ on each one:
- From the Kubernetes Control plane node, execute the following command for each worker node:
kubectl label nodes <<worker node>> gcxi/local-pv-gcxi-backup=true
- On each Kubernetes worker node, execute the following command:
mkdir /genesys/gcxi/backup/
- Prepare nodes to keep logs — label all worker nodes, and create the folder /mnt/log/gcxi on each one:
- From the Kubernetes Control plane node, execute the following command for each worker node:
kubectl label nodes <<worker node>> gcxi/local-pv-gcxi-log=true
- On each Kubernetes worker node, execute the following command:
mkdir /mnt/log/gcxi
- Prepare nodes to keep MicroStrategy’s cache, cubes, and so on — label all worker nodes, and create the folder /genesys/gcxi/shared. This folder must be shared across worker nodes:
- From the Kubernetes Control plane node, execute the following command for each worker node:
kubectl label nodes <<worker node>> gcxi/local-pv-gcxi-share=true
- On each Kubernetes worker node, execute the following command:
mkdir /genesys/gcxi/shared/
- Prepare nodes to keep PostgreSQL database files — either label all worker nodes and create the shared folder /genesys/gcxi/data, or label one node (so that PostgreSQL runs only on that node) and create the folder on it:
- From the Kubernetes Control plane node, execute the following command for the worker node where PostgreSQL will run:
kubectl label nodes <<worker node>> gcxi/local-pv-postgres-data=true
- On the Kubernetes worker node where PostgreSQL will run, execute the following command:
mkdir /genesys/gcxi/data
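To confirm that the labels were applied as expected, you can list them from the Control plane node:
kubectl get nodes -L gcxi/local-pv-gcxi-backup,gcxi/local-pv-gcxi-log,gcxi/local-pv-gcxi-share,gcxi/local-pv-postgres-data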
Procedure: 3. Deploying GCXI
Purpose: Deploy GCXI.
Prerequisites
- On each worker node, execute the following commands to set ownership of the folders that back the PersistentVolumes, so that the GCXI containers can write to them:
chown -R 500 /genesys/gcxi/backup
chown -R 500 /genesys/gcxi/shared
chown -R 500 /mnt/log/gcxi
chown -R 500 /genesys/gcxi/data
- For debug purposes, execute the following command to render templates without installing:
helm template --debug -f values-test.yaml gcxi-helm gcxi/
Kubernetes descriptors are displayed. The values you see are generated from the Helm templates, based on the settings in values.yaml and values-test.yaml.
- Review the descriptors. You should not see any errors; you will later apply this configuration to your Kubernetes cluster.
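If the output is long, you can redirect it to a file for review (the file name is illustrative):
helm template --debug -f values-test.yaml gcxi-helm gcxi/ > gcxi-rendered.yaml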
Steps
- To reset the current context to default namespace, execute the following command:
kubectl config set-context $(kubectl config current-context) --namespace=default
- To deploy GCXI, execute the following command:
helm install --debug --namespace gcxi --create-namespace -f values-test.yaml gcxi-helm gcxi/
- This process takes several minutes. Wait until all objects are created and allocated; the Kubernetes descriptors applied to the environment are then displayed.
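To watch the pods start up, you can run the following command from the Control plane node (press Ctrl+C to stop watching):
kubectl get pods -n gcxi -w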
- To check the installed Helm release, execute the following command:
helm list --all-namespaces
- To check the GCXI Kubernetes objects created by Helm in the gcxi namespace, execute the following command:
kubectl get all -n gcxi
Procedure: Troubleshooting
Purpose: Use the instructions in this section only if you encounter errors or other difficulties.
Steps
Problems with the deployment are most often associated with the following three kinds of objects:
- PVs
- PVCs
- pods
- To list the objects that might cause problems, execute the following commands:
kubectl get pv -n gcxi -o wide
kubectl get pvc -n gcxi -o wide
kubectl get po -n gcxi -o wide
- Examine the output from each get command.
- If any of the objects are in a non-ready state (for example, Unbound (PVCs only), Pending, or CrashLoopBackOff), execute the following command to inspect the object more closely using describe:
kubectl describe <type> <name>
- For example:
kubectl describe po gcxi-0
- In the describe output, inspect the section Events.
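If describe does not reveal the cause, pod logs often do. Continuing the example above, the following commands show the current logs and the logs of the previous container instance (useful for pods in CrashLoopBackOff):
kubectl logs gcxi-0 -n gcxi
kubectl logs gcxi-0 -n gcxi --previous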
Procedure: Uninstall GCXI
Purpose: Remove the GCXI deployment.
Steps
- To remove GCXI, execute the following command:
helm uninstall gcxi-helm -n gcxi
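Uninstalling the Helm release does not necessarily remove the gcxi namespace or the data in the locally provisioned folders on the worker nodes. If you are sure nothing else in the cluster uses the namespace, you can delete it explicitly:
kubectl delete namespace gcxi
Data in local folders such as /genesys/gcxi/data is not removed automatically; delete those folders manually on the worker nodes if they are no longer needed.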
