Deploying GCXI using OpenShift
Deploy Genesys CX Insights using Red Hat OpenShift.
Please note: Until Genesys further extends its support of OpenShift on the Genesys Engage cloud private edition platform, Genesys CX Insights supports only the basic installation of containers on customer-operated OpenShift clusters. Customers are responsible for deploying and maintaining their OpenShift clusters; Genesys provides support only for issues related to GCXI containers.
This is an example scenario — This page provides a high-level outline, illustrating one scenario to help you visualize the overall process. Genesys does not provide support for OpenShift or other third-party products, so you must have knowledge of OpenShift and other products to complete this type of installation. GCXI is known to work with OpenShift Cluster 4.5.16, which is described on this page.
Prerequisites for deploying using OpenShift
Before you begin, ensure that:
- Your OpenShift cluster is configured and running in a suitable environment, with nodes in the Ready state.
- OpenShift client and Helm-3 are installed on the host where the deployment will run.
- Properly tag the gcxi and gcxi_control images, and load them into the registry. During deployment, OpenShift pulls the images from the registry to each OpenShift worker node.
- On each worker node, values are set for kernel.sem and vm.max_map_count, as required by MicroStrategy. For example:
echo "kernel.sem = 250 1024000 250 4096" >> /etc/sysctl.conf
echo "vm.max_map_count = 5242880" >> /etc/sysctl.conf
sysctl -p
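Note that the echo commands above append unconditionally, so running them twice leaves duplicate lines in /etc/sysctl.conf. The following is a minimal idempotent sketch of the same step; the file path defaults to a local scratch file here so you can try it safely, and on a real worker node you would set SYSCTL_CONF=/etc/sysctl.conf and run it as root:

```shell
#!/bin/sh
# Append each kernel setting only if it is not already present.
# SYSCTL_CONF defaults to a local scratch file for safe experimentation;
# on a real worker node, set SYSCTL_CONF=/etc/sysctl.conf and run as root.
SYSCTL_CONF="${SYSCTL_CONF:-./sysctl-gcxi.conf}"
touch "$SYSCTL_CONF"

add_setting() {
    key="$1"; value="$2"
    if ! grep -q "^${key}[[:space:]]*=" "$SYSCTL_CONF"; then
        echo "${key} = ${value}" >> "$SYSCTL_CONF"
    fi
}

add_setting "kernel.sem" "250 1024000 250 4096"
add_setting "vm.max_map_count" "5242880"
# sysctl -p "$SYSCTL_CONF"   # reload the settings (requires root)
```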
PVCs required by GCXI

| Mount Name | Mount Path (inside container) | Description | Access Type | Default Mount Point on Host (can be changed through values; these directories MUST pre-exist on your host to accommodate the local provisioner) | Must be Shared across Nodes? | Required Node Label (applies to default Local PVs setup) |
|---|---|---|---|---|---|---|
| gcxi-backup | /genesys/gcxi_shared/backup | Backups. Used by control container / jobs. | RWX | /genesys/gcxi/backup Can be overwritten by: Values.gcxi.local.pv.backup.path | Not necessarily. | gcxi/local-pv-gcxi-backup = "true" |
| gcxi-log | /mnt/log | MSTR logs. Used by main container. The Chart allows log volumes of legacy hostPath type; this scenario is the default. | RWX | /mnt/log/gcxi subPathExpr: $(POD_NAME) | Not necessarily. | gcxi/local-pv-gcxi-log = "true" Node label is not required if you are using hostPath volumes for logs. |
| gcxi-postgres | /var/lib/postgresql/data | Meta DB volume. Used by Postgres container, if deployed. | RWO | /genesys/gcxi/shared Can be overwritten by: Values.gcxi.local.pv.postgres.path | Yes, unless you tie the PostgreSQL container to a particular node. | gcxi/local-pv-postgres-data = "true" |
| gcxi-share | /genesys/gcxi_share | MSTR shared caches and cubes. Used by main container. | RWX | /genesys/gcxi/data subPathExpr: $(POD_NAME) Can be overwritten by: Values.gcxi.local.pv.share.path | Yes | gcxi/local-pv-gcxi-share = "true" |
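In a default local-PV setup, each of the four volumes needs a pre-created PersistentVolume. The following is a minimal sketch for the backup volume only, assuming the node label and host path from the table above and the volume name gcxi-backup-pv used later in values-test.yaml; the capacity and reclaim policy are illustrative, not values mandated by the product:

```yaml
apiVersion: v1
kind: PersistentVolume
metadata:
  name: gcxi-backup-pv
spec:
  capacity:
    storage: 10Gi               # illustrative size; adjust for your environment
  accessModes:
    - ReadWriteMany             # RWX, per the table above
  persistentVolumeReclaimPolicy: Retain
  local:
    path: /genesys/gcxi/backup  # must pre-exist on the labeled host
  nodeAffinity:                 # local volumes require node affinity
    required:
      nodeSelectorTerms:
        - matchExpressions:
            - key: gcxi/local-pv-gcxi-backup
              operator: In
              values: ["true"]
```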
Deploying GCXI with OpenShift
The following procedures describe example steps to deploy GCXI with OpenShift. The exact steps required will vary for your environment.
Procedure: 1. Preconfigure the environment
Purpose: Ensure that the environment is properly prepared for deployment.
Steps
- Ensure that the GCXI project has been created.
- Ensure that four PersistentVolumes (PV) have been created. See the table PVCs required by GCXI.
- The GCXI container is configured with the following users:
root:root (0:0)
genesys:genesys (500:500)
- You can use either of these users to run the container; to allow the UID=500 user to run pods, your cluster's security context constraints (SCCs) must permit that UID. Running the container with any other user causes an error such as:
"500 is not an allowed group spec.containers[0].securityContext.runAsUser: Invalid value: 500: must be in the ranges: [1000570000, 1000579999]"
- Genesys recommends the preceding method for all production environments. For test environments only, execute the following command if you wish to run pods as any user (this is not recommended for production):
oc adm policy add-scc-to-user anyuid -z default
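For production, a narrower alternative to granting anyuid is a dedicated SecurityContextConstraints object that permits only UID 500. The following is a minimal sketch, not taken from the product documentation; the SCC name and the service account binding are assumptions for your environment:

```yaml
apiVersion: security.openshift.io/v1
kind: SecurityContextConstraints
metadata:
  name: gcxi-scc              # hypothetical name
runAsUser:
  type: MustRunAsRange        # restrict pods to the genesys (500) user
  uidRangeMin: 500
  uidRangeMax: 500
seLinuxContext:
  type: RunAsAny
fsGroup:
  type: RunAsAny
supplementalGroups:
  type: RunAsAny
users:
  - system:serviceaccount:gcxi:default   # hypothetical: the project's default service account
```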
Procedure: 2. Prepare for deployment
Purpose: Prepare the environment, and gather files needed for deployment.
Steps
- On the control plane node, create a folder named helm.
- Download the tar.gz archive of Helm charts into the helm folder, and extract the archive into a subfolder called helm/gcxi.
- View the file helm/gcxi/Chart.yaml, and ensure that the appVersion is set to the desired GCXI version.
- Open the file helm/gcxi/values.yaml, and follow the instructions it provides to guide you in creating a new file, values-test.yaml, with appropriate settings. Save the new file in the helm folder.
- For example, the following content in the values-test.yaml file is appropriate for a simple deployment that runs PostgreSQL inside the container and uses PersistentVolumes named gcxi-log-pv, gcxi-backup-pv, gcxi-share-pv, and gcxi-postgres-pv (created in Step 2 of Procedure: 1. Preconfigure the environment). Create content in the values-test.yaml file that is appropriate for your environment:
gcxi:
  env:
    GCXI_GIM_DB:
      DSNDEF: DSN_NAME=GCXI_GIM_DB;DB_TYPE=POSTGRESQL;DB_TYPE_EX=PostgreSQL;HOST=gim_db_host;PORT=5432;DB_NAME=gim_db;LOGIN=;PASSWORD=;DRV_TYPE=JDBC;GCXI_QENGINE=ON
      LOGIN: gim_login
      PASSWORD: gim_password
    IWD_DB:
      DSNDEF: DSN_NAME=IWD_DB;DB_TYPE=POSTGRESQL;DB_TYPE_EX=PostgreSQL;HOST=iwd_db_host;PORT=5432;DB_NAME=dm_gcxi;LOGIN=;PASSWORD=;DRV_TYPE=JDBC;GCXI_QENGINE=ON
      LOGIN: iwd_login
      PASSWORD: iwd_password
    PGDATA: /var/lib/postgresql/data/mydata4
  deployment:
    deployPostgres: true
    deployLocalPV: false
    useDynamicLogPV: false
    useHostPathLogInitContainer: true
    hostIPC: false
  imagePullPolicy:
    worker: IfNotPresent
    control: IfNotPresent
  replicas:
    worker: 2
  images:
    postgres:
      version: 11
  pvc:
    log:
      volumeName: gcxi-log-pv
    backup:
      volumeName: gcxi-backup-pv
    share:
      volumeName: gcxi-share-pv
    postgres:
      volumeName: gcxi-postgres-pv
Procedure: 3. Deploy GCXI
Purpose: Deploy GCXI. This procedure provides steps for environments without LDAP. For environments that include LDAP (or other features not supported in values.yaml), you can pass container environment variables such as MSTR_WEB_LDAP_ON=true using the gcxi.envvars file (for example: --set-file gcxi.envext=gcxi.envvars).
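For example, assuming the gcxi.envvars file uses one VARIABLE=value entry per line, a minimal file enabling LDAP might contain just the variable named above; any further variables depend on your environment:

```
MSTR_WEB_LDAP_ON=true
```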
Steps
- Log in to OpenShift cluster from the host where you will run deployment; for example, by executing the following command:
oc login --token <token> --server <url of api server>
- Execute the following command to make the GCXI project the default:
oc project gcxi
- For debug purposes, execute the following command to render templates without installing:
helm template --debug -f values-test.yaml gcxi-helm gcxi/
- Kubernetes descriptors are displayed. The values you see are generated from Helm templates, based on settings from values.yaml and values-test.yaml. Ensure that no errors are displayed; you will later apply this configuration to your Kubernetes cluster.
- To deploy GCXI, execute the following command:
helm install --debug --namespace gcxi --create-namespace -f values-test.yaml gcxi-oc gcxi/
- This process takes several minutes. Wait until all objects are created and allocated; the Kubernetes descriptors applied to the environment are then displayed.
- To check the installed Helm release, execute the following command:
helm list --all-namespaces
- To check the GCXI project status, execute the following command:
oc status
- To check GCXI OpenShift objects created by Helm, execute the following command:
oc get all -n gcxi
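The STATUS column in the oc get output is the quickest health signal. As an illustration only (the pod names and sample output below are hypothetical, not from the product), a small shell filter can count pods that are not yet Running or Completed; with real output you would pipe oc get po -n gcxi into it:

```shell
#!/bin/sh
# Count pods whose STATUS column (field 3) is neither Running nor Completed.
# Usage with a live cluster: oc get po -n gcxi | check_pods
check_pods() {
    awk 'NR > 1 && $3 != "Running" && $3 != "Completed" { bad++ } END { print bad + 0 }'
}

# Hypothetical sample of `oc get po -n gcxi` output:
check_pods <<'EOF'
NAME              READY   STATUS    RESTARTS   AGE
gcxi-0            1/1     Running   0          5m
gcxi-1            1/1     Running   0          5m
gcxi-postgres-0   0/1     Pending   0          5m
EOF
# prints 1 (gcxi-postgres-0 is Pending)
```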
Maintenance Procedures
This section provides additional procedures, such as troubleshooting steps.
Procedure: Troubleshooting
Purpose: Use the instructions in this section only if you encounter errors or other difficulties. Problems with the deployment are most often associated with the following three kinds of objects:
- PVs
- PVCs
- pods
Steps
- To list the objects that might cause problems, execute the following commands:
oc get pv -o wide
oc get pvc -o wide -n gcxi
oc get po -o wide -n gcxi
- Examine the output from each get command.
- If any of the objects are in a non-ready state (for example, Unbound (PVCs only), Pending, or CrashLoopBackOff), execute the following command to inspect the object more closely using oc describe:
oc describe <type> <name>
- For example:
oc describe po gcxi-0
- In the describe output, inspect the section Events.
Procedure: Uninstall GCXI
Purpose: Remove GCXI from the environment.
Steps
- To remove GCXI, execute the following command:
helm uninstall gcxi-oc -n gcxi
