Configure a Cluster of Co-browse Servers
Genesys Co-browse supports load balancing using the "sticky cookies" technique. The Co-browse application sets the gcbSessionServer cookie every time:
- A Co-browse session is created
- A chat session is created
- A slave joins an existing session
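For illustration, the cookie value is the name of the Co-browse Server application that owns the session, so a sticky request from the browser might carry a header like the one below (the application name is hypothetical). The load balancer samples later on this page map this value back to the owning server:
Cookie: gcbSessionServer=Co-browse_Server_Node1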
Load balancing is enabled by configuring a cluster of Co-browse Servers. Cassandra is embedded in Genesys Co-browse Server, so when you set up a cluster of Genesys Co-browse servers, each server also acts as a Cassandra node. You configure the Cassandra nodes by setting configuration options in the cbdb section of the Co-browse Server application.
Complete the following steps to implement load balancing:
Set up 3 or more Co-browse Servers
To enable load balancing, you must set up a cluster of Co-browse Servers. For each Co-browse Server in your planned cluster, complete the procedures on the Install Genesys Co-browse Server page.
Configure the Cassandra cluster
Prerequisite: You have completed Set up 3 or more Co-browse Servers.
For the cluster to work correctly, you must configure your Cassandra cluster according to these rules:
- Only one Cassandra node per IP address is allowed.
- The rpcPort, nativeTransportPort, storagePort, and sslStoragePort values must be the same across all nodes in the Cassandra cluster.
- We recommend that you include at least 3 nodes in your Cassandra cluster. For embedded Cassandra, this means you must have at least 3 Co-browse servers to comply with this recommendation.
Start of procedure
Complete the steps below for each Co-browse application you created in Set up 3 or more Co-browse Servers:
- Open Genesys Administrator and navigate to PROVISIONING > Environment > Applications.
- Select the Co-browse application and click Edit.
- In the Options tab, locate the cbdb section and update the following options:
- listenAddress — Set this value to the IP of the node.
- rpcAddress — Set this value to the IP of the node.
- seedNodes — Set this value to the IP of the first node.
- replicationFactor — Set this value to a number less than the total number of nodes. The replication factor is the number of copies of data to keep in the cluster. Typically, three copies are enough for most scenarios (provided you have more than three nodes in your cluster), but you can increase this value to achieve higher consistency levels.
- cassandraClusterName (optional) — This name should be the same for each node.
- Click Save & Close.
End of procedure
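As a rough sketch, the cbdb options for the second node of a hypothetical three-node cluster might look like the following (the IP addresses, cluster name, and replication factor are placeholder values):
listenAddress = 192.168.0.2
rpcAddress = 192.168.0.2
seedNodes = 192.168.0.1
replicationFactor = 2
cassandraClusterName = CobrowseCluster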
Configure Initial Tokens for Cassandra
Prerequisite: You have calculated tokens for the nodes in your cluster. See http://www.datastax.com/docs/1.0/initialize/token_generation#calculating-tokens-for-a-single-data-center for details.
You can configure the initial token by putting the token directly in the cassandra.yaml file or by adding a placeholder in the cassandra.yaml file.
Complete one of the following procedures:
Add Initial Token to the cassandra.yaml File
Start of procedure
- For each Co-browse Server in your cluster, open the <co-browse_installation_directory>/server/etc/cassandra.yaml file with a text editor.
- Set initial_token to the token value.
- Save the file.
End of procedure
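For example, with the RandomPartitioner the tokens for a single data center are spaced evenly across the range 0 to 2^127, so node i (counting from 0) of an N-node cluster gets token i × 2^127 / N. Assuming a hypothetical three-node cluster, the initial_token line in each node's cassandra.yaml would be one of the following:
initial_token: 0
initial_token: 56713727820156410577229101238628035242
initial_token: 113427455640312821154458202477256070485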
Add a Placeholder to the cassandra.yaml File
Start of procedure
- Open the <co-browse_installation_directory>/server/etc/cassandra.yaml file for one of your Co-browse servers with a text editor.
- Add a placeholder for the initial_token value, such as $initialToken$.
- Save the file.
- Copy the modified cassandra.yaml template to all your Co-browse servers.
- Open Genesys Administrator and navigate to PROVISIONING > Environment > Applications.
- Complete the following steps for each Co-browse application:
- Select the Co-browse application and click Edit.
- In the Options tab, locate the cbdb section and add the new option. The option name is the name of your placeholder, such as initialToken, and the value is the token generated for the server.
- Click Save & Close.
End of procedure
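To make the placeholder approach concrete, suppose your placeholder is $initialToken$ and the second node's token is the value calculated above. The shared cassandra.yaml template and that node's cbdb option would then look roughly like this (the option name mirrors the placeholder and the value is illustrative):
# cassandra.yaml template, copied to every server
initial_token: $initialToken$
# cbdb section of the second node's Co-browse Server application
initialToken = 56713727820156410577229101238628035242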
Verify the Cassandra cluster
Prerequisite: You have a separate installation of Cassandra 1.0.6 (the same version used in Co-browse).
Start of procedure
- Start the first node and wait until it starts listening.
- Start all other nodes in your cluster.
- Open a command line and run <cassandra home>\bin\cassandra-cli.bat -h <ip of first node> -p <cassandra rpcPort>, where <ip of first node> is the IP of the first node in your cluster and <cassandra rpcPort> is the value you configured for rpcPort.
- Enter the following command: describe cluster. The output should look similar to the following:
[default@unknown] describe cluster;
Cluster Information:
Snitch: org.apache.cassandra.locator.SimpleSnitch
Partitioner: org.apache.cassandra.dht.RandomPartitioner
Schema versions:
6c960880-1719-11e3-0000-242d50cf1fbf: [192.168.00.1, 192.168.00.2, 192.168.00.3]
The list of IP addresses in square brackets ([192.168.00.1, 192.168.00.2 ...]) should match all the nodes in your cluster.
End of procedure
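Optionally, you can also check that each node owns the token you assigned by running nodetool from the same standalone Cassandra installation; the ring command prints one token per node. The JMX port shown here is a placeholder for whatever port your nodes expose:
<cassandra home>\bin\nodetool.bat -h <ip of any node> -p <jmx_port> ring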
Configure the load balancer
Your load balancer configuration will depend on which load balancer you implement. Below are two sample configurations for load balancing with Nginx:
- The first sample keeps connections secure with HTTPS both from the browsers to the load balancer and from the load balancer to the servers. It also shows an example of a High Availability configuration.
- The second sample uses the SSL Acceleration technique, where HTTPS is used only from the browsers to the load balancers; plain HTTP is used from the load balancer to the Co-browse servers.
Sample 1
# Basic configuration for load balancing 2 or more Co-Browse servers.
# All nodes are listed 2 times: in upstream and map directives.
# Co-browse applications are responsible for setting the "gcbSessionServer" cookie
# with one of the values listed in map directive. These values are names of
# applications in config server.
# This (default) variant uses HTTPS (if browser request is HTTPS) for connections
# both from browser to load balancer and from load balancer to Co-Browse servers.
# For another version with HTTPS only from browser to LB, see nginxSSLAccelerated.conf
# IMPORTANT!
# This configuration is not intended for production use!
# It is merely an example of how this functionality can be achieved.
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
# to handle longer names of Co-browse server applications
map_hash_bucket_size 64;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for" "$upstream_addr"';
access_log logs/nginx_access.log main;
error_log logs/nginx_error.log warn;
upstream http_cobrowse_cluster {
server 192.168.73.210:8700 fail_timeout=5s;
server 192.168.73.210:8701 fail_timeout=5s;
}
upstream https_cobrowse_cluster {
server 192.168.73.210:8743 fail_timeout=5s;
server 192.168.73.210:8744 fail_timeout=5s;
}
map $cookie_gcbSessionServer $http_sticky_backend {
default 0;
.CB_Server_Egor 192.168.73.210:8700;
.CB_Server_Egor_2 192.168.73.210:8701;
}
map $cookie_gcbSessionServer $https_sticky_backend {
default 0;
.CB_Server_Egor 192.168.73.210:8743;
.CB_Server_Egor_2 192.168.73.210:8744;
}
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 8080;
listen 8083 ssl;
ssl_certificate cobrowse.unsigned.crt;
ssl_certificate_key cobrowse.unsigned.key;
location @fallback {
proxy_pass http://http_cobrowse_cluster;
}
location /cobrowse {
# Allow websockets, see http://nginx.org/en/docs/http/websocket.html
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
# Increase buffer sizes to find room for DOM and CSS messages
proxy_buffers 8 2m;
proxy_buffer_size 10m;
proxy_busy_buffers_size 10m;
# If Co-browse server doesn't respond in 5 seconds, consider it dead
# (a 504 will fire and be caught by error_page directive for fallback).
# This timeout can be eliminated using "health checks" functionality
# available in Nginx PLUS or via 3rd party plugins. See the following links:
# http://nginx.com/products/application-health-checks/
# http://wiki.nginx.org/NginxHttpHealthcheckModule
# https://github.com/cep21/healthcheck_nginx_upstreams
# https://github.com/yaoweibin/nginx_upstream_check_module
proxy_connect_timeout 5s;
# Fall back if server responds incorrectly
error_page 502 = @fallback;
# or if doesn't respond at all.
error_page 504 = @fallback;
# Create a map of choices
# see https://gist.github.com/jrom/1760790
if ($scheme = 'http') {
set $test HTTP;
}
if ($scheme = 'https') {
set $test HTTPS;
}
if ($http_sticky_backend) {
set $test "${test}-STICKY";
}
if ($test = HTTP-STICKY) {
proxy_pass http://$http_sticky_backend$uri?$args;
break;
}
if ($test = HTTPS-STICKY) {
proxy_pass https://$https_sticky_backend$uri?$args;
break;
}
if ($test = HTTP) {
proxy_pass http://http_cobrowse_cluster;
break;
}
if ($test = HTTPS) {
proxy_pass https://https_cobrowse_cluster;
break;
}
return 500 "Misconfiguration";
}
}
}
Sample 2
# Basic configuration for load balancing 2 or more Co-browse servers.
# Nodes are listed 2 times: in upstream and map directives.
# Co-browse applications are responsible for setting the "gcbSessionServer" cookie
# with one of the values listed in map directive. These values are names of
# applications in config server.
# Note that this version uses "SSL acceleration" (http://en.wikipedia.org/wiki/SSL_Acceleration,
# http://en.wikipedia.org/wiki/Load_balancing_(computing)#Load_balancer_features):
# load balancer terminates SSL connections, passing HTTPS requests as HTTP to the servers.
events {
worker_connections 1024;
}
http {
include mime.types;
default_type application/octet-stream;
log_format main '$remote_addr - $remote_user [$time_local] "$request" '
'$status $body_bytes_sent "$http_referer" '
'"$http_user_agent" "$http_x_forwarded_for"';
access_log logs/nginx_access.log main;
error_log logs/nginx_error.log debug;
upstream cobrowse_cluster {
server 192.168.73.95:8700;
server 192.168.73.95:8701;
}
map $cookie_gcbSessionServer $sticky_backend {
default 0;
.CB_Server_1 192.168.73.95:8700;
.CB_Server_2 192.168.73.95:8701;
}
map $http_upgrade $connection_upgrade {
default upgrade;
'' close;
}
server {
listen 8080;
listen 8083 ssl;
ssl_certificate cobrowse.unsigned.crt;
ssl_certificate_key cobrowse.unsigned.key;
location /cobrowse {
# Allow websockets, see http://nginx.org/en/docs/http/websocket.html
proxy_http_version 1.1;
proxy_set_header Upgrade $http_upgrade;
proxy_set_header Connection $connection_upgrade;
# Increase buffer sizes to find room for DOM and CSS messages
proxy_buffers 8 2m;
proxy_buffer_size 10m;
proxy_busy_buffers_size 10m;
if ($sticky_backend) {
proxy_pass http://$sticky_backend$uri?$args;
}
proxy_pass http://cobrowse_cluster;
}
}
}
Modify your instrumentation
You must modify the URLs in your Co-browse instrumentation scripts to point to your configured load balancer. See Website Instrumentation for details about modifying the script.
If you are using the Co-browse Proxy to instrument your site, you must also modify the URLs in the proxy's map.xml file. See Test with the Co-browse Proxy for details about modifying the XML file.
Configure the Co-browse Server applications
Modify the url or secureUrl option in the cluster section of all your Co-browse Server applications. If you use the secureUrl option, you must also set the useSecureConnection option to true. See cluster Section for details.
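For instance, assuming the load balancer from the samples above listens on port 8080 for HTTP and 8083 for HTTPS, the cluster section of each Co-browse Server application might contain values along these lines (the host name and paths are placeholders; adjust them to your deployment):
url = http://<load_balancer_host>:8080/cobrowse
secureUrl = https://<load_balancer_host>:8083/cobrowse
useSecureConnection = true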
You must also set up a similar configuration for the Genesys Co-browse Plug-in for Interaction Workspace. To support this, you might consider setting up two load balancers:
- public — This load balancer should have a limited set of Co-browse resources. For example, it should not include session history resources.
- private — This load balancer should have all Co-browse resources and it should be placed in the network so that it is accessible only from the corporate intranet. It should only be used for internal applications, such as Interaction Workspace.
Complete the procedure below to configure the plug-in to support the Co-browse cluster:
Configure the Co-browse Plug-in for Interaction Workspace
Prerequisites: You have installed the Genesys Co-browse Plug-in for Interaction Workspace.
Start of procedure
- Open Genesys Administrator and navigate to PROVISIONING > Environment > Applications.
- Select the Interaction Workspace Application.
- In the Options tab, add the following options in the cobrowse section:
For each option in the list above:
- Click New.
- In the New Option window, enter cobrowse in the Section text area and complete the Name and Value fields.
- Click OK.
- Click Save & Close.
End of procedure
Start and test the cluster
Complete this procedure to launch your servers and validate that load balancing is working correctly.
Start of procedure
- Start your load balancer.
- Start the Co-Browse servers.
- Check the value of the gcbSessionServer cookie for the master:
- Open a web browser and clear the cookies.
- Go to the website where you instrumented Genesys Co-browse and start a Co-browse or Chat session.
- Check your cookies and make sure that the value of the gcbSessionServer cookie is the name of one of your Co-browse applications.
- Check the value of the gcbSessionServer cookie for the slave:
- Open a new browser and clear the cookies.
- Open the slave from the load balancer. For example, http://<load_balancer_IP>:<load_balancer_listening_port>/cobrowse/slave.html.
- Enter the session ID and join.
- Make sure that the gcbSessionServer cookie is set to the same name and value as the master.
End of procedure
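As an optional extra check, assuming you used a log_format similar to Sample 1 (which records $upstream_addr), you can send a request with the sticky cookie pre-set and confirm in the access log that it reached the expected backend. The cookie value and URL below are placeholders:
curl -k --cookie "gcbSessionServer=Co-browse_Server_Node1" https://<load_balancer_IP>:8083/cobrowse/slave.html
# then inspect logs/nginx_access.log and check the upstream address column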