Revision as of 17:00, October 23, 2015 by Bgrenon

Configure a Cluster of Co-browse Servers

Genesys Co-browse supports load balancing using Stickiness.

Load balancing is enabled by configuring a cluster of Co-browse Servers. Cassandra is embedded in Genesys Co-browse Server, so when you set up a cluster of Co-browse Servers, each server can also act as a Cassandra node. You configure the Cassandra nodes by setting configuration options in the cassandraEmbedded section of the Co-browse Server application.

Complete the following steps to implement load balancing:

8.5.000

For Co-browse 8.5.000, you must set up a cluster of Co-browse Servers to enable load balancing. For each Co-browse Server in your planned cluster, complete the procedures on the Install Genesys Co-browse Server page.

Important
Every Co-browse Server in the cluster generally plays the same role as the others, except that some embedded Cassandra nodes act as seed nodes. This means that, to see consistent behavior across the cluster regardless of which server serves a request, all Co-browse Servers should have the same options set in their Application objects in Configuration Server. The rule of thumb is to configure all cluster servers identically unless it is absolutely necessary to do otherwise (for example, a port is busy on a machine). This simplifies maintenance of production deployments.

8.5.001+

For Co-browse 8.5.001+, you must set up a cluster of Co-browse Nodes to enable load balancing. To do this, complete the procedures to create Application objects for a Co-browse Cluster and Co-browse Nodes. Follow the installation steps outlined in the 8.5.001+ tab in the Creating the Co-browse Server Application Object in Genesys Administrator section.

Configure the Cassandra cluster

Prerequisite: You have completed Set up 2 or more Co-browse Servers.

Tip
For a description of changes to Cassandra configuration in Co-browse 8.5.0, see What's New in Cassandra Configuration for 8.5.0?
Important

For the cluster to work correctly, you must configure your Cassandra cluster according to these rules:

  1. Only one Cassandra node per IP address is allowed.
  2. The rpcPort, nativeTransportPort, storagePort, and sslStoragePort values must be the same across all nodes in the Cassandra cluster.
  3. Genesys recommends that you include at least 3 nodes in your Cassandra cluster. For embedded Cassandra, this means you need at least 3 Co-browse Servers to comply with this recommendation.
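The three rules above can be checked mechanically before deployment. The following sketch illustrates such a pre-flight check; the node dictionaries are a hypothetical description format for illustration, not a Co-browse or Cassandra API:

```python
# Sketch of a pre-flight check for the three cluster rules above.
# The node dictionaries are a hypothetical description format, not a real API.

def validate_cluster(nodes):
    """Return a list of rule violations for a planned Cassandra cluster."""
    problems = []
    ips = [n["ip"] for n in nodes]
    if len(ips) != len(set(ips)):                      # rule 1: one node per IP
        problems.append("more than one Cassandra node on the same IP")
    for key in ("rpcPort", "nativeTransportPort", "storagePort", "sslStoragePort"):
        if len({n[key] for n in nodes}) > 1:           # rule 2: identical port values
            problems.append(key + " differs across nodes")
    if len(nodes) < 3:                                 # rule 3: at least 3 nodes
        problems.append("fewer than 3 nodes (recommended minimum)")
    return problems

# Example: a compliant 3-node plan produces no violations.
plan = [
    {"ip": "10.0.0.%d" % i, "rpcPort": 9160, "nativeTransportPort": 9042,
     "storagePort": 7000, "sslStoragePort": 7001}
    for i in (1, 2, 3)
]
```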

External Cassandra Cluster Setup

External Cassandra cluster deployment is described in detail in the official Cassandra documentation.

Embedded Cassandra Cluster Setup

An embedded Cassandra cluster is set up similarly to an external Cassandra cluster, except that embedded Cassandra node settings are provided either through Configuration Server options or through an external cassandra.yaml file.

Start of procedure

Complete the steps below for each Co-browse application you created in Set up 2 or more Co-browse Servers:

  1. Open Genesys Administrator and navigate to PROVISIONING > Environment > Applications.
  2. Select the Co-browse application and click Edit.
  3. In the Options tab, locate the cassandraEmbedded section and update the following options:
    1. listenAddress — Set this value to the IP address of the node (listen_address in cassandra.yaml).
    2. rpcAddress — Set this value to the IP address of the node (rpc_address in cassandra.yaml).
    3. seedNodes — Set this value to the IP address of the first node (seeds in cassandra.yaml).
    4. clusterName (optional) — This name should be the same for each node (cluster_name in cassandra.yaml).
  4. Click Save & Close.

End of procedure
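For example, the first node of a hypothetical three-node cluster might use values like the following in its cassandraEmbedded section (the addresses and cluster name are illustrative only):

```ini
[cassandraEmbedded]
listenAddress = 192.168.1.11
rpcAddress    = 192.168.1.11
seedNodes     = 192.168.1.11
clusterName   = CobrowseCluster
```

The other two nodes would set listenAddress and rpcAddress to their own IP addresses while keeping the same seedNodes and clusterName values, so that all nodes join the same cluster through the first (seed) node.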

Replication Strategy

By default, Co-browse Server activates NetworkTopologyStrategy as the replication strategy. NetworkTopologyStrategy is recommended for production Cassandra deployments and works together with GossipingPropertyFileSnitch, which relies on the cassandra-rackdc.properties file. Make sure the data center names defined in this file (one for each Cassandra node) correspond to the data center names defined in the replicationStrategyParams option.

The cassandra-rackdc.properties file location depends on whether you use an embedded or an external Cassandra cluster.
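For reference, cassandra-rackdc.properties is a short Java properties file read by GossipingPropertyFileSnitch. The data center name it declares must match the one used in replicationStrategyParams; the values below are illustrative only:

```properties
# cassandra-rackdc.properties (illustrative values)
dc=DC1
rack=RAC1
```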

Replication Factor

The replication factor configures the number of copies of data kept in the cluster; increasing it achieves higher redundancy. The minimum recommended replication factor is 3, which requires at least 3 nodes in your Cassandra cluster, because the replication factor must be less than or equal to the number of nodes. Set the replication factor in the replicationStrategyParams option.
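In Cassandra terms, the strategy and factor described above correspond to a keyspace replication setting like the following CQL statement. The keyspace and data center names here are hypothetical; use the names from your own deployment and replicationStrategyParams option:

```sql
-- Illustrative only: replication factor 3 in data center DC1,
-- using the NetworkTopologyStrategy described above.
ALTER KEYSPACE cobrowse
WITH replication = {'class': 'NetworkTopologyStrategy', 'DC1': 3};
```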

Verify the Cassandra cluster

Prerequisite: You have a separate installation of Cassandra 2.x (2.1.3+, the same version used in Co-browse).

Start of procedure

  1. Start the first node and wait until it starts listening.
  2. Start all other nodes in your cluster.
  3. Open a command line and run <cassandra home>\bin\cassandra-cli.bat -h <ip of first node> -p <cassandra rpcPort>, where <ip of first node> is the IP of the first node in your cluster and <cassandra rpcPort> is the value you configured for rpcPort.
  4. Enter the following command: describe cluster. The output should look similar to the following:
[default@unknown] describe cluster;
Cluster Information:
 Snitch: org.apache.cassandra.locator.SimpleSnitch
 Partitioner: org.apache.cassandra.dht.RandomPartitioner
 Schema versions:
 6c960880-1719-11e3-0000-242d50cf1fbf: [192.168.00.1, 192.168.00.2, 192.168.00.3]

The list of IP addresses in square brackets ([192.168.00.1, 192.168.00.2 ...]) should match all the nodes in your cluster.

End of procedure

Important
To achieve the best performance with Co-browse, Genesys highly recommends that you configure web socket support for your load balancer. If web sockets are unavailable, Co-browse still functions, but it uses other transports that perform significantly slower. If your load balancer does not support web sockets and you do not want to wait for Co-browse to automatically switch to another transport, you can use the disableWebSockets option for the master and the slave. For more information, see JavaScript Configuration API#disableWebSockets and Slave Configuration Section#disableWebSockets.

Your load balancer configuration will depend upon which load balancer you implement. Below are two sample configurations for load balancing with Nginx:

  • The first sample keeps connections secure with HTTPS both from browsers to load balancer and from load balancer to servers. It also shows an example of High Availability configuration.
  • The second sample uses the SSL Acceleration technique, where HTTPS is used only from the browsers to the load balancers; plain HTTP is used from the load balancer to the Co-browse servers.

Both examples assume cookie-based stickiness. If you use URL-based stickiness and the actual nodes are not publicly accessible, you may want to add logic to route publicly accessible URLs of Co-browse nodes (such as http://<load-balancer>?co-browse-node-id={node-id}) to the actual nodes. However, such configuration is beyond the scope of this Guide.
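The routing decision that both nginx samples implement can be summarized as: route to the sticky backend when the gcbSessionServer cookie maps to a known node, otherwise fall back to round-robin across the cluster. The following sketch illustrates that logic; the application names and addresses are illustrative only:

```python
# Sketch of the cookie-based stickiness decision the nginx samples implement.
# Application names and backend addresses are illustrative only.
import itertools

BACKENDS = {
    ".CB_Server_1": "192.168.73.95:8700",
    ".CB_Server_2": "192.168.73.95:8701",
}
_round_robin = itertools.cycle(BACKENDS.values())

def pick_backend(cookies):
    """Return the backend address for a request, honoring stickiness."""
    sticky = BACKENDS.get(cookies.get("gcbSessionServer", ""))
    return sticky if sticky is not None else next(_round_robin)
```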

Important
Due to Safari's strict cookie policy, Genesys highly recommends that you host the load balancer on the same domain as the website or on one of its sub-domains. Otherwise, chat and Co-browse stickiness cookies may be rejected as third-party cookies and the solution will not work: users will not be able to start a chat or begin co-browsing.
Important
These configurations are intended to be examples and might not represent best practices for Nginx configuration.

Sample 1

Important
This configuration uses a 5-second timeout for High Availability (if a server dies, the load balancer switches the client to another server only after 5 seconds). In production, this timeout can be eliminated using "health checks" functionality, available in Nginx PLUS or via third-party plug-ins; see the links in the comments of the sample configuration for more information.
# Basic configuration for load balancing 2 or more Co-browse servers.
# All nodes are listed 2 times: in the upstream and map directives.
# Co-browse applications are responsible for setting the "gcbSessionServer" cookie
# with one of the values listed in the map directive. These values are the names of
# applications in Configuration Server.
# This (default) variant uses HTTPS (if the browser request is HTTPS) for connections
# both from browser to load balancer and from load balancer to Co-browse servers.
# For another version with HTTPS only from browser to LB, see nginxSSLAccelerated.conf

# IMPORTANT!
# This configuration is not intended for production use!
# It is merely an example of how this functionality can be achieved.

events {
    worker_connections  1024;
}

http {
    include       mime.types; 
    default_type  application/octet-stream;
    # to handle longer names of Co-browse server applications
    map_hash_bucket_size 64;

    log_format  main  '$remote_addr - $remote_user [$time_local] "$request" '
                      '$status $body_bytes_sent "$http_referer" '
                      '"$http_user_agent" "$http_x_forwarded_for" "$upstream_addr"';

    access_log  logs/nginx_access.log main;
    error_log logs/nginx_error.log warn;
    
    upstream http_cobrowse_cluster {
        server 192.168.73.210:8700 fail_timeout=5s;
        server 192.168.73.210:8701 fail_timeout=5s;
    }
    upstream https_cobrowse_cluster {
        server 192.168.73.210:8743 fail_timeout=5s;
        server 192.168.73.210:8744 fail_timeout=5s;
    }

    map $cookie_gcbSessionServer $http_sticky_backend {
        default 0;
        .CB_Server_Egor   192.168.73.210:8700;
        .CB_Server_Egor_2 192.168.73.210:8701;
    }
    map $cookie_gcbSessionServer $https_sticky_backend {
        default 0;
        .CB_Server_Egor   192.168.73.210:8743;
        .CB_Server_Egor_2 192.168.73.210:8744;
    }
    
    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }
    
    server {
        listen 8080;
        listen 8083 ssl;
        ssl_certificate cobrowse.unsigned.crt;
        ssl_certificate_key cobrowse.unsigned.key;
        
        location @fallback {
            proxy_pass http://http_cobrowse_cluster;
        }

        location /cobrowse {
            # Allow websockets, see http://nginx.org/en/docs/http/websocket.html
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;
            
            # Increase buffer sizes to find room for DOM and CSS messages
            proxy_buffers 8 2m;
            proxy_buffer_size 10m;
            proxy_busy_buffers_size 10m;
            
            # If Co-browse server doesn't respond in 5 seconds, consider it dead
            # (a 504 will fire and be caught by error_page directive for fallback).
            # This timeout can be eliminated using "health checks" functionality
            # available in Nginx PLUS or via 3rd party plugins. See the following links:
            # http://nginx.com/products/application-health-checks/
            # http://wiki.nginx.org/NginxHttpHealthcheckModule
            # https://github.com/cep21/healthcheck_nginx_upstreams
            # https://github.com/yaoweibin/nginx_upstream_check_module
            proxy_connect_timeout 5s;
            
            # Fall back if server responds incorrectly
            error_page 502 = @fallback;
            # or if doesn't respond at all.
            error_page 504 = @fallback;
            
            # Create a map of choices
            # see https://gist.github.com/jrom/1760790
            if ($scheme = 'http') {
                set $test HTTP;
            }
            if ($scheme = 'https') {
                set $test HTTPS;
            }
            if ($http_sticky_backend) {
                set $test "${test}-STICKY";
            }
            
            if ($test = HTTP-STICKY) {
                proxy_pass http://$http_sticky_backend$uri?$args;
                break;
            }
            if ($test = HTTPS-STICKY) {
                proxy_pass https://$https_sticky_backend$uri?$args;
                break;
            }
            if ($test = HTTP) {
                proxy_pass http://http_cobrowse_cluster;
                break;
            }
            if ($test = HTTPS) {
                proxy_pass https://https_cobrowse_cluster;
                break;
            }
            
            
            
            return 500 "Misconfiguration";
        }

    }

}

Sample 2

# Basic configuration for load balancing 2 or more Co-browse servers.
# Nodes are listed 2 times: in the upstream and map directives.
# Co-browse applications are responsible for setting the "gcbSessionServer" cookie
# with one of the values listed in the map directive. These values are the names of
# applications in Configuration Server.
# Note that this version uses "SSL acceleration" (http://en.wikipedia.org/wiki/SSL_Acceleration,
# http://en.wikipedia.org/wiki/Load_balancing_(computing)#Load_balancer_features):
# the load balancer terminates SSL connections, passing HTTPS requests as HTTP to the servers.

events {
    worker_connections 1024;
}

http {
    include mime.types;
    default_type application/octet-stream;

    log_format main '$remote_addr - $remote_user [$time_local] "$request" '
                    '$status $body_bytes_sent "$http_referer" '
                    '"$http_user_agent" "$http_x_forwarded_for"';

    access_log logs/nginx_access.log main;
    error_log logs/nginx_error.log debug;

    upstream cobrowse_cluster {
        server 192.168.73.95:8700;
        server 192.168.73.95:8701;
    }

    map $cookie_gcbSessionServer $sticky_backend {
        default 0;
        .CB_Server_1 192.168.73.95:8700;
        .CB_Server_2 192.168.73.95:8701;
    }

    map $http_upgrade $connection_upgrade {
        default upgrade;
        ''      close;
    }

    server {
        listen 8080;
        listen 8083 ssl;
        ssl_certificate cobrowse.unsigned.crt;
        ssl_certificate_key cobrowse.unsigned.key;

        location /cobrowse {
            # Allow websockets, see http://nginx.org/en/docs/http/websocket.html
            proxy_http_version 1.1;
            proxy_set_header Upgrade $http_upgrade;
            proxy_set_header Connection $connection_upgrade;

            # Increase buffer sizes to find room for DOM and CSS messages
            proxy_buffers 8 2m;
            proxy_buffer_size 10m;
            proxy_busy_buffers_size 10m;

            if ($sticky_backend) {
                proxy_pass http://$sticky_backend$uri?$args;
            }
            proxy_pass http://cobrowse_cluster;
        }
    }
}

You must modify the URLs in your Co-browse instrumentation scripts to point to your configured load balancer. See Website Instrumentation for details about modifying the script.

If you are using the Co-browse Proxy to instrument your site, you must modify the URLs in the proxy's map.xml file. See Test with the Co-browse Proxy for details about modifying the XML file.

Warning
The Co-browse proxy should only be used in a lab environment, not in production.

Configure the Co-browse Server applications

  • 8.5.000—Modify the url option in the cluster section of all your Co-browse Server applications.
  • 8.5.001+—Modify the url option in the cluster section of your Co-browse Cluster application.

See the cluster section for details.

You must also set up a similar configuration for the Genesys Co-browse Plug-in for Workspace Desktop Edition. To support this, you might consider setting up two load balancers:

  • public — This load balancer should have a limited set of Co-browse resources. For example, it should not include session history resources.
  • private — This load balancer should have all Co-browse resources and it should be placed in the network so that it is accessible only from the corporate intranet. It should only be used for internal applications, such as Workspace Desktop Edition.

Complete the procedure below to configure the plug-in to support the Co-browse cluster:

Configure the Co-browse Plug-in for Workspace Desktop Edition

Prerequisites: You have installed the Genesys Co-browse Plug-in for Workspace Desktop Edition.

Start of procedure

  1. Open Genesys Administrator and navigate to PROVISIONING > Environment > Applications.
  2. Select the Workspace Desktop Edition Application.
  3. In the application's Options section, create the section cobrowse and specify the url option in this section. See the url option for details.

End of procedure

If you use Workspace Web Edition on the agent side, you must configure it to work with Co-browse. For instructions, see Configure Genesys Workspace Web Edition to Work with Co-browse.

To test your setup, create a Co-browse session, join it as an agent, and do some co-browsing. If this works, your configuration is successful.

