
SIP Server

With SIP Server in cluster mode, you can create a highly scalable architecture in which the system's capacity can be scaled up or down with minimal configuration changes. You can add new instances of SIP Server to the cluster at any time to increase its capacity. You can also reduce the cluster size when you need to, by removing any unnecessary nodes.

SIP Server in cluster mode is designed to support high call volumes over a large number of SIP phones and T-Library desktops. SIP Server in cluster mode is also able to support an increased number of Genesys reporting and routing components in deployments where large volumes of data are produced.

When working in cluster mode, SIP Server uses the following three internal modules to provide cluster functionality:

  • Session Controller—call processing engine
  • T-Controller—T-Library interface for agent desktops and Genesys servers, which monitor agent- and DN-related T-Events
  • Interaction Proxy—T-Library interface that distributes interactions across the clients in a pool

Important: All three modules operate within one executable file.


[Image: Sipc-core.PNG]


Session Controller

Stateless call processing engine

Session Controller (SC) is an independent call processing engine. SC operation is comparable to how SIP Server works in stand-alone mode. The main difference is that SC does not store any information about the state of call center peers—DNs or agents. SC obtains all information required to process the call from the T-Controller (TC) module and the SIP Feature Server.

Call ownership

To process a higher call volume, the cluster is scaled up by adding new instances of SIP Server. The cluster is designed to distribute all calls uniformly across all existing SCs. This distribution applies both to calls that are initiated through the T-Library interface (3pcc calls) and to calls that are originated through the SIP protocol (1pcc calls). Each call is processed by one SC. All manipulations required for this call, such as transfers and conferences, are performed on this SC. A call is never transferred from one SC in the cluster to another. The cluster architecture ensures that the same SC processes all related calls, such as main and consultation calls initiated from the same DN. As a result, all regular treatments, such as completing a transfer or conference, can be applied to a call.

Limitation on processing calls from different SCs on one DN:

This call processing model creates a limitation in processing independent calls that are delivered to the same DN from different SCs. For example, two customers call the same agent simultaneously and the two inbound calls are delivered for processing to two different SCs. An agent can accept both calls, but cannot switch between the calls or merge them using an agent desktop. Those operations must be performed directly from a SIP phone.

T-Events distribution

SIP Server in cluster mode provides all functionality that is available in a stand-alone SIP Server in terms of call processing and call representation through the T-Library interface. In a cluster architecture, all T-Library clients connect to SIP Server through its cluster interfaces—T-Controller (TC) and Interaction Proxy (IProxy). The only exception to this rule is Universal Routing Server (URS), which connects directly to the default SC listening port. SC generates the standard sequence of T-Events for each call it processes and distributes those events through TC and IProxy interfaces to the T-Library clients.

T-Controller

Scalability in the number of agents

Stand-alone SIP Server deployments are limited by the number of agents that can connect to one instance of SIP Server. The cluster architecture resolves this problem by providing a TC interface layer. This layer consists of the TC modules of all SIP Server instances, connected to each other. Each TC maintains the states of an equal share of all devices operating in the cluster. If an agent is logged in to a device, the agent's state is stored in the same TC that maintains that device. This approach distributes the processing load related to maintaining agent and DN states across all TCs in the cluster. The TC layer can be scaled up or down by adding SIP Server instances to, or removing them from, the cluster. Redistribution of DNs across the new number of TC instances is performed automatically and does not require any reconfiguration.

Agent desktops

Agent desktops connect to the cluster TC layer. The architecture described above does not require an agent desktop to connect to the specific TC that maintains the state of the corresponding device; the agent desktop can connect to any TC in the cluster. In either case, the TC layer infrastructure guarantees that all T-Events generated for the DN with which this desktop is registered are delivered to this desktop. Even though the desktop can connect to any TC in the cluster, the best performance is achieved if the desktop connects to the TC that maintains the state of the device for which the desktop is registering. The TC layer implements a protocol that informs the client of the TC to which it should connect. Clients that support this protocol must disconnect from the current TC and reconnect to the other TC, using the address received in the T-Event. Genesys Interaction Workspace supports this functionality.
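
The following minimal Python sketch illustrates only the reconnection logic of such a protocol. The notification name (EventPrivateInfo) is taken from the DN-ownership transfer description later in this article; the "new-tc-address" key, the connect() callable, and the connection methods are illustrative assumptions, not the actual T-Library API.

  class ClusterAwareDesktop:
      def __init__(self, connect):
          self.connect = connect      # callable: "host:port" -> connection object
          self.connection = None
          self.dn = None

      def attach(self, tc_address, dn):
          # Open a connection to the given TC and register for the agent's DN.
          self.connection = self.connect(tc_address)
          self.connection.register_address(dn)   # stands in for TRegisterAddress
          self.dn = dn

      def on_event(self, event):
          # When the TC layer reports that another TC owns this DN,
          # drop the current connection and re-register on the new owner.
          if event.get("name") == "EventPrivateInfo" and "new-tc-address" in event:
              self.connection.close()
              self.attach(event["new-tc-address"], self.dn)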

Bulk registrants

Bulk registrants are T-Library clients that monitor all DNs owned by a particular TC. Genesys reporting and routing components, such as Stat Server and ICON, monitor all DNs in the entire environment. In the cluster architecture, those clients connect to all TC instances and register for all DNs owned by each TC.

To simplify the registration procedure for bulk registrants and to optimize network traffic, bulk registration is performed with one request, which must be submitted to the TC: TPrivateService (AttributePrivateMsgId=8197). When this request is processed, the TC distributes T-Events for all DNs that it owns to the requesting client. The client can select the events it needs to receive through this connection and can filter out unnecessary UserData by using the InputMask and UdataFilter extensions, respectively.

The state of each active DN in the TC is reported to the bulk-registrant client when it registers. The TC generates one EventPrivateInfo (AttributePrivateMessageId=8197) per DN. Each message contains the same information as EventAddressInfo plus some additional parameters.
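
As an illustration only, the following Python sketch models the bulk registration request described above as a plain dictionary. The request type (TPrivateService), the message ID (8197), and the InputMask and UdataFilter extensions come from this article; the dictionary layout and the value formats are assumptions.

  def build_bulk_registration(input_mask=None, udata_filter=None):
      # Build the single request that registers for all DNs owned by one TC.
      extensions = {}
      if input_mask is not None:
          extensions["InputMask"] = input_mask       # events the client wants
      if udata_filter is not None:
          extensions["UdataFilter"] = udata_filter   # UserData keys to keep
      return {
          "request": "TPrivateService",
          "AttributePrivateMsgId": 8197,             # bulk registration
          "Extensions": extensions,
      }

  # Example: a reporting client that only needs a few event types.
  request = build_bulk_registration(
      input_mask=["EventAgentLogin", "EventAgentLogout", "EventReleased"],
      udata_filter=["CustomerSegment"],
  )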

Interaction Proxy

Scalability of call-monitoring clients

Interaction Proxy (IProxy) is a new T-Library interface of SIP Server, which is activated when SIP Server is operating in cluster mode. This interface distributes call-related T-Events across a pool of T-Library clients, such as ICON and Stat Server. Those clients need to receive information about all calls handled in the system. However, it is often the case that one client cannot handle the call load that is processed by one SIP Server. The IProxy interface allows several clients of the same type (for example, multiple Stat Server instances) to identify themselves as one pool. IProxy balances the call load across the clients in one pool, so that each client handles only a fraction of the load. The IProxy interface also enables the scaling up of the number of T-Library clients.

Distributing interactions

The IProxy interface distributes interactions to its clients. An interaction can contain one or multiple calls, such as both primary and consultation calls. IProxy ensures that all T-Events related to all calls that belong to the same interaction are sent to the same client.

Each client in the pool handles a unique set of interactions compared to the other clients in the same pool, but the sets of interactions sent to different pools are identical. For example, if a pool of ICON instances and a pool of Stat Server instances are connected to the same IProxy interface, both pools receive an identical set of interactions.

Client connections to IProxy

The IProxy client uses a TPrivateService request (8192) to identify itself as a member of a new or existing pool. This request must include the ClusterId and NodeId extensions. ClusterId must be the same for all members of the same pool, and NodeId must have a unique value for each pool member. This information allows IProxy to add a new client to a pool and to start sending an equal share of interactions to it. A cluster client registration request might also include a UdataFilter extension, which specifies the UserData that this client wants to receive.
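
The following Python sketch, again using a plain dictionary as a stand-in for the real request, shows how two members of one pool would differ only in their NodeId. The request type (TPrivateService, message ID 8192) and the ClusterId, NodeId, and UdataFilter extensions come from this article; everything else is an assumption for illustration.

  def build_pool_registration(cluster_id, node_id, udata_filter=None):
      extensions = {
          "ClusterId": cluster_id,   # identical for every member of the pool
          "NodeId": node_id,         # unique for each pool member
      }
      if udata_filter is not None:
          extensions["UdataFilter"] = udata_filter
      return {
          "request": "TPrivateService",
          "AttributePrivateMsgId": 8192,   # pool (cluster client) registration
          "Extensions": extensions,
      }

  # Two Stat Server instances forming a single pool:
  stat_server_1 = build_pool_registration("stat-pool", "node-1")
  stat_server_2 = build_pool_registration("stat-pool", "node-2")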

When a client registers for a new pool, and is therefore the first client in the pool, IProxy sends information about all active interactions to that client. The information is distributed in the form of a snapshot consisting of the call-monitoring events EventCallCreated and EventPartyAdded. This information allows the IProxy client to build a valid view of active interactions. The snapshot is sent only to the first client registered in a pool.

Reliability of client connections to IProxy

The IProxy client must provide the SessionId extension in its registration request sent to IProxy. The value of this extension is generated by the client and is used by both the client and IProxy to identify the session established for this client. The session makes temporary network disconnections seamless for IProxy clients. IProxy starts accumulating events when the client disconnects. If the client reconnects to IProxy within a short period of time and provides the SessionId used for the original session, IProxy sends to this client all T-Events accumulated while the client was disconnected. As a result, no events are lost because of the network disconnection.
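
The sketch below models this delivery guarantee from the IProxy side; it is only an illustration of the buffering behavior described above, with the data structures and method names assumed for the example.

  from collections import defaultdict

  class SessionBuffer:
      """Model of per-session event delivery with replay after reconnection."""

      def __init__(self):
          self.connected = {}               # SessionId -> live connection
          self.pending = defaultdict(list)  # SessionId -> events buffered offline

      def on_client_connect(self, session_id, connection):
          # Replay everything accumulated while this session was disconnected.
          for event in self.pending.pop(session_id, []):
              connection.send(event)
          self.connected[session_id] = connection

      def on_client_disconnect(self, session_id):
          # Keep the session alive; start accumulating instead of dropping events.
          self.connected.pop(session_id, None)

      def deliver(self, session_id, event):
          connection = self.connected.get(session_id)
          if connection is not None:
              connection.send(event)
          else:
              self.pending[session_id].append(event)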

SIP Server Switch in the Cluster

Simplified switch configuration

The SIP Server switch in cluster mode does not contain DNs of type Extension or ACD Position, nor does it contain Agent Logins. This is a major difference between the cluster and stand-alone SIP Server switch configurations. It greatly simplifies switch provisioning and improves the scalability of the whole solution.

Switch provisioning is significantly simplified in comparison with the stand-alone configuration, in which you must not only create two objects (a DN and an Agent Login) for each agent, but also configure a number of parameters for each of those objects. In the cluster configuration, information about agent DNs is stored in the Feature Server, and Agent Login objects are not used at all.

Solution scalability improvement

The simpler configuration of SIP Server in cluster mode improves the scalability of the entire Genesys solution. In the stand-alone configuration, the large number of agent-related objects configured under the Switch object impacts the performance of Genesys components and affects solution scalability as a whole. SIP Server, like some other Genesys components, reads all DNs configured in the configuration environment at startup. The more DNs there are, the longer it takes for a component to read them and prepare to start or resume call processing. Reading a large number of DNs at startup can also flood the network and negatively impact the service quality for active calls.

Device Profiles

Another benefit of the simpler SIP Server switch in a cluster configuration is that there is no need to configure any DN-level parameters, even in the Feature Server where the DNs are stored. Instead, in the cluster configuration, DN-level parameters are configured in device profiles, which are represented as VoIP Service DNs. Each device profile is used for a group of agent devices. SC automatically associates the proper device profile with any device participating in a call, based on the value of the User-Agent header received in SIP messages from the device.

DN State Maintenance

DN state life cycle

The TC layer is responsible for maintaining the states of the DNs that are currently operating in the cluster. See the high-level description of the T-Controller functionality.

At startup, the TC layer has no information about the DNs in the cluster. The TC layer starts maintaining the state of a DN when either the SIP phone or T-Library client registers for a corresponding DN.

The TC layer deletes the DN state if both of the following conditions are met:

  • SIP registration for this DN expires (or does not exist).
  • No T-Library clients are registered for this device.

SIP registration handling

In cluster mode, SIP Server does not act as a SIP Registrar; however, SIP Proxy still passes all SIP REGISTER messages to SIP Server for the following purposes:

  • SIP request authentication—The SIP cluster may be configured to authenticate SIP REGISTER requests.
  • DN state maintenance—The TC layer puts the device in or out of service based on the information in the Expires header, and it maintains the registration timer. The TC layer sets a device to the out-of-service state if the SIP registration is not renewed before it expires (see the sketch after this list).
  • Device profile linkage—The TC layer uses the User-Agent information to associate a device profile with the registered device. See Device Profiles.
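
Combining the DN life-cycle rules above with the registration timer described in this list, the following Python sketch models a DN state kept by the TC layer. Field names and time handling are simplified assumptions for illustration.

  import time

  class DnState:
      def __init__(self, name):
          self.name = name
          self.sip_expires_at = None   # absolute time when SIP registration expires
          self.tlib_clients = set()    # T-Library clients registered for this DN
          self.in_service = False
          self.user_agent = None

      def on_sip_register(self, expires_seconds, user_agent=None):
          # A new or renewed SIP registration puts the device in service.
          self.sip_expires_at = time.time() + expires_seconds
          self.in_service = True
          self.user_agent = user_agent   # later used for device profile linkage

      def on_timer(self):
          # Registration not renewed before expiration: device goes out of service.
          if self.sip_expires_at is not None and time.time() >= self.sip_expires_at:
              self.sip_expires_at = None
              self.in_service = False

      def can_be_deleted(self):
          # Both deletion conditions listed earlier must hold.
          return self.sip_expires_at is None and not self.tlib_clients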

TRegisterAddress handling

TRegisterAddress is received in one of the following scenarios:

  • When the corresponding DN state is already registered in the TC layer.
  • When the TC layer has no knowledge of the requested DN.

In the first scenario, the TC layer responds with the standard EventRegistered, which has the same content as in stand-alone mode.

The second scenario is more complex because the TC does not know whether the requested device is a valid internal DN. This information is stored by the Feature Server. The TC layer first sends a query to the Feature Server to check the DN validity. If the Feature Server responds positively, EventRegistered is distributed as in the first scenario. If the Feature Server reports the DN as unknown, EventError is returned to the client.

If the DN state is registered in the TC layer as a result of TRegisterAddress, the device is set to the out-of-service state because it does not yet have a SIP registration.
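
A schematic Python sketch of this decision flow follows. The event names (EventRegistered, EventError) are from this article; the tc_layer and feature_server helpers are hypothetical stand-ins for the TC layer's internal state and the Feature Server query.

  def handle_t_register_address(dn, tc_layer, feature_server):
      if tc_layer.has_dn_state(dn):
          # First scenario: the DN state already exists in the TC layer.
          return "EventRegistered"
      # Second scenario: validate the DN against the Feature Server first.
      if not feature_server.is_valid_dn(dn):
          return "EventError"
      state = tc_layer.create_dn_state(dn)
      state.in_service = False   # no SIP registration yet, so out of service
      return "EventRegistered"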

Cluster Node Awareness Protocol

At startup, a SIP Server Node (or a SIP Server HA pair) becomes aware of other SIP Server Nodes in the cluster and automatically attempts to connect to the T-Controllers of those other Nodes. After a connection succeeds, the SIP Server Node issues a "retrieve node states" TPrivateService request and obtains a list of all running (Active or Inactive) SIP Server Nodes. Other SIP Server Nodes in the cluster become aware of the new node as they accept its connection requests; in turn, they connect to the new node as well. The SIP Server Node waits for a certain period of time to allow bulk registrants (such as Stat Server, ICON, and Feature Server) and URS to connect before starting to process calls. If no node conflicts are detected, the SIP Server Node initiates self-activation by sending designated TPrivateService requests to the other nodes.

A SIP Server Node (HA pair) that is running and has joined the cluster can be removed from the cluster by shutting down both SIP Server applications in the HA pair through the Genesys Framework. If a running SIP Server Node that has joined the cluster fails unexpectedly, the other SIP Server Nodes become aware of the failure.

The DN namespace is mapped to the SIP Server Nodes so that, for a given DN name and a given set of SIP Server Nodes, a specific node owns that DN. On switchover, the DN namespace mapping of the failed primary SIP Server is preserved for the new primary SIP Server.
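
The article does not specify the mapping function itself, so the following Python fragment is only a hypothetical illustration of the key property: given the same DN name and the same set of nodes, every node computes the same owner.

  import hashlib

  def owner_node(dn_name, active_nodes):
      # active_nodes must be the same ordered list on every node in the cluster.
      digest = hashlib.sha256(dn_name.encode("utf-8")).hexdigest()
      return active_nodes[int(digest, 16) % len(active_nodes)]

  # "7001" always maps to the same node while the node set is unchanged.
  print(owner_node("7001", ["node-1", "node-2", "node-3"]))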

SIP Server States

SIP Server Nodes can be in one of the following states:

  • Shutdown—The state of a SIP Server application when its host machine is down or it does not have a network connection. Alternatively, the machine and network are functioning properly but the SIP Server application is not running or is malfunctioning.
  • Active—The state of a SIP Server Node when it is actively participating in the cluster and handling its share of calls.
  • Inactive—The state of a SIP Server Node when it is running but is not handling calls. The node does not own any DN states. The node replies to the OPTIONS request with a 503 response instead of 200 OK. SIP Proxy does not direct new calls to the node.

State Transitions

A cluster administrator can do the following:

  • Add, remove, or change the Cluster node description in the configuration database.
  • Start or shut down Cluster applications.

During startup, a SIP Server application transitions from the Shutdown state to the Inactive state as follows:

  1. SIP Server application is started from Genesys Administrator.
  2. It reads configured applications from the Cluster Configuration DN.
  3. It connects to all running SIP Server T-Controller ports (Active and Inactive).
  4. All SIP Server Nodes update their views with a new Inactive SIP Server Node.

If the number of connected SIP Server Nodes is equal to or greater than min-node-count (default is 4), SIP Server waits for the period of time specified by node-awareness-startup-timeout (default is 5 seconds) to allow Stat Server, ICON, Feature Server, and URS to connect. SIP Server then activates itself by internally triggering the Plus One procedure (described below). If the min-node-count limit is not reached, SIP Server stays in the Inactive state.
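
As a sketch of this decision only, assuming the option values are available as a simple dictionary and that activate() stands in for triggering the Plus One procedure:

  import time

  def maybe_activate(connected_node_count, options, activate):
      min_node_count = int(options.get("min-node-count", 4))
      startup_timeout = float(options.get("node-awareness-startup-timeout", 5))
      if connected_node_count < min_node_count:
          return "Inactive"            # not enough nodes yet; stay Inactive
      # Allow bulk registrants (Stat Server, ICON, Feature Server) and URS
      # time to connect before handling calls.
      time.sleep(startup_timeout)
      activate()                       # internally triggers the Plus One procedure
      return "Active"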

During shutdown, a SIP Server application transitions from the Inactive state to the Shutdown state as follows:

  1. SIP Server application is stopped from Genesys Administrator.
  2. SIP Server Node is shut down.
  3. All SIP Server Nodes detect that the SIP Server Node no longer exists by using the ADDP protocol.
  4. All SIP Server Nodes update their views removing the missing SIP Server Node.

When adding a new SIP Server Node, SIP Server applications transition from Inactive to Active state as follows:

  1. A new SIP Server Node sends a TPrivateService("Plus One") request to all running SIP Server Nodes.
  2. Active SIP Server Nodes send a share of their DNs to a new SIP Server Node.
  3. The new SIP Server Node accepts and processes all DN states.
  4. The new SIP Server Node starts replying to SIP OPTIONS requests from SIP Proxy with a 200 OK response.
  5. Inactive SIP Server Nodes monitor the progress, and mark the new SIP Server Node as Active at completion.

As a result, the new SIP Server Node is marked as Active in the cluster view on all SIP Server Nodes. SIP Proxy starts distributing 1pcc calls to the new SIP Server Node. The new SIP Server Node starts processing TMakeCall requests for its DNs.

When shutting down a SIP Server Node, SIP Server applications transition from Active to Inactive state as follows:

  1. The SIP Server Node sends a TPrivateService("Minus One") request to all running SIP Server Nodes.
  2. The SIP Server Node being removed transfers all DN states to other SIP Server Nodes and starts replying to SIP OPTIONS requests from SIP Proxy with a 503 response.
  3. Active SIP Server Nodes receive and process new DN states.
  4. Inactive SIP Server Nodes monitor the progress, and mark the removed node as Inactive at completion.

As a result, the deactivated SIP Server Node is marked as Inactive in the cluster view on all SIP Server Nodes. All DN states from the deactivated node are equally distributed between Active SIP Server Nodes. SIP Proxy does not distribute 1pcc calls to the deactivated node. The deactivated SIP Server Node does not process any TMakeCall requests.

Transferring DN state

Cluster-aware agent desktops are notified when DN ownership changes. The notification (EventPrivateInfo) contains the address of the new SIP Server T-Controller that owns the DN. Bulk registrants connected to the old T-Controller DN owner receive a message indicating which DNs are removed; bulk registrants connected to the new T-Controller DN owner receive a message indicating which DNs are added. Agents must log in again when DN ownership changes. Cluster-aware desktops log out automatically based on the notification, while T-Controller waits for the period of time specified by node-awareness-agent-timeout to elapse and then forcefully logs agents out.

Call Handling in a Cluster Environment

Differentiation of internal and external DNs

SIP Server in cluster mode needs to differentiate between internal DNs and external numbers to provide backward compatibility at the T-Library level (T-Events are generated only for internal DNs, not for external numbers). The cluster switch no longer has this information; in the cluster architecture, the whole list of internal DNs is stored in the Feature Server (FS). SIP Server obtains this information through dial plan requests, which are sent to FS (where the dial plan is implemented) for each call that SIP Server processes. FS then returns the types of the origination and destination devices.

Selecting Device Profile for a DN

A Device Profile is selected for a DN automatically by matching the value of the 'User-Agent' header received from the corresponding SIP phone with the value of the 'profile-id' parameter of one of the VoIP Service DNs of type 'device-profile'. The value of the 'profile-id' parameter must be a substring of the value of the 'User-Agent' header; the comparison is case sensitive. The values of the 'profile-id' parameters must be unique across all device profiles defined in the cluster switch for the Device Profile selection procedure to work properly.

SIP Server can process the 'User-Agent' header received in the SIP REGISTER message. In this case, the value of this header is stored as part of the device state in the TC layer and is used when a 3pcc call is initiated from this device. The 'User-Agent' header received in one of the SIP messages during the call is also used for Device Profile selection.

If a matching Device Profile is not found based on the received 'User-Agent', or a 'User-Agent' header was not received in any of the SIP messages generated by the SIP phone, then a default Device Profile is used. The default Device Profile is the one that has 'profile-id' set to the value 'default-profile'. After a Device Profile is selected, all DN-level parameters defined in this profile are applied to the device used for processing the current call. The Device Profile selection procedure is performed each time a device becomes involved in a call. If a Device Profile is not found based on the 'User-Agent' header and a default Device Profile is not defined, then a device with all its parameters set to default values is used for call processing.

The Device Profile mechanism applies only to agent devices, that is, SIP phones. Device Profiles are not used for Trunks, Trunk Groups, Media Servers, and so on. In those cases, all DN-level parameters are configured explicitly under the device in the cluster switch, in the same way as in stand-alone SIP Server mode.
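
The selection rule can be summarized in a short Python sketch. Profiles are modeled here as plain dictionaries, which is an illustration only and not the actual configuration format.

  def select_device_profile(user_agent, profiles):
      # A profile matches when its 'profile-id' is a case-sensitive substring
      # of the received 'User-Agent' value.
      if user_agent:
          for profile in profiles:
              if profile["profile-id"] in user_agent:
                  return profile
      # Fall back to the default profile, if one is defined.
      for profile in profiles:
          if profile["profile-id"] == "default-profile":
              return profile
      return None   # no profile: device parameters fall back to default values

  profiles = [
      {"profile-id": "Polycom", "description": "profile for Polycom phones"},
      {"profile-id": "default-profile"},
  ]
  print(select_device_profile("PolycomVVX-VVX_410-UA/5.4.0", profiles))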

Selecting device profile for different call types

A device profile is selected based on the User-Agent header value received in one of the SIP messages from the device. If this information is not available, then in each of the scenarios listed below, either the default device profile is used or a device is created with all default parameters.

  • Destination device in any call scenario: The original outgoing INVITE sent to a call destination is always created using the default profile. If the device includes the User-Agent header in its response to the outgoing INVITE sent by SIP Server, then this information is used to select the Device Profile for the device and to replace the existing one.
  • Origination device in a 1pcc inbound call: The Device Profile is assigned based on the User-Agent header received in the INVITE message that originates the call.
  • Origination device of TMakeCall: If the User-Agent header is submitted in the SIP REGISTER request received from this device, then the matching Device Profile is used.
  • Origination device of TInitiateCall/TInitiateConference: The consultation call origination device uses the same Device Profile as the one used for this device in the main call.


Origination Device type detection

To ensure proper origination device type detection, the second top-most Via header in the INVITE requests received from the agent's phones must not match a contact value of any of the trunks configured in the Configuration Layer.

In a cluster environment, SIP Server detects the origination device based on the second Via header of a new incoming INVITE request. If the host of the second Via header matches one of the trunks configured in the Configuration Layer, then the device is assumed to be external. Otherwise, the origination device is treated as internal, and all necessary T-Events are generated for this device in a call. In a scenario where a call is made to a Routing Point, SIP Server does not consult the Dial Plan to process the incoming call (a performance improvement measure); instead, it relies on automatic detection of the origination device type. If the destination device is not a Routing Point, then SIP Server queries the Dial Plan. The Dial Plan response contains the types of both the origination and destination devices, so no assumptions are made about device types.
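
Under the simplifying assumptions noted in the comments, the check can be sketched in Python as follows; real Via header parsing is more involved, and the data structures are illustrative only.

  def origination_is_external(via_headers, trunk_contact_hosts):
      # via_headers is the ordered list of Via header values from the INVITE;
      # trunk_contact_hosts is the set of contact hosts of configured Trunk DNs.
      if len(via_headers) < 2:
          return False
      second_via = via_headers[1]           # second top-most Via header
      # A Via value looks like "SIP/2.0/UDP host:port;branch=...".
      host = second_via.split()[-1].split(";")[0].split(":")[0]
      return host in trunk_contact_hosts

  vias = ["SIP/2.0/UDP 10.0.0.5:5060;branch=z9hG4bK1",
          "SIP/2.0/UDP 198.51.100.10:5060;branch=z9hG4bK2"]
  print(origination_is_external(vias, {"198.51.100.10"}))   # True: external device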
