
Hive Off Recovery Process for SQL Server

Process Summary

The purpose of this memo is to give background information on the Genesys Intelligent Automation (GIA) ‘hive off’ process and to provide instructions on the most efficient way to recover from a failure in the hive-off process in a Microsoft SQL Server environment. Due to the large amount of reporting data captured and processed by the GIA application, it is essential that corrective action is taken quickly if the hive-off process fails, to ensure that the next night's hive-off can run successfully.


Version Compatibility Notice

Beginning with GIA version 9.0.120, the hive-off process changed significantly.

Legacy Mode (pre-9.0.120)

In legacy versions, the hive-off process copies data from the Daily tables into the Historical tables. Indexes may be disabled and re-enabled during hive-off execution, duplicate cleanup may be required, and Historical tables grow daily. Daily table purging is also performed.


Modern Mode (9.0.120+)

In modern versions, data is not copied from the Daily tables to the Historical tables. Indexes are not disabled or re-enabled, duplicate cleanup between Daily and Historical tables is not required, and Historical tables no longer grow. Daily table purging still occurs and is part of normal hive-off operation. This document supports both architectures, and version-specific instructions are provided directly under each relevant section.


Important
If corrective action is not taken within several hours of the original failure, data loss is likely to result.

Version-Specific Impact

Legacy Mode (pre-9.0.120): In legacy versions, hive-off failures may result in duplicate records between the Daily and Historical tables. Index rebuild operations may become blocked or incomplete, and large transaction log usage may occur during recovery.

Modern Mode (9.0.120+): In modern versions, no Daily-to-Historical data movement occurs. Failures affect only the purge of Daily tables and the generation of reporting aggregates.

Definitions and Background

Hive off

The hive off process is a nightly batch job that runs on the primary Genesys Intelligent Automation graphical user interface server. It is responsible for managing the large volumes of reporting data captured during the day. The hive off job typically runs during the early hours of each morning and may take several hours to complete, depending on data volume.

Behaviour in each mode:

  • Copying of records from the Daily tables to the Historical tables.
    Legacy Mode (pre-9.0.120): Records are copied from the Daily tables into the corresponding Historical tables during the hive off process.
    Modern Mode (9.0.120 and later): Records are not copied from the Daily tables into the Historical tables.
  • Disabling and re-enabling of indexes on the Daily tables.
    Legacy Mode (pre-9.0.120): Indexes on the Daily tables are disabled before deletion and re-enabled after deletion is completed.
    Modern Mode (9.0.120 and later): Indexes on the Daily tables are not disabled or re-enabled during the hive off process.
  • Deletion of expired data from Historical tables.
    Legacy Mode (pre-9.0.120): Expired data is deleted from the Historical tables based on the configured retention period.
    Modern Mode (9.0.120 and later): Expired data is deleted only if legacy Historical data exists.
Daily tables

Daily tables are database tables that receive information related to incoming calls and Control Centre auditing for the current day. These tables continuously receive new reporting data throughout the day from the Genesys Intelligent Automation servers. The Daily tables are: calls, call_sites, call_steps, business_tasks, gui_actions.

Behaviour in each mode:

  • Reception of live call handling and auditing data.
    Legacy Mode (pre-9.0.120): Daily tables receive live call handling and auditing data throughout the day.
    Modern Mode (9.0.120 and later): Daily tables receive live call handling and auditing data throughout the day.
  • Deletion of data during the hive off process.
    Legacy Mode (pre-9.0.120): Data is deleted from the Daily tables after it has been copied into the Historical tables.
    Modern Mode (9.0.120 and later): Data is deleted from the Daily tables as part of normal hive off processing.
  • Copying of the Daily table data into the Historical tables.
    Legacy Mode (pre-9.0.120): Daily table data is copied into Historical tables.
    Modern Mode (9.0.120 and later): Daily table data is not copied into Historical tables.
Historical tables

Historical tables are read-only database tables that store reporting data for previous days. These tables provide access to historical reporting data over extended periods of time. The Historical tables are: historical_calls, historical_call_sites, historical_call_steps, historical_business_tasks, historical_gui_actions.

Behaviour in each mode:

  • Population of Historical tables.
    Legacy Mode (pre-9.0.120): Historical tables are populated nightly during the hive off process.
    Modern Mode (9.0.120 and later): Historical tables are not populated with new data.
  • Synchronization with Daily tables.
    Legacy Mode (pre-9.0.120): Historical tables must be synchronized with the Daily tables after data is copied.
    Modern Mode (9.0.120 and later): Synchronization between the Daily and Historical tables is not required.
  • Usage for reporting.
    Legacy Mode (pre-9.0.120): Historical tables are actively used for reporting purposes.
    Modern Mode (9.0.120 and later): Historical tables are retained only for legacy data retention.
  • Growth of Historical table size.
    Legacy Mode (pre-9.0.120): Historical tables grow as new data is added each day.
    Modern Mode (9.0.120 and later): Historical tables do not grow because no new data is added.
Aggregate tables

Aggregate tables are read-only database tables that store pre-calculated reporting metrics generated during the hive off process. These tables are used to improve reporting performance by reducing the need to calculate metrics from raw data. The Aggregate tables are: daily_dashboard_aggregates, daily_health_aggregates, daily_health_aggregates_start, last31days_slots, reports_blockresults_blockResults_aggregate, reports_businessTasksSummary_businessTasksSummary_aggregate, reports_callJourneys_aggregate, reports_callsbyday_callsByDate_aggregate, reports_callsbyHour_callsByHour_aggregate, reports_recognitionsummary_inputBlockSummary_aggregate, reports_summary_businesstasks1_aggregate, reports_summary_businesstasks2_aggregate, reports_summary_callsummary_aggregate, reports_summary_lastmenu_aggregate, reports_summary_recognitionsummary1_aggregate.

Behaviour in each mode:

  • Generation of aggregate data.
    Legacy Mode (pre-9.0.120): Aggregate data is generated during the nightly hive off process.
    Modern Mode (9.0.120 and later): Aggregate data is generated during the nightly hive off process.
  • Method of aggregate generation.
    Legacy Mode (pre-9.0.120): Aggregates are generated using the legacy reporting pipeline.
    Modern Mode (9.0.120 and later): Aggregates are generated using the modern reporting pipeline.

The Genesys Intelligent Automation hive off process consists of four distinct steps:

  • Calculating aggregates and copying records from the ‘daily’ tables into the ‘historical’ tables.
  • Disabling indexes on the ‘daily’ tables and deleting from the ‘daily’ tables those records that were copied to the ‘historical’ tables in step 1.
  • Deleting any expired data from the ‘historical’ tables as per the NumDaysHistoricalDataToKeep setting.
  • Re-enabling indexes on the ‘daily’ tables.

Should the hive off process fail for any reason, the corrective actions required to ensure that the next night’s hive off runs successfully differ depending on the step at which the failure occurred.
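For legacy mode, the four steps correspond conceptually to T-SQL operations like the following. This is an illustrative sketch only, not the actual job (the real work is performed by the GIA application); the index name, retention value, and column handling shown here are assumptions.

```sql
-- Step 1: calculate aggregates, then copy the day's records into the Historical
-- tables (one INSERT per Daily/Historical pair; column lists elided for brevity).
INSERT INTO historical_calls (call_id /* , ... other columns ... */)
SELECT call_id /* , ... */ FROM calls;

-- Step 2: disable nonclustered indexes on the Daily tables, then delete the copied rows.
ALTER INDEX IX_calls_example ON calls DISABLE;   -- hypothetical index name
DELETE FROM calls WHERE call_id IN (SELECT call_id FROM historical_calls);

-- Step 3: delete expired Historical data per NumDaysHistoricalDataToKeep.
DELETE FROM historical_calls
WHERE call_start_date < DATEADD(DAY, -90, GETDATE());   -- 90 days shown as an example

-- Step 4: re-enable the indexes (rebuilding a disabled index re-enables it).
ALTER INDEX IX_calls_example ON calls REBUILD;
```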

Version-specific behavior

In Legacy versions (pre-9.0.120), all of the steps described above apply exactly as written, including copying data from Daily to Historical tables and disabling/re-enabling indexes. In GIA 9.0.120 and later, the hive-off behavior has changed. Aggregates are still calculated, but data is no longer copied from the Daily tables to the Historical tables. Instead, hive-off focuses on purging Daily tables and removing expired legacy historical data (if present). Index disabling and re-enabling is no longer performed in modern versions.

Determining at Which Step the Hive Off Failed

This can be best determined by looking at the fish.log file (and, perhaps, fish.log.1, fish.log.2, etc.) in the primary GUI server’s SpeechStorm\Platform\TomcatGUI\logs directory.

The names of the steps in the log file appear as:

  • populateAndHiveOff
  • deleteBatchAfterHiveOff
  • deleteOldHistoricalData
  • rebuildIndexes

The following entries should be written to the logs during each hive off run. The presence or absence of these entries indicates the step at which the hive off failed, allowing you to follow the correct path in the Recovery Process in the next section.

Start of step #1:

[INFO ] 2016-06-15 00:15:00,833 (Timer-1::) com.speechstorm.fish.reporting.PopulateAggregatesAndHiveOffHistoricalDataTask.populateAggregatesAndHiveOff() starting at ...

The following entry indicates that step #1 has finished and step #2 is about to begin:

[INFO ] 2016-06-15 00:37:48,557 (Timer-1::) com.speechstorm.fish.reporting.PopulateAggregatesAndHiveOffHistoricalDataTask.populateAggregatesAndHiveOff() finished populateAndHiveOff at ...

This entry indicates that step #2 has finished and step #3 is about to begin:

[INFO ] 2016-06-15 01:26:43,590 (Timer-1::) com.speechstorm.fish.reporting.PopulateAggregatesAndHiveOffHistoricalDataTask.populateAggregatesAndHiveOff() finished all deleteBatchAfterHiveOff steps at ...

This entry indicates that step #3 has finished and step #4 is about to begin:

[INFO ] 2016-06-15 02:06:24,271 (Timer-1::) com.speechstorm.fish.reporting.PopulateAggregatesAndHiveOffHistoricalDataTask.deleteOldHistoricalData() finished at ...

This entry indicates that all steps have been completed:

[INFO ] 2016-06-15 02:06:45,571 (Timer-1::) com.speechstorm.fish.reporting.PopulateAggregatesAndHiveOffHistoricalDataTask.rebuildIndexes() finished rebuildIndexes at ...

Version-specific clarification

In Legacy versions (pre-9.0.120), all of the log entries listed above are expected. Failures may indicate incomplete copying of data into Historical tables, blocked or disabled indexes, failed deletions, or the presence of duplicate rows. In GIA 9.0.120 and later, log entries related to copying data into Historical tables and rebuilding indexes may not appear, as these operations are no longer performed. In modern versions, hive-off failures typically relate to aggregate calculation, Daily table purging, or the deletion of legacy historical data.

Important
The absence of deleteBatchAfterHiveOff or rebuildIndexes log entries in modern versions is expected and does not indicate a hive-off failure.

Recovery Process

While it is important to try to understand and fix the problem that caused the hive off to fail, if this cannot be done immediately (for example, because it requires more investigation), it is better to act at once to ensure that the next night’s hive off still runs. There is only a short window each night in which to run the hive off (for example, between midnight and 7 am). If the hive off does not run successfully for several days in a row, the amount of data in the ‘daily’ tables will become prohibitive and will have an impact on both call handling and reporting; the only practical solution at that point is a bulk delete of the ‘daily’ tables, which causes reporting data to be lost.

The main factors to take into account are:

  • There must be no duplicate records in the ‘daily’ and ‘historical’ tables when the hive off runs.
  • The indexes on the ‘daily’ tables need to be enabled before the hive off runs.
  • Re-enabling indexes on the ‘daily’ tables when there is a lot of data in those tables will lock the tables for a significant period of time (for example, minutes or even hours), which could adversely affect call handling or cause a large backlog of reporting data waiting to be written.
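The size of the Daily-table backlog can be gauged with a quick row count before deciding on a recovery path, a minimal sketch:

```sql
-- Sketch: gauge the backlog in the Daily tables before choosing a recovery path.
SELECT 'calls' AS table_name,      COUNT(*) AS row_count FROM calls
UNION ALL SELECT 'call_sites',     COUNT(*) FROM call_sites
UNION ALL SELECT 'call_steps',     COUNT(*) FROM call_steps
UNION ALL SELECT 'business_tasks', COUNT(*) FROM business_tasks
UNION ALL SELECT 'gui_actions',    COUNT(*) FROM gui_actions;
```

Very large counts may indicate that re-enabling indexes or deleting in place would lock the tables for too long, making truncation the more practical option.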

If the hive off has not run for several days, and the value of the DBOvernightJobs.NumDaysHistoricalDataToKeep setting is not increased accordingly, then the hive off will attempt to delete a much larger chunk of records from the ‘historical’ tables, which will take proportionally longer to execute and could lead to running out of transaction log space on the DB server. The overall hive off recovery process is described in the following diagram:

Version-specific clarification

In Legacy versions (pre-9.0.120), all of the considerations listed above apply as written. Duplicate Daily and Historical records must be removed, indexes must be re-enabled before the next hive-off run, and the historical retention window must be adjusted carefully after multiple failed hive-off runs. In GIA 9.0.120 and later, some of the considerations above no longer apply. Hive-off no longer copies data from Daily tables into Historical tables, and indexes are no longer disabled or re-enabled as part of the hive-off process. However, it remains essential that hive-off runs regularly, as Daily tables continue to grow and can still impact call handling and reporting if not purged.

Important
In modern versions, branches in the recovery diagram involving Daily → Historical copying or index operations are not applicable and can be safely ignored.

Corrective Actions in Detail

One or more of the following actions may be required according to the diagram above.

(A) Delete Duplicate Records From ‘Daily’ Tables

This action is required if the hive off stops during step 2. There will be no loss of reporting data when you execute this step, as the data has already been copied to the ‘historical’ tables. The SQL to be run against the ‘fishreports’ database is as follows:

set ROWCOUNT 20000;  -- applies to all subsequent deletes in this session

delete from call_steps where id in (select id from historical_call_steps);

delete from call_sites where id in (select id from historical_call_sites);

delete from business_tasks where id in (select id from historical_business_tasks);

delete from calls where call_id in (select call_id from historical_calls);

delete from gui_actions where id in (select id from historical_gui_actions);

Important
This SQL should be run multiple times until there are no more duplicate records reported.

The max ROWCOUNT limit can be increased with caution, but care needs to be taken that this does not have an adverse effect on table locking; monitoring is key here.
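For convenience, the batched deletes above can be wrapped in a loop that repeats until no duplicate rows remain. A minimal sketch for one table, legacy mode only (the same pattern applies to the other tables):

```sql
-- Sketch: repeat the batched delete until no duplicate rows remain.
SET ROWCOUNT 20000;           -- limit each batch to keep locking and log usage small
WHILE 1 = 1
BEGIN
    DELETE FROM call_steps WHERE id IN (SELECT id FROM historical_call_steps);
    IF @@ROWCOUNT = 0 BREAK;  -- no duplicates left in this table
END
SET ROWCOUNT 0;               -- restore the default (no row limit)
-- Repeat the same loop for call_sites, business_tasks, gui_actions (on id)
-- and calls (on call_id).
```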

Version-specific clarification:

This action applies only to Legacy versions (pre-9.0.120). In GIA 9.0.120 and later, hive-off no longer copies data into Historical tables, so duplicate Daily/Historical records cannot occur, and this step must not be run.


(B) Truncate Data in ‘Daily’ Tables

This action is required if the amount of data in the ‘daily’ tables is such that re-enabling the indexes would lock the tables for a prohibitive amount of time. Be aware that this step will result in a loss of all reporting data in the ‘daily’ tables, although reporting data in the ‘historical’ tables will not be affected. A database backup is recommended before executing this step.

The SQL to run against the ‘fishreports’ database is as follows:

truncate table calls;

truncate table call_sites;

truncate table call_steps;

truncate table business_tasks;

truncate table gui_actions;

declare @maxid bigint;

select @maxid = max(call_id) + 2000 from historical_calls;

dbcc checkident (calls, RESEED, @maxid);

Version-specific clarification:

This action applies to both Legacy and Modern versions. In modern versions, reseeding identity values is optional, as no new records are written to Historical tables.
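After truncating and reseeding, the current identity seed on the calls table can be verified without changing it, a minimal sketch:

```sql
-- Reports the current identity value for the calls table without modifying it.
DBCC CHECKIDENT ('calls', NORESEED);

-- For comparison, the highest call_id already hived off:
SELECT MAX(call_id) AS max_historical_call_id FROM historical_calls;
```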

(C) Delete Orphan Records from ‘Daily’ Tables

This action is required once the ‘daily’ tables have been truncated, to ensure that there are no child records whose parent records have been removed. The SQL to run against the ‘fishreports’ database is as follows:

delete from call_sites where call_id not in (select call_id from calls);

delete from call_steps where call_id not in (select call_id from calls);

delete from business_tasks where call_id not in (select call_id from calls);
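The number of orphaned child rows can be checked with a quick count before and after running the deletes, a minimal sketch:

```sql
-- Sketch: count orphaned child rows (rows whose parent call no longer exists).
SELECT
    (SELECT COUNT(*) FROM call_sites     WHERE call_id NOT IN (SELECT call_id FROM calls)) AS orphan_call_sites,
    (SELECT COUNT(*) FROM call_steps     WHERE call_id NOT IN (SELECT call_id FROM calls)) AS orphan_call_steps,
    (SELECT COUNT(*) FROM business_tasks WHERE call_id NOT IN (SELECT call_id FROM calls)) AS orphan_business_tasks;
```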

Version-specific clarification:

This action applies to both Legacy and Modern versions and ensures referential integrity.


(D) Rebuild Indexes

This action is required any time the hive off fails after step 1, to ensure that the indexes on the ‘daily’ tables are enabled. The SQL to run is:

exec USP_RebuildIndexes

This operation may be time-intensive depending on data volume.
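Before running USP_RebuildIndexes, the Daily tables can be checked for disabled indexes via the sys.indexes catalog view, a minimal sketch:

```sql
-- Sketch: list any disabled indexes on the Daily tables before rebuilding.
SELECT OBJECT_NAME(i.object_id) AS table_name, i.name AS index_name
FROM sys.indexes AS i
WHERE i.is_disabled = 1
  AND OBJECT_NAME(i.object_id) IN
      ('calls', 'call_sites', 'call_steps', 'business_tasks', 'gui_actions');
```

If this query returns no rows, the indexes are already enabled and the rebuild can be skipped.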

Version-specific clarification:

This action applies only to Legacy versions (pre-9.0.120). In modern versions, indexes are not disabled during hive-off, so this step is not required.

(E) Increase NumDaysHistoricalDataToKeep Setting

Run the following SQL to determine an appropriate value for the NumDaysHistoricalDataToKeep setting:

declare @mindate DATE

select @mindate = min(call_start_date) from historical_calls

select @mindate, datediff(d, @mindate, GETDATE()) - 1 as NewDaysToKeepSettingValue

Increase the setting only if the calculated value exceeds the current configuration. The value should be reduced gradually back to its original value after recovery.

Version-specific clarification:

This step is critical in Legacy versions, where historical tables grow daily. In modern versions, this setting only affects the deletion of legacy historical data, as no new historical records are added.

Next Procedure

1. Ensuring That the Next Hive Off Will Run

Ensure that the hive off job is enabled and that the ‘in progress’ flag is set to false. These settings can be changed via the Genesys Intelligent Automation Control Centre, on the Administration → Default Server Settings page:

Hive Off DB Jobs Schedule.png

Version-specific clarification:

In Legacy versions (pre-9.0.120), this ensures that the full hive-off process runs, including copying data into Historical tables, deleting Daily data, and rebuilding indexes. In GIA 9.0.120 and later, this ensures that aggregate generation and Daily table purging can run successfully. No data is copied into Historical tables.

2. (Optional) Manually Re-Running the Hive-Off

In high-call-traffic scenarios, it is best not to run the hive-off during the day but instead to wait for it to run automatically the following night. If the recovery steps have been undertaken shortly after a failure, or in a low-call-traffic scenario, however, it is possible to kick off the hive-off process manually. To do this, change the following setting (in the same location as above) from 1 to 0:

Hive Off DB Jobs Schedule.png

Now restart the primary GUI server and the hive-off will begin immediately, even though the current time is after the scheduled hour and minute that are defined in the server settings. When the hive-off has finished, be sure to change this setting back to 1 so that any future restarts of the GUI server will not trigger an immediate hive-off. (It is not required to restart the GUI again at this point.)

Version-specific clarification:

In Legacy versions, manually triggering hive-off will execute the full legacy process, including Daily → Historical copying and index operations. In modern versions, manually triggering hive-off will perform only aggregate generation and Daily table purging. The absence of data copying is expected behavior.

3. Identifying the Root Cause

If the underlying cause of the failure has not yet been identified and resolved, you should take steps to record more detailed log events in order to troubleshoot more effectively if the issue recurs. On the database server, DBAs should advise on additional logging. On the Genesys Intelligent Automation primary GUI server, enable full SQL debug logs by editing:

SpeechStorm\Platform\TomcatGUI\webapps\fish-gui\WEB-INF\classes\log4j.properties

Ensure the following line is not commented out:

log4j.logger.java.sql=DEBUG

The setting should be picked up automatically without requiring a restart. Enabling full SQL debug logs will create a large number of log files, so daily archiving of logs for at least three months is recommended.

Version-specific clarification:

In Legacy versions, focus analysis on copy, delete, and index rebuild phases. In modern versions, failures typically relate to aggregate generation or Daily table purging rather than Historical data movement.

4. Managing the Ongoing Deletion of Old Historical Records

Remember to decrease NumDaysHistoricalDataToKeep by 1 each day if it was increased as per action (E) above. This should be done daily until the setting is back to its original value.

Version-specific clarification:

In Legacy versions, this step is essential to gradually reduce the historical data deletion volume after recovery. In modern versions, Historical tables no longer grow; this setting applies only to legacy historical data and may not require adjustment.

This page was last edited on January 16, 2026, at 08:17.