FTS (Full Text Search) is a popular and widely used search engine integrated with Remedy applications which provides faster searches, attachment-level searches, and case-insensitive searches. Several issues were reported with this functionality on BMC Remedy AR System server version 7.6.04 and earlier, and there were limitations in how well it supported high availability environments. Addressing these areas was a major focus of BMC Remedy AR System server version 7.6.04 Service Pack 5. This blog post describes the variety of symptoms caused by the earlier architectural limitations, the design changes made in Service Pack 5 to address them, and how those changes affect the feature's behavior.
Symptoms of this problem area
- FTS in server group crashes when remote collection directory becomes unavailable
- FTS search with specific characters and operands does not work
- FTS search is too slow
- Multi-Form Search (MFS) does not work well for Remedy Knowledge Management (RKM)
- FTS indexing takes too long and records in the FT_PENDING table keep growing
- FTS search on a CLOB field fails with ARERR 857 - IO Error
- Weighted Relevancy does not work as defined
- Scan Time trigger issue with View and Vendor forms
- ARERR 8755 or 8760 while performing an FTS search
- Some MFS searches can cause the AR System server to crash
- FTS does not follow the tokenization rules
- The Ignore Word List is not obeyed by FTS search
- The Java process loading the FTS plugin shows high CPU/memory growth
- One server in the server group holds the lock on the write.lck file, and the FTS plugin keeps failing with a related error
- Multiple servers own the FTS writer operation in the server group
- FTS search performance issues occurred most often when servers in a server group shared a remote collection directory
- Some literal, wildcard, and keyword searches do not work as expected
Analysis revealed several specific problem areas, including:
- FTS Java plugin
- Heap size of Java process that loads FTS plugin
- FTS collection directory accessibility
- MFS integration with RKM objects during RKM installation
- Registration with FTS/MFS using RKM registration plugin
- Overloading of the FTS plugin
- Re-index operations started by administrators that run into production hours
These problem areas are addressed in the BMC Action Request System 7.6.04 Service Pack 5 high availability (HA) feature.
In versions earlier than AR System 7.6.04 Service Pack 5, there was no provision for FTS operation during failover and no high availability option. The recommended configuration was a common FTS collection directory on a shared location for all servers participating in the server group, with only one AR Server performing index operations. Only a single server was ranked for FTS, with no fail-over functionality.
This is now changed!
In this enhancement we have:
- Modified ‘ft_pending’ table
- Added new column ‘indexingServerName’
- Added ‘indexingServerName’ column as part of the table’s primary index
- Added new table ‘ft_schedule’
This allows each FTS indexing server to track its own last-scanned (‘rescan’) time. The Server Group Ranking form can now be used to designate the fail-over order of FTS indexing servers, providing high availability going forward.
- AR Servers that are ranked for FTS in the AR System Operation Ranking form act as indexer servers. Typically this is a two (2) AR Server setup.
- The standard FTS indexing server operates as in past releases
- A second, fail-over FTS indexing server is added to the server group for redundancy and fail-over
- Both FTS indexers have distinct collection directories
- FTS indexers do not share a collection directory; the collection directory is local to each FTS indexer
- Hence, high availability no longer depends on a network share, which yields a significant performance improvement in this area
- Each server is re-indexed separately
- All servers use the same type of ‘FTS Collection Directory’ and ‘FTS Configuration Directory’ paths
- FTS plugins on all servers run on the same port
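As a rough illustration, the topology rules above can be expressed as a few checks. This is a hypothetical sketch in Python, not anything the product ships; the server names, directory path, and port number are made up for illustration:

```python
# Hypothetical model of the SP5 FTS server-group invariants described
# above; names, paths, and the port value are illustrative only.
SERVERS = [
    {"name": "ars-fts-1", "rank": 1, "fts_port": 9988,
     "collection_dir": "/data/fts/collection", "local": True},
    {"name": "ars-fts-2", "rank": 2, "fts_port": 9988,
     "collection_dir": "/data/fts/collection", "local": True},
]

def check_fts_topology(servers):
    """Validate the server-group rules and return the ranked indexers."""
    ranked = sorted((s for s in servers if s.get("rank")),
                    key=lambda s: s["rank"])
    # Two ranked indexers: rank 1 (standard) and rank 2 (fail-over).
    assert len(ranked) == 2, "expect two ranked FTS indexers"
    # Collection directories are local to each indexer, never a share.
    assert all(s["local"] for s in ranked), "collection dir must be local"
    # All servers use the same path value and the same FTS plugin port.
    assert len({s["collection_dir"] for s in servers}) == 1
    assert len({s["fts_port"] for s in servers}) == 1
    return [s["name"] for s in ranked]

check_fts_topology(SERVERS)
```

Note that the same path string is allowed on every server precisely because each indexer resolves it to its own local storage.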
Enhanced Architecture Flow
Below are the changes to the ft_pending table:
- All servers insert requests into the ft_pending table
- Each server uses its own range for the sequence number
- The serverName column identifies the server that originated the request/index record
- The associated seqNum is the originating server's seqNum counter
- The new indexingServerName column holds the targeted server name for each row in ft_pending (the FTS queue); it determines which AR Server will perform the indexing for that specific row
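To make the table changes concrete, here is a hypothetical, simplified sketch of ft_pending and ft_schedule built in SQLite from Python; the actual column names and types used by AR System may differ:

```python
import sqlite3

# Hypothetical, simplified sketch of the two tables described above;
# real AR System column names/types may differ.
conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE ft_pending (
    serverName         TEXT,    -- server that originated the request
    seqNum             INTEGER, -- originating server's sequence counter
    indexingServerName TEXT,    -- new in SP5: server that indexes this row
    operationType      INTEGER,
    updateTime         INTEGER,
    schemaId           INTEGER,
    fieldId            INTEGER,
    entryId            TEXT,
    PRIMARY KEY (serverName, seqNum, indexingServerName)
);
CREATE TABLE ft_schedule (
    serverName   TEXT PRIMARY KEY, -- one row per FTS indexing server
    lastScanTime INTEGER           -- each indexer's own 'rescan' time
);
""")
```

Including indexingServerName in the primary key is what lets the same originating request appear once per targeted indexer.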
Creating FTS Indexing Request in the FTS High Availability Environment
When an AR Server saves data that needs to be FTS indexed, it queues the data using the ft_pending table.
FTS indexers then pull requests from the ft_pending table and index accordingly.
In FTS HA, each request is now replicated for each ranked server's collection directory:
- Example: two ranked servers for FTS = two rows inserted for each queued request
- One row for each FTS indexer (the indexingServerName column differs for each)
- All other columns (including serverName, seqNum, operationType, updateTime, schemaId, fieldID, entryID) are the same for all rows.
Exception: when an AR Server performs a re-indexing operation:
- Only a single indexing server is targeted; the rank 2 server's FTS indexer does not perform any re-indexing operation along with rank 1.
- A full re-index must be initiated from the FTS tab in the Server Information form (AR System Administration Console).
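The queuing behavior described above can be sketched as follows. This is an illustrative Python model, not the server's actual implementation; the table layout is simplified and the server names are invented:

```python
import sqlite3

# Hypothetical sketch of SP5 request queuing; names are invented.
RANKED_INDEXERS = ["ars-fts-1", "ars-fts-2"]  # rank 1 and rank 2

conn = sqlite3.connect(":memory:")
conn.execute("""CREATE TABLE ft_pending (
    serverName TEXT, seqNum INTEGER,
    indexingServerName TEXT, entryId TEXT)""")

def queue_index_request(origin, seq_num, entry_id, reindex_target=None):
    # A normal save is replicated once per ranked indexer; a re-index
    # request targets only a single indexing server.
    targets = [reindex_target] if reindex_target else RANKED_INDEXERS
    for indexer in targets:
        conn.execute("INSERT INTO ft_pending VALUES (?, ?, ?, ?)",
                     (origin, seq_num, indexer, entry_id))

queue_index_request("ars-app-1", 101, "ENTRY-1")             # 2 rows queued
queue_index_request("ars-fts-1", 1, "ENTRY-1", "ars-fts-1")  # 1 row (re-index)
```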
Primary and Secondary Roles in FTS HA
Primary FTS Server
- These servers are ranked for the FTS operation in the Operation Ranking form, and each generates its own copy of the FTS collection/index
- Each runs two (2) FTS plugin instances:
- Its own primary instance for indexing operations
- A secondary instance that secondary FTS servers connect to as a reader plugin instance
Secondary FTS Server
- Do not run their own FTS plugins; the FTS plugins on these servers are disabled
- Connect to a primary FTS server's secondary (reader) instance of the FTS plugin
FTS Configuration - Primary Server Group - Visual
FTS Configuration – Secondary Server Group – Visual
Understanding how FTS fail-over works
The two ranked FTS servers independently index the same data, each maintaining its own FTS collection directory.
Secondary (non-indexing, a.k.a. user-facing) servers serve end users' search requests.
In the default configuration, secondary servers connect to the FTS indexer ranked #1.
If the FTS rank 1 server goes down, fail-over to the FTS server ranked #2 occurs.
This fail-over happens automatically, without any manual configuration changes.
Failover occurs in these cases:
- The AR server is unavailable (error, down, etc.)
- The AR server is performing a re-index
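The fail-over rule above amounts to "use the lowest-ranked healthy indexer". A minimal sketch, with invented server names:

```python
# Minimal sketch of the reader-side fail-over rule: user-facing servers
# use the rank 1 indexer unless it is unavailable or re-indexing, in
# which case they fall back to the next rank automatically.
def pick_indexer(indexers):
    """indexers: list of dicts with 'name', 'rank', 'up', 'reindexing'."""
    for srv in sorted(indexers, key=lambda s: s["rank"]):
        if srv["up"] and not srv["reindexing"]:
            return srv["name"]
    return None  # no FTS indexer available to serve searches

indexers = [
    {"name": "ars-fts-1", "rank": 1, "up": True, "reindexing": False},
    {"name": "ars-fts-2", "rank": 2, "up": True, "reindexing": False},
]
```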
On initial deployment, it is not recommended to re-index both indexing servers at the same time.
On upgrade, there is no need to re-index previously existing servers: data from versions before AR System 7.6.04 SP5 is already indexed, and the existing collection directory can be reused by any one server. When a second server is added, a re-index is needed to synchronize the collection directory across all servers.
Fail-over operational view
Below are the key take-away points to remember about how high availability full text search operates:
- FTS HA adds a second indexing server for FTS
- Provides fail-over during search requests if first ranked FTS indexer is not available
- Each indexing server operates independently of the other
- If one server needs to be re-indexed, the other can carry the load
- Additional Database load is minimal
- A minimum of two AR Servers (rank 1 and rank 2) is needed for the FTS indexing operation, plus as many user-facing (reader) servers as the customer requires
- It is recommended that the indexer AR servers not be user facing
- User-facing servers are not ranked in the Operation Ranking form
I hope this post helps you understand these changes, and the value of applying this service pack to improve the behavior of Full Text Search in a high availability environment.