GroupWise Mobility Service (GMS) Best Practice Guide

This is a comprehensive guide I have put together to help install, configure, upgrade, and maintain a GroupWise Mobility Service system. It comes from years of experience working with GroupWise and GMS for a global customer base. My goal is to provide real-world experience and tips that go beyond what is offered in the documentation.

Relevant to GroupWise Mobility versions 18.3.1 and higher (18.3.1, 18.4.x, 18.5, 23.4, 24.1, 24.2, 24.3)

GMS 24.3 - Released July 17, 2024
GMS 24.3 is mostly a bugfix release. Reference the release notes here: GroupWise Mobility Service Release Notes

GMS 24.2 - Released Apr 9, 2024
New features: ability to view attendee status for appointments when the organizer is on the same GMS system; improved mCheck recovery of missing items; fixes for customer-reported bugs. Additionally, changes were made to how GMS ties into Python, which had previously caused problems when performing SLES or GMS patches. The GMS Python modules now use Python 3.11 and are no longer dependent on the SUSE Python libraries installed and used by various components of the operating system.

GMS 24.1 - Released Jan 17, 2024
Updates the Django and Python libraries to currently supported versions: Django 4.2 and Python 3.11. The Django and Python projects no longer provide security fixes or support for the previous versions of these libraries. GroupWise Mobility Service 24.1 must run on SLES 15 SP4 or later because of these changes.  Upon installation of GroupWise Mobility Service 24.1, SLES 15 SP4 or later will auto-enable the needed Python 3.11 extension.  (Please don't upgrade GMS to version 24.1 if your SLES 15 SP4 server is not first fully patched from the SUSE channel.)

GMS 23.4 - Released Oct 25, 2023
GMS 23.4 added a few new features as well as rebranding from Micro Focus to Open Text.

  • Rebranded from Micro Focus to Open Text.
  • Notify User when Mobility Account is Provisioned.
  • Generate CSV List of Inactive Users.

GMS 18.5 - Released May 23, 2023
GMS 18.5 was mostly bug fixes.

GMS 18.4.2 - Released Dec 6, 2022
GMS 18.4.1 - Released Jun 22, 2022
GMS 18.4 was mostly bug fixes and a few feature enhancements. Here's an overview.

  • MCHECK was dramatically improved for usability and administrative functionality. Prior to this release, MCHECK was practically useless, and it never saw much use because of the widely popular DSAPP utility, which is now deprecated. Now that DSAPP will not even run with current GroupWise Mobility, MCHECK is the only supported tool, and its feature set has been greatly improved.
  • Support for TLS 1.3, along with some considerations for certificates and certificate verification related to the GroupWise 18.4 release.
  • Support for the BTRFS and XFS file systems (EXT4 is still recommended).

GMS 18.3.2 - Released Sep 22, 2021
Mostly bugfixes and documentation updates.

GMS 18.3.1 - Released Jul 28, 2021

GMS version 18.3.1 brought significant architectural changes to the underlying Python code that runs GMS. Additionally, there were some other major changes that affected the provisioning and authentication options available to a GroupWise administrator. Here is a summary of the major changes that were introduced:

  • Python version 3
    Earlier versions of GroupWise Mobility were built on Python version 2.  This became a bigger and bigger problem as Python 3 became the de facto standard on SLES 15.  With the release of GroupWise Mobility 18.3.1, Python 2 was abandoned completely and GMS was rebuilt entirely on Python 3.
  • Starting/Stopping GroupWise Mobility
    Changes to the way you start and stop GroupWise Mobility:

    • You no longer have the ability to start/stop the agents from the GMS dashboard.
    • The way you stop the agents on the command line has also changed.
  • Provisioning and Authentication via LDAP Groups & GroupWise Groups
    Only GroupWise "Groups" and "Users" are supported for user provisioning and authentication. LDAP is no longer an option for either.
  • SSL Certificates
    SSL Certificates have moved locations and been renamed.
  • Upgrades from Older Versions
    You cannot upgrade an older GroupWise Mobility system to version 18.3.1 or higher. It's not supported and the code will not let you do it.  You must install a new server.   Note: Yes, I have tried to get around this. No, it doesn't work, and it's not worth it.

Upgrades & Patches to GMS servers

I added this section to address the questions and possible challenges you may face when the time comes to apply updates or patches. Over the years, some of the biggest problems I've run into were the result of a problematic update process on either the Linux or the GMS side of things. It's difficult to anticipate every possible scenario or potential issue, so instead I'm providing some rule-of-thumb guidelines for the different types of updates you may need to do.

The 3 Different Patching Needs

There are 3 different things you'll inevitably need to patch on a GMS server. They are outlined in more detail below but include:

  1. SLES 15 updates from the SUSE Channel
  2. GroupWise Mobility patches or updates to a newer release.
  3. SLES 15 Major Service Pack Releases (For example, SP4 upgrading to SP5)

Tips from the Field:

  • Create a VM snapshot before applying updates just in case you need to revert back. Once GMS is broken, it is extremely difficult to troubleshoot and resolve, depending on the situation.
  • ALWAYS apply SLES updates during a defined maintenance window. When you apply SLES patches on a live GMS server (via zypper up), it will often disrupt GMS even if the GMS services remain running. When you apply SLES updates, make sure you have the ability to restart the server, or at a minimum the GMS services, directly after applying the patches.
  • (This bullet applies mostly to GMS 24.1 and earlier; a major change in GMS 24.2 made this much easier.) There is a complex relationship between GroupWise Mobility, the Python modules that GMS requires, and the Python modules you receive when you patch a SLES system. They are not necessarily the same. Maintaining a working relationship between these components seems to be one of the biggest challenges when applying updates to your GMS server. I have experienced catastrophic failures of GMS just by applying channel updates. I have seen systems where certain components won't start after what should have been a simple update. And depending on the severity, I have had to either revert to a working system or make the decision to build a new server from scratch. Sometimes, when more patching is required, building a new server from scratch is the easier path.
  • I always recommend that each type of patching or update be done separately. Do not try to combine them in one shot, and always test the system fully to ensure it is working correctly afterwards. I generally find that the following is the best order when you need to perform all of the above patches:

General Patch/Upgrade Order: first the SLES 15 updates from the SUSE channel --> then any GMS updates --> then, if needed, the next major SLES service pack. (See the command sketch after this list.)

  • Updates to GMS do not require step-up or incremental upgrades. You should be able to upgrade directly from GMS 18.3.1 (or higher) to the latest version, GMS 24.2 (now 24.3). However, for GMS 24.1 and GMS 24.2 in particular, SLES 15 SP4 or SP5 is required.
  • If your server is running SLES 15 SP4 and isn't fully patched from the SUSE channel, it's critical to run the SLES online updates before upgrading to GMS 24.1 or GMS 24.2. This is due to the changes in the Python and Django libraries. While I have not done extensive testing to learn all the nuances, it appears that if you upgrade to GMS 24.1 first and then apply a large batch of SUSE updates, some of the critical Python libraries get back-rev'd or overwritten by the update process. In my experience, I have destroyed a server beyond repair doing it in this order, which is why having a snapshot to revert to is critical. Instead, if you fully update SLES from the channel first, the GMS 24.1 install will add the Python modules it requires, and things should go much better.
  • Even if you're running SLES 15 SP5, I would still recommend applying all patches from the SUSE channel before upgrading to GMS 24.1, for many of the same reasons as the previous paragraph.
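Put together, the sequence looks something like this (a sketch only; run each stage in its own maintenance window with a fresh VM snapshot, and test GMS fully between stages):

Stage 1 - SLES channel updates:
# gms stop
# zypper up   (or "zypper up -t patch")
# reboot

Stage 2 - GMS update from the full media (detailed steps later in this guide):
# mount -o loop groupwise-mobility-service-xx.x-x86-64.iso /mnt
# cd /mnt
# ./install.sh

Stage 3 (only if needed) - the next major SLES service pack:
# zypper migration
# reboot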

GMS 24.2 UPDATE (Added specifically to address the GMS Python Issue)

With the release of GMS 24.2, the Python dependencies within GMS were separated from the OS Python libraries, and this seems to have resolved most of the problems related to Python and GMS, especially around SLES channel updates. What this means is that, in general, the order in which you apply updates is not as significant as with previous versions. If you're on any older version of GMS, upgrade directly to GMS 24.2 (now GMS 24.3). Don't waste any time or effort installing GMS 24.1 or GMS 23.4, for example. Upgrading to GMS 24.2 or GMS 24.3 will avoid many of the Python complexities and problems experienced with previous versions.

If you'd like to read a little more about this Python issue to understand it better, check out these links:

Official Open Text TID:  https://portal.microfocus.com/s/article/KM000027095?language=en_US

Forum Posting: https://community.microfocus.com/img/gw/groupwise/f/discussions/526441/broken-firewall-on-gms-24-1-and-sles-15-sp5

MicroFocus Community Posting: https://community.microfocus.com/img/gw/groupwise/w/tips/47072/knowledge-document-gms-services-do-not-start-after-online-updates-of-sles-server

SLES 15 Updates from the SUSE Channel

These are minor updates, and it's common to have hundreds of patches available after only a few months.   These updates are typically applied via the "zypper up" or "zypper up -t patch" command (although there are other options).

  • Perform the updates during a defined maintenance window.  Shut off the GMS services and take a snapshot before applying, just in case GMS services don't load correctly after the update.
  • It's generally frowned upon to leave GMS services running while doing updates in full production. It's very likely the system will stop syncing mail. Yes, it does depend on which modules are being patched, but you never really know 100% for sure which modules will cause GMS to stop working.
  • When applying patches from the channel, you need to reboot the server immediately after the update. Do not try to keep the system online until later. It is likely that GMS won't be operational until you reboot.
  • Generally speaking, I recommend NOT configuring automatic updates in SLES on a GMS server. This is because the updates could cause GMS to stop functioning and you won't know why.
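If you want to preview what's pending before you schedule a maintenance window, zypper can list it without applying anything (read-only commands):

# zypper lp   (list available patches)
# zypper lu   (list available package updates)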

The Problem with Leaving GMS Running During Channel Updates
You can see from the screenshot below that I have left the GMS services running while doing a SLES channel update. At the end of the process, you can see that it's suggested to reboot the server, but even though the GMS services are running, the GroupWise sync agent in the dashboard is stopped. At this point, services are down and phones will not sync. That is why I generally recommend that you stop the GMS services before channel patching, or at least plan on rebooting the server or restarting the GMS services immediately after the updates.

GroupWise Mobility Service Updates and Patches (From GMS 18.3.1 and higher)

GMS 24.1, 24.2, and 24.3 require SLES 15 SP4 or higher due to Python and Django library changes, and I strongly recommend going a step further. Please ensure that ALL SLES patches are installed from the SUSE channel prior to upgrading GMS to version 24.1, 24.2, or 24.3. 

The process to update GMS is the same for all GMS versions 18.3.1 and higher. Using the full media, the installation process updates all of the code and performs all necessary updates. The process is relatively painless in most cases. The process goes like this:

  • Save the GMS media file (something like groupwise-mobility-service-xx.x-x86-64.iso) to a location on the Linux file system where GMS is running.
  • Mount the ISO image to the file system.
    • mhcfs03 #  mount -o loop groupwise-mobility-service-xx.x-x86-64.iso /mnt
    • This command mounts the ISO CD image to the /mnt folder. That is where the installation will run from.
  • Run the installer from the /mnt folder.
    • mhcfs03:/mnt #  ./install.sh
    • Accept the license agreement.
    • Generally, step through the various options until finished. Note the following:
      • Perform air gap install?  (yes/no)   [no]   (Generally, unless you have a very specific reason to do so, you do not need this.)
      • You'll be prompted to shut down GroupWise Mobility and update the service. Go ahead and allow that.
        "The update process may take some time. During this process, the GroupWise Mobility Service will be shut down. Are you sure you want to update the GroupWise Mobility Service now? (yes/no):"
      • You'll be prompted to reset the log levels to INFO. I generally leave the logs alone but it's very situational.
        Reset log level for all GroupWise Mobility Services to Info? (yes/no) [yes]:no
      • You'll be prompted to enable anonymous info to be sent to Open Text. Pick your preference here.
        Enable anonymous information to be automatically sent to Open Text to improve your GroupWise mobile experience? (yes/no) [yes]:
  • GMS will restart after the update.
  • If you have any problems during the update and it aborts, you will have to resolve the issue and start over.

Major SLES Service Packs (e.g., SLES 15 SP5)

These can be done offline using an ISO image, but I generally use the "zypper migration" command to perform an online upgrade. This process works and it's my go-to method for service packs. Note the following considerations (a command sketch follows this list):

  • Perform the updates during a defined maintenance window.  Shut off the GMS services and take a snapshot before applying, just in case GMS services don't load correctly after the update.
  • Before applying a major service pack, I always apply all available updates from the channel. Follow the process as outlined above for how to navigate that process with GMS.
  • If part of your plan is to upgrade GroupWise Mobility (an in-place upgrade), do that BEFORE applying the SLES service pack. The reasoning isn't easy to explain, but the main reason is that the newer GMS build patches some bugs that could be problematic when applying a major SLES OS update. If you are not planning on updating GMS, I would strongly suggest reconsidering, unless you are already on the latest GMS version (meaning GMS 23.4 or newer).
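The online migration itself is short (a sketch; the "zypper migration" command comes from the zypper-migration-plugin package and assumes the system is registered with SUSEConnect):

# gms stop
# zypper up   (be fully patched on the current service pack first)
# zypper migration   (interactive; choose the target service pack)
# reboot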

Functionality Test after Patches/Upgrades

Regardless of what type of patch or update was applied, these are the general checks you should perform afterwards to confirm that the system is functioning correctly.

  • After the system boots up again, confirm that all GMS services are running at the command line. Do this via the "gms status" command:
    • mhcfs03#  gms status
  • Login to the GMS dashboard (https://xx.xx.xx.xx:8120) and confirm that everything is functional as expected:
    • Ensure that the "GroupWise Sync Agent" and the "Device Sync Agent" both show as "running" on the Home page of the dashboard.
    • Make sure you can see statistics on the Dashboard. You should see numbers and status on each of the indicators. If it's blank, GMS has a problem.
    • Look at your user list and confirm that all users are listed as expected.
    • Review any Agent Alerts, especially RED ones and take any appropriate action.
    • Run an MCHECK General Health Check and address any issues reported.
  • Check mail flow from devices. Confirm that you are able to send and receive mail correctly.
  • Watch for any critical GMS alerts in the GMS dashboard and handle them accordingly. Note the date/time stamp of the alert; sometimes alerts are old and unrelated to what you're doing at the moment.
  • Specifically, keep an eye on the GMS dashboard for the alert below. You may not see it immediately; sometimes it won't appear for a couple of hours, depending on what people are doing on their mobile devices. The reason for this alert is that a SLES update overwrote one of the Python modules that GMS needs. GMS detects that this happened and corrects it by copying in the correct module; however, you must restart GMS for the fix to take effect. With GMS 18.4.1 and higher, this is very common to see after any SLES patches or updates are applied. Older versions of GMS did not generate this alert.
GroupWise GMS Dashboard Alerts: Failure sending email due to python update. Restart GMS to fix this.

Contingency

If GMS is in a state of failure after any update and it does not appear to be an easy or quick fix, consider reverting to the snapshot unless you can easily identify the issue. It's likely a Python-related issue, and there are some TIDs that discuss them. However, it's not always cut and dried.

Version Upgrades from older GMS to GMS 18.3.1 and higher

If you're running an older version of GMS such as GMS 14.x, 18.0, 18.1, 18.2, or 18.3.0, you cannot upgrade.  To get to GMS version 18.3.1 or higher, you must build a new server. There is no upgrade path and there is no migration path. Use the rest of this guide to help deploy a new server.

System Build Recommendations

VMWARE Options

Disk Controller:

  • Disk Controller 1:  Paravirtual
  • Disk Controller 2: Paravirtual (Optional)
  • Disk Controller 3: Paravirtual (Optional)

Using separate controllers isolates the Operating System disk activity from GroupWise Mobility, improving performance. Paravirtual is the best option for performance because it is working much closer to the hardware than other options.

** Do not install SLES using a different controller and then change it to Paravirtual. This will likely render the server unbootable, as the disk partitions may not be read the same way. **

Disk Configuration

Choose Thin or Thick disks based on the analysis below.

Note the following:  On a small system (up to around 50 users), you probably do not need to worry about using multiple disks.  A single partition will likely work fine.   However, if you get into larger systems with more data, you will benefit from splitting off the GMS data to a separate partition.  There are some considerations to this strategy because there are multiple directories where data is stored, and it's not always easy to know how much space is needed, and where.  

  • Disk 1: 100GB / Assign to the first controller (Disk 0:1)
  • Disk 2 (GMS Data, Optional): Allocate enough space for GroupWise Mobility data plus 25-30% for growth.
    • Assign to the second controller (Disk 1:1).
    • Choose the partitioning type:
      • Thin partitioning is adequate in many cases, especially in small to medium systems.
      • Thick-provisioned eager-zeroed disk partitioning is better for systems with larger user bases and heavier demands.
    • If you have different tiers of storage (fast, slow, near, far, SSD, spinny disks, whatever), put the VMDK file in a Datastore that features the fastest storage available.
  • Disk 3 (Logging Disk, Optional):  50GB.
    • Assign to the third controller (Disk 2:1).
    • Setup the partitioning type the same as disk 2.

If you have a larger and heavily utilized system, and squeezing every last bit of performance out of the drive is critical to a happy user base, consider the following:

  • A thick-provisioned eager-zeroing disk will write data faster than a thin-provisioned disk.
  • A thin-provisioned disk exhibits the same performance as a lazy-zeroed thick-provisioned disk. Because of overprovisioning, however, thin provisioning can cause problems as the underlying datastore approaches its maximum storage capacity.

Boot Options / Firmware

In VMware, under the VM Options --> Boot Options, there is an option called Firmware.  You can choose either "BIOS" or "EFI".   I don't have a hard preference, and both will work fine. There are pros and cons to each one. But this setting is one of those things you don't want to change after you've set it one way or the other. I've never seen a difference from an OS or GroupWise functionality standpoint using one or the other.

BIOS

Pros: It's generally more familiar and I would say simpler to work with. Disk partitioning is very straightforward and simple. With the BIOS setting, you are able to use standard disk tools to manipulate and resize partitions (if you need to expand at some point).

Cons:  It is a legacy setting.

EFI

Pros: EFI is the standard moving forward and offers some functionality and security not available to the BIOS setting.

Cons: The current toolsets on Linux that support EFI are extremely limited compared to what's available with a standard BIOS. For example, if you ever need to resize a partition, the tools are not as readily available under EFI. (This also depends on your filesystem, partitioning type, and how you approach it, i.e., with a bootable offline tool like gparted, or with native OS tools while the system is online and running.)

Note that the partitioning you choose will vary depending on which option you choose here.

Server OS Selection

I generally use the latest version of SLES 15 available when installing GroupWise Mobility.  The only supported operating systems for GroupWise Mobility at the moment are:

  • SLES 15 SP4 (Worked well with GMS 18.3.x and 18.4.x.  Sometimes problematic with GMS 18.5)
  • SLES 15 SP5 (Released on June 20, 2023. Works well with GMS 18.5 and higher. )

I use SLES 15 SP5 for ALL GMS implementations at the moment. There is no reason to use any earlier version of SLES 15.

** UPGRADE NOTE **

  • Can you upgrade an older version of GMS (14.x, 18.0, 18.1, 18.2, 18.3.0) to version 18.3.1?  No. Build a new server.
  • Can you upgrade an older version of GMS (14.x, 18.0, 18.1, 18.2, 18.3.0) to any newer GMS version such as GMS 18.5 or GMS 23.4/24.1/24.2? No. Build a new server.
  • Can you upgrade GMS version 18.3.1, GMS 18.4.x, or GMS 18.5.x to GMS 23.4, GMS 24.1, or GMS 24.2? Yes, you should be able to, as long as your SLES server is running a supported SLES version.

File System Selection and Partitioning

SLES 15 defaults to BTRFS.  Do not use this.  Instead, change the partitioning to EXT4. This is required per the documentation:

  • Specify only EXT4 as the GMS server’s File System: SLES 15 SP2 and later default to BTRFS and you must manually change this to EXT4.

Additionally, note the following:

  • The SLES 15 installer proposes a /home partition using half of the available disk space.  It's important NOT to use the defaults for anything; otherwise, you will waste a massive amount of space, since nothing in Mobility uses the /home folder.  Start the partitioning from scratch and define your own partitions.

Note About GMS 18.4 and BTRFS

With GMS 18.4 there is now some support for the BTRFS file system.  Reference this statement in the GMS 18.4 release notes below.  However, I personally prefer to NOT use BTRFS and do not trust that it is stable for use in a production GMS system.

Load Script Support: Load scripts now run on the BTRFS and XFS file systems. However, EXT4 is still the strongly recommended file system for GMS.

Separate Disk/Partition for GMS Data

If you utilize a 2nd partition for GMS data, it is important to understand the storage structure of GMS and where the data is kept. Per the documentation:

The largest consumers of disk space are the Mobility database (/var/lib/pgsql) and Mobility Service log files (/var/log/datasync). You might want to configure the Mobility server so that /var is on a separate partition to allow for convenient expansion. Another large consumer of disk space is attachment storage in the /var/lib/datasync/mobility/attachments directory.

Typically, with a 2nd disk, I mount it to the /var directory. This puts /var and all subfolders on the 2nd disk, including non-GMS files located in /var, which is typically fine.

Possible 3rd Disk for Logging

The one "possible" downside of mounting disk2 to /var is that both gms data and all server logging (including GMS logging) will be on the same disk. There is potential for the partition to fill up due to the amount of logging that could be done by GMS (especially if you're in diagnostic logging mode).  On very large systems I sometimes create a 3rd disk and mount it specifically to /var/log.  That will prevent any excessive logging from affecting the GMS services.

All about SSL Certificates

SSL Certificates Name and Location

SSL Certificates for GroupWise Mobility have been renamed and moved to a new location with GroupWise Mobility 18.3.1 and beyond:

The files are now located in /var/lib/datasync and are:
gms_mobility.pem  (the only one you should need to worry about; see below)
gms_server.pem
gms_mobility.cer

This information is from the developers about each file and its purpose:

  • The gms_server.pem is used internally by GMS and should NOT be replaced with your public certificate that is used for Devices & Webadmin.
  • The gms_mobility.pem is used by the Devices and Webadmin and should be replaced with a public certificate.
  • The gms_mobility.cer is rarely used but is generated from the gms_mobility.pem.  It is mostly for backward compatibility with older devices that require the manual addition of the certificate.  It is accessed via the user login of the webadmin.

My understanding from the developers is that with Python 3 they were able to simplify the certificate implementation. The current layout is a result of that simplification, even though it may seem more complex at first.

3rd Party Trusted SSL Certificates

It's nearly impossible to use self-signed certificates anymore; most phones require trusted certificates in order to communicate. I am not going to go into detail about how to generate the certificates, but this section shows you how to take your existing 3rd-party certificates and configure them on a GMS server. This is what you'll need:

  • The private key file (that you generated when you created the CSR to submit to your SSL vendor).
    • Also important: GMS requires that the private key file does NOT have a password on it. This is the exact opposite of what is required by the GroupWise agents. (See the openssl one-liner after this list.)
  • The server certificate (generally a .crt file) from your SSL vendor.
  • The SSL vendor's intermediate SSL certificate.
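If the private key you generated does have a passphrase on it, you can create a passwordless copy with openssl (the file names here are placeholders for your own):

# openssl rsa -in server.key -out passwordless.key   (prompts for the passphrase, then writes an unencrypted copy of the key)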

Combining All Certificate and Key Files into a single PEM file

GMS requires that all of your certificate files be combined into a single PEM file. You basically concatenate all the files together using the sequence below. Please note that you will need to use the names of your specific files, not the samples I have used.

# cat passwordless.key > gms_mobility.pem   (Ensure that you are using the Key File that does NOT have a password on it)
# cat server.crt >> gms_mobility.pem   (This is the certificate provided by the SSL vendor)
# cat intermediate.crt >> gms_mobility.pem   (This is the Intermediate certificate provided by the SSL vendor)
# cp gms_mobility.pem /var/lib/datasync/
# gms restart
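A couple of quick openssl sanity checks can confirm the combined PEM file is good (same placeholder file names as above; the modulus check applies to RSA keys):

# openssl rsa -noout -modulus -in passwordless.key | openssl md5   (should match the output of the next command)
# openssl x509 -noout -modulus -in server.crt | openssl md5   (key and certificate moduli must be identical)
# openssl x509 -noout -subject -dates -in gms_mobility.pem   (shows the subject and validity dates of the server certificate)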

Reference TID from Micro Focus

This Micro Focus TID is not completely accurate for GMS 18.3.1+, but the general process is the same and it does go into more detail about the entire process. Note the file name and location changes that I have outlined.  How to configure Certificates from Trusted CA for Mobility (microfocus.com)

Starting / Stopping / Enabling / Disabling GroupWise Mobility

The only way to start and stop the GroupWise Mobility services is from the command line; it is no longer possible to do this from the Mobility dashboard. When I asked why the dashboard changed, the developers provided the following explanation:

"When GMS was converted to run on python 3, there was a significant change in how the connectors (or sync agents) were loaded during startup. We found that in the python 2 days of GMS that often simply starting/stopping the agents from the web admin did not always work in the way that it should (i.e. an agent that was in status "Stopped" would shortly return to the "Stopped" state again after it was started via the web admin control). For this reason, we decided that it was best to just remove this half-broken functionality from the web admin."

Function                                                          Command
Start GroupWise Mobility                                          gms start
Stop GroupWise Mobility                                           gms stop
Show GroupWise Mobility status                                    gms status
Restart GroupWise Mobility                                        gms restart
Enable GroupWise Mobility (start automatically with the system)   gms enable
Disable GroupWise Mobility (prevent automatic start)              gms disable

Troubleshooting GMS

Everything below this point relates to the general troubleshooting of a GroupWise Mobility system.  Many of the issues you may experience are related to performance and resource exhaustion. The tips below are things I do to squeeze more performance out of my systems.

GMS Troubleshooting Tool: MCHECK

GroupWise Mobility has a tool called MCHECK that is used to troubleshoot a variety of different issues related to the day-to-day operation of a GMS system. If you are having any problems with GMS, a good rule of thumb is to run some basic tests with MCHECK and see if any problems are reported.

Documentation: Refer to the link below for product documentation:

https://www.novell.com/documentation/groupwise24/gwmob_guide_admin/data/admin_mgt_mcheck.html

NOTE: The DSAPP tool that was used with older versions of GroupWise Mobility is deprecated and will not work on current versions of GroupWise Mobility. MCHECK is fully supported and actively developed, with new features and options in each new version of GMS. Trying to run DSAPP will leave you frustrated and angry, even if TIDs you're reading make reference to it and sound like it should work.

MCHECK - General Health Check

MCHECK has quite a few different options and overall is pretty useful. I do not go into extensive detail on all of the options here (refer to the documentation).   However, the most useful check is the "General Health Check" because of the number of different checks it runs in a single operation.

The General Health Check runs and displays a series of tests on the GMS server. After all the checks are run, you can view more detailed information about each check in the mcheck log file.

Launching MCHECK & Running a General Health Check

MCHECK is a command-line tool that presents you with a text-based menu. It is a Python application and is launched with Python. The easiest way to run it is to change to the folder where the MCHECK tool is located and run it from there.

  1.  Open a Linux Command line on the GMS server.
  2. Change to the /opt/novell/datasync/tools/mcheck folder:
    • mhcfs03# cd /opt/novell/datasync/tools/mcheck
  3. Run the command to launch mcheck:
    • mhcfs03# python3 mcheck.pyc
  4. The main menu for MCHECK will load:

MCheck (Version: 24.1) - Running as root
--------------------------------------
1. System
2. Users
3. Database
4. Checks & Queries
0. Exit 

Select option:

  • Choose "4. Checks & Queries" from the menu
  • This will bring you to the Checks & Queries submenu, where you will see the "General Health Check" option.

MCheck (Version: 24.1) - Running as root
--------------------------------------
Checks & Queries
1. General Health Check
2. GW pending events by User (consumerevents)
3. Mobility pending events by User (syncevents)
4. Generate csv list of inactive users
0. Back

Select option:  

  • Choose "1. General Health Check". This will start the process and generate the results as shown below:

===Running General Health Check===
--------------------------------------
Checking Mobility Services...      Passed
Checking Trusted Application...    Passed
Checking Required XMLs...          Passed
Checking XMLs...                   Passed
Checking PSQL Configuration...     Passed
Checking Proxy Configuration...    Passed
Checking Disk Space...             Passed
Checking Memory...                 Warning
Checking VMware-tools...           Passed
Checking Automatic Startup...      Passed
Checking Database Schema...        Passed
Checking Database Maintenance...   Passed
Checking Reference Count...        Failed
Checking Databases Integrity...    Failed
Checking Targets Table...          Passed
Checking Certificates...           Warning

The remaining checks may take a while.
Please be patient...

Checking Disk Read Speed...        Passed

Checking Server Date...            Passed
Checking RPMs...                   Passed
Checking Nightly Maintenance...    Passed

Press Enter to continue

MCHECK Log Files & Viewing Problems Found

The log files for MCHECK provide more detail about the issues that are reported. Since the checks cover a lot of different things, it's difficult to outline every possible scenario here. The best thing to do is look at the logs, view the detailed information about any specific issue, and determine how to proceed from there. If the issue is over your head or beyond your comfort level, don't hesitate to reach out.  The MCHECK log files are located here:

/opt/novell/datasync/tools/mcheck/logs

For the General Health Check log files, look for files named "generalHealthCheck_" + date/timestamp + ".log". Each file is timestamped with the date and time the check was run, so after you run a General Health Check, just check the most recent log file. Below is an example listing of the log folder.

-rw-r--r-- 1 root root     0 Jul 18  2022 generalHealthCheck_2022-07-18T23:18:20.log
-rw-r--r-- 1 root root 27817 Nov  2  2022 generalHealthCheck_2022-11-02T05:40:05.log
-rw-r--r-- 1 root root 27516 Jan  4 22:13 generalHealthCheck_2024-01-04T22:12:23.log
-rw-r--r-- 1 root root 27694 Jan 14 19:45 generalHealthCheck_2024-01-14T19:44:44.log
-rw-r--r-- 1 root root 27671 Jan 14 19:48 generalHealthCheck_2024-01-14T19:47:59.log
-rw-r--r-- 1 root root 55336 Jan 14 20:21 generalHealthCheck_2024-01-14T20:19:53.log
-rw-r--r-- 1 root root     0 Jan 14 21:05 generalHealthCheck_2024-01-14T21:05:55.log
-rw-r--r-- 1 root root 27668 Jan 14 21:09 generalHealthCheck_2024-01-14T21:08:51.log
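A convenience one-liner to open the newest General Health Check log (a sketch; adjust the path if your install differs):

# less "$(ls -t /opt/novell/datasync/tools/mcheck/logs/generalHealthCheck_*.log | head -1)"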

Memory Considerations

Over the years, the most common problems I've seen with GMS are related to performance, so I have spent an incredible amount of time and energy finding ways to tune and optimize a GroupWise Mobility server. 90% of the issues I run into are due to the server not having enough memory.  Everybody wants to be in denial about how much RAM is really required. After all, when GMS was first introduced, it was a very lightweight application that hardly took any resources at all. That is no longer true: GMS can be very CPU- and memory-intensive depending on your system, user count, device count, and data sizes.

From the GroupWise documentation:

  • Adequate server memory depending on the number of devices supported by the Mobility server
    • 8 GB RAM to support approximately 300 devices
    • 12 GB RAM to support up to the maximum of 750 users with up to 1000 devices

From real-world experience:

  • 8 GB RAM is fine for 20-25 users.
  • 16 GB RAM is generally the minimum I will go with if there are 100 users.
  • For a 300+ user system, I will generally end up needing to assign 24 GB or even 32 GB of RAM.

Symptoms of Low RAM in a GroupWise Mobility Server

When the RAM on a GMS server is depleted, you will typically experience the following symptoms:

  • Extreme latency with receiving messages on your phone.
  • Phones stop syncing system wide.
  • A reboot of the server immediately resolves the issue because it frees all the RAM and memory usage starts over.
  • Your swap space will be consumed, which can cause even more problems, especially if your server is virtualized.

Diagnosing Low Memory

A good way to determine the memory status on your GMS server is with the "free -m" Linux command, as shown below:
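Representative output from a server like the one analyzed below (illustrative values matching the discussion that follows):

mhcfs03:~ # free -m
              total        used        free      shared  buff/cache   available
Mem:           7682        5210         124         310        2348        1890
Swap:          2047          14        2033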

Analysis

You can see that this server has 8 GB RAM (7682 MB).  For reference, this is a very small system with only 15 users on GroupWise Mobility.  If the customer told me they were having problems syncing phones, I would look at this and most likely increase the RAM to 10 GB. Here are a few items I look at:

  • Free: This shows 124 MB free out of 7682 MB. That is pretty low.
  • Swap: Out of a 2 GB swap space, 14 MB has been used.  The fact that swap space has been used at all tells me the server needs more RAM. In a perfect world, I want to avoid using swap space entirely (that is a discussion for another day).
  • Uptime:  Uptime stats for this server:  19:36:20 up 73 days 9:13, 1 user, load average: 0.36, 0.59, 0.61
    Because the server has been up for 73 days, I can be confident that the Memory usage has normalized and I'm not being influenced by a server reboot. Having a server that's been up for a while gives me more confidence in my memory stats and analysis.

CPU Considerations

GroupWise Mobility relies heavily on the PostgreSQL database.   Databases in general can be CPU intensive. CPU requirements go up with user count, device count, and data sizes.

From the GroupWise documentation:

  • x86-64 processor
  • 2.8 GHz multi-processor system with a 4-core CPU recommended

From real-world experience:

4 cores are generally adequate on a smaller system, but may not be adequate at all on larger systems with hundreds of users. What you will find is that your Mobility server can run very high CPU utilization with average day-to-day usage. Adding CPUs can help in this situation and help the system run better. The biggest consumer of CPU is the PostgreSQL service.

Symptoms of an overburdened CPU in a GroupWise Mobility Server

Ensure that your RAM allocations are adequate before troubleshooting the processor; RAM accounts for most performance problems.  Once you are confident you have enough RAM, the following symptoms can indicate that the CPU allocation is too small.

  • Very high CPU utilization at the console.
  • Latency sending and receiving email on phones, system-wide.
  • If virtualized, CPU alerts in VMware.

The Linux "top" Command

The "top" command shows the most heavily utilized processes in real-time on a Linux server. Let's take a look at this screenshot below. It outlines the heaviest processes that are hitting the CPU.

As a point of reference, this is a very busy server. It currently has 36 GB RAM assigned, 6 CPUs, and 400 users provisioned.  I am troubleshooting high utilization and complaints from the customer.
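Here is an illustrative excerpt of the kind of process list you would see (the PIDs and numbers are made up; the pattern of python3 and postgres dominating the CPU is the point):

  PID USER      PR  NI    VIRT    RES    SHR S  %CPU  %MEM     TIME+  COMMAND
 2411 postgres  20   0    2.8g  812m  3.4m R  97.0   2.2  412:11.32  postgres
 2389 root      20   0    4.8g  1.2g   10m S  85.3   3.4  988:02.11  python3
 2395 root      20   0    3.9g  901m   10m S  62.1   2.4  701:44.50  python3
 2402 postgres  20   0    2.8g  644m  3.2m S  48.7   1.7  355:09.12  postgres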

CPU Count / "top" analysis

You can see from the results of this "top" command that the majority of the heavy processes are either "python3" or "postgres".  GroupWise Mobility is built on Python 3 and uses the PostgreSQL database, so you can safely say that GroupWise Mobility is hitting the CPU hard on this server. Increasing the CPU count from 6 to 8 could help with performance.

I recently worked on two very large GMS systems with chronic CPU utilization issues. Both had 8 cores assigned in VMware, and each server hosts approximately 500 users.  Begrudgingly, I increased the CPUs from 8 to 12 and noted an immediate and significant reduction in CPU usage. I approach this carefully, but in this case the host servers were very high-end machines with 36-core CPUs, so despite other considerations, I felt they could handle it, and it was the right decision.

NOTE: There are numerous articles available online about the number of CPUs in a server, and research shows that more CPUs does not always mean better performance; in fact, it is argued that sometimes it can reduce performance.  So when it comes to increasing the number of CPUs in a GroupWise Mobility server, I approach it cautiously. It is usually something I would change only after I have exhausted all the other recommendations in this article.   It's critical to understand how CPU allocation works in VMware before randomly adding CPUs to any VM.

Misc Performance Tuning

There are five other items that can be tuned if you are not getting the performance you need out of your system. These are core Linux performance settings that depart from the standard defaults. In many cases, the defaults are adequate, but if you are looking to squeeze a little more performance out of your system, these are options to consider.

  1. I/O Scheduler: "none"
  2. File System Mount Option:  "noatime"
  3. Linux Swappiness
  4. Storage Hardware / SAN Tiered Storage
  5. Log Level in GMS

** Note ** These are advanced system settings and you should fully understand that a misstep here could cause severe problems with your system. Please approach with caution.

I/O Scheduler: "none"

Earlier versions of SLES used a default I/O scheduler called "noop".  This changed somewhere around SLES 15 SP3.  For the sake of this discussion, please reference this SUSE documentation: https://documentation.suse.com/sles/15-SP3/html/SLES-all/cha-tuning-io.html

Summary: The default scheduler on SLES 15 SP3 is "bfq". The desired tweak is to change the scheduler to "none".

The link above explains how to do this; however, it's not the easiest document to read.  From the docs: "With no overhead compared to other I/O elevator options, it is considered the fastest way of passing down I/O requests on multiple queues to such devices."

Confirm the Current Scheduler and Proper Scheduler Name

Before and after any change to the scheduler, you should run a command to confirm the active scheduler; there's nothing worse than making a change and then not knowing whether it worked. Use this simple command to determine which scheduler is active on your system:

cat /sys/block/sda/queue/scheduler    (Replace "sda" with your actual disk device name(s) that GMS is using)
serverapp1:/ # cat /sys/block/sda/queue/scheduler
none mq-deadline kyber [bfq]

The item in brackets is the active scheduler. In the case of the example above, the active scheduler is "bfq".  You want it to be "none".

Here is what you need to do to implement the "none" scheduler:

  • Find the file /usr/lib/udev/rules.d/60-io-scheduler.rules
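For reference, a persistent rule looks something like this (a sketch using standard udev syntax; copy the file to /etc/udev/rules.d/ before editing so your change survives package updates, and adjust the KERNEL match to your own disk device names):

ACTION=="add|change", KERNEL=="sd*[!0-9]", ATTR{queue/scheduler}="none"

You can also test the change non-persistently first (it reverts at reboot):

# echo none > /sys/block/sda/queue/scheduler
# cat /sys/block/sda/queue/scheduler   (confirm that "none" now appears in brackets)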