GroupWise Mobility Service (GMS) Best Practice Guide

For GroupWise Mobility versions 18.3.1, 18.4, 18.5, and 23.4

This document was originally written for GMS version 18.3.1, when some significant architectural changes were made to the underlying code. It is relevant to GroupWise Mobility 18.3.1 and beyond, including GMS 18.5 (released May 23, 2023) and the current release, GMS 23.4.

GMS 23.4 New Stuff  (Released Oct 25, 2023)
GMS 23.4 added a few new features as well as rebranding from Micro Focus to Open Text.

  • Rebranded from Micro Focus to Open Text.
  • Notify User when Mobility Account is Provisioned.
  • Generate CSV List of Inactive Users.

GMS 18.5 New Stuff
GMS 18.5 was mostly bug fixes.

GMS 18.4.x New Stuff
GMS 18.4 was mostly bug fixes and a few feature enhancements. Here's an overview.

  • MCHECK was dramatically improved for usability and administrative functionality. Prior to this release, MCHECK was practically useless; it was never much of an app because the widely popular DSAPP utility covered the same ground. DSAPP is now deprecated and will not even run with current GroupWise Mobility, so MCHECK is the only supported tool, and its feature set has been dramatically improved.
  • Support for TLS 1.3, along with some considerations for certificates and certificate verification related to the GroupWise 18.4 release.
  • Support for the BTRFS and XFS file systems. (EXT4 is still recommended.)

GMS 18.3.1 New Stuff
GroupWise Mobility 18.3.1 brought some significant changes to GMS compared to earlier versions. These still apply to all GMS releases beyond 18.3.1, including the newest release, GMS 23.4. Here is a summary:

  • Python version 3
    Earlier versions of GroupWise Mobility were built on Python version 2. This became a bigger and bigger problem as Python 3 became the de facto standard on SLES 15. With the release of GroupWise Mobility 18.3.1, Python 2 was abandoned completely and the product was rebuilt entirely on Python 3.
  • Starting/Stopping GroupWise Mobility
    Changes to the way you start and stop GroupWise Mobility:

    • You no longer have the ability to start/stop the agents from the GMS dashboard.
    • The way you stop the agents on the command line has also changed.
  • Provisioning and Authentication: GroupWise Groups instead of LDAP Groups
    Only GroupWise "Groups" and "Users" are supported for user provisioning and authentication. LDAP is no longer an option for either.
  • SSL Certificates
    SSL Certificates have moved locations and been renamed.
  • Upgrades from Older Versions
    You cannot upgrade an older GroupWise Mobility system to version 18.3.1 or higher. It's not supported, and the code will not let you do it. You must install a new server. Note: Yes, I have tried to get around this. No, it doesn't work, and it's not worth it.

System Build Recommendations

VMWARE Options

Disk Controller:

  • Disk Controller 1:  Paravirtual
  • Disk Controller 2: Paravirtual (Optional)
  • Disk Controller 3: Paravirtual (Optional)

Using separate controllers isolates the Operating System disk activity from GroupWise Mobility disk activity, improving performance. Paravirtual is the best option for performance because it works much closer to the hardware than the other options.

** Do not install SLES using a different controller and then change it to Paravirtual. This will likely render the server unbootable because the disk partitions may not be read the same way. **

Disk Configuration

Choose Thin or Thick disks based on the analysis below.

Note the following:  On a small system (up to around 50 users), you probably do not need to worry about using multiple disks.  A single partition will likely work fine.   However, if you get into larger systems with more data, you will benefit from splitting off the GMS data to a separate partition.  There are some considerations to this strategy because there are multiple directories where data is stored, and it's not always easy to know how much space is needed, and where.  

  • Disk 1: 100GB / Assign to the first controller (Disk 0:1)
  • Disk 2 (GMS Data, Optional): Allocate enough space for GroupWise Mobility data plus 25-30% for growth.
    • Assign to the second controller (Disk 1:1).
    • Choose the partitioning type:
      • Thin partitioning is adequate in many cases, especially in small to medium systems.
      • Thick-provisioned eager-zeroed disk partitioning is better for systems with larger user bases and heavier demands.
    • If you have different tiers of storage (fast, slow, near, far, SSD, spinny disks, whatever), put the VMDK file in a Datastore that features the fastest storage available.
  • Disk 3 (Logging Disk, Optional):  50GB.
    • Assign to the third controller (Disk 2:1).
    • Set up the partitioning type the same as Disk 2.

If you have a larger and heavily utilized system, and squeezing every last bit of performance out of the drive is critical to a happy user base, consider the following:

  • A thick-provisioned eager-zeroing disk will write data faster than a thin-provisioned disk.
  • A thin-provisioned disk exhibits the same performance as a lazy-zeroed thick-provisioned disk. However, because of overprovisioning, thin provisioning can cause problems as the datastore approaches its maximum storage capacity.

Boot Options / Firmware

In VMware, under the VM Options --> Boot Options, there is an option called Firmware.  You can choose either "BIOS" or "EFI".   I don't have a hard preference, and both will work fine. There are pros and cons to each one. But this setting is one of those things you don't want to change after you've set it one way or the other. I've never seen a difference from an OS or GroupWise functionality standpoint using one or the other.

BIOS

Pros: It's generally more familiar and I would say simpler to work with. Disk partitioning is very straightforward and simple. With the BIOS setting, you are able to use standard disk tools to manipulate and resize partitions (if you need to expand at some point).

Cons:  It is a legacy setting.

EFI

Pros: EFI is the standard moving forward and offers some functionality and security not available to the BIOS setting.

Cons: The current toolsets on Linux that support EFI are extremely limited compared to what's available with a standard BIOS. For example, if you ever need to resize a partition, the tools are not as readily available with EFI. This also depends on your file system, partitioning type, and how you approach it, i.e., with a bootable offline tool like gparted or with native OS tools while the system is online and running.

Note that the partitioning layout will vary depending on which firmware option you choose here.

Server OS Selection

I generally use the latest version of SLES 15 available when installing GroupWise Mobility. The only supported operating systems for GroupWise Mobility at the moment are:

  • SLES 15 SP2 (GMS 18.5 will likely not run on SLES 15 SP2 due to Python-related modules, although I have not tested this.)
  • SLES 15 SP3 (GMS 18.5 will likely not run on SLES 15 SP3 due to Python-related modules, although I have not tested this.)
  • SLES 15 SP4
  • SLES 15 SP5  (Released on June 20, 2023. I have not yet tested with GroupWise Mobility)

SLES 15 SP4 was released on June 24, 2022. I use SLES 15 SP4 for ALL GMS implementations at the moment. There is no reason to use any earlier version of SLES 15.

** Since you cannot upgrade any older version of GMS to version 18.3.1 or newer, you are required to build a new SLES 15 server. In this case, just build a SLES 15 SP4 server. **

File System Selection and Partitioning

SLES 15 defaults to BTRFS. Do not use this. Instead, change the partitioning to EXT4. This is required per the documentation:

  • Specify only EXT4 as the GMS server’s File System: SLES 15 SP2 and later default to BTRFS and you must manually change this to EXT4.

Additionally, note the following:

  • The SLES 15 installer proposes a /home partition that uses half of the available disk space. It's important NOT to accept the defaults for anything; otherwise, you will waste a massive amount of space, since nothing on Mobility uses the /home folder. Start the partitioning from scratch and define your own partitions.

Note About GMS 18.4 and BTRFS

With GMS 18.4 there is now some support for the BTRFS file system. Reference this statement from the GMS 18.4 release notes, quoted below. However, I personally prefer NOT to use BTRFS and do not trust that it is stable for use in a production GMS system.

Load Script Support: Load scripts now run on the BTRFS and XFS file systems. However, EXT4 is still the strongly recommended file system for GMS.

Separate Disk/Partition for GMS Data

If you utilize a 2nd partition for GMS data, it is important to understand the storage structure of GMS and where the data is kept. Per the documentation:

The largest consumers of disk space are the Mobility database (/var/lib/pgsql) and Mobility Service log files (/var/log/datasync). You might want to configure the Mobility server so that /var is on a separate partition to allow for convenient expansion. Another large consumer of disk space is attachment storage in the /var/lib/datasync/mobility/attachments directory.

Typically, with a 2nd disk, I mount it to the /var directory. This puts /var and all of its subfolders on the 2nd disk, including non-GMS files located in /var. This is typically fine.

Possible 3rd Disk for Logging

The one "possible" downside of mounting disk2 to /var is that both gms data and all server logging (including GMS logging) will be on the same disk. There is potential for the partition to fill up due to the amount of logging that could be done by GMS (especially if you're in diagnostic logging mode).  On very large systems I sometimes create a 3rd disk and mount it specifically to /var/log.  That will prevent any excessive logging from affecting the GMS services.

All about SSL Certificates

SSL Certificates Name and Location

SSL Certificates for GroupWise Mobility have been renamed and moved to a new location with GroupWise Mobility 18.3.1 and beyond:

The files used are now in /var/lib/datasync and are:
gms_mobility.pem  (The only one you should need to worry about, see below)
gms_server.pem
gms_mobility.cer

This information is from the developers about each file and its purpose:

  • The gms_server.pem is used internally by GMS and should NOT be replaced with your public certificate that is used for Devices & Webadmin.
  • The gms_mobility.pem is used by the Devices and Webadmin and should be replaced with a public certificate.
  • The gms_mobility.cer is rarely used but is generated from the gms_mobility.pem.  It is mostly for backward compatibility with older devices that require the manual addition of the certificate.  It is accessed via the user login of the webadmin.

My understanding from the developers is that with Python 3 they were able to simplify the certificate implementation. This layout is the result of that simplification, even though it may seem more complex at first.
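
If you ever need to check which certificate is actually in place (for example, to confirm an expiration date), openssl can read the PEM file directly. This is standard openssl usage, not a GMS-specific tool, and it reports the first certificate found in the file:

# openssl x509 -in /var/lib/datasync/gms_mobility.pem -noout -subject -issuer -dates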

3rd Party Trusted SSL Certificates

It's nearly impossible to use self-signed certificates anymore. Most phones require trusted certs in order to communicate. I am not going to go into detail about how to generate the certificates, but this shows you what you need to do to take your existing 3rd party certificates and configure them on a GMS server. This is what you'll need:

  • The private key file (that you generated when you created the CSR to submit to your SSL vendor)
    • Also important: GMS requires that the private key file does NOT have a password on it. This is the exact opposite of what is required by the GroupWise Agents. (See the openssl example after this list if you need to strip the passphrase.)
  • The server certificate (Generally .crt file) from your SSL vendor
  • The SSL Vendor's intermediate SSL Certificate.
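
If your private key does have a passphrase on it, openssl can write a passwordless copy. This is standard openssl usage; "server.key" and "passwordless.key" are sample names, so substitute your own file names:

# openssl rsa -in server.key -out passwordless.key   (prompts for the existing passphrase, then writes an unencrypted copy of the key)
# chmod 600 passwordless.key                         (keep the unencrypted key readable by root only)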

Combining All Certificate and Key Files into a single PEM file

GMS requires that all of your certificate files are combined into a single PEM file. You basically concatenate all files together using the sequence below. Please note that you will need to use the name of your specific files, not the samples I have used.

# cat passwordless.key > gms_mobility.pem   (Ensure that you are using the Key File that does NOT have a password on it)
# cat server.crt >> gms_mobility.pem   (This is the certificate provided by the SSL vendor)
# cat intermediate.crt >> gms_mobility.pem   (This is the Intermediate certificate provided by the SSL vendor)
# cp gms_mobility.pem /var/lib/datasync/
# gms restart
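
It is also worth confirming that the key and the certificate actually belong together before you copy the combined file into place. These are standard openssl commands using the same sample file names as above; the two MD5 values they print must match:

# openssl x509 -noout -modulus -in server.crt | openssl md5
# openssl rsa -noout -modulus -in passwordless.key | openssl md5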

Reference TID from Micro Focus

This Micro Focus TID is not completely accurate for GMS 18.3.1+, but the general process is the same and it goes into more detail about each step. Note the file name and location changes that I have outlined above.  How to configure Certificates from Trusted CA for Mobility (microfocus.com)

Starting / Stopping / Enabling / Disabling GroupWise Mobility

The only way to start and stop GroupWise Mobility services is from the command line. It is no longer possible to do this from the Mobility Dashboard. When I asked why the dashboard changed, the developers provided the following explanation:

"When GMS was converted to run on python 3, there was a significant change in how the connectors (or sync agents) were loaded during startup. We found that in the python 2 days of GMS that often simply starting/stopping the agents from the web admin did not always work in the way that it should (i.e. an agent that was in status "Stopped" would shortly return to the "Stopped" state again after it was started via the web admin control). For this reason, we decided that it was best to just remove this half-broken functionality from the web admin."

Function                                                                Command
Start GroupWise Mobility                                                gms start
Stop GroupWise Mobility                                                 gms stop
Show GroupWise Mobility Status                                          gms status
Restart GroupWise Mobility                                              gms restart
Enable GroupWise Mobility (start automatically with the system)         gms enable
Disable GroupWise Mobility (prevent automatic start with the system)    gms disable
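
For example, a typical restart with a status check before and after, using only the commands from the table above:

# gms status
# gms restart
# gms status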

Troubleshooting GMS

Everything below this point relates to the general troubleshooting of a GroupWise Mobility system.  Many issues you may experience are related to performance issues and resource exhaustion. The tips below are things I do to squeeze more performance out of my systems.

Memory Considerations

Over the years, the most common problems I've seen with GMS are related to performance. So I have spent an incredible amount of time and energy finding ways to tune and optimize the performance of a GroupWise Mobility server. 90% of the issues I run into are due to the server not having enough memory.  Everybody wants to be in denial about how much RAM is really required. After all, when GMS was first introduced, it was a very lightweight application that hardly took any resources at all. This is no longer true, and GMS can be very CPU and Memory intensive depending on your system, user count, device count, and data sizes.

From the GroupWise Documentation:
  • Adequate server memory depending on the number of devices supported by the Mobility server
    • 8 GB RAM to support approximately 300 devices
    • 12 GB RAM to support up to the maximum of 750 users with up to 1000 devices

From Real-World Experience:
  • 8GB RAM is fine for 20-25 users.
  • 16GB RAM is generally the minimum I will go with if there are 100 users.
  • For a 300+ user system, I will generally end up needing to assign 24GB or even 32GB of RAM.

Symptoms of Low RAM in a GroupWise Mobility Server

When the RAM on a GMS server is depleted, you will typically experience the following symptoms:

  • Extreme latency with receiving messages on your phone.
  • Phones stop syncing system wide.
  • A reboot of the server immediately resolves the issue because it frees up all the RAM and starts over with its usage.
  • Your SWAP space will be consumed, which can cause even more problems, especially if your server is virtualized.

Diagnosing Low Memory

A good way to determine the memory status on your GMS server is by using the "free -m" Linux command, as shown below:
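
Here is a representative example from the small server analyzed below. The total, free, and swap figures match the analysis; the other columns are illustrative only, and your output will differ:

gmsserver:~ # free -m
              total        used        free      shared  buff/cache   available
Mem:           7682        5100         124         118        2458        2204
Swap:          2047          14        2033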

Analysis

You can see that this server has 8GB RAM (7682MB). For reference, this is a very small system with only 15 users on GroupWise Mobility. If the customer were to tell me that they were having problems syncing phones, I would look at this and most likely increase the RAM to 10GB. Here are a few items I look at:

  • Free:  This shows 124MB free out of 7682MB. That is pretty low free space.
  • Swap: Out of a 2GB swap space, 14MB has been used.  The fact that swap space has been used at all tells me that it needs more RAM. In a perfect world, I want to avoid using swap space (That is a discussion for another day).
  • Uptime:  Uptime stats for this server:  19:36:20 up 73 days 9:13, 1 user, load average: 0.36, 0.59, 0.61
    Because the server has been up for 73 days, I can be confident that the Memory usage has normalized and I'm not being influenced by a server reboot. Having a server that's been up for a while gives me more confidence in my memory stats and analysis.

CPU Considerations

GroupWise Mobility relies heavily on the PostgreSQL database.   Databases in general can be CPU intensive. CPU requirements go up with user count, device count, and data sizes.

From the GroupWise Documentation:
  • x86-64 processor
  • 2.8 GHz multi-processor system with a 4-Core CPU recommended

From Real-World Experience:
4 Cores are generally adequate on a smaller system, but may not be adequate at all on larger systems with hundreds of users. What you will find is that your Mobility Server can run very high CPU utilization with average day-to-day usage. Adding CPUs can help with this situation and help the system run better. The biggest consumer of the CPU is the PostgreSQL service.
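
To confirm how many CPU cores the operating system actually sees (useful when comparing against what the VM was configured with), standard Linux commands such as these work; they are not GMS-specific:

# nproc
# lscpu | grep "^CPU(s):"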

Symptoms of an overburdened CPU in a GroupWise Mobility Server

Ensure that your RAM allocations are adequate before troubleshooting the processor. RAM accounts for most performance problems.  After you are confident you have enough RAM, note the following symptoms that could indicate the CPU allocations are too small.

  • Very high CPU utilization at the console.
  • Latency sending and receiving email on phones, system-wide.
  • If virtualized, CPU alerts in VMWare.

The Linux "top" Command

The "top" command shows the most heavily utilized processes in real-time on a Linux server. Let's take a look at this screenshot below. It outlines the heaviest processes that are hitting the CPU.

As a point of reference, this is a very busy server. It currently has 36GB RAM assigned, 6 CPUs, and 400 users provisioned. I am troubleshooting high utilization and complaints from the customer.

CPU / "top" analysis

You can see from the results of this 'top' command that the majority of the processes are either "python3" or "postgres". GroupWise Mobility is built with Python 3, and it uses the PostgreSQL database. Therefore, you can safely say that GroupWise Mobility is hitting the CPU hard on this server. Increasing the CPU count from 6 CPUs to 8 CPUs could help with performance.
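
If you want a non-interactive snapshot of the same information (for example, to paste into a support ticket), a standard Linux command like this works; it is not GMS-specific:

# ps aux --sort=-%cpu | head -15   (top 15 processes by CPU; on a busy GMS server, expect python3 and postgres near the top)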

NOTE: There are numerous articles available online about the number of CPUs in a server, and research shows that more CPUs do not always mean better performance. In fact, it is argued that sometimes adding CPUs can reduce performance. So when it comes to increasing the number of CPUs in a GroupWise Mobility server, I approach this cautiously. It is usually something I would change only after I have exhausted all other recommendations in this article.

Misc Performance Tuning

There are five other items that can be tuned if you are not getting the performance you need out of your system. These are core Linux settings that depart from the standard defaults. In many cases, the defaults are adequate. But if you are looking to squeeze a little more performance out of your system, these are options you can consider.

  1. I/O Scheduler: "none"
  2. File System Mount Option:  "noatime"
  3. Linux Swappiness
  4. Storage Hardware / SAN Tiered Storage
  5. Log Level in GMS

** Note ** These are advanced system settings and you should fully understand that a misstep here could cause severe problems with your system. Please approach with caution.

I/O Scheduler: "none"

Earlier versions of SLES used a default I/O Scheduler called "noop".  This was all changed somewhere around SLES 15 SP3.  For the sake of this discussion, please reference this SUSE documentation: https://documentation.suse.com/sles/15-SP3/html/SLES-all/cha-tuning-io.html

Summary: The default scheduler on SLES 15 SP3 is "bfq". The desired tweak is to change the scheduler to "none".

The link above explains how to do it; however, it's not the easiest document to read. From the docs: "With no overhead compared to other I/O elevator options, it is considered the fastest way of passing down I/O requests on multiple queues to such devices."

Confirm the Current Scheduler and proper scheduler name

Before and after any changes to the scheduler, you should run a command to confirm the active scheduler.  Nothing worse than making a change, and then not knowing if it worked or not. Use this simple command to determine what scheduler is active on your system.

cat /sys/block/sda/queue/scheduler    (Replace "sda" with your actual disk device name(s) that GMS is using)
serverapp1:/ # cat /sys/block/sda/queue/scheduler
none mq-deadline kyber [bfq]

The item in brackets is the active scheduler. In the case of the example above, the active scheduler is "bfq".  You want it to be "none".
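
If you want to test the effect before making it permanent, the scheduler can be switched on the fly. This change does not survive a reboot:

# echo none > /sys/block/sda/queue/scheduler   (again, replace "sda" with your actual device name)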

Here is what you need to do to implement the "none" scheduler:

  • Find the file "/usr/lib/udev/rules.d/60-io-scheduler.rules" and copy it to /etc/udev/rules.d/
  • Edit the file "/etc/udev/rules.d/60-io-scheduler.rules" and add this line to it:
    • KERNEL=="sd[a-z]*", ATTR{queue/scheduler}="none", GOTO="scheduler_end"
    • Modify the "sd[a-z]" portion as required to match your specific device names.
  • Reboot the server after making this change.
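
As a concrete sketch of those steps (this assumes the stock rules file exists at the path shown on your SLES release and contains a LABEL="scheduler_end" line, which the GOTO in the added rule jumps to):

# cp /usr/lib/udev/rules.d/60-io-scheduler.rules /etc/udev/rules.d/
# vi /etc/udev/rules.d/60-io-scheduler.rules   (add the KERNEL== line from the list above, before the LABEL="scheduler_end" line)
# reboot

After the reboot, the verification command shown earlier should report something like "[none] mq-deadline kyber bfq", with "none" now in brackets.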

**NOTE** The above line may need to be changed to accommodate the naming of your disk devices. The syntax uses regular expressions, so "sd[a-z]" means it will set the scheduler to "none" on any device name that falls within sd[a through z]: for example, sda, sdb, sdc, sdd, sde... sdx, sdy, sdz. On most VMware virtualized Linux servers, the syntax listed will work. It's possible that yours will be different; if that is the case, make the appropriate modifications to that line. You can easily see what your devices are by using this command:

  • df -h

This will show you a list of your devices, and you should see something like the output below.  In this specific example, you can see an sda, sdb, and sdc meaning there are 3 separate virtual disks.

server2:/# df -h
Filesystem Size Used Avail Use% Mounted on
/dev/sda4 45G 6.1G 37G 15% /
/dev/sda3 2.0G 50M 1.8G 3% /root
/dev/sdb 25G 2.9G 21G 13% /var/log
/dev/sda1 511M 4.9M 507M 1% /boot/efi
/dev/sdc 492G 26G 441G 6% /var/lib

File System Mount Option: "noatime"

When a Linux file system is mounted, by default any time a file is read or accessed, the file's access time attribute is updated to record that access. Generally speaking, this is unnecessary and creates added overhead and a performance hit.

Summary: Disable this behavior by adding the "noatime" attribute at system startup to your file system mount directives.

This is done in the /etc/fstab file.

gmsserver:~ # cat /etc/fstab
UUID=fa9c8105-065f-4dfa-9766-95e72cb397a5 swap swap defaults 0 0
UUID=9315ebe4-d560-4249-842f-71524c041854 / ext4 defaults,noatime 0 1
UUID=44075e18-79d8-47bb-8b0e-0e39e027a0fc /var/log ext4 noatime,data=ordered 0 2
UUID=17d4970f-a4a3-4664-9170-922cffc9989f /var/lib ext4 noatime,data=ordered 0 2
UUID=38faf7aa-0286-4fe8-925e-7517c5bf8d94 /root ext4 noatime,data=ordered 0 2
UUID=D00B-3D8F /boot/efi vfat utf8 0 2

The above shows a typical /etc/fstab file. You can see the various file systems being mounted. The place for "noatime" is after the file system type, which you can see here is "ext4". Any mount options are placed here in the file, and multiple directives are separated with a comma. You might see a directive called "defaults", or you may have other directives such as the "data=ordered" shown here. Just append a comma and add "noatime". When the system boots next, the parameter will take effect as the file systems are mounted.
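
If you would rather not wait for a reboot, the same option can be applied to an already-mounted file system. This is standard mount syntax; repeat it for each mount point you changed in /etc/fstab:

# mount -o remount,noatime /var/lib
# mount | grep /var/lib   (confirm that "noatime" now appears in the mount options)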

I will add more to this section later.

Linux Swappiness

Swappiness affects how the server behaves when physical memory is depleted and swap space has to be used. The default value on SLES 15 is 60. The goal is to change this to 1. Changing this value to 1 minimizes the amount of swapping while not completely disabling it.

vm.swappiness = 1: the minimum amount of swapping without disabling it entirely

Find your current setting:

# sysctl vm.swappiness
vm.swappiness = 60

Change the vm.swappiness value:
This command will put the setting into effect immediately; however, it will not stick after a reboot:

# echo 1 > /proc/sys/vm/swappiness

To permanently make the change on your server, do the following:

  1. Edit the file /etc/sysctl.conf
  2. add this statement anywhere in the file:
    • vm.swappiness = 1
  3. Save the file and reboot the server.
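
If you prefer not to reboot just for this, the same file can be loaded immediately after editing it. This is standard sysctl behavior, not GMS-specific:

# sysctl -p /etc/sysctl.conf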

Confirm the vm.swappiness value:
Confirm the setting is correct by issuing the following command:

# sysctl vm.swappiness
vm.swappiness = 1

Reference Article

Reference the following web page for a deeper discussion of this topic: Understanding VM Swappiness. From that page, you can find the following excerpt:

"The value of 60 is a compromise that works well for modern desktop systems. A smaller value is a recommended option for a server system, instead. As the Red Hat Performance Tuning manual points out, a smaller swappiness value is recommended for database workloads. For example, for Oracle databases, Red Hat recommends a swappiness value of 10. In contrast, for MariaDB databases, it is recommended to set swappiness to a value of 1 "

Because GroupWise Mobility is a very heavy PostgreSQL Database application, it makes sense to apply the same logic to a GMS server. And this is why I suggest setting this value to 1.

Background on why I started making this setting change

Over the years, with virtual Linux systems in general, I have found that if memory is depleted and the system has to start swapping memory to disk, the server often completely chokes and can become non-responsive for several minutes. This creates a production outage and a system emergency as you scramble to figure out what happened. My goal with this setting is to minimize this behavior and prevent any sort of outage.

Storage Hardware / SAN Tiered Storage

Please put your GMS system on the fastest storage available to you. GMS is very disk intensive. A slow disk could cripple your system.

Good Options:

  • SSD (Solid State Drives)
  • 10K or 15K SAS drives in a fast RAID configuration
  • High-performance SANs
  • The fastest / highest priority storage on a multi-tier SAN

Bad Options:

  • Slow SATA drives
  • SAN connected to a slow iSCSI connection
  • The slowest tier on a multi-tier SAN

Log Level used by GMS

The default log level for GMS is "info". This is the preferred setting for day-to-day usage. However, sometimes when you are diagnosing problem conditions, you need to set the logs to Diagnostic.  Note the following recommendations:

  • On a busy system, Diagnostic Logging can generate millions of lines of logging in a very short time span. It's nearly impossible to read the Diagnostic logs and know what to look for.  Therefore, my preference is to only turn on diagnostic logging if I need to gather the information to send to the Support team for analysis. Otherwise, turn it off and save system resources.
  • If you're troubleshooting GMS, also ensure that your POA's are in Verbose logging mode so you can troubleshoot that side if needed.
  • Old log files are not that useful in most cases. If restarting GMS clears up the problem temporarily, resist the temptation: the restart masks the problem but does not help with a diagnosis. Instead, set the logs to Diagnostic, restart GMS, and then let it run a while. When the problem comes back, you should have a lot of good diagnostic data to work with.
  • Sometimes it is helpful to shut down GMS, remove (or move) all the log files in /var/log/datasync (and subfolders), and get a clean start for simplicity's sake. The amount of logging is overwhelming, so a clean start makes it easier to know you're looking at the right data. (See the sketch after this list.)
  • If you are troubleshooting, note the date/time of the issue and any other pertinent info so that the log info can be located more easily in the log files. Things that are helpful to know are the UserID and Subject Line of messages that might not be syncing.
  • One additional strategy you could use on a very busy system is to ensure that the GMS log files are on a different disk partition than the GMS data. This balances the disk writes across different devices and theoretically minimizes the hit on the primary GMS data partition. With the layout I outlined above in the system build recommendations, you accomplish this by having 2 or even 3 disks. If you've built a system with a different partitioning strategy, you may have to improvise and adjust based on how it is configured. If the system is already running, completely changing the partitioning strategy is not practical or possible, but you could add another VMDK file (assuming VMware) to the server, partition it, and mount it to the GMS log location, which is /var/log/datasync. If you already have the main GMS data partitioned separately and /var/log is on the system partition, then this suggestion is less applicable.
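
Here is a sketch of the clean-start approach mentioned above. The archive path is my own choice, and the file name pattern may need adjusting to match the log files on your system:

# gms stop
# tar czf /root/gms-logs-$(date +%F).tar.gz /var/log/datasync   (archive the old logs first, in case Support wants them)
# find /var/log/datasync -type f -name "*.log*" -delete         (remove old log files but keep the directory structure and permissions)
# gms start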

Note About Changing LOG Settings

When changing the log level on GMS, you must restart GMS for the setting to take effect. This is a troublesome issue because the restart will likely clear up the problem, especially if it is a memory or thread issue. So once you set Diagnostic mode, you'll probably have to wait a while for the problem to come back.