OneFS S3 Conditional Writes and Cluster Status Reporting API

In addition to the core file protocols, a PowerScale cluster also supports the ubiquitous AWS S3 protocol. As such, applications have multiple access options to the same underlying dataset, with consistent semantics across both file and object.

Also, since OneFS objects and buckets are essentially files and directories within the /ifs filesystem, the same OneFS data services, such as snapshots, SyncIQ, WORM, and so on, are all seamlessly integrated. This makes it possible to run hybrid and cloud-native workloads that use S3-compatible backend storage – for example, cloud backup and archive software, modern apps, analytics flows, and IoT workloads – on-prem, alongside and coexisting with traditional file-based workflows.

The recent OneFS 9.11 release further enhances the PowerScale S3 protocol implementation with two new features: the addition of conditional write support, and API-based cluster status reporting.

First, the new S3 conditional write support prevents the overwriting of existing S3 objects with identical key names. It does this via precondition arguments to the S3 ‘PutObject’ and ‘CompleteMultipartUpload’ requests, with the addition of an ‘If-None-Match’ HTTP header. If the condition is not met, the S3 operation fails with an HTTP 412 (Precondition Failed) error. Note, however, that OneFS does not currently support the ‘If-Match’ HTTP header, which checks the ETag value. More information about S3 conditional writes is provided in the following AWS documentation: https://docs.aws.amazon.com/AmazonS3/latest/userguide/conditional-requests.html
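The precondition semantics can be sketched with a minimal in-memory stand-in for the object store. This is illustrative Python, not the OneFS implementation; a real client such as boto3 or s5cmd would send the ‘If-None-Match’ header over HTTP:

```python
# Illustrative sketch of S3 conditional write ("If-None-Match: *") semantics.
# An in-memory stand-in that mimics the status codes an S3 endpoint returns.

class ConditionalStore:
    """Minimal object store honoring the If-None-Match precondition."""

    def __init__(self):
        self._objects = {}

    def put_object(self, key, body, if_none_match=None):
        # "If-None-Match: *" means: only create the object if the key does
        # not already exist; otherwise fail with 412 Precondition Failed.
        if if_none_match == "*" and key in self._objects:
            return 412  # object already exists -- conditional write refused
        self._objects[key] = body
        return 200  # object written

store = ConditionalStore()
first = store.put_object("backups/db.bak", b"v1", if_none_match="*")
second = store.put_object("backups/db.bak", b"v2", if_none_match="*")
print(first, second)  # 200 412 -- the second writer loses the race
```

This is exactly the behavior that protects, say, two backup applications from silently clobbering each other's objects under the same key.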

The second new piece of S3 functionality in OneFS 9.11 is API-based cluster status reporting. Increasingly, next-gen applications need a reliable method to decide where to store their backups and large data blobs across a variety of storage technologies. As such, a consistent API format, including cluster health status reporting, is needed to answer general questions about any S3 endpoint that may be under consideration as a potential target – particularly for applications without access to the management network. Providing the cluster status API facilitates intelligent decision making, such as how best to balance load and capacity across multiple PowerScale clusters. Additionally, the cluster status data can also help with performance analysis, as well as diagnosing hardware issues. For example, if an endpoint has had zero successful objects delivered to it in the last hour, this status object will be the first thing queried to see if there is a visible issue, or if applications are ‘routing around’ it by intentionally using other resources.

The API uses an S3 endpoint with the following URL format:

s3://cluster-status/s3_cluster_status_v1

This mimics the GET object operation in the S3 service and is predicated on a virtual bucket and object. As such, HEAD requests on this virtual bucket and object are valid, as is a GET request on the virtual object to read the cluster status data. All other S3 calls to this virtual bucket and object are prohibited and return the 405 HTTP error code.

Applications and users can use the S3 SDK, or another S3-conversant utility such as ‘s5cmd’, to retrieve the cluster status object, which involves the three valid S3 requests mentioned above:

  • HEAD bucket
  • HEAD object
  • GET object

Where the ‘GET object’ request returns the cluster status details. For example, using the ‘s5cmd’ utility from a Windows client:

C:\s5cmd_2.3.0> .\s5cmd.exe --endpoint-url=http://10.10.20.30:9020 cat s3://cluster-status/s3_cluster_status_v1

{
   "15min_avg_read_bw_mbs" : "0.12",
   "15min_avg_write_bw_mbs" : "0.04",
   "capacity_status_age_date" : "2025/06/04T07:43:02",
   "health" : "all_nodes_operational",
   "health_percentage" : "100",
   "health_status_age_date" : "2025/06/04T07:43:02",
   "mgmt_name" : "10.10.20.30:8080",
   "net_state" : "full",
   "net_state_age_date" : "2025/06/04T07:43:02",
   "net_state_calculation" : {
      "available_percentage" : "99",
      "down_bw_mbs" : "0",
      "total_bw_mbs" : "3576",
      "used_bw_mbs" : "0.01"
   },
   "total_capacity_tb" : "0.06",
   "total_capacity_tib" : "0.05",
   "total_free_space_tb" : "0.06",
   "total_free_space_tib" : "0.05"
}

The response format is JSON, and authenticated S3 users can access these APIs and download the cluster status object. The table below includes details of each response field:

Field Description
mgmt_name Management interface name of this cluster.
total_capacity_tb Cluster’s total “current” capacity in base-10 terabytes.
total_capacity_tib Cluster’s total “current” capacity in base-2 terabytes (tebibytes).
total_free_space_tb Cluster’s total “current” free space in base-10 terabytes.
total_free_space_tib Cluster’s total “current” free space in base-2 terabytes (tebibytes).
capacity_status_age_date Number of seconds between the time of issuance and the calculation of capacity status.
health Calculated status based on per-node health status: either all_nodes_operational, some_nodes_nonoperational, or non_operational.
health_percentage Vendor-specific number from 0-100%, where the vendor’s judgement should be used as to what level of the system’s normal load it can take.
health_status_age_date Number of seconds between the time of issuance and the calculation of health status.
15min_avg_read_bw_mbs Read bandwidth in use, measured in megabytes per second, averaged over a 15-minute period.
15min_avg_write_bw_mbs Write bandwidth in use, measured in megabytes per second, averaged over a 15-minute period.
net_state Networking status for S3 clients. Divided into “full”, “half”, “critical”, and “unknown”.
net_state_age_date Number of seconds between the time of issuance and the calculation of network status.

These fields can be grouped into the following core categories:

Category Description
Capacity Reports the total capacity and available capacity in both terabytes and tebibytes.
Health Includes the cluster health, node health and network health.
Management ‘Management name’ references the out-of-band management interface that admins can use to configure the cluster.
Networking Network status takes both the interfaces up/down status and the read write bandwidth on each interface into consideration.
Performance Includes the read and write bandwidth.
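To illustrate how an application might consume this payload (for example, when deciding whether a given PowerScale endpoint is a viable backup target), the following Python sketch parses a trimmed version of the sample response above and applies a simple, purely hypothetical viability policy:

```python
import json

# Sketch: consuming the cluster status payload to pick a storage target.
# Field names match the OneFS response shown above; the selection policy
# itself (is_viable_target) is a hypothetical example, not part of OneFS.

SAMPLE = '''{
   "health" : "all_nodes_operational",
   "health_percentage" : "100",
   "net_state" : "full",
   "total_capacity_tb" : "0.06",
   "total_free_space_tb" : "0.06",
   "15min_avg_write_bw_mbs" : "0.04"
}'''

def is_viable_target(status, min_free_tb=0.01):
    """Return True if this endpoint looks healthy enough to receive data."""
    healthy = status["health"] == "all_nodes_operational"
    net_ok = status["net_state"] in ("full", "half")
    free_ok = float(status["total_free_space_tb"]) >= min_free_tb
    return healthy and net_ok and free_ok

status = json.loads(SAMPLE)
print(is_viable_target(status))  # True for the sample payload
```

An application balancing load across several clusters could fetch this object from each endpoint and rank the viable candidates by free space or recent bandwidth.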

Under the hood, the high-level cluster reporting API operational workflow is as follows:

When an S3 client sends a get cluster status request, the OneFS S3 service retrieves the data from the isi_status_d and Flexnet services. As part of this transaction, the calculations are performed and the result is returned to the S3 client in JSON format. To speed up the retrieval process, a memory cache retains the data with a configured expiry time.
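The expiry behavior can be pictured with a small TTL cache sketch. This is a generic illustration of the caching pattern described above (analogous to the expiry governed by the S3ClusterStatusCacheExpirationInSec gconfig parameter), not OneFS code; the fetch callable stands in for the isi_status_d/Flexnet queries:

```python
import time

# Generic time-to-live cache sketch: serve a cached value until the TTL
# elapses, then refresh from the (expensive) fetch function.

class TTLCache:
    def __init__(self, fetch, ttl_sec=300, clock=time.monotonic):
        self._fetch, self._ttl, self._clock = fetch, ttl_sec, clock
        self._value, self._stamp = None, None

    def get(self):
        now = self._clock()
        if self._stamp is None or now - self._stamp >= self._ttl:
            self._value, self._stamp = self._fetch(), now  # refresh on expiry
        return self._value

calls = []
cache = TTLCache(lambda: calls.append(1) or len(calls), ttl_sec=300)
cache.get(); cache.get()  # second call is served from cache
print(len(calls))  # 1 -- the backend was only queried once
```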

Configuration-wise, the addition of the cluster status API in OneFS 9.11 introduces the following new gconfig parameters:

Name Default Value Description
S3ClusterStatusBucketName “cluster-status” Name of the bucket used to access cluster status.
S3ClusterStatusCacheExpirationInSec 300 Expiration time in seconds for the in-memory cluster status cache. Once reached, the next request for cluster status will result in a fresh fetch of the data.
S3ClusterStatusEnabled 0 Boolean parameter controlling whether the feature is enabled (0 = disabled; 1 = enabled).
S3ClusterStatusObjectName “s3_cluster_status_v1” Name of the object used to access cluster status.

These parameter values can be viewed or configured using the ‘isi_gconfig’ CLI utility. For example:

# isi_gconfig | grep S3Cluster

registry.Services.lwio.Parameters.Drivers.s3.S3ClusterStatusBucketName (char*) = cluster-status

registry.Services.lwio.Parameters.Drivers.s3.S3ClusterStatusCacheExpirationInSec (uint32) = 300

registry.Services.lwio.Parameters.Drivers.s3.S3ClusterStatusEnabled (uint32) = 0

registry.Services.lwio.Parameters.Drivers.s3.S3ClusterStatusObjectName (char*) = s3_cluster_status_v1

The following gconfig CLI command syntax can be used to activate this feature, which is disabled by default:

# isi_gconfig registry.Services.lwio.Parameters.Drivers.s3.S3ClusterStatusEnabled=1

# isi_gconfig | grep S3Cluster | grep -i enabled

registry.Services.lwio.Parameters.Drivers.s3.S3ClusterStatusEnabled (uint32) = 1

Two new operations are added to the S3 service: ‘head S3 cluster status’ and ‘get S3 cluster status’. A HEAD request on the virtual bucket always returns 200. For a HEAD request on the cluster status object, the following three fields are required:

  • ‘content-length’ – the length of the cluster status object
  • ‘last modified date’ – the date the cluster status object was generated
  • an empty ‘etag’

Note that OneFS uses the MD5 hash of an empty string for the empty ETag value.
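Since the empty ETag is the MD5 hash of an empty string, clients can verify the well-known value with a couple of lines of Python:

```python
import hashlib

# The MD5 digest of the empty string -- the value used for the empty ETag.
empty_etag = hashlib.md5(b"").hexdigest()
print(empty_etag)  # d41d8cd98f00b204e9800998ecf8427e
```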

The S3 cluster status API is available once OneFS 9.11 has been successfully installed and committed, and the S3 service is enabled. During the upgrade to 9.11, a ‘404 Not Found’ error will be returned if the API endpoints are queried.

There are a couple of common cluster status API issues to be aware of. These include:

Issue Troubleshooting step(s)
The get cluster status API fails to get the cluster status and returns 404 Check whether the S3ClusterStatusEnabled parameter has been set to 1, and whether S3ClusterStatusBucketName and S3ClusterStatusObjectName match the bucket name and object name requested in the API.
The get cluster status API fails to get the cluster status and returns 403 Check that the access key is input correctly and that the user is an authenticated user.
The get cluster status API frequently returns “unknown” values Verify that the dependent services (i.e. isi_status_d) are running.

Helpful log files for further investigating API issues such as the above include the S3 protocol log, Stats daemon log, and Flexnet service log. These can be found at the following locations on each node:

Logfile Location
S3 protocol log /var/log/s3.log
Flexnet daemon log /var/log/isi_flexnet_d.log
Stats daemon log /var/log/isi_stats_d.log

Additionally, the following CLI utilities can also be useful troubleshooting tools:

# isi_gconfig

# isi services s3

OneFS and the PowerScale PA110 Performance Accelerator

In addition to a variety of software features, OneFS 9.11 also introduces support for a new PowerScale performance accelerator node, based upon the venerable 1RU Dell PE R660 platform.

The diskless PA110 accelerator can simply, and cost effectively, augment the CPU, RAM, and bandwidth of a network or compute-bound cluster without significantly increasing its capacity or footprint.

Since the accelerator node contains no storage but has a sizable RAM footprint, it provides a substantial L1 cache, as all data is fetched from other storage nodes. Cache aging is based on a least recently used (LRU) eviction policy. The PA110 is available in a single memory configuration, with 512GB of DDR5 DRAM per node, and also supports both inline compression and deduplication.
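The LRU eviction policy mentioned above is a standard caching pattern, which the following generic Python sketch demonstrates (this is not the OneFS L1 cache code):

```python
from collections import OrderedDict

# Generic least-recently-used (LRU) cache: when full, evict the entry
# that has gone longest without being read or written.

class LRUCache:
    def __init__(self, capacity):
        self._cap = capacity
        self._data = OrderedDict()

    def get(self, key):
        if key not in self._data:
            return None
        self._data.move_to_end(key)  # a hit refreshes recency
        return self._data[key]

    def put(self, key, value):
        if key in self._data:
            self._data.move_to_end(key)
        self._data[key] = value
        if len(self._data) > self._cap:
            self._data.popitem(last=False)  # evict least recently used

cache = LRUCache(2)
cache.put("a", 1); cache.put("b", 2)
cache.get("a")        # "a" is now the most recently used entry
cache.put("c", 3)     # evicts "b", the least recently used
print(cache.get("b"))  # None
```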

In particular, the PA110 accelerator can provide significant benefit to serialized, read-heavy, streaming workloads by virtue of its substantial, low-churn L1 cache, helping to increase throughput and reduce latency. For example, a typical scenario for PA110 addition could be a small all-flash cluster supporting a video editing workflow that is looking for a performance and/or front-end connectivity enhancement, but no additional capacity.

Other than a low-capacity M.2 SSD boot card, the PA110 node contains no local storage or journal. This new accelerator is fully compatible with clusters containing the current and previous generation PowerScale nodes. Also, unlike storage nodes, which require the addition of a three- or four-node pool of similar nodes, a single PA110 can be added to a cluster, and it can join clusters containing all-flash, hybrid, and archive nodes.

Under the top cover, the one rack-unit PA110 enclosure contains dual Sapphire Rapids 6442Y CPUs (24 cores/48 threads, 60MB L3) running at 2.6GHz. This is complemented by 512GB of DDR5 memory and dual 960GB M.2 mirrored boot media.

Networking comprises the Mellanox CX6 series NICs, with options including CX6-LX dual port 25G, CX6-DX dual port 100G, or MLX CX6 VPI 200G Ethernet.

The PA110 also includes a LOM (Lan-On-Motherboard) port for management and a RIO/DB9 for the serial port. This is all powered by dual 1100W Titanium hot swappable power supplies.

The PowerScale PA110 also uses a new boot-optimized storage solution (BOSS) for its boot media. This comprises a BOSS module and associated card carrier. The module is housed in the chassis as shown:

The card carrier holds two M.2 NVMe SSD cards, which can be removed from the rear of the node as follows:

Note that, unlike PowerScale storage nodes, since the accelerator does not provide any /ifs filesystem storage capacity, the PowerScale PA110 node does not require OneFS feature licenses for any of the various data services running in a cluster.

The PowerScale PA110 can also be configured to order in ‘backup mode’. In this configuration, the accelerator also includes a pair of fibre channel ports, provided by an Emulex LPE35002 32Gb FC HBA. This enables direct, or two-way, NDMP backup from a cluster to a tape library or VTL, either directly attached or across a fibre channel fabric.

With a fibre channel card installed in slot 2, the PA110 backup accelerator integrates seamlessly with current DR infrastructure, as well as with leading data backup and recovery software technologies to satisfy the availability and recovery SLA requirements of a wide variety of workloads.

As a backup accelerator, the PA110 aids overall cluster performance by offloading NDMP backup traffic directly to the fibre channel ports and reducing CPU and memory consumption on storage nodes – thereby minimizing impact on front end workloads. This can be of particular benefit to clusters that have been using chassis-based nodes populated with fibre channel cards. In these cases, a simple, non-disruptive addition of PA110 backup accelerator node(s) frees up compute resources on the storage nodes, boosting client workload performance and shrinking NDMP backup windows.

The following table compares the hardware specs of the new PowerScale PA110 performance accelerator with its predecessors (P100 and B100):

Component (per node) | PA110 (New) | P100 (Prior gen) | B100 (Prior gen)
OneFS release | OneFS 9.11 or later | OneFS 9.3 or later | OneFS 9.3 or later
Chassis | PowerEdge R660 | PowerEdge R640 | PowerEdge R640
CPU | 24 cores (dual socket Intel 6442Y @ 2.6GHz) | 20 cores (dual socket Intel 4210R @ 2.4GHz) | 20 cores (dual socket Intel 4210R @ 2.4GHz)
Memory | 512GB DDR5 | 384GB or 768GB DDR4 | 384GB DDR4
Front-end I/O | 2 x 10/25Gb Ethernet, or 2 x 40/100Gb Ethernet, or 2 x HDR InfiniBand (200Gb) | 2 x 10/25Gb Ethernet, or 2 x 40/100Gb Ethernet | 2 x 10/25Gb Ethernet, or 2 x 40/100Gb Ethernet
Back-end I/O | 2 x 10/25Gb Ethernet, or 2 x 40/100Gb Ethernet, or 2 x HDR InfiniBand (200Gb); optional 2 x FC for NDMP | 2 x 10/25Gb Ethernet, or 2 x 40/100Gb Ethernet, or 2 x QDR InfiniBand | 2 x 10/25Gb Ethernet, or 2 x 40/100Gb Ethernet, or 2 x QDR InfiniBand
Mgmt Port | LAN on motherboard | 4 x 1GbE (rNDC) | 4 x 1GbE (rNDC)
Journal | N/A | N/A | N/A
Boot media | 960GB BOSS module | 2 x 960GB SAS SSD drives | 2 x 960GB SAS SSD drives
IDSDM | 1 x 32GB microSD (receipt and recovery boot image) | 1 x 32GB microSD (receipt and recovery boot image) | 1 x 32GB microSD (receipt and recovery boot image)
Power Supply | Dual redundant 1100W, 100-240V, 50/60Hz | Dual redundant 750W, 100-240V, 50/60Hz | Dual redundant 750W, 100-240V, 50/60Hz
Rack footprint | 1RU | 1RU | 1RU
Cluster addition | Minimum one node, and single node increments | Minimum one node, and single node increments | Minimum one node, and single node increments

These node hardware attributes can be easily viewed from the OneFS CLI via the ‘isi_hw_status’ command.

OneFS Migration from ESRS to Dell Connectivity Services

Tucked amongst the payload of the recent OneFS 9.11 release is new functionality that enables a seamless migration from EMC Secure Remote Services (ESRS) to Dell Technologies Connectivity Services (DTCS). DTCS, as you may recall from previous blog articles on the topic, is the rebranded SupportAssist solution for cluster phone-home connectivity.

First, why migrate from ESRS to DTCS? Well, two years ago, an end of service life date of January 2024 was announced for the Secure Remote Services version 3 gateway, which is used by the older ESRS, ConnectEMC, and Dial-home connectivity methods. Given this, the solution for clusters still using the SRSv3 gateway is to either:

  1. Upgrade Secure Remote Services v3 to Secure Connect Gateway v5.
  2. Upgrade to OneFS 9.5 or later and use the SupportAssist/DTCS ‘direct connect’ option.

The objective of this new OneFS 9.11 feature is to help customers migrate to DTCS so they can achieve their desired connectivity state with as little disruption as possible.

Scenario | After upgrade to OneFS 9.11
Clusters with ESRS + SCGv5 | Seamless migration capable.
New cluster | DTCS is the only connectivity option. ‘isi esrs’ and ‘Remote Support’ in the WebUI will either be unavailable or hidden.
Clusters without ESRS/SupportAssist/DTCS configured | Same as above.
Clusters with ESRS + SRSv3 | Retain ESRS + SRSv3. A HealthCheck warning is triggered, and a WebUI banner shows that the migration did not happen. The resolution is to upgrade to SCGv5 or use a direct connection. The retry command is ‘isi connectivity provision start --retry-migration’.

So when a cluster that has been provisioned with ESRS using a secure connect gateway is upgraded to OneFS 9.11, this feature automatically attempts to migrate to DTCS. Upon successful completion, any references to ESRS will no longer be visible.

Similarly, on new clusters running OneFS 9.11, the ability to provision with ESRS is removed, and messaging is displayed to encourage DTCS provisioning and enablement.

Under the hood, the automatic migration architecture comprises the following core functional components:

Component | Description
Upgrade commit hook | Starts the migration job.
Healthcheck connectivity_migration checklist | A group of checks used to determine whether automatic migration can proceed.
Provision state machine | Updated to use the ESE API /upgradekey for provisioning in the migration scenario.
Job Engine job | Precheck: runs the connectivity_migration checklist, which must pass. Migrate settings: configures DTCS using ESRS and cluster identity settings. Provision: enables DTCS and starts a provision task using the state machine.

There’s a new healthcheck checklist called ‘connectivity_migration’, which contains a group of checks to determine whether it’s safe for an automatic migration to proceed.

There’s also been an update to the provision state machine: it now uses the upgrade key from the ESE API, so that provisioning can occur in the migration scenario.

And the final piece is the migration job. Executed and managed by the Job Engine, this job has three phases.

The first, or pre-check, phase runs the connectivity migration checklist. All the checklist elements must pass in order for the job to continue.

If the checklist fails, the results of those checks can be used to determine what remedial actions are needed to get the cluster to its desired connectivity state. When it passes, the job progresses to the migrate settings phase. Here, the required configuration data is extracted from ESRS and the cluster settings in order to configure DTCS. This includes items like the gateway host, customer contact info, telemetry settings, etc. Once the DTCS configuration data is in place, the job continues to its final phase, which spawns the actual provision task.
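The three phases can be sketched as a simple pipeline in Python. The phase names follow the description above; the checklist entries, settings extractor, and provision callable are hypothetical stand-ins for the real Job Engine tasks:

```python
# Sketch of the migration job's phases: precheck -> migrate settings ->
# provision, stopping at the first failure so the operator can remediate.

def run_migration(checklist, extract_settings, provision):
    """Run the three phases; return a (state, detail) tuple."""
    failed = [name for name, check in checklist if not check()]
    if failed:
        return ("precheck_failed", failed)   # fix these, then retry the job
    dtcs_config = extract_settings()          # gateway host, contacts, telemetry...
    return ("provisioned", provision(dtcs_config))

result = run_migration(
    checklist=[("gateway_is_scgv5", lambda: True),
               ("esrs_configured", lambda: True)],
    extract_settings=lambda: {"gateway": "scg.example.com"},
    provision=lambda cfg: f"DTCS enabled via {cfg['gateway']}",
)
print(result)  # ('provisioned', 'DTCS enabled via scg.example.com')
```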

After enabling DTCS, the provisioning state machine takes the ESRS API key that was paired with the configured gateway and passes it to the ESE API’s upgrade key endpoint, associating the key with the new ESE back end. Once that’s in place, DTCS provisioning proceeds via the upgrade hook background process.

A new CELOG alert has been added that is triggered if DTCS provisioning fails during a seamless migration. This alert automatically opens a service request with a sev3 priority and recommends contacting Dell Support for assistance.

The connectivity CLI changes are minimal in OneFS 9.11, and essentially comprise messaging based on the state of the underlying system. The following example is from a freshly installed OneFS 9.11 cluster, where any ‘isi esrs’ CLI commands now display the following ‘no longer supported’ message:

# isi esrs view

Secure Remote Services (SRS) is no longer supported. Use Dell Technologies connectivity services instead via 'isi connectivity'.

A cluster that’s been upgraded to OneFS 9.11, but fails to automatically migrate to DTCS will display a message stating that SRS is at the end of its service life.

# isi esrs view

Warning: Secure Remote Services is at end of service life. Upgrade connectivity to Dell Technologies connectivity services now using 'isi connectivity' to prevent disruptions. See https://www.dell.com/support/kbdoc/en-us/0000152189/powerscale-onefs-info-hubs cluster administration guides for more information.

There’s also a new ‘--retry-migration’ option for the ‘isi connectivity provision start’ command:

# isi connectivity provision start --retry-migration

SRS to Dell Technologies connectivity services migration started.

This can be used to rerun the migration process once any issues have been corrected, based on the results of the connectivity migration checklist.

Finally, upon successful migration, a message will inform that ESRS has been migrated to DTCS and that ESRS is no longer supported:

# isi esrs view

Secure Remote Services (SRS) connectivity has migrated to Dell Technologies connectivity services. Use 'isi connectivity' to manage connectivity as SRS is no longer supported.

Similarly, the WebUI updates will reflect the state of the underlying system. For example, on a freshly installed OneFS 9.11 cluster, the WebUI dashboard will remind the administrator that Dell Technologies Connectivity Services needs to be configured:

On the general settings page, the tab for ‘remote support’ has been removed in OneFS 9.11:

On the diagnostics gather settings, the checkbox option for ESRS uploads has been removed and replaced with the DTCS upload:

And on a fresh OneFS 9.11 cluster, the remote support channel is no longer listed as an option for alerts:

If a migration does not complete successfully, a warning is displayed on the remote support tab on the general settings page informing that the migration has failed. This warning also provides information on how to proceed:

The WebUI messaging prompts the cluster admin to resolve the failed migration by examining the results of that checklist, and provides a path forward.

The alert is also displayed on the licensing tab, since connectivity needs to be reestablished after the failed migration:

The WebUI messaging provides steps to help resolve any migration issues. Plus, if a migration has failed, the ESRS upload will still remain present and active until DTCS is successfully provisioned:

Once successfully migrated, the WebUI dashboard will confirm this status:

The dashboard will also confirm that DTCS is now enabled and connected via the SCG:

Additionally, the ‘remote support’ tab and page are no longer visible under general settings, and the former ESRS option is replaced by the DTCS option on the gather menu:

When investigating and troubleshooting connectivity migration issues, if something goes wrong with the migration job, examining the /var/log/isi_job_d.log file and searching for ‘EsrsToDtcsMigration’ can be a useful starting point. For additional detail, increasing the verbosity to ‘debug logging’ for the isi_job_d service and retrying the migration can also be helpful.

Additionally, the ‘isi healthcheck evaluations’ command line options can be used to query the status of the connectivity_migration checklist, to help determine which of the checks has failed and needs attention:

# isi healthcheck evaluations list

# isi healthcheck evaluations view <name of latest>

Similarly, from the WebUI, navigating to Cluster management > Job operations displays the job status and errors. While Cluster Management > Healthcheck > Evaluations tab allows the connectivity_migration checklist details to be examined.

Note that ESRS to DTCS auto migration is only for clusters running ESRS that have been provisioned with, and are using, the Secure Connect Gateway (SCG) option. Post successful migration, the customer can always switch to using a direct connection rather than via SCG, if desired.

OneFS and Software Journal Mirroring – Management and Troubleshooting

Software journal mirroring (SJM) in OneFS 9.11 delivers critical file system support to meet the reliability requirements for PowerScale platforms with high capacity flash drives. By keeping a synchronized and consistent copy of the journal on another node, and automatically recovering the journal from it upon failure, enabling SJM can reduce the node failure rate by around three orders of magnitude – while also boosting storage efficiency by negating the need for a higher level of on-disk FEC protection.

SJM is enabled by default for the applicable platforms on new clusters. So for clusters including F710 or F910 nodes with large QLC drives that ship with 9.11 installed, SJM will be automatically activated.

SJM adds a mirroring scheme, which provides the redundancy for the journal’s contents.

This is where /ifs updates are sent to a node’s local, or primary, journal as usual. But they’re also synchronously replicated, or mirrored, to another node’s journal, too – referred to as the ‘buddy’.

Every node in an SJM-enabled pool is dynamically assigned a buddy node, and if a new SJM-capable node is added to the cluster, it’s automatically paired up with a buddy. These buddies are unique for every node in the cluster.
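The actual OneFS buddy-selection algorithm is internal, but the ‘unique buddy per node’ property it maintains can be illustrated with a simple ring assignment in Python, where each node mirrors to the next node in the sorted pool. This is purely illustrative:

```python
# Illustration of the unique-buddy invariant: every node gets a distinct
# buddy, and no node is its own buddy. A ring assignment is one simple
# scheme with these properties (not the OneFS algorithm itself).

def assign_buddies(node_ids):
    """Map each node to a distinct buddy; requires at least two nodes."""
    if len(node_ids) < 2:
        raise ValueError("journal mirroring needs a buddy node")
    ordered = sorted(node_ids)
    return {n: ordered[(i + 1) % len(ordered)]
            for i, n in enumerate(ordered)}

buddies = assign_buddies([1, 2, 3, 4])
print(buddies)  # {1: 2, 2: 3, 3: 4, 4: 1}
```

Note that adding a node to the pool simply re-derives the mapping, mirroring the behavior described above where a newly added SJM-capable node is automatically paired with a buddy.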

SJM’s automatic recovery scheme can use a buddy journal’s contents to re-form the primary node’s journal. And this recovery mechanism can also be applied manually if a journal device needs to be physically replaced.

The introduction of SJM changes the node recovery options slightly in OneFS 9.11. These options now include an additional method for restoring the journal:

This means that if a node within an SJM-enabled pool ends up at the ‘stop_boot’ prompt, before falling back to SmartFail, the available options in order of desirability are:

Order Option Description
1 Automatic journal recovery OneFS will first try to automatically recover from the local copy.
2 Automatic journal mirror recovery Attempts to SyncBack from the buddy node’s journal.
3 Manual SJM recovery Dell support can attempt a manual SJM recovery, particularly in scenarios where a bug or issue in the software journal mirroring feature itself is inhibiting automatic recovery.
4 SmartFail OneFS quarantines the node, places it into a read-only state, and reprotects by distributing the data to other devices.

While SJM is available upon upgrade commit to OneFS 9.11, it is not automatically activated. So any F710 or F910 SJM-capable node pools that originally shipped with OneFS 9.10 installed will require SJM to be manually enabled after their upgrade to 9.11.

If SJM is not activated on a cluster with capable node pools running OneFS 9.11, a CELOG alert will be raised, encouraging the customer to enable it. This CELOG alert will contain information about the administrative actions required to enable SJM. Additionally, a pre-upgrade check is also included in OneFS 9.11 to prevent any existing cluster with nodes containing 61TB drives that were shipped with OneFS 9.9 or older installed, from upgrading directly to 9.11 until the afflicted nodes have been USB-reimaged and their journals reformatted.

For SJM-capable clusters which do not have journal mirroring enabled, the CLI command (and platform API endpoint) to activate SJM operates at the nodepool level. Each SJM-capable pool will need to be enabled separately via the ‘isi storagepool nodepools modify’ CLI command, plus the pool name and the new ‘--sjm-enabled’ argument:

# isi storagepool nodepools modify <name> --sjm-enabled true

Note that this new syntax is applicable only for nodepools with SJM-capable nodes.

Similarly, to query the SJM status on a cluster’s nodepools:

# isi storagepool nodepools list -v | grep -e 'SJM' -e 'Name:'

And to check a cluster’s nodes for SJM capabilities:

# isi storagepool nodetypes list -v | grep -e 'Product' -e 'Capable'

There are a couple of considerations with SJM that should be borne in mind. As mentioned previously, any SJM-capable nodes that are upgraded from OneFS 9.10 will not have SJM enabled by default. So if, after upgrade to 9.11, a capable pool remains in an SJM-disabled state, a CELOG warning will be raised informing that the data may be under-protected, and hence its reliability lessened. The CELOG event will include recommended corrective and remedial action. Administrative intervention will then be required to enable SJM on this particular node pool, ideally, or alternatively to increase the protection level to meet the same reliability goal.

So how impactful is SJM to protection overhead on an SJM-capable node pool/cluster? The following table shows the protection layout, both with and without SJM, for the F710 and F910 nodes containing 61TB drives:

Node type | Drive size | Journal | Mirroring | +d2:1n | +3d:1n1d | +2n | +3n
F710 | 61TB | SDPM | – | 3 | 4-6 | 7-34 | 35-252
F710 | 61TB | SDPM | SJM | 4-16 | 17-252 | – | –
F910 | 61TB | SDPM | – | 3 | 5-19 | 20-252 | –
F910 | 61TB | SDPM | SJM | 3-16 | 17-252 | – | –

Taking the F710 with 61TB drives example above: without SJM, +3n protection is required at 35 nodes and above. In contrast, with SJM enabled, the +3d:1n1d protection level suffices all the way up to the current maximum cluster size of 252 nodes.
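To make the lookup concrete, the following Python sketch encodes the F710 (61TB drive) rows as node-count ranges and returns the required protection level. The range-to-level mapping reads the table's F710 rows as: without SJM, +d2:1n at 3 nodes, +3d:1n1d at 4-6, +2n at 7-34, and +3n at 35-252; with SJM, +d2:1n at 4-16 and +3d:1n1d at 17-252 (consistent with the statement above). Treat this as an illustration rather than a sizing tool:

```python
# Map F710 (61TB drive) pool size to required protection level,
# keyed on whether SJM is enabled. Ranges are (low, high) inclusive.

F710_61TB = {
    False: [((3, 3), "+d2:1n"), ((4, 6), "+3d:1n1d"),
            ((7, 34), "+2n"), ((35, 252), "+3n")],
    True:  [((4, 16), "+d2:1n"), ((17, 252), "+3d:1n1d")],
}

def required_protection(nodes, sjm_enabled):
    """Return the protection level for an F710/61TB pool of this size."""
    for (lo, hi), level in F710_61TB[sjm_enabled]:
        if lo <= nodes <= hi:
            return level
    raise ValueError("node count outside supported range")

print(required_protection(40, sjm_enabled=False))  # +3n
print(required_protection(40, sjm_enabled=True))   # +3d:1n1d
```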

Generally, beyond enabling it on any capable pools after upgrading to 9.11, SJM does not require active administration or management. However, with a corresponding buddy journal for every primary node, there may be times when a primary and its buddy become unsynchronized. Clearly, this would mean that mirroring is not functioning correctly and a SyncBack recovery attempt would be unsuccessful. OneFS closely monitors for this scenario, and will fire one of two CELOG event types to alert the cluster admin in the event that journal syncing and/or mirroring are not working properly:

Possible causes for this include the buddy remaining disconnected, or in a read-only state, for a protracted period of time, or a software bug or issue that is preventing successful mirroring. This results in a CELOG warning being raised for the buddy of the specific node, with the suggested administrative action included in the event contents.

Also, be aware that SJM-capable and non-SJM-capable nodes can be placed in the same nodepool if needed, but only if SJM is disabled on that pool – and the protection increased correspondingly.

The following chart illustrates the overall operational flow of SJM:

SJM is a core file system feature, so the bulk of its errors and status changes are written to the ubiquitous /var/log/messages file. However, since the buddy assignment mechanism is a separate component with its own user-space daemon, its notifications and errors are sent to a dedicated ‘isi_sjm_budassign_d’ log. This logfile is located at:

/var/log/isi_sjm_budassign_d.log

OneFS and Software Journal Mirroring – Architecture and Operation

In this next article in the OneFS software journal mirroring series, we will dig into SJM’s underpinnings and operation in a bit more depth.

With its debut in OneFS 9.11, the current focus of SJM is the all-flash F-series nodes containing either 61TB or 122TB QLC SSDs. In these cases, SJM dramatically improves the reliability of these dense drive platforms with journal fault tolerance. Specifically, it maintains a consistent copy of the primary node’s journal on a separate node. By automatically recovering the journal from this mirror, SJM is able to substantially reduce the node failure rate without the need for increased FEC protection overhead.

SJM is enabled by default for the applicable platforms on new clusters. So for clusters including F710 or F910 nodes with large QLC drives that ship with 9.11 installed, SJM will be automatically activated.

SJM adds a mirroring scheme, which provides the redundancy for the journal’s contents. This is where /ifs updates are sent to a node’s local, or primary, journal as usual. But they’re also synchronously replicated, or mirrored, to another node’s journal, too – referred to as the ‘buddy’.

Architecturally, SJM’s main components and associated lexicon are as follows:

Item Description
Primary Node with a journal that is co-located with the data drives that the journal will flush to.
Buddy Node with a journal that stores sufficient information about transactions on the primary to restore the contents of a primary node’s journal in the event of its failure.
Caller Calling function that executes a transaction. Analogous to the initiator in the 2PC protocol.
Userspace journal library Saves the backup, restores the backup, and dumps journal (primary and buddy).
Buddy reconfiguration system Enables buddy reconfiguration and stores the mapping in buddy map via buddy updater.
Buddy mapping updater Provides interfaces and protocol for updating buddy map.
Buddy map Stores buddy map (primary <-> buddy).
Journal recovery subsystem Facilitates journal recovery from buddy on primary journal loss.
Buddy map interface Kernel interface for buddy map.
Mirroring subsystem Mirrors global and local transactions.
JGN Journal Generation Number, to identify versions and verify if two copies of a primary journal are consistent.
JGN interface Journal Generation Number interface to update/read JGN.
NSB Node state block, which stores JGN.
SB Journal Superblock.
SyncForward Mechanism to sync an out-of-date buddy journal with missed primary journal content additions & deletions.
SyncBack Mechanism to reconstitute a blown primary journal from the mirrored information stored in the buddy journal.

These components are organized into the following hierarchy and flow, split across kernel and user space:

A node’s primary journal is co-located with the data drives that it will flush to. In contrast, the buddy journal lives on a remote node and stores sufficient information about transactions on the primary, to allow it to restore the contents of a primary node’s journal in the event of its failure.

SyncForward is the mechanism by which an out of date Buddy journal is caught up with any Primary journal transactions that it might have missed. While SyncBack, or restore, allows a blown Primary journal to be reconstituted from the mirroring information stored in its Buddy journal.

SJM needs to be able to rapidly detect a number of failure scenarios and decide which is the appropriate recovery workflow to initiate. For example, a blown primary journal, where SJM must quickly determine whether the Buddy’s contents are complete, to allow a SyncBack to fully reconstruct a valid Primary journal. Versus whether to resort to a more costly node rebuild instead. Or, if the Buddy node disconnects briefly, which of a Primary journal’s changes should be replicated during a SyncForward, in order to bring the Buddy efficiently back into alignment.

SJM tags the transactions logged into the Primary journal, and their corresponding mirrors in the Buddy, with a monotonically increasing Journal Generation Number, or JGN.

The JGN represents the most recent & consistent copy of a primary node’s journal, and it’s incremented whenever the write status of the Buddy journal changes, which is tracked by the Primary via OneFS GMP group change updates.

In order to determine whether the Buddy journal’s contents are complete, the JGN needs to be available to the primary node when its primary journal is blown. So the JGN is stored in a Node State Block, or NSB, and saved on a quorum of the node’s data-drives. Therefore, upon loss of a Primary journal, the JGN in the node state block can be compared against the JGN in the Buddy to confirm its transaction mirroring is complete, before the SyncBack workflow is initiated.
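As a loose illustration of this comparison, the sketch below models the SyncBack-versus-rebuild decision. The function name and structure are purely hypothetical, not OneFS internals:

```python
# Hypothetical sketch of SJM's recovery decision on primary journal loss.
# Names and structure are illustrative only; OneFS internals differ.

def choose_recovery(nsb_jgn: int, buddy_jgn: int) -> str:
    """Compare the JGN saved in the Node State Block (on a quorum of the
    node's data drives) with the JGN recorded in the buddy journal.

    If they match, the buddy's mirror is complete and a SyncBack can
    reconstruct the primary journal; otherwise the node falls back to
    a full rebuild (SmartFail/restripe).
    """
    if nsb_jgn == buddy_jgn:
        return "syncback"   # buddy mirror is consistent; restore from it
    return "rebuild"        # mirror incomplete; restripe data off the node
```

The key point the sketch captures is that the decision requires no journal contents at all, only the two generation numbers.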

A primary transaction exists on the node where data storage is being modified, and the corresponding buddy transaction is a hot, redundant duplicate of the primary’s information on a separate node. The SDPM journal storage on the F-series platforms is fast, and the pipe between nodes across the backend network is optimized for low-latency bulk data flow. This allows the standard POSIX file model to operate transparently over the front-end protocols, which remain blissfully unaware of any journal jockeying occurring behind the scenes.

The journal mirroring activity is continuous, and if the Primary loses contact with its Buddy, it will urgently seek out another Buddy and repeat the mirroring for each active transaction, to regain a fully mirrored journal config. If the reverse happens, and the Primary vanishes due to an adverse event like a local power loss or an unexpected reboot, the primary can reattach to its designated buddy and ensure that its own journal is consistent with the transactions that the Buddy has kept safely mirrored. This means that the buddy must reside on a different node than the primary. As such, it’s normal and expected for each primary node to also be operating as the buddy for a different node.

The prerequisite platform requirements for SJM support in 9.11, which are referred to as ‘SJM-capable’ nodes, are as follows:

Essentially, any F710 or F910 with 61TB or 122TB SSDs that shipped with OneFS 9.10 or later is considered SJM-capable.

Note that there are a small number of F710 and F910 nodes with 61TB drives in the field which shipped with OneFS 9.9 or earlier installed. These nodes must be re-imaged before they can use SJM: they first need to be SmartFailed out, then USB-reimaged to OneFS 9.10 or later. This allows the node’s SDPM journal device to be reformatted to include a second partition for the 16 GiB buddy journal allocation. However, this 16 GiB of space reserved for the buddy journal will not be used while SJM is disabled. The following table shows the maximum SDPM usage per journal type based on SJM enablement:

Journal State Primary journal Buddy journal
SJM enabled 16 GiB 16 GiB
SJM disabled 16 GiB 0 GiB

But to reiterate, the SJM-capable platforms which will ship with OneFS 9.11 installed, or those that shipped with OneFS 9.10, are ready to run SJM, and will form node pools of equivalent type.

While SJM is available upon upgrade commit to OneFS 9.11, it is not automatically activated. So for any F710 or F910 nodes with large QLC drives that were originally shipped with OneFS 9.10 installed, the cluster admin will need to manually enable SJM on any capable pools after their upgrade to 9.11.

Plus, if SJM is not activated, a CELOG alert will be raised, encouraging the customer to enable it, in order for the cluster to meet the reliability requirements. This CELOG alert will contain information about the administrative actions required to enable SJM.

Additionally, a pre-upgrade check is also included in OneFS 9.11 to prevent any existing cluster with nodes containing 61TB drives that were shipped with OneFS 9.9 or older installed, from upgrading directly to 9.11 – until these nodes have been USB-reimaged and their journals reformatted.

OneFS and Software Journal Mirroring

OneFS 9.11 sees the addition of a Software journal mirroring capability, which adds critical file system support to meet the reliability requirements for platforms with high capacity drives.

But first, a quick journal refresher… OneFS uses journaling to ensure consistency across both disks locally within a node and disks across nodes. As such, the journal is among the most critical components of a PowerScale node. When OneFS writes to a drive, the data goes straight to the journal, allowing for a fast reply.

Block writes go to the journal first, and a transaction must be marked as ‘committed’ in the journal before a ‘success’ status is returned to the file system operation.

Once a transaction is committed the change is guaranteed to be stable. If the node crashes or loses power, changes can still be applied from the journal at mount time via a ‘replay’ process. The journal uses a battery-backed persistent storage medium in order to be available after a catastrophic node event, and must also be:

Journal Performance Characteristic Description
High throughput All blocks (and therefore all data) pass through the journal, so it must never become a bottleneck.
Low latency Since transaction state changes are often in the latency path multiple times for a single operation, particularly for distributed transactions.

The OneFS journal mostly operates at the physical level, storing changes to physical blocks on the local node. This is necessary because all initiators in OneFS have a physical view of the file system – and therefore issue physical read and write requests to remote nodes. The OneFS journal supports two block sizes: 512 bytes for storing written inodes, and 8 KiB for blocks.

By design, the contents of a node’s journal are only needed in a catastrophe, such as when memory state is lost. For fast access during normal operation, the journal is mirrored in RAM. Thus, any reads come from RAM and the physical journal itself is write-only in normal operation. The journal contents are read at mount time for replay. In addition to providing fast stable writes, the journal also improves performance by serving as a write-back cache for disks. When a transaction is committed, the blocks are not immediately written to disk. Instead, it is delayed until the space is needed. This allows the I/O scheduler to perform write optimizations such as reordering and clustering blocks. This also allows some writes to be elided when another write to the same block occurs quickly, or the write is otherwise unnecessary, such as when the block is freed.

So the OneFS journal provides the initial stable storage for all writes and does not release a block until it is guaranteed to be stable on a drive. This process involves multiple steps and spans both the file system and operating system. The high-level flow is as follows:

Step Operation Description
1 Transaction prep A block is written on a transaction, for example a write_block message is received by a node. An asynchronous write is started to the journal. The transaction preparation step will wait until all writes on the transaction complete.
2 Journal delayed write The transaction is committed. Now the journal issues a delayed write. This simply marks the buffer as dirty.
3 Buffer monitoring A daemon monitors the number of dirty buffers and issues the write to the drive upon reaching its threshold.
4 Write completion notification The journal receives an upcall indicating that the write is complete.
5 Threshold reached Once journal space runs low or an idle timeout expires, the journal issues a cache flush to the drive to ensure the write is stable.
6 Flush to disk When cache flush completes, all writes completed before the cache flush are known stable. The journal frees the space.
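As a rough illustration of steps 2 through 6 above, the following sketch models delayed writes accumulating as dirty buffers until a threshold triggers a flush to stable storage. All names are hypothetical; the real OneFS journal internals differ:

```python
# Minimal sketch of the delayed-write flow described above (steps 2-6).
# Purely illustrative, not OneFS code.

class JournalSketch:
    def __init__(self, dirty_threshold: int):
        self.dirty = []     # buffers marked dirty but not yet on disk (step 2)
        self.stable = []    # blocks known stable after a cache flush (step 6)
        self.dirty_threshold = dirty_threshold

    def delayed_write(self, block: str):
        """Step 2: commit marks the buffer dirty; nothing hits the drive yet."""
        self.dirty.append(block)
        if len(self.dirty) >= self.dirty_threshold:
            self.flush()    # steps 3-5: threshold reached, issue the writes

    def flush(self):
        """Steps 5-6: write out and cache-flush, then free the journal space."""
        self.stable.extend(self.dirty)
        self.dirty.clear()
```

Batching dirty buffers this way is what lets the I/O scheduler reorder and cluster blocks, as the text notes.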

The PowerScale F-series platforms use Dell’s VOSS M.2 SSD drive as the non-volatile device for their software-defined persistent memory (SDPM) journal vault. The SDPM itself comprises two main elements:

Component Description
BBU The BBU pack (battery backup unit) supplies temporary power to the CPUs and memory allowing them to perform a backup in the event of a power loss.
Vault A 32GB M.2 NVMe to which the system memory is vaulted.

While the BBU is self-contained, the M.2 NVMe vault is housed within a VOSS module, and both components are easily replaced if necessary.

The current focus of software journal mirroring (SJM) is the all-flash F710 and F910 nodes that contain either the 61TB QLC SSDs or the soon-to-be-available 122TB drives. In these cases, SJM dramatically improves the reliability of these dense drive platforms. But first, some context regarding journal failure and its relation to node rebuild times, durability, and protection overhead.

Typically, a node needs to be rebuilt when its journal fails, for example if it loses its data, or if the journal device develops a fault and needs to be replaced. To accomplish this, the OneFS SmartFail operation has historically been the tool of choice to restripe the data away from the node. But the time to completion for this operation depends on the restripe rate and the amount of storage. And the gist is that the denser the drives, the more storage is on the node, and the more work SmartFail has to perform.

And if restriping takes longer, the window during which the data is under-protected also increases. This directly affects reliability, by reducing the mean time to data loss, or MTTDL. PowerScale has an MTTDL target of 5,000 years for any given size of a cluster. The 61TB QLC SSDs represent an inflection point for OneFS restriping, where, due to their lengthy rebuild times, reliability, and specifically MTTDL, become significantly impacted.

So the options in a nutshell for these dense drive nodes, are either to:

  1. Increase the protection overhead, or:
  2. Improve a node’s resilience and, by virtue, reduce its failure rate.

Increasing the protection level is clearly undesirable, because the additional overhead reduces usable capacity and hence the storage efficiency – thereby increasing the per-terabyte cost, as well as reducing rack density and energy efficiency.

Which leaves option 2: Reducing the node failure rate itself, which the new SJM functionality in 9.11 achieves by adding journal redundancy.

So, by keeping a synchronized and consistent copy of the journal on another node, and automatically recovering the journal from it upon failure, enabling SJM can reduce the node failure rate by around three orders of magnitude – while removing the need for a punitively high protection level on platforms with large-capacity drives.

SJM is enabled by default for the applicable platforms on new clusters. So for clusters including F710 or F910 nodes with large QLC drives that ship with 9.11 installed, SJM will be automatically activated.

SJM adds a mirroring scheme, which provides the redundancy for the journal’s contents. This is where /ifs updates are sent to a node’s local, or primary, journal as usual. But they’re also synchronously replicated, or mirrored, to another node’s journal, too – referred to as the ‘buddy’.

This is somewhat analogous to how the PowerScale H and A-series chassis-based node pairing operates, albeit implemented in software over the backend network and with no fixed buddy assignment, rather than over a dedicated PCIe non-transparent bridge link to a dedicated partner node, as in the case of the chassis-based platforms.

Every node in an SJM-enabled pool is dynamically assigned a buddy node. And similarly, if a new SJM-capable node is added to the cluster, it’s automatically paired up with a buddy. These buddies are unique for every node in the cluster.
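As a loose illustration of this pairing constraint (every node gets a buddy, no node is its own buddy, and assignments are unique), a simple rotation satisfies it. The sketch below is purely hypothetical and is not OneFS's actual assignment algorithm:

```python
# Illustrative buddy assignment sketch: a rotation guarantees each node a
# unique buddy on a *different* node. Not the real OneFS assignment logic.

def assign_buddies(nodes: list[int]) -> dict[int, int]:
    if len(nodes) < 2:
        raise ValueError("SJM mirroring needs at least two nodes")
    # Each node's buddy is the next node in the list, wrapping around.
    return {n: nodes[(i + 1) % len(nodes)] for i, n in enumerate(nodes)}
```

Any derangement of the node list would do; the wrap-around rotation is simply the smallest scheme that keeps every node acting as both a primary and a buddy for someone else, as the text describes.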

SJM’s automatic recovery scheme can use a buddy journal’s contents to re-form the primary node’s journal. And this recovery mechanism can also be applied manually if a journal device needs to be physically replaced.

A node’s primary journal lives within that node, next to its storage drives. In contrast, the buddy journal lives on a remote node and stores sufficient information about transactions on the primary, to allow it to restore the contents of a primary node’s journal in the event of its failure.

SyncForward is the process that enables a stale Buddy journal to reconcile with the Primary and any transactions that it might have missed. Whereas SyncBack, or restore, allows a blown Primary journal to be reconstructed from the mirroring information stored in its Buddy journal.

The next blog article in this series will dig into SJM’s architecture and management in a bit more depth.

PowerScale InsightIQ 6.0

It’s been an active April for PowerScale already. Close on the tail of the OneFS 9.11 launch comes the unveiling of the new, innovative PowerScale InsightIQ 6.0 release.

InsightIQ provides powerful performance monitoring and reporting functionality, helping to maximize PowerScale cluster performance and efficiency. This includes advanced analytics to optimize applications, correlate cluster events, and the ability to accurately forecast future storage needs.

So what new treats does this InsightIQ 6.0 release bring to the table?

Added functionality includes:

  • Greater scale
  • Expanded ecosystem support
  • Enhanced reporting efficiency
  • Streamlined upgrade and migration

InsightIQ 6.0 continues to offer the same two deployment models as its 5.x predecessors:

Deployment Model Description
InsightIQ Scale Resides on bare-metal Linux hardware or virtual machine.
InsightIQ Simple Deploys on a VMware hypervisor.

The InsightIQ Scale version resides on bare-metal Linux hardware or virtual machine, whereas InsightIQ Simple deploys via OVA on a VMware hypervisor.

In v6.0, InsightIQ Scale enjoys a substantial boost in its breadth-of-monitoring scope and can now encompass up to 20 clusters or 504 nodes.

Additionally, with this new 6.0 version, InsightIQ Scale can now be deployed on a single Linux host. This is in stark contrast to InsightIQ 5’s requirements for a three Linux node minimum installation platform.

Deployment:

The deployment options and hardware requirements for installing and running InsightIQ 6.0 are as follows:

Attribute InsightIQ 6.0 Simple InsightIQ 6.0 Scale
Scalability Up to 10 clusters or 252 nodes Up to 20 clusters or 504 nodes
Deployment On VMware, using OVA template On RHEL or SLES, with deployment script
Hardware requirements VMware v15 or higher: 8 vCPU; 16GB memory; 1.5TB storage (thin provisioned), or 500GB on an NFS server datastore Up to 10 clusters and 252 nodes: 8 vCPU or cores; 16GB memory; 500GB storage. Up to 20 clusters and 504 nodes: 12 vCPU or cores; 32GB memory; 1TB storage
Networking requirements 1 static IP on the PowerScale cluster’s subnet 1 static IP on the PowerScale cluster’s subnet

Ecosystem support:

The InsightIQ ecosystem itself is also expanded in version 6.0 to include SUSE Linux Enterprise Server (SLES) 15 SP4, in addition to Red Hat Enterprise Linux (RHEL) versions 8.10 and 9.4, and RHOSP 17. This allows customers who have standardized on SUSE to now run an InsightIQ 6.0 Scale deployment on an SLES 15 host to monitor the latest OneFS versions.

Qualified on InsightIQ 5.2 InsightIQ 6.0
OS (IIQ Scale Deployment) RHEL 8.10 and RHEL 9.4 RHEL 8.10, RHEL 9.4, RHOSP 17, and SLES 15 SP4
PowerScale OneFS 9.3 to 9.10 OneFS 9.4 to 9.11
VMware ESXi ESXi v7.0U3 and ESXi v8.0U3 ESXi v7.0U3 and ESXi v8.0U3
VMware Workstation Workstation 17 Free Workstation 17 Free Version

Similarly, in addition to deployment on VMware ESXi 7 & 8, the InsightIQ Simple version can also be installed for free on VMware Workstation 17, providing the ability to stand up InsightIQ in a non-production or lab environment for trial or demo purposes, without incurring a VMware charge. Plus, the InsightIQ 6.0 OVA template has been reduced in size to under 5GB, with an installation time of less than 12 minutes.

Online Upgrade

The prerequisites for upgrading to InsightIQ 6.0 are either a Simple or Scale deployment with InsightIQ v5.1.x or v5.2.x installed and running. Additionally, the free disk space must exceed 50% of the allocated capacity.
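The documented disk-space precondition can be expressed as a simple check. This helper is purely illustrative and is not part of the InsightIQ installer:

```python
# Hypothetical pre-upgrade check mirroring the documented requirement:
# free disk space must exceed 50% of the allocated capacity.

def upgrade_space_ok(free_gb: float, allocated_gb: float) -> bool:
    """Return True if the host meets the IIQ 6.0 upgrade space rule."""
    return free_gb > 0.5 * allocated_gb
```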

The upgrade in 6.0 is a five-step process:

First, the installer checks the current InsightIQ version, verifies there’s sufficient free disk space, and confirms that setup is ready. Next, IIQ is halted and dependencies are installed, followed by the installation of the new 6.0 infrastructure and a migration of the legacy InsightIQ 5.x configuration and historical report data to the new platform. Finally, the cleanup phase removes the old configuration files, etc., and InsightIQ 6.0 is ready to go.

Phase Description
Pre-check Check IIQ version; verify free disk space; confirm setup is ready
Pre-upgrade Stop IIQ and install dependencies
Install and Migrate Install IIQ 6.0 infrastructure and migrate IIQ Data
Post-upgrade Migrate historical report data
Cleanup Remove old configuration files

During the upgrade of an InsightIQ Scale deployment to v6.0, the 3-node setup will be converted to a 1-node configuration. After a successful upgrade, InsightIQ will be accessible via the primary node’s IP address.

Offline Migration

The offline migration functionality in this new release facilitates the transfer of data and configuration context from InsightIQ version 4.4.1 to version 6.0. This includes support for both InsightIQ Simple and InsightIQ Scale deployments.

Additionally, the process has been streamlined in InsightIQ 6.0 so that only a single ‘iiq_data_migrations.sh’ script needs to be run to complete the migration. For example:
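The text names only the script itself, so the invocation below is a hypothetical sketch; the script’s location, required privileges, and any interactive prompts may differ per deployment:

```shell
# Hypothetical invocation on the InsightIQ 6.0 host; path may vary.
./iiq_data_migrations.sh

# Progress can be followed in the offline migration log:
tail -f /usr/share/storagemonitoring/logs/offline_migration/insightiq_offline_migration.log
```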

This is in contrast to prior IIQ releases, where separate import and export utilities were required for the migration process. Detailed migration logs are also provided in InsightIQ 6.0, located at /usr/share/storagemonitoring/logs/offline_migration/insightiq_offline_migration.log.

Durable Data Collection

Decoupled data collection and processing in IIQ 6.0 delivers gains in both performance and fault tolerance. Under the hood, InsightIQ 6.0 sees an updated architecture with the introduction of the following new components:

Component Role
Data Processor Responsible for processing and storing the data in TimescaleDB for display by Reporting service.
Temporary Datastore Stores historical statistics fetched from PowerScale cluster, in-between collection and processing.
Message Broker Facilitates inter-service communication. With data collection and data processing now separated, it allows each service to signal the other as their respective tasks come up.
Timescale DB New database storage for the time-series data. Designed for optimized handling of historical statistics.

Telemetry Down-sampling

InsightIQ 6.0’s new TimescaleDB database now permits the storage of long-term historical data via an enhanced retention strategy:

Unlike prior InsightIQ releases, which used two data formats, v6.0 telemetry now stores summary data in the following cascading levels, each with a different data retention period:

Level Sample Length Data Retention Period
Raw table Varies by metric type. Raw data sample lengths range from 30s to 5m. 24 hours
5m summary 5 minutes 7 days
15m summary 15 minutes 4 weeks
3h summary 3 hours Infinite

Note that the actual raw sample length may vary by graph/data type – from 30 seconds for CPU % Usage data up to 5 minutes for cluster capacity metrics.
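The cascading levels above amount to repeated down-sampling of the raw series into progressively coarser windows. The sketch below assumes simple averaging into fixed time buckets; InsightIQ's actual aggregation functions are internal and may differ:

```python
# Sketch of cascading down-sampling via windowed averaging.
# The aggregation choice (mean) is an assumption for illustration.

def downsample(samples: list[tuple[int, float]],
               window_s: int) -> list[tuple[int, float]]:
    """Average (timestamp, value) samples into window_s-second buckets,
    keyed by each bucket's start time."""
    buckets: dict[int, list[float]] = {}
    for ts, val in samples:
        buckets.setdefault(ts - ts % window_s, []).append(val)
    return [(start, sum(vals) / len(vals))
            for start, vals in sorted(buckets.items())]

# E.g. raw 30s CPU samples roll up into the 5-minute (300s) summary level;
# the 5m level would in turn roll up into 15m, and 15m into 3h.
raw = [(0, 10.0), (30, 20.0), (300, 40.0)]
```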

Meanwhile, the new InsightIQ v6.0 code is available for download on the Dell Support site, allowing both the installation of and upgrade to this new release.

OneFS and Dell Technologies Connectivity Services Management and Troubleshooting

In this final article in the Dell Technologies Connectivity Services (DTCS) for OneFS series, we turn our attention to management and troubleshooting.

Once the provisioning process is complete, the ‘isi connectivity settings view’ CLI command reports the status and health of DTCS operations on the cluster.

# isi connectivity settings view

        Service enabled: Yes

       Connection State: enabled

      OneFS Software ID: xxxxxxxxxx

          Network Pools: subnet0:pool0

        Connection mode: direct

           Gateway host: -

           Gateway port: -

    Backup Gateway host: -

    Backup Gateway port: -

  Enable Remote Support: Yes

Automatic Case Creation: Yes

       Download enabled: Yes

This can also be obtained from the WebUI by navigating to Cluster management > General settings > Connectivity services:

There are some caveats and considerations to keep in mind when upgrading to OneFS 9.10 or later and enabling DTCS, including:

  • DTCS is disabled when STIG hardening is applied to the cluster
  • Using DTCS on a hardened cluster is not supported
  • Clusters with the OneFS network firewall enabled (‘isi network firewall settings’) may need to allow outbound traffic on port 9443.
  • DTCS is supported on a cluster that’s running in Compliance mode
  • Secure keys are held in Key manager under the RICE domain

Also, note that ESRS can no longer be used after DTCS has been provisioned on a cluster.

DTCS has a variety of components that gather and transmit various pieces of OneFS data and telemetry to Dell Support and backend services through the Embedded Service Enabler (ESE). These workflows include CELOG events; in-product activation (IPA) information; CloudIQ telemetry data; isi_gather_info (IGI) logsets; and provisioning, configuration, and authentication data to ESE and the various backend services.

Activity Information
Events and alerts DTCS can be configured to send CELOG events.
Diagnostics The OneFS ‘isi diagnostics gather’ and ‘isi_gather_info’ logfile collation and transmission commands have a DTCS option.
Healthchecks HealthCheck definitions are updated using DTCS.
License activation The ‘isi license activation start’ command uses DTCS to connect.
Remote Support Remote Support uses DTCS and the Connectivity Hub to assist customers with their clusters.
Telemetry CloudIQ telemetry data is sent using DTCS.

CELOG

Once DTCS is up and running, it can be configured to send CELOG events and attachments via ESE to CLM. This can be managed via the ‘isi event channels’ CLI command. For example:

# isi event channels list

ID   Name                                    Type         Enabled

------------------------------------------------------------------

2    Heartbeat Self-Test                     heartbeat    Yes

3    Dell Technologies connectivity services connectivity No

------------------------------------------------------------------

Total: 2

# isi event channels view "Dell Technologies connectivity services"

     ID: 3

   Name: Dell Technologies connectivity services

   Type: connectivity

Enabled: No

Or from the WebUI:

CloudIQ Telemetry

DTCS provides an option to send telemetry data to CloudIQ. This can be enabled from the CLI as follows:

# isi connectivity telemetry modify --telemetry-enabled 1 --telemetry-persist 0

# isi connectivity telemetry view

        Telemetry Enabled: Yes

        Telemetry Persist: No

        Telemetry Threads: 8

Offline Collection Period: 7200

Or via the DTCS WebUI:

Diagnostics Gather

Also, the ‘isi diagnostics gather’ and ‘isi_gather_info’ CLI commands both now include a ‘--connectivity’ upload option for log gathers, which also allows them to continue to function when the cluster is unhealthy via a new ‘Emergency mode’. For example, to start a gather from the CLI that will be uploaded via DTCS:

# isi diagnostics gather start --connectivity 1

Similarly, for ISI gather info:

# isi_gather_info --connectivity

Or to explicitly avoid using DTCS for ISI gather info log gather upload:

# isi_gather_info --noconnectivity

This can also be configured from the WebUI via Cluster management > General configuration > Diagnostics > Gather:

License Activation through DTCS

PowerScale License Activation (previously known as In-Product Activation) facilitates the management of the cluster’s entitlements and licenses by communicating directly with Software Licensing Central via DTCS. Licenses can either be activated automatically or manually.

The procedure for automatic activation includes:

Step 1: Connect to Dell Technologies Connectivity Services

Step 2: Get a License Activation Code

Step 3: Select modules and activate

Similarly, for manual activation:

Step 1: Download the Activation file

Step 2: Get Signed License from Dell Software Licensing Central

Step 3: Upload Signed License

To activate OneFS product licenses through the DTCS WebUI, navigate to Cluster management > Licensing. For example, on a new cluster without any signed licenses:

Click the ‘Update & Refresh’ button in the License Activation section. In the ‘Activation File Wizard’, select the desired software modules.

Next select ‘Review changes’, review, click ‘Proceed’, and finally ‘Activate’.

Note that it can take up to 24 hours for the activation to occur.

Alternatively, cluster License activation codes (LAC) can also be added manually.

Troubleshooting

When it comes to troubleshooting DTCS, the basic process flow is as follows:

The OneFS components and services above are:

Component Info
ESE Embedded Service Enabler.
isi_rice_d Remote Information Connectivity Engine (RICE).
isi_crispies_d Coordinator for RICE Incidental Service Peripherals including ESE Start.
Gconfig OneFS centralized configuration infrastructure.
MCP Master Control Program – starts, monitors, and restarts OneFS services.
Tardis Configuration service and database.
Transaction journal Task manager for RICE.

Of these, ESE, isi_crispies_d, isi_rice_d, and the Transaction Journal are exclusive to DTCS and its predecessor, SupportAssist. In contrast, Gconfig, MCP, and Tardis are all legacy services that are used by multiple other OneFS components.

For its connectivity, DTCS elects a single leader node within the subnet pool, and NANON nodes are automatically avoided. Ports 443 and 8443 are required to be open for bi-directional communication between the cluster and Connectivity Hub, and port 9443 is used for communicating with a gateway. The DTCS ESE component communicates with a number of Dell backend services:

  • SRS
  • Connectivity Hub
  • CLM
  • ELMS/Licensing
  • SDR
  • Lightning
  • Log Processor
  • CloudIQ
  • ESE

Debugging backend issues may involve one or more services, and Dell Support can assist with this process.

The main log files for investigating and troubleshooting DTCS issues and idiosyncrasies are isi_rice_d.log and isi_crispies_d.log. There is also an ESE log, which can be useful too. These can be found at:

Component Logfile Location Info
Rice /var/log/isi_rice_d.log Per node
Crispies /var/log/isi_crispies_d.log Per node
ESE /ifs/.ifsvar/ese/var/log/ESE.log Cluster-wide for the single-instance ESE

Debug level logging can be configured from the CLI as follows:

# isi_for_array isi_ilog -a isi_crispies_d --level=debug+

# isi_for_array isi_ilog -a isi_rice_d --level=debug+

Note that the OneFS log gathers (such as the output from the isi_gather_info utility) will capture all the above log files, plus the pertinent DTCS Gconfig contexts and Tardis namespaces, for later analysis.

If needed, the Rice and ESE configurations can also be viewed as follows:

# isi_gconfig -t ese

[root] {version:1}
ese.mode (char*) = direct
ese.connection_state (char*) = disabled
ese.enable_remote_support (bool) = true
ese.automatic_case_creation (bool) = true
ese.event_muted (bool) = false
ese.primary_contact.first_name (char*) =
ese.primary_contact.last_name (char*) =
ese.primary_contact.email (char*) =
ese.primary_contact.phone (char*) =
ese.primary_contact.language (char*) =
ese.secondary_contact.first_name (char*) =
ese.secondary_contact.last_name (char*) =
ese.secondary_contact.email (char*) =
ese.secondary_contact.phone (char*) =
ese.secondary_contact.language (char*) =
(empty dir ese.gateway_endpoints)
ese.defaultBackendType (char*) = srs
ese.ipAddress (char*) = 127.0.0.1
ese.useSSL (bool) = true
ese.srsPrefix (char*) = /esrs/{version}/devices
ese.directEndpointsUseProxy (bool) = false
ese.enableDataItemApi (bool) = true
ese.usingBuiltinConfig (bool) = false
ese.productFrontendPrefix (char*) = platform/16/connectivity
ese.productFrontendType (char*) = webrest
ese.contractVersion (char*) = 1.0
ese.systemMode (char*) = normal
ese.srsTransferType (char*) = ISILON-GW
ese.targetEnvironment (char*) = PROD

And for the ‘rice’ context:

# isi_gconfig -t rice

[root] {version:1}
rice.enabled (bool) = false
rice.ese_provisioned (bool) = false
rice.hardware_key_present (bool) = false
rice.connectivity_dismissed (bool) = false
rice.eligible_lnns (char*) = []
rice.instance_swid (char*) =
rice.task_prune_interval (int) = 86400
rice.last_task_prune_time (uint) = 0
rice.event_prune_max_items (int) = 100
rice.event_prune_days_to_keep (int) = 30
rice.jnl_tasks_prune_max_items (int) = 100
rice.jnl_tasks_prune_days_to_keep (int) = 30
rice.config_reserved_workers (int) = 1
rice.event_reserved_workers (int) = 1
rice.telemetry_reserved_workers (int) = 1
rice.license_reserved_workers (int) = 1
rice.log_reserved_workers (int) = 1
rice.download_reserved_workers (int) = 1
rice.misc_task_workers (int) = 3
rice.accepted_terms (bool) = false
(empty dir rice.network_pools)
rice.telemetry_enabled (bool) = true
rice.telemetry_persist (bool) = false
rice.telemetry_threads (uint) = 8
rice.enable_download (bool) = true
rice.init_performed (bool) = false
rice.ese_disconnect_alert_timeout (int) = 14400
rice.offline_collection_period (uint) = 7200

The ‘-q’ flag can also be used in conjunction with the isi_gconfig command to identify any values that are not at their default settings. For example, the stock (default) Rice gconfig context will not report any configuration entries:

# isi_gconfig -q -t rice

[root] {version:1}

OneFS and Provisioning Dell Technologies Connectivity Services – Part 2

In the previous article in this Dell Technologies Connectivity Services (DTCS) for OneFS Support series, we reviewed the off-cluster prerequisites for enabling DTCS on a PowerScale cluster:

  1. Upgrading the cluster to OneFS 9.10 or later.
  2. Obtaining the secure access key and PIN.
  3. Selecting either direct connectivity or gateway connectivity.
  4. If using gateway connectivity, installing Secure Connect Gateway v5.x.

In this article, we turn our attention to step 5 – provisioning Dell Technologies Connectivity Services (DTCS) on the cluster.

Note that, as part of this process, we’ll be using the access key and PIN credentials previously obtained from the Dell Support portal in step 2 above.

Provisioning DTCS on a cluster

DTCS can be configured from the OneFS 9.10 WebUI by navigating to ‘Cluster management > General settings > DTCS’.

When unconfigured, the Connectivity Services WebUI page also displays a recommendation to adopt DTCS:

  1. Accepting the telemetry notice.

Selecting the ‘Connect Now’ button initiates the following setup wizard. The first step requires checking and accepting the Infrastructure Telemetry Notice:

  2. Support contact.

For the next step, enter the details for the primary support contact, as prompted:

Or from the CLI using the ‘isi connectivity contacts’ command set. For example:

# isi connectivity contacts modify --primary-first-name=Nick --primary-last-name=Trimbee --primary-email=trimbn@isilon.com
  3. Establish Connections.

Next, complete the ‘Establish Connections’ page.

This involves the following steps:

  • Selecting the network pool(s).
  • Adding the secure access key and PIN.
  • Configuring either direct or gateway access.
  • Selecting whether to allow remote support, CloudIQ telemetry, and auto case creation.

a. Select network pool(s).

At least one statically-allocated IPv4 or IPv6 network subnet and pool is required for provisioning DTCS.

Select one or more network pools or subnets from the options displayed. For example, in this case ‘subnet0pool0’:

Or from the CLI:

Select one or more static subnet/pools for outbound communication. This can be performed via the following CLI syntax:

# isi connectivity settings modify --network-pools="subnet0.pool0"

Additionally, if the cluster has the OneFS network firewall enabled (‘isi network firewall settings’), ensure that outbound traffic is allowed on port 9443.

b.  Add secure access key and PIN.

In this next step, add the secure access key and PIN. These should have been obtained in an earlier step of the provisioning procedure from the following Dell Support site: https://www.dell.com/support/connectivity/product/isilon-onefs

Alternatively, if configuring DTCS via the OneFS CLI, add the key and PIN via the following syntax:

# isi connectivity provision start --access-key <key> --pin <pin>

c.  Configure access.

i. Direct access.

Or from the CLI. For example, to configure direct access (the default), ensure the following parameter is set:

# isi connectivity settings modify --connection-mode direct

# isi connectivity settings view | grep -i "connection mode"

Connection mode: direct

ii.  Gateway access.

Alternatively, to connect via a gateway, check the ‘Connect via Secure Connect Gateway’ button:

Complete the ‘gateway host’ and ‘gateway port’ fields as appropriate for the environment.

Alternatively, to set up a gateway configuration from the CLI, use the ‘isi connectivity settings modify’ syntax. For example, to configure using the gateway FQDN ‘secure-connect-gateway.yourdomain.com’ and the default port ‘9443’:

# isi connectivity settings modify --connection-mode gateway

# isi connectivity settings view | grep -i "connection mode"

Connection mode: gateway

# isi connectivity settings modify --gateway-host secure-connect-gateway.yourdomain.com --gateway-port 9443

When setting up the gateway connectivity option, Secure Connect Gateway v5.0 or later must be deployed within the data center. Note that DTCS is incompatible with either ESRS gateway v3.52 or SAE gateway v4. However, Secure Connect Gateway v5.x is backwards compatible with PowerScale OneFS ESRS and SupportAssist, which allows the gateway to be provisioned and configured ahead of a cluster upgrade to DTCS/OneFS 9.10.

d.  Configure support options.

Finally, configure the desired support options:

When complete, the WebUI will confirm that DTCS is successfully configured and enabled, as follows:

Or from the CLI:

# isi connectivity settings view

Service enabled: Yes
Connection State: enabled
OneFS Software ID: ELMISL0223BJJC
Network Pools: subnet0.pool0, subnet0.testpool1, subnet0.testpool2, subnet0.testpool3, subnet0.testpool4
Connection mode: gateway
Gateway host: eng-sea-scgv5stg3.west.isilon.com
Gateway port: 9443
Backup Gateway host: eng-sea-scgv5stg.west.isilon.com
Backup Gateway port: 9443
Enable Remote Support: Yes
Automatic Case Creation: Yes
Download enabled: Yes

With DTCS now configured and up and running, in the next article in this series we’ll turn our attention to managing and troubleshooting it.

PowerScale OneFS 9.11

In the runup to next month’s Dell Technologies World 2025, PowerScale is bringing spring with the launch of the innovative OneFS 9.11 release, which shipped today (8th April 2025). This all-encompassing new 9.11 version offers PowerScale innovations in capacity, durability, replication, protocols, serviceability, and ease of use.

OneFS 9.11 delivers the latest version of PowerScale’s software platform for on-prem and cloud environments and workloads. This deployment flexibility can make it a solid fit for traditional file shares and home directories, vertical workloads like financial services, M&E, healthcare, life sciences, and next-gen AI, ML and analytics applications.

PowerScale’s scale-out architecture can be deployed on-site, in co-location facilities, or as customer-managed Amazon AWS and Microsoft Azure deployments, providing core to edge to cloud flexibility, plus the scale and performance needed to run a variety of unstructured workflows on-prem or in the public cloud.

With data security, detection, and monitoring being top of mind in this era of unprecedented cyber threats, OneFS 9.11 brings an array of new features and functionality to keep your unstructured data and workloads more available, manageable, and durable than ever.

Hardware Innovation

On the platform hardware front, OneFS 9.11 also unlocks dramatic capacity enhancements for the all-flash F710 and F910, which see the introduction of support for 122TB QLC SSDs.

Additionally, support is added in OneFS 9.11 for future H and A-series chassis-based hybrid platforms.

Software Journal Mirroring

In OneFS 9.11, a new software journal mirroring (SJM) capability is added for the PowerScale all-flash F710 and F910 platforms with 61 TB or larger QLC SSDs. For these dense-drive nodes, software journal mirroring negates the need for higher FEC protection levels and their associated overhead.

With SJM, file system writes are sent to a node’s local journal as well as synchronously replicated, or mirrored, to a buddy node’s journal. In the event of a failure, SJM’s automatic recovery scheme can use a Buddy journal’s mirrored contents to re-form the Primary node’s journal, avoiding the need to SmartFail the node.
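Conceptually, the mirrored write-plus-recovery flow can be sketched as follows. This is a minimal illustration of the idea, not OneFS internals; all class and method names are invented:

```python
# Minimal sketch of a mirrored journal write path: a write is acknowledged
# only after it lands in both the local and the buddy journal, so either
# copy can rebuild the other after a failure. Names are illustrative.
class Journal:
    def __init__(self):
        self.entries = []

    def append(self, record):
        self.entries.append(record)


class MirroredJournal:
    def __init__(self, local, buddy):
        self.local = local
        self.buddy = buddy

    def write(self, record):
        # Synchronous mirroring: both appends complete before the ack.
        self.local.append(record)
        self.buddy.append(record)
        return True  # acknowledge the write to the file system

    def recover_local(self):
        # After a local journal loss, re-form it from the buddy's mirror,
        # avoiding the need to SmartFail the node.
        self.local.entries = list(self.buddy.entries)


local, buddy = Journal(), Journal()
mj = MirroredJournal(local, buddy)
mj.write("txn-1")
mj.write("txn-2")
local.entries.clear()  # simulate loss of the local journal
mj.recover_local()     # rebuild it from the buddy's mirrored contents
```

The key property the sketch captures is that the ack is withheld until both copies are durable, which is what makes the buddy copy sufficient for recovery.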

Protocols

The S3 object protocol enjoys conditional write and cluster status enhancements in OneFS 9.11. With conditional write support, the addition of an ‘if-none-match’ HTTP header for ‘PutObject’ or ‘CompleteMultipartUpload’ requests guards against overwriting of existing objects with identical key names.
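As an illustration of the pre-condition mechanics, the sketch below shows the header an application would attach to a PUT and how the two possible outcomes are interpreted. The helper names are invented, and request signing and the actual HTTP call are omitted for brevity:

```python
# Sketch of an S3 conditional write using the 'If-None-Match' pre-condition.
# Sending 'If-None-Match: *' asks the server to reject the PUT if any object
# already exists under the target key.
def conditional_put_headers():
    # Merge these into the (signed) PutObject or CompleteMultipartUpload
    # request headers.
    return {"If-None-Match": "*"}


def interpret_put_status(status_code):
    # 200 means the object was created; 412 (Precondition Failed) means an
    # object with the same key name already exists, so the write was refused.
    if status_code == 200:
        return "created"
    if status_code == 412:
        return "key already exists - write rejected"
    return "error ({})".format(status_code)
```

In practice the headers would be merged into a signed PUT against the cluster's S3 endpoint; on a pre-condition miss the application sees the 412 and can retry under a different key or back off.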

For cluster reporting, capacity, health, and network status are exposed via new S3 endpoints. Status monitoring is built around a virtual bucket and object: GETs on the virtual object return the cluster status data, while all other S3 calls to the virtual bucket and object are blocked, with a 405 error code returned.
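As a sketch of how an application might consume the status object, the snippet below parses an illustrative payload and gates backup-placement decisions on it. The payload shape, field names, and threshold are assumptions for illustration, not the documented schema:

```python
import json

# Illustrative cluster status payload, as might be returned by a GET on the
# virtual status object. The field names here are assumed, not documented.
sample_response = json.dumps({
    "health": "OK",
    "capacity": {"total_bytes": 500 * 2**40, "used_bytes": 200 * 2**40},
})


def is_viable_target(raw, max_used_ratio=0.8):
    # Decide whether this endpoint is a reasonable place to land a backup:
    # the cluster must report healthy and sit below a capacity threshold.
    status = json.loads(raw)
    cap = status["capacity"]
    used_ratio = cap["used_bytes"] / cap["total_bytes"]
    return status["health"] == "OK" and used_ratio < max_used_ratio
```

An application balancing load across several PowerScale clusters could poll each endpoint's status object this way and route new data only to clusters that pass the check.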

Replication

In OneFS 9.11, SmartSync sees the addition of backup-to-object functionality. This includes a full-fidelity file system baseline plus fast incremental replication to ECS/ObjectScale, AWS S3, and AWS Glacier IR object stores. Support is provided for the full range of OneFS path lengths, encodings, and file sizes up to 16TB – plus special files and alternate data streams (ADS), symlinks and hardlinks, sparse regions, and POSIX and SMB attributes.

OneFS 9.11 also introduces the default enablement of temporary directory hashing on new SyncIQ replication policies, thereby improving target-side directory delete performance.

Support and Monitoring

For customers that are still using Dell’s legacy ESRS connectivity service, OneFS 9.11 also includes a seamless migration path to its replacement, Dell Technologies Connectivity Services (DTCS). To ensure all goes smoothly, a pre-check phase runs a migration checklist, which must pass in order for the operation to progress. Once underway, the prior ESRS and cluster identity settings are preserved and migrated, and finally a provisioning phase completes the transition to DTCS.

In summary, OneFS 9.11 brings the following new features and functionality to the Dell PowerScale ecosystem:

Feature       Description
Networking    Dynamic IP pools added to SmartConnect Basic
Platform      Support for F-series nodes with 122TB QLC SSD drives
Protocol      S3 conditional writes and cluster status API
Reliability   Software Journal Mirroring for high-capacity QLC SSD nodes
Replication   SmartSync File-to-Object
Support       Seamless ESRS to DTCS migration

We’ll be taking a deeper look at the new OneFS 9.11 features and functionality in blog articles over the course of the next few weeks.

Meanwhile, the new OneFS 9.11 code is available on the Dell Support site, as both an upgrade and reimage file, allowing both installation and upgrade of this new release.

For existing clusters running a prior OneFS release, the recommendation is to open a Service Request to schedule an upgrade. To provide a consistent and positive upgrade experience, Dell Technologies is offering assisted upgrades to OneFS 9.11 at no cost to customers with a valid support contract. Please refer to this Knowledge Base article for additional information on how to initiate the upgrade process.