OneFS Multi-party Authorization Configuration

The previous article in this series looked at the OneFS Multi-party Authorization architecture, which requires that one or more additional trusted parties sign off on requested changes for certain privileged actions within a PowerScale cluster.

At a high-level, the MPA workflow is as follows:

  1. The first step is registration, which requires two users with the approval privilege, both registered as internal approvers. And note that merely having an RBAC approval privilege is not enough – there is a secure process to register the users. Once there are two registered approval users, then it’s on to step #2, where the approval admin can enable the MPA feature.
  2. Once MPA is enabled, any of the predefined ‘privileged actions’ will require explicit approval from internal approvers.
  3. Step three involves generating a Privileged Action Approval Request (PAA) – either manually, or automatically when a privileged action is attempted.
  4. Next, one of the registered MPA approvers blesses or rejects the PAA Request.
  5. And finally, assuming approval was granted, the requesting administrator can now go ahead and execute their privileged action.

In order to approve requests, an approver first needs to complete a secure one-time registration, which involves entering a six-digit time-based one-time password (TOTP) code.

MPA can be configured and managed via the OneFS CLI, platform API, or WebUI. OneFS 9.12 sees the addition of a new ‘isi mpa’ CLI command set, whose basic syntax is as follows:

# isi mpa
Description:
    Manage Multi-Party Authorization.

Usage:
    isi mpa {<action> | <subcommand>}
        [--timeout <integer>]
        [{--help | -h}]

Actions:
    complete-registration    Complete Multi-Party Authorization registration.
    initiate-registration    Initiate Multi-Party Authorization registration.

Subcommands:
    approvers                Manage Multi-Party Authorization approvers.
    ca-certificates          Manage CA-Certificates for Multi-Party
                             Authorization.
    requests                 Manage Multi-Party Authorization requests.
    settings                 Manage Multi-Party Authorization settings.

For example, the CLI syntax to view the MPA global settings:

# isi mpa settings global view

Multi Party Authorization is enabled

Similarly, the platform API MPA endpoints reside under ‘/platform/23/mpa’. For example, the endpoint to get the MPA global settings:

https://<cluster_ip_addr>:8080/platform/23/mpa/settings/global

{

"last_update_time" : 1753132456,

"mpa_enabled" : true

}
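As an illustration, this endpoint can also be queried programmatically. The following minimal Python sketch uses the requests library against the URL above; the cluster address, credentials, and certificate handling are placeholders and assumptions:

import requests

CLUSTER = "https://<cluster_ip_addr>:8080"      # placeholder cluster address
AUTH = ("admin", "<passwd>")                    # an account with MPA settings read rights (assumption)

# Query the MPA global settings endpoint shown above.
resp = requests.get(f"{CLUSTER}/platform/23/mpa/settings/global", auth=AUTH, verify=False)
resp.raise_for_status()
settings = resp.json()
print(settings["mpa_enabled"], settings["last_update_time"])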

The WebUI configuration and management portal for MPA is located under Access > Multi-Party Authorization. For example, navigating to this page will display the MPA global settings:

The CLI commands and platform API endpoints for registration, query, and approval are as follows:

MPA Action CLI Command Platform API Endpoint
Initiate (re)registration isi mpa initiate-registration (--force) POST /platform/23/mpa/initiate-registration (?force)
Complete registration isi mpa complete-registration --totp_code xxxxxx PUT /platform/23/mpa/complete-registration -d '{"totp_code" : xxxxxx}'
List all approvers isi mpa approver list GET /platform/23/mpa/approvers
Query an approver isi mpa approver view <ID> GET /platform/23/mpa/approvers/<ID>
Approve a request isi mpa requests approve <id> --totp xxxxxx --approved true POST /platform/23/mpa/approval/<MPA-req-id>

The five basic steps for the MPA configuration and management are as follows:

  1. First, create users and grant the approval privilege.

The approver registration and enablement workflow is as follows:

Approver registration can be configured from the CLI with the following set of commands.

First, configure two approver users – in this case ‘mpa-approver1’ and ‘mpa-approver2’:

# isi auth users create mpa-approver1 --enabled true --password <passwd>

# isi auth users create mpa-approver2 --enabled true --password <passwd>

Next, create a group for these users, e.g. ‘mpa-group’:

# isi auth group create mpa-group

Add the approver users to their new group:

# isi auth group modify mpa-group --add-user mpa-approver1

# isi auth group modify mpa-group --add-user mpa-approver2

Then add the new group (mpa-group) to the RBAC ApprovalAdmin role:

# isi --debug auth roles modify ApprovalAdmin --add-group mpa-group

If needed, configure approvers’ SSH login to the cluster.

# isi auth roles create ssh-role

# isi auth roles modify ssh-role --add-group mpa-group --add-priv-read ISI_PRIV_LOGIN_SSH

Next, add the approver users to MPA and complete their registration. This involves the approver user logging in to their account and initiating registration, using the secret or embedded URL in conjunction with a third-party code generator to create a time-based one-time password (TOTP).

Here, the ‘mpa-approver1’ user is registering first:

# isi mpa initiate-registration

   Initiated registration successfully: (Online utility can be used to convert URL to QR code for use with authenticator)

   account:  Dell Technologies

   algorithm:  SHA1

   digits:  6

   issuer:  mpa-approver1

   period:  30

   secret:  5KHIHISJZPB7UTPRNEBLSIMI5L4R6K6UI

   url:  otpauth://totp/Dell%20Technologies:mpa-approver1?secret=5KHIHISJZPB7UTPRNEBLSIMI5L4R6K6UI&issuer=Dell%20Technologies&algorithm=SHA1&digits=6&period=30

The user secret or URL returned can be used to set up a TOTP in Google Authenticator, or another similar application.
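For reference, the six-digit code itself follows the standard TOTP algorithm (RFC 6238), using the parameters returned above (SHA1, 6 digits, 30-second period), so it can also be derived programmatically from the base32 secret. A minimal Python sketch, with the secret shown as a placeholder:

import base64, hashlib, hmac, struct, time

def totp(secret_b32, digits=6, period=30, algo=hashlib.sha1):
    # Decode the base32 secret, padding to a multiple of 8 characters.
    key = base64.b32decode(secret_b32.upper() + "=" * (-len(secret_b32) % 8))
    # Time-step counter as a big-endian 64-bit integer, per RFC 6238.
    counter = struct.pack(">Q", int(time.time() // period))
    digest = hmac.new(key, counter, algo).digest()
    # Dynamic truncation (RFC 4226) down to a six-digit code.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

print(totp("<secret-from-initiate-registration>"))   # substitute the real secret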

Specifically, this secure one-time registration involves:

a. First, initiating the OneFS registration process to obtain a URI, which can then be converted into a QR code by a third-party app if desired. For example:

b. Next, converting the QR code or URI into a time-based one-time password code via Google Authenticator, Microsoft Authenticator, or another TOTP app:

c. Finally, completing the OneFS approver registration wizard:

Or from the CLI:

# isi mpa complete-registration

totp_code:  ******

Approver registered.

This process should then be repeated for the second approver (i.e. ‘mpa-approver2’).
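For reference, the same registration flow can also be driven through the platform API endpoints listed in the earlier table. A minimal Python sketch, in which the cluster address, credentials, and the exact shape of the completion payload are assumptions:

import requests

CLUSTER = "https://<cluster_ip_addr>:8080"           # placeholder cluster address

session = requests.Session()
session.auth = ("mpa-approver1", "<passwd>")         # the registering approver's own credentials
session.verify = False                               # or point at the cluster's CA bundle

# Step 1: initiate registration to obtain the TOTP secret/URL.
init = session.post(f"{CLUSTER}/platform/23/mpa/initiate-registration")
init.raise_for_status()
print(init.json())                                   # contains the secret and otpauth URL

# Step 2: complete registration with the six-digit code from the authenticator app.
done = session.put(
    f"{CLUSTER}/platform/23/mpa/complete-registration",
    json={"totp_code": "123456"},                    # code format (string vs. integer) is an assumption
)
done.raise_for_status()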

  2. Once the approvers are registered, MPA can be enabled via its global settings.

MPA can be enabled from the WebUI under Access > Multi-Party Authorization > Settings:

Note that MPA cannot be enabled on a cluster without at least two registered approvers:

When successfully enabled, the following status banner will be displayed: 

Alternatively, enabling via the CLI can be done with the following command syntax:

# isi mpa settings global modify --enable true

# isi mpa settings global view

Multi Party Authorization is enabled

 

  3. Now that MPA is up and running, privileged action requests can be generated.

We’ll use the new OneFS 9.12 Secure Snapshots feature as an example.

Once MPA is enabled on a cluster, any request to execute a privileged action is paused pending approval.

In this case, an attempt to delete a secure snapshot results in a warning, and the requested action is suspended until approved by an authorizing party.

  4. Request approval.

The privileged action requests can now be approved or rejected, as appropriate.

To provide consent for the operation, the authorizing administrator clicks the ‘Approve’ button for the pending request, located under Access > Multi-Party Authorization > Requests:

The authorizing party is prompted for their time-based one-time password (TOTP) security authentication code as part of the approval process:

Or via the CLI:

# isi mpa requests list

# isi mpa requests approve <id> <comment> <approved> --totp-code <******> --approval-valid-before <timestamp> --zone <zone>

  5. Finally, the privileged action is ready to be executed.

Once approval has been granted, the WebUI reports a successful approval status.

The privileged secure snapshot delete operation is now able to proceed as expected:

Or from the CLI:

# isi mpa requests view <id>

In the next and final article in this series, we’ll turn our attention to MPA management and troubleshooting.

OneFS Multi-party Authorization Architecture

As we saw in the previous article in this series, OneFS Multi-party Authorization (MPA) requires that one or more additional trusted parties sign off on requested changes for certain privileged actions within a PowerScale cluster.

A prerequisite to enabling MPA in OneFS 9.12 is the requirement for two users with the approval privilege, both registered as internal approvers. Once MPA has been activated on a cluster, any of the predefined ‘privileged actions’ will require explicit approval. In order for a privileged action, such as secure snapshot deletion, to complete, one of the registered MPA approvers must bless the PAA request. Finally, assuming approval was granted, the privileged action can be executed.

The MPA privileged action approval workflow can be summarized in the following request state diagram:

Under the hood, MPA request lifecycle configuration data is stored in the form of key-value pairs, and the MPA Approvers registration status, PAA requests, and request approval details are kept in a shared SQLite database.

Note that MPA approver users can only reside in the System Zone and require the new ISI_PRIV_MPA_APPROVAL and ISI_PRIV_MPA_REQUEST privileges at a minimum.

MPA covers a set of privileged actions that span the S3 protocol (immutable bucket retention and access logging), cluster hardening, and, most notably, the new secure snapshots functionality.

As such, the full complement of these privileged actions in OneFS 9.12 is as follows:

Service / Component Action Description
MPA register_approval_admin Registering an approval admin becomes a privileged action once MPA is enabled.
MPA upload_trust_anchor Upload a trust anchor certificate after MPA is enabled.
Privileged CLI cli_mpa_check MPA check to execute a specific CLI command. E.g. once root lockdown mode is enabled, elevating to root will use this.
Hardening apply_hardening Applying a Hardening Profile, such as root-lockdown, STIG.
Hardening disable_hardening Disabling a Hardening Profile, such as root-lockdown, STIG.
S3 reduce_immutable_bucket_retention Reduce bucket retention for an immutable bucket.
S3 modify_server_access_logging_config Change a bucket’s access logging configuration.
Platform reduce_immutable_bucket_retention Reduce bucket retention for an immutable bucket.
Platform modify_server_access_logging_config Changing a bucket’s access logging configuration.
Snapshot delete_snapshot Delete a snapshot.
Snapshot modify_snapshot Modify a snapshot.
Snapshot delete_snapshot_schedule Delete a snapshot schedule.
Snapshot modify_snapshot_schedule Modify a snapshot schedule.

A number of new RBAC privileges are introduced in OneFS 9.12 to support MPA. These include MPA request; MPA approval, which allows an approver to bless or reject a privileged action request; signed approval upload; and MPA global and access zone settings:

MPA Privilege Description Permission
ISI_PRIV_MPA_APPROVAL Privilege for MPA approver to approve or reject MPA request. Write
ISI_PRIV_MPA_REQUEST Privilege required for MPA request APIs. Write/Read
ISI_PRIV_MPA_SETTINGS_GLOBAL Privilege required to enable MPA. Write
ISI_PRIV_MPA_SETTINGS_ZONE Privilege required to read MPA metadata / MPA request-lifecycle settings. Read
ISI_PRIV_SIGNED_APPROVAL_UPLOAD Upload signed approval for Multi-Party Authorization Requests. (Applicable for 3rd party MPA only) Write

There’s also the new ‘ApprovalAdmin’ default role, and a number of other roles see the addition of MPA privileges, including SecurityAdmin, SystemAdmin, ZoneAdmin, and BasicUserRole.

The ApprovalAdmin role is automatically assigned all the required privileges for MPA approval and configuration. For example:

# isi auth roles list | grep -i appr

ApprovalAdmin

# isi auth roles view ApprovalAdmin
       Name: ApprovalAdmin
Description: Allows MPA request approval.
    Members: root
             admin
 Privileges
             ID: ISI_PRIV_LOGIN_PAPI
     Permission: +

             ID: ISI_PRIV_MPA_APPROVAL
     Permission: w

             ID: ISI_PRIV_MPA_REQUEST
     Permission: w

             ID: ISI_PRIV_MPA_SETTINGS_GLOBAL
     Permission: r

             ID: ISI_PRIV_MPA_SETTINGS_ZONE
     Permission: r

The minimum MPA privileges required are ISI_PRIV_MPA_APPROVAL and ISI_PRIV_MPA_REQUEST.

Here are the system access zone roles that see the addition of MPA privileges in OneFS 9.12:

System Zone Roles Privilege Permission
ApprovalAdmin ISI_PRIV_MPA_APPROVAL Write
ISI_PRIV_MPA_REQUEST Write
ISI_PRIV_MPA_SETTINGS_GLOBAL Read
ISI_PRIV_MPA_SETTINGS_ZONE Read
BasicUserRole ISI_PRIV_MPA_REQUEST Write
ISI_PRIV_MPA_SETTINGS_ZONE Read
SecurityAdmin ISI_PRIV_MPA_REQUEST Write
ISI_PRIV_MPA_SETTINGS_GLOBAL Write
ISI_PRIV_MPA_SETTINGS_ZONE Read
SystemAdmin ISI_PRIV_MPA_REQUEST Write
ISI_PRIV_MPA_SETTINGS_GLOBAL Read
ISI_PRIV_MPA_SETTINGS_ZONE Read

Similarly, the OneFS 9.12 MPA privileges for non-system zone roles:

Non-System Zone Roles Privilege Permission
BasicUserRole ISI_PRIV_MPA_REQUEST Write
ISI_PRIV_MPA_SETTINGS_ZONE Read
ZoneAdmin ISI_PRIV_MPA_REQUEST Write
ISI_PRIV_MPA_SETTINGS_ZONE Read
ZoneSecurityAdmin ISI_PRIV_MPA_REQUEST Write
ISI_PRIV_MPA_SETTINGS_ZONE Read

MPA approvers can use any of a cluster’s authentication providers, such as file, local, Active Directory or LDAP, and they can grant or reject any MPA request from any zone. However, an approver cannot approve their own request, regardless of whether it’s created by them or created for them.

The CLI commands and platform API endpoints for registration, query, and approval are as follows:

MPA Action CLI Command Platform API Endpoint
Initiate (re)registration isi mpa initiate-registration (--force) POST /mpa/initiate-registration (?force)
Complete registration isi mpa complete-registration --totp_code xxxxxx PUT /mpa/complete-registration -d '{"totp_code" : xxxxxx}'
List all approvers isi mpa approver list GET /mpa/approvers
Query an approver isi mpa approver view <ID> GET /mpa/approvers/<ID>
Approve a request isi mpa requests approve <id> --totp xxxxxx --approved true POST /mpa/approval/<MPA-req-id>
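To illustrate the approval call, a pending request could be blessed via the approval endpoint above along the following lines in Python; the request ID comes from ‘isi mpa requests list’, and the payload field names simply mirror the CLI flags, so treat them as assumptions rather than a documented schema:

import requests, time

CLUSTER = "https://<cluster_ip_addr>:8080"           # placeholder cluster address
REQUEST_ID = "<MPA-req-id>"                          # from 'isi mpa requests list'

resp = requests.post(
    f"{CLUSTER}/platform/23/mpa/approval/{REQUEST_ID}",
    auth=("mpa-approver2", "<passwd>"),              # a registered approver's credentials
    verify=False,
    json={
        "approved": True,
        "totp_code": "123456",                       # the approver's current six-digit TOTP
        "comment": "Approved secure snapshot deletion",
        # Optionally shorten the approval window (epoch seconds), mirroring
        # the CLI's --approval-valid-before flag:
        "approval_valid_before": int(time.time()) + 24 * 60 * 60,
    },
)
resp.raise_for_status()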

Note that the approver user can easily re-register at any time, say if their TOTP is lost, or for another security concern.

An approval request can be in one of five states: Approved, Cancelled, Completed, Pending, or Rejected.

These states, and their various expiry times, can be viewed with the following CLI command syntax:

# isi mpa settings request-lifecycle list

Approved
    Description: expire in configured days for privilege action execution.
    Expire_time: 7d

Cancelled
    Description: remove from system in configured days.
    Expire_time: 7d

Completed
    Description: remove from system in configured days.
    Expire_time: 30d

Pending
    Description: expire in configured days can be updated by user or approver.
    Expire_time: 7d

Rejected
    Description: remove from system in configured days.
    Expire_time: 30d

The next article in this series will focus on the specific configuration, enablement, and approval procedures for MPA.

OneFS Multi-party Authorization

Among a handful of new security features, OneFS 9.12 introduces Multi-Party Authorization (MPA). MPA is an administrative approval mechanism that requires at least one additional trusted party to sign off on a requested change, for certain privileged actions within a PowerScale cluster.

As such, MPA helps to mitigate the risk of data loss or system configuration damage from critical actions, by vastly reducing the likelihood of accidental or malicious execution of consequential operations. Conceptually, MPA is fairly analogous to a bank’s safe deposit box, which requires dual control: Both the authorized owner and a bank official must be present in order to open the vault and unlock the box, ensuring that no single party can access its contents independently (dual authorization). MPA enforces a similar security control mechanism wherein operations involving critical or sensitive systems or data require explicit approval from multiple authorized entities. This ensures that no single actor can execute high-impact actions unilaterally, thereby mitigating risk and enhancing operational integrity through enforced oversight.

As such, many enterprises require MPA in order to meet industry data protection requirements, in addition to internal security mandates and best practices.

MPA can be configured and managed from the CLI, WebUI or platform API, but note that the feature is not activated by default. Once MPA has been enabled and configured, all predefined ‘privileged actions’ require additional approval from an authorizing user. This is markedly different from earlier OneFS releases, where the original requester would typically have had sufficient rights to execute the action entirely on their own.

MPA provides an approval mechanism for a set of ‘privileged actions’, which are defined in OneFS. Additionally, a new default ‘approval admin’ RBAC role is added in OneFS 9.12 specifically for this purpose. This ApprovalAdmin role is automatically assigned all the required privileges for MPA approval and configuration. Specifically:

System Zone Roles Privilege Permission
ApprovalAdmin ISI_PRIV_MPA_APPROVAL Write
ISI_PRIV_MPA_REQUEST Write
ISI_PRIV_MPA_SETTINGS_GLOBAL Read
ISI_PRIV_MPA_SETTINGS_ZONE Read

There are no special licensing requirements for enabling or running MPA. So, upon installation or upgrade to OneFS 9.12, once two or more approval users have been registered, the MPA feature itself can be enabled:

Note that MPA is incompatible with, and therefore not supported on, a PowerScale cluster that is already running in Compliance Mode.

Once MPA is up and running, any approval that’s granted for a privileged operation will remain valid and viable for a defined duration. When approving a request, the approver can specify an expiry time. Once that allotted time has passed, the approved request can no longer be used to execute an action, and a new approval must be generated. If the approver does not specify an expiry time, then, by default, their approval will be valid for seven days.

At a high-level, the MPA workflow is as follows:

The first step is registration, which requires two users with the approval privilege, both registered as internal approvers. Note that merely having an RBAC approval privilege is not enough – there is a secure process to register the users. Once there are two registered approval users, the approval admin can enable the MPA feature.

With MPA enabled, Privileged Action Approval Requests (PAA) can be generated – either manually, or automatically when a privileged action is attempted. Any of the predefined ‘privileged actions’ will require explicit approval from internal approvers. Next, one of the registered MPA approvers blesses or rejects the PAA request. And finally, assuming approval was granted, the requesting administrator can go ahead and execute their privileged action.

    Step Action Description
    1 Approver Registration Registration requires two users with the approval privilege, both registered as internal approvers. MPA requires a secure process to register the users.
    2 MPA Enablement Enable MPA via the CLI, WebUI, or platform API. Any of the predefined ‘privileged actions’ will require explicit approval from internal approvers.
    3 Request Creation Generate a Privileged Action Approval Request (PAA) – either manually, or automatically when a privileged action is attempted.
    4 Request Approval One of the registered MPA approvers blesses or rejects the PAA request.
    5 Request Execution Assuming approval was granted, the requesting administrator can now go ahead and execute their privileged action.

MPA approver users can only reside in the System Zone, and require the new ISI_PRIV_MPA_APPROVAL and ISI_PRIV_MPA_REQUEST privileges at a minimum. So while a request can be generated from any zone, an approver for that request must reside in the system zone.

An approval request can be in one of the following states:

Approval Request State Description Expiry Time
Approved Indicates request is approved by an approver. Will expire in 7 days by default, or at the approver-configured expiry time. 7 days
Cancelled Indicates request is cancelled by the user who created the request. Will be removed from system in 7 days. 7 days
Completed Approved request has been executed, and cannot be used again. Will be removed from system in 30 days. 30 days
Pending Awaiting approval, prior to expiry in 7 days. 7 days
Rejected Indicates request is rejected by an approver, and will be removed from system in 30 days. 30 days

Note that, in order to approve or reject requests, an approver needs to complete a secure one-time registration.

In OneFS 9.12, the MPA request lifecycle configuration is predefined, and no updates are supported. Only the Approval API (approval_valid_before) can shorten MPA request expiration time in approved status. MPA requests are zone scoped and managed by MPA request lifecycle configuration. An authorized user can view or update a request while it’s in a ‘pending’ state, but only an MPA approver user can move the request to an ‘approved’ or ‘rejected’ state. While an approval is valid (i.e. not expired), the original requester can use it to execute the privileged action. A request stays in an ‘approved’ state until its approval expires, at which point it is moved to a ‘completed’ state.

MPA’s privileged actions extend across the S3 protocol (immutable bucket retention and access logging), cluster hardening, and, most notably, the new secure snapshots functionality. These are available for selection in the MPA configuration, for example from the WebUI MPA Requests dropdown menu:

In the next article in this series, we’ll take a closer look at the architectural fundamentals of MPA.

PowerScale Cybersecurity Suite

The Dell PowerScale Cybersecurity Suite represents a forward-thinking, integrated approach to addressing the growing challenges in cybersecurity and disaster recovery.

It aligns with Gartner’s definition of Cyberstorage, a category of solutions specifically designed to secure data storage systems against modern threats. These threats include ransomware, data encryption attacks, and theft, and the emphasis of Cyberstorage is on active prevention, early detection, and the ability to block attacks before they cause damage. Recovery capabilities are also tailored to the unique demands of data storage environments, making the protection of data itself a central layer of defense.

Unlike traditional solutions that often prioritize post-incident recovery – leaving organizations exposed during the critical early stages of an attack – Dell’s PowerScale Cybersecurity Suite embraces the Cyberstorage paradigm by offering a comprehensive set of capabilities that proactively defend data and ensure operational resilience. These include:

Capability Details
Active Defense at the Data Layer The suite integrates AI-driven tools to detect and respond to threats in real-time, analyzing user behavior and unauthorized data access attempts. Bidirectional threat intelligence integrates seamlessly into SIEM, SOAR, and XDR platforms for coordinated protection at every layer.
Automated Threat Prevention Includes features like automated snapshots, operational air-gapped vaults, and immediate user lockouts to stop attacks before damage spreads. Attack simulation tools also ensure that the defenses are always optimized for emerging threats.
NIST Framework Alignment Adheres to the National Institute of Standards and Technology (NIST) cybersecurity framework, providing a structured approach to identifying, protecting, detecting, responding to, and recovering from threats. This comprehensive protection eliminates vulnerabilities overlooked by traditional backup and security tools, enabling organizations to stay ahead of today’s evolving cyber risks while ensuring business continuity.
Rapid Recovery and Resilience With secure backups and precision recovery, organizations can rapidly restore specific files or entire datasets without losing unaffected data. Recovery is accelerated by integrated workflows that minimize downtime.

By embedding detection, protection, and response directly into the data layer, the Dell PowerScale Cybersecurity Suite adopts a proactive and preventive approach to safeguarding enterprise environments:

Approach Details
Identification & Detection Detecting potential incursions in real time using AI-driven behavioral analytics.
Protection Protecting data at its source with advanced security measures and automated threat monitoring.
Response Responding decisively with automated remediation to minimize damage and accelerate recovery, ensuring seamless continuity.
Recovery Providing forensic data and recovery tools to quickly restore clean data in the event of a breach, minimizing business disruption.

It begins with identification and detection, where potential incursions are identified in real time using AI-driven behavioral analytics, allowing organizations to act before threats escalate. Protection is achieved by safeguarding data at its source through advanced security measures and continuous automated threat monitoring. When a threat is detected, the suite responds with automated remediation processes that minimize damage and accelerate recovery, ensuring uninterrupted operations.

This integrated approach enables Dell to address the full lifecycle of security and recovery within the PowerScale platform, delivering exceptional resilience across IT environments.

Released globally on August 28, 2025, the Dell PowerScale Cybersecurity Suite is available in three customizable bundles tailored to meet diverse operational and regulatory needs.

Bundle Details
Cybersecurity Bundle Leverages AI-driven threat detection, Zero Trust architecture, and automated risk mitigation to fortify security.
Airgap Vault Bundle Extends the capabilities of the Cybersecurity Bundle by adding isolated, secure backups for robust ransomware protection. This bundle requires the Cybersecurity Bundle and the Disaster Recovery Bundle.
Disaster Recovery Bundle Prioritizes rapid recovery with near-zero Recovery Point Objectives (RPO), Recovery Time Objectives (RTO), and seamless failover capabilities.

The Cybersecurity Bundle leverages AI-driven threat detection, Zero Trust architecture, and automated risk mitigation to strengthen data protection. The Airgap Vault Bundle builds on this by adding isolated, secure backups for enhanced ransomware defense, and requires both the Cybersecurity and Disaster Recovery bundles for full deployment. The Disaster Recovery Bundle focuses on rapid recovery, offering near-zero Recovery Point Objectives (RPO) and Recovery Time Objectives (RTO), along with seamless failover capabilities to ensure business continuity.

Customers can choose specific bundles based on their requirements, with the only prerequisite being that the Airgap Vault bundle must be deployed alongside both the Cybersecurity and Disaster Recovery bundles to ensure full functionality and integration.

Built upon the venerable PowerScale platform, the suite is engineered to protect unstructured data and maintain operational availability in today’s increasingly complex threat landscape. As such, it offers a comprehensive set of tools, techniques, and architectural flexibility to deliver multilayered security and responsive recovery.

This enables organizations to design robust solutions that align with their specific security priorities—from advanced threat detection to seamless disaster recovery.

Among its key benefits, the suite includes automated threat response capabilities that swiftly mitigate risks such as malicious data encryption and exfiltration.

Feature Details
Automated threat response The suite features automated responses to cybersecurity threats, such as data encryption, and the prevention of data exfiltration, helping mitigate risks swiftly and effectively.
Secure Operational Airgap Vault Data is protected within an isolated operational airgap vault, and is only transferred to the vault if the production storage environment is not under attack, ensuring critical assets remain secure and inaccessible to unauthorized actors.
Ecosystems Integration Seamlessly integrates with leading endpoint protection and incident response software, automating and simplifying operations during a cyberattack to ensure a coordinated and efficient response.
DoD-Certified Hardware Integration Designed to enhance PowerScale’s DoD APL certified hardware, meeting rigorous cybersecurity standards, and providing customers with a trusted platform on which to build their defenses. The suite’s advanced capabilities, robust protection, and proven hardware deliver a comprehensive cyber and DR solution tailored to meet today’s complex security challenges.

It also features a secure operational airgap vault, which ensures that data is only transferred when the production environment is verified to be safe, keeping critical assets isolated and protected from unauthorized access. Integration with leading endpoint protection and incident response platforms allows for coordinated and efficient responses during cyberattacks, streamlining operations and reducing complexity.

The suite is also designed to complement PowerScale’s DoD APL-certified hardware, meeting stringent cybersecurity standards and providing a trusted foundation for enterprise defense strategies. Its advanced capabilities, combined with proven hardware, deliver a comprehensive cybersecurity and disaster recovery solution tailored to modern security challenges.

The Dell PowerScale Cybersecurity Suite is engineered to support petabyte-scale environments containing billions of files distributed across multiple PowerScale clusters. There is no hard-coded limit on the volume of data it can manage. However, actual throughput and failover times are influenced by factors such as SyncIQ bandwidth, the degree of policy parallelism, and the overall readiness of the environment. To help forecast recovery timelines, the suite includes an Estimated Failover Time engine that uses real metrics to generate policy-specific projections.

Superna software, which underpins the suite, interacts with PowerScale systems via REST APIs. These APIs provide access to inventory data, facilitate the orchestration of snapshot creation and deletion, and enable user lockout and restoration of access to shared resources. Specifically, the suite utilizes the file system API and the system configuration API to perform these operations. Meanwhile, configuration and management can be performed via the comprehensive, intuitive WebUI:

The performance impact of running the Data Security Edition—which includes the Cybersecurity Bundle with Ransomware Defender and Easy Auditor—is primarily tied to the frequency and volume of API calls made to PowerScale. These calls can be managed and tuned by administrators and support teams during deployment and system configuration. One known scenario that may affect cluster performance occurs when user permissions are broadly configured to allow all users access to all shares. In such cases, default settings combined with enabled snapshots can lead to excessive API endpoint activity, as the system attempts to create and delete snapshots across all shares. To mitigate this, Dell recommends disabling default snapshots during deployment and instead configuring snapshots only for critical data paths. Outside of this specific configuration, Dell has not identified any significant performance impacts under normal operating conditions.

The Dell-branded software is built on the same codebase as the standard Superna 2.x release, with branding determined by licensing. This branding helps ensure consistency, simplify user interactions, and reinforce the alignment between the suite and Dell Technologies’ broader product portfolio.

With regard to release cadence, there is typically a 60-day lead time between Superna releasing a new version and Dell launching its branded equivalent, allowing for additional QA, regression, and longevity testing.

With the Cybersecurity suite, Dell Technologies will directly manage implementation services and provide first-call support. This approach ensures a seamless and customer-focused experience, with faster response times and streamlined service delivery. It also guarantees that customers receive consistent and integrated support throughout their deployment and operational lifecycle.

It is important to note that Dell-branded and Superna-branded software cannot be mixed within the same PowerScale cluster. Currently, the Dell PowerScale Cybersecurity Suite is intended exclusively for new deployments and is not compatible with existing Superna-branded environments. Migration from Superna to Dell-branded software is not supported at this time, unless an entire Superna Eyeglass solution is being renewed. However, Dell is actively working to expand migration options in future releases.

PowerScale InsightIQ 6.1

It’s been a sizzling summer for Dell PowerScale to date! Hot on the heels of the OneFS 9.12 launch comes the unveiling of the innovative new PowerScale InsightIQ 6.1 release.

InsightIQ provides powerful performance and health monitoring and reporting functionality, helping to maximize PowerScale cluster efficiency. This includes advanced analytics to optimize applications, correlate cluster events, and the ability to accurately forecast future storage needs.

So what new goodness does this InsightIQ 6.1 release add to the PowerScale metrics and monitoring mix?

Additional functionality includes:

Feature New IIQ 6.1 Functionality
Ecosystem support ·         InsightIQ is qualified on Ubuntu
Flexible Alerting ·         Defining custom alerts on the most used set of granular metrics.

·         Nine new KPIs for a total of 16 KPIs.

·         Increased granularity of alerting, and more.

Online Migration from Simple to Scale ·         Customers with InsightIQ 6.0.1 Simple (OVA) can now migrate data and functionalities to InsightIQ 6.1.0 Scale.
Self-service Admin Password Reset ·         Administrators can reset their own password through a simple, secure flow with reduced IT dependency

InsightIQ 6.1 continues to offer the same two deployment models as its predecessors:

Deployment Model Description
InsightIQ Scale Resides on bare-metal Linux hardware or virtual machine.
InsightIQ Simple Deploys on a VMware hypervisor (OVA).

The InsightIQ Scale version resides on bare-metal Linux hardware or virtual machine, whereas InsightIQ Simple deploys via OVA on a VMware hypervisor.

InsightIQ v6.x Scale enjoys a substantial breadth-of-monitoring scope, with the ability to encompass 504 nodes across up to 20 clusters.

Additionally, InsightIQ v6.x Scale can be deployed on a single Linux host. This is in stark contrast to InsightIQ 5’s requirement for a minimum of three Linux nodes as its installation platform.

Deployment:

The deployment options and hardware requirements for installing and running InsightIQ 6.x are as follows:

Attribute InsightIQ 6.1 Simple InsightIQ 6.1 Scale
Scalability Up to 10 clusters or 252 nodes Up to 20 clusters or 504 nodes
Deployment On VMware, using OVA template RHEL, SLES, or Ubuntu with deployment script
Hardware requirements VMware v15 or higher:

·         CPU: 8 vCPU

·         Memory: 16GB

·         Storage: 1.5TB (thin provisioned);

Or 500GB on NFS server datastore

Up to 10 clusters and 252 nodes:

·         CPU: 8 vCPU or Cores

·         Memory: 16GB

·         Storage: 500GB

Up to 20 clusters and 504 nodes:

·         CPU: 12 vCPU or Cores

·         Memory: 32GB

·         Storage: 1TB

Networking requirements 1 static IP on the PowerScale cluster’s subnet 1 static IP on the PowerScale cluster’s subnet

Ecosystem support:

The InsightIQ ecosystem itself is also expanded in version 6.1 to include Ubuntu 24.04 Online deployment and OpenStack RHOSP 21 with RHEL 9.6, in addition to SLES 15 SP4 and Red Hat Enterprise Linux (RHEL) versions 9.6 and 8.10. This allows customers who have standardized on Ubuntu Linux to now run an InsightIQ 6.1 Scale deployment on a v24.04 host to monitor the latest OneFS versions.

Qualified on InsightIQ 6.0 InsightIQ 6.1
OS (IIQ Scale Deployment) RHEL 8.10, RHEL 9.4, and SLES 15 SP4 RHEL 8.10, RHEL 9.6, and SLES 15 SP4
PowerScale OneFS 9.4 to 9.11 OneFS 9.5 to 9.12
VMware ESXi ESXi v7.0U3 and ESXi v8.0U3 ESXi v8.0U3
VMware Workstation Workstation 17 Free Version Workstation 17 Free Version
Ubuntu N/A Ubuntu 24.04 Online deployment
OpenStack RHOSP 17 with RHEL 9.4 RHOSP 21 with RHEL 9.6

Similarly, in addition to deployment on VMware ESXi 8, the InsightIQ Simple version can also be installed for free on VMware Workstation 17, providing the ability to stand up InsightIQ in a non-production or lab environment for trial or demo purposes, without incurring a VMware licensing charge.

Additionally, the InsightIQ OVA template is now reduced in size to under 5GB, and with an installation time of less than 12 minutes.

Online Upgrade

The IIQ upgrade in 6.1 is a six-step process:

First, the installer checks the current InsightIQ version, verifies there’s sufficient free disk space, and confirms that setup is ready. Next, IIQ is halted and dependencies are met, followed by the installation of the 6.1 infrastructure and a migration of legacy InsightIQ configuration and historical report data to the new platform. The cleanup phase removes the old configuration files and images, followed by the final phase, which upgrades alerts and removes the lock, leaving InsightIQ 6.1 ready to roll.

Phase Details
Pre-check •       docker command

•        IIQ version check 6.0.1

•       Free disk space

•       IIQ services status

•       OS compatibility

Pre-upgrade •       EULA accepted

•       Extract the IIQ images

•       Stop IIQ

•       Create necessary directories

Upgrade •       Upgrade addons services

•       Upgrade IIQ services except alerts

•       Upgrade EULA

•       Status Check

Post-upgrade •       Update admin email

•       Update IIQ metadata

Cleanup •       Replace scripts

•       Remove old docker images

•       Remove upgrade and backup folders

Upgrade Alerts and Unlock •       Trigger alert upgrade

•       Clean lock file

The prerequisites for upgrading to InsightIQ 6.1 are either a Simple or Scale deployment with 6.0.1 installed, and with a minimum of 40GB free disk space.

The actual upgrade is performed by the ‘upgrade-iiq.sh’ script:

Specific steps in the upgrade process are as follows:

  • Download and uncompress the bundle
# tar xvf iiq-install-6.1.0.tar.gz
  • Enter InsightIQ folder and un-tar upgrade scripts
# cd InsightIQ
# tar xvf upgrade.tar.gz
  • Enter upgrade scripts folder
# cd upgrade/
  • Start upgrade. Note that the usage is the same for both the Simple and Scale InsightIQ deployments.
# ./upgrade-iiq.sh -m <admin_email>

Upon successful upgrade completion, InsightIQ will be accessible via the primary node’s IP address.

Online Simple-to-Scale Migration

The Online Simple-to-Scale Migration feature enables seamless migration of data and functionalities from InsightIQ version 6.0.1 to version 6.1. This process is specifically designed to support migrations from InsightIQ 6.0.1 Simple (OVA) deployments to InsightIQ 6.1 Scale deployments.

Migration is supported only from InsightIQ version 6.0.1. To proceed, the following prerequisites must be met:

  • An InsightIQ Simple (OVA) deployment running IIQ 6.0.1.
  • An InsightIQ Scale deployment with IIQ 6.1.0 installed and the EULA accepted.

The ‘iiq_data_migration’ script can be run as follows to initiate a migration:

# cd /usr/share/storagemonitoring/online_migration
# bash iiq_data_migration.sh

Additionally, detailed logs are available at the following locations for monitoring and verifying the migration process:

Logfile Location
Metadata Migration Log /usr/share/storagemonitoring/logs/online_migration/insightiq_online_migration.log
Cluster Data Migration Log /usr/share/storagemonitoring/logs/clustermanagement/insightiq_cluster_migration.log

 Self-service Admin Password Reset

InsightIQ 6.1 introduces a streamlined self-service password reset feature for administrators. This secure process allows admins to reset their own passwords without IT intervention.

Key features include one-time password (OTP) verification, ensuring only authorized users can reset passwords. Timeout enforcement means OTPs expire after 5 minutes for added security, and accounts are locked after five failed attempts to prevent brute-force attacks.

Note that SMTP must be configured in order to receive OTPs via email.

Flexible Alerting

InsightIQ 6.1 enhances alerting capabilities with 16 total KPIs/metrics, including 9 new ones. Key improvements include:

  • Greater granularity (beyond cluster-level alerts)
  • Support for sub-filters and breakouts
  • Multiple operators and unit-based thresholding
  • Aggregator and extended duration support

Several metrics have been transformed and/or added in version 6.1. For example:

IIQ 6.1 Metric IIQ 6.0 Metric
Active Clients ·         Active Clients NFS

·         Active Clients SMB1

·         Active Clients SMB2

Average Disk Hardware Latency
Average Disk Operation Size
Average Pending Disk Operation Count ·         Pending Disk Operation Count
Capacity ·         Drive Capacity

·         Cluster Capacity

·         Node Capacity

·         Nodepool Capacity

Connected Clients ·         Connected Clients NFS

·         Connected Clients SMB

CPU Usage ·         CPU Usage
Disk Activity
Disk Operations Rate ·         Pending Disk Operation Count
Disk Throughput Rate
External Network Errors Rate
External Network Packets Rate
External Network Throughput Rate ·         Network Throughput Equivalency
File System Throughput Rate
Protocol Operations Average Latency ·         Protocol Latency NFS

·         Protocol Latency SMB

Also, clusters can now be directly associated with alert rules:

The generated alerts page sees the addition of a new ‘Metric’ field:

For example, an alert can now be generated at the nodepool level for the metric ‘External Network Throughput Rate’:

IIQ 6.1 also includes an updated email format, as follows:

Alert Migration

The alerting system has transitioned from predefined alerting to flexible alerting. During this migration, all alert policies, associated rules, resources, notification rules, and generated alerts are automatically migrated—no additional steps are required.

Key differences include:

IIQ 6.1 Flexible Alerting IIQ 6.0 Predefined Alerting
·         Each alert rule is associated with only one cluster (1:1 mapping). ·         Alert rules and resources are tightly coupled with alert policies.
·         A policy can still have multiple rules, but resources are now linked directly to rules, not policies.

·         This results in N × M combinations of alert rules and clusters (N = resources, M = rules).

·         A single policy can be linked to multiple rules and resources.

For example, imagine the following scenario:

  • Pre-upgrade (Predefined Alerting):

An IIQ 6.0.1 instance has a policy (Policy1), which is associated with two rules (CPU Rule & Capacity Rule) and 4 clusters (Cluster1-4).

  • Post-Upgrade (Flexible Alerting):

Since only one resource can be associated with one alert rule, a separate alert rule will be created for each cluster. So, after upgrading to IIQ 6.1, Policy1 will now have four individual cluster CPU Alert rules and four individual cluster Capacity Alert rules:

If an IIQ 6.1 upgrade happens to fail due to alert migration, a backup of the predefined alerting database is automatically created. To retry the migration, run:

# bash /usr/share/storagemonitoring/scripts/retrigger_alerts_upgrade.sh

Plus, for additional context and troubleshooting, the alert migration logs can be found at:

 /usr/share/storagemonitoring/logs/alerts/alerts_migration.log

Durable Data Collection

Data collection and processing in IIQ 6.x provides both performance and fault tolerance, with the following decoupled architecture:

Component Role
Data Processor Responsible for processing and storing the data in TimescaleDB for display by Reporting service.
Temporary Datastore Stores historical statistics fetched from PowerScale cluster, in-between collection and processing.
Message Broker Facilitates inter-service communication. With data collection and data processing separated into distinct services, the broker allows each service to signal the other as its respective work comes up.
Timescale DB New database storage for the time-series data. Designed for optimized handling of historical statistics.

The InsightIQ TimescaleDB database stores long-term historical data via a cascading retention strategy, in which telemetry data is summarized and stored at the following levels, each with a different data retention period:

Level Sample Length Data Retention Period
Raw table Varies by metric type. Raw data sample lengths range from 30s to 5m. 24 hours
5m summary 5 minutes 7 days
15m summary 15 minutes 4 weeks
3h summary 3 hours Infinite

Note that the actual raw sample length may vary by graph/data type – from 30 seconds for CPU % Usage data up to 5 minutes for cluster capacity metrics.
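As a purely conceptual illustration of this cascading roll-up (not the actual TimescaleDB implementation), the sketch below averages hypothetical 30-second raw samples into 5-minute summary buckets in Python; subsequent 15-minute and 3-hour levels would chain in the same way:

from collections import defaultdict

def summarize(samples, bucket_seconds=300):
    # Average (timestamp, value) raw samples into fixed-width summary buckets.
    buckets = defaultdict(list)
    for ts, value in samples:
        buckets[ts - (ts % bucket_seconds)].append(value)
    return {start: sum(vals) / len(vals) for start, vals in sorted(buckets.items())}

# Hypothetical 30s CPU samples for one hour, rolled up into 5-minute averages,
# analogous to the raw table -> 5m summary level above.
raw = [(t, 20 + (t % 90) / 10) for t in range(0, 3600, 30)]
five_minute = summarize(raw, 300)
fifteen_minute = summarize(five_minute.items(), 900)   # next roll-up level
print(list(five_minute.items())[:3])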

Meanwhile, the new InsightIQ v6.1 code is available for download on the Dell Support site, allowing both the installation of and upgrade to this new release.

ObjectScale XF960

Fresh off the launch of the new ObjectScale 4.1 release, Dell Technologies has announced the general availability of the ObjectScale XF960 platform, a next-generation all-flash object storage appliance designed to meet the performance demands of AI, analytics, and unstructured data workloads. The XF960 is now ready to ship, offering a compelling blend of speed, scalability, and efficiency.

Built to support performance-driven workloads, the XF960 enables organizations to unlock the full potential of their data. Whether training complex AI models, managing large datasets, or deploying cloud-native applications, the XF960 provides the object storage substrate needed to drive innovation.

As Dell’s highest-performing object storage platform to date, the new XF960 delivers up to 300% more read throughput, 42% more write throughput, plus 75% lower read and 42% lower write response times than the previous-generation EXF900.

The XF960 scales effortlessly from small clusters to large enterprise deployments, maintaining performance and manageability throughout. It also introduces advanced data efficiency features, including five user-configurable compression modes—LZ4, Zstandard, and Deflate among them—allowing for up to a 9:1 compression ratio on certain workloads.

Enhancing its S3 protocol compatibility, the XF960 supports push-based event notifications, up to three times faster object listing, S3FS file system mounting, and seamless integration with the latest AWS SDKs, improving both data access and developer productivity.

Designed for flexible integration, the XF960 accommodates the expansion of existing ECS environments, and is initially supported with the new ObjectScale 4.1 code which dropped last week (8/12/25).

As compared to the EXF900, the XF960 features up-rev’d hardware, including a 2RU PowerEdge R760 chassis, dual Intel Sapphire Rapids CPUs with 32 cores, 256GB DDR5 memory, and support for NVMe drives ranging from 7.68TB to 61.44TB. It also includes 100GbE front-end and back-end NICs, dual 1400W power supplies, and S5448 switches.

  EXF900 XF960
CPU Dual Intel Cascade Lake 24 Cores (165 W) Dual Intel Sapphire Rapids 32 Cores (270W)
RAM 192GB RAM per node, installed as 12x16GB DDR4 RDIMMs 256GB RAM per node, installed as 16x16GB DDR5 RDIMMs
SSDs

(NVMe)

3.84TB ISE

7.68TB ISE

15.36TB TLC ISE

61.44TB QLC ISE

12 and 24 drive configurations

7.68TB TLC ISE

15.36TB TLC SED FIPS

30TB QLC ISE

61.44TB QLC ISE

6, 12 and 24 drive configurations

Front End NIC 25GbE, 100GbE 100GbE
Back End NIC 25GbE, 100GbE 100GbE
Power Dual 1100W PSUs Dual 1400W PSUs
Back/front-end Switches S5248 S5448

For existing ObjectScale all-flash customers, while it is technically possible to intermix EXF900 and XF960 nodes, it is not recommended due to performance limitations. Mixed clusters will operate at EXF900 performance levels. That said, the next ObjectScale release, v4.2, will introduce improvements for mixed environments.

PowerScale OneFS 9.12

Dell PowerScale is already powering up the summer with the launch of the innovative OneFS 9.12 release, which shipped today (14th August 2025). This new 9.12 release has something for everyone, introducing PowerScale innovations in security, serviceability, reliability, protocols, and ease of use.

OneFS 9.12 represents the latest version of PowerScale’s common software platform for on-premises and cloud deployments. This can make it an excellent choice for traditional file shares and home directories, vertical workloads like M&E, healthcare, life sciences, financial services, plus generative and agentic AI, and other ML/DL and analytics applications.

PowerScale’s scale-out architecture can be deployed on-site, in co-lo facilities, or as customer-managed PowerScale for Amazon AWS and Microsoft Azure deployments, providing core to edge to cloud flexibility, plus the scale and performance needed to run a variety of unstructured workflows on-prem or in the public cloud.

With data security, detection, and monitoring being paramount in this era of unprecedented cyber threats, OneFS 9.12 brings an array of new features and functionality to keep your unstructured data and workloads more available, manageable, and secure than ever.

Protocols

On the S3 object protocol front, OneFS 9.12 sees the debut of new security and immutability functionality. S3 Object Lock extends the standard AWS S3 Object Lock model with PowerScale’s own ‘Bucket-Lock’ protection mode semantics. Object Lock capabilities can operate on a per-zone and per-bucket basis, using the cluster’s compliance clock for the date and time evaluation of an object’s retention. Additionally, S3 protocol access logging and bucket logging are also enhanced in this new 9.12 release.
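Since the implementation extends the standard AWS S3 Object Lock model, object retention can be exercised with a stock AWS SDK pointed at the cluster’s S3 endpoint. The following Python (boto3) sketch is illustrative only; the endpoint port, credentials, bucket, and key are assumptions, and PowerScale’s own ‘Bucket-Lock’ mode semantics are configured separately in OneFS:

import boto3
from datetime import datetime, timedelta, timezone

# Hypothetical PowerScale S3 endpoint and credentials - substitute real values.
s3 = boto3.client(
    "s3",
    endpoint_url="https://<cluster_ip_addr>:9021",
    aws_access_key_id="<access_key>",
    aws_secret_access_key="<secret_key>",
    verify=False,
)

# Apply a standard AWS-style retention period to an object in a lock-enabled bucket.
s3.put_object_retention(
    Bucket="locked-bucket",
    Key="reports/q3.pdf",
    Retention={
        "Mode": "COMPLIANCE",
        "RetainUntilDate": datetime.now(timezone.utc) + timedelta(days=30),
    },
)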

Networking

As part of PowerScale’s seamless protocol failover experience for customers, OneFS 9.12 sees SmartConnect’s default IP allocation method for new pools move to ‘dynamic’. While SMB2 and SMB3 are the primary focus, all protocols benefit from this enhancement, including SMB, NFS, S3, and HDFS. Legacy pools will remain unchanged upon upgrade to 9.12, but any new pools will automatically be provisioned as dynamic (unless manually configured as ‘static’).

Security

In the interests of increased security and ransomware protection, OneFS 9.12 includes new Secure Snapshots functionality. Secure Snapshots provide true snapshot immutability, as well as protection for snapshot schedules, in order to protect against alteration or deletion, either accidentally or by a malicious actor.

Secure snapshots are built upon Multi-party Authorization (MPA), also introduced in OneFS 9.12. MPA prevents an individual administrator from executing privileged operations, such as configuration changes on snapshots and snapshot schedules, by requiring two or more trusted parties to sign off on a requested change for the privileged actions within a PowerScale cluster.

OneFS 9.12 also introduces support for common access cards (CAC) and personal identity verification (PIV) smart cards, providing physical multi-factor authentication (MFA), allowing users to SSH to a PowerScale cluster using the same security badge that grants them access into their office. In addition to US Federal mandates, CAC/PIV integration is a requirement for many security conscious organizations across the public and private sectors.

Upgrade

One-click upgrades in OneFS 9.12 allow a cluster to automatically display and download available trusted upgrade packages from Dell Support, which can be easily applied via ‘one click installation’ from the OneFS WebUI or CLI. Upgrade package versions are automatically managed by Dell in accordance with a cluster’s telemetry data.

Support

OneFS 9.12 introduces an auto-healing capability, where the cluster detects problems using the HealthCheck framework and automatically executes a repair action for known issues and failures. This helps to increase cluster availability and durability, while reducing the time to resolution and the need for technical support engagements. Furthermore, additional repair-actions can be added at any point, outside of the general OneFS release cycle.

Hardware Innovation

On the platform hardware front, OneFS 9.12 also introduces an HDR Infiniband front-end connectivity option for the PowerScale PA110 performance and backup accelerator. Plus, 9.12 also brings a fast reboot enhancement to the high-memory PowerScale F-series nodes.

In summary, OneFS 9.12 brings the following new features and functionality to the Dell PowerScale ecosystem:

Area Feature
Networking ·         SmartConnect dynamic allocation as the default.
Platform ·         PowerScale PA110 accelerator front-end Infiniband support.

·         Conversion of front-end Ethernet to Infiniband support for F710 & F910.

·         F-series fast reboots.

Protocol ·         S3 Object Lock.

·         S3 Immutable SmartLock bucket for tamper-proof objects.

·         S3 protocol access logging.

·         S3 bucket logging.

Security ·         Multi-party authorization for privileged actions.

·         CAC/PIV smartcard SSH access.

·         Root lockdown mode.

·         Secure Snapshots with MPA override to protect data when retention period has not expired.

Support ·         Cluster-level inventory request API.

·         In-field support for back-end NIC changes.

Reliability ·         Auto Remediation self-diagnosis and healing capability.
Upgrade ·         One-click upgrade.

We’ll be taking a deeper look at OneFS 9.12’s new features and functionality in future blog articles over the course of the next few weeks.

Meanwhile, the new OneFS 9.12 code is available on the Dell Support site, as both an upgrade and reimage file, allowing both installation and upgrade of this new release.

For existing clusters running a prior OneFS release, the recommendation is to open a Service Request to schedule an upgrade. To provide a consistent and positive upgrade experience, Dell is offering assisted upgrades to OneFS 9.12 at no cost to customers with a valid support contract. Please refer to Knowledge Base article KB544296 for additional information on how to initiate the upgrade process.

ObjectScale 4.1

Hot off the press comes ObjectScale version 4.1 – a major release of Dell’s enterprise-grade object storage platform. As a foundational component of the Dell AI Data Platform, ObjectScale 4.1 delivers enhanced scalability, performance, and resilience that’s engineered to meet the evolving demands of AI-driven workloads and modern data ecosystems.

This release is available as a software upgrade for existing ECS and ObjectScale environments, and the core new features and functionality introduced in this ObjectScale 4.1 release include:

Storage Efficiency and Operational Experience

On the storage efficiency and operation experience front, ObjectScale 4.1 introduces support for multiple compression modes including LZ4, Zstandard, Deflate, and Snappy, configurable via both the UI and API. This flexibility allows admins to fine-tune compression strategies to balance performance, cost, and workload characteristics.

Post-upgrade to ObjectScale 4.1, the default algorithms are updated to LZ4 for AFA appliances (EXF900 and XF960) and Zstandard for HDD appliances (EX300, EX3000, EX500, EX5000, X560). Storage admins can change the algorithm at any time via the UI or API, based on workload or use case.

Improved garbage collection throughput enables faster reclamation of deleted capacity. Enhanced monitoring, alerting, and logging tools provide greater visibility into background processes, contributing to overall cluster stability.

An updated dashboard offers refined views of user, available, and reserved capacity. Automated alerts notify administrators when usage exceeds 90%, indicating a transition to Read-Only mode for the affected Virtual Data Center (VDC).

New port-level bandwidth controls for replication traffic allow for more predictable performance and optimized resource allocation across distributed environments.

Security and Data Protection

Within the security and data protection realm, ObjectScale now provides support for Self-Encrypting Drives (SEDs) with local key management via Dell iDRAC. This ensures hardware-level encryption and secure, appliance-local key handling for enhanced data protection.

TLS 1.3, the latest version of the Transport Layer Security protocol, is also supported in ObjectScale 4.1. This upgrade delivers stronger encryption, faster handshakes, and the removal of legacy algorithms, improving both control and data path security.
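
As a quick sanity check, TLS 1.3 negotiation can be verified from any host with a reasonably current OpenSSL build. The hostname and port below are illustrative placeholders only – confirm the S3 endpoint and HTTPS port for your own ObjectScale deployment:

# openssl s_client -connect objectscale.example.com:9021 -tls1_3 -brief < /dev/null

A successful handshake reports ‘Protocol version: TLSv1.3’ along with the negotiated cipher, whereas a failure here typically indicates that the endpoint, or an intervening load balancer, is still limited to an earlier protocol version.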

Expanded Capabilities for Modern Workloads

ObjectScale 4.1 now offers up to 3x faster object listing performance in multi-VDC environments. This enhancement improves data browsing and discovery, with better handling of deleted metadata and validation of Untrusted Listing Keys.

Through webhook-based APIs, ObjectScale can now push real-time notifications to external applications when events such as object creation, deletion, or modification occur—enabling responsive, event-driven architectures.

Support for S3FS in 4.1 allows users to mount S3 buckets on Linux systems as local file systems. This simplifies access and management, particularly for legacy applications that rely on traditional file system operations.
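
As an illustration, the open-source s3fs-fuse client can be used to mount a bucket in this way. The bucket name, mount point, endpoint, and credentials below are placeholder values, and the supported client versions and mount options should be confirmed against the ObjectScale 4.1 documentation:

# echo 'ACCESS_KEY:SECRET_KEY' > /root/.passwd-s3fs && chmod 600 /root/.passwd-s3fs

# mkdir -p /mnt/mybucket

# s3fs mybucket /mnt/mybucket -o url=https://objectscale.example.com:9021 -o passwd_file=/root/.passwd-s3fs -o use_path_request_style

Once mounted, the bucket’s objects appear as regular files under /mnt/mybucket, so legacy applications can operate on them using standard POSIX file system calls.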

On the integration front, ObjectScale 4.1 is compatible with the latest AWS SDK v2.29, so Java developers can immediately take advantage of new S3 features and performance fixes, and build cloud-native applications with full access to modern AWS features and APIs.

The following hardware platforms are supported by the new ObjectScale 4.1 release:

Gen 2 systems U480E, U400T, U400E, U4000, U400, U2800, U2000, D6200, D5600, D4500
Gen 3 systems EX3000, EX300, EXF900, EX5000, EX500
Gen 4 systems X560, XF960

Note that upgrading to ObjectScale 4.1 is only supported from ECS 3.8.x and 4.0.x releases.

In summary, ObjectScale 4.1 represents a strategic advancement in Dell’s commitment to delivering intelligent, secure, and scalable storage solutions for the AI era. Whether upgrading existing infrastructure or deploying new systems, this new 4.1 release empowers organizations to meet the challenges of data growth, complexity, and innovation with confidence.

OneFS SmartSync Backup-to-Object Management and Troubleshooting

As we saw in the previous articles in this series, SmartSync in OneFS 9.11 enjoys the addition of backup-to-object functionality, which delivers high performance, full-fidelity incremental replication to ECS, ObjectScale, Wasabi, and AWS S3 & Glacier IR object stores.

This new SmartSync backup-to-object functionality supports the full spectrum of OneFS path lengths, encodings, and file sizes up to 16TB – plus special files and alternate data streams (ADS), symlinks and hardlinks, sparse regions, and POSIX and SMB attributes.

In addition to the standard ‘isi dm’ command set, the following CLI utility can also come in handy for tasks such as verifying the dataset ID for restoration:

# isi_dm browse

For example, to query the SmartSync accounts and datasets:

# isi_dm browse

<no account>:<no dataset> $ list-accounts

000000000000000100000000000000000000000000000000 (tme-tgt)

ec2a72330e825f1b7e68eb2352bfb09fea4f000000000000 (DM Local Account)

fd0000000000000000000000000000000000000000000000 (DM Loopback Account)

<no account>:<no dataset> $ connect-account 000000000000000100000000000000000000000000000000

tme-tgt:<no dataset> $ list-datasets

1       2025-07-22T10:23:33+0000        /ifs/data/zone3

2       2025-07-22T10:23:33+0000        /ifs/data/zone4

1025    2025-07-22T10:25:01+0000        /ifs/data/zone3

2049    2025-07-22T10:30:04+0000        /ifs/data/zone4

tme-tgt:<no dataset> $ connect-dataset 2

tme-tgt:2 </ifs/data/zone4:> $ ls

home                           [dir]

zone2_sync1753179349           [dir]

tme-tgt:2 </ifs/data/zone4:> $ cd zone2_sync1753179349

tme-tgt:2 </ifs/data/zone4:zone2_sync1753179349/> $ ls

home                           [dir]

tme-tgt:2 </ifs/data/zone4:zone2_sync1753179349/> $

Or for additional detail:

tme-tgt:2 </ifs/data/zone4:zone2_sync1753179349/> $ settings output-to-file-on /tmp/out.txt

tme-tgt:2 </ifs/data/zone4:zone2_sync1753179349/> $ settings verbose-on

tme-tgt:2 </ifs/data/zone4:zone2_sync1753179349/> $ list-datasets

1       2025-07-22T10:23:33+0000        /ifs/data/zone3 { dmdi_tree_id={ dmdti_system_guid={dmg_guid=0060486e3954c1b470687f084aa83df6c07d} dmdti_local_unid=1 } dmdi_revision={ dmdr_system_guid={dmg_guid=0060486e3954c1b470687f084aa83df6c07d} dmdr_local_unid=1 } }

2       2025-07-22T10:23:33+0000        /ifs/data/zone4 { dmdi_tree_id={ dmdti_system_guid={dmg_guid=0060486e3954c1b470687f084aa83df6c07d} dmdti_local_unid=2 } dmdi_revision={ dmdr_system_guid={dmg_guid=0060486e3954c1b470687f084aa83df6c07d} dmdr_local_unid=2 } }

1025    2025-07-22T10:25:01+0000        /ifs/data/zone3 { dmdi_tree_id={ dmdti_system_guid={dmg_guid=0060486e3954c1b470687f084aa83df6c07d} dmdti_local_unid=1 } dmdi_revision={ dmdr_system_guid={dmg_guid=0060486e3954c1b470687f084aa83df6c07d} dmdr_local_unid=3 } }

2049    2025-07-22T10:30:04+0000        /ifs/data/zone4 { dmdi_tree_id={ dmdti_system_guid={dmg_guid=0060486e3954c1b470687f084aa83df6c07d} dmdti_local_unid=2 } dmdi_revision={ dmdr_system_guid={dmg_guid=0060486e3954c1b470687f084aa83df6c07d} dmdr_local_unid=4 } }

But when it comes to monitoring and troubleshooting SmartSync, there are a variety of diagnostic tools available. These include:

Component     Tools                                        Issue
Logging       /var/log/isi_dm.log                          General SmartSync info and triage.
              /var/log/messages
              /ifs/data/Isilon_Support/datamover/transfer_failures/baseline_failures_<jobid>
Accounts      isi dm accounts list/view                    Authentication, trust, and encryption.
CloudCopy     S3 Browser (e.g. CloudBerry)                 Cloud access and connectivity.
              Microsoft Azure Storage Explorer
Dataset       isi dm datasets list/view                    Dataset creation and health.
File system   isi get                                      Inspect replicated files and objects.
Jobs          isi dm jobs list/view                        Job and task execution, auto-pausing,
              isi_datamover_job_status -jt                 completion, control, and transfer.
Network       isi dm throttling bw-rules list/view         Network connectivity and throughput.
              isi_dm network ping/discover
Policies      isi dm policies list/view                    Copy and dataset policy execution and
              isi dm base-policies list/view               transfer.
Service       isi services -a isi_dm_d <enable/disable>    Daemon configuration and control.
Snapshots     isi snapshot snapshots list/view             Snapshot execution and access.
System        isi dm throttling settings                   CPU load and system performance.

SmartSync info and errors are typically written to /var/log/isi_dm.log and /var/log/messages, while DM job transfer failures generate a log specific to the job ID under /ifs/data/Isilon_Support/datamover/transfer_failures.
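
For example, to follow SmartSync activity in real time, filter for recent errors, and check whether any transfer failure logs have been generated (the grep pattern is just an illustrative filter):

# tail -f /var/log/isi_dm.log

# grep -iE 'error|failed' /var/log/isi_dm.log | tail -20

# ls -l /ifs/data/Isilon_Support/datamover/transfer_failures/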

Once a policy is running, the job status is reported via ‘isi dm jobs list’. Once complete, job histories are available by running ‘isi dm historical-jobs list’. More detail on a specific job can be gleaned from the ‘isi dm jobs view’ command, using the pertinent job ID from the list output above. Additionally, the ‘isi_datamover_job_status’ command, with the job ID as an argument, will also supply detailed information about a specific job.

Once running, a DM job can be further controlled via the ‘isi dm jobs modify’ command; the available actions include cancel, partial-completion, pause, and resume.
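
As a sketch of a typical monitoring and control sequence, assuming a hypothetical job ID of 12. Note that the ‘--action’ flag shown here is illustrative only – confirm the exact option names for pausing and resuming via ‘isi dm jobs modify --help’ on the cluster:

# isi dm jobs list

# isi dm jobs view 12

# isi_datamover_job_status 12

# isi dm jobs modify 12 --action pause

# isi dm jobs modify 12 --action resume

# isi dm historical-jobs list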

If a certificate authority (CA) is not correctly configured on a PowerScale cluster, the SmartSync daemon will not start, even though accounts and policies can still be configured. Be aware that the failed policies will not be reported via ‘isi dm jobs list’ or ‘isi dm historical-jobs list’ since they never started. Instead, an improperly configured CA is reported in the /var/log/isi_dm.log as follows:

Certificates not correctly installed, Data Mover service sleeping: At least one CA must be installed: No such file or directory from dm_load_certs_from_store (/b/mnt/src/isilon/lib/isi_dm/isi_dm_remote/src/rpc/dm_tls.cpp:197 ) from dm_tls_init (/b/mnt/src/isilon/lib/isi_dm/isi_dm_remote/src/rpc/dm_tls.cpp:279 ): Unable to load certificate information

Once a CA and identity are correctly configured, the SmartSync service automatically activates. Next, SmartSync attempts a handshake with the target. If the CA or identity is mis-configured, the handshake process fails, and generates an entry in /var/log/isi_dm.log. For example:

2025-07-30T12:38:17.864181+00:00 GEN-HOP-NOCL-RR-1(id1) isi_dm_d[52758]: [0x828c0a110]: /b/mnt/src/isilon/lib/isi_dm/isi_dm_remote/src/acct_mon.cpp:dm_acctmon_try_ping:348: [Fiber 3778] ping for account guid: 0000000000000000c4000000000000000000000000000000, result: dead

Note that the full handshake error detail is logged if the SmartSync service (isi_dm_d) is set to log at the ‘info’ or ‘debug’ level using isi_ilog:

# isi_ilog -a isi_dm_d --level info+

Valid ilog levels include:

fatal error err notice info debug trace

error+ err+ notice+ info+ debug+ trace+
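
With the log level raised to ‘info+’ or above, the handshake failure detail can then be pulled straight from the log. For example:

# grep -iE 'handshake|ping for account' /var/log/isi_dm.log | tail -20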

A copy or repeat-copy policy requires an available dataset for replication before it can run. If a dataset has not been successfully created for the same base path prior to the copy or repeat-copy policy job starting, the job is paused. In the following example, the base path of the copy policy does not match that of the dataset policy, so the job fails with a “path doesn’t match…” error.

# ls -l /ifs/data/Isilon_Support/datamover/transfer_failures

total 9

-rw-rw----   1 root  wheel  679  Jul 20 10:56 baseline_failure_10

# cat /ifs/data/Isilon_Support/datamover/transfer_failures/baseline_failure_10

Task_id=0x00000000000000ce, task_type=root task ds base copy, task_state=failed-fatal path doesn’t match dataset base path: ‘/ifs/test’ != ‘/ifs/data/repeat-copy’:

from bc_task_initialize_dsh (/b/mnt/src/isilon/lib/isi_dm/isi_dm/src/ds_base_copy

from dmt_execute (/b/mnt/src/isilon/lib/isi_dm/isi_dm/src/ds_base_copy_root_task

from dm_txn_execute_internal (/b/mnt/src/isilon/lib/isi_dm/isi_dm_base/src/txn.cp

from dm_txn_execute (/b/mnt/src/isilon/lib/isi_dm/isi_dm_base/src/txn.cpp:2274)

from dmp_task_spark_execute (/b/mnt/src/isilon/lib/isi_dm/isi_dm/src/task_runner.

Once any errors for a policy have been resolved, the ‘isi dm jobs modify’ command can be used to resume the job.

OneFS SmartSync Backup-to-Object Configuration

As we saw in the previous article in this series, SmartSync in OneFS 9.11 sees the addition of backup-to-object, which provides high performance, full-fidelity incremental replication to ECS, ObjectScale, Wasabi, and AWS S3 & Glacier IR object stores.

This new SmartSync backup-to-object functionality supports the full spectrum of OneFS path lengths, encodings, and file sizes up to 16TB – plus special files and alternate data streams (ADS), symlinks and hardlinks, sparse regions, and POSIX and SMB attributes. Specifically:

Copy-to-object (OneFS 9.10 & earlier):

·         One-time file system copy to object.

·         Baseline replication only, no support for incremental copies.

·         Browsable/accessible filesystem-on-object representation.

·         Certain object limitations:

o   No support for sparse regions and hardlinks.

o   Limited attribute/metadata support.

o   No compression.

Backup-to-object (OneFS 9.11):

·         Full-fidelity file system baseline & incremental replication to object:

o   Supports ADS, special files, symlinks, hardlinks, sparseness, POSIX/NT attributes, and encoding.

o   Any file size and any path length.

·         Fast incremental copies.

·         Compact file system snapshot representation in native cloud.

·         Object representation:

o   Grouped by target base-path in policy configuration.

o   Further grouped by Dataset ID and Global File ID.
SmartSync backup-to-object operates on user-defined datasets, which are essentially OneFS file system snapshots with additional properties.

A dataset creation policy takes a snapshot of a source path and turns it into a dataset, while copy and repeat-copy policies transfer that dataset to another system. Although the two policy types are linked, they are scheduled independently. For example, a dataset could be created every hour on a particular path, copied every hour to a hot DR cluster in data center A, and also copied every month to a deep-archive cluster in data center B. Because multiple copy policies can share the same dataset, and hence the same underlying snapshot, this flexibility does not increase snapshot bloat on the source cluster. A configuration sketch illustrating this is included below.
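
For instance, a single hourly dataset creation policy on a path such as /ifs/data/prod could feed two separate repeat-copy policies, one per target account. The policy names, path, and account placeholders below are purely illustrative, only a subset of the repeat-copy flags is shown (the full flag set appears in the repeat-copy example later in this article), and the scheduling options are omitted:

# isi dm policies create prod-ds-hourly --policy-type CREATION

# isi dm policies create prod-to-dr --policy-type='REPEAT_COPY' --repeat-copy-source-base-path=/ifs/data/prod --repeat-copy-base-target-account-id=[DR account id] --repeat-copy-base-target-dataset-type='FILE_ON_OBJECT_BACKUP' --repeat-copy-base-target-base-path=[DR bucket]

# isi dm policies create prod-to-archive --policy-type='REPEAT_COPY' --repeat-copy-source-base-path=/ifs/data/prod --repeat-copy-base-target-account-id=[Archive account id] --repeat-copy-base-target-dataset-type='FILE_ON_OBJECT_BACKUP' --repeat-copy-base-target-base-path=[Archive bucket]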

Currently, SmartSync does not have a WebUI presence, so all its configuration is either via the command-line or platform API.

Here’s the procedure for crafting a baseline replication configuration:

Essentially, create the replication account (in OneFS 9.11, either Dell ECS or AWS S3), then configure the dataset creation policy, run it, and, if desired, create and run a repeat-copy policy. The specific steps, with their CLI syntax, are as follows:

  1. Create a replication account:
# isi dm account create --account-type [AWS_S3 | ECS_S3]
  2. Configure a dataset creation policy:
# isi dm policies create [Policy Name] --policy-type CREATION
  3. Run the dataset creation policy:
# isi dm policies list

# isi dm policies modify [Creation policy id] --run-now=true

# isi dm jobs list

# isi dm datasets list
  4. Create a repeat-copy policy:
# isi dm policies create [Policy Name] --policy-type='REPEAT_COPY'
  5. Run the repeat-copy policy:
# isi dm policies list

# isi dm policies modify [Repeat-copy policy id] --run-now=true
  6. View the data replication job status:
# isi dm jobs list
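
As a worked illustration of steps 3 through 6, assuming the creation and repeat-copy policies were assigned the hypothetical IDs 8 and 9 respectively:

# isi dm policies list

# isi dm policies modify 8 --run-now=true

# isi dm jobs list

# isi dm datasets list

# isi dm policies modify 9 --run-now=true

# isi dm jobs list

The second ‘isi dm jobs list’ should show the repeat-copy job transferring the newly created dataset to the object target.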

The procedure for an incremental replication configuration is similar. Note that the dataset creation and repeat-copy policies were already created as part of the baseline configuration above, so those steps can be skipped – incremental replication simply re-runs the existing policies:

  1. Run the dataset creation policy:
# isi dm policies list

# isi dm policies modify [Creation policy id] --run-now=true

# isi dm jobs list

# isi dm datasets list
  2. Run the repeat-copy policy:
# isi dm policies list

# isi dm policies modify [Repeat-copy policy id] --run-now=true
  3. View the data replication incremental job status:
# isi dm jobs list

And here’s the basic procedure for creating and running a partial or full restore:

Note that the replication account already exists on the original cluster, in which case the account creation step can be skipped. Replication account creation is only required if restoring the dataset to a new cluster.

Additionally, a partial restoration involves a subset of the directory structure, specified via the source path, whereas a full restoration restores the entire dataset.

The process includes creating the replication account if needed, finding the ID of the dataset to be restored, creating and running the partial or full restoration policy, and checking the job status to verify that it ran successfully.

  1. Create a replication account (only if restoring to a new cluster):
# isi dm account create --account-type [AWS_S3 | ECS_S3]

For example:

# isi dm account create --account-type ECS_S3 --name [Account Name] --access-id [access-id] --uri [URI with bucket-name] --auth-mode CLOUD --secret-key [secret-key] --storage-class=[For AWS_S3 only: STANDARD or GLACIER_IR]
  2. Verify the dataset ID for restoration:
# isi_dm browse

Using the following isi_dm browse commands:

  • list-accounts
  • connect-account [Source Account ID created in step 1]
  • list-datasets
  • connect-dataset [Dataset id]
  3. Create a partial or full restoration policy:
# isi dm policies create [Policy Name] --policy-type='COPY'
  4. Run the partial or full restoration policy:
# isi dm policies modify [Restoration policy id] --run-now=true
  5. View the data restoration job status:
# isi dm jobs list
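
Once the restoration job reports as complete, the restored content can be spot-checked directly on the file system. For example, assuming a placeholder restore target of /ifs/restored/zone4:

# isi dm jobs list

# ls -l /ifs/restored/zone4

# isi get /ifs/restored/zone4

The ‘isi get’ output confirms that protection and attribute settings have been applied to the restored files as expected.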

OneFS 9.11 also introduces recovery point objective (RPO) alerts for SmartSync, but note that these apply to repeat-copy policies only. RPO alerts are configured on the replication policy by setting the desired time value via the ‘repeat-copy-rpo-alert’ parameter. If this configured threshold is exceeded, an RPO alert is triggered. The alert is automatically resolved after the next successful policy job run.

Also be aware that the default time value for a repeat-copy RPO is zero, which instructs SmartSync not to generate RPO alerts for that policy.

The following CLI syntax can be used to create a replication policy with the ‘--repeat-copy-rpo-alert’ flag set to the desired time:

# isi dm policies create [Policy Name] --policy-type='REPEAT_COPY' --enabled='true' --priority='NORMAL' --repeat-copy-source-base-path=[Source Path] --repeat-copy-base-base-account-id=[Source account id] --repeat-copy-base-source-account-id=[Source account id] --repeat-copy-base-target-account-id=[Target account id] --repeat-copy-base-new-tasks-account=[Source account id] --repeat-copy-base-target-dataset-type='FILE_ON_OBJECT_BACKUP' --repeat-copy-base-target-base-path=[Bucket Name] --repeat-copy-rpo-alert=[time]

And similarly to change the RPO alert configuration on an existing replication policy:

# isi dm policies modify [Policy id] --repeat-copy-rpo-alert=[time]

An alert is triggered, and a corresponding CELOG event created, if the specified RPO for the policy is exceeded. For example:

# isi event list

ID   Started     Ended       Causes Short                     Lnn  Events  Severity

--------------------------------------------------------------------------------------

1898 07/15 00:00 07/15 00:00 SW_CELOG_HEARTBEAT               1    1       information

2012 07/15 06:03 --          SW_DM_RPO_EXCEEDED               2    1       warning

--------------------------------------------------------------------------------------

And then, once the RPO alert has been resolved after a successful replication policy job run:

# isi event list

ID   Started     Ended       Causes Short                     Lnn  Events  Severity

--------------------------------------------------------------------------------------

1898 07/15 00:00 07/15 00:00 SW_CELOG_HEARTBEAT               1    1       information

2012 07/15 06:03 07/15 06:12 SW_DM_RPO_EXCEEDED               2    2       warning

--------------------------------------------------------------------------------------