OneFS CAVA Configuration and Management

In the previous article, we looked at an overview of CAVA on OneFS. Next, we’ll focus our attention on how to set it up. In a nutshell, the basic procedure for installing CAVA on a PowerScale cluster can be summarized as follows:

  • Configure CAVA servers in OneFS
  • Create an IP address pool
  • Establish a dedicated access zone for CAVA
  • Associate an Active Directory authentication provider with the access zone
  • Update the AV application’s user role

There are also a few pre-requisites to address before starting the installation, and these include:

Pre-requisite Description
SMB service Ensure the OneFS SMB service is enabled to allow AV applications to retrieve files from the cluster for scanning.
SmartConnect Service IP The SSIP should be configured at the subnet level. CAVA uses SmartConnect to balance scanning requests across all the nodes in the IP pool.
AV application and CEE Refer to the CEE installation and usage guide and the vendor’s AV application documentation.
Active Directory OneFS CAVA requires that both the cluster and the AV application reside in the same AD domain.
IP Addressing All connections from AV applications are served by a dedicated cluster IP pool. The best practice is to use exclusive IP addresses that are only available to the AV application.

 

  1. During CEE configuration, a domain user account is created for the Windows ‘EMC CAVA’ service. This account is used to access the hidden ‘CHECK$’ SMB share in order to retrieve the files for scanning. In the following example, the user account is ‘LAB\cavausr’.
  2. Once the anti-virus servers have been installed and configured, their corresponding CAVA entries are created on the cluster. This can be done via the following CLI syntax:
# isi antivirus cava servers create --server-name=av1 --server-uri=10.1.2.3 --enabled=1

Or from the WebUI:

Multiple CAVA servers may be added in order to meet the desired server ratio for a particular PowerScale cluster. The recommended sizing formula is:

CAVA servers = number of cluster nodes / 4

Before performing the following steps, ensure the CAVA ‘Service Enabled’ configuration option is set to ‘No’.

# isi antivirus cava settings view

       Service Enabled: No

     Scan Access Zones: System

               IP Pool: -

         Report Expiry: 8 weeks, 4 days

          Scan Timeout: 1 minute

Cloudpool Scan Timeout: 1 minute

     Maximum Scan Size: 0.00kB

 

  3. Next, an IP pool is created for the anti-virus applications to connect to the cluster. This dedicated IP pool should only be used by the anti-virus applications. As such, the recommendation is to ensure the IP ranges in this IP pool are exclusive and only available for use by the CAVA servers.

Avoid mixing the IP range in this dedicated IP pool with others for a regular SMB client connection.

The antivirus traffic is load balanced by the SmartConnect zone in this IP pool. Since this is a dedicated IP pool for CAVA servers, all the AV scanning should be evenly distributed within the pool. This can be accomplished with the following CLI syntax:

# isi network pools create groupnet0.subnet0.pool1 --ranges=10.1.2.3-10.1.2.13 --sc-dns-zone "cava1.lab.onefs.com" --ifaces=1:ext-1

In this example, the IP pool is ‘groupnet0.subnet0.pool1’, with address range ‘10.1.2.3 – 10.1.2.13’, the SmartConnect Zone name is ‘cava1.lab.onefs.com’, and the assigned network interface is node 1’s ext-1. Ensure the appropriate DNS delegation is created.

  4. Once the IP pool is created, it can be associated with the CAVA configuration via the following CLI command:
# isi antivirus cava settings modify --ip-pool="groupnet0.subnet0.pool1"

This action will make the IP Pool unavailable to all other users except antivirus servers. Do you want to continue? (yes/[no]): yes

IP Pool groupnet0.subnet0.pool1 added to CAVA antivirus.

Note: The access zone of IP Pool groupnet0.subnet0.pool1 has been changed to AvVendor.

Or from the WebUI:

Be sure to create the DNS delegation for the zone name associated with this IP pool.

At this point, the IP pool is associated with the ‘AvVendor’ access zone, and the IP pool is exclusively available to the CAVA servers.

  5. Next, the CAVA service can be enabled on the cluster. When it is, a dedicated access zone, ‘AvVendor’, associated with the IP pool is automatically created. The service is enabled via the following CLI command:
# isi antivirus cava settings modify --service-enabled=1

View the CAVA settings and verify that the ‘Service Enabled’ field is set to ‘Yes’:

# isi antivirus cava settings view

       Service Enabled: Yes

     Scan Access Zones: System

               IP Pool: groupnet0.subnet0.pool1

         Report Expiry: 8 weeks, 4 days

          Scan Timeout: 1 minute

Cloudpool Scan Timeout: 1 minute

     Maximum Scan Size: 0.00kB

Confirm that the ‘AvVendor’ access zone has been successfully created:

# isi zone zones list

Name     Path

--------------

System   /ifs

AvVendor /ifs

--------------

Total: 2
  6. If using Active Directory, OneFS CAVA requires the cluster and all the AV application servers to reside in the same AD domain.

The output of the following CLI command will display the cluster’s current authentication provider status:

# isi auth status

Evaluate which AD domain you wish to use for access. This domain should contain the account that will be used by the service on the CEE server to connect to the cluster.

If the cluster is not already joined to the desired AD domain, the following CLI syntax can be used to create an AD machine account for the cluster – in this example joining the ‘lab.onefs.com’ domain:

# isi auth ads create lab.onefs.com --user administrator

Note that a local user account can also be used in place of an AD account, if preferred.

  7. Next, the auth provider needs to be added to the ‘AvVendor’ access zone. This can be accomplished from either the WebUI or CLI. For example:
# isi zone zones modify AvVendor --add-auth-providers=lsa-activedirectory-provider:lab.onefs.com
  8. The AV software, running on a Windows server, accesses the cluster’s data via a hidden ‘CHECK$’ share. To grant access to the CHECK$ share, add the AV software’s user account to the role holding the ‘ISI_PRIV_AV_VENDOR’ privilege. For example, the following CLI command assigns the ‘LAB\cavausr’ user account to the ‘AvVendor’ role in the ‘AvVendor’ access zone:
# isi auth roles modify AvVendor --zone=AvVendor --add-user lab\\cavausr
  9. At this point, the configuration for the CAVA service on the cluster is complete. The following CLI syntax confirms that the ‘System Status’ is reported as ‘RUNNING’:
# isi antivirus cava status

System Status: RUNNING

Fault Message: -

CEE Version: 8.7.7.0

DTD Version: 2.3.0

AV Vendor: Symantec
  10. Some configuration is also required on the CAVA side. The existing documentation works fine for other products, but with PowerScale there are some integration points which are not obvious.

The CAVA Windows service should be modified to use the AD domain or local account that was created/used in step 6 above. This user account must be added to the ‘Local Administrators’ group on the CEE server, in order to allow the CAVA process to scan the system process list and find the AV engine process:

Note that the CAVA service requires a restart after reconfiguring the log-in information.

Also ensure that inbound TCP port 12228 is available, in the case of a firewall or other packet filtering device.

Note that, if using MS Defender, ensure the option for ‘Real Time Scan’ is set to ‘enabled’.

  11. Finally, the CAVA job can be scheduled to run periodically. In this case, the job ‘av1’ is configured to scan all of /ifs, including any CloudPools, daily at 11am, and with a ‘medium’ impact policy:
# isi antivirus cava jobs create av1 --schedule 'every day at 11:00' --impact MEDIUM --paths-to-include /ifs --enabled yes --scan-cloudpool-files yes

# isi antivirus cava jobs list

Name  Include Paths  Exclude Paths  Schedule                 Enabled

---------------------------------------------------------------------

av1   /ifs           -              every 1 days at 11:00 am Yes

---------------------------------------------------------------------

Total: 1

This can also be configured from the WebUI by navigating to Data protection > Antivirus > CAVA and clicking the ‘Add job’ button:

Additionally, CAVA antivirus filters can be managed per access zone for on-demand or protocol access using the ‘isi antivirus cava filters’ command, per below. Be aware that the ISI_PRIV_ANTIVIRUS privilege is required in order to manage CAVA filters.

# isi antivirus cava filters list

Zone   Enabled  Open-on-fail  Scan-profile  Scan Cloudpool Files

-----------------------------------------------------------------

System Yes      Yes           standard      No

zone1  Yes      Yes           standard      No

zone2  Yes      Yes           standard      No

zone3  Yes      Yes           standard      No

zone4  Yes      Yes           standard      No

-----------------------------------------------------------------

Total: 5




# isi antivirus cava filters view

Zone: System

Enabled: Yes

Open-on-fail: Yes

File Extensions: *

File Extension Action: include

Scan If No Extension: No

Exclude Paths: -

Scan-profile: standard

Scan-on-read: No

Scan-on-close: Yes

Scan-on-rename: Yes

Scan Cloudpool Files: No


Note that blocking access, repair, and quarantine are all deferred to the specific CAVA AV Vendor, and all decisions for these are made by the AV Vendor. This is not a configurable option in OneFS for CAVA AV.

OneFS and CAVA Antivirus Scanning

When it comes to antivirus protection, OneFS provides two options. The first, and legacy, solution uses ICAP (Internet Content Adaptation Protocol), which we featured in a previous blog article. The other, which debuted in OneFS 9.1, is the CAVA (Common AntiVirus Agent) solution. Typically providing improved performance over ICAP, CAVA employs a Windows server running third-party AV software to identify and eliminate known viruses before they infect files on the system.

OneFS CAVA leverages the Dell Common Event Enabler – the same off-cluster agent that’s responsible for OneFS audit – and which provides compatibility with prominent AV software vendors. These currently include:

Product Latest Supported Version
Computer Associates eTrust 6.0
F-Secure ESS 12.12
Kaspersky Security 10 for Windows Server 10.1.2
McAfee VirusScan 8.8i Patch13
McAfee Endpoint Protection 10.7.0 Update July 2020
Microsoft SCEP 4.10.209.0
Microsoft Defender 4.18.2004
Sophos Endpoint Security Control 10.8
Symantec Protection Engine 8.0
Symantec Endpoint Protection 14.2
TrendMicro ServerProtect for Storage 6.00 Patch 1

The Common Event Enabler, or CEE, agent resides on an off-cluster server, and OneFS sends HTTP requests to it as clients trigger the AV scanning workflow. The antivirus application then retrieves the pertinent files from the cluster via a hidden SMB share. These files are then checked by the AV app, after which CEE returns the appropriate response to the cluster.

OneFS Antivirus provides three different CAVA scanning methods:

AV Scan Type Description
Individual File Single file scan, triggered by the CLI/PAPI. Typically provides increased performance and reduced cluster CPU and memory utilization compared to ICAP.
On-access Triggered by SMB file operations (i.e. read and close), and dependent on scan profile:

  • Standard profile: Captures SMB close and rename operations and triggers a scan on the corresponding file.

  • Strict profile: Captures SMB read, close, and rename operations and triggers a scan on the corresponding file.

Policy Scheduled or manual directory tree-based scans executed by the OneFS Job Engine.
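As a quick illustrative sketch (not OneFS source code; the dictionary and function names are invented for this example), the profile-to-trigger mapping in the table above can be expressed as:

```python
# Hypothetical sketch: which SMB operations trigger a CAVA on-access scan
# under each scan profile, per the table above.

SCAN_TRIGGERS = {
    "standard": {"close", "rename"},          # standard profile triggers
    "strict": {"read", "close", "rename"},    # strict adds read
}

def should_scan(profile: str, smb_op: str) -> bool:
    """Return True if the given SMB operation triggers a scan for this profile."""
    return smb_op in SCAN_TRIGGERS.get(profile, set())
```

For example, a read under the standard profile does not trigger a scan, while the same read under the strict profile does.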

An individual file scan can be initiated from the CLI or WebUI. The scanning target files are manually selected, and the lwavscand daemon immediately sends an HTTP scanning request to the CEE/CAVA agent, at which point OneFS places a lock on the file and the AV app retrieves it via the hidden CHECK$ SMB share. After the corresponding content has been downloaded by the CAVA server, the AV engine performs a virus detection scan, after which CEE sends the results back to the OneFS lwavscand daemon and the file lock is released. All the scanning attributes are recorded under /ifs/.ifsvar/modules/avscan/isi_avscan.db.

For on-access scanning, depending on which scan profile is selected, any SMB read or close requests are captured by the OneFS I/O manager, which passes the details to the AVscan filter. This can be configured per access-zone, and filters by:

  • File extension to include
  • File extension to exclude
  • File path to exclude

For any files matching all the filtering criteria, the lwavscand daemon sends an HTTP scanning request to the CEE/CAVA agent. Simultaneously, OneFS locks the file and the AV app downloads a copy via the hidden SMB share (CHECK$). After the corresponding content is downloaded to the CAVA server, it runs the scan with the anti-virus engine, and CEE sends the scan results and response back to the lwavscand process. At this stage, some scanning attributes are written to the file and the lock is released. The scanning attributes are listed below:

  • Scan time
  • Scan Result
  • Anti-virus signature timestamp
  • Scan current

If the file is not infected, the interrupted SMB workflow then continues; otherwise, access to the file is denied. If errors occur during the scan and the scan profile is strict, the ‘Open on fail’ setting determines the next action.
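The on-access filter decision described above can be sketched as follows. This is a hypothetical illustration only; the function name and parameters are invented for the example, and real OneFS filter evaluation may differ in detail:

```python
from fnmatch import fnmatch

def matches_filter(path, include_exts, exclude_exts, exclude_paths,
                   scan_if_no_extension=False):
    """Sketch of the per-zone AVscan filter: a file qualifies for scanning
    only if it is not under an excluded path, its extension is not excluded,
    and its extension matches an include pattern."""
    # Path exclusions take effect first
    for prefix in exclude_paths:
        if path.startswith(prefix.rstrip("/") + "/"):
            return False
    # Extract the extension from the final path component
    name = path.split("/")[-1]
    ext = path.rsplit(".", 1)[-1].lower() if "." in name else ""
    if not ext:
        return scan_if_no_extension
    if any(fnmatch(ext, pat) for pat in exclude_exts):
        return False
    return any(fnmatch(ext, pat) for pat in include_exts)
```

This mirrors the default filter shown later in the article: ‘File Extensions: *’ with ‘File Extension Action: include’ and ‘Scan If No Extension: No’.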

Policy Scan

A policy scan is triggered by the job engine. Like other OneFS jobs, the job impact policy, priority, and schedule can be configured as desired. In this case, filtering includes:

  • File extension to include
  • File extension to exclude
  • File path to exclude
  • File path to include

There are two types of connections between CAVA servers and the cluster’s nodes. An SMB connection, which is used to fetch files or contents for scanning via the hidden CHECK$ share, and CEE’s HTTP connections for scan requests, scan responses, and other functions. CAVA SMB connections use a dedicated IP pool and access zone to separate traffic from other workloads. Within this IP pool, SmartConnect ensures all SMB connections are evenly spread across all the nodes.

The anti-virus applications use the SMB protocol to fetch a file, or a portion of a file, for scanning from a PowerScale cluster. From the anti-virus perspective, the hidden SMB share CHECK$ is used for this purpose, and it allows access to all files on a PowerScale cluster under /ifs. SmartConnect and a dedicated access zone are introduced in this process to ensure that all the connections from the anti-virus applications are fully distributed and load-balanced among the configured nodes in the IP pool. A hidden role, AVVendor, is created by the CAVA anti-virus service to map the CAVA service account to OneFS.

Upon completion of a scan, a report is available containing a variety of data and statistics. The details of scan reports are stored in a database under /ifs/.ifsvar/modules/avscan/isi_avscan.db and the scans can be viewed from either the CLI or WebUI. OneFS also generates a report every 24 hours that includes all on-access scans that occurred during the previous day. These antivirus scan reports contain the following information and metrics:

  • Scan start time.
  • Scan end time.
  • Number of files scanned.
  • Total size of the files scanned.
  • Scanning network bandwidth utilization.
  • Network throughput consumed by scanning.
  • Scan completion.
  • Infected file detection count.
  • Infected file names.
  • Threats associated with infected files.
  • OneFS response to detected threats.

CAVA is broadly compatible with the other OneFS data services, but with the following notable exceptions:

Data Service Compatibility
ICAP Compatible
Protocols SMB support
SmartLock Incompatible. OneFS cannot set scanning attributes on SmartLock WORM protected files as they are read-only. As such, AV application cannot clean them.
SnapshotIQ Incompatible
SyncIQ SyncIQ is unable to replicate AV scan file attributes to a target cluster.

When it comes to designing and deploying a OneFS CAVA environment, the recommendation is to adopt the following general sizing guidance:

  • Use Windows servers with at least 16 GB memory and two-core processors.
  • Provision a minimum of two CEE/CAVA servers for redundancy.

For the Common Event Enabler, connectivity limits and sizing rules include:

  • Maximum connections per CAVA server = 20
  • Number of different CAVA servers a cluster node can connect to = 4
  • The nth cluster node starts from the nth CAVA server

The formula is as follows:

Maximum connections = maximum connections per CAVA server × number of CAVA servers. For example, with five CAVA servers: 20 × 5 = 100.
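Assuming the CEE limit above, the connection ceiling can be computed with a trivial sketch (the constant and function names are illustrative, not part of any OneFS API):

```python
# CEE limit: each CAVA server accepts at most 20 connections.
MAX_CONNS_PER_CAVA_SERVER = 20

def max_connections(num_cava_servers: int) -> int:
    """Total CEE connection ceiling across a pool of CAVA servers."""
    return MAX_CONNS_PER_CAVA_SERVER * num_cava_servers
```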

Additionally, the following formula can help determine the appropriate number of CAVA servers for a particular cluster:

CAVA servers = number of cluster nodes / 4

For example, imagine a forty-two node PowerScale H700 cluster. Using the above formula, the recommended number of CAVA servers will be:

42 / 4 = 10.5

So, with upward rounding, eleven CAVA servers would be an appropriate ratio for this cluster configuration. Note that this approach, based on the cluster’s node count, is applicable for both on-access and policy-based scanning.
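The node-count formula, with the upward rounding applied in the example above, can be sketched as (an illustrative helper, not an OneFS utility):

```python
import math

def recommended_cava_servers(cluster_nodes: int) -> int:
    """CAVA servers = number of cluster nodes / 4, rounded up."""
    return math.ceil(cluster_nodes / 4)
```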

For environments where not all of a cluster’s nodes have network connectivity (NANON), CAVA is supported with the following caveats:

Scan Type Requirements
Individual file Supported when the node which triggers the scan has front-end connectivity to the CAVA servers. Otherwise, the CELOG ‘SW_AVSCAN_CAVA_SERVER_OFFLINE’ alert is fired.
Policy-based Works by default. OneFS 9.1 and later automatically detect any nodes without network connectivity and prevents the job engine from allocating scanning tasks to them. Only nodes with front-end connectivity to the CAVA servers will participate in running scheduled scans.
On-access Supported when the node which triggers the scan has front-end connectivity to the CAVA servers. Otherwise, the CELOG ‘SW_AVSCAN_CAVA_SERVER_OFFLINE’ alert is fired.

 

OneFS SmartLock Configuration and Management

In the previous article, we looked at the architecture of SmartLock. Now, we’ll turn our attention to its configuration, compatibility, and use.

The following CLI procedure can be used to commit a file into WORM state without having to remove the “write” permissions of end users using the chmod -w <file> command. This avoids re-enabling write permission after the files have been released from WORM retention. These commands are applicable for both Enterprise and Compliance SmartLock modes:

  1. Create and verify a new SmartLock domain. Note that if you specify the path of an existing directory, the directory must be empty. The following command creates an enterprise directory with a default retention period of two years, a minimum retention period of one year, and a maximum retention period of three years:
# isi worm domains create /ifs/smartlk --default-retention 2Y --min-retention 1Y --max-retention 3Y --mkdir

# isi worm domains list

ID     Path         Type

------------------------------

656128 /ifs/smartlk enterprise

------------------------------

Total: 1




# isi worm domains view 656128

               ID: 656128

             Path: /ifs/smartlk

             Type: enterprise

              LIN: 4760010888

Autocommit Offset: -

    Override Date: -

Privileged Delete: off

Default Retention: 2Y

    Min Retention: 1Y

    Max Retention: 3Y

   Pending Delete: False

       Exclusions: -


Alternatively, a WORM domain can also be configured from the WebUI, by navigating to Cluster management > SmartLock and clicking on the ‘Create domain’ button:

In addition to SmartLock Domains, OneFS also supports SnapRevert, SyncIQ, and writable snapshot domains.  A list of all configured domains on a cluster can be viewed with the following CLI syntax:

# isi_classic domain list -l

ID | Root Path     | Type           | Overrid | Def. | Min.  | Max.  | Autocomm | Priv.

---+---------------+----------------+---------+------+-------+-------+----------+------

65>| /ifs/sync1>   | SyncIQ         | None    | None | None  | None  | None     | Off

65>| /ifs/smartlk> | SmartLock      | None    | None | None  | None  | None     | Off

65>| /ifs/snap1    | Writable,Snap> | None    | None | None  | None  | None     | Off
  2. Next, create a file:
# date >> /ifs/smartlk/wormfile1
  3. View the file’s permission bits and confirm that the owner has write permission:
    # ls -lsia /ifs/smartlk
total 120

4760010888 32 drwx------     2 root  wheel   27 Feb  3 23:19 .

         2 64 drwxrwxr-x +   8 root  wheel  170 Feb  3 23:11 ..

4760931018 24 -rw-------     1 root  wheel   29 Feb  3 23:19 wormfile1
  4. Examine the wormfile1 file’s contents and verify that it has not been WORM committed:
# cat /ifs/smartlk/wormfile1

Thu Feb  3 23:19:09 GMT 2022


# isi worm files view !$

isi worm files view /ifs/smartlk/wormfile1

WORM Domains

ID     Root Path

-------------------

656128 /ifs/smartlk


WORM State: NOT COMMITTED

   Expires: -

  5. Commit the file into WORM. The ‘chmod’ CLI command can be used to manually commit a file with write permission into WORM state. For example:

# chmod a-w /ifs/smartlk/wormfile1

Or:

# chmod 444 /ifs/smartlk/wormfile1

The ‘chflags’ command can also be used:

# chflags dos-readonly /ifs/smartlk/wormfile1

Similarly, a writable file can be committed from an SMB client’s GUI by checking the ‘Read-only’ attribute within the file’s ‘Properties’ tab. For example:

  6. Verify the file is committed and the permission bits are preserved:
# isi worm files view /ifs/smartlk/wormfile1

WORM Domains

ID     Root Path

-------------------

656128 /ifs/smartlk




WORM State: COMMITTED

   Expires: 2024-02-03T23:23:45




# ls -lsia /ifs/smartlk

total 120

4760010888 32 drwx------     2 root  wheel   27 Feb  3 23:19 .

         2 64 drwxrwxr-x +   8 root  wheel  170 Feb  3 23:11 ..

4760931018 24 -rw-------     1 root  wheel   29 Feb  3 23:19 wormfile1

 

  7. Override the retention period expiration date for all WORM committed files in a SmartLock directory:
# isi worm domains modify /ifs/smartlk --override-date 2024-08-03

# isi worm domains view 656128

               ID: 656128

             Path: /ifs/smartlk

             Type: enterprise

              LIN: 4760010888

Autocommit Offset: -

    Override Date: 2024-08-03T00:00:00

Privileged Delete: off

Default Retention: 2Y

    Min Retention: 1Y

    Max Retention: 3Y

   Pending Delete: False

       Exclusions: -


  8. Create a new directory under the domain and configure it for exclusion from WORM:
# isi worm domains modify --exclude /ifs/smartlk/notwormdir1 656128

To remove an existing exclusion domain on a directory, first delete the directory and all of its constituent files.

  9. Verify that exclusion has been configured:
# isi worm domains view 656128

               ID: 656128

             Path: /ifs/smartlk

             Type: enterprise

              LIN: 4760010888

Autocommit Offset: -

    Override Date: 2024-08-03T00:00:00

Privileged Delete: off

Default Retention: 2Y

    Min Retention: 1Y

    Max Retention: 3Y

   Pending Delete: False

       Exclusions: /ifs/smartlk/notwormdir1

  10. Delete the file from its enterprise WORM domain before the expiration date via the privileged delete option:

# rm -f /ifs/smartlk/wormfile1

rm: /ifs/smartlk/wormfile1: Read-only file system

# isi worm files delete /ifs/smartlk/wormfile1

Are you sure? (yes/[no]): yes

Operation not permitted.  Please verify that privileged delete is enabled.

# isi worm domains modify /ifs/smartlk --privileged-delete true

# isi worm domains view /ifs/smartlk

               ID: 656128

             Path: /ifs/smartlk

             Type: enterprise

              LIN: 4760010888

Autocommit Offset: -

    Override Date: 2024-08-03T00:00:00

Privileged Delete: on

Default Retention: 2Y

    Min Retention: 1Y

    Max Retention: 3Y

   Pending Delete: False

       Exclusions: /ifs/smartlk/notwormdir1

# isi worm files delete /ifs/smartlk/wormfile1

Are you sure? (yes/[no]): yes

# ls -lsia /ifs/smartlk/wormfile1

ls: /ifs/smartlk/wormfile1: No such file or directory

 

  11. Delete the SmartLock domain.

For enterprise-mode domains, ensure the domain is empty first, then remove with ‘rmdir’:

# rmdir /ifs/smartlk/notwormdir1

# ls -lsia /ifs/smartlk

total 96

4760010888 32 drwx------     2 root  wheel    0 Feb  4 00:06 .

         2 64 drwxrwxr-x +   8 root  wheel  170 Feb  3 23:11 ..

# isi worm domains list

ID     Path         Type

------------------------------

656128 /ifs/smartlk enterprise

------------------------------

Total: 1

# rmdir /ifs/smartlk

# isi worm domains list

ID Path Type

------------

------------

Total: 0

Note that SmartLock’s ‘pending delete’ option can only be used for compliance-mode directories:

# isi worm domains modify --set-pending-delete 656128

You have 1 warnings:

Marking a domain for deletion is irreversible. Once marked for deletion:
  1. No new files may be created, hardlinked or renamed into the domain.
  2. Existing files may not be committed or have their retention dates extended.
  3. SyncIQ will fail to sync to and from the domain.
Are you sure? (yes/[no]): yes

Cannot mark non-compliance domains for deletion.

In the following table, the directory default retention offset is configured as one year for scenarios A and B. This means that any file committed to that directory without a specific expiry date (i.e. scenario A) will automatically inherit a one-year expiry from the date it’s committed. As such, WORM protection for any files committed on 2/1/2022 lasts until 2/1/2023, based on the default one-year setting. In scenarios B & C, the file retention date of 3/1/2023 takes precedence over the directory default retention offset period. In scenario D, the override retention date, configured at the directory level, ensures that all data in that directory is automatically protected through a minimum of 1/31/2023. This can be useful for organizations to satisfy litigation holds and other blanket data retention requirements.

                                 Scenario A      Scenario B      Scenario C      Scenario D
                                 No file-        File-retention  Directory       Override
                                 retention date  date >          offset >        retention
                                                 directory       file-retention  date
                                                 offset          date

File-retention date              N/A             3/1/2023        3/1/2023        3/1/2023
Directory-offset retention date  1 year          1 year          2 years         1 year
File-committed date              2/1/2022        2/1/2022        2/1/2022        2/1/2022
Expiration date                  2/1/2023        3/1/2023        3/1/2023        1/31/2023
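The precedence shown in scenarios A through C, together with the override’s “protected through at least this date” behavior described in the text, can be sketched as follows. The function is hypothetical, invented for illustration, and is not OneFS code:

```python
from datetime import date, timedelta
from typing import Optional

def expiration(committed: date, directory_offset: timedelta,
               file_retention: Optional[date] = None,
               override: Optional[date] = None) -> date:
    """An explicit file retention date takes precedence over the directory's
    default offset; an override retention date raises the result to at least
    that date."""
    exp = file_retention if file_retention else committed + directory_offset
    if override and override > exp:
        exp = override
    return exp
```

With a 365-day offset standing in for the one-year default, a file committed on 2/1/2022 with no file retention date expires on 2/1/2023 (scenario A), while an explicit 3/1/2023 retention date wins regardless of the offset (scenarios B and C).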

In general, SmartLock plays nicely with OneFS and the other data services. For example, SnapshotIQ can take snapshots of data in a WORM directory. Similarly, SmartLock retention settings are retained across NDMP backups, avoiding the need to recommit files after a data restore. Be aware, though, that NDMP backups of SmartLock Compliance data do not satisfy the regulatory requirements of SEC 17a-4(f).

For CloudPools, WORM protection of SmartLink stub files is permitted in OneFS 8.2 and later, but only in Enterprise mode. Stubs can be moved into an Enterprise mode directory, preventing their modification or deletion, as well as recalled from the cloud to the cluster once committed.

SyncIQ interop with SmartLock has more complexity, context, and caveats, and the compatibility between different directory types on the replication source and target can be characterized as follows:

Source dir Target dir SyncIQ failover SyncIQ failback
Non-WORM Non-WORM Yes Yes, unless files are WORM committed on target. Retention not enforced.
Non-WORM Enterprise Yes No
Non-WORM Compliance No Yes: But files do not have WORM status.
Enterprise Non-WORM Yes: Replication type allowed, but retention not enforced. Yes: Newly committed WORM files included.
Enterprise Enterprise Yes No
Enterprise Compliance No No
Compliance Non-WORM No No
Compliance Enterprise No No
Compliance Compliance Yes Yes: Newly committed WORM files are included.

When using SmartLock with SyncIQ replication, configure Network Time Protocol (NTP) peer mode on both the source and target cluster to ensure that cluster clocks are synchronized. Where possible, also run the same OneFS version across replication pairs and create a separate SyncIQ policy for each SmartLock directory.

OneFS SmartLock and WORM Data Retention

Amongst the plethora of OneFS data services sits SmartLock, which provides immutable data storage for the cluster, guarding critical data against accidental, malicious or premature deletion or alteration. Based on a write once, read many (WORM) locking capability, SmartLock offers tamper-proof archiving for regulatory compliance and disaster recovery purposes, etc. Configured at the directory-level, SmartLock delivers secure, simple to manage data containers that remain locked for a configurable duration or indefinitely. Additionally, SmartLock satisfies the regulatory compliance demands of stringent corporate and federal data retention policies.

Once SmartLock is licensed and activated on a cluster, it can be configured to run in one of two modes:

Mode Description
Enterprise Upon SmartLock license activation, cluster automatically becomes enterprise mode enabled, permitting Enterprise directory creation and committing of WORM files with specified retention period.
Compliance  A SmartLock licensed cluster can optionally be put into compliance mode, allowing data to be protected in compliance directories, in accordance with U.S. Securities and Exchange Commission rule 17a-4(f) regulations.

Note that SmartLock’s configuration is global, so enabling it in either enterprise or compliance mode is a cluster-wide action.

Under the hood, SmartLock uses both a system clock, which is common to both modes, and a compliance clock, which is exclusive to compliance mode. The latter updates the time in a protected system B-tree and, unlike the system clock, cannot be manually modified by either the ‘root’ or ‘compadmin’ roles.

SmartLock employs the OneFS job engine framework to run its WormQueue job, which routinely scans the SmartLock queue for LINs that need to be committed. By default, WormQueue is scheduled to run every day at 2am, with a ‘LOW’ impact policy and a relative job priority of ‘6’.

SmartLock also leverages the OneFS IFS domains ‘restricted writer’ infrastructure to enforce its immutability policies. Within OneFS, a domain defines a set of behaviors for a collection of files under a specified directory tree. More specifically, a protection domain is a marker which prevents a configured subset of files and directories from being deleted or modified.

If a directory has a protection domain applied to it, that domain will also affect all of the files and subdirectories under that top-level directory. A cluster’s WORM domains can be reported with the ‘isi worm domains list’.

As we’ll see, in some instances, OneFS creates protection domains automatically, but they can also be configured manually via the ‘isi worm domains create’ CLI syntax.
SmartLock domains are assigned to WORM directories to prevent committed files from being modified or deleted, and OneFS automatically sets up a SmartLock domain when a WORM directory is created. That said, domain governance does come with some boundary caveats. For example, SmartLock root directories cannot be nested, either in enterprise or compliance mode, and hard links cannot cross SmartLock domain boundaries. Note that a SmartLock domain cannot be manually deleted. However, upon removal of a top level SmartLock directory, OneFS automatically deletes the associated SmartLock domain.

Once a file is WORM committed, or SmartLocked, it is indefinitely protected from modification or moves. When its expiry, or ‘committed until’ date is reached, the only actions that can be performed on a file are either deletion or extension of its ‘committed until’ date.

Enterprise mode permits storing data in enterprise directories in a non-rewriteable, non-erasable format, protecting it from deletion or modification; enterprise directories can be created in both enterprise and compliance modes. If a file in an enterprise directory is committed to a WORM state, it is protected from accidental deletion or modification until its retention period has expired. In enterprise mode, you may also create regular directories, which are not subject to SmartLock’s retention requirements. A cluster operating in enterprise mode therefore provides the best of both worlds, delivering WORM security while retaining root access and full administrative control.

Any directory under /ifs is a potential SmartLock candidate, and it does not have to be empty before being designated as an enterprise directory. SmartLock and regular directories can happily coexist on the same cluster and once a directory is converted to a SmartLock directory, it is immediately ready to protect files that are placed there. SmartLock also automatically protects any subdirectories in a domain, and they inherit all the WORM settings of the parent directory – unless a specific exclusion is configured.

The following table indicates which type of files (and directories) can be created in each of the cluster modes:

Directory type | Enterprise mode | Compliance mode
Regular (non-WORM) directory | Y | Y
Enterprise directory (governed by system clock) | Y | Y
Compliance directory (governed by compliance clock) | N | Y

Both enterprise and compliance modes also permit the creation of non-WORM files and directories, which obviously are free from any retention requirements. Also, while an existing, empty enterprise directory can be upgraded to a compliance directory, it cannot be reverted back to an enterprise directory. Writes are permitted during a directory’s conversion to a SmartLock directory, but files cannot be committed until the transformation is complete.

Attribute | Enterprise directory | Compliance directory
Customizable file-retention dates | Y | Y
Post-commit write protection | Y | Y
SEC 17a-4(f) compliance | N | Y
Privileged delete | On / Off / Permanently disabled | Disabled
Tamper-proof clock | N | Y
Root account | Y | N
Compadmin account | N | Y

Regular users and apps are not permitted to move, delete, or change SmartLock-committed files. However, SmartLock includes the notion of a ‘privileged user’ with elevated rights (i.e. root access) and the ability to delete WORM-protected files. Privileged deletes can only be performed on the cluster itself, not over the network, which adds an additional layer of control over privileged functions. The privileged user exists only in enterprise mode, and a privileged delete can also be performed by non-root users that have been assigned the ‘ISI_PRIV_IFS_WORM_DELETE’ RBAC privilege. The privileged delete capability is disabled by default, but can easily be enabled for enterprise directories (note that there is no privileged delete for compliance directories). It may also be permanently disabled to guard against deletion or modification from admin accounts.
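As an illustrative sketch, a privileged delete is typically driven from the cluster CLI via the ‘isi worm files’ command set. The path here is hypothetical, and the exact syntax should be verified against the OneFS CLI reference for your release:

```
# Inspect a committed file's WORM state and retention expiry
isi worm files view /ifs/data/worm_ent/report.txt

# Privileged delete of a committed file (enterprise mode only; requires root
# or the ISI_PRIV_IFS_WORM_DELETE privilege, and must be run on the cluster)
isi worm files delete /ifs/data/worm_ent/report.txt
```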

Files in a SmartLock directory can be committed to a WORM state simply by removing their write permissions. A specific retention expiry date can be set on the file, which can be increased but not reduced. Any files that have been committed to a WORM state cannot be moved, even after their retention period has expired.

A file’s retention period expiration date can be set by modifying its access time (-atime), for example by using the CLI ‘touch’ command. However, note that simply accessing a file will not set the retention period expiration date.

If a file in a SmartLock directory is committed without a WORM release date having been specified, the retention period is automatically set to the default period specified for the SmartLock directory. If a default has not been set, the file is assigned a retention period of zero seconds. As such, the recommendation is clearly to specify a minimum retention period for all SmartLock directories.
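The mechanics can be sketched with standard Unix commands. Note that this is illustrative only: the directory below is a plain temp directory, whereas on a cluster the file would live under a SmartLock domain (somewhere under /ifs), where removing write permissions is what actually triggers the WORM commit:

```shell
# Create a scratch directory standing in for a SmartLock directory
WORMDIR=$(mktemp -d)
echo "quarterly results" > "$WORMDIR/report.txt"

# Set the desired retention expiry via the file's access time
# (touch -t format: [[CC]YY]MMDDhhmm); on OneFS this becomes the
# 'committed until' date when the file is committed
touch -at 203012310000 "$WORMDIR/report.txt"

# Removing write permissions is what commits the file to a WORM state
# when the parent directory is a SmartLock domain
chmod ugo-w "$WORMDIR/report.txt"

# The file is now read-only (mode 444)
stat -c %a "$WORMDIR/report.txt"
```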

After a directory has been marked as a SmartLock directory, any files committed within it are immutable until their retention time expires and cannot be deleted, moved, or changed. The administrator sets retention dates on files and can extend them, but not shorten them. When a file’s retention period expires, it becomes a normal file, which can be managed or removed as desired.

Any uncommitted files in a SmartLock directory can be altered or moved at will, up until they are WORM committed, at which point they become immutable. Files can be committed to a SmartLock directory either locally via the cluster CLI, or from an NFS or SMB client.

The OneFS CLI chmod command can also be used to commit a file into the WORM state without removing the write permissions. This option alleviates the need for cluster admins to re-enable the permissions for users to modify the files after they have been released from WORM retention.

Be aware that, when using the ‘cp -p’ command from the OneFS CLI to copy a file with a read-only flag into a WORM domain, the target file will immediately be committed to a WORM state. If this is undesirable, the read-only flag can be removed from the source files before copying them into WORM domains.
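The workaround can be sketched with generic shell commands, restoring write permission on a read-only source file before a permission-preserving copy. The paths are illustrative, with a temp directory standing in for a WORM domain:

```shell
SRC=$(mktemp -d); DST=$(mktemp -d)   # DST stands in for a WORM domain
echo "data" > "$SRC/file.txt"
chmod a-w "$SRC/file.txt"            # source carries a read-only mode (444)

# Restore owner write permission first, so the mode preserved by 'cp -p'
# on the target is writable and the copy is not immediately WORM-committed
chmod u+w "$SRC/file.txt"
cp -p "$SRC/file.txt" "$DST/file.txt"

stat -c %a "$DST/file.txt"
```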

File retention times can be set in two ways: on a per-file basis, or using the directory default setting, with the file’s retention date overriding the directory default retention date. For any existing WORM files, the retention date can be extended but not reduced. Even after their retention date has expired, WORM-protected files cannot be altered while they reside in a SmartLock directory. Instead, they must be moved to a non-WORM directory before modification is permitted. Note that, in an enterprise SmartLock directory with ‘privileged delete’ enabled, a WORM state file can still be removed within its retention period.

A directory-level ‘override retention date’ option is also available, which can be used to automatically extend the retention date of files. Note that any files in the directory whose retention dates were already set beyond the override date are unaffected by an override.
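As a hedged sketch, the override is set at the domain level; the ‘--override-date’ option and date format shown here should be verified against the OneFS CLI reference for your release:

```
# Extend the effective retention of all files in the domain to at least the
# given date (files already retained beyond that date are unaffected)
isi worm domains modify /ifs/data/worm_ent --override-date 2030-01-01
```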

OneFS 8.2 and later permit the exclusion of a directory inside a WORM domain from WORM retention policies and protection; any content subsequently created in the excluded directory is not SmartLock protected. OneFS 8.2 also introduced a ‘pending delete’ flag, which can be set on a compliance-mode WORM domain in order to delete the domain, along with the directories and files within it. Note that ‘pending delete’ cannot be configured on an enterprise-mode WORM domain.

Marking a domain for deletion is an irreversible action, after which no new files may be created, hard linked, or renamed into the domain. Existing files may not be committed or have their retention dates extended. Additionally, any SyncIQ replication tasks to and from the domain will fail.
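Marking a compliance-mode domain for deletion might look like the following sketch; the ‘--set-pending-delete’ option should be confirmed against the OneFS CLI reference, and recall that the action is irreversible:

```
# Irreversibly mark a compliance-mode WORM domain for deletion
isi worm domains modify /ifs/data/worm_comp --set-pending-delete
```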

In contrast to enterprise mode, and in order to maintain an elevated level of security and immutability, SmartLock compliance mode enforces some stringent administrative restrictions. Most notably, the root account (UID 0) loses its elevated privileges. Instead, clusters operating in compliance mode can use the ‘compadmin’ account to run some restricted commands with root privileges via ‘sudo’. These commands are specified in the /usr/local/etc/sudoers file.

Given the administrative restrictions of compliance mode and its potential to affect both compliance data and enterprise data, it is strongly advised to only use compliance mode if mandated to do so under SEC rule 17a-4(f). In most cases, enterprise mode with privileged delete disabled offers a level of security that is more than adequate for the vast majority of environments. Some fundamental differences between the two modes include:

Enterprise mode | Compliance mode
Governed by a single system clock. | Governed by both a system clock and a compliance clock.
Data committed to a WORM state only for the specified retention period; a WORM state file can have privileged delete capability within its retention period. | Data written to compliance directories, once committed, can never be altered.
Root/administrator access retains full administrative control. | Root/administrator access is disabled.

A directory cannot be upgraded to a SmartLock compliance directory until the WORM compliance clock has been set on the cluster. This can be done from the CLI using the ‘isi worm cdate set’ syntax. Be aware that setting the compliance clock is a one-time operation, after which it cannot be altered.
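For example (a sketch, assuming the cluster’s system time has already been verified as correct, since the compliance clock is initialized from it):

```
# Set the compliance clock from the current system time (one-time, irreversible)
isi worm cdate set

# Verify the compliance clock
isi worm cdate view
```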

The ComplianceStoreDelete job, introduced in OneFS 8.2, automatically tracks and removes expired files from the compliance store that were placed there as a result of SyncIQ conflict resolution. By default, this job is scheduled to run once per month at ‘low’ impact and priority level 6, but it can also be run manually on-demand.
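The job can be run on demand via the standard job engine CLI, for example:

```
# Kick off an on-demand run of the compliance store cleanup job
isi job jobs start ComplianceStoreDelete
```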

If the decision is made to configure a cluster for compliance mode, some tips to make the transition smoother include:

  • Verify that the cluster time is correct before putting the PowerScale cluster into compliance mode.
  • Plan on using RBAC for cluster access to perform administrative operations and data management, and be aware that, at the CLI, the ‘compadmin’ account represents a regular data user.
  • For any root-owned data, perform all ownership or permission changes before upgrading to compliance mode, and avoid changing ownership of any system files.
  • Review the permissions and ownership of any files that exclusively permit the root account to manage or write data to them. After upgrading to compliance mode, if the OneFS configuration limits the relevant POSIX access permissions to specific directories or files, writing data or changing ownership of these objects will be blocked.
  • Ensure that no SMB shares have ‘run-as-root’ configured before putting the PowerScale cluster into compliance mode.

In the next article in this series, we’ll take a look at SmartLock’s configuration, compatibility, and use.