OneFS Job Engine Architecture and Overview

Received several comments on the previous post requesting more background information on the Job Engine. So, over the next few blog articles, we’ll try to remedy this by delving into its architecture, idiosyncrasies, and operation.

The OneFS Job Engine runs across the entire cluster and is responsible for dividing and conquering large storage management and protection tasks. To achieve this, it breaks a task down into smaller work items and then allocates, or maps, these portions of the overall job to multiple worker threads on each node. Progress is tracked and reported throughout job execution, and a detailed report and status are presented upon completion or termination.

Job Engine includes a comprehensive check-pointing system which allows jobs to be paused and resumed, in addition to stopped and started. It also includes an adaptive impact management system, CPU and drive-sensitive impact control, and the ability to run multiple jobs at once.

The Job Engine typically executes jobs as background tasks across the cluster, using spare or specially reserved capacity and resources. The jobs themselves can be categorized into three primary classes:

Category Description
File System Maintenance Jobs These jobs perform background file system maintenance, and typically require access to all nodes. These jobs are required to run in default configurations, and often in degraded cluster conditions. Examples include file system protection and drive rebuilds.
Feature Support Jobs The feature support jobs perform work that facilitates some extended storage management function, and typically only run when the feature has been configured. Examples include deduplication and anti-virus scanning.
User Action Jobs These jobs are run directly by the storage administrator to accomplish some data management goal. Examples include parallel tree deletes and permissions maintenance.

Although the file system maintenance jobs are run by default, either on a schedule or in reaction to a particular file system event, any Job Engine job can be managed by configuring both its priority-level (in relation to other jobs) and its impact policy. These are covered in more detail below.

Job Engine jobs often comprise several phases, each of which is executed in a pre-defined sequence. These run the gamut from jobs like TreeDelete, which has just a single phase, to complex jobs like FlexProtect and MediaScan, which have multiple distinct phases.

A job phase must be completed in its entirety before the job can progress to the next phase. If any errors occur during execution, the job is marked “failed” at the end of that particular phase and is terminated.

Each job phase is composed of a number of work chunks, or Tasks. Tasks, which are comprised of multiple individual work items, are divided up and load balanced across the nodes within the cluster. Successful execution of a work item produces an item result, which might contain a count of the number of retries required to repair a file, plus any errors that occurred during processing.

When a Job Engine job needs to work on a large portion of the file system, there are four main methods available to accomplish this:

  • Inode (LIN) Scan
  • Tree Walk
  • Drive Scan
  • Changelist

The most straightforward access method is via metadata, using a Logical Inode (LIN) Scan. In addition to being simple to access in parallel, LINs also provide a useful way of accurately determining the amount of work required.

A directory tree walk is the traditional access method since it works similarly to common UNIX utilities, such as find – albeit in a far more distributed way. For parallel execution, the various job tasks are each assigned a separate subdirectory tree. Unlike LIN scans, tree walks may prove to be heavily unbalanced, due to varying sub-directory depths and file counts.

Disk drives provide excellent linear read access, so a drive scan can deliver orders of magnitude better performance than a directory tree walk or LIN scan for jobs that don’t require insight into file system structure. As such, drive scans are ideal for jobs like MediaScan, which linearly traverses each node’s disks looking for bad disk sectors.

A fourth class of Job Engine jobs utilize a ‘changelist’, rather than LIN-based scanning. The changelist approach analyzes two snapshots to find the LINs which changed (delta) between the snapshots, and then dives in to determine the exact changes.

The following table provides a comprehensive list of the exposed jobs and operations that the OneFS Job Engine performs, and their file system access methods:

Job Name Job Description Access Method
AutoBalance Balances free space in the cluster. Drive + LIN
AutoBalanceLin Balances free space in the cluster. LIN
AVScan Virus scanning job run in conjunction with external antivirus server(s). Tree
ChangelistCreate Creates a list of changes between two consecutive SyncIQ snapshots. Changelist
CloudPoolsLin Archives data out to a cloud provider according to a file pool policy. LIN
CloudPoolsTreewalk Archives data out to a cloud provider according to a file pool policy. Tree
Collect Reclaims disk space that could not be freed due to a node or drive being unavailable while they suffer from various failure conditions. Drive + LIN
ComplianceStoreDelete SmartLock Compliance mode garbage collection job. Tree
Dedupe Deduplicates identical blocks in the file system. Tree
DedupeAssessment Dry run assessment of the benefits of deduplication. Tree
DomainMark Associates a path and its contents with a domain. Tree
DomainTag Associates a path and its contents with a domain. Tree
EsrsMftDownload ESRS managed file transfer job for license files.  
FilePolicy Efficient SmartPools file pool policy job. Changelist
FlexProtect Rebuilds and re-protects the file system to recover from a failure scenario. Drive + LIN
FlexProtectLin Re-protects the file system. LIN
FSAnalyze Gathers file system analytics data that is used in conjunction with InsightIQ. Changelist
IndexUpdate Creates and updates an efficient file system index for the FilePolicy and FSAnalyze jobs. Changelist
IntegrityScan Performs online verification and correction of any file system inconsistencies. LIN
LinCount Scans and counts the file system logical inodes (LINs). LIN
MediaScan Scans drives for media-level errors. Drive + LIN
MultiScan Runs Collect and AutoBalance jobs concurrently. LIN
PermissionRepair Corrects permissions of files and directories. Tree
QuotaScan Updates quota accounting for domains created on an existing directory path. Tree
SetProtectPlus Applies the default file policy. This job is disabled if SmartPools is activated on the cluster. LIN
ShadowStoreDelete Frees space associated with a shadow store. LIN
ShadowStoreProtect Protects shadow stores that are referenced by a LIN with higher requested protection. LIN
ShadowStoreRepair Repairs shadow stores. LIN
SmartPools Job that runs and moves data between the tiers of nodes within the same cluster. Also executes the CloudPools functionality if licensed and configured. LIN
SmartPoolsTree Enforces SmartPools file policies on a subtree. Tree
SnapRevert Reverts an entire snapshot back to head. LIN
SnapshotDelete Frees disk space that is associated with deleted snapshots. LIN
TreeDelete Deletes a path in the file system directly from the cluster itself. Tree
Undedupe Removes deduplication of identical blocks in the file system. Tree
Upgrade Upgrades the cluster to a later OneFS release. Tree
WormQueue Scans the SmartLock LIN queue. LIN

Note that there are also a few background Job Engine jobs, such as the Upgrade job, which are not exposed to administrative control.
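The exposed job types can also be listed from the CLI with the same ‘isi job types’ command family used later in this article; the listing typically includes each job’s enabled state, impact policy and priority. A minimal example, with no additional flags assumed:

# isi job types list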

A job impact policy can consist of one or many impact intervals, which are blocks of time within a given week. Each impact interval can be configured to use a single, pre-defined impact-level which specifies the amount of cluster resources to use for a particular cluster operation. The available impact-levels are:

  • Paused
  • Low
  • Medium
  • High

This degree of granularity allows impact intervals and levels to be configured per job, in order to ensure smooth cluster operation. And the resulting impact policies dictate when a job runs and the resources that a job can consume. The following default job engine impact policies are provided:

Impact policy Schedule Impact Level
LOW Any time of day Low
MEDIUM Any time of day Medium
HIGH Any time of day High
OFF_HOURS Outside of business hours (9AM to 5PM, Mon to Fri), paused during business hours Low

Be aware that these default impact policies cannot be modified or deleted.

However, new impact policies can be created, either via the “Add an Impact Policy” WebUI button, or by cloning a default policy and then modifying its settings as appropriate.
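From the CLI, the default and any custom impact policies can typically be reviewed with the ‘isi job policies’ command set. The exact subcommands may vary by OneFS release, so treat the following as a sketch rather than a definitive reference:

# isi job policies list

# isi job policies view OFF_HOURS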

A mix of jobs with different impact levels will result in resource sharing. Each job cannot exceed the impact levels set for it, and the aggregate impact level cannot exceed the highest level of the individual jobs.

For example:

  • Job A (HIGH), job B (LOW): the total impact level of the two jobs combined is HIGH.

  • Job A (MEDIUM), job B (LOW), job C (MEDIUM): the total impact level of the three jobs combined is MEDIUM.

  • Job A (LOW), job B (LOW), job C (LOW), job D (LOW): since only three jobs can run at once, the job that was most recently queued/paused, or that has the highest job ID value, is paused. The total impact level of the three running jobs and one paused job combined is LOW.

A best practice is to keep the default impact and priority settings, where possible, unless there’s a valid reason to change them.

The majority of Job Engine jobs are intended to run with “LOW” impact and execute in the background. Notable exceptions are the FlexProtect jobs, which by default are set at “medium” impact. This allows FlexProtect to quickly and efficiently re-protect data, without critically impacting other user activities.

Job Engine jobs are prioritized on a scale of one to ten, with a lower value signifying a higher priority. This is similar in concept to the UNIX scheduling utility, ‘nice’.

Higher priority jobs always cause lower-priority jobs to be paused, and, if a job is paused, it is returned to the back of the Job Engine priority queue. When the job reaches the front of the priority queue again, it resumes from where it left off. If the system schedules two jobs of the same type and priority level to run simultaneously, the job that was queued first is run first.

Priority takes effect when two or more queued jobs belong to the same exclusion set, or when, if exclusion sets are not a factor, four or more jobs are queued. The fourth queued job may be paused, if it has a lower priority than the three other running jobs.

In contrast to priority, job impact policy only comes into play once a job is running and determines the amount of resources a job can utilize across the cluster. As such, a job’s priority and impact policy are orthogonal to one another.

The FlexProtect(LIN) and IntegrityScan jobs both have the highest job engine priority level of 1, by default. Of these, FlexProtect is the most important, because of its core role in re-protecting data.

All the Job Engine jobs’ priorities are configurable by the cluster administrator. The default priority settings are strongly recommended, particularly for the highest priority jobs mentioned above.
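Should a change genuinely be warranted, a job’s priority can be viewed and adjusted using the same ‘isi job types’ CLI syntax shown later in this article; the job name and priority value below are purely illustrative:

# isi job types view MediaScan

# isi job types modify MediaScan --priority 8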

As we saw in the previous blog article, the OneFS Job Engine allows up to three jobs to be run simultaneously. This concurrent job execution is governed by the following criteria:

  • Job Priority
  • Exclusion Sets – jobs which cannot run together (e.g., FlexProtect and AutoBalance)
  • Cluster health – most jobs cannot run when the cluster is in a degraded state.

OneFS Job Contention

Got asked the following question from the field recently:

“I kicked off a job on my cluster which seemed to be running happily. When I went back into the UI to check, it had stopped, was marked waiting, and wouldn’t restart.”

Situations like this occur when a running lower priority job is trumped by higher priority job(s). Since the OneFS Job Engine only allows up to three jobs to be run simultaneously, if a fourth job with a higher priority is started, the lowest priority of the currently executing jobs will be paused. For example:

# isi job start fsanalyze

Started job [583]

# isi job status

The job engine is running.

Running and queued jobs:

ID   Type       State Impact  Pri  Phase Running Time

---------------------------------------------------------

578  SmartPools Running Low     6   1/2   11s

581  Collect    Running Low     4   1/3   16s

583  FSAnalyze  Running Low     5   1/10  1s

---------------------------------------------------------

Total: 3



In this case, we have three jobs running: SmartPools with a priority of 6, Collect with priority 4, and FSAnalyze with priority 5.

Next, we go ahead and start a deduplication job, with a priority value of 4:

# isi job start dedupe

Started job [584]

If we now take a look at the cluster’s job status, we can see that the SmartPools job has been put into a waiting state (paused), because of its relative priority. A value of ‘1’ indicates the highest priority job level that OneFS supports, with ‘10’ being the lowest.

# isi job status

The job engine is running.

Running and queued jobs:

ID   Type       State Impact  Pri  Phase Running Time

---------------------------------------------------------

578  SmartPools Waiting Low     6  1/2   11s

581  Collect    Running Low     4  1/3   1m 4s

583  FSAnalyze  Running Low     5  9/10  43s

584  Dedupe     Running Low     4  1/1    -

---------------------------------------------------------

Total: 4



Once the FSAnalyze job has completed, the SmartPools job is automatically resumed:

# isi job status

The job engine is running.

Running and queued jobs:

ID   Type       State Impact  Pri  Phase Running Time

---------------------------------------------------------

578  SmartPools Running Low     6   1/2  23s

581  Collect    Running Low     4   1/3  5m 9s

584  Dedupe     Running Low     4   1/1  1m 2s

---------------------------------------------------------

Total: 3

Let’s look at this in a bit more detail. As mentioned above, the Job Engine’s concurrent job execution is governed by the following criteria:

  • Job Priority
  • Exclusion Sets – jobs which cannot run together (e.g., FlexProtect and AutoBalance)
  • Cluster health – most jobs cannot run when the cluster is in a degraded state.

In addition to the per-job impact controls described above, additional impact management is also provided by the notion of job exclusion sets. For multiple concurrent job execution, exclusion sets, or classes of similar jobs, determine which jobs can run simultaneously. A job is not required to be part of any exclusion set, and jobs may also belong to multiple exclusion sets.

Currently, there are two exclusion sets that jobs can be part of: restriping and marking.

Here’s a list of the basic concurrent job combinations that OneFS supports:

  • 1 Restripe Job + 1 Mark Job + 1 Other Job
  • 1 Restripe Job + 2 Other Jobs
  • 1 Mark Job + 2 Other Jobs
  • 1 Mark and Restripe Job + 2 Other Jobs
  • 3 Other Jobs

OneFS marks blocks that are actually in use by the file system. IntegrityScan, for example, traverses the live file system, marking every block of every LIN in the cluster to proactively detect and resolve any issues with the structure of data in a cluster. The jobs that comprise the marking exclusion set are:

  • Collect
  • IntegrityScan
  • MultiScan

OneFS protects data by writing file blocks across multiple drives on different nodes in a process known as ‘restriping’. The Job Engine defines a restripe exclusion set that contains these jobs which involve file system management, protection and on-disk layout. The restripe exclusion set contains the following jobs:

  • AutoBalance
  • AutoBalanceLin
  • FlexProtect
  • FlexProtectLin
  • MediaScan
  • MultiScan
  • SetProtectPlus
  • ShadowStoreProtect
  • SmartPools
  • Upgrade

The restriping exclusion set is per-phase instead of per job. This helps to more efficiently parallelize restripe jobs when they don’t need to lock down resources.

Restriping jobs only block each other when the current phase may perform restriping. This is most evident with MultiScan, whose final phase only sweeps rather than restripes. Similarly, MediaScan, which rarely restripes, is usually able to run to completion without contending with other restriping jobs.

For example, below the two restripe jobs, MediaScan and AutoBalanceLin, are both running their respective first job phases. ShadowStoreProtect, also a restriping job, is in a ‘waiting’ state, blocked by AutoBalanceLin.

Running and queued jobs:

ID    Type               State       Impact  Pri  Phase  Running Time

----------------------------------------------------------------------

26850 AutoBalanceLin     Running     Low     4    1/3    20d 18h 19m 

26910 ShadowStoreProtect Waiting     Low     6    1/1    -           

28133 MediaScan          Running     Low     8    1/8    1d 15h 37m  

----------------------------------------------------------------------

MediaScan restripes in phases 3 and 5 of the job, and only if there are disk errors (ECCs) which require data reprotection. If MediaScan reaches phase 3 with ECCs, it will pause until AutoBalanceLin is no longer running. If MediaScan’s priority were in the range 1-3, it would cause AutoBalanceLin to pause instead.

If two jobs happen to reach their restriping phases simultaneously and the jobs have different priorities, the higher priority job (ie. priority value closer to “1”) will continue to run, and the other will pause. If the two jobs have the same priority, the one already in its restriping phase will continue to run, and the one newly entering its restriping phase will pause.

Jobs may also belong to both exclusion sets. An example of this is MultiScan, since it includes both AutoBalance and Collect.

The majority of the jobs do not belong to an exclusion set, as illustrated in the following graphic. These are typically the feature support jobs, as described above, and they can coexist and contend with any of the other jobs.

Exclusion sets do not change the scope of the individual jobs themselves, so any runtime improvements via parallel job execution are the result of job management and impact control. The Job Engine monitors node CPU load and drive I/O activity per worker thread every twenty seconds to ensure that maintenance jobs do not cause cluster performance problems.

If a job affects overall system performance, Job Engine reduces the activity of maintenance jobs and yields resources to clients. Impact policies limit the system resources that a job can consume and when a job can run. You can associate jobs with impact policies, ensuring that certain vital jobs always have access to system resources.

Looking at our previous example again, where the SmartPools job is paused when FSAnalyze is running:

# isi job list

ID   Type       State Impact  Pri  Phase Running Time

---------------------------------------------------------

578  SmartPools Waiting Low     6  1/2    15s

581  Collect    Running Low     4  1/3    37m

584  Dedupe     Running Low     4  1/1    33m

586  FSAnalyze  Running Low     5  1/10   9s

---------------------------------------------------------

Total: 4


If this is undesirable, FSAnalyze can be manually paused, to allow SmartPools to run unimpeded:

# isi job pause FSAnalyze

# isi job list

ID   Type       State       Impact Pri  Phase  Running Time

-------------------------------------------------------------

578  SmartPools Waiting     Low     6   1/2    15s

581  Collect    Running     Low     4   1/4    38m

584  Dedupe     Running     Low     4   1/1    34m

586  FSAnalyze  User Paused Low     5   1/10   20s

-------------------------------------------------------------

Total: 4

Alternatively, the priority of the SmartPools job can also be elevated to value ‘4’ (or the priority of FSAnalyze lowered to value ‘7’) to permanently prioritize it over the FSAnalyze job. For example:

# isi job types modify SmartPools --priority 4

Are you sure you want to modify the job type SmartPools? (yes/[no]): yes

# isi job types view SmartPools

         ID: SmartPools

Description: Enforce SmartPools file policies. This job requires a SmartPools license.

    Enabled: Yes

     Policy: LOW

   Schedule: every day at 22:00

   Priority: 4

Or, via the WebUI:

Navigate to Job Operations > Job Types and edit the appropriate job type’s details to configure the desired priority value:

OneFS FilePolicy Job

Traditionally, OneFS has used the SmartPools jobs to apply its file pool policies. To accomplish this, the SmartPools job visits every file, and the SmartPoolsTree job visits a tree of files. However, the scanning portion of these jobs can result in significant random impact to the cluster and lengthy execution times, particularly in the case of the SmartPools job. To address this, OneFS also provides the FilePolicy job, which offers a faster, lower impact method for applying file pool policies than the full-blown SmartPools job.

But first, a quick Job Engine refresher…

As we know, the Job Engine is OneFS’ parallel task scheduling framework, and is responsible for the distribution, execution, and impact management of critical jobs and operations across the entire cluster.

The OneFS Job Engine schedules and manages all the data protection and background cluster tasks: creating jobs for each task, prioritizing them and ensuring that inter-node communication and cluster wide capacity utilization and performance are balanced and optimized.  Job Engine ensures that core cluster functions have priority over less important work and gives applications integrated with OneFS – Isilon add-on software or applications integrating to OneFS via the OneFS API – the ability to control the priority of their various functions to ensure the best resource utilization.

Each job, for example the SmartPools job, has an “Impact Profile” comprising a configurable impact policy and schedule, which characterizes how much of the system’s resources the job will take. The amount of work a job has to do is fixed, but the resources dedicated to that work can be tuned to minimize the impact to other cluster functions, like serving client data.

Here’s a list of the specific jobs that are directly associated with OneFS SmartPools:

Job Description
SmartPools Job that runs and moves data between the tiers of nodes within the same cluster. Also executes the CloudPools functionality if licensed and configured.
SmartPoolsTree Enforces SmartPools file policies on a subtree.
FilePolicy Efficient changelist-based SmartPools file pool policy job.
IndexUpdate Creates and updates an efficient file system index for FilePolicy job.
SetProtectPlus Applies the default file policy. This job is disabled if SmartPools is activated on the cluster.

In conjunction with the IndexUpdate job, FilePolicy improves job scan performance by using a ‘file system index’, or changelist, to find files needing policy changes, rather than a full tree scan.

Avoiding a full treewalk dramatically decreases the amount of locking and metadata scanning work the job is required to perform, reducing impact on CPU and disk – albeit at the expense of not doing everything that SmartPools does. The FilePolicy job enforces just the SmartPools file pool policies, as opposed to the storage pool settings. For example, FilePolicy does not deal with changes to storage pools or storage pool settings, such as:

  • Restriping activity due to adding, removing, or reorganizing node pools.
  • Changes to storage pool settings or defaults, including protection.
  • Packing small files into shadow store containers.

However, the majority of the time SmartPools and FilePolicy perform the same work. Disabled by default, FilePolicy supports the full range of file pool policy features, reports the same information, and provides the same configuration options as the SmartPools job. Since FilePolicy is a changelist-based job, it performs best when run frequently – once or multiple times a day, depending on the configured file pool policies, data size and rate of change.

The FilePolicy job can be invoked from the OneFS CLI with the following configuration options:

Config Option Details
--directory-only Skip processing of regular files.
--ingest Fast mode for quickly setting policies on directories. Alias for --directory-only --policy-only.
--nop Dry run; calculate what would be done.
--policy-only Apply policies but skip restriping.

Job schedules can easily be configured from the OneFS WebUI by navigating to Cluster Management > Job Operations, highlighting the desired job and selecting ‘View/Edit’. The following example illustrates configuring the IndexUpdate job to run every six hours at a LOW impact level with a priority value of 5:

When enabling and using the FilePolicy and IndexUpdate jobs, the recommendation is to continue running the SmartPools job as well, but at a reduced frequency (monthly).

In addition to running on a configured schedule, the FilePolicy job can also be executed manually.

FilePolicy requires access to a current index. If the IndexUpdate job has not yet been run, attempting to start the FilePolicy job will fail with an error message prompting you to run the IndexUpdate job first. Once the index has been created, the FilePolicy job will run successfully. The IndexUpdate job can be run several times daily (e.g. every six hours) to keep the index current and prevent the snapshots from getting large.
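For instance, a first-time manual run might look like the following, using the same ‘isi job jobs start’ syntax shown elsewhere in this article, with progress then tracked via ‘isi job status’:

# isi job jobs start IndexUpdate

# isi job jobs start FilePolicy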

Consider using the FilePolicy job with the job schedules below for workflows and datasets with the following characteristics:

  • Data with long retention times
  • Large number of small files
  • Path-based File Pool filters configured
  • Where FSAnalyze job is already running on the cluster (InsightIQ monitored clusters)
  • There is already a SnapshotIQ schedule configured
  • When the SmartPools job typically takes a day or more to run to completion at LOW impact

For clusters without the characteristics described above, the recommendation is to continue running the SmartPools job as usual and to not activate the FilePolicy job.

The following table provides a suggested job schedule when deploying FilePolicy:

Job Schedule Impact Priority
FilePolicy Every day at 22:00 LOW 6
IndexUpdate Every six hours, every day LOW 5
SmartPools Monthly – Sunday at 23:00 LOW 6

Since no two clusters are the same, this suggested job schedule may require additional tuning to meet the needs of a specific environment.
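As a rough CLI sketch of putting such a schedule in place: the ‘isi job types modify’ command appears earlier in this article with its ‘--priority’ option, whereas the ‘--schedule’ and ‘--policy’ flags (and the schedule string format) used below are assumptions that should be verified against the CLI help on your OneFS release:

# isi job types modify FilePolicy --schedule "every day at 22:00" --policy LOW --priority 6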

Note that when clusters running older OneFS versions and the FSAnalyze job are upgraded to OneFS 8.2.x or later, the legacy FSAnalyze index and snapshots are removed and replaced by new snapshots the first time that IndexUpdate is run. The new index stores considerably more file and snapshot attributes than the old FSA index. Until the IndexUpdate job effects this change, FSA keeps running on the old index and snapshots.

OneFS Re-protection and Cluster Expansion

Received the following question from the field, which seemed like it would make a useful blog article:

“I have an 8 node A2000 cluster with 12TB drives that has a recommended protection level of +2d:1n. I need to add capacity because the cluster is already 87% full so will be adding a new half chassis/node pair. According to the sizer at 10 nodes the cluster’s recommended protection level changes to +3d:1n1d. What’s the quickest way to go about this?”

Essentially, this boils down to whether the protection level should be changed before or after adding the new nodes.

The principal objective here is efficiency: limiting the amount of protection and layout work that OneFS has to perform. In this case, both node addition and a change in protection level require the cluster’s restriper to run. This entails two long-running operations: balancing data evenly across the cluster, and increasing the data protection.

If the new A2000s are added first, and then the cluster protection is changed to the recommended level for the new configuration, the process would look like:

1)   Add a new node pair to the cluster

2)   Let rebalance finish

3)   Configure the data protection level to +3d:1n1d

4)   Allow the restriper to complete the re-protection

However, by altering the protection level first, all the data restriping can be performed more efficiently and in a single step:

1)   Change the protection level setting to +3d:1n1d

2)   Add nodes (immediately after changing the protection level)

3)   Let rebalance finish

In addition to reducing the amount of work the cluster has to do, this streamlined process also has the benefit of getting data re-protected at the new recommended level more quickly.

OneFS protects and balances data by writing file blocks across multiple drives on different nodes. This process is known as ‘restriping’ in OneFS jargon. The Job Engine defines a restripe exclusion set that contains those jobs which involve file system management, protection and on-disk layout. The restripe set encompasses the following jobs:

Job Description
Autobalance(Lin) Balance free space in a cluster
FlexProtect(Lin) Scans file system after device failure to ensure all files remain protected
MediaScan Locate and clear media-level errors from disks
MultiScan Runs AutoBalance and Collect jobs concurrently
SetProtectPlus Applies the default file policy (unless SmartPools is activated)
ShadowStoreProtect Protect shadow stores
SmartPools Protects and moves data between tiers of nodes within cluster
Upgrade Manages OneFS version upgrades

Each of the Job Engine jobs has an associated restripe goal, which can be displayed with the following command:

# isi_gconfig -t job-config | grep restripe_goal

The different restriper functions operate as follows, where each function in the list is a superset of the previous:

The following table describes the action and layout goal of each restriper function:

Function Detail Goal
Retune Always restripe using the retune layout goal. Originally intended to optimize layout for performance, but has instead become a synonym for ‘force restripe’. LAYOUT_RETUNE
Rebalance Attempt to balance utilization between drives etc. Also address all conditions implied by REPROTECT. LAYOUT_REBALANCE
Reprotect Change the protection level to more closely match the policy if the current cluster state allows wider striping or more mirrors. Re-evaluate the disk pool policy and SSD strategy. Also address all conditions implied by REPAIR. LAYOUT_REPAIR
Repair Replaces any references to restripe_from (down/smartfailed) components. Also fixes recovered writes. LAYOUT_REPAIR

Here’s how the various Job Engine jobs (as reported by the isi_gconfig -t job-config command above) align with the four restriper goals:

The retune goal moves the current disk pool to the bottom of the list, increasing the likelihood (but not guaranteeing) that another pool will be selected as the restripe target. This is useful, for example, in the event of a significant drive loss in one of the disk pools that make up the node’s pool (eg. disk pool 4 suffers loss of 2+ drives and it becomes > 90% full). Using a retune goal more ‘quickly’ forces rebalance to the other pools.

So, an efficient approach to the earlier cluster expansion scenario is to change protection and then add the new node. A procedure for this is as follows:

  1. Reconfigure the protection level to the recommended setting for the appropriate node pool(s). This can be done from the WebUI by navigating to File System > Storage Pools > SmartPools and editing the appropriate node pool(s):

2. Especially for larger clusters (i.e. twenty nodes or more), or if there’s a mix of node hardware generations, it’s helpful to do some prep work upfront. Prior to adding node(s):

    1. Image any new node to the same OneFS version that the cluster is running.
    2. Ensure that any new nodes have the correct versions of node and drive firmware, plus any patches that may have been added, before joining to the cluster.
    3. If the nodes are from different hardware generations or configurations, ensure that they fit within the Node Compatibility requirements for the cluster’s OneFS version.

3. Set the Job Engine daemon to ‘disable’ an hour or so prior to adding new node(s) to help ensure a clean node join. This can be done with the following command:

# isi services -a isi_job_d disable

4. Add the new node(s) and verify the healthy state of the expanded cluster:

5. Confirm there are no un-provisioned drives:

# disi -I diskpools ls | grep -i "Unprovisioned drives"

6. Check that the node(s) joined the existing pools

# isi storagepool list

7. Restart the Job Engine:

# isi services -a isi_job_d enable

8. After adding all nodes, the recommendation for a cluster with SSDs is to run AutoBalanceLin with an impact policy of ‘LOW’ or OFF_HOURS. For example:

# isi job jobs start autobalancelin --policy LOW

9. To ensure the restripe is going smoothly, monitor the disk IO (‘DiskIn’ and ‘DiskOut’ counters) using the following command:

# isi statistics system -nall --oprates --nohumanize

Between 2,500 and 5,000 disk IOPS is a healthy range for nodes containing SSDs.

10. Additionally, cancelling MediaScan and/or MultiScan and pausing FSAnalyze will reduce resource contention and allow the AutoBalanceLin job to complete more efficiently.

# isi job jobs cancel mediascan

# isi job jobs cancel multiscan

# isi job jobs pause fsanalyze

Finally, it’s worth periodically running and reviewing the OneFS health check reports – especially before and after making configuration changes or adding new nodes to the cluster. These can be found and run from the WebUI by navigating to Cluster Management > HealthCheck > HealthChecks.

The OneFS Healthcheck diagnostic tool will help verify that the OneFS configuration is as expected and verify there are no cluster issues. The important checks to run include basic, job engine, cluster capacity, pre-upgrade, and performance.

OneFS Audit Log Purging

OneFS audit logs can grow quickly based on a customer’s audit configuration. As such, audit logs often need to be trimmed for space or for regulatory compliance purposes. OneFS 9.1 introduces the ability to automatically and non-disruptively purge both configuration and protocol audit log files. In releases prior to OneFS 9.1, there was no easy way to remove audit logs without stopping and restarting the audit service, and doing so involved a somewhat complex and potentially error-prone procedure whose steps had to be followed exactly, otherwise data unavailability could result. OneFS audit logs are stored as compressed binary files under /ifs/.ifsvar/audit/logs, and each log file can grow to a maximum of 1 GB.

Audit log purging provides a simple, efficient method to trim audit log entries. Log purging is applied to all nodes in the cluster, either automatically and periodically based on a retention policy, or manually. Configuration is via the OneFS CLI or platform API, and an automated purging policy is based on a retention period, specified in days. When run automatically, the log purger runs once an hour as a background process with minimal performance impact.

The OneFS 9.1 ‘isi audit settings’ command set enables the configuration and control of automatic log purging. The configuration parameters can be viewed with the following syntax:

# isi audit settings global view

Protocol Auditing Enabled: No

            Audited Zones: -

          CEE Server URIs: -

                 Hostname:

  Config Auditing Enabled: No

    Config Syslog Enabled: No

    Config Syslog Servers: -

  Protocol Syslog Servers: -

     Auto Purging Enabled: No

         Retention Period: 180


The following CLI command will enable automatic purging:

# isi audit settings global modify --auto-purging-enabled=yes

You are enabling the automatic log purging.

Automatic log purging will run in background to delete audit log files.

Please check the retention period before enabling automatic log purging.

Are you sure you want to do this?? (yes/[no]): yes

This change can be verified with the following syntax:

# isi audit settings global view | grep -i purging

     Auto Purging Enabled: Yes

Similarly, the retention period can also be configured from the OneFS CLI as follows. In this case, it’s being changed to 90 days from its default of 180 days:

# isi audit settings global modify --retention-period=90

# isi audit settings global view | grep -i retention

         Retention Period: 90

Manual deletion uses the same underlying mechanism as auto deletion, the only difference being that it is initiated manually. The isi audit logs delete CLI command can be used to purge the audit logs prior to a specified date. For example, the following syntax will manually purge the audit logs of entries prior to 1st April 2020:

# isi audit logs delete --before=2020-04-01

You are going to delete the audit logs before 2020-04-01.

Are you sure you want to do this?? (yes/[no]): yes

The purging request has been triggered.

The following CLI command can be run to monitor and verify the activity of the manual purging process:

# isi audit logs check

Purging Status:

             Using Before Value: 2020-04-01

Currently Manual Purging Status: COMPLETED

The removal of protocol or configuration audit logs for a particular time period can be verified with the OneFS audit event viewer command line utility. For example, the following syntax will display any protocol audit events between 1st January 2020 and 31st March 2020:

# isi_audit_viewer -t protocol -s "2020-01-01 12:00:00" -e "2020-03-31 12:00:00"

There are a couple of things to keep in mind when configuring and using audit logfile purging:

If syslog forwarding is enabled and is unable to forward quickly enough, it can block the purging process. This could result in un-purged audit log entries remaining, even if they fall outside of the retention period. The same bottlenecking challenge could also occur with the CEE forwarder.

Performance-wise, depending on the quantity of audit data to be purged, the initial deletion may be lengthy and somewhat resource intensive. However, subsequent deletions will typically be much faster and with negligible performance impact.

The following CLI commands can be useful in determining how logging in general is progressing on a cluster. The ‘isi_audit_progress’ syntax will display both last consumed and logged event timestamps. Note that the command is node-local, so it will need to be run with ‘isi_for_array’ to get a full cluster report:

# isi_for_array -s isi_audit_progress -t protocol CEE_FWD

tme-1:  Last consumed event time: '2022-12-02 10:16:24'

tme-1:  Last logged event time:   '2022-12-02 10:16:27'

tme-2:  Last consumed event time: '2022-12-02 10:16:19'

tme-2:  Last logged event time:   '2022-12-02 10:16:24'

tme-3:  Last consumed event time: '2022-12-02 10:16:31'

tme-3:  Last logged event time:   '2022-12-02 10:16:31'

Next, the following CLI command will confirm that nodes are happily forwarding to the CEE server(s):

# isi statistics query current list --keys=node.audit.cee.export.rate --nodes all

   Node  node.audit.cee.export.rate

-----------------------------------

      1                 107.200000

      2                  89.400000

      3                    94.100000

average                  96.900000

-----------------------------------

 

OneFS CELOG – Part 2

In the previous article in this series, we looked at an overview of CELOG – OneFS’ cluster event log and alerting infrastructure. For this blog post, we’ll focus on CELOG’s configuration and management.

CELOG’s CLI is integrated with OneFS’ RESTful platform API and roles-based access control (RBAC) and is based around the ‘isi event’ series of commands. These are divided into three main groups:

Command Group Description
Alerting Alerts: Manage rules for reporting on groups of correlated events. Channels: Manage channels for sending alerts.
Monitoring and Control Events: List and view events. Groups: Manage groups of correlated event occurrences. Test: Create test events.
Configuration Settings: Manage maintenance window, data retention and storage limits.

The isi event alerts command set allows for the viewing, creation, deletion, and modification of alert conditions. These are the rules that specify how sets of event groups are reported.

As such, alert conditions combine a set of event groups or event group categories with a condition and a set of channels (isi event channels). When any of the specified event group conditions are met, an alert fires and is dispatched via the specified channel(s).

An alert condition comprises:

  • The threshold or condition under which alerts should be sent.
  • The event groups and/or categories it applies to (there is a special value ‘all’).
  • The channels through which alerts should be sent.

The channels must already exist and cannot be deleted while in use by any alert conditions. Some alert conditions have additional parameters, including: NEW, NEW_EVENTS, ONGOING, SEVERITY_INCREASE, SEVERITY_DECREASE and RESOLVED.

An alert condition may also possess a duration, or ‘transient period’. If this is configured, then no event group which is active (ie. not resolved) for less than this period of time will be reported upon via the alert condition that specifies it.

Note: The same event group may be reported upon under other alert conditions that do not specify a transient period or specify a different one.

The following command creates an alert named ExternalNetwork, sets the alert condition to NEW, the source event group to ID 100010001 (SYS_DISK_VARFULL), the channel to TechSupport, sets the severity level to critical, and the maximum alert limit to 5:

# isi event alerts create ExternalNetwork NEW --add-eventgroup 100010001 --channel TechSupport --severity critical --limit 5

Or, from the WebUI by browsing to Cluster Management > Events and Alerts > Alerts:

Similarly, the following will add the event group ID 123456 to the ExternalNetwork alert, and only send alerts for event groups with critical severity:

# isi event alerts modify ExternalNetwork --add-eventgroup 123456 --severity critical

Channels are the routes via which alerts are sent, and include any necessary routing, addressing, node exclusion information, etc. The isi event channels command provides create, modify, delete, list and view options for channels.

The supported communication methods include:

  • SMTP
  • SNMP
  • ConnectEMC

The following command creates the channel ‘TechSupport’ used in the example above, and sets its type to ConnectEMC:

# isi event channels create TechSupport connectemc

Note that ESRS connectivity must be enabled prior to configuring a ‘connectemc’ channel.

Conversely, a channel can easily be removed with the following syntax:

# isi event channels delete TechSupport

Or from the WebUI by browsing to Cluster Management > Events and Alerts > Alerts:

For SMTP, a valid email server is required. This can be checked with the ‘isi email view’ command. If the “SMTP relay address” field is empty, this can be configured by running something along the lines of:

# isi email settings modify --mail-relay=mail.mycompany.com

The following syntax modifies the channel named TechSupport, changing the SMTP username to admin, and resetting the SMTP password:

# isi event channels modify TechSupport --smtp-username admin --smtp-password p@ssw0rd

SNMP traps are sent by running either ‘snmpinform’ or ‘snmptrap’ with appropriate parameters (agent, community, etc). To configure a cluster to send SNMP traps in order for a network monitoring system (NMS) to receive them, from the WebUI navigate to Dashboard > Events > Notification Rules > Add Rule and create a rule with Recipients = SNMP, and enter the correct values for ‘Community’ and ‘Host’ appropriate for your NMS.

The isi event events list command displays events with their ID, time of occurrence, severity, logical node number, event group and message. The events for a single event group occurrence can be listed using the --eventgroup-id parameter.
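For example, to list only the events belonging to a single event group occurrence (here using the event group ID from the detailed event view shown further below):

# isi event events list --eventgroup-id 4458074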

To identify the instance ID of the event that you want to view, run the following command:

# isi event events list

To view the details of a specific event, run the isi event events view command and specify the event instance ID. The following displays the details for an event with the instance ID of 6.201114:

# isi event events view 6.201114

           ID: 6.201114

Eventgroup ID: 4458074

   Event Type: 400040017

      Message: (policy name: cb-policy target: 10.245.109.130) SyncIQ encountered a filesystem error. Failure due to file system error(s): Could not sync stub file 103f40aa8: Input/output error

        Devid: 6

          Lnn: 6

         Time: 2020-10-26T12:23:14

     Severity: warning

        Value: 0.0

The list of event groups can be filtered by cause, begin (occurred after this time), end (occurred before this time), resolved, ignored or event count (event group occurrences with at least as many events as specified). By default only event group occurrences which are not ignored will be shown.

The configuration options are to set or revert the ignored status, and to set the resolved status. Be warned that an event group marked as resolved cannot be reverted.

For example, the following command modifies event group ID 10 to a status of ‘ignored’:

# isi event groups modify 10 --ignored true

Note: If desired, the isi event groups bulk command will set all event group occurrences to either ignored or resolved. Use sparingly!
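As a sketch only (the bulk command’s flag here is an assumption, modeled on the ‘modify’ syntax above, so check the command’s help output before use), marking every outstanding event group as ignored might look like:

# isi event groups bulk --ignored true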

The isi event settings view command displays the current values of all settings, whereas the modify command allows any of them to be reconfigured.

The configurable options include:

Config Option Detail
retention-days Retention period for data concerning resolved event groups in days
storage-limit The amount of cluster storage that CELOG is allowed to consume – measured in millionths of the total on the cluster (megabytes per terabyte of total storage). Values of 1-100 are allowed (up to one ten thousandth of the total storage), however there is a 1GB floor for small clusters.
maintenance-start

maintenance-duration

These two should always be used together to specify a maintenance period during which no alerts will be generated. This is intended to suppress alerts during periods of maintenance when they are likely to be false alarms.
heartbeat-interval CELOG runs a periodic (once daily, by default) self test by sending a heartbeat event from each node which is reported via the system ‘Heartbeat Self-Test’ channel. Any failures are logged in /var/log/messages.

The following syntax alters the number of days that resolved event groups are saved to 90, and increases the storage limit for event data to 5MB for every 1TB of total cluster storage:

# isi event settings modify --retention-days 90 --storage-limit 5

A maintenance window can be configured to discontinue alerts while performing maintenance on your cluster.

For example, the following command schedules a maintenance window that starts at 11pm on October 27, 2020, and lasts for one day:

# isi event settings modify --maintenance-start 2020-10-27T23:00:00 --maintenance-duration 1D

Maintenance periods and retention settings can also be configured from the WebUI by browsing to Cluster Management > Events and Alerts > Settings:

The isi event test command is provided in order to validate the communication path and confirm that events are getting transmitted correctly. The following generates a test alert with the message “Test msg from OneFS”:

# isi event test create "Test msg from OneFS"

Here are the log files that CELOG uses for its various purposes:

Log File Description
/var/log/isi_celog_monitor.log System monitoring and event creation
/var/log/isi_celog_capture.log First stop recording of events, attachment generation
/var/log/isi_celog_analysis.log Assignment of events to eventgroups
/var/log/isi_celog_reporting.log Evaluation of alert conditions and sending of alert requests
/var/log/isi_celog_alerting.log Sending of alerts
/var/log/messages Heartbeat failures
/ifs/.ifsvar/db/celog_alerting/<channel>/fail.log Failure messages from alert sending
/ifs/.ifsvar/db/celog_alerting/<channel>/sent.log Alerts sent via <channel>

These logs can be invaluable for troubleshooting the various components of OneFS events and alerting.

As mentioned previously, CELOG combines multiple events into a single event group. This allows an incident to be communicated and managed as a single, coherent issue. A similar process occurs for multiple instances of the same event. As such, the following deduplication rules apply to these broad categories of events, namely:

Event Category Descriptions
Repeating Events Events firing repeatedly before the elapsed time-out value will be condensed into a “… is triggering often” event
Sensor Multiple events by hardware sensors on a node within a given time-frame will be combined as a “hardware problems” event
Disk Multiple events generated by a specific disk will be coalesced into a logical “disk problems” event
Network Various issues may exist, depending on the type of connectivity problem between nodes of a cluster:

  • When a node cannot contact any other nodes in the cluster, each of its connection errors will be condensed into a “node cannot contact cluster” event.

  • When a node is not reachable by the rest of the cluster nodes, the cluster will combine the connection errors as a “cluster cannot reach node X” event.

  • When the cluster splits into chunks, each set of nodes will report connection errors coalesced as a “nodes X, Y, Z cannot contact nodes A, B, C” event.

  • When the cluster re-forms, events will again be combined into a single logical “cluster split into N groups: { A, B, C }, { X, Y, Z }, …” event.

  • When connectivity between all nodes is restored and the cluster has reformed, the events will be condensed into a single “All nodes lost internal connectivity” event.

Reboot If a node is unreachable even after a defined time elapses after a reboot, further connection errors will be coalesced as a “node did not rejoin cluster after reboot” event.

So CELOG is your cluster guardian – continuously monitoring the health and performance of the hardware, software, and services – and generating events when situations occur that might require your attention.

OneFS CELOG

The previous post about customizable CELOG alerts generated a number of questions from the field. So, over the course of the next couple of articles we’ll be reviewing the fundamentals of OneFS logging and alerting.

The OneFS Cluster Event Log (or CELOG) provides a single source for the logging of events that occur in an Isilon cluster. Events are used to communicate a picture of cluster health for various components. CELOG provides a single point from which notifications about the events are generated, including sending alert emails and SNMP traps.

Cluster events can be easily viewed from the WebUI by browsing to Cluster Management > Events and Alerts > Events. For example:

Or from the CLI, using the ‘isi event events view’ syntax:

# isi event events view 2.370158

           ID: 2.370158

Eventgroup ID: 271428

   Event Type: 600010001

      Message: The snapshot daemon failed to create snapshot 'Hourly - prod' in schedule 'Hourly @ Every Day': error: Name collision

        Devid: 2

          Lnn: 2

         Time: 2020-10-19T17:01:33

     Severity: warning

        Value: 0.0

In this instance, CELOG communicates on behalf of SnapshotIQ that it’s failed to create a scheduled hourly snapshot because of an issue with the naming convention.

At a high level, processes that monitor conditions on the cluster or log important events during the course of their operation communicate directly with the CELOG system. CELOG receives event messages from other processes via a well-defined API.

A CELOG event often contains the following elements:

Element Definition
Event Events are generated by the system and may be communicated in various ways (email, snmp traps, etc), depending upon the configuration.
Specifier Specifiers are strings containing extra information, which can be used to coalesce events and construct meaningful, readable messages.
Attachment Extra chunks of information, such as parts of log files or sysctl output, added to email notifications to provide additional context about an event.

For example, in the SnapshotIQ event above, we can see the event text contains a specifier and attachment that have been mostly derived from the corresponding syslog message:

# grep "Hourly - prod" /var/log/messages* | grep "2020-10-19T17:01:33"

2020-10-19T17:01:33-04:00 <3.3> a200-2 isi_snapshot_d[5631]: create_schedule_snapshot: snapshot schedule (Hourly @ Every Day) pattern created a snapshot name collision (Hourly - prod); scheduled create failed.

CELOG is a large, complex system, which can be envisioned as a large pipeline. It gathers events and statistics info on one end from isi_stats_d and isi_celog_monitor, plus directly from other applications such as SmartQuotas, SyncIQ, etc. These events are passed from one functional block to another, with a database at the end of the pipe. Along the way, attachments may be generated, notifications sent, and events passed to a coalescer.

On the front end, there are two dispatchers, which pass communication from the UNIX socket and network to their corresponding handlers. As events are processed, they pass through a series of coalescers. At any point they may be intercepted by the appropriate coalescer, which creates a coalescing event and which will accept other related events.

As events drop out the bottom of the coalescer stack, they’re deposited in add, modify and delete queues in the backend database infrastructure. The coalescer thread then moves onto pushing things into the local database, forwarding them along to the primary coalescer, and queueing events to have notifications sent and/or attachments generated.

The processes of safely storing events, analyzing them, deciding on what alerts to send and sending them are divided into four separate modules within the pipeline:

The following table provides a description of each of these CELOG modules:

Module Definition
Capture The first stage in the processing pipeline, Event Capture is responsible for reading event occurrences from the kernel queue, storing them safely on persistent local storage, generating attachments, and queueing them by priority for analysis.
Analysis The second stage in the processing pipeline, Event Analysis processes the captured events, correlating them into event groups and tracking their state for the later stages to query.
Reporter The Reporter is the third stage in the processing pipeline, and runs on only one node in the cluster. It periodically queries Event Analysis for changes and generates alert requests for any relevant conditions.
Alerter The Alerter is the final stage in the processing pipeline, responsible for actually delivering the alerts requested by the reporter. There is a single sender for each enabled channel on the cluster.

CELOG local and backend database redundancy ensures reliable event storage and guards against bottlenecks.

By default, OneFS provides the following event group categories, each of which contain a variety of conditions, or ‘event group causes’, which will trigger an event if their conditions are met:

Event Group Category Event Series Number
System disk events 1000*****
Node status events 2000*****
Reboot events 3000*****
Software events 4000*****
Quota events 5000*****
Snapshot events 6000*****
Windows networking events 7000*****
Filesystem events 8000*****
Hardware events 9000*****
CloudPools events 11000*****

Say, for example a chassis fan fails in one of a cluster’s nodes. OneFS will likely capture multiple hardware events. For instance:

  • Event # 90006003 related to the physical power supply
  • Event # 90020026 for an over-temperature alert

All the events relating to the fan failure will be represented in a single event group, which allows the incident to be communicated and managed as a single, coherent issue.
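For instance, the event groups on a cluster, and the events they contain, can be listed from the CLI along these lines (the group ID below is a placeholder, and exact output varies by release):

# isi event groups list

# isi event groups view <group_id>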

Detail on individual events can be viewed for each item. For example, the following event is for a drive firmware incompatibility.

Drilling down into the event details reveals the event number – in this case, event # 100010027:

OneFS events and alerts info is available online at the CELOG event reference guide.

The Event Help information will often provide an “Administrator Action” plan, which, where appropriate, provides troubleshooting and/or resolution steps for the issue.

For example, here’s the Event Help for snapshot delete failure event # 600010002:

The OneFS WebUI Cluster Status dashboard shows the event group info at the bottom of the page.

More detail and configuration can be found in the Events and Alerts section of the Cluster Management WebUI. This can be accessed via the “Manage event groups” link, or by browsing to Cluster Management > Events and Alerts > Events.

OneFS Customizable CELOG Alerts

Another feature enhancement introduced in the OneFS 9.1 release is customizable CELOG event thresholds. This functionality allows cluster administrators to customize the alerting thresholds for several filesystem capacity-based events. These configurable events and their default threshold values include:

These event thresholds can be easily set from the OneFS WebUI, CLI, or platform API. For configuration via the WebUI, browse to Cluster Management > Events and Alerts > Thresholds, as follows:

The desired event can be configured from the OneFS WebUI by clicking on the associated ‘Edit Thresholds’ button. For example, to lower the FILESYS_FDUSAGE event’s critical threshold from 95% to 92%:

Note that no two of an event’s thresholds can be set to the same value. Additionally, an informational threshold must be lower than the warning threshold, and a critical threshold must be higher than the warning threshold. For example:

Alternatively, event threshold configuration can be performed via the OneFS CLI ‘isi event thresholds’ command set. For example:

The list of configurable CELOG events can be displayed with the following CLI command:

# isi event thresholds list
ID ID Name
-------------------------------
100010001 SYS_DISK_VARFULL
100010002 SYS_DISK_VARCRASHFULL
100010003 SYS_DISK_ROOTFULL
100010015 SYS_DISK_POOLFULL
100010018 SYS_DISK_SSDFULL
600010005 SNAP_RESERVE_FULL
800010006 FILESYS_FDUSAGE
-------------------------------

Full details, including the thresholds, are shown with the addition of the ‘-v’ verbose flag:

# isi event thresholds list -v
ID: 100010001
ID Name: SYS_DISK_VARFULL
Description: Percentage at which /var partition is near capacity
Defaults: info (75%), warn (85%), crit (90%)
Thresholds: info (75%), warn (85%), crit (90%)
--------------------------------------------------------------------------------
ID: 100010002
ID Name: SYS_DISK_VARCRASHFULL
Description: Percentage at which /var/crash partition is near capacity
Defaults: warn (90%)
Thresholds: warn (90%)
--------------------------------------------------------------------------------
ID: 100010003
ID Name: SYS_DISK_ROOTFULL
Description: Percentage at which /(root) partition is near capacity
Defaults: warn (90%), crit (95%)
Thresholds: warn (90%), crit (95%)
--------------------------------------------------------------------------------
ID: 100010015
ID Name: SYS_DISK_POOLFULL
Description: Percentage at which a nodepool is near capacity
Defaults: info (70%), warn (80%), crit (90%), emerg (97%)
Thresholds: info (70%), warn (80%), crit (90%), emerg (97%)
--------------------------------------------------------------------------------
ID: 100010018
ID Name: SYS_DISK_SSDFULL
Description: Percentage at which an SSD drive is near capacity
Defaults: info (75%), warn (85%), crit (90%)
Thresholds: info (75%), warn (85%), crit (90%)
--------------------------------------------------------------------------------
ID: 600010005
ID Name: SNAP_RESERVE_FULL
Description: Percentage at which snapshot reserve space is near capacity
Defaults: warn (90%), crit (99%)
Thresholds: warn (90%), crit (99%)
--------------------------------------------------------------------------------
ID: 800010006
ID Name: FILESYS_FDUSAGE
Description: Percentage at which the system is near capacity for open file descriptors
Defaults: info (85%), warn (90%), crit (95%)
Thresholds: info (85%), warn (90%), crit (95%)

Similarly, the following CLI syntax can be used to display the existing thresholds for a particular event – in this case the SYS_DISK_VARFULL /var partition full alert:

# isi event thresholds view 100010001
         ID: 100010001
    ID Name: SYS_DISK_VARFULL
Description: Percentage at which /var partition is near capacity
   Defaults: info (75%), warn (85%), crit (90%)
 Thresholds: info (75%), warn (85%), crit (90%)

The following command will reconfigure the thresholds from the defaults of 75%|85%|90% to 70%|75%|85%:

# isi event thresholds modify 100010001 --info 70 --warn 75 --crit 85

# isi event thresholds view 100010001
         ID: 100010001
    ID Name: SYS_DISK_VARFULL
Description: Percentage at which /var partition is near capacity
   Defaults: info (75%), warn (85%), crit (90%)
 Thresholds: info (70%), warn (75%), crit (85%)

And finally, to reset the thresholds back to their default values:

# isi event thresholds reset 100010001
Are you sure you want to reset info, warn, crit from event 100010001?? (yes/[no]): yes

# isi event thresholds view 100010001
         ID: 100010001
    ID Name: SYS_DISK_VARFULL
Description: Percentage at which /var partition is near capacity
   Defaults: info (75%), warn (85%), crit (90%)
 Thresholds: info (75%), warn (85%), crit (90%)

Configuring OneFS SyncIQ Encryption

Unlike previous OneFS versions, SyncIQ is disabled by default in OneFS 9.1 and later. Once SyncIQ has been enabled by the cluster admin, a global encryption flag is automatically set, requiring all SyncIQ policies to be encrypted. Similarly, when a PowerScale cluster is upgraded to OneFS 9.1, the global encryption flag is also set. However, be aware that the flag is not enabled upon upgrade to OneFS 9.1 or later if the cluster already has existing SyncIQ policies configured.
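Once SyncIQ has been enabled, the state of the global encryption flag can be verified from the CLI, for example with something along these lines (the exact field names in the output may vary by release):

# isi sync settings view | grep -i encryption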

The following procedure can be used to configure SyncIQ encryption from the OneFS CLI:

  1. Ensure both source and target clusters are running OneFS 8.2 or later.
  2. Next, create X.509 certificates, one for each of the source and target clusters, each signed by a certificate authority:
Certificate Type Abbreviation
Certificate Authority <ca_cert_id>
Source Cluster Certificate <src_cert_id>
Target Cluster Certificate <tgt_cert_id>

These can be generated using publicly available tools, such as OpenSSL: http://slproweb.com/products/Win32OpenSSL.html.
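As an illustrative sketch (file names and subject names below are arbitrary examples, not OneFS requirements), a CA and a source cluster certificate might be generated with OpenSSL as follows, with the same steps repeated for the target cluster:

# openssl genrsa -out syncca.key 4096

# openssl req -x509 -new -key syncca.key -sha256 -days 3650 -subj "/CN=SyncIQ-CA" -out syncca.crt

# openssl genrsa -out source.key 2048

# openssl req -new -key source.key -subj "/CN=source-cluster" -out source.csr

# openssl x509 -req -in source.csr -CA syncca.crt -CAkey syncca.key -CAcreateserial -days 365 -sha256 -out source.crt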

  3. Add the newly created certificates to the appropriate source cluster stores. Each cluster gets the certificate authority’s certificate, its own certificate, and its peer’s certificate:
# isi sync certificates server import <src_cert_id> <src_key>

# isi sync certificates peer import <tgt_cert_id>

# isi cert authority import <ca_cert_id>
  4. On the source cluster, set the SyncIQ cluster certificate:
# isi sync settings modify --cluster-certificate-id=<src_cert_id>
  5. Add the certificates to the appropriate target cluster stores:
# isi sync certificates server import <tgt_cert_id> <tgt_key>

# isi sync certificates peer import <src_cert_id>

# isi cert authority import <ca_cert_id>
  6. On the target cluster, set the SyncIQ cluster certificate:
# isi sync settings modify --cluster-certificate-id=<tgt_cert_id>
  7. A global option is available in OneFS 9.1 that requires all incoming and outgoing SyncIQ policies to be encrypted. Be aware that executing this command impacts any existing SyncIQ policies that do not have encryption enabled: such policies will fail. Only execute this command once all existing policies have encryption enabled. To enable this option, run the following command:
# isi sync settings modify --encryption-required=True
  8. On the source cluster, create an encrypted SyncIQ policy:
# isi sync policies create <pol_name> sync <src_dir> <target_ip> <tgt_dir> --target-certificate-id=<tgt_cert_id>

Or modify an existing policy on the source cluster:

# isi sync policies modify <pol_name> --target-certificate-id=<tgt_cert_id>
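Once the policy has been created or modified, its certificate association can be sanity checked by viewing the policy, for example (the output fields vary by release):

# isi sync policies view <pol_name>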

OneFS 9.1 also facilitates SyncIQ encryption configuration via the OneFS WebUI, in addition to the CLI. For the source, server certificates can be added and managed by navigating to Data Protection > SyncIQ > Settings and clicking on the ‘add certificate’ button:

And certificates can be imported onto the target cluster by browsing to Data Protection > SyncIQ > Certificates and clicking on the ‘add certificate’ button. For example:

So that’s what’s required to get encryption configured across a pair of clusters. There are several additional, optional encryption configuration parameters available. These include:

  • Updating the policy to use a specified SSL cipher suite (a concrete illustration follows this list):
# isi sync policies modify <pol_name> --encryption-cipher-list=<suite>
  • Configuring the target cluster to check the revocation status of incoming certificates:
# isi sync settings modify --ocsp-address=<address> --ocsp-issuer-certificate-id=<ca_cert_id>
  • Modifying how frequently encrypted connections are renegotiated on a cluster:
# isi sync settings modify --renegotiation-period=24H
  • Requiring that all incoming and outgoing SyncIQ policies are encrypted:
# isi sync settings modify --encryption-required=True
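As a concrete illustration of the cipher-suite option above, a policy could be restricted to a pair of standard OpenSSL cipher names (these particular suites are examples only, not a OneFS recommendation):

# isi sync policies modify <pol_name> --encryption-cipher-list="ECDHE-RSA-AES256-GCM-SHA384:ECDHE-RSA-AES128-GCM-SHA256"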

To troubleshoot SyncIQ encryption, first check the reports for the SyncIQ policy in question; the reason for the failure should be indicated in the report. If the issue was due to a TLS authentication failure, the error message from the TLS library will also be provided in the report. More detailed information can often be found in /var/log/messages on the source and target clusters (example commands follow the list below), including:

  • ID of the certificate that caused the failure.
  • Subject name of the certificate that caused the failure.
  • Depth at which the failure occurred in the certificate chain.
  • Error code and reason for the failure.
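For example, the policy’s reports and the relevant log entries might be examined along these lines (the policy name and job ID are placeholders, and the exact report syntax may vary by release):

# isi sync reports list

# isi sync reports view <policy_name> <job_id>

# grep -i <policy_name> /var/log/messages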

Before enabling SyncIQ encryption, be aware of the potential performance implications. While encryption adds only minimal overhead to the transmission, it may still negatively impact a production workflow. Be sure to test encrypted replication in a lab environment that emulates the production environment before deploying it in production.

Note that both the source and target cluster must be upgraded and committed to OneFS 8.2 or later, prior to configuring SyncIQ encryption.

In the event that SyncIQ encryption needs to be disabled, be aware that this can only be performed via the CLI and not the WebUI:

# isi sync settings modify --encryption-required=false

If encryption is disabled under OneFS 9.1, the following warnings will be displayed when creating a SyncIQ policy.

From the WebUI:

And via the CLI:

# isi sync policies create pol2 sync /ifs/data 192.168.1.2 /ifs/data/pol1

********************************************

WARNING: Creating a policy without encryption is dangerous.

Are you sure you want to create a SyncIQ policy without setting encryption?

Your data could be vulnerable without encrypted protection.

Type ‘confirm create policy’ to proceed.  Press enter to cancel:

OneFS SyncIQ and Encrypted Replication

SyncIQ encryption, available since OneFS 8.2 and required by default in OneFS 9.1, is integral to protecting data in flight during inter-cluster replication over the WAN. This helps prevent man-in-the-middle attacks, mitigating remote replication security concerns and risks.

SyncIQ encryption helps to secure data transfer between OneFS clusters, benefiting customers who undergo regular security audits and/or government regulations.

  • SyncIQ policies support end-to-end encryption for cross-cluster communications.
  • Certificates are easy to manage with the SyncIQ certificate store.
  • Certificate revocation is supported through the use of an external OCSP responder.
  • Clusters can be configured to require that all incoming and outgoing SyncIQ policies be encrypted via a simple change in the SyncIQ global settings.

SyncIQ encryption relies on public key cryptography, using a public and private key pair to encrypt and decrypt replication sessions. These keys are mathematically related: data encrypted with one key is decrypted with the other key, confirming the identity of each cluster. SyncIQ uses the common X.509 Public Key Infrastructure (PKI) standard, which defines certificate requirements.

A Certificate Authority (CA) serves as a trusted third party, which issues and revokes certificates. Each cluster’s certificate store holds the CA certificate, its own certificate, and the peer’s certificate, establishing a trusted ‘passport’ mechanism.
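As a simple illustration of this trust relationship, a peer certificate can be validated against the CA using a standard OpenSSL utility (the file names here follow the earlier arbitrary example):

# openssl verify -CAfile syncca.crt source.crt
source.crt: OK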

A SyncIQ job can attempt either an encrypted or unencrypted handshake:

Under the hood, SyncIQ utilizes TLS protocol version 1.2 and OpenSSL version 1.0.2o. Customers are responsible for creating their own X.509 certificates, and SyncIQ peers must store each other’s end entity certificates. A TLS authentication failure will cause the corresponding SyncIQ job to immediately fail, and a CELOG event notifies the user of a SyncIQ encryption failure.

On the source cluster, the SyncIQ job’s coordinator process passes the target cluster’s public certificate to its primary worker (pworker) processes. The target monitor and sworker threads receive a list of approved source cluster certificates. The pworkers can then establish secure connections with their corresponding sworkers (secondary workers).

SyncIQ traffic encryption is enabled on a per-policy basis. The CLI includes the ‘isi certificate’ and ‘isi sync certificates’ commands for the configuration of TLS certificates:

# isi cert -h
Description:
    Configure cluster TLS certificates.

Required Privileges:
    ISI_PRIV_CERTIFICATE

Usage:
    isi certificate <subcommand>
        [--timeout <integer>]
        [{--help | -h}]

Subcommands:
  Certificate Management:
    authority    Configure cluster TLS certificate authorities.
    server       Configure cluster TLS server certificates.
    settings     Configure cluster TLS certificate settings.

The following policy configuration fields are included:

Config Field Detail
--target-certificate-id <string> The ID of the target cluster certificate being used for encryption.
--ocsp-issuer-certificate-id <string> The ID of the certificate authority that issued the certificate whose revocation status is being checked.
--ocsp-address <string> The address of the OCSP responder to which to connect.
--encryption-cipher-list <string> The cipher list used for encryption. For SyncIQ targets, this list serves as a list of supported ciphers. For SyncIQ sources, the ciphers are attempted in the order listed.

To configure a policy for encryption, the ‘--target-certificate-id’ must be specified. The user inputs the ID of the desired certificate as defined in the certificate manager. If self-signed certificates are being utilized, they must be manually copied to the peer cluster’s certificate store.
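The certificate IDs referenced above can typically be retrieved by listing the contents of the relevant certificate stores, for example (the exact list subcommands and output fields may vary by release):

# isi certificate server list

# isi sync certificates server list

# isi sync certificates peer list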

For authentication, there is a strict comparison of the public certificates to the expected values. If a certificate chain (signed by the CA) is selected to authenticate the connection, the chain of certificates will need to be added to the cluster’s certificate authority store. Both methods use the ‘SSL_VERIFY_FAIL_IF_NO_PEER_CERT’ option when establishing the SSL context. Note that once encryption is enabled (by setting the appropriate policy fields), modification of the certificate IDs is allowed. However, removal and reverting to unencrypted syncs will prompt for confirmation before proceeding.

We’ll take a look at the SyncIQ encryption configuration procedures and options in the second article of this series.