OneFS Multi-writer

In the previous blog article we took a look at write locking and shared access in OneFS. Next, we’ll delve another layer deeper into OneFS concurrent file access.

The OneFS locking hierarchy also provides a mechanism called Multi-writer, which allows a cluster to support concurrent writes from multiple client writer threads to the same file. This granular write locking is achieved by sub-dividing the file into separate regions and granting exclusive data write locks to those individual ranges, as opposed to the entire file. This allows multiple clients, or write threads, attached to a node to simultaneously write to different regions of the same file.
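For example, consider two writers updating different regions of the same file at the same time. As a minimal illustration (the mount point and file name here are hypothetical), the following commands could be run concurrently from two separate clients or threads:

# dd if=/dev/zero of=/mnt/ifs/bigfile bs=1M count=128 seek=0 conv=notrunc

# dd if=/dev/zero of=/mnt/ifs/bigfile bs=1M count=128 seek=512 conv=notrunc

Since the two 128MB writes target non-overlapping ranges (starting at offsets 0MB and 512MB respectively), OneFS can grant each writer an exclusive lock on just its own range, and both writes can proceed in parallel.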

Concurrent writes to a single file require more than just per-range data locks, however. Each writer also needs to update the file’s metadata attributes, such as timestamps and block count. A mechanism for managing inode consistency is also needed, since OneFS is based on the concept of a single inode lock per file.

In addition to the standard shared read and exclusive write locks, OneFS also provides the following locking primitives, via journal deltas, to allow multiple threads to simultaneously read or write a file’s metadata attributes:

OneFS Lock Types include:

Exclusive: A thread can read or modify any field in the inode. When the transaction is committed, the entire inode block is written to disk, along with any extended attribute blocks.

Shared: A thread can read, but not modify, any inode field.

DeltaWrite: A thread can modify any inode fields which support delta-writes. These operations are sent to the journal as a set of deltas when the transaction is committed.

DeltaRead: A thread can read any field which cannot be modified by inode deltas.

These locks allow separate threads to have a Shared lock on the same LIN, or for different threads to have a DeltaWrite lock on the same LIN. However, it is not possible for one thread to have a Shared lock and another to have a DeltaWrite. This is because the Shared thread cannot perform a coherent read of a field which is in the process of being modified by the DeltaWrite thread.

The DeltaRead lock is compatible with both the Shared and DeltaWrite lock. Typically the file system will attempt to take a DeltaRead lock for a read operation, and a DeltaWrite lock for a write, since this allows maximum concurrency, as all these locks are compatible.

Here’s what the lock compatibility matrix looks like, derived from the rules above:

              Exclusive   Shared   DeltaWrite   DeltaRead
Exclusive     No          No       No           No
Shared        No          Yes      No           Yes
DeltaWrite    No          No       Yes          Yes
DeltaRead     No          Yes      Yes          Yes

OneFS protects data by writing file blocks (restriping) across multiple drives on different nodes. The Job Engine defines a ‘restripe set’ comprising jobs which involve file system management, protection and on-disk layout. The restripe set contains the following jobs:

  • AutoBalance & AutoBalanceLin
  • FlexProtect & FlexProtectLin
  • MediaScan
  • MultiScan
  • SetProtectPlus
  • SmartPools
  • Upgrade

Multi-writer for restripe allows multiple restripe worker threads to operate on a single file concurrently. This in turn improves read/write performance during file re-protection operations, and reduces the window of risk (improving MTTDL) during drive smartfails, etc. This is particularly true for workflows consisting of large files while one of the above restripe jobs is running. Typically, the larger the files on the cluster, the more benefit multi-writer for restripe will offer.
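Whether one of these restripe jobs is currently running can be checked from the CLI. For example:

# isi job jobs list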

Note that OneFS multi-writer ranges are not a fixed size; instead, they are tied to layout/protection groups, so they typically fall in the megabytes range. From the filesystem perspective, the number of threads that can write to the same file concurrently is limited only by file size. However, NFS file handle affinity (FHA) comes into play on the protocol side, so the default is typically eight threads per node.

The clients themselves do not request granular write range locks in OneFS, since multi-writer operation is completely invisible to the protocol. Multi-writer uses proprietary locking, which OneFS performs internally to coordinate filesystem operations. As such, multi-writer is distinct from the byte-range locking that application code would call, or the oplocks/leases which the client protocol stack would use.

Depending on the workload, multi-writer can improve performance by allowing for more concurrency, and (while typically on by default in recent releases) can be enabled on a cluster from the CLI as follows:

# isi_sysctl_cluster efs.bam.coalescer.multiwriter=1

Similarly, to disable multi-writer:

# isi_sysctl_cluster efs.bam.coalescer.multiwriter=0
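The current value can be verified per node by reading the sysctl directly:

# sysctl efs.bam.coalescer.multiwriter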

Note that, as a general rule, unnecessary contention should be avoided. For example:

  • Avoid placing unrelated data in the same directory: use multiple directories instead. Even if the data is related, split it up if there are many entries.
  • Use multiple files: even if the data is ultimately related, from a performance and scalability perspective, having each client use its own file and then combining them as a final stage is the correct way to architect for performance (see the sketch below).
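As a minimal sketch of this per-client-file pattern (the paths and file names here are hypothetical), each client writes its own partial output, such as /ifs/data/job/part.client1 and /ifs/data/job/part.client2, and a final stage then concatenates the parts:

# cat /ifs/data/job/part.* > /ifs/data/job/result.dat

This keeps the parallel phase entirely free of write-lock and inode contention, since no two clients ever touch the same file.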

With multi-writer for restripe, an exclusive lock is no longer required on the LIN during the actual restripe of data. Instead, OneFS tries to use a delta write lock to update the cursors used to track which parts of the file need restriping. This means that a client application or program should be able to continue to write to the file while the restripe operation is underway.

An exclusive lock is only required for a very short period of time, while a file is set up to be restriped. Each restripe range lock has a fixed width, and the number of range locks depends on the quantity of threads and nodes which are actively restriping a single file.

OneFS File Locking and Concurrent Access

There has been a bevy of recent questions around how OneFS allows various clients attached to different nodes of a cluster to simultaneously read from and write to the same file. So it seemed like a good time for a quick refresher on some of the concepts and mechanics behind OneFS concurrency and distributed locking.

File locking is the mechanism that allows multiple users or processes to access data concurrently and safely. For reading data, this is a fairly straightforward process involving shared locks. With writes, however, things become more complex and require exclusive locking, since data must be kept consistent.

OneFS has a fully distributed lock manager that marshals locks on data across all the nodes in a storage cluster. This locking manager is highly extensible and allows for multiple lock types to support both file system locks as well as cluster-coherent protocol-level locks, such as SMB share mode locks or NFS advisory-mode locks. OneFS also has support for delegated locks such as SMB oplocks and NFSv4 delegations.

Every node in a cluster is able to act as coordinator for locking resources, and a coordinator is assigned to lockable resources based upon a hashing algorithm. This selection algorithm is designed so that the coordinator almost always ends up on a different node than the initiator of the request. When a lock is requested for a file, it can either be a shared lock or an exclusive lock. A shared lock is primarily used for reads and allows multiple users to share the lock simultaneously. An exclusive lock, on the other hand, allows only one user access to the resource at any given moment, and is typically used for writes. Exclusive lock types include:

Mark Lock: An exclusive lock resource used to synchronize the marking and sweeping processes for the Job Engine’s Collect job.

Snapshot Lock: As the name suggests, an exclusive lock which synchronizes the process of creating and deleting snapshots.

Write Lock: An exclusive lock that’s used to quiesce writes for particular operations, including snapshot creates, non-empty directory renames, and marks.

The OneFS locking infrastructure has its own terminology, and includes the following definitions:

Domain: Refers to the specific lock attributes (recursion, deadlock detection, memory use limits, etc) and context for a particular lock application. Each domain has a single definition of owner, resource, and lock types, and only locks within the same domain may conflict.

Lock Type: Determines the contention among lockers. A shared or read lock does not contend with other types of shared or read locks, while an exclusive or write lock contends with all other types. Lock types include:
• Advisory
• Anti-virus
• Data
• Delete
• LIN
• Mark
• Oplocks
• Quota
• Read
• Share Mode
• SMB byte-range
• Snapshot
• Write

Locker: Identifies the entity which acquires a lock.

Owner: A locker which has successfully acquired a particular lock. A locker may own multiple locks of the same or different type as a result of recursive locking.

Resource: Identifies the particular entity being locked. Lock acquisition only contends on the same resource. The resource ID is typically a LIN, which associates locks with files, as shown in the example below.

Waiter: Has requested a lock, but has not yet been granted it.
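Since the resource ID for a file lock is typically its LIN, a file’s LIN can be viewed from the CLI. For example (the path here is hypothetical):

# isi get -D /ifs/data/file1.txt | grep -i lin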

Here’s an example of how threads from different nodes could request a lock from the coordinator:

1. Node 2 is selected as the lock coordinator of these resources.
2. Thread 1 from Node 4 and thread 2 from Node 3 request a shared lock on a file from Node 2 at the same time.
3. Node 2 checks if an exclusive lock exists for the requested file.
4. If no exclusive locks exist, Node 2 grants thread 1 from Node 4 and thread 2 from Node 3 shared locks on the requested file.
5. Node 3 and Node 4 are now performing a read on the requested file.
6. Thread 3 from Node 1 requests an exclusive lock for the same file as being read by Node 3 and Node 4.
7. Node 2 checks with Node 3 and Node 4 if the shared locks can be reclaimed.
8. Node 3 and Node 4 are still reading so Node 2 asks thread 3 from Node 1 to wait for a brief instant.
9. Thread 3 from Node 1 blocks until the exclusive lock is granted by Node 2 and then completes the write operation.

OneFS Drive Performance Statistics

The previous post examined some of the general cluster performance metrics. In this article we’ll focus on the disk subsystem and take a quick look at some of the drive statistics counters. As we’ll see, OneFS offers several tools to inspect and report on both drive health and performance.

Let’s start with some drive failure and wear reporting tools….

The following cluster-wide command will indicate any drives that are marked as smartfail, empty, stalled, or down:

# isi_for_array -sX 'isi devices list | egrep -vi "healthy|L3"'

Usually, any node that requires a drive replacement will have an amber warning light on the front display panel. Also, the drive that needs swapping out will typically be marked by a red LED.

Alternatively, isi_drivenum will also show the drive bay location of each drive, plus a variety of other disk-related info.

# isi_for_array -sX 'isi_drivenum -A'

This next command provides drive wear information for each node’s flash (SSD) boot drives:

# isi_for_array -sSX "isi_radish -a /dev/da* | grep -e FW: -e 'Percent Life' | grep -v Used"

However, the output is in hex. This can be converted to a decimal percent value using the following shell command, where <value> is the raw hex output:

# echo "ibase=16; <value>" | bc

Alternatively, the following perl script will also translate the isi_radish command output from hex into comprehensible ‘life remaining’ percentages:

#!/usr/bin/perl
use strict;
use warnings;

# SSD boot drives to query on each node.
my @drives = ('ada0', 'ada1');

foreach my $drive (@drives) {
    print "$drive:\n";
    open CMD, '-|', "isi_for_array -s isi_radish -vt /dev/$drive"
        or die "Failed to open pipe!\n";
    while (<CMD>) {
        if (m/^(\S+).*(Life Remaining|Lifetime Left).*\(raw\s+([^)]+)/i) {
            # Convert the raw hex counter to a decimal percentage.
            print "$1 " . hex($3) . "%\n";
        }
    }
    close CMD;
}
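Since isi_for_array -s prefixes each line of output with the originating node’s name, the script prints a ‘life remaining’ percentage per node for each boot drive, along these general lines (the node names and values here are purely illustrative):

ada0:
node-1: 90%
node-2: 88%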

The following drive statistics can be useful for both performance analysis and troubleshooting purposes.

General disk activity stats are available via the isi statistics command.

For example:

# isi statistics system --nodes=all --oprates --nohumanize

This output will give you the per-node OPS over protocol, network and disk. On the disk side, the sum of DiskIn (writes) and DiskOut (reads) gives the total IOPS for all the drives per node.

For the next level of granularity, the following drive statistics command provides individual drive info, in this case for SATA drives. The sum of OpsIn and OpsOut is the total IOPS per drive in the cluster.

# isi statistics drive -nall --long --type=sata --sort=busy | head -20

And the same info for SSDs:

# isi statistics drive -nall --long --type=ssd --sort=busy | head -20

The primary counters of interest in the drive stats data are often TimeInQ, Queued, OpsIn, OpsOut, and the ‘Busy’ percentage of each disk. If most or all of the drives have high busy percentages, this indicates a uniform resource constraint, and there is a strong likelihood that the cluster is spindle bound. If, say, the top five drives are much busier than the rest, this suggests a workflow hot-spot.

# isi statistics pstat

The read and write mix, plus metadata operations, for a particular protocol can be gleaned from the output of the isi statistics pstat command. In addition to disk statistics, CPU and network stats are also provided. The --protocol parameter is used to specify the core NAS protocols such as NFSv3, NFSv4, SMB1, SMB2, HDFS, etc. Additionally, OneFS-specific protocol stats, including job engine (jobd), platform API (papi), IRP, etc, are also available.

For example, the following will show NFSv3 stats in a ‘top’ format, refreshed every 6 seconds by default:

# isi statistics pstat --protocol nfs3 --format top

The uptime command provides the system load average over 1, 5, and 15 minute intervals, and its value reflects both CPU and disk queue statistics.

# isi_for_array -s 'uptime'

It’s worth noting that this command’s output does not take CPU count into account. As such, a load average of 1 on a single-CPU node means the node is pegged, whereas the same load average of 1 on a dual-CPU system means the CPUs are 50% idle.

The following command will give the CPU count:

# isi statistics query current --nodes all --degraded --stats node.cpu.count
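Alternatively, since OneFS nodes run a FreeBSD-derived kernel, the CPU count can also be read directly via the standard sysctl:

# isi_for_array -s 'sysctl -n hw.ncpu'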

The per-node sum of disk ops across a cluster is available via the following syntax:

# isi statistics query current --nodes=all --stats=node.disk.xfers.rate.sum

There’s a whole slew of more detailed drive metrics that OneFS makes available for query.

Disk time in queue provides an indication as to how long an operation is queued on a drive. This indicator is key if a cluster is disk-bound. A time in queue value of 10 to 50 milliseconds is concerning, whereas a value of 50 to 100 milliseconds indicates a potential problem.

The following CLI syntax can be used to obtain the maximum, minimum, and average values for disk time in queue for SATA drives in this case:

# isi statistics drive --nodes=all --degraded --no-header --no-footer | awk 'BEGIN {min=1000} /SATA/ {sum+=$8; n++; if ($8>max) max=$8; if ($8<min) min=$8} END {print "Min =",min; print "Max =",max; print "Average =",sum/n}'

The following command displays the time in queue for 30 drives sorted highest-to-lowest:

# isi statistics drive list -n all --sort=timeinq | head -n 30

Queue depth indicates how many operations are queued on drives. A queue depth of 5 to 10 is considered heavy queuing.

The following CLI command can be used to obtain the maximum, minimum, and average values for disk queue depth of SATA drives. If there’s a big delta between the maximum number and average number in the queue, it’s worth investigating further to determine whether an individual drive is working excessively.

# isi statistics drive --nodes=all --degraded --no-header --no-footer | awk 'BEGIN {min=1000} /SATA/ {sum+=$9; n++; if ($9>max) max=$9; if ($9<min) min=$9} END {print "Min =",min; print "Max =",max; print "Average =",sum/n}'

For information on SAS or SSD drives, you can substitute SAS or SSD for SATA in the above syntax.

To display queue depth for twenty drives sorted highest-to-lowest, run the following command:

# isi statistics drive list -n all --sort=queued | head -n 20

Note that the TimeAvg metric, as reported by the isi statistics drive command, represents the latency at the disk excluding the scheduler wait time (TimeInQ). It is thus a measure of disk access time (ie. send the op, wait, receive the response). The total time at the disk is the sum of the access time (TimeAvg) and the scheduler time (TimeInQ): for example, a TimeAvg of 5ms plus a TimeInQ of 20ms yields 25ms of total time at the disk.

The disk percent busy metric can be useful to determine if a drive is getting pegged. However, it does not indicate how much extra work may be in the queue. To obtain the maximum, minimum, and average disk busy values for SATA drives, run the following command (substitute SAS or SSD for SATA as needed):

# isi statistics drive --nodes=all --degraded --no-header --no-footer | awk 'BEGIN {min=1000} /SATA/ {sum+=$10; n++; if ($10>max) max=$10; if ($10<min) min=$10} END {print "Min =",min; print "Max =",max; print "Average =",sum/n}'

To display disk percent busy for twenty drives, sorted highest-to-lowest, run the following command:

# isi statistics drive -nall --output=busy --sort=busy | head -n 20

OneFS Performance Statistics

There have been several recent inquiries about how to effectively gather performance statistics on a cluster, so it seemed like a useful topic to dig into further in a blog article.

First, two cardinal rules… Before planning or undertaking any performance tuning on a cluster, or its attached clients:

1)  Record the original cluster settings before making any configuration changes to OneFS or its data services.

2)  Measure and analyze how the various workloads in your environment interact with and consume storage resources.

Performance measurement is done by gathering statistics about the common file sizes and I/O operations, including CPU and memory load, network traffic utilization, and latency. To obtain key metrics and wall-clock timing data for delete, renew lease, create, remove, set userdata, get entry, and other file system operations, connect to a node via SSH and run the following command as root to enable the vopstat system control:

# sysctl efs.util.vopstats.record_timings=1
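Setting the same sysctl back to 0 should disable timing capture again once the measurements have been gathered:

# sysctl efs.util.vopstats.record_timings=0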

After enabling vopstats, they can be viewed by running the ‘sysctl efs.util.vopstats’ command as root. Here is an example of the command’s output:

# sysctl efs.util.vopstats

efs.util.vopstats.ifs_snap_set_userdata.initiated: 26
efs.util.vopstats.ifs_snap_set_userdata.fast_path: 0
efs.util.vopstats.ifs_snap_set_userdata.read_bytes: 0
efs.util.vopstats.ifs_snap_set_userdata.read_ops: 0
efs.util.vopstats.ifs_snap_set_userdata.raw_read_bytes: 0
efs.util.vopstats.ifs_snap_set_userdata.raw_read_ops: 0
efs.util.vopstats.ifs_snap_set_userdata.raw_write_bytes: 6432768
efs.util.vopstats.ifs_snap_set_userdata.raw_write_ops: 2094
efs.util.vopstats.ifs_snap_set_userdata.timed: 0
efs.util.vopstats.ifs_snap_set_userdata.total_time: 0
efs.util.vopstats.ifs_snap_set_userdata.total_sqr_time: 0
efs.util.vopstats.ifs_snap_set_userdata.fast_path_timed: 0
efs.util.vopstats.ifs_snap_set_userdata.fast_path_total_time: 0
efs.util.vopstats.ifs_snap_set_userdata.fast_path_total_sqr_time: 0

The time data captures the number of operations that cross the OneFS clock tick, which is 10 milliseconds. Because of this granularity, and independent of the number of events, the total_sqr_time field provides no actionable information.

To analyze the operations, use the total_time value instead. The following example shows only the total time records in the vopstats:

# sysctl efs.util.vopstats | grep -E "total_time: [^0]"

efs.util.vopstats.access_rights.total_time: 40000
efs.util.vopstats.lookup.total_time: 30001
efs.util.vopstats.unlocked_write_mbuf.total_time: 340006
efs.util.vopstats.unlocked_write_mbuf.fast_path_total_time: 340006
efs.util.vopstats.commit.total_time: 3940137
efs.util.vopstats.unlocked_getattr.total_time: 280006
efs.util.vopstats.unlocked_getattr.fast_path_total_time: 50001
efs.util.vopstats.inactive.total_time: 100004
efs.util.vopstats.islocked.total_time: 30001
efs.util.vopstats.lock1.total_time: 280005
efs.util.vopstats.unlocked_read_mbuf.total_time: 11720146
efs.util.vopstats.readdir.total_time: 20000
efs.util.vopstats.setattr.total_time: 220010
efs.util.vopstats.unlock.total_time: 20001
efs.util.vopstats.ifs_snap_delete_resume.timed: 77350
efs.util.vopstats.ifs_snap_delete_resume.total_time: 720014
efs.util.vopstats.ifs_snap_delete_resume.total_sqr_time: 7200280042
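As a rough sketch, a mean time per timed operation for a given vop can be derived by dividing total_time by the timed count (the vop name below is taken from the output above, and the unit of total_time is whatever the kernel records, so treat the result as a relative figure):

# sysctl efs.util.vopstats.ifs_snap_delete_resume | awk -F': ' '/\.timed:/ {n=$2} /\.total_time:/ {t=$2} END {if (n) print t/n}'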

The ‘isi statistics’ CLI command is a great tool for the task here, plus its output is current (ie. real time). It’s a versatile utility, providing real-time stats via the following subcommand-level syntax:

Statistics Category   Details

Client                Display cluster usage statistics organized according to cluster hosts and users
Drive                 Show performance by drive
Heat                  Identify the most accessed files/directories
List                  List valid arguments to given option
Protocol              Display cluster usage statistics organized by communication protocol
Pstat                 Generate detailed protocol statistics along with CPU, OneFS, network & disk stats
Query                 Query for specific statistics; there are current and history modes
System                Display general cluster statistics (op rates for protocols, plus network & disk traffic in kB/s)

Full command syntax and a description of the options can be accessed via ‘isi statistics --help’ or via the man page (man isi-statistics).

The ‘isi statistics pstat’ command provides statistics per protocol operation, client connections, and the file system. For example, for NFSv3:

# isi statistics pstat --protocol=nfs3

The ‘isi statistics client’ CLI command provides I/O and timing data by client name and/or IP address, depending on options, plus the username if it can be determined. For example, to generate a list of the top NFSv3 clients on a cluster, the following command can be used:

# isi statistics client --protocols=nfs3 --format=top

Or, for SMB clients:

# isi statistics client --protocols=smb2 --format=top

The extensive list of protocols for which client data can be displayed includes:

nfs3 | smb1 | nlm | ftp | http | siq | smb2 | nfs4 | papi | jobd | irp | lsass_in | lsass_out | hdfs | all | internal | external

SMB2 and SMB3 current connections are both displayed in the following stats command:

# isi statistics query current --stats node.clientstats.active.smb2

Or for SMB2 + 3 historical connection data:

# isi statistics query history --stats node.clientstats.active.smb2

The following command totals by user, as opposed to by node. This can be helpful when investigating HPC workloads, or other workflows involving compute clusters:

# isi statistics client --protocols=nfs3 --format=top --numeric --totalby=username --sort=ops,timemax

The ‘heat’ subcommand can be useful for viewing which files are being utilized:

# isi statistics heat --long --classes=read,write,namespace_read,namespace_write | head -10

The following shows the amount of contention, where operations from parallel users target the same object:

# isi statistics heat --long --classes=read,write,namespace_read,namespace_write --events=blocked,contended,deadlocked | head -10

It can also be useful to constrain statistics reporting to a single node. For example, the following command will show the fifteen hottest files on node 4.

# isi statistics heat --limit=15 --nodes=4

It’s worth noting that isi statistics doesn’t directly tie a client to a file or directory path. Both isi statistics heat and isi statistics client provide some of this information, but not together. The only directory/file related statistics come from the ‘heat’ stats, which track the hottest accesses in the filesystem.

The system and drive statistics can also be useful for performance analysis and troubleshooting purposes. For example:

# isi statistics system --nodes=all --oprates --nohumanize

This output will give you the per-node OPS over protocol, network and disk. On the disk side, the sum of DiskIn (writes) and DiskOut (reads) gives the total IOPS for all the drives per node.

For the next level of granularity, the drive statistics command provides individual disk info.

# isi statistics drive -nall --long --type=sata --sort=busy | head -20

If most or all the drives have high busy percentages, this indicates a uniform resource constraint, and there is a strong likelihood that the cluster is spindle bound. If, say, the top five drives are much busier than the rest, this would suggest a workflow hot-spot.