OneFS Job Execution and Node Exclusion

The majority of the OneFS job engine’s jobs have no default schedule and are typically started manually, either by a cluster administrator or by a process. Other jobs, such as FSAnalyze, MediaScan, ShadowStoreDelete, and SmartPools, are normally started via a schedule. The job engine can also initiate certain jobs on its own. For example, if the SnapshotIQ process detects that a snapshot has been marked for deletion, it will automatically queue a SnapshotDelete job.

The job engine will also execute jobs in response to certain system event triggers. In the case of a cluster group change, for example the addition or subtraction of a node or drive, OneFS automatically informs the job engine. If the coordinator notices that the group change includes a newly smartfailed device, it responds by initiating a FlexProtect job.

Job administration and execution can be controlled via the WebUI, CLI, or platform API. A job can be started, stopped, paused, and resumed, with this state managed via the job engine’s checkpointing system. For each of these control methods, additional administrative security can be configured using role-based access control (RBAC).

The job engine’s impact control and work throttling mechanism is able to limit the rate at which individual jobs can run. Throttling is employed at a per-manager process level, so job impact can be managed both granularly and gracefully.

Every twenty seconds, the coordinator process gathers cluster CPU and individual disk I/O load data from all the nodes across the cluster. The coordinator uses this information, in combination with the job impact configuration, to decide how many threads may run on each cluster node to service each running job. This can be a fractional number, and fractional thread counts are achieved by having a thread sleep for a given percentage of each second.

Using this CPU and disk I/O load data, every sixty seconds the coordinator evaluates how busy the various nodes are and makes a job throttling decision, instructing the various job engine processes as to the action they need to take. This enables throttling to be sensitive to workloads in which CPU and disk I/O load metrics yield different results. Additionally, there are separate load thresholds tailored to the different classes of drives utilized in OneFS powered clusters, from capacity optimized SATA disks to flash-based SSDs.

However, up through OneFS 9.2, a job engine job was an all-or-nothing entity. Whenever a job ran, it involved the entire cluster, regardless of individual node type, load, or condition. As such, any nodes that were overloaded or in a degraded state could still impact the execution of the job at large.

To address this, OneFS 9.3 provides the capability to exclude one or more nodes from participating in running a job. This allows the temporary removal of any nodes with high load, or other issues, from the job execution pool so that jobs do not become stuck. Configuration is via the OneFS CLI and gconfig, and is global, so it applies to all jobs on startup. However, the exclusion configuration is not dynamic: once a job is started with its final node set, no further reconfiguration is permitted. If a participant node is excluded, it will remain excluded until the job has completed. Similarly, if a running participant needs to be excluded, the current job will have to be cancelled and a new job started. Any node can be excluded, including the node running the job engine’s coordinator process. The coordinator will still monitor the job, it just won’t spawn a manager process for the job.

The list of participating nodes for a job is computed in three phases:

  1. Query the cluster’s GMP group.
  2. Call job.get_participating_nodes to get a subset of the GMP group.
  3. Remove the nodes listed in core.excluded_participants from that subset.

The CLI syntax for configuring an excluded nodes list on a cluster is as follows (in this example, excluding nodes one through three):

# isi_gconfig -t job-config core.excluded_participants="{1,2,3}"

The ‘excluded_participants’ are entered as a comma-separated list of devid values with no spaces, specified within curly braces and double quotes.

Note that it is the node’s device ID (devid) which is required in the above command, and this is not always the same as the node number (LNN). The following command will report both the LNN and corresponding devid:

# isi_nodes %{lnn} , %{id}
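Illustrative output from a small three-node cluster in which node 3 has previously been replaced, so its devid no longer matches its LNN (values are hypothetical):

1 , 1
2 , 2
3 , 4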

All excluded nodes must be specified in full, since there’s no aggregation. Note that, while the excluded participant configuration will be displayed via gconfig, it is not reported as part of the ‘sysctl efs.gmp.group’ output.
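The currently configured exclusion list can be confirmed at any time by querying the same gconfig parameter without assigning a value:

# isi_gconfig -t job-config core.excluded_participants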

A job engine node exclusion configuration can be easily reset to avoid excluding any nodes by assigning the “{}” value.

# isi_gconfig -t job-config core.excluded_participants="{}"

A ‘core.excluded_participant_percent_warn’ parameter defines the maximum percentage of removed nodes.

# isi_gconfig -t job-config core.excluded_participant_percent_warn

core.excluded_participant_percent_warn (uint) = 10

This parameter defaults to 10%, above which a CELOG event warning is generated.
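The threshold can be adjusted using the same gconfig assignment syntax shown earlier. For example, to raise the warning threshold to 20% of the cluster’s nodes (an illustrative value):

# isi_gconfig -t job-config core.excluded_participant_percent_warn=20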

Any number of nodes can be removed from the job group, and a CELOG informational event will report the removed nodes. If too many nodes have been excluded (that is, the threshold set by the gconfig parameter above is exceeded), CELOG will fire a warning event. If nodes are excluded that are not part of the GMP group, a different warning event will trigger.

If all nodes are removed, a CLI/pAPI error will be returned, the job will fail, and a CELOG warning will fire. For example:

# isi job jobs start LinCount

Job operation failed: The job had no participants left. Check core.excluded_participants setting and make sure there is at least one node to run the job:  Invalid argument

# isi job status

10   LinCount         Failed    2021-10-24T20:45:23

------------------------------------------------------------------

Total: 9
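To recover from this condition, clear (or correct) the exclusion list and then start a new job. For example:

# isi_gconfig -t job-config core.excluded_participants="{}"

# isi job jobs start LinCount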

Note, however, that the following core system maintenance jobs will continue to run across all nodes in a cluster even if a node exclusion has been configured:

  • AutoBalance
  • Collect
  • FlexProtect
  • MediaScan
  • MultiScan

OneFS 9.3 Introduction

Arriving hot on the heels of the PowerScale H700 & H7000 hybrid chassis and A300 & A3000 archive platforms that debuted last month, the new PowerScale OneFS 9.3 release shipped on Monday, 18th October 2021. This new 9.3 release brings with it an array of new features and functionality, including:

  • NVMe SED support for PowerScale all-flash: FIPS-certified Self-Encrypting Drive (SED) support for PowerScale F600 & F900 nodes.
  • Writable snapshots: Enables the creation and management of a space- and time-efficient, modifiable copy of a regular OneFS snapshot.
  • NFS v4.1 and v4.2 support: Connectivity support for versions 4.1 and 4.2 of the NFS file access protocol.
  • Long filename support: Provision for file names up to 1024 bytes, allowing support for long names in UTF-8 multi-byte character sets.
  • Inline data in inodes: Data efficiency feature allowing a small file’s data to be stored in unused space within its inode block.
  • HDFS ACLs: Support for HDFS-4685 access control lists, allowing users to manage permissions on their datasets from Hadoop clients.
  • Job engine exclusions: Allows job engine jobs to be run on a defined subset of a cluster’s nodes.
  • CloudPools recall: Improved CloudPools file recall & rehydrate performance.
  • Safe SMB client disconnects: Allows SMB clients the opportunity to flush their caches prior to being disconnected.
  • S3 protocol enhancements: Added support for S3 chunked upload, multi-object delete, and non-slash delimiters for lists.

The new OneFS 9.3 code is available on the Dell EMC Online Support site, both as a reimage file for configuring new clusters with 9.3 and as an upgrade package for existing clusters.

We’ll also be taking a deeper look at  these new OneFS 9.3 features in blog articles over the course of the next few weeks.

OneFS Netlogger

Among the useful data and network analysis tools that OneFS provides is the isi_netlogger utility. Netlogger captures IP traffic over a period of time for network and protocol analysis.

Under the hood, isi_netlogger is a python wrapper around the ubiquitous tcpdump utility. Netlogger can be run either from the OneFS command line or WebUI.

For example, from the WebUI, browse to Cluster management > Diagnostics:

Alternatively, from the OneFS CLI, the isi_netlogger command captures traffic on the specified interface(s) (‘-i’) over a timeout period in minutes (‘-t’), and stores a specified number of log files, as defined by the keep_count, or ‘-k’, parameter.

Using the ‘-b’ bpf buffer size option will temporarily change the default buffer size while netlogger is running. Netlogger’s log files are stored by default under /ifs/netlog/<node_name>.

Here’s the basic syntax of the tool:

isi_netlogger [-c launch clustered mode (run on all nodes)]

              [-n run on specified nodes (ex: -n 1,3)]

              [-d run as daemon]

              [-q quiet mode (redirect output to logs)]

              [-k keep_count of logs (default 3, to keep all logs use 0)]

              [-t timeout (default 10) ]

              [-s snaplen (default 320) ]

              [-b bpf buffer size in bytes, KB, or MB (end with 'k' or 'm') ]

              [-i interface name[,..] | all (ex: -i em0 or -i em0,em1 or -i all)]

              [-a ARP packets included. Normally filtered out ]

              [-p print out the tcpdump command ]

              [-z do not bundle capture files (default bundling is done)]

              [-- tcpdump filtering options]

The WebUI can also be used to configure the netlogger parameters under Cluster management > Diagnostics > Netlogger settings:

Be aware that ‘isi_for_array isi_netlogger’ will consume significant cluster resources. When running the tool on a production cluster, be cognizant of the effect on the system.
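For a cluster-wide capture, netlogger’s own clustered mode (‘-c’) can be used instead. For example, a short, conservative capture across all nodes might look something like the following (the timeout and keep_count values are purely illustrative):

# isi_netlogger -c -t 2 -k 5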

When the command has completed, the capture file(s) are stored under the /ifs/netlog directory.

The following command can also be used to incorporate netlogger output files into an isi_gather_info bundle:

# isi_gather_info -n [node#] -f /ifs/netlog

To capture on multiple specific nodes of the cluster, netlogger’s ‘-n’ flag can be used to specify a node list:

# isi_netlogger -n 2,3 -t 5 -k 864 -s 256

The command syntax above will create five-minute incremental capture files on nodes 2 and 3, using a snaplength of 256 bytes, which will capture the first 256 bytes of each packet. These five-minute logs (864 files) will be kept for about three days, and the naming convention is of the form netlog-<node_name>-<date>-<time>.pcap. For example:

# ls

/ifs/netlog/lab-cluster-1/netlog-lab_cluster-1.2021-10-18_20.24.38.pcap

When using isi_netlogger, the ‘-s’ flag needs to be set appropriately, based on the protocol being captured, in order to capture the right amount of detail in the packet headers and/or payload. Alternatively, if you want the entire contents of every packet, a value of zero (‘-s 0’) can be used.

The default snaplength for netlogger is 320 bytes per packet, which is usually sufficient for most protocols.

However, for SMB, a snaplength of 512 bytes is sometimes required. Be aware that, depending on a node’s traffic volume, a snaplen of 0 (ie. capturing whole packets) can potentially overwhelm the NIC driver.
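For instance, to capture SMB traffic with the larger snaplength, a filter on the standard SMB TCP port 445 can be appended (an illustrative sketch):

# isi_netlogger -s 512 -- port 445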

All the output gets written to files under /ifs/netlog and the default capture time is ten minutes (‘-t 10’).

Filters can be appended to the end of the command to constrain traffic to/from certain hosts or protocols. For example, to limit output to traffic to or from the client 10.10.10.1:

# isi_netlogger -t 5 -k 864 -s 256 -- host 10.10.10.1

Or to capture NFS traffic only, filter on port 2049:

# isi_netlogger -- port 2049

Or multiple ports:

# isi_netlogger -- port 2345 or port 5432
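These trailing tcpdump filter expressions can also be combined. For example, to capture only NFS traffic to or from a single client (the address is illustrative):

# isi_netlogger -- host 10.10.10.1 and port 2049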

Capturing from a non-standard interface can sometimes require a bit of creativity:

# isi_netlogger -p -t 5 -k 864 -s 256 -a -- " -i vlan0 host 192.168.10.1"

The ‘-p’ flag prints out the tcpdump command that netlogger is running. And, essentially, anything following the double-dash flag (‘--’) is passed through as a normal tcpdump filter or option.

To capture across differing interface names across the cluster:

# isi_netlogger -i \`ifconfig |grep -B2 'inet <ip_addr>.' | grep flags= | awk -F: '{ print $1 }'\` -s 0 -a

Where <ip_addr> is as much of the public IP address as is common across all the nodes. For example, “192.” or “192.168.”.

To stop netlogger before a capture has completed, a simple ‘isi_for_array killall tcpdump’ can be used to terminate any active tcpdump/netlogger sessions across the cluster. If any processes remain after this, they can all be killed with a command along the lines of:

# isi_for_array -s "kill -9 \`ps auxw|grep netlog|grep -v grep|awk {'print \$2'}\`;killall -9 tcpdump"

PowerScale Gen6 Chassis Hardware Resilience

In this article, we’ll take a quick look at the OneFS journal and boot drive mirroring functionality in PowerScale chassis-based hardware:

PowerScale Gen6 platforms, such as the H700/7000 and A300/3000, store the local filesystem journal and its mirror in the DRAM of the battery-backed compute node blade. Each 4RU Gen6 chassis houses four nodes, each comprising a ‘compute node blade’ (CPU, memory, NICs) plus its drive containers, or sleds.

A node’s file system journal is protected against sudden power loss or hardware failure by OneFS’ journal vault functionality, otherwise known as ‘powerfail memory persistence’ (PMP). PMP automatically stores both the local journal and the journal mirror on a separate flash drive across both nodes in a node pair:

This journal de-staging process is known as ‘vaulting’, during which the journal is protected by a dedicated battery in each node until it’s safely written from DRAM to SSD on both nodes in a node-pair. With PMP, constant power isn’t required to protect the journal in a degraded state since the journal is saved to M.2 flash, and mirrored on the partner node.

So, the mirrored journal comprises both hardware and software components, including the following constituent parts:

Journal Hardware Components

  • System DRAM
  • M.2 vault flash
  • Battery Backup Unit (BBU)
  • Non-Transparent Bridge (NTB) PCIe link to partner node
  • Clean copy on disk

Journal Software Components

  • Power-fail Memory Persistence (PMP)
  • Mirrored Non-volatile Interface (MNVI)
  • IFS Journal + Node State Block (NSB)
  • Utilities

Asynchronous DRAM Refresh (ADR) preserves RAM contents when the operating system is not running. ADR is important for preserving RAM journal contents across reboots, and it does not require any software coordination to do so.

The journal vault feature encompasses the hardware, firmware, and operating system support that ensure the journal’s contents are preserved across power failure. The mechanism is similar to the NVRAM controller on previous generation nodes, but does not use a dedicated PCI card.

On power failure, the PMP vaulting functionality is responsible for copying both the local journal and the local copy of the partner node’s journal to persistent flash. On restoration of power, PMP is responsible for restoring the contents of both journals from flash to RAM, and notifying the operating system.

A single dedicated flash device is attached via M.2 slot on the motherboard of the node’s compute module, residing under the battery backup unit (BBU) pack. To be serviced, the entire compute module must be removed.

If the M.2 flash needs to be replaced for any reason, it will be properly partitioned and the PMP structure will be created as part of arming the node for vaulting.

The battery backup unit (BBU), when fully charged, provides enough power to vault both the local and partner journal during a power failure event.

A single battery is utilized in the BBU, which also supports back-to-back vaulting.

On the software side, the journal’s power-fail memory persistence (PMP) provides an equivalent to the NVRAM controller‘s vault/restore capabilities to preserve the journal. The PMP partition on the M.2 flash drive provides an interface between the OS and firmware.

If a node boots and its primary journal is found to be invalid for whatever reason, it has three paths for recourse:

  • Recover journal from its M.2 vault.
  • Recover journal from its disk backup copy.
  • Recover journal from its partner node’s mirrored copy.

The mirrored journal must guard against rolling back to a stale copy of the journal on reboot. This necessitates storing information about the state of journal copies outside the journal. As such, the Node State Block (NSB) is a persistent disk block that stores local and remote journal status (clean/dirty, valid/invalid, etc), as well as other non-journal information. NSB stores this node status outside the journal itself, and ensures that a node does not revert to a stale copy of the journal upon reboot.

Here’s the detail of an individual node’s compute module:

Of particular note is the ‘journal active’ LED, which is displayed as a white ‘hand icon’.

When this white hand icon is illuminated, it indicates that the mirrored journal is actively vaulting, and it is not safe to remove the node!

There is also a blue ‘power’ LED, and a yellow ‘fault’ LED per node. If the blue LED is off, the node may still be in standby mode, in which case it may still be possible to pull debug information from the baseboard management controller (BMC).

The flashing yellow ‘fault’ LED has several state indication frequencies:

Blink Speed      Blink Frequency      Indicator
Fast blink       ¼ Hz                 BIOS
Medium blink     1 Hz                 Extended POST
Slow blink       4 Hz                 Booting OS
Off              Off                  OS running

The mirrored non-volatile interface (MNVI) sits below /ifs and above RAM and the NTB, and provides the abstraction of a reliable memory device to the /ifs journal. MNVI is responsible for synchronizing journal contents to peer node RAM, at the direction of the journal, and persisting writes to both systems while in a paired state. It upcalls into the journal on NTB link events, and notifies the journal of operation completion (mirror sync, block IO, etc).

For example, when rebooting after a power outage, a node automatically loads the MNVI. It then establishes a link with its partner node and synchronizes its journal mirror across the PCIe Non-Transparent Bridge (NTB).

Prior to mounting /ifs, OneFS locates a valid copy of the journal from one of the following locations in order of preference:

Order   Journal Location   Description
1st     Local disk         A local copy that has been backed up to disk
2nd     Local vault        A local copy of the journal restored from vault into DRAM
3rd     Partner node       A mirrored copy of the journal from the partner node

If the node was shut down properly, it will boot using a local disk copy of the journal.  The journal will be restored into DRAM and /ifs will mount. On the other hand, if the node suffered a power disruption the journal will be restored into DRAM from the M.2 vault flash instead (the PMP copies the journal into the M.2 vault during a power failure).

In the event that OneFS is unable to locate a valid journal on either the hard drives or M.2 flash on a node, it will retrieve a mirrored copy of the journal from its partner node over the NTB.  This is referred to as ‘Sync-back’.

Note: Sync-back state only occurs when attempting to mount /ifs.

On booting, if a node detects that its journal mirror on the partner node is out of sync (invalid), but the local journal is clean, /ifs will continue to mount.  Subsequent writes are then copied to the remote journal in a process known as ‘sync-forward’.

Here’s a list of the primary journal states:

Journal State   Description
Sync-forward    State in which writes to a journal are mirrored to the partner node.
Sync-back       Journal is copied back from the partner node. Only occurs when attempting to mount /ifs.
Vaulting        Storing a copy of the journal on M.2 flash during power failure. Vaulting is performed by PMP.

During normal operation, writes to the primary journal and its mirror are managed by the MNVI device module, which writes through local memory to the partner node’s journal via the NTB. If the NTB is unavailable for an extended period, write operations can still be completed successfully on each node. For example, if the NTB link goes down in the middle of a write operation, the local journal write operation will complete. Read operations are processed from local memory.

Additional journal protection for Gen 6 nodes is provided by OneFS’ powerfail memory persistence (PMP) functionality, which guards against PCI bus errors that can cause the NTB to fail.  If an error is detected, the CPU requests a ‘persistent reset’, during which the memory state is protected and node rebooted. When back up again, the journal is marked as intact and no further repair action is needed.

If a node loses power, the hardware notifies the BMC, initiating a memory-persistent shutdown.  At this point the node is running on battery power. The node is forced to reboot and load the PMP module, which preserves its local journal and its partner’s mirrored journal by storing them on M.2 flash.  The PMP module then disables the battery and powers itself off.

Once power is back on and the node restarted, the PMP module first restores the journal before attempting to mount /ifs.  Once done, the node then continues through system boot, validating the journal, setting sync-forward or sync-back states, etc.

During boot, isi_checkjournal and isi_testjournal will invoke isi_pmp. If the M.2 vault devices are unformatted, isi_pmp will format the devices.

On clean shutdown, isi_save_journal stashes a backup copy of the /dev/mnv0 device on the root filesystem, just as it does for the NVRAM journals in previous generations of hardware.

If a mirrored journal issue is suspected, or notified via cluster alerts, the best place to start troubleshooting is to take a look at the node’s log events. The journal logs to /var/log/messages, with entries tagged as ‘journal_mirror’.
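For example, the relevant entries can be pulled from a node’s log with a simple grep:

# grep journal_mirror /var/log/messages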

The following new CELOG events have also been added in OneFS 8.1 for cluster alerting about mirrored journal issues:

CELOG Event                        Description
HW_GEN6_NTB_LINK_OUTAGE            Non-transparent bridge (NTB) PCIe link is unavailable
FILESYS_JOURNAL_VERIFY_FAILURE     No valid journal copy found on node

Another reliability optimization for the Gen6 platform is boot mirroring. Gen6 does not use dedicated bootflash devices, as with previous generation nodes. Instead, OneFS boot and other OS partitions are stored on a node’s data drives. These OS partitions are always mirrored (except for crash dump partitions). The two mirrors protect against disk sled removal. Since each drive in a disk sled belongs to a separate disk pool, both elements of a mirror cannot live on the same sled.

The boot and other OS partitions are 8GB in size and are reserved at the beginning of each data drive for boot mirrors. OneFS automatically rebalances these mirrors in anticipation of, and in response to, service events. Mirror rebalancing is triggered by drive events such as suspend, softfail and hard loss.

The following command will confirm that boot mirroring is working as intended:

# isi_mirrorctl verify

When it comes to smartfailing nodes, here are a couple of other things to be aware of with mirror journal and the Gen6 platform:

  • When you smartfail a node in a node pair, you do not have to smartfail its partner node.
  • A node will still run indefinitely with its partner missing. However, this significantly increases the window of risk, since there’s no journal mirror to rely on (in addition to the lack of a redundant power supply, etc).
  • If you do smartfail a single node in a pair, the journal is still protected by the vault and powerfail memory persistence.

Regarding the journal’s Battery Backup Unit (BBU), later versions of OneFS and BBU firmware can generate a ‘Cell End of Life’ warning. This end-of-life (EOL) message for the BBU does not indicate that the BBU is unhealthy, but rather that it is approaching EOL.

The battery wear percentage value indicates the amount of deterioration of the battery cell, and therefore its ability to hold a charge.

BBU firmware version 2.20 introduced an alert that warns when a node is approaching end of life (EOL) on its battery.  The event is reported when battery degradation reaches the 40% threshold.

Note that a node will automatically transition into a read-only (RO) state if the battery reaches the 50% degradation threshold.
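Current battery health can also be checked from the CLI, assuming a OneFS release that includes the ‘isi batterystatus’ command:

# isi batterystatus list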

A cluster’s BBU firmware version can be easily determined via the following CLI syntax:

# isi upgrade cluster firmware devices | egrep "mongoose|bcc"

The command output will be similar to the following:

# isi upgrade cluster firmware devices| egrep "mongoose|bcc"
MLKbem_mongoose           ePOST  02.40                   1-4
EPbcc_infinity            ePOST  02.40                   5-8

The amount of time to go from 40% degradation to 50% degradation is also affected by the BBU firmware version. If the BBU firmware version is 1.20, the battery should take approximately four months to go from 40% to 50% degradation. If the BBU firmware version is 2.20 or above, the battery should take around 12 months to transition from 40% to 50% degradation.

Details about upgrading from BBU firmware version 1.20 to the latest supported firmware version can be found in the following KB article.

PowerScale Platform Update

In this article, we’ll take a quick peek at the new PowerScale Hybrid H700/7000 and Archive A300/3000 hardware platforms that were released last month. So the current PowerScale platform family hierarchy is as follows:

Here’s the lowdown on the new additions to the hardware portfolio:

Model   Tier             Drives per Chassis        Max Chassis Capacity (16TB HDD)   CPU per Node   Memory per Node   Network
H700    Hybrid/Utility   Standard: 60 x 3.5” HDD   960TB                             2.9GHz, 16c    384GB             FE: 100GbE; BE: 100GbE or IB
H7000   Hybrid/Utility   Deep: 80 x 3.5” HDD       1280TB                            2.9GHz, 16c    384GB             FE: 100GbE; BE: 100GbE or IB
A300    Archive          Standard: 60 x 3.5” HDD   960TB                             1.9GHz, 6c     96GB              FE: 25GbE; BE: 25GbE or IB
A3000   Archive          Deep: 80 x 3.5” HDD       1280TB                            1.9GHz, 6c     96GB              FE: 25GbE; BE: 25GbE or IB

The PowerScale H700 provides performance and value to support demanding file workloads. With up to 960 TB of HDD per chassis, the H700 also includes inline compression and deduplication capabilities to further extend its usable capacity.

The PowerScale H7000 is a versatile, high-performance, high-capacity hybrid platform with up to 1280 TB per chassis. The deep chassis-based H7000 is ideal for consolidating a range of file workloads on a single platform, and it also includes inline compression and deduplication capabilities.

On the active archive side, the PowerScale A300 combines performance, near-primary accessibility, value, and ease of use. The A300 provides between 120 TB and 960 TB per chassis and scales to 60 PB in a single cluster. The A300 includes inline compression and deduplication capabilities.

The PowerScale A3000 is an ideal solution for high-performance, high-density, deep archive storage that safeguards data efficiently for long-term retention. The A3000 stores up to 1280 TB per chassis and scales to north of 80 PB in a single cluster. The A3000 also includes inline compression and deduplication.

These new H700/7000 and A300/3000 nodes require OneFS 9.2.1 and can be seamlessly added to an existing cluster, offering the full complement of OneFS data services, including snapshots, replication, quotas, analytics, data reduction, load balancing, and local and cloud tiering. All of these platforms also contain SSDs.

Unlike the all-flash PowerScale F900, F600, and F200 stand-alone nodes, which require a minimum of three nodes to form a cluster, a single chassis of four nodes is required to create a cluster with these platforms, with support for both InfiniBand and Ethernet backend network connectivity.

Each H700/7000 and A300/3000 chassis contains four compute modules (one per node), and five drive containers, or sleds, per node. These sleds occupy bays in the front of each chassis, with a node’s drive sleds stacked vertically:

The drive sled is a tray which slides into the front of the chassis and contains between three and four 3.5 inch drives in an H700/7000 or A300/3000, depending on the drive size and configuration of the particular node. Both regular hard drives and self-encrypting drives (SEDs) are available, in 2, 4, 8, 12, and 16TB capacities.

Each drive sled has a white ‘not safe to remove’ LED on its front top left, as well as a blue power/activity LED, and an amber fault LED.

The compute modules for each node are housed in the rear of the chassis, and contain CPU, memory, networking, and SSDs, as well as power supplies. Nodes 1 & 2 are a node pair, as are nodes 3 & 4. Each node-pair shares a mirrored journal and two power supplies:

Here’s the detail of an individual compute module, which contains a multi-core Cascade Lake CPU, memory, an M.2 flash journal, up to two SSDs for L3 cache, six DIMM channels, front-end 40/100 or 10/25 Gb Ethernet, back-end 40/100 or 10/25 Gb Ethernet or InfiniBand, an Ethernet management interface, and power supply and cooling fans:

On the front of each chassis is an LCD front panel control with back-lit buttons and 4 LED Light Bar Segments – 1 per Node. These LEDs typically display blue for normal operation or yellow to indicate a node fault. This LCD display is hinged so it can be swung clear of the drive sleds for non-disruptive HDD replacement, etc:

So, in summary, the new PowerScale hardware delivers:

  • More Power
    • More cores, more memory and more cache
    • A300/3000 up to 2x faster than previous generation (A200/2000)
  • More Choice
    • 100GbE, 25GbE and Infiniband options for cluster interconnect
    • Node compatibility for all hybrid and archive nodes
    • 30TB to 320TB per rack unit
  • More Value
    • Inline data reduction across the PowerScale family
    • Lowest $/GB and most density among comparable solutions