OneFS SmartQuotas Notifications

A crucial part of OneFS SmartQuotas is notifying users about quota enforcement violations, both when a violation event occurs and, on a scheduled basis, while the violation state persists.

An enforcement quota may have several notification rules associated with it. Each notification rule specifies a condition and an action to be performed when the condition is met. Notification rules are considered part of enforcements. Clearing an enforcement also clears any notification rules associated with it.

Enforcement quotas support the following notification settings:

Quota Notification Setting Description
Global default Uses the global default notification for the specified type of quota.
Custom – basic Enables creation of basic custom notifications that apply to a specific quota. Can be configured for any or all of the threshold types (hard, soft, or advisory) for the specified quota.
Custom – advanced Enables creation of advanced, custom notifications that apply to a specific quota. Can be configured for any or all of the threshold types (hard, soft, or advisory) for the specified quota.
None Disables all notifications for the quota.

A quota notification condition is an event which may trigger an action defined by a notification rule. These notification rules may specify a schedule (for example, “every day at 5:00 AM”) for performing an action or immediate notification of a certain condition. Examples of notification conditions include:

  • Notify when a threshold is exceeded; at most, once every 5 minutes
  • Notify when allocation is denied; at most, once an hour
  • Notify while over threshold, daily at 2 AM
  • Notify while the grace period is expired, weekly on Sundays at 2 AM
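
To illustrate how such rules might be evaluated, here is a minimal Python sketch of instant-versus-scheduled rule matching with a rate limit. The rule structure and field names are hypothetical, purely for illustration, and not the actual OneFS implementation.

from dataclasses import dataclass
from datetime import datetime, timedelta
from typing import Optional

@dataclass
class NotificationRule:
    condition: str                   # e.g. 'threshold_exceeded', 'write_denied', 'over_threshold'
    schedule: Optional[str]          # None for instant rules; e.g. 'daily@02:00' for ongoing rules
    min_interval: timedelta          # rate limit, e.g. at most once every 5 minutes
    last_fired: Optional[datetime] = None

def should_fire(rule: NotificationRule, event: str, now: datetime) -> bool:
    """Return True if the rule matches this condition and is not rate-limited."""
    if rule.condition != event:
        return False
    if rule.last_fired is not None and now - rule.last_fired < rule.min_interval:
        return False
    return True

# An instant rule: notify when a threshold is exceeded, at most once every 5 minutes
rule = NotificationRule('threshold_exceeded', None, timedelta(minutes=5))
now = datetime.now()
if should_fire(rule, 'threshold_exceeded', now):
    rule.last_fired = now
    print('send threshold-exceeded notification')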

Notifications are triggered for events grouped into the following two categories:

Type Description
Instant notification Includes the write-denied notification triggered when a hard threshold denies a write and the threshold-exceeded notification, triggered at the moment a hard, soft, or advisory threshold is exceeded. These are one-time notifications because they represent a discrete event in time.
Ongoing notification Generated on a scheduled basis to indicate a persisting condition, such as a hard, soft, or advisory threshold being over a limit or a soft threshold’s grace period being expired for a prolonged period.

Each notification rule can perform one of the following notification actions, or none.

Quota Notification Action Description
Alert Sends an alert for one of the quota actions, detailed below.
Email Manual Address Sends email to a specific address, or multiple addresses (OneFS 8.2 and later).
Email Owner Emails the quota domain owner, resolving the address from the owner’s identity source as described below.

The email owner mapping is as follows:

Mapping Description
Active directory Lookup is performed against the domain controller (DC). If the user does not have an email setting, a configurable transformation from the user name and the DC’s fully qualified domain name is used to generate an email address.
LDAP LDAP user email resolution is similar to that for AD users, except that the email attribute looked up on the LDAP server is configurable by an administrator, based on the LDAP schema for the user account information.
NIS Only the configured email transformation for the NIS fully qualified domain name is used.
Local users Only the configured email transformation is used.
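
As a rough illustration of the fallback behavior described above, the following Python sketch derives an owner email address from a directory lookup result, falling back to a user@FQDN style transformation. The function name, parameters, and provider labels are assumptions for illustration only.

from typing import Optional

def resolve_owner_email(username: str, provider: str,
                        directory_email: Optional[str],
                        fqdn: Optional[str]) -> Optional[str]:
    """Simplified owner-to-email resolution: prefer the directory's email
    attribute (AD/LDAP), otherwise apply a 'user@fqdn' transformation."""
    if provider in ('ads', 'ldap') and directory_email:
        return directory_email
    if fqdn:
        return f'{username}@{fqdn}'
    return None   # e.g. a local user with no transformation configured

print(resolve_owner_email('jsmith', 'ads', None, 'corp.example.com'))
# -> jsmith@corp.example.com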

The actual quota notification is handled by a daemon, isi_quota_notify_d, which performs the following functions:

  • Processes kernel notification events as they are sent, matching them to notification rules to generate instant notifications (or other actions specified in the rule)
  • Processes notification schedules: the daemon checks notification rules on a scheduled basis; these rules specify which violation conditions should trigger notifications on a regular schedule
  • Performs notifications based on rule configuration to generate email messages or alert notifications
  • Manages persistent notification states so that pending events are processed in the event of a restart
  • Handles rescan requests when quotas are created or modified

SmartQuotas provides email templates for advisory, grace, and regular notification configuration, which can be found under /etc/ifs. The advisory limit email template (/etc/ifs/quota_email_advisory_template.txt), for example, contains:

Subject: Disk quota exceeded

The <ISI_QUOTA_DOMAIN_TYPE> quota on path <ISI_QUOTA_PATH> owned by <ISI_QUOTA_OWNER> has exceeded the <ISI_QUOTA_TYPE> limit.

The quota limit is <ISI_QUOTA_THRESHOLD>, and <ISI_QUOTA_USAGE> is currently in use. <ISI_QUOTA_HARD_LIMIT> Contact your system administrator for details.

An email template contains text, and, optionally, variables that represent quota values. The following table lists the SmartQuotas variables that may be included in an email template.

Variable Description Example
ISI_QUOTA_DOMAIN_TYPE Quota type. Valid values are: directory, user, group, default-directory, default-user, default-group default-directory
ISI_QUOTA_EXPIRATION Expiration date of grace period Fri Jan 8 12:34:56 PST 2021
ISI_QUOTA_GRACE Grace period, in days 5 days
ISI_QUOTA_HARD_LIMIT Includes the hard limit information of the quota to make advisory/soft email notifications more informational. You have 30 MB left until you reach the hard quota limit of 50 MB.
ISI_QUOTA_NODE Hostname of the node on which the quota event occurred us-wa-1
ISI_QUOTA_OWNER Name of quota domain owner jsmith
ISI_QUOTA_PATH Path of quota domain /ifs/home/jsmith
ISI_QUOTA_THRESHOLD Threshold value 20 GB
ISI_QUOTA_TYPE Threshold type Advisory
ISI_QUOTA_USAGE Disk space in use 10.5 GB
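
Conceptually, generating the message body is a straightforward token substitution: each <ISI_QUOTA_*> placeholder in the template is replaced with its runtime value. The following Python sketch illustrates the idea; it is not the daemon's actual rendering code.

def render_quota_email(template: str, values: dict) -> str:
    """Replace <ISI_QUOTA_*> tokens in an email template with their values."""
    for name, value in values.items():
        template = template.replace(f'<{name}>', str(value))
    return template

values = {
    'ISI_QUOTA_DOMAIN_TYPE': 'directory',
    'ISI_QUOTA_PATH': '/ifs/home/jsmith',
    'ISI_QUOTA_OWNER': 'jsmith',
    'ISI_QUOTA_TYPE': 'Advisory',
    'ISI_QUOTA_THRESHOLD': '20 GB',
    'ISI_QUOTA_USAGE': '10.5 GB',
}
body = ('The <ISI_QUOTA_DOMAIN_TYPE> quota on path <ISI_QUOTA_PATH> owned by '
        '<ISI_QUOTA_OWNER> has exceeded the <ISI_QUOTA_TYPE> limit. The quota '
        'limit is <ISI_QUOTA_THRESHOLD>, and <ISI_QUOTA_USAGE> is currently in use.')
print(render_quota_email(body, values))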

Note that the default quota templates under /etc/ifs are configured to send email notifications with a plain text MIME type. However, editing a template to start with an HTML tag (<html>) will allow an email client to interpret and display it as HTML content. For example:

<html><body>

<h1>Quota Exceeded</h1><p></p>

<hr>

<p> The path <ISI_QUOTA_PATH> has exceeded the threshold <ISI_QUOTA_THRESHOLD> for this <ISI_QUOTA_TYPE> quota. </p>

</body></html>

Various system alerts are sent to the standard cluster alerting system when specific events occur. These include:

Alert Type Level Event Description
NotifyFailed Warning An attempt to process a notification rule failed externally, such as an undelivered email.
NotifyConfig Warning A notification rule failed due to a configuration issue, such as a non-existent user or missing email address.
NotifyExceed Warning A child quota’s advisory, soft, or hard limit exceeds its parent quota’s hard limit.
ThresholdViolation Info A quota threshold was exceeded. The conditions under which this alert is triggered are defined by notification rules.
DomainError Error An invariant was violated that resulted in a forced domain rescan.

 

Unveiling Lakehouse – Explaining Data Lakehouse as Cloud-native DWP Part 2

In this article I focus on how the data lakehouse architecture compares with the classic data warehouse architecture. I imagine the data lakehouse architecture as an attempt to implement some of the core requirements of data warehouse architecture in a modern, cloud-native design. I will explore the advantages of cloud-native design, including the ability to dynamically provision resources in response to specific events, predetermined patterns, and other triggers. I also explore data lakehouse architecture as its own unique approach to addressing new or different types of practices, use cases, and consumers.

In an important sense, data lakehouse architecture is an effort to adapt the data warehouse and its architecture to the cloud, while also addressing a larger set of novel use cases, practices, and consumers. This claim is not as counterintuitive or daunting as it may seem. We can think of data warehouse architecture as a technical specification that enumerates and describes the set of requirements (features and capabilities) that the ideal data warehouse system must address, but does not specify how to design or implement the data warehouse. Designers are free to engineer their own novel implementations of the warehouse, such as what Joydeep Sen Sarma and Ashish Thusoo attempted with Apache Hive, a SQL interpreter for Hadoop, or what Google did with BigQuery, its NoSQL query-as-a-service offering.

The data lakehouse is a similar example. If a data lakehouse implementation addresses the set of requirements specified by data warehouse architecture, it can be considered a data warehouse.

In the previous article, What is Data Lakehouse? – Unstructured Data Quick Tips (unstructureddatatips.com), we saw that data lakehouse architecture differs from the monolithic design of classic data warehouse implementations and the more tightly coupled designs of big data-era platforms like Hadoop+Hive or PaaS warehouses like Snowflake.

So, how is data lakehouse architecture different and why?

Adapting Data Warehouse Architecture to Cloud

The classic implementation of data warehouse architecture is based on outdated expectations, especially regarding how the warehouse’s functions and resources are instantiated, connected, and accessed. For example, early implementers of data warehouse architecture expected the warehouse to be physically implemented as an RDBMS and for its components to connect to each other using a low-latency, high-throughput bus. They also expected SQL to be the only way to access and manipulate data in the warehouse.

Another expectation was that the data warehouse would be online and available all the time, and its functions would be tightly coupled to each other. This was a feature of its implementation in an RDBMS, but it made it impractical, if not impossible, to scale the warehouse’s resources independently.

None of these expectations are true in the cloud. We are familiar with the cloud as a metaphor for virtualization, which is the use of software to abstract and define various virtual resources, and for the scale-up/scale-down elasticity that is a defining characteristic of the cloud.

However, we may not spend as much time thinking about the cloud as a metaphor for event-driven provisioning of virtualized hardware, and the ability to provision software in response to events.

This on-demand dimension is arguably the most important practical benefit of the cloud’s elasticity and a significant difference between the data lakehouse and the classic data warehouse.

The Data Lakehouse as Cloud-native Data Warehouse

Event-driven design on this scale imposes a different set of hardware and software requirements, which cloud-native software engineering concepts, technologies, and methods address. Instead of monolithic applications that run on always-on, always-available, physically implemented hardware resources, cloud-native design allows developers to instantiate discrete software functions as loosely coupled services in response to specific events. These loosely coupled services correspond to the functions of an application, and applications are composed of these loosely coupled services, like the data lakehouse and its layered architecture.

What makes the data lakehouse cloud-native? It is cloud-native when it decomposes most, if not all, of the software functions implemented in data warehouse architecture into loosely coupled services. These functions include:

        • One or more functions that can store, retrieve, and modify data;
        • one or more functions that can perform various operations (such as joins) on data;
        • one or more functions that expose interfaces for users and jobs to store, retrieve, and modify data, and to specify different types of operations to perform on data;
        • one or more functions that manage and enforce data access and integrity safeguards;
        • one or more functions that generate or manage technical and business metadata;
        • one or more functions that manage and enforce data consistency safeguards when two or more users/jobs try to modify the same data simultaneously or when a new user or job tries to update data currently being accessed by prior users/jobs.
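
One way to picture this decomposition is as a set of narrow, independently replaceable interfaces. The Python sketch below is purely conceptual; the interface names and method signatures are invented for illustration, but they capture the idea that a lakehouse is a composition of loosely coupled services rather than a monolith.

from typing import Any, Dict, Iterable, Protocol

class StorageService(Protocol):
    def put(self, key: str, data: bytes) -> None: ...
    def get(self, key: str) -> bytes: ...

class QueryService(Protocol):
    def execute(self, sql: str) -> Iterable[Dict[str, Any]]: ...

class CatalogService(Protocol):
    def register(self, table: str, schema: Dict[str, str], location: str) -> None: ...

class AccessControlService(Protocol):
    def authorize(self, principal: str, action: str, resource: str) -> bool: ...

# A 'lakehouse' is then just a composition of services that talk to each other
# only through publicly documented interfaces like these, so any one of them can
# be swapped for an equivalent managed, third-party, or open-source offering.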

Using this as a guideline, we can say that a “pure” or “ideal” implementation of data lakehouse architecture would include:

      • The lakehouse service itself, which in addition to SQL query provides metadata management, data federation, and data cataloging capabilities. It also serves as a semantic layer by creating, maintaining, and versioning modeling logic, such as denormalized views applied to data in the lake.
      • The data lake, which at minimum provides schema enforcement and the ability to store, retrieve, modify, and schedule operations on objects/blobs in object storage. It also usually provides data profiling and discovery, metadata management, data cataloging, data engineering, and optionally data federation capabilities. It enforces access and data integrity safeguards across its zones and ideally generates and manages technical metadata for the data in these zones.
      • An object storage service that provides a scalable, cost-effective storage substrate and handles the work of storing, retrieving, and modifying data stored in file objects.

There are different ways to implement the data lakehouse. One option is to combine all these functions into a single omnibus platform, a data lake with its own data lakehouse, like what Databricks, Dremio, and others have done with their data lakehouse implementations.

Why Does Cloud-native Design Matter?

This raises some obvious questions. Why do this? What are the advantages of a loosely coupled architecture compared to the tightly integrated architecture of the classic data warehouse? As mentioned, one benefit of loose coupling is the ability to scale resources independently of each other, such as allocating more compute without adding storage or network resources. It also eliminates some dependencies that can cause software to break, so a change in one service will not necessarily impact or break other services, and the failure of a service will not necessarily cause other services to fail or lose data. Cloud-native design also uses mechanisms like service orchestration to manage and address service failures.

Another benefit of loose coupling is the potential to eliminate dependencies from reliance on a specific vendor’s or provider’s software. If services communicate and exchange data with each other solely through publicly documented APIs, it should be possible to replace a service that provides a set of functions (like SQL query) with an equivalent service. This is the premise of pure or ideal data lakehouse architecture, where each component is effectively commoditized (with equivalent services available from major cloud infrastructure providers, third-party SaaS and/or PaaS providers, and as open-source offerings) and reduces the risk of provider-specific lock-in.

The Data Lakehouse as Event-driven Data Warehouse

Cloud-native software design also expects the provisioning and deprovisioning of the hardware and software resources for loosely coupled cloud-native services to happen automatically. In other words, provisioning a cloud-native service means provisioning its enabling resources, and terminating a cloud-native service means deprovisioning these resources. In a way, cloud-native design wants to make hardware, and to some extent software, disappear as variables in deploying, managing, maintaining, and especially scaling business services.

From the perspective of consumers and expert users, there are only services – tools that do things.

For example, if an ML engineer designs a pipeline to extract and transform data from 100 GB of log files, a cloud-native compute engine will dynamically provision compute instances to process the workload. Once the engineer’s workload finishes, the engine will automatically terminate these instances.

Ideally, neither the engineer nor the usual IT support people (DBAs, systems and network administrators, and so forth) need to do anything to provision these compute instances or the software and hardware resources they depend on. Instead, this all happens automatically – for example, in response to an API call initiated by the engineer. The classic on-premises data warehouse was not designed with this kind of cloud-native, event-driven computing paradigm in mind.

The Data Lakehouse as Its Own Thing

The data lakehouse is supposed to be its own thing, providing the six functions listed above. However, it depends on other services – specifically, an object storage service and optionally a data lake service – to provide basic data storage and core data management functions. In addition, data lakehouse architecture implements novel software functions that have no obvious parallel in classic data warehouse architecture and are unique to the data lakehouse. These functions include:

      • One or more functions that can access, store, retrieve, modify, and perform operations (like joins) on data stored in object storage and/or third-party services. The lakehouse simplifies access to data in Amazon S3, AWS Lake Formation, Amazon Redshift, and so forth.
      • One or more functions that can discover, profile, catalog, and/or facilitate access to distributed data stored in object storage and/or third-party services. For example, a modeler creates denormalized views that combine data stored in the data lakehouse and in the staging zone of an AWS Lake Formation (a data lake), and designs advanced models incorporating data from an Amazon Redshift sales data mart.

However, in this respect, the lakehouse is not different from a PaaS data warehouse service, which we will explore in depth in future articles.

OneFS SmartQuotas Execution, Operation, and Governance

SmartQuotas employs the OneFS job engine to execute its work. Specifically, the QuotaScan job updates the accounting for quota domains created on an existing directory path. Although it typically runs without any intervention, the administrator has the option of manual control if necessary or desirable.

The OneFS job engine is based on a delegation hierarchy made up of coordinator, director, manager, and worker processes.

Once a SmartQuotas job is initially allocated, the job engine uses a shared work distribution model in order to execute the work, and each job is identified by a unique Job ID. When a job is launched, whether it’s scheduled, started manually, or responding to a cluster event, the Job Engine spawns a child process from the isi_job_d daemon running on each node. This job engine daemon is also known as the parent process.

The entire job engine’s orchestration is handled by the coordinator, a process that runs on one of the nodes in a cluster. While the actual work item allocation is managed by the individual nodes, the coordinator node takes control, divides up the job, and evenly distributes the resulting tasks across the nodes in the cluster. It is also responsible for starting and stopping jobs, and for processing work results as they are returned during the execution of a job.

Each node in the cluster has a job engine director process, which runs continuously and independently in the background. The director process is responsible for monitoring, governing and overseeing all job engine activity on a particular node, constantly waiting for instruction from the coordinator to start a new job. The director process serves as a central point of contact for all the manager processes running on a node, and as a liaison with the coordinator process across nodes.

Manager processes are responsible for arranging the flow of tasks and task results throughout the duration of a job. Each manager controls and assigns work items to multiple worker threads working on items for the designated job. Under direction from the coordinator and director, a manager process maintains the appropriate number of active threads for a configured impact level, and for the node’s current activity level.

Each worker thread is given a task, if available, which it processes item-by-item until the task is complete or the manager un-assigns the task. Towards the end of a job phase, the number of active threads decreases as workers finish up their allotted work and become idle. Nodes which have completed their work items just remain idle, waiting for the last remaining node to finish its work allocation. When all tasks are done, the job phase is considered to be complete, and the worker threads are terminated.

By default, QuotaScan runs with a ‘low’ impact policy and a low-priority value of ‘6’.

If quotas are created on empty directories, governance will instantaneously propagate from parent to child incrementally. If the directory is not empty, the QuotaScan job is used to update the governance.

A domain created on a non-empty directory will not be marked as ready. This triggers a QuotaScan job to be started, and it performs a treewalk to traverse the directory tree under the domain root.

The QuotaScan job is the cluster maintenance process responsible for scanning the cluster and performing the accounting activities needed to bring the determined governance to each inode. In essence, the job is a distributed tree walk that is performed based on the state of the domain.
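
Conceptually, the treewalk totals resource usage for every file reachable under the domain root. The Python sketch below shows a single-node, simplified version of that accounting idea; the real QuotaScan is a distributed, parallel job driven by the job engine and operates on inodes rather than paths.

import os
from collections import defaultdict

def quota_treewalk(domain_root: str) -> dict:
    """Total file count and logical bytes per (uid, gid) under a domain root."""
    usage = defaultdict(lambda: {'files': 0, 'logical_bytes': 0})
    for dirpath, _dirs, files in os.walk(domain_root):
        for name in files:
            try:
                st = os.lstat(os.path.join(dirpath, name))   # do not follow soft links
            except OSError:
                continue
            acct = usage[(st.st_uid, st.st_gid)]
            acct['files'] += 1
            acct['logical_bytes'] += st.st_size
    return dict(usage)

print(quota_treewalk('/ifs/home'))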

Under the hood, SmartQuotas is based on the concept of domains – the linchpins of quota accounting. Since OneFS is a single file system, it relies on domains for defining the scope of a quota in place of the typical volume boundaries found in most storage systems. As such, a domain defines which files belong to a quota, accounts for each resource type in that set and defines the top-level directory configuration point.

For SmartQuotas, the three main resource types are:

Resource Type Description
Directory A specific directory and all its subdirectories
User A specific user
Group All members of a specific group

A domain defined as “name@folder” would be the set of files under “folder,” owned by “name,” which could be either a user or a group. The files accounted include all files reachable from the given path, without traversing any soft links. The owner “name” can be ALL, and “/ifs,” the OneFS root directory, is also an effective ALL for “folder.”

With SmartQuotas, it is easy to create traditional domain types quickly by using “ALL.” The following are examples of domain types:

  • All files belonging to user Jane: user:Jane@/ifs
  • All files under /ifs/home, belonging to any user: ALL@/ifs/home.
  • All files under /ifs/home that belong to user Jane: user:Jane@/ifs/home

Domains cannot be created on anything but directories. More specifically, domains are associated with the actual directories themselves, not directory paths. For example, if the domain is ALL@/ifs/home/data, but /ifs/home/data gets renamed to /ifs/home/files, the domain stays with the directory.

Domains can also be nested and may overlap. For example, say a hard quota is set on /ifs/data/marketing for 5 TB. 1 TB soft quotas are then placed on individual users in the marketing department. This ensures that the marketing directory as a whole never exceeds 5 TB, while limiting the users in the marketing department to 1 TB each.
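
The following Python sketch illustrates these overlapping-domain semantics: a write is checked against every domain whose root contains the target path, so the per-user soft quotas and the directory-wide hard quota are evaluated together. The data structures are invented for illustration and bear no relation to the on-disk quota database.

def blocking_domains(path, usage_delta, domains):
    """Return the hard-quota domains that would be violated by writing
    usage_delta bytes at path. domains maps a domain root to its current
    'usage' and 'hard' limit (None = no hard limit)."""
    violated = []
    for root, dom in domains.items():
        if path == root or path.startswith(root.rstrip('/') + '/'):
            if dom['hard'] is not None and dom['usage'] + usage_delta > dom['hard']:
                violated.append(root)
    return violated

TB = 1024 ** 4
domains = {
    '/ifs/data/marketing':      {'usage': 4.9 * TB, 'hard': 5 * TB},
    '/ifs/data/marketing/jane': {'usage': 0.2 * TB, 'hard': None},   # 1 TB soft only
}
print(blocking_domains('/ifs/data/marketing/jane/file.bin', 0.2 * TB, domains))
# -> ['/ifs/data/marketing']  (the parent hard quota would be exceeded)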

A default quota domain is one that does not account for any specific set of files but instead specifies a policy for new domains that match a specific trigger. In other words, default domains are configuration templates for actual domains. SmartQuotas uses the identity notation ‘default-user’, ‘default-group’, and ‘default-directory’ to describe domains with default policies. For example, the domain default-user@/ifs/home becomes specific-user@/ifs/home for each specific-user that is not otherwise defined. All enforcements on default-user are copied to specific-user when specific-user allocates within the domain, and the new inherited domain quota is termed a linked quota. There may be overlapping defaults (default-user@/ifs and default-user@/ifs/home may both be defined).

Default quota domains help drastically simplify quota management for large environments by providing a mechanism to define top-level template configurations from which many actual quotas can be cloned, or linked. When a default quota domain is configured on a directory, any subdirectories created directly underneath it will automatically inherit the quota limits specified in the parent domain. This streamlines the provisioning and management of quotas for large enterprise environments. Furthermore, default directory quotas can co-exist with user and/or group quotas and legacy default quotas.

Default directory quotas have been available since OneFS 8.2, in addition to the default user and group quotas available in earlier releases. For example:

  • Create default-directory quota
# isi quota create --path=/ifs/parent-dir --type=default-directory --hard-threshold=10M
  • Modify default-directory quota
# isi quota modify --path=/ifs/parent-dir --type=default-directory --advisory-threshold=6M --soft-threshold=7M --soft-grace=1D
  • List default-directory quota
# isi quota list
  Type              AppliesTo  Path            Snap  Hard   Soft  Adv  Used
  --------------------------------------------------------------------------
  default-directory DEFAULT    /ifs/parent-dir No    10.00M -    6.00M 0.00
  --------------------------------------------------------------------------
  Total: 1
  • Delete default-directory quota
# isi quota delete --path=/ifs/parent-dir --type=default-directory

If the enforcements on a default domain change, SmartQuotas will automatically propagate the changes to the Linked Quota domains. If a default quota domain is deleted, SmartQuotas will delete all children marked as inherited. An administrator may also choose to delete the default without deleting the children, but this will break inheritance on all inherited children.

For example, creating or deleting a subdirectory under a folder with a default directory quota causes an inherited directory quota to be created or removed accordingly.

A quota domain may be in one of three accounting states as described in the following table:

Quota Accounting States Description
Ready A domain in the ready state is fully accounted. SmartQuotas displays “ready” domains in all interfaces and all enforcements apply to such domains.
Accounting A domain is placed in the Accounting state when it is waiting on accounting updates.
Deleting After a request to delete a domain, SmartQuotas will place the domain in the deleting state until tear-down is complete. Domain removal may be a lengthy process.

SmartQuotas displays accounting domains in all interfaces, including usage data, but indicates they are in the process of being “Accounted.” SmartQuotas applies all enforcements to accounting domains, even though this might reject an allocation that would have been permitted had the QuotaScan completed.

Domains in the deleting state are hidden from all interfaces, and the top-level directory of a domain may be deleted while the domain is still in the deleting state (assuming there are no domains in “Ready” or “Accounting” state defined on the directory). No enforcements are applied for domains in “Deleting” state.

A quota scan is performed when the domain is in an accounting state. This can occur during quota creation, to account the new domain if a quota has been set for it, and during quota deletion, to un-account the domain. A QuotaScan is required when creating a quota on a non-empty directory. If quotas are created up-front on an empty directory, no QuotaScan is necessary.

A QuotaScan job may be started either from the WebUI or the CLI with the following syntax:

# isi job jobs start quotascan

Any path specified on the command line is treated as the root of a tree that should be processed. This is provided primarily as a means to rescan a directory for maintenance reasons.

In addition to the core isi_smartquotas service, there are three processes, or daemons, associated with SmartQuotas:

Daemon Details
isi_quota_notify_d Listens for ‘limit exceeded’ and ‘link denied’ events and generates notifications for each. Also responds to configuration change events and instructs the QDB to generate ‘expired’ and ‘violated’ over-threshold notifications.
isi_quota_report_d Generates quota reports. Since the QDB only produces real-time resource usage, reports are necessary for providing point-in-time views of a quota domain’s usage. These historical reports are useful for trend analysis of quota resource usage.
isi_quota_sweeper_d Responsible for quota housekeeping tasks such as propagating default changes, domain and notification rule garbage collection, and kicking off QuotaScan jobs when necessary.

 

These can be viewed as follows:

# isi services -a | grep -i quota

   isi_smartquotas      SmartQuotas Service                      Enabled

# ps -auxw | grep -i quota

root    4852    0.0  0.0  26708   8404  -  Is   Sat20        0:00.00 /usr/sbin/isi_quota_report_d

root    4860    0.0  0.0  26812   8424  -  Is   Sat20        0:00.00 /usr/sbin/isi_quota_notify_d

root    4872    0.0  0.0  26836   8488  -  Is   Sat20        0:00.00 /usr/sbin/isi_quota_sweeper_d

OneFS 8.2 and later also include the rpc.quotad service to facilitate client-side quota reporting on UNIX and Linux clients using native ‘quota’ tools. The service, which runs on TCP/UDP port 762, is enabled by default, and is controlled under the NFS global settings.

Also, users can view their available capacity as set by soft or hard user and group quotas, rather than the entire cluster capacity or parent directory quotas. This avoids the ‘illusion’ of seeing available space that may not be associated with their quotas.

SmartQuotas is included as a core component of OneFS but requires a valid product license key in order to activate. This license key can be purchased through your Dell EMC account team. An unlicensed cluster will show a SmartQuotas warning until a valid product license has been purchased and applied to the cluster.

License keys can be easily added through the ‘Activate License’ section of the OneFS WebUI, accessed by going to Cluster Management > Licensing.

OneFS SmartQuotas Architectural Fundamentals

As we saw in the previous article in this series, at a high level, there are three main elements to a OneFS quota:

Element Description
Domain Defines which files and directories belong to a quota
Resource The quantity being limited
Enforcement Specifies the limits and the actions that are taken when those thresholds are exceeded

We’ll look at each of these elements over the course of this series of articles. But first, let’s delve into the architecture.

Under the hood, SmartQuotas hinges on the quota domain and quota database, and the general operational flow is as follows:

Each quota is governed by a OneFS domain, which defines the quota’s scope and includes a set of usage levels, limits, and configuration options. Most of this information is organized and managed by the file system and stored in the quota database (QDB). This database is represented in a B-tree structure, known as the quota tree, and allows both scalability and fast random access. Because of its importance, the quota database is protected at OneFS’ highest metadata level. The quota accounting blocks (QABs) within individual records are protected at the same level as the associated directory.

A quota domain is made up of the following principal parts:

Component Description
Quota domain key Where the unique identifier for the domain is stored.
Quota domain header (QDH) Contains various state and configuration information that affects the domain as a whole.
Quota domain enforcements Manages quota limits, including whether they have been reached or exceeded, notification information, and the quota grace period.
Quota domain account (QDA) Handles tracking of usage levels for the domain. The QDA tracks physical, logical, and file resource types for each domain.

The QDB is a data structure that stores quota domain records (QDRs). Resource allocation and governance changes are recorded in the quota operation associated with a transaction, then totaled and applied persistently to the QDRs.

Within the QDB, a quota domain record stores all configuration and state associated with a domain. The record can be broken down into three components:

Component Description
Configuration Fields within quota config, such as whether the domain is a container. Despite the name, this includes some state fields like the Ready flag.
Enforcements A list of quota enforcements, which include the limit, grace period, and notification state. Although the structure is flexible, only three enforcements are allowed and only for a single resource.
Account The quota account for the domain.
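
As a rough mental model only, a quota domain record can be pictured as the following Python data structure; the field names are illustrative and do not reflect the actual on-disk or in-memory layout.

from dataclasses import dataclass, field
from typing import List, Optional

@dataclass
class Enforcement:
    kind: str                            # 'advisory', 'soft', or 'hard'
    limit: int                           # threshold for the single limited resource
    grace_seconds: Optional[int] = None  # soft quotas only
    state: str = 'U'                     # 'U' under, 'O' over, 'E' expired

@dataclass
class QuotaDomainRecord:
    config: dict = field(default_factory=dict)                      # includes the Ready flag
    enforcements: List[Enforcement] = field(default_factory=list)   # at most three
    account: dict = field(default_factory=dict)                     # usage per resource type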

The on-disk format of the QDR is dynamic, based on the configured enforcements and the state of the account, so the on-disk structures look different from the in-memory structures.

Quota domain locks synchronize access to quota domain records in the QDB.

The main challenge for quota domain locks is that the need to exclusively lock a quota domain is not known until the accounting is fully determined. In fact, it may not be known until responses from transaction deltas have been received and reported to the initiator. To address this, quota domain locks use optimistic restarts.

Quota Account Blocks (QABs) enable high-performance accounting using transaction deltas. Since quota usage information is stale by the time it is viewed anyway, locking is simplified by using an exclusive domain lock for coherent reads of usage.

Each QAB contains a large number of Quota Accounting records, which need to be updated whenever a particular user adds or removes data from an area of the file system on which quotas are enabled (quota domain). If a large number of clients are simultaneously accessing the quota domain, these blocks can become highly contended and a potential bottleneck. Similarly, if a single client (or a small number of clients) consistently makes a large number of small writes to files within a single quota, write performance could again be impacted.

To address this, quota accounts have a mechanism to help avoid hot spots on the nodes storing QABs. Quota Account Constituents (QACs) help parallelize the quota accounting by including additional QAB mirrors distributed across other nodes in the cluster.

Configuration is managed through a sysctl, efs.quota.reorganize.qac_ratio, which increases the number of quota accounting constituents. This provides better scalability and reduces latencies on heavy create/delete activities when quotas are used.

Using this parameter, the internally calculated QAC count for each quota is multiplied by the specified value. If a workflow experiences write performance issues, and it has many writes to files or directories governed by a single quota, then increasing the QAC ratio may significantly improve write performance.

The sysctl efs.quota.reorganize.qac_ratio can be reconfigured to its maximum value of 8 from its default value of 1 using the following CLI command:

# isi_sysctl_cluster efs.quota.reorganize.qac_ratio=8

To verify the persistent change, run:

# cat /etc/mcp/override/sysctl.conf | grep qac_ratio

efs.quota.reorganize.qac_ratio=8 #added by script

Although increasing the QAC count through this sysctl can improve performance on write heavy quota domains, some amount of experimentation may be required until the ideal QAC ratio value is found. Adjusting the parameter can adversely affect write performance if you apply a value that is too high, or if you apply the parameter in an environment that does not have diminished write performance due to quota contention.

Additionally, OneFS provides a CLI command that can restripe the QABs to improve their performance:

# isi_restripe_qabs retune

This utility can be run either on demand or periodically to randomly redistribute QABs for all existing quotas. It does this by ignoring the default ‘rebalance’ layout and running a ‘retune’ layout strategy instead, thereby alleviating the performance impact from an imbalanced QAB layout.

Unveiling Lakehouse – What is Data Lakehouse? Part1

What is Data Lakehouse?

This article aims to introduce the data lakehouse and describe what is new and different about it.

The Data Lakehouse Explained

The term “lakehouse” is derived from its two foundational technologies: the data lake and the data warehouse. The lakehouse is a concept, or data paradigm, that can be built using different sets of technologies to fulfill its objectives.

At a high level, the data lakehouse consists of the following components:

      • Data lakehouse
      • Data lake
      • Object storage

The data lakehouse describes a data warehouse-like service that runs against a data lake, which sits on top of object storage. These services are distributed in the sense that they are not consolidated into a single, monolithic application, as with a relational database. They are independent in the sense that they are loosely coupled or decoupled — that is, they expose well-documented interfaces that permit them to communicate and exchange data with one another. Loose coupling is a foundational concept in distributed software architecture and a defining characteristic of cloud services and cloud-native design.

How Does the Data Lakehouse Work? 

From the top to the bottom of the data lakehouse stack, each constituent service is more specialized than the service that sits “underneath” it.

      • Data lakehouse: The data lakehouse is a highly specialized abstraction, or semantic, layer that exposes data in the lake for operational reporting, ad hoc query, historical analysis, planning and forecasting, and other data warehousing workloads.
      • Data lake: The data lake is a less specialized abstraction layer that schematizes and manages the objects contained in an underlying object storage service and schedules operations to be performed on them. The data lake can efficiently ingest and store data of every type: structured relational data (which it persists in a columnar object format), semi-structured data (text, logs, documents), and unstructured or multi-structured data (files of any type).
      • Object storage: As the foundation of the lakehouse stack, object storage consists of an even more basic abstraction layer: a performant and cost-effective means of provisioning and scaling storage on demand.

Again, for the data lakehouse to work, the architecture must be loosely coupled. For example, several public cloud SQL query services, when combined with cloud data lake and object storage services, can be used to create the data lakehouse. This solution is the “ideal” data lakehouse in the sense that it is a rigorous implementation of a formal, loosely coupled architectural design. The SQL query service runs against the data lake service, which sits on top of an object storage service. Subscribers instantiate prebuilt queries, views, and data modeling logic in the SQL query service, which functions like a semantic layer. And this whole solution is the data lakehouse.

This implementation is distinct from the data lakehouse services that Databricks, Dremio, and others market. These implementations are coupled to a specific data lake implementation, with the result that deploying the lakehouse means, in effect, deploying each vendor’s data lake service, too.

The formal rigor of an ideal data lakehouse implementation has one obvious benefit: It is notionally easier to replace one type of service (for example, a SQL query) with an equivalent commercial or open-source service.

What Is New and Different About the Data Lakehouse?

It all starts with the data lake. Again, the data lakehouse is a higher-level abstraction superimposed over the data in the lake. The lake usually consists of several zones, the names and purposes of which vary according to implementation. At a minimum, the lake consists of the following zones:

      • one or more ingest or landing zones for data.
      • one or more staging zones, in which experts work with and engineer data; and
      • one or more “curated” zones, in which prepared and engineered data is made available for access.

Usually, the data lake is home to all of an organization’s useful data. This data is already there. So, the data lakehouse begins with query against this data where it lives.

It is in the curated (GOLD) zone of the data lake that the data lakehouse itself lives, although it is also able to access and query against data stored in the lake’s other zones. In this way the data lakehouse can support not only traditional data warehousing use cases, but also innovative use cases such as data science, machine learning, and artificial intelligence engineering.

The following are the advantages of the data lakehouse.

  1. More agile and less fragile than the data warehouse

Querying against data in the lake eliminates the multistep process involved in moving the data, engineering it and moving it again before loading it into the warehouse. (In extract, load, transform [ELT], data is engineered in the warehouse itself. This removes a second data movement operation.) This process is closely associated with the use of extract, transform, load (ETL) software. With the data lakehouse, instead of modeling data twice — first, during the ETL phase, and, second, to design denormalized views for a semantic layer, or to instantiate data modeling and data engineering logic in code — experts need only perform this second modeling step.

The result is less complicated (and less costly) ETL, and a less fragile data lakehouse.

  2. Query against data in place in the data lake

Querying against the data lakehouse makes sense because all of an organization’s business-critical data is already there — that is, in the data lake. Data gets stored into the lake from sensors and other sources, from workloads, business apps and services, from online transaction processing systems, from subscription feeds, and so on.

The strong claim is that the extra ability to query against data in the whole of the lake — that is, its staging and non-curated zones — can accelerate data delivery for time-sensitive use cases. A related claim is that it is useful to query against data in the lakehouse, even if an organization already has a data warehouse, at least for some time-sensitive use cases or practices.

The weak claim is that the lakehouse is a suitable replacement for the data warehouse.

  3. Query against relational, semi-structured, and multi-structured data

The data lakehouse sits atop the data lake, which ingests, stores and manages data of every type. Moreover, the lake’s curated zone need not be restricted solely to relational data: Organizations can store and model time series, graph, document, and other types of data there. Even though this is possible with a data warehouse, it is not cost-effective.

  4. More rapidly provision data for time-sensitive use cases

Expert users — say, scientists working on a clinical trial — can access raw trial results in the data lake’s non-curated ingest zone, or in a special zone created for this purpose. This data is not provisioned for access by all users; only expert users who understand the clinical data are permitted to access and work with it. Again, this and similar scenarios are possible because the lake functions as a central hub for data collection, access, and governance. The necessary data is already there, in the data lake’s raw or staging zones, “outside” the data lakehouse’s strictly governed zone. The organization is just giving a certain class of privileged experts early access to it.

  5. Better support for DevOps and software engineering

Unlike the classic data warehouse, the lake and the lakehouse expose various access APIs, in addition to a SQL query interface.

For example, instead of relying on ODBC/JDBC interfaces and ORM techniques to acquire and transform data from the lakehouse — or using ETL software that mandates the use of its own tool-specific programming language and IDE design facility — a software engineer can use preferred dev tools and cloud services, so long as these are also supported by the team’s DevOps toolchain. The data lake/lakehouse, with its diversity of data exchange methods, its abundance of co-local compute services, and, not least, the access it affords to raw data, is arguably a better “player” in the DevOps universe than the data warehouse. In theory, it supports a larger variety of use cases, practices, and consumers — especially expert users.

True, most RDBMSs, especially cloud PaaS RDBMSs, now support access using RESTful APIs and language-specific SDKs. This does not change the fact that some experts, particularly software engineers, are not — at all — charmed by the RDBMS.

Another consideration is that the data warehouse, especially, is a strictly governed repository. The data lakehouse imposes its own governance strictures, but the lake’s other zones can be less strictly governed. This makes the combination of the data lake and data lakehouse suitable for practices and use cases that require time-sensitive, raw, or lightly prepared data (such as ML engineering).

  6. Support more and different types of analytic practices

For expert users, the data lakehouse simplifies the task of accessing and working with raw or semi-/multi-structured data.

Data scientists, ML, and AI engineers, and, not least, data engineers can put data into the lake, acquire data from it, and take advantage of its co-locality with an assortment of intra-cloud compute services to engineer data. Experts need not use SQL; rather, they can work with their preferred languages, libraries, services and tools (notebooks, editors, and favorite CLI shells). They can also use their preferred conceptual vocabularies. So, for example, experts can build and work with data pipelines, as distinct from designing ETL jobs. In place of an ETL tool, they can use a tool such as Apache Airflow to schedule, orchestrate, and monitor workflows.

Summary

It is impossible to untie the value and usefulness of the data lakehouse from that of the data lake. In theory, the combination of the two — that is, the data lakehouse layered atop the data lake — outperforms the usefulness, flexibility, and capabilities of the data warehouse. The discussion above sometimes refers separately to the data lake and to the data lakehouse. What is usually meant, however, is the co-locality of the data lakehouse with the data lake — the “data lake/house,” if you like.

 

OneFS SmartQuotas

OneFS SmartQuotas help measure, predict, control, and limit the rate of storage capacity consumption, allowing precise cluster provisioning to best meet an organization’s storage needs. SmartQuotas also enables ‘thin provisioning’, or the ability to present more storage capacity to applications and users than is physically present (over-provisioning). This allows storage capacity to be purchased and provisioned organically and in real time, rather than making large, speculative buying decisions ahead of time. As we will see, OneFS also leverages quotas for calculating and reporting on data reduction and storage efficiency across user-defined subsets of the /ifs file system.

SmartQuotas provides two fundamental types of capacity quota:

  • Accounting Quotas
  • Enforcement Quotas

Accounting Quotas simply monitor and report on the amount of storage consumed, but do not take any limiting action or intervention. Instead, they are primarily used for auditing, planning, or billing purposes. For example, SmartQuotas accounting quotas can be used to:

  • Generate reports to analyze and identify storage usage patterns and trends. These can then be used to define storage policies for the business.
  • Track the amount of disk space used by various users, groups, or departments to bill each entity for only the storage capacity they actually consume (charge-back).
  • Intelligently plan for capacity expansions and future storage needs.

The ‘isi quota quotas create --enforced=false’ CLI command can be used to create an accounting quota. Alternatively, this can be done from the WebUI by navigating to File System > SmartQuotas > Quotas and usage > Create quota.

The following CLI command creates an accounting quota for the /ifs/data/acct_quota_1 directory, setting an advisory threshold that is informative rather than enforced.

# isi quota quotas create /ifs/data/acct_quota_1 directory --advisory-threshold=10M --enforced=false

Before using quota data for analysis or other purposes, verify that no QuotaScan jobs are in progress by running the following CLI command:

# isi job events list --job-type quotascan

In contrast, enforcement quotas include all of the functionality of the accounting option plus the ability to limit disk storage and send notifications. The ‘isi quota quotas create --enforced=true’ CLI syntax can be used to create an enforcement quota. Alternatively, this can be done from the WebUI by navigating to File System > SmartQuotas > Quotas and usage > Create quota.

The following CLI command creates an enforcement quota for the /ifs/data/enforce_quota_1 directory, setting an advisory threshold with enforcement enabled.

# isi quota quotas create /ifs/data/enforce_quota_1 directory --advisory-threshold=10M --enforced=true

Using enforcement limits, a cluster can be logically partitioned in order to control or restrict how much storage a user, group, or directory can use. For example, capacity limits can be configured to ensure that adequate space is always available for key projects and critical applications – and to ensure that users of the cluster do not exceed their allotted storage capacity.

Optionally, real-time email quota notifications can be sent to users, group managers, or administrators when they are approaching or have exceeded a quota limit.

A OneFS quota can have one of four enforcement types:

Enforcement Description
Hard A limit that cannot be exceeded. If an operation such as a file write causes a quota target to exceed a hard quota, the operation fails, an alert is logged to the cluster and a notification is sent to any specified recipients. Writes resume when the usage falls below the threshold.
Soft A limit that can be exceeded until a grace period has expired. When a soft quota is exceeded, an alert is logged to the cluster and a notification is issued to any specified recipients. However, data writes are permitted during the grace period. If the soft threshold is still exceeded when the period expires, writes will be blocked, and a hard-limit notification issued to any specified recipients.
Advisory An informal limit that can be exceeded. When an advisory quota threshold is exceeded, an alert is logged to the cluster and a notification is issued to any specified recipients. Reaching an advisory quota threshold does not prevent data writes.
None No enforcement. Quota is accounting only.

All three quota types have both a limit, or threshold, and a grace period. In OneFS 8.2 and later, a soft quota and advisory quota threshold can be specified as a percentage, as well as a specific capacity. For example:

# isi quota quotas create /ifs/quota directory --percent-advisory-threshold=80 --percent-soft-threshold=90 --soft-grace=1d --hard-threshold=100G

A hard quota has a zero-time grace period, an advisory quota has an infinite grace period, and a soft quota has a configurable grace period. When a quota limit and grace period have been exceeded, client write operations anywhere within that quota domain will fail with EDQUOT. Although enforcements are implemented generically in the quota database, only one resource may be limited per domain, either logical or physical space.
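
From a client application’s point of view, a blocked allocation surfaces as an EDQUOT error from the write system call. A minimal, generic Python sketch of handling that error (not OneFS-specific code):

import errno

def append_data(path: str, data: bytes) -> bool:
    """Attempt an append and report whether it was rejected by a quota."""
    try:
        with open(path, 'ab') as f:
            f.write(data)
        return True
    except OSError as exc:
        if exc.errno == errno.EDQUOT:
            print(f'write to {path} denied: disk quota exceeded (EDQUOT)')
            return False
        raise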

Even when a hard quota limit is reached, there are certain instances where operations are not blocked. These include administrative control through root (UID 0), system maintenance activities, and the ability of a blocked user to free up space.

The table below describes the three SmartQuotas enforcement states:

Enforcement State Description
Under (U) If the usage is less than the enforcement threshold, the enforcement is in state U.
Over (O) If the usage is greater than the enforcement threshold, the enforcement is in state O.
Expired (E) If the usage is greater than the soft threshold, and the usage has remained over the enforcement threshold past the grace period expiration, the soft threshold is in state E. If an administrator modifies the soft threshold but not the grace period, and the usage still exceeds the threshold, the enforcement is in state E.
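
For illustration, the state transitions in the table above can be expressed as a small function. This is a simplified sketch; the real state is maintained inside the quota database alongside the enforcement records.

from datetime import datetime, timedelta
from typing import Optional

def enforcement_state(usage: int, threshold: int,
                      over_since: Optional[datetime] = None,
                      grace: Optional[timedelta] = None,
                      now: Optional[datetime] = None) -> str:
    """Return 'U' (under), 'O' (over), or 'E' (grace period expired)."""
    if usage <= threshold:
        return 'U'
    if grace is not None and over_since is not None:
        if (now or datetime.now()) - over_since > grace:
            return 'E'
    return 'O'

# A soft quota exceeded for longer than its one-day grace period:
print(enforcement_state(12, 10,
                        over_since=datetime(2021, 1, 1),
                        grace=timedelta(days=1),
                        now=datetime(2021, 1, 3)))   # -> 'E'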

There are a few exceptions to quota enforcement, including the following scenarios:

  • If a domain has an accounting only quota, enforcements for the domain are not applied.
  • Any administrator action may push a domain over quota. Examples include changing protection, taking a snapshot, or removing a snapshot. The administrator may write into any domain without obeying enforcements.
  • Any system action may push a domain over quota, including repair. OneFS maintenance processes are as powerful as the administrator.

Governance is the mechanism by which SmartQuotas determines which domains apply to a given file or directory. After a sequence of domain configuration changes, a persistent record is needed in order to know where a file had been accounted. As such, quotas utilize ‘tagging’, and the governing domains are recorded in a dynamic attribute of the inode.

A Quota Domain Account (QDA) tracks the usage and limits of a particular domain. For scalability reasons, the QDA system dynamically breaks up the quota domain’s account into some number of Quota Domain Account Constituents (QACs), each of which tracks a part of the account. Modifications to the account are distributed at random among these constituents. Each QAC is stored in a set of mirrored Quota Accounting Blocks (QABs). QABs track usage of a quota and consist of several level counters for different tracked resource types and level limits for advisory, soft, and hard quotas.

The Quota Domain Record stores all configuration and state associated with a domain. The record can be subdivided into three components:

Component Description
Configuration Quota configuration.
Enforcement This includes the grace period, limit, and notification state.
Account The mechanism for space utilization accounting.

With SmartQuotas, there are three main ways of tracking, enforcing, and reporting resource usage:

Tracking Method Description
Physical size This is simple to track, since it includes all the data and metadata resources used, including the data-protection overhead. The quota system is also able to track the difference before and after the operation.
File system logical size This is slightly more complex to calculate and track but provides the user with a more comprehensible means of understanding their usage.
File accounting This is the most straightforward, since whenever a file is added to a domain, the file count is incremented.
Application logical size Reports total logical data store across different tiers, including CloudPools, to account for the exact file sizes. Allows users to view quotas and free space as an application would view it, in terms of how much capacity is available to store logical data, regardless of data reduction or tiering technology.

 

Prior to OneFS 8.2, SmartQuotas size accounting metrics typically used a count of the number of 8 KB blocks required to store file data on the cluster. Accounting based on block count can result in challenges, such as small file over-reporting. For example, a 4 KB file would be logically accounted for as 8 KB. Similarly, block-based quota accounting only extends to on-premises capacity consumption. This means that a 100 MB file stored within a CloudPools tier would only be accounted for as an 8 KB SmartLink stub file, rather than its actual size.
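
The over-reporting effect is easy to see with a little arithmetic: block-based accounting rounds every file up to whole 8 KB blocks, and a cloud-tiered file is represented on the cluster only by its small SmartLink stub. The Python sketch below just illustrates the rounding; the stub size mentioned is illustrative.

BLOCK = 8 * 1024   # 8 KB filesystem block

def block_accounted_bytes(logical_bytes: int) -> int:
    """Round a file's logical size up to whole 8 KB blocks (pre-8.2 style accounting)."""
    return -(-logical_bytes // BLOCK) * BLOCK    # ceiling division

print(block_accounted_bytes(4 * 1024))        # 4 KB file  -> 8192 bytes accounted
print(block_accounted_bytes(100 * 1024**2))   # 100 MB file resident on the cluster
# A 100 MB file tiered to the cloud leaves only a small SmartLink stub on the
# cluster, so block-based accounting would report roughly one block (~8 KB),
# not the file's true 100 MB application logical size.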

To directly address this issue in OneFS 8.2 and later, application logical quotas provide an additional quota accounting metric. Application logical size accounts for, reports, and enforces on the actual space consumed and available for storage, independent of whether files are cloud-tiered, sparse, deduplicated, or compressed. Application logical quotas can be easily configured from the CLI with the following syntax:

# isi quota quotas create <dir> directory --thresholds-on=applogicalsize

Any legacy quotas created on OneFS versions prior to 8.2 can easily be converted to use application logical size upon upgrade.

For logical space accounting, some inode attributes such as ACLs and symbolic links are included in the resource count. This uses the same data that is displayed in the ‘logical size’ field by the ‘isi get -DD <file>’ CLI command.