HDP Upgrade and Transparent Data Encryption(TDE) support on Isilon OneFS 8.2


The objective of this testing is to demonstrate a Hortonworks HDP upgrade from HDP 2.6.5 to HDP 3.1, during which the Transparent Data Encryption (TDE) KMS keys and configuration are accurately ported from the HDFS service to the OneFS service. This lets Hadoop users leverage TDE support on OneFS 8.2 straight out of the box after the upgrade, without any changes to the TDE/KMS configuration.

HDFS Transparent Data Encryption

The primary motivation for Transparent Data Encryption on HDFS is to support both end-to-end on-wire and at-rest encryption of data without any modification to the user application. The TDE scheme adds an additional layer of data protection by storing the decryption keys for files on a separate key management server. This separation of keys and data guarantees that even if the HDFS service is completely compromised, the files cannot be decrypted without also compromising the keystore.

Concerns and Risks

The primary concern with TDE is mangling or losing Encrypted Data Encryption Keys (EDEKs), which are unique to each file in an Encryption Zone and are necessary to decrypt the data within. If this occurs, the customer’s data will be lost (DL). A secondary concern is managing Encryption Zone Keys (EKs), which are unique to each Encryption Zone and are associated with the root directory of each zone. Losing or mangling the EK would result in data unavailability (DU) for the customer and would require admin intervention to remedy. Finally, we need to make sure that EDEKs are not reused in any way, as this would weaken the security of TDE. Otherwise, there is little to no risk to existing or otherwise unencrypted data, since TDE only works within Encryption Zones, which are not currently supported.

Hortonworks HDP 2.6.5 on Isilon OneFS 8.2

Install HDP 2.6.5 on OneFS 8.2 by following the install guide.

Note: The install document is written for OneFS 8.1.2, where the hdfs user is mapped to root in the Isilon settings. That mapping is not required on OneFS 8.2; instead, a new role must be created to give the hdfs user backup/restore (RWX) access on the file system.

OneFS 8.2  [New steps: create a backup/restore role for the hdfs user in the access zone]

hop-isi-dd-3# isi auth roles create --name=BackUpAdmin --description="Bypass FS permissions" --zone=hdp
hop-isi-dd-3# isi auth roles modify BackupAdmin --add-priv=ISI_PRIV_IFS_RESTORE --zone=hdp
hop-isi-dd-3# isi auth roles modify BackupAdmin --add-priv=ISI_PRIV_IFS_BACKUP --zone=hdp
hop-isi-dd-3# isi auth roles view BackUpAdmin --zone=hdp
Name: BackUpAdmin
Description: Bypass FS permissions
    Members: -
Privileges
ID: ISI_PRIV_IFS_BACKUP
      Read Only: True

ID: ISI_PRIV_IFS_RESTORE
      Read Only: True

hop-isi-dd-3# isi auth roles modify BackupAdmin --add-user=hdfs --zone=hdp


----- [ Optional: Flush the auth mapping and auth cache so the hdfs role change takes effect immediately]
hop-isi-dd-3# isi auth mapping flush --all
hop-isi-dd-3# isi auth cache flush --all
-----

 

1. After HDP 2.6.5 is installed on OneFS 8.2 following the install guide and the steps above to add the hdfs user backup/restore role, install the Ranger and Ranger KMS services, then run service checks on all the services to make sure the cluster is healthy and functional.

 

2. On the Isilon cluster, make sure the hdfs access zone and the hdfs user role are set up as required.

Isilon version

hop-isi-dd-3# isi version
Isilon OneFS v8.2.0.0 B_8_2_0_0_007(RELEASE): 0x802005000000007:Thu Apr  4 11:44:04 PDT 2019 root@sea-build11-04:/b/mnt/obj/b/mnt/src/amd64.amd64/sys/IQ.amd64.release   FreeBSD clang version 3.9.1 (tags/RELEASE_391/final 289601) (based on LLVM 3.9.1)
hop-isi-dd-3#
HDFS user role setup
hop-isi-dd-3# isi auth roles view BackupAdmin --zone=hdp
Name: BackUpAdmin
Description: Bypass FS permissions
Members: hdfs
Privileges
ID: ISI_PRIV_IFS_BACKUP
Read Only: True

ID: ISI_PRIV_IFS_RESTORE
Read Only: True
hop-isi-dd-3#

 

Isilon HDFS setting
hop-isi-dd-3# isi hdfs settings view --zone=hdp
Service: Yes
Default Block Size: 128M
Default Checksum Type: none
Authentication Mode: all
Root Directory: /ifs/data/zone1/hdp
WebHDFS Enabled: Yes
Ambari Server:
Ambari Namenode: kb-hdp-z1.hop-isi-dd.solarch.lab.emc.com
ODP Version:
Data Transfer Cipher: none
Ambari Metrics Collector: pipe-hdp1.solarch.emc.com
hop-isi-dd-3#

 

hdfs to root mapping removed from the access zone setting
hop-isi-dd-3# isi zone view hdp
Name: hdp
Path: /ifs/data/zone1/hdp
Groupnet: groupnet0
Map Untrusted:
Auth Providers: lsa-local-provider:hdp
NetBIOS Name:
User Mapping Rules:
Home Directory Umask: 0077
Skeleton Directory: /usr/share/skel
Cache Entry Expiry: 4H
Negative Cache Entry Expiry: 1m
Zone ID: 2
hop-isi-dd-3#

3. TDE Functional Testing

Primary Testing Foci

Reads and Writes: Clients with the correct permissions must always be able to reliably decrypt.

Kerberos Integration: Realistically, customers will not deploy TDE without Kerberos. [ In this testing Kerberos is not integrated]

TDE Configurations

HDFS TDE Setup
a. Create an encryption zone (EZ) key

hadoop key create <keyname>

The user “keyadmin” has privileges to create, delete, roll over, set key material, get, get keys, get metadata, generate EEK, and decrypt EEK. These privileges are controlled in the Ranger web UI; log in as keyadmin / <password> to set them up.

[root@pipe-hdp1 ~]# su keyadmin
bash-4.2$ whoami
keyadmin

bash-4.2$ hadoop key create key_a
key_a has been successfully created with options Options{cipher='AES/CTR/NoPadding', bitLength=128, description='null', attributes=null}.
KMSClientProvider[http://pipe-hdp1.solarch.emc.com:9292/kms/v1/] has been updated.

bash-4.2$ hadoop key create key_b
key_b has been successfully created with options Options{cipher='AES/CTR/NoPadding', bitLength=128, description='null', attributes=null}.
KMSClientProvider[http://pipe-hdp1.solarch.emc.com:9292/kms/v1/] has been updated.

bash-4.2$
bash-4.2$ hadoop key list
Listing keys for KeyProvider: KMSClientProvider[http://pipe-hdp1.solarch.emc.com:9292/kms/v1/]
key_data
key_b
key_a
bash-4.2$
Note: New keys can also be created from the Ranger KMS UI.
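Keys can also be created programmatically through the standard Hadoop KMS REST API that Ranger KMS exposes. The following is a minimal sketch only, assuming simple (non-Kerberos) authentication as used in this test setup; the key name key_c is illustrative and the KMS URL is the one shown in the output above:

# Create a 128-bit AES/CTR key via the Hadoop KMS REST API exposed by Ranger KMS
curl -X POST "http://pipe-hdp1.solarch.emc.com:9292/kms/v1/keys?user.name=keyadmin" \
     -H "Content-Type: application/json" \
     -d '{"name":"key_c","cipher":"AES/CTR/NoPadding","length":128}'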

 

OneFS TDE Setup

a. Configure KMS URL in the Isilon OneFS CLI

isi hdfs crypto settings modify --kms-url=<url-string> --zone=<hdfs-zone-name> -v

isi hdfs crypto settings view --zone=<hdfs-zone-name>

hop-isi-dd-3# isi hdfs crypto settings view --zone=hdp
Kms Url: http://pipe-hdp1.solarch.emc.com:9292

hop-isi-dd-3#


b. From the Isilon OneFS CLI, create a new directory under the Hadoop zone that will become an encryption zone

mkdir /ifs/hdfs/<new-directory-name>

hop-isi-dd-3# mkdir /ifs/data/zone1/hdp/data_a

hop-isi-dd-3# mkdir /ifs/data/zone1/hdp/data_b
c. After the new directory is created, create the encryption zone by associating the encryption key with the directory path

isi hdfs crypto encryption-zones create --path=<new-directory-path> --key-name=<key-created-via-hdfs> --zone=<hdfs-zone-name> -v

hop-isi-dd-3# isi hdfs crypto encryption-zones create --path=/ifs/data/zone1/hdp/data_a --key-name=key_a --zone=hdp -v
Create encryption zone named /ifs/data/zone1/hdp/data_a, with key_a

hop-isi-dd-3# isi hdfs crypto encryption-zones create --path=/ifs/data/zone1/hdp/data_b --key-name=key_b --zone=hdp -v
Create encryption zone named /ifs/data/zone1/hdp/data_b, with key_b
NOTE:
    1. Encryption keys need to be created from the hdfs client.
    2. A KMS store (for example, Ranger KMS) is needed to manage the keys.
    3. Encryption zones can be created only on Isilon, using the CLI.
    4. Creating an encryption zone from the hdfs client fails with an Unknown RPC RemoteException.

TDE Setup Validation

On HDFS Cluster

a. Verify the encryption zones from the hdfs client   [Paths are listed relative to the hdfs root directory]

hdfs crypto -listZones

bash-4.2$ hdfs crypto -listZones
/data_a  key_a
/data_b  key_b

On Isilon Cluster

a. List the encryption zones on Isilon                                            [Paths are listed relative to the Isilon root path]

isi hdfs crypto encryption-zones list

hop-isi-dd-3# isi hdfs crypto encryption-zones list
Path                       Key Name
------------------------------------
/ifs/data/zone1/hdp/data_a key_a
/ifs/data/zone1/hdp/data_b key_b
------------------------------------
Total: 2

 

TDE Functional Testing

Authorize users to the EZ and KMS Keys

Ranger KMS UI

a. Login into Ranger KMS UI using keyadmin / <password>

 

b. Create 2 new policies to assign users (yarn, hive) to key_a and (mapred, hive) to key_b with the Get, Get Keys, Get Metadata, Generate EEK and Decrypt EEK permissions.

 

TDE HDFS Client Testing

a. Create sample files, copy them to the respective EZs, and access them as the respective users.
/data_a EZ associated with key_a and only yarn, hive users have permissions
bash-4.2$ whoami
yarn
bash-4.2$ echo "YARN user test file, can you read this?" > yarn_test_file
bash-4.2$ rm -rf yarn_test_fil
bash-4.2$ hadoop fs -put yarn_test_file /data_a/
bash-4.2$ hadoop fs -cat /data_a/yarn_test_file
YARN user test file, can you read this?
bash-4.2$ whoami
yarn
bash-4.2$ exit
exit

[root@pipe-hdp1 ~]# su mapred
bash-4.2$ hadoop fs -cat /data_a/yarn_test_file
cat: User:mapred not allowed to do 'DECRYPT_EEK' on 'key_a'

bash-4.2$
/data_b EZ associated with key_b and only mapred, hive users have permissions
bash-4.2$ whoami
mapred
bash-4.2$ echo "MAPRED user test file, can you read this?" > mapred_test_file
bash-4.2$ hadoop fs -put mapred_test_file /data_b/
bash-4.2$ hadoop fs -cat /data_b/mapred_test_file
MAPRED user test file, can you read this?
bash-4.2$ exit
exit

[root@pipe-hdp1 ~]# su yarn
bash-4.2$ hadoop fs -cat /data_b/mapred_test_file
cat: User:yarn not allowed to do 'DECRYPT_EEK' on 'key_b'

bash-4.2$

User hive has permission to decrypt both keys, i.e. it can access both EZs
User with decrypt privilege [hive]
[root@pipe-hdp1 ~]# su hive
bash-4.2$ pwd
/root
bash-4.2$ hadoop fs -cat /data_b/mapred_test_file
MAPRED user test file, can you read this?
bash-4.2$  hadoop fs -cat /data_a/yarn_test_file
YARN user test file, can you read this?
bash-4.2$


Sample distcp to copy data between EZs.
bash-4.2$ hadoop distcp -skipcrccheck -update /data_a/yarn_test_file /data_b/
19/05/20 21:20:02 INFO tools.DistCp: Input Options: DistCpOptions{atomicCommit=false, syncFolder=true, deleteMissing=false, ignoreFailures=false, overwrite=false, append=false, useDiff=false, fromSnapshot=null, toSnapshot=null, skipCRC=true, blocking=true, numListstatusThreads=0, maxMaps=20, mapBandwidth=100, sslConfigurationFile='null', copyStrategy='uniformsize', preserveStatus=[], preserveRawXattrs=false, atomicWorkPath=null, logPath=null, sourceFileListing=null, sourcePaths=[/data_a/yarn_test_file], targetPath=/data_b, targetPathExists=true, filtersFile='null', verboseLog=false}
19/05/20 21:20:03 INFO client.RMProxy: Connecting to ResourceManager at pipe-hdp1.solarch.emc.com/10.246.156.91:8050
19/05/20 21:20:03 INFO client.AHSProxy: Connecting to Application History server at pipe-hdp1.solarch.emc.com/10.246.156.91:10200
"""
"""
19/05/20 21:20:04 INFO mapreduce.Job: Running job: job_1558336274787_0003
19/05/20 21:20:12 INFO mapreduce.Job: Job job_1558336274787_0003 running in uber mode : false
19/05/20 21:20:12 INFO mapreduce.Job: map 0% reduce 0%
19/05/20 21:20:18 INFO mapreduce.Job: map 100% reduce 0%
19/05/20 21:20:18 INFO mapreduce.Job: Job job_1558336274787_0003 completed successfully
19/05/20 21:20:18 INFO mapreduce.Job: Counters: 33
File System Counters
FILE: Number of bytes read=0
FILE: Number of bytes written=152563
FILE: Number of read operations=0
FILE: Number of large read operations=0
FILE: Number of write operations=0
HDFS: Number of bytes read=426
HDFS: Number of bytes written=40
HDFS: Number of read operations=15
HDFS: Number of large read operations=0
HDFS: Number of write operations=4
Job Counters
Launched map tasks=1
Other local map tasks=1
Total time spent by all maps in occupied slots (ms)=4045
Total time spent by all reduces in occupied slots (ms)=0
Total time spent by all map tasks (ms)=4045
Total vcore-milliseconds taken by all map tasks=4045
Total megabyte-milliseconds taken by all map tasks=4142080
Map-Reduce Framework
Map input records=1
Map output records=0
Input split bytes=114
Spilled Records=0
Failed Shuffles=0
Merged Map outputs=0
GC time elapsed (ms)=91
CPU time spent (ms)=2460
Physical memory (bytes) snapshot=290668544
Virtual memory (bytes) snapshot=5497425920
Total committed heap usage (bytes)=196083712
File Input Format Counters
Bytes Read=272
File Output Format Counters
Bytes Written=0
org.apache.hadoop.tools.mapred.CopyMapper$Counter
BYTESCOPIED=40
BYTESEXPECTED=40
COPY=1
bash-4.2$
bash-4.2$ hadoop fs -ls /data_b/
Found 2 items
-rwxrwxr-x   3 mapred hadoop         42 2019-05-20 04:24 /data_b/mapred_test_file
-rw-r--r--   3 hive   hadoop         40 2019-05-20 21:20 /data_b/yarn_test_file
bash-4.2$ hadoop fs -cat /data_b/yarn_test_file
YARN user test file, can you read this?

bash-4.2$

Hadoop user without permission
bash-4.2$ hadoop fs -put test_file /data_a/
put: User:hdfs not allowed to do 'DECRYPT_EEK' on 'key_A'
19/05/20 02:35:10 ERROR hdfs.DFSClient: Failed to close inode 4306114529
org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File does not exist: /data_a/test_file._COPYING_ (inode 4306114529)

TDE OneFS CLI Testing

Directly on the Isilon CLI, no user has access to read the file in clear text
hop-isi-dd-3# whoami
root
hop-isi-dd-3# cat data_a/yarn_test_file

▒?Tm@DIc▒▒B▒▒>\Qs▒:[VzC▒▒Rw^<▒▒▒▒▒8H#


hop-isi-dd-3% whoami
yarn
hop-isi-dd-3% cat data_a/yarn_test_file

▒?Tm@DIc▒▒B▒▒>\Qs▒:[VzC▒▒Rw^<▒▒▒▒▒8H%

Upgrade HDP to the latest version by following the upgrade process blog.

After the upgrade, make sure all the services are up and running and pass the service checks.

The HDFS service is replaced with the OneFS service; under the OneFS service configuration, make sure the KMS-related properties are ported successfully.

Log in to the Ranger KMS UI and check that the policies are intact after the upgrade. [Note: after the upgrade a new “Policy Labels” column is added.]

 

TDE validate existing configuration and keys after HDP 3.1 upgrade

TDE HDFS client testing existing configuration and keys

a. List the KMS provider and key to check they are intact after the upgrade
[root@pipe-hdp1 ~]# su hdfs
bash-4.2$ hadoop key list
Listing keys for KeyProvider: org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider@2f54a33d
key_b
key_a
key_data

bash-4.2$ hdfs crypto -listZones
/data    key_data
/data_a  key_a
/data_b  key_b

b. Create sample files, copy them to the respective EZs, and access them as the respective users
[root@pipe-hdp1 ~]# su yarn
bash-4.2$ cd
bash-4.2$ pwd
/home/yarn
bash-4.2$ echo "YARN user testfile after upgrade to hdp3.1, can you read this?" > yarn_test_file_2
bash-4.2$ hadoop fs -put yarn_test_file_2 /data_a/
bash-4.2$ hadoop fs -cat /data_a/yarn_test_file_2
YARN user testfile after upgrade to hdp3.1, can you read this?

bash-4.2$
[root@pipe-hdp1 ~]# su mapred
bash-4.2$ cd
bash-4.2$ pwd
/home/mapred
bash-4.2$ echo "MAPRED user testfile after upgrade to hdp3.1, can you read this?" > mapred_test_file_2
bash-4.2$ hadoop fs -put mapred_test_file_2 /data_b/
bash-4.2$ hadoop fs -cat /data_b/mapred_test_file_2
MAPRED user testfile after upgrade to hdp3.1, can you read this?

bash-4.2$
[root@pipe-hdp1 ~]# su yarn
bash-4.2$ hadoop fs -cat /data_b/mapred_test_file_2
cat: User:yarn not allowed to do 'DECRYPT_EEK' on 'key_b'

bash-4.2$
[root@pipe-hdp1 ~]# su hive
bash-4.2$ hadoop fs -cat /data_a/yarn_test_file_2
YARN user testfile after upgrade to hdp3.1, can you read this?

bash-4.2$ hadoop fs -cat /data_b/mapred_test_file_2
MAPRED user testfile after upgrade to hdp3.1, can you read this?

bash-4.2$
bash-4.2$ hadoop distcp -skipcrccheck -update /data_a/yarn_test_file_2 /data_b/
19/05/21 05:23:38 INFO tools.DistCp: Input Options: DistCpOptions{atomicCommit=false, syncFolder=true, deleteMissing=false, ignoreFailures=false, overwrite=false, append=false, useDiff=false, useRdiff=false, fromSnapshot=null, toSnapshot=null, skipCRC=true, blocking=true, numListstatusThreads=0, maxMaps=20, mapBandwidth=0.0, copyStrategy='uniformsize', preserveStatus=[BLOCKSIZE], atomicWorkPath=null, logPath=null, sourceFileListing=null, sourcePaths=[/data_a/yarn_test_file_2], targetPath=/data_b, filtersFile='null', blocksPerChunk=0, copyBufferSize=8192, verboseLog=false}, sourcePaths=[/data_a/yarn_test_file_2], targetPathExists=true, preserveRawXattrsfalse
19/05/21 05:23:38 INFO client.RMProxy: Connecting to ResourceManager at pipe-hdp1.solarch.emc.com/10.246.156.91:8050
19/05/21 05:23:38 INFO client.AHSProxy: Connecting to Application History server at pipe-hdp1.solarch.emc.com/10.246.156.91:10200
"
19/05/21 05:23:54 INFO mapreduce.Job: map 0% reduce 0%
19/05/21 05:24:00 INFO mapreduce.Job: map 100% reduce 0%
19/05/21 05:24:00 INFO mapreduce.Job: Job job_1558427755021_0001 completed successfully
19/05/21 05:24:00 INFO mapreduce.Job: Counters: 36
"
Bytes Copied=63
Bytes Expected=63
Files Copied=1

bash-4.2$ hadoop fs -ls /data_b/
Found 4 items
-rwxrwxr-x   3 mapred hadoop         42 2019-05-20 04:24 /data_b/mapred_test_file
-rw-r--r--   3 mapred hadoop         65 2019-05-21 05:21 /data_b/mapred_test_file_2
-rw-r--r--   3 hive   hadoop         40 2019-05-20 21:20 /data_b/yarn_test_file
-rw-r--r--   3 hive   hadoop         63 2019-05-21 05:23 /data_b/yarn_test_file_2

bash-4.2$ hadoop fs -cat /data_b/yarn_test_file_2
YARN user testfile after upgrade to hdp3.1, can you read this?

bash-4.2$ hadoop fs -cat /data_b/yarn_test_file_2
YARN user testfile after upgrade to hdp3.1, can you read this?

bash-4.2$
TDE OneFS client testing existing configuration and keys
a. List the KMS provider and key to check they are intact after upgrade
hop-isi-dd-3# isi hdfs crypto settings view --zone=hdp
Kms Url: http://pipe-hdp1.solarch.emc.com:9292
hop-isi-dd-3# isi hdfs crypto encryption-zones list
Path                       Key Name
------------------------------------
/ifs/data/zone1/hdp/data   key_data
/ifs/data/zone1/hdp/data_a key_a
/ifs/data/zone1/hdp/data_b key_b
------------------------------------
Total: 3

hop-isi-dd-3#
b. Permission to access previously created EZs
hop-isi-dd-3# cat data_b/yarn_test_file_2
3▒
▒{&▒{<N▒7▒      ,▒▒l▒n.▒▒▒bz▒6▒ ▒G▒_▒l▒Ieñ+
▒t▒▒N^▒ ▒# hop-isi-dd-3# whoami
root
hop-isi-dd-3#

TDE validate new configuration and keys after HDP 3.1 upgrade

TDE HDFS Client new keys setup

a. Create new keys and list
bash-4.2$ hadoop key create up_key_a
up_key_a has been successfully created with options Options{cipher='AES/CTR/NoPadding', bitLength=128, description='null', attributes=null}.
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider@11bd0f3b has been updated.

bash-4.2$ hadoop key create up_key_b
up_key_b has been successfully created with options Options{cipher='AES/CTR/NoPadding', bitLength=128, description='null', attributes=null}.
org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider@11bd0f3b has been updated.

bash-4.2$ hadoop key list
Listing keys for KeyProvider: org.apache.hadoop.crypto.key.kms.LoadBalancingKMSClientProvider@2f54a33d
key_b
key_a
key_data
up_key_b
up_key_a

bash-4.2$

b. After the EZs are created from the OneFS CLI, check that the zones are reflected on the HDFS client
bash-4.2$ hdfs crypto -listZones
/data       key_data
/data_a     key_a
/data_b     key_b
/up_data_a  up_key_a
/up_data_b  up_key_b


TDE OneFS Client new encryption zone setup

a. Create new EZ from OneFS CLI

hop-isi-dd-3# isi hdfs crypto encryption-zones create --path=/ifs/data/zone1/hdp/up_data_a --key-name=up_key_a --zone=hdp -v
Create encryption zone named /ifs/data/zone1/hdp/up_data_a, with up_key_a
hop-isi-dd-3# isi hdfs crypto encryption-zones create --path=/ifs/data/zone1/hdp/up_data_b --key-name=up_key_b --zone=hdp -v
Create encryption zone named /ifs/data/zone1/hdp/up_data_b, with up_key_b

hop-isi-dd-3# isi hdfs crypto encryption-zones list
Path                          Key Name
---------------------------------------
/ifs/data/zone1/hdp/data      key_data
/ifs/data/zone1/hdp/data_a    key_a
/ifs/data/zone1/hdp/data_b    key_b
/ifs/data/zone1/hdp/up_data_a up_key_a
/ifs/data/zone1/hdp/up_data_b up_key_b
---------------------------------------
Total: 5

hop-isi-dd-3#

Create 2 new policies to assign users (yarn, hive) to up_key_a and (mapred, hive) to up_key_b with the Get, Get Keys, Get Metadata, Generate EEK and Decrypt EEK permissions.

TDE HDFS Client testing on upgraded HDP 3.1

a. Create sample files, copy them to the respective EZs, and access them as the respective users

/up_data_a EZ associated with up_key_a and only yarn, hive users have permissions

[root@pipe-hdp1 ~]# su yarn
bash-4.2$ echo "After HDP Upgrade to HDP 3.1, YARN user, Creating this file" > up_yarn_test_file
bash-4.2$ hadoop fs -put up_yarn_test_file /up_data_a/
bash-4.2$ hadoop fs -cat /up_data_a/up_yarn_test_file
After HDP Upgrade to HDP 3.1, YARN user, Creating this file

bash-4.2$ hadoop fs -cat /up_data_b/up_mapred_test_file
cat: User:yarn not allowed to do 'DECRYPT_EEK' on 'up_key_b'
bash-4.2$

/up_data_b EZ associated with up_key_b and only mapred, hive users have permissions

[root@pipe-hdp1 ~]# su mapred
bash-4.2$ cd
bash-4.2$ echo "After HDP Upgrade to HDP 3.1, MAPRED user, Creating this file" > up_mapred_test_file
bash-4.2$ hadoop fs -put up_mapred_test_file /up_data_b/
bash-4.2$ hadoop fs -cat /up_data_b/up_mapred_test_file
After HDP Upgrade to HDP 3.1, MAPRED user, Creating this file

bash-4.2$


 

User hive has permission to decrypt both keys, i.e. it can access both EZs

User with decrypt privilege [hive]

[root@pipe-hdp1 ~]# su hive
bash-4.2$ hadoop fs -cat /up_data_b/up_mapred_test_file
After HDP Upgrade to HDP 3.1, MAPRED user, Creating this file

bash-4.2$ hadoop fs -cat /up_data_a/up_yarn_test_file
After HDP Upgrade to HDP 3.1, YARN user, Creating this file
bash-4.2$


Sample distcp to copy data between EZs.

bash-4.2$ hadoop distcp -skipcrccheck -update /up_data_a/up_yarn_test_file /up_data_b/
19/05/22 04:48:21 INFO tools.DistCp: Input Options: DistCpOptions{atomicCommit=false, syncFolder=true, deleteMissing=false, ignoreFailures=false, overwrite=false, append=false, useDiff=false, useRdiff=false, fromSnapshot=null, toSnapshot=null, skipCRC=true, blocking=true, numListstatusThreads=0, maxMaps=20, mapBandwidth=0.0, copyStrategy='uniformsize', preserveStatus=[BLOCKSIZE], atomicWorkPath=null, logPath=null, sourceFileListing=null, sourcePaths=[/up_data_a/up_yarn_test_file], targetPath=/up_data_b, filtersFile='null', blocksPerChunk=0, copyBufferSize=8192, verboseLog=false}, sourcePaths=[/up_data_a/up_yarn_test_file], targetPathExists=true, preserveRawXattrsfalse
"
19/05/22 04:48:23 INFO mapreduce.Job: The url to track the job: http://pipe-hdp1.solarch.emc.com:8088/proxy/application_1558505736502_0001/
19/05/22 04:48:23 INFO tools.DistCp: DistCp job-id: job_1558505736502_0001
19/05/22 04:48:23 INFO mapreduce.Job: Running job: job_1558505736502_0001
"
Bytes Expected=60
Files Copied=1

bash-4.2$ hadoop fs -ls /up_data_b/
Found 2 items
-rw-r--r--   3 mapred hadoop         62 2019-05-22 04:43 /up_data_b/up_mapred_test_file
-rw-r--r--   3 hive   hadoop         60 2019-05-22 04:48 /up_data_b/up_yarn_test_file

bash-4.2$ hadoop fs -cat /up_data_b/up_yarn_test_file
After HDP Upgrade to HDP 3.1, YARN user, Creating this file

bash-4.2$

Hadoop user without permission

bash-4.2$ hadoop fs -put test_file /data_a/
put: User:hdfs not allowed to do 'DECRYPT_EEK' on 'key_A'
19/05/20 02:35:10 ERROR hdfs.DFSClient: Failed to close inode 4306114529
org.apache.hadoop.ipc.RemoteException(java.io.FileNotFoundException): File does not exist: /data_a/test_file._COPYING_ (inode 4306114529)

 

TDE OneFS CLI Testing

Directly on the Isilon CLI, no user has access to read the file in clear text
hop-isi-dd-3# cat up_data_a/up_yarn_test_file
%*݊▒▒ixu▒▒▒=}▒΁▒▒h~▒7▒=_▒▒▒0▒[.-$▒:/▒Ԋ▒▒▒▒\8vf▒{F▒Sl▒▒#

 

Conclusion

The above testing and results show that the HDP upgrade does not break the TDE configuration, and that the keys and configuration are ported to the new OneFS service after a successful upgrade.

Exploring Hive LLAP using Testbench On OneFS

Short Description

Explore Hive LLAP by using the Hortonworks Testbench to generate data and run queries with LLAP enabled, using the HiveServer2 Interactive JDBC URL.

Article

The latest release of Isilon OneFS 8.1.2 delivers new capabilities and support such as Apache Hadoop 3, the Isilon Ambari Management Pack, Apache Hive LLAP, Apache Ranger with SSL, and WebHDFS. In this article, we shall explore Apache Hive LLAP using the Hortonworks Hive Testbench, which supports LLAP. The Hive Testbench consists of a data generator and a standard set of queries typically used for benchmarking Hive performance. This article describes how to generate data and run a query in beeline, with LLAP enabled.

If you don’t have a cluster already configured for LLAP, set up a new HDP 2.6 cluster on Isilon OneFS from here and enable Interactive Query under Ambari Server UI –> Hive –> Configs as below.

Hive Testbench setup

1. Log into the HDP client node where HIVE is installed.

2. Install and Setup Hive testbench

3. Generate 5 GB of test data: [Here we shall use the TPC-H Testbench]

[If GCC is not installed]  yum install -y gcc

[If javac is not found] export JAVA_HOME=/usr/jdk64/jdk1.8.**** ;  export PATH=$JAVA_HOME/bin:$PATH

cd hive-testbench-hdp3/
sudo ./tpch-build.sh
./tpch-setup.sh 5

 

4. A MapReduce job runs to create the data and load it into Hive. This will take some time to complete. The last line of the script output is:

Data loaded into database tpch_******.
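As a quick sanity check (a minimal example; the exact database name depends on the scale factor and file format used), the generated databases can be listed from the Hive CLI:

# List the TPC-H databases created by the testbench setup
hive -e "show databases like 'tpch*';"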

 

Make sure all the below prerequisites are met before proceeding ahead.

1. HDP cluster up and running

2. YARN all services, Hive all services are up and running

3. Uid/gid parity and necessary directory structure maintained between HDP and OneFS

4. Interactive Query enabled.

5. Hive Testbench TPC-H database setup and data loaded.

Connecting to Interactive Service and running queries
Through Command Line

1. Log in to the HDP client node where Hive is installed and the Hive Testbench is set up.

2. Change to the directory where the Hive Testbench is placed, then into sample-queries-tpch.

3. From the Ambari Server Web UI –> HIVE Service –> Summary page, copy the “HiveServer2 Interactive JDBC URL”.

 

4. Run beeline with the HiveServer2 Interactive JDBC URL, using the credentials hive/hive.

beeline -n hive -p hive -u "jdbc:hive2://hawkeye03.kanagawa.demo:2181/;serviceDiscoveryMode=zooKeeper;zooKeeperNamespace=hiveserver2-hive2"

5. Run show databases command to check the tpch databases created during HIVE Testbench setup.

0: jdbc:hive2://hawkeye03.kanagawa.demo:2181/> show databases;

6. Switch to the tpch_flat_orc_5 database

0: jdbc:hive2://hawkeye03.kanagawa.demo:2181/> use tpch_flat_orc_5;

7. Run query7.sql by issuing the !run command as below and note the execution time; let’s call this the 1st run.

0: jdbc:hive2://hawkeye03.kanagawa.demo:2181/> !run query7.sql

To monitor LLAP functioning open HiveServer2 Interactive UI from the Ambari Server web UI –> Hive service –> Summary –> Quick Links –> HiveServer2 Interactive UI

Figure :: HiveServer2 Interactive UI

Now click on the Running Instances Web URL (highlighted in the above image) to go to the LLAP Monitor page.

On this LLAP Monitor UI, metrics to watch are Cache (use rate, Request count, Hit Rate) and System (LLAP open Files).

 

8. Immediately after step 7, which was the 1st run of query7.sql, run the same query7.sql again and call it the 2nd run. Monitor the execution time, LLAP cache metrics, and system metrics.

 

Notice the drastic reduction in execution time, along with the increase in the cache metrics.

9. Let us run the same query7.sql again, 3rd time.

Notice that the 2nd and 3rd runs of query7.sql complete much more quickly; as the LLAP cache fills with data, queries respond faster.

Summary

Hive LLAP combines persistent query servers and intelligent in-memory caching to deliver blazing-fast SQL queries without sacrificing the scalability Hive and Hadoop are known for. With OneFS 8.1.2 support for Hive LLAP, a Hadoop cluster running on OneFS benefits from fast, interactive SQL on Hadoop. The benefits of Hive LLAP include: persistent query servers that avoid long startup times and deliver fast SQL; an in-memory cache shared among all SQL users, maximizing the use of this scarce resource; fine-grained resource management and preemption, making it well suited to highly concurrent access across many users; and 100% compatibility with existing Hive SQL and Hive tools.

Bigdata File Formats Support on DellEMC Isilon

This article describes the DellEMC Isilon’s support for Apache Hadoop file formats in terms of disk space utilization. To determine this, we will use Apache Hive service to create and store different file format tables and analyze the disk space utilization by each table on the Isilon storage.

Apache Hive supports several familiar file formats used in Apache Hadoop. Hive can load and query different data files created by other Hadoop components such as Pig, Spark, MapReduce, etc. In this article, we will check Apache Hive file formats such as TextFile, SequenceFile, RCFile, AVRO, ORC and Parquet. Cloudera Impala also supports these file formats.

To begin with, let us understand a bit about these Bigdata file formats. Different file formats and compression codecs work better for different data sets in Hadoop; the main objective of this article is to determine their supportability on DellEMC Isilon storage, which is a scale-out NAS storage for Hadoop clusters.

Following are the Hadoop file formats

Text File: This is the default storage format. You can use the text format to interchange data with other client applications. The text file format is very common for most applications. Data is stored in lines, with each line being a record. Each line is terminated by a newline character (\n).

The text format is a simple plain file format. You can use compression (BZIP2) on the text file to reduce the storage space.

Sequence File: These are Hadoop flat files that store values in binary key-value pairs. Sequence files are in binary format and can be split. A main advantage of the sequence file format is the ability to merge two or more files into one file.

RC File: This is a row-columnar file format mainly used in Hive data warehouses that offers high row-level compression rates. If you have a requirement to process multiple rows at a time, you can use the RCFile format. The RCFile is very much like the sequence file format; it also stores the data as key-value pairs.

AVRO File: AVRO is an open-source project that provides data serialization and data exchange services for Hadoop. You can exchange data between the Hadoop ecosystem and a program written in any programming language. Avro is one of the popular file formats in Big Data Hadoop based applications.

ORC File: ORC stands for Optimized Row Columnar file format. The ORC file format provides a highly efficient way to store data in Hive tables. It was designed to overcome limitations of the other Hive file formats. Using ORC files improves performance when Hive is reading, writing, and processing data from large tables.

More information on the ORC file format: https://cwiki.apache.org/confluence/display/Hive/LanguageManual+ORC

Parquet File: Parquet is a column-oriented binary file format. Parquet is highly efficient for large-scale queries, especially queries that scan particular columns within a table. Parquet tables use Snappy or gzip compression, with Snappy currently the default.

More information on the Parquet file format: https://parquet.apache.org/documentation/latest/

Please note that for the testing below, Hortonworks HDP 3.1 is installed on DellEMC Isilon OneFS 8.2.

Disk Space Utilization on DellEMC Isilon

What is the space on disk used by these formats in Hadoop on DellEMC Isilon? Saving disk space is always a good thing, but it can be hard to calculate exactly how much space you will use with compression. Every file and data set is different, and the data inside will always be a determining factor for what type of compression you’ll get. Text will compress better than binary data, repeating values and strings will compress better than pure random data, and so forth.

As a simple test, we took the 2008 data set from http://stat-computing.org/dataexpo/2009/the-data.html. The compressed bz2 download measures 108.5 MB, and uncompressed it is 657.5 MB. We then uploaded the data to DellEMC Isilon through the HDFS protocol and created an external table on top of the uncompressed data set:

Copy the original dataset to Hadoop cluster
(base) [root@pipe-hdp4 ~]# ll
-rw-r--r--   1 root root 689413344 Dec  9  2014 2008.csv
-rwxrwxrwx   1 root root 113753229 Dec  9  2014 2008.csv.bz2


(base) [root@pipe-hdp4 ~]#hadoop fs -put 2008.csv.bz2 /
(base) [root@pipe-hdp4 ~]#hadoop fs -mkdir /flight_arrivals
(base) [root@pipe-hdp4 ~]#hadoop fs -put 2008.csv /flight_arrivals/
From Hadoop Compute Node, create a table
Create external table flight_arrivals (
year int,
month int,
DayofMonth int,
DayOfWeek int,
DepTime int,
CRSDepTime int,
ArrTime int,
CRSArrTime int,
UniqueCarrier string,
FlightNum int,
TailNum string,
ActualElapsedTime int,
CRSElapsedTime int,
AirTime int,
ArrDelay int,
DepDelay int,
Origin string,
Dest string,
Distance int,
TaxiIn int,
TaxiOut int,
Cancelled int,
CancellationCode int,
Diverted int,
CarrierDelay string,
WeatherDelay string,
NASDelay string,
SecurityDelay string,
LateAircraftDelay string
)
ROW FORMAT DELIMITED
FIELDS TERMINATED BY ','
STORED AS TEXTFILE
location '/flight_arrivals';

The total number of records in this primary table is
select count(*) from flight_arrivals;
+----------+
|   _c0    |
+----------+
| 7009728  |
+----------+


 

Similarly, create tables in the different file formats from the primary table

Different file format tables can be created by simply specifying the ‘STORED AS <FileFormatName>’ option at the end of a CREATE TABLE command.

Create external table flight_arrivals_external_orc stored as ORC as select * from flight_arrivals;
Create external table flight_arrivals_external_parquet stored as Parquet as select * from flight_arrivals;
Create external table flight_arrivals_external_textfile stored as textfile as select * from flight_arrivals;
Create external table flight_arrivals_external_sequencefile stored as sequencefile as select * from flight_arrivals;
Create external table flight_arrivals_external_rcfile stored as rcfile as select * from flight_arrivals;
Create external table flight_arrivals_external_avro stored as avro as select * from flight_arrivals;

 

Disk space utilization of the tables

Now, let us compare the disk usage on Isilon of all the files from Hadoop compute nodes.

(base) [root@pipe-hdp4 ~]# hadoop fs -du -h /warehouse/tablespace/external/hive/ | grep flight_arrivals
670.7 M  670.7 M /warehouse/tablespace/external/hive/flight_arrivals_external_textfile
403.1 M  403.1 M /warehouse/tablespace/external/hive/flight_arrivals_external_rcfile
751.1 M  751.1 M /warehouse/tablespace/external/hive/flight_arrivals_external_sequencefile
597.8 M  597.8 M /warehouse/tablespace/external/hive/flight_arrivals_external_avro
145.7 M  145.7 M  /warehouse/tablespace/external/hive/flight_arrivals_external_parquet
93.1 M   93.1 M  /warehouse/tablespace/external/hive/flight_arrivals_external_orc
(base) [root@pipe-hdp4 ~]#

 

Summary

From the table below, we can conclude that DellEMC Isilon as HDFS storage supports all the Hadoop file formats and provides the same disk utilization as traditional HDFS storage.

Format       Size      Size (% of CSV)
---------------------------------------
BZ2          108.5 M   16.5%
CSV (Text)   657.5 M
ORC           93.1 M   14.25%
Parquet      145.7 M   22.1%
AVRO         597.8 M   90.9%
RC File      403.1 M   61.3%
Sequence     751.1 M   114.2%

Here the default settings and values were used to create all the different format tables, and no other optimizations were applied to any of the formats. Each file format ships with many options and optimizations to compress the data; only the defaults that ship with HDP 3.1 were used.

Hadoop Rest API – WebHDFS on OneFS

WebHDFS

Hortonworks developed an API, called WebHDFS, to support operations such as creating, renaming or deleting files and directories, opening, reading or writing files, and setting permissions, based on standard REST functionality. This is a great tool for applications running within the Hadoop cluster, but there may be use cases where an external application needs to manipulate HDFS, for example to create directories and write files to them, or to read the content of a file stored on HDFS. The WebHDFS concept is based on HTTP operations such as GET, PUT, POST and DELETE. Authentication can be based on the user.name query parameter (as part of the HTTP query string) or, if security is turned on, it relies on Kerberos.
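For example, on a cluster where Kerberos is not enabled, a WebHDFS request can simply carry the user.name query parameter (a hypothetical request against the Isilon WebHDFS endpoint used later in this article; hostname and port are taken from those examples):

curl -i "http://isilon40g.kanagawa.demo:8082/webhdfs/v1/tmp?op=LISTSTATUS&user.name=hdfs"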

WebHDFS is enabled in a Hadoop cluster by defining the following property in hdfs-site.xml. It can also be checked in the Ambari UI under HDFS service –> Config:

  <property>
      <name>dfs.webhdfs.enabled</name>
      <value>true</value>
      <final>true</final>
    </property>

Ambari UI –> HDFS Service–> Config–General

 

We will use the user hdfs-hdp265 for the following tests; first, initialize the hdfs-hdp265 credentials.

[root@hawkeye03 ~]# kinit -kt /etc/security/keytabs/hdfs.headless.keytab hdfs-hdp265
[root@hawkeye03 ~]# klist
Ticket cache: FILE:/tmp/krb5cc_0
Default principal: hdfs-hdp265@KANAGAWA.DEMO

Valid starting       Expires              Service principal
09/16/2018 19:01:36  09/17/2018 05:01:36  krbtgt/KANAGAWA.DEMO@KANAGAWA.DEMO
        renew until 09/23/2018 19:01:

CURL

curl(1) itself knows nothing about Kerberos and will not interact with either your credential cache or your keytab file. It delegates all calls to a GSS-API implementation, which does the magic for you. What that magic is depends on the library, for example Heimdal or MIT Kerberos.

Verify this with curl --version (it should mention GSS-API and SPNEGO) and with ldd (curl should be linked against your MIT Kerberos libraries).
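A quick way to check both (a minimal sketch; the exact output format varies between curl builds and distributions):

# The Features line should list GSS-API and SPNEGO
curl --version | grep -i -E 'gss|spnego'
# curl should be linked against the MIT Kerberos GSS-API library
ldd "$(command -v curl)" | grep -i -E 'gssapi|krb5'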

    1. Create a client keytab for the service principal with ktutil or msktutil
    2. Try to obtain a TGT with that client keytab: kinit -k -t <path-to-keytab> <principal-from-keytab>
    3. Verify with klist that you have a ticket cache

Environment is now ready to go:

    1. export KRB5CCNAME=<some-non-default-path>
    2. export KRB5_CLIENT_KTNAME=<path-to-keytab>
    3. Invoke curl --negotiate -u : <URL>

MIT Kerberos will detect that both environment variables are set, inspect them, automatically obtain a TGT with your keytab, request a service ticket, and pass it to curl. You are done.
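Putting this together against the Isilon WebHDFS endpoint used in the examples below (a sketch only; the keytab path and principal are the ones initialized earlier in this article, and the credential cache path is an arbitrary non-default location):

export KRB5CCNAME=/tmp/krb5cc_webhdfs_demo
export KRB5_CLIENT_KTNAME=/etc/security/keytabs/hdfs.headless.keytab
# With both variables set, MIT Kerberos obtains the TGT from the keytab automatically
curl --negotiate -u : "http://isilon40g.kanagawa.demo:8082/webhdfs/v1?op=GETHOMEDIRECTORY"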

WebHDFS Examples

1. Check home directory

[root@hawkeye03 ~]# curl --negotiate -w -X -u : "http://isilon40g.kanagawa.demo:8082/webhdfs/v1?op=GETHOMEDIRECTORY"
{
   "Path" : "/user/hdfs-hdp265"
}
-X[root@hawkeye03 ~]#

 

2. Check Directory status

[root@hawkeye03 ~]# curl --negotiate -u : -X GET "http://isilon40g.kanagawa.demo:8082/webhdfs/v1/hdp?op=LISTSTATUS"
{
   "FileStatuses" : {
      "FileStatus" : [
         {
            "accessTime" : 1536824856850,
            "blockSize" : 0,
            "childrenNum" : -1,
            "fileId" : 4443865584,
            "group" : "hadoop",
            "length" : 0,
            "modificationTime" : 1536824856850,
            "owner" : "root",
            "pathSuffix" : "apps",
            "permission" : "755",
            "replication" : 0,
            "type" : "DIRECTORY"
         }
      ]
   }
}

[root@hawkeye03 ~]#

3. Create a directory

[root@hawkeye03 ~]# curl --negotiate -u : -X PUT "http://isilon40g.kanagawa.demo:8082/webhdfs/v1/tmp/webhdfs_test_dir?op=MKDIRS"
{
   "boolean" : true
}
[root@hawkeye03 ~]# hadoop fs -ls /tmp | grep webhdfs
drwxr-xr-x   - root      hdfs          0 2018-09-16 19:09 /tmp/webhdfs_test_dir
[root@hawkeye03 ~]#

 

4. Create a file :: With OneFS 8.1.2, file operations can be performed with a single REST API call.

[root@hawkeye03 ~]# hadoop fs -ls -R /tmp/webhdfs_test_dir/
[root@hawkeye03 ~]#
[root@hawkeye03 ~]# curl --negotiate -u : -X PUT "http://isilon40g.kanagawa.demo:8082/webhdfs/v1/tmp/webhdfs_test_dir/webhdfs-test_file?op=CREATE"
[root@hawkeye03 ~]# curl --negotiate -u : -X PUT "http://isilon40g.kanagawa.demo:8082/webhdfs/v1/tmp/webhdfs_test_dir/webhdfs-test_file_2?op=CREATE"
[root@hawkeye03 ~]#
[root@hawkeye03 ~]# hadoop fs -ls -R /tmp/webhdfs_test_dir/
-rwxr-xr-x   3 root hdfs          0 2018-09-16 19:15 /tmp/webhdfs_test_dir/webhdfs-test_file
-rwxr-xr-x   3 root hdfs          0 2018-09-16 19:15 /tmp/webhdfs_test_dir/webhdfs-test_file_2
[root@hawkeye03 ~]#

 

5. Upload sample file

[root@hawkeye03 ~]# echo "WebHDFS Sample Test File" > WebHDFS.txt
[root@hawkeye03 ~]# curl --negotiate -T WebHDFS.txt -u : "http://isilon40g.kanagawa.demo:8082/webhdfs/v1/tmp/webhdfs_test_dir/WebHDFS.txt?op=CREATE&overwrite=false"
[root@hawkeye03 ~]# hadoop fs -ls -R /tmp/webhdfs_test_dir/
-rwxr-xr-x   3 root hdfs          0 2018-09-16 19:41 /tmp/webhdfs_test_dir/WebHDFS.txt

 

6. Open and read a file :: With OneFS 8.1.2, file operations can be performed with a single REST API call.

[root@hawkeye03 ~]# curl --negotiate -i -L -u : "http://isilon40g.kanagawa.demo:8082/webhdfs/v1/tmp/webhdfs_test_dir/WebHDFS_read.txt?user.name=hdfs-hdp265&op=OPEN"
HTTP/1.1 307 Temporary Redirect
Date: Mon, 17 Sep 2018 08:18:45 GMT
Server: Apache/2.4.29 (FreeBSD) OpenSSL/1.0.2o-fips mod_fastcgi/mod_fastcgi-SNAP-0910052141
Location: http://172.16.59.102:8082/webhdfs/v1/tmp/webhdfs_test_dir/WebHDFS_read.txt?user.name=hdfs-hdp265&op=OPEN&datanode=true
Content-Length: 0
Content-Type: application/octet-stream
HTTP/1.1 200 OK
Date: Mon, 17 Sep 2018 08:18:45 GMT
Server: Apache/2.4.29 (FreeBSD) OpenSSL/1.0.2o-fips mod_fastcgi/mod_fastcgi-SNAP-0910052141
Content-Length: 30
Content-Type: application/octet-stream


Sample WebHDFS read test file

 

or

[root@hawkeye03 ~]# curl --negotiate -L -u : "http://isilon40g.kanagawa.demo:8082/webhdfs/v1/tmp/webhdfs_test_dir/WebHDFS_read.txt?op=OPEN&datanode=true"
Sample WebHDFS read test file
[root@hawkeye03 ~]#

 

7. Rename DIRECTORY

[root@hawkeye03 ~]# curl --negotiate -u : -X PUT "http://isilon40g.kanagawa.demo:8082/webhdfs/v1/tmp/webhdfs_test_dir?op=RENAME&destination=/tmp/webhdfs_test_dir_renamed"
{
   "boolean" : true
}
[root@hawkeye03 ~]# hadoop fs -ls /tmp/ | grep webhdfs
drwxr-xr-x   - root      hdfs          0 2018-09-16 19:48 /tmp/webhdfs_test_dir_renamed

 

8. Delete directory :: A directory must be empty before it can be deleted.

[root@hawkeye03 ~]# curl --negotiate -u : -X DELETE "http://isilon40g.kanagawa.demo:8082/webhdfs/v1/tmp/webhdfs_test_dir_renamed?op=DELETE"
{
   "RemoteException" : {
      "exception" : "PathIsNotEmptyDirectoryException",
      "javaClassName" : "org.apache.hadoop.fs.PathIsNotEmptyDirectoryException",
      "message" : "Directory is not empty."
   }
}
[root@hawkeye03 ~]#

 

Once the directory contents are removed, it can be deleted

[root@hawkeye03 ~]# curl --negotiate -u : -X DELETE "http://isilon40g.kanagawa.demo:8082/webhdfs/v1/tmp/webhdfs_test_dir_renamed?op=DELETE"
{
   "boolean" : true
}
[root@hawkeye03 ~]# hadoop fs -ls /tmp | grep webhdfs
[root@hawkeye03 ~]#

Summary

WebHDFS provides a simple, standard way to execute Hadoop filesystem operations from an external client that does not necessarily run on the Hadoop cluster itself. The requirement for WebHDFS is that the client needs a direct connection to the namenode and datanodes via the predefined ports. Hadoop HDFS over HTTP – which was inspired by HDFS Proxy – addresses these limitations by providing a proxy layer based on a preconfigured Tomcat bundle; it is interoperable with the WebHDFS API but does not require the firewall ports to be open for the client.

Using Dell EMC Isilon with Microsoft’s SQL Server Big Data Clusters

By Boni Bruno, Chief Solutions Architect | Dell EMC

Dell EMC Isilon

Dell EMC Isilon solves the hard scaling problems our customers have with consolidating and storing large amounts of unstructured data.  Isilon’s scale-out design and multi-protocol support provides efficient deployment of data lakes as well as support for big data platforms such as Hadoop, Spark, and Kafka to name a few examples.

In fact, the embedded HDFS implementation that comes with Isilon OneFS has been CERTIFIED by Cloudera for both HDP and CDH Hadoop distributions.  Dell EMC has also been recognized by Gartner as a Leader in the Gartner Magic Quadrant for Distributed File Systems and Object Storage four years in a row.  To that end, Dell EMC is delighted to announce that Isilon is a validated HDFS tiering solution for Microsoft’s SQL Server Big Data Clusters.

SQL Server Big Data Clusters & HDFS Tiering with Dell EMC Isilon

SQL Server Big Data Clusters allow you to deploy clusters of SQL Server, Spark, and HDFS containers on Kubernetes. With these components, you can combine and analyze MS SQL relational data with high-volume unstructured data on Dell EMC Isilon. This means that Dell EMC customers who have data on their Isilon clusters can now make their data available to their SQL Server Big Data Clusters for analytics using the embedded HDFS interface that comes with Isilon OneFS.

Note:  The HDFS Tiering feature of SQL Server 2019 Big Data Clusters currently does not support Cloudera Hadoop; Isilon provides immediate access to HDFS data with or without a Hadoop distribution being deployed in the customer’s environment.  This is a unique value proposition of the Dell EMC Isilon storage solution for SQL Server Big Data Clusters.  Unstructured data stored on Isilon is directly accessed over HDFS and will transparently appear as local data to the SQL Server Big Data Cluster platform.

The Figure below depicts the overall architecture between SQL Server Big Data Cluster platform and Dell EMC Isilon or ECS storage solutions.

Dell EMC provides two storage solutions that can integrate with SQL Server Big Data Clusters. Dell EMC Isilon provides a high-performance scale-out HDFS solution and Dell EMC ECS provides a high-capacity scale-out S3A solution, both are on-premise storage solutions.

We are currently working with Microsoft’s Azure team to get these storage solutions available to customers in the cloud as well.  The remainder of this article provides details on how Dell EMC Isilon integrates with SQL Server Big Data Cluster over HDFS.

Setting up HDFS on Dell EMC Isilon

Enabling HDFS on Isilon is as simple as clicking a button in the OneFS GUI.  Customers have the choice of multiple access zones if needed; access zones provide a logical separation of the data and users with support for independent role-based access controls.  For the purposes of this article, a “msbdc” access zone will be used for reference.  By default, HDFS is disabled on a given access zone as shown below:

To activate HDFS, simply click the Activate HDFS button.  Note:  HDFS licenses are free with the purchase of Isilon, HDFS licenses can be installed under Cluster Management\Licenses.

Once an HDFS license is installed and HDFS is activated on a given access zone, the HDFS settings can be viewed as shown below:

The GUI allows you to easily change the HDFS block size, Authentication Type, Enable the Ranger Security Plugin, etc.  Isilon OneFS also supports various authentication providers and additional protocols as shown below:

Simply pick the authentication provider of your choice and specify the provider details to enable remote authentication services on Isilon.  Note:  Isilon OneFS has a robust security architecture and authentication, identity management, and authorization stack, you can find more details here.

The multi-protocol support included with Isilon allows customers to land data on Isilon over SMB, NFS, FTP, or HTTP and make all or part of the data available to SQL Server Big Data Clusters over HDFS without having a Hadoop cluster installed – Beautiful!

A key performance aspect of Dell EMC Isilon is the scale-out design of both the hardware and the integrated OneFS storage operating system.  Isilon OneFS provides a unique SmartConnect feature that provides HDFS namenode and datanode load balancing and redundancy.

To use SmartConnect, simply delegate a sub-domain of your choice on your internal DNS server to Isilon and OneFS will automatically load balance all the associated HDFS connections from SQL Server Big Data Clusters transparently across all the physical nodes on the Isilon storage cluster.

The SmartConnect zone name is configured under Cluster Management\Network Configuration\ in the OneFS GUI as shown below:

 

In the example screenshot above, the SmartConnect Zone name is msbdc.dellemc.com.  This means the delegated subdomain on the internal DNS server should be msbdc, and a nameserver record for this msbdc subdomain needs to point to the defined SmartConnect Service IP.

The Service IP information is in the subnet details in the OneFS GUI as shown below:

In the above example, the service IP address is 10.10.10.10.  So, creating a DNS A record for 10.10.10.10 (e.g. isilon.dellemc.com) and an NS record for msbdc.dellemc.com that is served by isilon.dellemc.com (10.10.10.10) is all that is needed on the internal DNS server to take advantage of the built-in load balancing capabilities of Isilon.

Use “ping” to validate the SmartConnect/DNS configuration.  Multiple ping tests to msbdc.dellemc.com should return different IP addresses from Isilon; the range of IP addresses returned is defined by the IP Pool Range in the Isilon GUI.
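For example, from any client that resolves against the internal DNS server (dig may require the bind-utils package; the names are from the example above):

# Repeated lookups should rotate through the IP addresses in the SmartConnect pool
dig +short msbdc.dellemc.com
ping -c 1 msbdc.dellemc.com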

SQL Server Big Data Cluster would simply have a single mount configuration pointing to the defined SmartConnect Zone name on Isilon.  Details on how to set up the HDFS mount to Isilon from SQL Server Big Data Cluster are presented in the next section.

SmartConnect makes storage administration easy.  If more storage capacity is required, simply add more Isilon nodes to the cluster and storage capacity and I/O performance instantly increases without having to make a single configuration change to the SQL Server Big Data Clusters – BRILLIANT!

With HDFS enabled, the access zone defined, and the network/DNS configuration complete, the Isilon storage system can now be mounted by SQL Server Big Data Clusters.

Mounting Dell EMC Isilon from SQL Server Big Data Cluster

Assuming you have a SQL Server Big Data Cluster running, begin by opening a terminal session to connect to your SQL Server Big Data Cluster.  You can obtain the IP address of the endpoint of the controller-svc-external service of your cluster with the following command:

Using the IP of the controller end point obtained from the above command, log into your big data cluster:

Mount Isilon using HDFS on your SQL Server Big Data Cluster with the following command:

Note:  hdfs://msbdc.dellemc.com is shown as an example; the hdfs URI must match the SmartConnect Zone name defined in the Isilon configuration.  The data directory specified is also an example; any directory name that exists within the Isilon access zone can be used.  Also, the mount point /mount1 shown above is just an example; any name can be used for the mount point.
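As a rough, hedged sketch only (assuming the SQL Server 2019 azdata CLI and kubectl, with a Kubernetes namespace of mssql-cluster and controller port 30080 as illustrative assumptions rather than values from this setup), the sequence can look like the following:

# Find the external controller endpoint (namespace name is an assumption)
kubectl get svc controller-svc-external -n mssql-cluster

# Log in to the big data cluster controller (endpoint port is an assumption)
azdata login --endpoint https://<controller-ip>:30080

# Mount the Isilon HDFS path at /mount1; the URI must match the SmartConnect zone name
azdata bdc hdfs mount create --remote-uri hdfs://msbdc.dellemc.com:8020/data --mount-path /mount1

# Check the mount status
azdata bdc hdfs mount status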

An example of a successful response of the above mount command is shown below:

Create mount /mount1 submitted successfully.  Check mount status for progress.

Check the mount status with the following command:

sample output:

Run an hdfs shell and list the contents on Isilon:

sample output:

In addition to using hdfs shell commands, you can use tools like Azure Data Studio to access and browse files over the HDFS service on Dell EMC Isilon.  The example below is using Spark to read the data over HDFS:

To learn more about Dell EMC Isilon, please visit us at DellEMC.com.

 

OneFS and IPMI

First introduced in version 9.0, OneFS provides support for IPMI, the Intelligent Platform Management Interface protocol. IPMI allows out-of-band console access and remote power control across a dedicated ethernet interface via Serial over LAN (SoL). As such, IPMI provides true lights-out management for PowerScale F-series all-flash nodes and Gen6 H-series and A-series chassis without the need for additional RS-232 serial port concentrators or PDU rack power controllers.

IPMI enables individual nodes or the entire cluster to be powered on after maintenance or a power outage. For example:

  • Power off nodes or the cluster, such as after a power outage and when the cluster is operating on backup power.
  • Perform a Hard/Cold Reboot/Power Cycle, for example, if a node is unresponsive to OneFS.

IPMI is disabled by default in OneFS 9.0 and later, but can be easily enabled, configured, and operated from the CLI via the new ‘isi ipmi’ command set.

A cluster’s console can easily be accessed using the IPMItool utility, available as part of most Linux distributions, or accessible through other proprietary tools. For the PowerScale F900, F600 and F200 platforms, the Dell iDRAC remote console option can be accessed via an https web browser session to the default port 443 at a node’s IPMI address.

Note that support for IPMI on Isilon Generation 6 hardware requires node firmware package 10.3.2 and SSP firmware 02.81 or later.

With OneFS 9.0 and later, IPMI is fully supported on both PowerScale Gen6 H-series and A-series chassis-based platforms, and PowerScale all-flash F-series platforms. For Gen6 nodes running 8.2.x releases, IPMI is not officially supported but does generally work.

IPMI can be configured for DHCP, a static IP, or a range of IP addresses. With the range option, IP addresses are allocated on a first-available basis, and a specific IP address cannot be assigned to a specific node. For security purposes, the recommendation is to restrict IPMI traffic to a dedicated, management-only VLAN.

A single username and password is configured for IPMI management across all the nodes in a cluster using the isi ipmi user modify --username=<username> --set-password CLI syntax. Usernames can be up to 16 characters in length, and passwords must comprise 17-20 characters. To verify the username configuration, use isi ipmi user view.

Be aware that a node’s physical serial port is disabled when a SoL session is active, but becomes re-enabled when the SoL session is terminated with the ‘deactivate’ command option.

In order to run the OneFS IPMI commands, the administrative account being used must have the RBAC ISI_PRIV_IPMI privilege.
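As an illustrative sketch (the role and user names below are examples, reusing the isi auth roles syntax shown earlier in this document), the privilege could be granted to a dedicated role as follows:

# isi auth roles create --name=IpmiAdmin --description="Remote IPMI management"
# isi auth roles modify IpmiAdmin --add-priv=ISI_PRIV_IPMI
# isi auth roles modify IpmiAdmin --add-user=admin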

The following CLI syntax can be used to enable IPMI for DHCP:

# isi ipmi settings modify --enabled=True --allocation-type=dhcp

Similarly, to enable IPMI for a static IP address:

# isi ipmi settings modify --enabled=True --allocation-type=static

To enable IPMI for a range of IP addresses use:

# isi ipmi network modify --gateway=[gateway IP] --prefixlen=[prefix length] --ranges=[IP Range]
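For instance, with purely illustrative values for the gateway, prefix length, and address range:

# isi ipmi network modify --gateway=10.20.30.1 --prefixlen=24 --ranges=10.20.30.41-10.20.30.60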

The power control and Serial over LAN features can be configured and viewed using the following CLI command syntax. For example:

# isi ipmi features list

ID            Feature Description           Enabled
----------------------------------------------------
Power-Control Remote power control commands Yes

SOL           Serial over Lan functionality Yes
----------------------------------------------------

To enable the power control feature:

# isi ipmi features modify Power-Control --enabled=True

To enable the Serial over LAN (SoL) feature:

# isi ipmi features modify SOL --enabled=True

The following CLI commands can be used to configure a single username and password to perform IPMI tasks across all nodes in a cluster. Note that usernames can be up to 16 characters in length, while the associated passwords must be 17-20 characters in length.

To configure the username and password, run the CLI command:

# isi ipmi user modify --username [Username] --set-password

To confirm the username configuration, use:

# isi ipmi user view

Username: power

In this case, the user ‘power’ has been configured for OneFS IPMI control.

On the client side, the ‘ipmitool’ command utility is ubiquitous in the Linux and UNIX world, and is included natively as part of most distributions. If not, it can easily be installed using the appropriate package manager, such as ‘yum’.

The ipmitool usage syntax is as follows:

[Linux Host:~]$ ipmitool -I lanplus -H [Node IP] -U [Username] -L OPERATOR -P [password]

For example, to execute power control commands:

ipmitool -I lanplus -H [Node IP] -U [Username] -L OPERATOR -P [password] power [command]

The ‘power’ command options above include status, on, off, cycle, and reset.

And, similarly, for Serial over LAN:

ipmitool -I lanplus -H [Node IP] -U [Username] -L OPERATOR -P [password] sol [command]

The serial over LAN ‘command’ options include info, activate, and deactivate.
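For instance, to check a node’s power state and then open a console session (the node IP address is a placeholder, and the username ‘power’ is the account configured earlier):

ipmitool -I lanplus -H 10.20.30.41 -U power -L OPERATOR -P [password] power status
ipmitool -I lanplus -H 10.20.30.41 -U power -L OPERATOR -P [password] sol activate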

Once active, a Serial over LAN session can easily be exited using the ‘tilde dot’ command syntax, as follows:

# ~.

On PowerScale F600 and F200 nodes, the remote console can be accessed via the Dell iDRAC by browsing to https://<node_IPMI_IP_address>:443 and, unless they have been changed, using the default credentials of root/calvin.

Double-clicking the ‘Virtual Console’ image on the bottom right of the iDRAC main page brings up a full-size console window.

From here, authenticate using your preferred cluster username and password for full out-of-band access to the OneFS console.

When it comes to troubleshooting OneFS IPMI, a good place to start is by checking that the daemon is enabled. This can be done using the following CLI command:

# isi services -a | grep -i ipmi_mgmt

isi_ipmi_mgmt_d      Manages remote IPMI configuration        Enabled

The IPMI management daemon, isi_ipmi_mgmt_d, can also be run with a variety of options including the -s flag to list the current IPMI settings across the cluster, the -d flag to enable debugging output, etc, as follows:

# /usr/bin/isi_ipmi_mgmt_d -h

usage: isi_ipmi_mgmt_d [-h] [-d] [-m] [-s] [-c CONFIG]

Daemon that manages the remote IPMI configuration.

optional arguments:

-h, --help            show this help message and exit

-d, --debug           Enable debug logging

-m, --monitor         Launch the remote IPMI monitor daemon

-s, --show            Show the remote IPMI settings

-c CONFIG, --config CONFIG

Configure IPMI management settings

IPMI writes errors, warnings, etc, to its log file, located at /var/log/isi_ipmi_mgmt_d.log, and which includes a host of useful troubleshooting information.

Isilon OneFS and Hadoop Known Issues

The following are known issues that exist with OneFS and Hadoop HDFS integrations:

Oozie sharedlib deployment fails with Isilon

The deployment of the oozie shared libraries fails on Ambari 2.7/HDP 3.x against Isilon.

Oozie makes an RPC call to check for erasure coding when deploying the shared libraries. OneFS does not support HDFS erasure coding, because OneFS natively uses its own erasure coding for data protection, so the call fails and is handled poorly on the Oozie side of the code. This causes the deployment of the shared lib to fail.

[root@centos-01 ~]# /usr/hdp/current/oozie-server/bin/oozie-setup.sh sharelib create -fs hdfs://hdp-27.foo.com:8020 -locallib /usr/hdp/3.0.1.0-187/oozie/libserver

  setting OOZIE_CONFIG=${OOZIE_CONFIG:-/usr/hdp/current/oozie-server/conf}

  setting CATALINA_BASE=${CATALINA_BASE:-/usr/hdp/current/oozie-server/oozie-server}

  setting CATALINA_TMPDIR=${CATALINA_TMPDIR:-/var/tmp/oozie}

  setting OOZIE_CATALINA_HOME=/usr/lib/bigtop-tomcat

  setting JAVA_HOME=/usr/jdk64/jdk1.8.0_112

  setting JRE_HOME=${JAVA_HOME}

  setting CATALINA_OPTS="$CATALINA_OPTS -Xmx2048m"

  setting OOZIE_LOG=/var/log/oozie

  setting CATALINA_PID=/var/run/oozie/oozie.pid

  setting OOZIE_DATA=/hadoop/oozie/data

  setting OOZIE_HTTP_PORT=11000

  setting OOZIE_ADMIN_PORT=11001

  setting JAVA_LIBRARY_PATH=/usr/hdp/3.0.1.0-187/hadoop/lib/native/Linux-amd64-64

  setting OOZIE_CLIENT_OPTS="${OOZIE_CLIENT_OPTS} -Doozie.connection.retry.count=5 "

  setting OOZIE_CONFIG=${OOZIE_CONFIG:-/usr/hdp/current/oozie-server/conf}

  setting CATALINA_BASE=${CATALINA_BASE:-/usr/hdp/current/oozie-server/oozie-server}

  setting CATALINA_TMPDIR=${CATALINA_TMPDIR:-/var/tmp/oozie}

  setting OOZIE_CATALINA_HOME=/usr/lib/bigtop-tomcat

  setting JAVA_HOME=/usr/jdk64/jdk1.8.0_112

  setting JRE_HOME=${JAVA_HOME}

  setting CATALINA_OPTS="$CATALINA_OPTS -Xmx2048m"

  setting OOZIE_LOG=/var/log/oozie

  setting CATALINA_PID=/var/run/oozie/oozie.pid

  setting OOZIE_DATA=/hadoop/oozie/data

  setting OOZIE_HTTP_PORT=11000

  setting OOZIE_ADMIN_PORT=11001

  setting JAVA_LIBRARY_PATH=/usr/hdp/3.0.1.0-187/hadoop/lib/native/Linux-amd64-64

  setting OOZIE_CLIENT_OPTS="${OOZIE_CLIENT_OPTS} -Doozie.connection.retry.count=5 "

SLF4J: Class path contains multiple SLF4J bindings.

SLF4J: Found binding in [jar:file:/usr/hdp/3.0.1.0-187/oozie/lib/slf4j-simple-1.6.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/usr/hdp/3.0.1.0-187/oozie/libserver/log4j-slf4j-impl-2.10.0.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: Found binding in [jar:file:/usr/hdp/3.0.1.0-187/oozie/libserver/slf4j-log4j12-1.6.6.jar!/org/slf4j/impl/StaticLoggerBinder.class]

SLF4J: See http://www.slf4j.org/codes.html#multiple_bindings for an explanation.

SLF4J: Actual binding is of type [org.slf4j.impl.SimpleLoggerFactory]

3138 [main] WARN org.apache.hadoop.util.NativeCodeLoader - Unable to load native-hadoop library for your platform... using builtin-java classes where applicable

4193 [main] INFO org.apache.hadoop.security.UserGroupInformation - Login successful for user oozie/centos-01.foo.com@FOO.COM using keytab file /etc/security/keytabs/oozie.service.keytab

4436 [main] INFO org.apache.hadoop.conf.Configuration.deprecation - mapred.local.dir is deprecated. Instead, use mapreduce.cluster.local.dir

4490 [main] INFO org.apache.hadoop.security.SecurityUtil - Updating Configuration

log4j:WARN No appenders could be found for logger (org.apache.htrace.core.Tracer).

log4j:WARN Please initialize the log4j system properly.

log4j:WARN See http://logging.apache.org/log4j/1.2/faq.html#noconfig for more info.

Found Hadoop that supports Erasure Coding. Trying to disable Erasure Coding for path: /user/root/share/lib

Error invoking method with reflection





Error: java.lang.reflect.InvocationTargetException

Stack trace for the error was (for debug purposes):

java.lang.RuntimeException: java.lang.reflect.InvocationTargetException

        at org.apache.oozie.tools.ECPolicyDisabler.invokeMethod(ECPolicyDisabler.java:111)

        at org.apache.oozie.tools.ECPolicyDisabler.tryDisableECPolicyForPath(ECPolicyDisabler.java:47)

        at org.apache.oozie.tools.OozieSharelibCLI.run(OozieSharelibCLI.java:171)

        at org.apache.oozie.tools.OozieSharelibCLI.main(OozieSharelibCLI.java:67)

Caused by: java.lang.reflect.InvocationTargetException

        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

        at java.lang.reflect.Method.invoke(Method.java:498)

        at org.apache.oozie.tools.ECPolicyDisabler.invokeMethod(ECPolicyDisabler.java:108)

        ... 3 more

Caused by: org.apache.hadoop.ipc.RemoteException(org.apache.hadoop.ipc.RpcNoSuchMethodException): Unknown RPC: getErasureCodingPolicy

        at org.apache.hadoop.ipc.Client.getRpcResponse(Client.java:1497)

        at org.apache.hadoop.ipc.Client.call(Client.java:1443)

        at org.apache.hadoop.ipc.Client.call(Client.java:1353)

        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:228)

        at org.apache.hadoop.ipc.ProtobufRpcEngine$Invoker.invoke(ProtobufRpcEngine.java:116)

        at com.sun.proxy.$Proxy9.getErasureCodingPolicy(Unknown Source)

        at org.apache.hadoop.hdfs.protocolPB.ClientNamenodeProtocolTranslatorPB.getErasureCodingPolicy(ClientNamenodeProtocolTranslatorPB.java:1892)

        at sun.reflect.NativeMethodAccessorImpl.invoke0(Native Method)

        at sun.reflect.NativeMethodAccessorImpl.invoke(NativeMethodAccessorImpl.java:62)

        at sun.reflect.DelegatingMethodAccessorImpl.invoke(DelegatingMethodAccessorImpl.java:43)

        at java.lang.reflect.Method.invoke(Method.java:498)

        at org.apache.hadoop.io.retry.RetryInvocationHandler.invokeMethod(RetryInvocationHandler.java:422)

        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeMethod(RetryInvocationHandler.java:165)

        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invoke(RetryInvocationHandler.java:157)

        at org.apache.hadoop.io.retry.RetryInvocationHandler$Call.invokeOnce(RetryInvocationHandler.java:95)

        at org.apache.hadoop.io.retry.RetryInvocationHandler.invoke(RetryInvocationHandler.java:359)

        at com.sun.proxy.$Proxy10.getErasureCodingPolicy(Unknown Source)

        at org.apache.hadoop.hdfs.DFSClient.getErasureCodingPolicy(DFSClient.java:3082)

        at org.apache.hadoop.hdfs.DistributedFileSystem$66.doCall(DistributedFileSystem.java:2884)

        at org.apache.hadoop.hdfs.DistributedFileSystem$66.doCall(DistributedFileSystem.java:2881)

        at org.apache.hadoop.fs.FileSystemLinkResolver.resolve(FileSystemLinkResolver.java:81)

        at org.apache.hadoop.hdfs.DistributedFileSystem.getErasureCodingPolicy(DistributedFileSystem.java:2898)

        ... 8 more
A workaround is to manually copy and unpack oozie-sharelib.tar.gz to /user/oozie/share/lib.
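
A minimal sketch of that manual workaround, assuming the sharelib tarball sits under /usr/hdp/current/oozie-server and the commands are run as a user with write access to /user/oozie (the paths and ownership here are assumptions; adjust to your environment):

tar -zxvf /usr/hdp/current/oozie-server/oozie-sharelib.tar.gz -C /tmp
hdfs dfs -mkdir -p /user/oozie/share/lib
hdfs dfs -put /tmp/share/lib/* /user/oozie/share/lib/
hdfs dfs -chown -R oozie:hadoop /user/oozie/share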

Cloudera BDR integration with Cloudera Manager Based Isilon Integration

Cloudera CDH with BDR is no longer supported with Isilon; CDH fails to fully integrate BDR with a Cloudera Manager-based Isilon cluster.

Upgrading Ambari 2.6.5 to 2.7 – setfacl issue with Hive

Per the following procedure: http://www.unstructureddatatips.com/upgrade-hortonworks-hdp2-6-5-to-hdp3-on-dellemc-isilon-onefs-8-1-2-and-later/

When upgrading from Ambari 2.6.5 to 2.7, if the Hive service is installed, the following must be completed prior to the upgrade; otherwise the upgrade process will stall with an Unknown RPC issue as seen below.

 

The Isilon OneFS HDFS service does not support HDFS ACLs, and the resulting setfacl call will cause the upgrade to stall.

Add the property dfs.namenode.acls.enabled=false to the custom hdfs-site prior to upgrading; this prevents the upgrade from attempting to use setfacl.
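
In hdfs-site.xml form the property looks like the following (in Ambari it is simply added as a key/value pair under custom hdfs-site):

<property>
  <name>dfs.namenode.acls.enabled</name>
  <value>false</value>
</property>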

Restart any services that need restarting

Execute the upgrade per the procedure and the Hive setfacl issue will not be encountered.

Additional upgrade issues you may see:

– Error mapping uname 'yarn-ats' to uid (create the yarn-ats user: isi auth users create yarn-ats --zone=<hdfs zone>)

– MySQL dependency error (execute: ambari-server setup --jdbc-db=mysql --jdbc-driver=/usr/share/java/mysql-connector-java.jar)

– Ambari Metrics restart issue. Reference: http://www.ryanchapin.com/fv-b-4-818/-SOLVED–Unable-to-Connect-to-ambari-metrics-collector-Issues.html

 

OneFS 8.2 Local Service Accounts need to be ENABLED

With the release of OneFS 8.2, a number of changes were made in the identity management stack. One change required on 8.2 is that local accounts must be in the enabled state to be used for identity; in prior versions, local account IDs could be used while the local account was disabled.

In 8.2, all local accounts must be ENABLED to be used for ID management by OneFS; in 8.1.2 and prior, local accounts were functional when disabled.

On upgrade to 8.2:

  • All accounts should be set to the enabled state
  • Enable all accounts prior to upgrade

The latest version of the create_users script on the isilon_hadoop_tools GitHub will now create enabled users by default.

Enabling an account does not make it interactive-logon aware; these accounts are still just IDs used by Isilon for HDFS ID management.
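
For example, a hedged sketch of checking and enabling a local service account in an access zone (the --enabled flag is an assumption based on the OneFS CLI; verify with 'isi auth users modify --help' on your release):

isi auth users view hdfs --zone=<zone-name>
isi auth users modify hdfs --enabled=true --zone=<zone-name>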

 

Support for HDP 3.1 with the Isilon Management Pack 1.0.1.0

With the release of the Isilon Management Pack 1.0.1.0, support for HDP 3.1 is included. The procedure to upgrade the mpack is listed here if mpack 1.0.0.1 was installed with HDP 3.0.1.

Before upgrading the mpack, consult the following KB to assess the status of the Kerberized Spark2 services and whether modifications were made to 3.0.1 installs in Ambari: Isilon: Spark2 fails to start after Kerberization with HDP 3 and OneFS due to missing configurations

Upgrade the Isilon Ambari Management Pack

  1. Download the Isilon Ambari Management Pack
  2. Install the management pack by running the following commands on the
    Ambari server:
    
    ambari-server upgrade-mpack --mpack=<path-to-new-mpack.tar.gz> --verbose
    
    ambari-server restart

     

How to determine the Isilon Ambari Management Pack version

On the Ambari server host run the following command:

ls /var/lib/ambari-server/resources/mpacks | grep "onefs-ambari-mpack-"

The output will appear similar to this, where x.x.x.x indicates which version of the IAMP is currently installed:

onefs-ambari-mpack-x.x.x.x

How to find the README in Isilon Ambari Management Pack 1.0.1.0

Download the Isilon Ambari Management Pack

  1. Run the following command to extract the contents:
    • tar -zxvf isilon-onefs-mpack-1.0.1.0.tar.gz
  2. The README is located under isilon-onefs-mpack-1.0.1.0/addon-services/ONEFS/1.0.0/support/README
  3. Please review the README for release information.

 

The release of OneFS 8.2 brings changes to Hadoop Cluster Deployment and Setup

Prior to 8.2, the following two configurations were required to support a Hadoop cluster:

  1. Modification to the Access Control List Policy setting for OneFS is no longer needed

We used to run ‘isi auth settings acls modify --group-owner-inheritance=parent’ to make the OneFS file system behave like an HDFS file system. This was a global setting that affected the whole cluster and other workflows. In 8.2 this is no longer needed; HDFS operations behave this way natively, so the setting is not required. Do not run this command when setting up HDFS on new 8.2 clusters. If it was previously set on 8.1.2 or prior, it is suggested to leave the setting as is, because modifying it can affect other workflows.

  2. The hdfs-to-root mapping is not needed; it has been replaced by RBAC

Prior to 8.2, hdfs => root mappings were required to facilitate the behavior of the hdfs account. In 8.2 this root mapping has been replaced with an RBAC privilege: no root mapping is needed, and instead the following RBAC role with the specified privileges should be created. Add any account needing this access to the role.

isi auth roles create --name=hdfs_access --description="Bypass FS permissions" --zone=System
isi auth roles modify hdfs_access --add-priv=ISI_PRIV_IFS_RESTORE --zone=System
isi auth roles modify hdfs_access --add-priv=ISI_PRIV_IFS_BACKUP --zone=System
isi auth roles modify hdfs_access --add-user=hdfs --zone=System
isi auth roles view hdfs_access --zone=System
isi_for_array "isi auth mapping flush --all"
isi_for_array "isi auth cache flush --all"

 

The installation guides will reflect these changes shortly.

Summary:

8.1.2 and Earlier:

  • hdfs=>root mapping
  • ACL Policy Change Needed

8.2 and Later:

  • RBAC role for hdfs
  • No ACL Policy Change

 

When using Ambari 2.7 and the Isilon Management Pack, the following is seen in the Isilon hdfs.log:

isilon-3: 2019-05-14T14:34:06-04:00 <30.4> isilon-3 hdfs[95183]: [hdfs] Ambari: Agent for zone 12 got a bad exit code from its Ambari server. The agent will attempt to recover.

isilon-3: 2019-05-14T14:34:06-04:00 <30.6> isilon-3 hdfs[95183]: [hdfs] Ambari: The Ambari server for zone 12 is running a version unsupported by OneFS: 2.7.1.0. Agent will reset and retry until a supported Ambari server version is installed or Ambari is no longer enabled for this zone

isilon-3: 2019-05-14T14:35:12-04:00 <30.4> isilon-3 hdfs[95183]: [hdfs] Ambari: Agent for zone 12 got a bad exit code from its Ambari server. The agent will attempt to recover.

isilon-3: 2019-05-14T14:35:12-04:00 <30.6> isilon-3 hdfs[95183]: [hdfs] Ambari: The Ambari server for zone 12 is running a version unsupported by OneFS: 2.7.1.0. Agent will reset and retry until a supported Ambari server version is installed or Ambari is no longer enabled for this zone

isilon-3: 2019-05-14T14:36:17-04:00 <30.4> isilon-3 hdfs[95183]: [hdfs] Ambari: Agent for zone 12 got a bad exit code from its Ambari server. The agent will attempt to recover.

isilon-3: 2019-05-14T14:36:17-04:00 <30.6> isilon-3 hdfs[95183]: [hdfs] Ambari: The Ambari server for zone 12 is running a version unsupported by OneFS: 2.7.1.0. Agent will reset and retry until a supported Ambari server version is installed or Ambari is no longer enabled for this zone

When using Ambari with the Isilon Management Pack, Isilon should not be configured with an Ambari Server or ODP version, as these settings are no longer needed once the Management Pack is in use.

If they have been added, remove them from the Isilon HDFS configuration for the zone in question. This only applies to Ambari 2.7 with the Isilon Management Pack; Ambari 2.6 and earlier still require these settings.
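
For example, to clear them for the zone shown below (the same pattern appears later in this document for the System zone):

isilon-1# isi hdfs settings modify --zone=zone-hdp27 --ambari-server=
isilon-1# isi hdfs settings modify --zone=zone-hdp27 --odp-version=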

isilon-1# isi hdfs settings view --zone=zone-hdp27

Service: Yes

Default Block Size: 128M

Default Checksum Type: none

Authentication Mode: kerberos_only

Root Directory: /ifs/zone/hdp27/hadoop-root

WebHDFS Enabled: Yes

           Ambari Server: -

Ambari Namenode: hdp-27.foo.com

       Odp Version: -

Data Transfer Cipher: none

Ambari Metrics Collector: centos-01.foo.com

 

Ambari sees LDAPS issue connecting to AD during Kerberization

05 Apr 2018 20:05:14,081 ERROR [ambari-client-thread-38] KerberosHelperImpl:2379 - Cannot validate credentials: org.apache.ambari.server.serveraction.kerberos.KerberosInvalidConfigurationException: Failed to connect to KDC - Failed to communicate with the Active Directory at ldaps://rduvnode217745.west.isilon.com/DC=AMB3,DC=COM: simple bind failed: rduvnode217745.west.isilon.com:636

Make sure the server’s SSL certificate or CA certificates have been imported into Ambari’s truststore.

 

Review the following KB from Hortonworks on resolving this Ambari Issue:

https://community.hortonworks.com/content/supportkb/148572/failed-to-connect-to-kdc-make-sure-the-servers-ssl.html

 

HDFS rollup patch for 8.1.2 – Patch-240163:

Patch for OneFS 8.1.2.0. This patch addresses issues with the Hadoop Distributed File System (HDFS).

********************************************************************************

This patch can be installed on clusters running the following OneFS version:

8.1.2.0

This patch deprecates the following patch:

Patch-236288

 

This patch conflicts with the following patches:

Patch-237113

Patch-237483

 

If any conflicting or deprecated patches are installed on the cluster, you must remove them before installing this patch.

********************************************************************************

RESOLVED ISSUES

 

* Bug ID 240177

The Hadoop Distributed File System (HDFS) rename command did not manage file handles correctly and might have caused data unavailability with STATUS_TOO_MANY_OPEN_FILES error.

 

* Bug ID 236286

If a OneFS cluster had the Hadoop Distributed File System (HDFS) configured for Kerberos authentication, WebHDFS requests over curl might have failed to follow a redirect request.

 

 

WebHDFS issue with Kerberized 8.1.2 – curl requests fail to follow redirects; Service Checks and Ambari Views will fail

 

Isilon HDFS error: STATUS_TOO_MANY_OPENED_FILES causes jobs to fail

 

OneFS 8.0.0.X and Cloudera Impala 5.12.X: Impala queries fail with `WARNINGS: TableLoadingException: Failed to load metadata for table: <tablename> , CAUSED BY: IllegalStateException: null`

 

Ambari agent fails to send heartbeats to Ambari server if agent is running on a NANON node

NameNode gives out any IP addresses in an access zone, even across pools and subnets; client connection may fail as a result

Other Known Issues

  1. Host registration fails on RHEL 7 hosts with openssl issues

Modify the ambari-agent.ini file:

/etc/ambari-agent/conf/ambari-agent.ini

[security]

force_https_protocol=PROTOCOL_TLSv1_2

 

Restart the ambari-server and all ambari-agents
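
For example, on the Ambari server host:

ambari-server restart

And on each host running an Ambari agent:

ambari-agent restart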

https://community.hortonworks.com/questions/145/openssl-error-upon-host-registration.html

 

In OneFS 9.0.0, the services are now disabled by default

Check the service status using isi services -a:

hop-ps-a-3# isi services -a
Available Services:
apache2              Apache2 Web Server                       Enabled
auth                 Authentication Service                   Enabled
celog                Cluster Event Log                        Enabled
connectemc           ConnectEMC Service                       Disabled
cron                 System cron Daemon                       Enabled
dell_dcism           Dell iDRAC Service Module                Enabled
dell_powertools      Dell PowerTools Agent Daemon             Enabled
dmilog               DMI log monitor                          Enabled
gmond                Ganglia node monitor                     Disabled
hdfs                 HDFS Server                              Disabled

Enable the hdfs service manually to get going with Hadoop cluster access from HDFS clients.
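
For example (a hedged sketch; verify the exact syntax with 'isi services --help' on your release):

hop-ps-a-3# isi services hdfs enable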

Upgrade Hortonworks HDP 2.6.5 to HDP 3.* on DellEMC Isilon OneFS 8.1.2 and later

Introduction

This blog post walks you through the process of upgrading Hortonworks Data Platform (HDP) 2.6.5 to HDP 3.0.1 or HDP 3.1.0 on DellEMC Isilon OneFS 8.1.2/OneFS 8.2. It is intended for systems administrators, IT program managers, IT architects, and IT managers who are upgrading Hortonworks Data Platform installed on OneFS 8.1.2.0 or later versions.

There are two official ways to upgrade to HDP 3.* as follows:

    1. Deploy a fresh HDP 3.* cluster and migrate existing data using Data Lifecycle Manager or distributed copy (distcp).
    2. Perform an in-place upgrade of an existing HDP 2.6.x cluster.

This post will demonstrate in-place upgrades. Make sure your cluster is ready and meets all the success criteria as mentioned here and in the official Hortonworks Upgrade documentation.

The installation or upgrade process of the new HDP 3.0.1 and later versions on Isilon OneFS 8.1.2 and later versions has changed as follows:

OneFS is no longer presented as a host to the HDP cluster; instead, OneFS is internally managed as a dedicated service in place of HDFS by installing a management pack called the Ambari Management Pack for Isilon OneFS. It is a software component that can be installed on the Ambari Server to define OneFS as a service in a Hadoop cluster. The management pack allows an Ambari administrator to start, stop, and configure OneFS as an HDFS storage service. This provides native NameNode and DataNode capabilities similar to traditional HDFS.

This management pack is OneFS release-independent and can be updated in between releases if needed.

Prerequisites

    1. Hadoop cluster running HDP 2.6.5 and Ambari Server 2.6.2.2.
    2. DellEMC Isilon OneFS updated to 8.1.2 and patch 240163 installed.
    3. Ambari Management Pack for Isilon OneFS, downloaded from here.
    4. HDFS to OneFS Service converter script, downloaded from here.

We will perform the upgrade in two parts: first we will make the changes on the OneFS host, followed by updates on the HDP cluster.

OneFS Host Preparation

The step-by-step process to prepare the OneFS host for the HDP upgrade is as follows:

    1. First make sure the Isilon OneFS cluster is running 8.1.2 with the latest available patch installed. Check DellEMC support or Current Isilon OneFS Patches.

  2. HDP 3.0.1 comes with Timeline Service 2.0, which relies on the yarn-ats user and a dedicated HBase store in the back end for YARN apps and jobs framework metrics collection. For this, we create two new users, yarn-ats and yarn-ats-hbase, on the OneFS host.

Login to the Isilon OneFS terminal node using root credentials, and run the following commands:

isi auth group create yarn-ats
isi auth users create yarn-ats --primary-group yarn-ats --home-directory=/ifs/home/yarn-ats
isi auth group create yarn-ats-hbase
isi auth users create yarn-ats-hbase --primary-group yarn-ats-hbase --home-directory=/ifs/home/yarn-ats-hbase
  3. Once the new users are created, you need to map yarn-ats-hbase to yarn-ats on the OneFS host. This step is required only if you are going to secure the HDP cluster with Kerberization.
isi zone modify --add-user-mapping-rules="yarn-ats-hbase=>yarn-ats" --zone=ZONE_NAME

This user mapping depends on the mode of the Timeline Service 2.0 installation. Read those instructions carefully and opt for the appropriate deployment mode to avoid ats-hbase service failure.

You can skip the yarn-ats-hbase to yarn-ats user mapping in the following two cases:

    • Rename the yarn-ats-hbase principals to yarn-ats during Kerberization if Timeline Service 2.0 is deployed in Embedded or System Service mode.
    • There is no need to set the user mapping if Timeline Service 2.0 is configured on an external HBase.

HDP Cluster preparation and upgrade

Follow the steps as documented. You must meet all of the prerequisites in the Hortonworks upgrade document.

  1. Before starting the process, make sure the HDP 2.6.5 cluster is healthy by performing a service check, and address any alerts that display.

  2. Now stop the HDFS service and all other components running on the OneFS host.

  3. Delete the Datanode/Namenode/SNamenode using the following curl commands:

Note that before DN/NN and SNN are deleted, you’ll see something like the following:

Use the following curl commands to delete the DN, NN and SNN:

export AMBARI_SERVER=<Ambari server IP/FQDN>
export CLUSTER=<HDP2.6.5 cluster name>
export HOST=<OneFS host FQDN>
curl -u admin:admin -H "X-Requested-By: Ambari" -X DELETE http://$AMBARI_SERVER:8080/api/v1/clusters/$CLUSTER/hosts/$HOST/host_components/DATANODE
curl -u admin:admin -H "X-Requested-By: Ambari" -X DELETE http://$AMBARI_SERVER:8080/api/v1/clusters/$CLUSTER/hosts/$HOST/host_components/NAMENODE
curl -u admin:admin -H "X-Requested-By: Ambari" -X DELETE http://$AMBARI_SERVER:8080/api/v1/clusters/$CLUSTER/hosts/$HOST/host_components/SECONDARY_NAMENODE

After deleting the DN/NN and SNN, you’ll see something similar to the following:

  4. Manually delete the OneFS host from the Ambari Server UI.

The following steps, five through nine, are critical and are related to the Hortonworks HDP upgrade process. Refer to the Hortonworks upgrade documentation or consult Hortonworks support if necessary.

Note: Steps five to nine in the HDP upgrade process below are related to the services running on our POC cluster. You will have to perform backups, migrations, and upgrades of the necessary services as described in the Hortonworks documentation before going to step 10.

———-

  5. Upgrade the Ambari Server/agents to 2.7.1 by following the Hortonworks Ambari Server upgrade document.

  6. Register and install HDP 3.0.1 by following the steps in this Hortonworks HDP register and install target version guide.
  7. Upgrade Ambari Metrics by following the steps in this upgrade Ambari Metrics guide.
  8. Note: This next step is critical: perform a service check on all the services and make sure to address any alerts.
  9. Click upgrade and complete the upgrade process. Address any issues encountered before proceeding to avoid service failures after finalizing the upgrade.

A screen similar to the following displays:

———–

After the successful upgrade to HDP 3.0.1, continue by installing the Ambari Management Pack for Isilon OneFS on the upgraded Ambari Server.
  10. For the Ambari Server Management Pack installation, log in to the Ambari Server terminal, download the management pack, install it, and then restart the Ambari server.

a. Download the Ambari Management Pack for Isilon OneFS from here

b. Install the management pack as shown below. Once it is installed, the following displays: Ambari Server ‘install-mpack’ completed successfully.

root@RDUVNODE334518:~ # ambari-server install-mpack --mpack=isilon-onefs-mpack-0.1.0.0.tar.gz --verbose
Using python /usr/bin/python
Installing management pack
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Installing management pack isilon-onefs-mpack-0.1.0.0-SNAPSHOT.tar.gz
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Download management pack to temp location /var/lib/ambari-server/data/tmp/isilon-onefs-mpack-0.1.0.0-SNAPSHOT.tar.gz
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Expand management pack at temp location /var/lib/ambari-server/data/tmp/isilon-onefs-mpack-0.1.0.0-SNAPSHOT/
2018-11-07 06:36:39,137 - Execute[('tar', '-xf', '/var/lib/ambari-server/data/tmp/isilon-onefs-mpack-0.1.0.0-SNAPSHOT.tar.gz', '-C', '/var/lib/ambari-server/data/tmp/')] {'tries': 3, 'sudo': True, 'try_sleep': 1}
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Stage management pack onefs-ambari-mpack-0.1 to staging location /var/lib/ambari-server/resources/mpacks/onefs-ambari-mpack-0.1
INFO: Processing artifact ONEFS-addon-services of type stack-addon-service-definitions in /var/lib/ambari-server/resources/mpacks/onefs-ambari-mpack-0.1/addon-services
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
INFO: Adjusting file permissions and ownerships
INFO: about to run command: chmod -R 0755 /var/lib/ambari-server/resources/stacks
INFO:
process_pid=28352
INFO: about to run command: chown -R -L root /var/lib/ambari-server/resources/stacks
INFO:
process_pid=28353
INFO: about to run command: chmod -R 0755 /var/lib/ambari-server/resources/extensions
INFO:
process_pid=28354
INFO: about to run command: chown -R -L root /var/lib/ambari-server/resources/extensions
INFO:
process_pid=28355
INFO: about to run command: chmod -R 0755 /var/lib/ambari-server/resources/common-services
INFO:
process_pid=28356
INFO: about to run command: chown -R -L root /var/lib/ambari-server/resources/common-services
INFO:
process_pid=28357
INFO: about to run command: chmod -R 0755 /var/lib/ambari-server/resources/mpacks
INFO:
process_pid=28358
INFO: about to run command: chown -R -L root /var/lib/ambari-server/resources/mpacks
INFO:
process_pid=28359
INFO: about to run command: chmod -R 0755 /var/lib/ambari-server/resources/mpacks/cache
INFO:
process_pid=28360
INFO: about to run command: chown -R -L root /var/lib/ambari-server/resources/mpacks/cache
INFO:
process_pid=28361
INFO: about to run command: chmod -R 0755 /var/lib/ambari-server/resources/dashboards
INFO:
process_pid=28362
INFO: about to run command: chown -R -L root /var/lib/ambari-server/resources/dashboards
INFO:
process_pid=28363
INFO: about to run command: chown -R -L root /var/lib/ambari-server/resources/stacks
INFO:
process_pid=28364
INFO: about to run command: chown -R -L root /var/lib/ambari-server/resources/extensions
INFO:
process_pid=28365
INFO: about to run command: chown -R -L root /var/lib/ambari-server/resources/common-services
INFO:
process_pid=28366
INFO: about to run command: chown -R -L root /var/lib/ambari-server/resources/mpacks
INFO:
process_pid=28367
INFO: about to run command: chown -R -L root /var/lib/ambari-server/resources/mpacks/cache
INFO:
process_pid=28368
INFO: about to run command: chown -R -L root /var/lib/ambari-server/resources/dashboards
INFO:
process_pid=28369
INFO: Management pack onefs-ambari-mpack-0.1 successfully installed! Please restart ambari-server.
INFO: Loading properties from /etc/ambari-server/conf/ambari.properties
Ambari Server 'install-mpack' completed successfully.

c. Restart the Ambari Server.

root@RDUVNODE334518:~ # ambari-server restart
Using python /usr/bin/python
Restarting ambari-server
Waiting for server stop...
Ambari Server stopped
Ambari Server running with administrator privileges.
Organizing resource files at /var/lib/ambari-server/resources...
Ambari database consistency check started...
Server PID at: /var/run/ambari-server/ambari-server.pid
Server out at: /var/log/ambari-server/ambari-server.out
Server log at: /var/log/ambari-server/ambari-server.log
Waiting for server start................
Server started listening on 8080

DB configs consistency check: no errors and warnings were found.

 

  11. Replace the HDFS service with the OneFS service; the installed management pack contains the OneFS service-related settings.

For this step, delete the HDFS service, add the OneFS service installed from the Ambari Management Pack above, and copy the HDFS service configuration into the OneFS service.

a. To delete HDFS, add the OneFS service, and copy the configuration, you can use the automation tool hdfs_to_onefs_convertor.py.

Login to the Ambari Server terminal and download the script from here.

wget --no-check-certificate https://raw.githubusercontent.com/apache/ambari/trunk/contrib/management-packs/isilon-onefs-mpack/src/main/tools/hdfs_to_onefs_convert.py

b. Now run the script, passing the Ambari server and cluster name as parameters. Once it completes, you will see all the services up and running.

root@RDUVNODE334518:~ # python hdfs_to_onefs_convertor.py -o 'RDUVNODE334518.west.isilon.com' -c 'hdpupgd'
This script will replace the HDFS service to ONEFS
The following prerequisites are required:
* ONEFS management package must be installed
* Ambari must be upgraded to >=v2.7.0
* Stack must be upgraded to HDP-3.0
* Is highly recommended to backup ambari database before you proceed.
Checking Cluster: hdpupgd (http://RDUVNODE334518.west.isilon.com:8080/api/v1/clusters/hdpupgd)
Found stack HDP-3.0
Please, confirm you have made backup of the Ambari db [y/n] (n)? y
Collecting hosts with HDFS_CLIENT
Found hosts [u'rduvnode334518.west.isilon.com']
Stopping all services..
Downloading core-site..
Downloading hdfs-site..
Downloading hadoop-env..
Deleting HDFS..
Adding ONEFS..
Adding ONEFS config..
Adding core-site
Adding hdfs-site
Adding hadoop-env-site
Adding ONEFS_CLIENT to hosts: [u'rduvnode334518.west.isilon.com']
Starting all services..
root@RDUVNODE334518:~ #


  12. At this point, you have successfully upgraded to HDP 3.0.1 and replaced the HDFS service with the OneFS service. From now on, Isilon OneFS acts only as an HDFS storage layer, so you can remove the Ambari Server and ODP Version settings from the Isilon HDFS settings as follows:
kbhusan-y93o5ew-1# isi hdfs settings modify --zone=System --odp-version=
kbhusan-y93o5ew-1# isi hdfs settings modify --zone=System --ambari-server=
kbhusan-y93o5ew-1# isi hdfs settings view
Service: Yes
Default Block Size: 128M
Default Checksum Type: none
Authentication Mode: all
Root Directory: /ifs/hdfs-root
WebHDFS Enabled: Yes
Ambari Server: -
Ambari Namenode: kb-hdp-1.west.isilon.com
Odp Version: -
Data Transfer Cipher: none
Ambari Metrics Collector: kb-hdp-1.west.isilon.com
kbhusan-y93o5ew-1#


13. Log in to the Ambari Web UI and check the OneFS service and its configuration. Perform the service check.

A screen similar to the following displays:

Review the results:

Summary

In this blog, we demonstrated how you can successfully upgrade the Apache Ambari Server/agents to 2.7.1 and Hortonworks HDP 2.6.5 to HDP 3.0.1 on DellEMC Isilon OneFS 8.1.2 installed with the latest patch available. The same steps apply when upgrading to later versions than HDP 3.0.1.

We installed the Ambari Management Pack for DellEMC Isilon OneFS, which replaced the HDFS service with the OneFS service. This enables an Ambari administrator to start, stop, and configure OneFS as an HDFS storage service, and it provides DellEMC Isilon OneFS with native NameNode and DataNode capabilities similar to traditional HDFS.

 

 

OneFS S3 Protocol Support

First introduced in version 9.0,  OneFS supports the AWS S3 API as a protocol, extending the PowerScale data lake to natively include object, and enabling workloads which write data via file protocols such as NFS, HDFS or SMB, and then read that data via S3, or vice versa.

Because objects are files “under the hood”, the same OneFS data services, such as Snapshots, SyncIQ, WORM, etc, are all seamlessly integrated.

Applications now have multiple access options – across both file and object – to the same underlying dataset, semantics, and services, eliminating the need for replication or migration for different access requirements, and vastly simplifying management.

This makes it possible to run hybrid and cloud-native workloads, which use S3-compatible backend storage, for example cloud backup & archive software, modern apps, analytics flows, IoT workloads, etc. – and to run these on-prem, alongside and coexisting with traditional file-based workflows.

In addition to HTTP 1.1, OneFS S3 supports HTTPS with TLS 1.2, to meet organizations’ security and compliance needs. And since S3 is integrated as a top-tier protocol, performance is anticipated to be similar to SMB.

By default, the S3 service listens on port 9020 for HTTP and 9021 for HTTPS, although both these ports are easily configurable.
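
For example, a hedged sketch of viewing and changing the listening ports (the flag names are assumptions; confirm with 'isi s3 settings global modify --help'):

# isi s3 settings global view
# isi s3 settings global modify --https-port=9443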

Every S3 object is linked to a file, and each S3 bucket maps to a specific directory called the bucket path.  If the bucket path is not specified, a default is used. When creating a bucket, OneFS adds a dot-s3 directory under the bucket path, which is used to store temporary files for PUT objects.

The AWS S3 data model is a flat structure, without a strict hierarchy of sub-buckets or sub-folders. However, it does provide a logical hierarchy, using object key-name prefixes and delimiters, which OneFS leverages to support a rudimentary concept of folders.

OneFS S3 also incorporates multi-part upload, using HTTP’s ‘100 continue’ header, allowing OneFS to ingest large objects, or copy existing objects, in parts, thereby improving upload performance.

OneFS allows both ‘virtual hosted-style requests’, where you specify a bucket in a request using the HTTP Host header, and also ‘path-style requests’, where a bucket is specified using the first slash-delimited component of the Request-URI path.
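
As a hedged illustration, assuming a hypothetical bucket named ‘mybucket’ and a cluster SmartConnect name of s3.cluster.example.com:

Path-style request:
GET /mybucket/file1.txt HTTP/1.1
Host: s3.cluster.example.com:9020

Virtual hosted-style request:
GET /file1.txt HTTP/1.1
Host: mybucket.s3.cluster.example.com:9020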

Every interaction with S3 is either authenticated or anonymous. While authentication verifies the identity of the requester, authorization controls access to the desired data. OneFS treats unauthenticated requests as anonymous, mapping them to the user ‘nobody’.

OneFS S3 uses either AWS Signature Version 2 or Version 4 to authenticate requests, which must include a signature value that authenticates the request sender. This requires the user to have both an access ID and a secret Key, which can be obtained from the OneFS key management portal.

The secret key is used to generate the signature value, along with several request header values. After receiving the signed request, OneFS uses the access ID to retrieve a copy of the secret key internally, recomputes the signature value of the request, and compares it against the received signature. If they match, the requester is authenticated, and any header value used in the signature is verified to be tamper-free.

Bucket ACLs control whether a user has permission on an S3 bucket. When receiving a request for a bucket operation, OneFS parses the user access ID from the request header and evaluates the request according to the target bucket ACL. To access OneFS objects, the S3 request must be authorized at both the bucket and object level, using permission enforcement based on the native OneFS ACLs.

Here’s the list of the principal S3 operations that OneFS 9.0 currently supports:

  • DELETE object (DeleteObject): Deletes a single object from a bucket. Deleting multiple objects from a bucket in a single request is not supported.
  • GET object (GetObject): Retrieves an object’s content.
  • GET object ACL (GetObjectAcl): Gets the access control list (ACL) of an object.
  • HEAD object (HeadObject): Retrieves metadata from an object without returning the object itself. This operation is useful if you’re only interested in an object’s metadata. The operation returns 200 OK if the object exists and you have permission to access it; otherwise it might return responses such as 404 Not Found or 403 Forbidden.
  • PUT object (PutObject): Adds an object to a bucket.
  • PUT object - copy (CopyObject): Creates a copy of an object that is already stored in OneFS.
  • PUT object ACL (PutObjectAcl): Sets the access control list (ACL) permissions for an object that already exists in a bucket.
  • Initiate multipart upload (CreateMultipartUpload): Initiates a multipart upload and returns an upload ID. The upload ID is associated with all the parts in that specific multipart upload; you specify it in each subsequent upload part request, and also in the final request to either complete or abort the multipart upload.
  • Upload part (UploadPart): Uploads a part in a multipart upload. Each part, except the last, must be at least 5MB in size, and the maximum size of each part is 5GB.
  • Upload part - copy (UploadPartCopy): Uploads a part by copying data from an existing object as the data source. Each part, except the last, must be at least 5MB in size, and the maximum size of each part is 5GB.
  • Complete multipart upload (CompleteMultipartUpload): Completes a multipart upload by assembling previously uploaded parts.
  • List multipart uploads (ListMultipartUploads): Lists in-progress multipart uploads, that is, multipart uploads that have been initiated but not yet completed or aborted.
  • List parts (ListParts): Lists the parts that have been uploaded for a specific multipart upload.
  • Abort multipart upload (AbortMultipartUpload): Aborts a multipart upload. After a multipart upload is aborted, no additional parts can be uploaded using that upload ID, and the storage consumed by any previously uploaded parts is freed. However, if any part uploads are currently in progress, those uploads might or might not succeed, so it might be necessary to abort a given multipart upload multiple times to completely free all storage consumed by all parts.

 

Essentially, this includes the basic bucket and object create, read, update, and delete (CRUD) operations, plus multipart upload.
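
As a hedged client-side illustration, the standard AWS CLI can be pointed at the OneFS S3 endpoint (hypothetical cluster name and bucket; the access ID and secret key come from the OneFS key management portal):

aws configure                         # enter the OneFS access ID and secret key when prompted
aws --endpoint-url http://cluster.example.com:9020 s3 ls
aws --endpoint-url http://cluster.example.com:9020 s3 cp ./file1.txt s3://mybucket/file1.txt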

It’s worth noting that OneFS can accommodate individual objects up to 16TB in size, unlike AWS S3, which caps this at a maximum of 5TB per object.

Please be aware that OneFS 9.0 does not natively support versioning or Cross-Origin Resource Sharing (CORS). However, SnapshotIQ and SyncIQ can be used as a substitute for versioning functionality.

The OneFS S3 implementation includes a new WebUI and CLI for ease of configuration and management.  This enables:

  • The creation of buckets and configuration of OneFS specific options, such as object ACL policy
  • The ability to generate access IDs and secret keys for users through the WebUI key management portal.
  • Global settings, including S3 service control and configuration of the HTTP listening ports.
  • Configuration of Access zones, for multi-tenant support.

All the WebUI functionality and more is also available through the CLI using the new ‘isi s3’ command set:

# isi s3

Description:

    Manage S3 buckets and protocol settings.

Required Privileges:

    ISI_PRIV_S3

Usage:

    isi s3 <subcommand>

        [--timeout <integer>]

        [{--help | -h}]

Subcommands:

    buckets      Manage S3 buckets.

    keys         Manage S3 keys.

    log-level    Manage log level for S3 service.

    mykeys       Manage user's own S3 keys.

    settings     Manage S3 default bucket and global protocol settings.
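
For example, a hedged sketch of creating a bucket and generating a key pair for a user from the CLI (the argument order and flag names here are assumptions; confirm with 'isi s3 buckets create --help' and 'isi s3 keys create --help' on your cluster):

# isi s3 buckets create mybucket /ifs/data/mybucket --create-path --owner=hdfs --zone=System
# isi s3 keys create hdfs --zone=System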