Stackable Operator for Apache HDFS
The Stackable Operator for Apache HDFS (Hadoop Distributed File System) is used to set up HDFS in high-availability mode. HDFS is a distributed file system designed to store and manage massive amounts of data across multiple machines in a fault-tolerant manner. The Operator depends on the Stackable Operator for Apache ZooKeeper, which provides the ZooKeeper cluster used to coordinate the active and standby NameNodes.
Follow the Getting started guide, which walks you through installing the Stackable HDFS and ZooKeeper Operators, setting up ZooKeeper and HDFS, and writing a file to HDFS to verify that everything is set up correctly.
The Operator manages the HdfsCluster custom resource. The cluster implements three roles:
DataNode - responsible for storing the actual data.
JournalNode - maintains a shared edit log of file system changes, which is used to fail over to a standby NameNode if the active NameNode fails. For details see: https://hadoop.apache.org/docs/stable/hadoop-project-dist/hadoop-hdfs/HDFSHighAvailabilityWithQJM.html
NameNode - responsible for keeping track of HDFS blocks and providing access to the data.
The Operator creates the following K8S objects per role group defined in the custom resource:
Service - ClusterIP used for intra-cluster communication.
ConfigMap - HDFS configuration files like log4j.properties are defined here and mounted in the pods.
StatefulSet - where the replica count, volume mounts and more for each role group are defined.
In addition, a NodePort service is created for each pod that carries a specific label; this service exposes all container ports to the outside world (from the perspective of K8S).
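To make these per-role-group objects more concrete, below is a rough sketch of what a ClusterIP Service for a NameNode role group could look like. This is illustrative only: the Operator generates these objects itself, and the name, labels and selector shown here are assumptions rather than its exact output; 8020 and 9870 are the Hadoop 3 NameNode RPC and HTTP default ports.

```yaml
# Illustrative sketch; the Operator creates this object itself, and its
# actual names and labels may differ.
apiVersion: v1
kind: Service
metadata:
  name: simple-hdfs-namenode-default  # assumption: <cluster>-<role>-<role group>
spec:
  type: ClusterIP
  selector:
    # assumption: standard app.kubernetes.io labels identifying the role group pods
    app.kubernetes.io/name: hdfs
    app.kubernetes.io/instance: simple-hdfs
    app.kubernetes.io/component: namenode
  ports:
    - name: rpc
      port: 8020   # Hadoop 3 NameNode RPC default
    - name: http
      port: 9870   # Hadoop 3 NameNode web UI default
```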
In the custom resource you can specify the number of replicas per role group (NameNode, DataNode or JournalNode). A minimal working configuration (see the example manifest after this list) requires:
2 NameNodes (HA)
1 DataNode (should match at least the configured replication factor)
1 JournalNode
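Here is a minimal sketch of such a manifest. The apiVersion and field names (image.productVersion, clusterConfig.zookeeperConfigMapName, roleGroups) are assumptions based on common Stackable conventions; consult the Getting started guide for the authoritative schema.

```yaml
# Minimal HdfsCluster sketch; apiVersion and field names are assumptions,
# verify them against the Getting started guide.
apiVersion: hdfs.stackable.tech/v1alpha1
kind: HdfsCluster
metadata:
  name: simple-hdfs
spec:
  image:
    productVersion: "3.3.4"        # assumption: any supported HDFS version
  clusterConfig:
    # assumption: name of the discovery ConfigMap for the ZooKeeper ZNode
    zookeeperConfigMapName: simple-hdfs-znode
  nameNodes:
    roleGroups:
      default:
        replicas: 2                # two NameNodes for HA
  journalNodes:
    roleGroups:
      default:
        replicas: 1
  dataNodes:
    roleGroups:
      default:
        replicas: 1                # at least the configured replication factor
```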
The Operator creates a service discovery ConfigMap for the HDFS instance. The discovery ConfigMap contains the core-site.xml file and the hdfs-site.xml file, which clients use to connect to HDFS.
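As an illustration of how a client can consume the discovery ConfigMap, the hypothetical pod below mounts it as its Hadoop configuration directory. The ConfigMap name (assumed to match the HdfsCluster name), the image and the mount path are all assumptions.

```yaml
# Hypothetical client pod; ConfigMap name, image and mount path are assumptions.
apiVersion: v1
kind: Pod
metadata:
  name: hdfs-client
spec:
  containers:
    - name: client
      image: apache/hadoop:3            # assumption: any image with the HDFS CLI
      command: ["sleep", "infinity"]
      env:
        - name: HADOOP_CONF_DIR         # point Hadoop tooling at the mounted config
          value: /hdfs-config
      volumeMounts:
        - name: hdfs-discovery
          mountPath: /hdfs-config
  volumes:
    - name: hdfs-discovery
      configMap:
        name: simple-hdfs               # assumption: ConfigMap shares the cluster name
```

With the ConfigMap mounted, a command such as hdfs dfs -ls / inside the container would pick up the NameNode addresses from the mounted configuration.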
Two demos that use HDFS are available.
hbase-hdfs-cycling-data loads a dataset of cycling data from S3 into HDFS and then uses HBase to analyze the data.
jupyterhub-pyspark-hdfs-anomaly-detection-taxi-data showcases the integration between HDFS and Jupyter. New York Taxi data is stored in HDFS and analyzed in a Jupyter notebook.