A distributed file system that provides high-throughput access to application data.

[Webhdfs][crate::services::Webhdfs] is powered by HDFS's RESTful HTTP API.

HDFS support needs the feature `services-hdfs` to be enabled.
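As a sketch, enabling the feature in `Cargo.toml` might look like the following (the version number is illustrative, not prescriptive; use the release you depend on):

```toml
[dependencies]
opendal = { version = "0.45", features = ["services-hdfs"] }
```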
- `root`: Set the work dir for backend.
- `name_node`: Set the name node for backend.

Refer to [HdfsBuilder]'s public API docs for more information.
HDFS needs some environment variables set correctly:

- `JAVA_HOME`: the path to the Java home, which can be found via `java -XshowSettings:properties -version`.
- `HADOOP_HOME`: the path to the Hadoop home; opendal relies on this env to discover hadoop jars and set `CLASSPATH` automatically.

Most of the time, setting `JAVA_HOME` and `HADOOP_HOME` is enough. But there are some edge cases:
- If you meet an error like:

  ```text
  error while loading shared libraries: libjvm.so: cannot open shared object file: No such file or directory
  ```

  Java's libs are not included in the library search path; please set `LD_LIBRARY_PATH`:

  ```shell
  export LD_LIBRARY_PATH=${JAVA_HOME}/lib/server:${LD_LIBRARY_PATH}
  ```

  The path of `libjvm.so` could be different on your system; please keep an eye on it.
- If you meet an error like:

  ```text
  (unable to get stack trace for java.lang.NoClassDefFoundError exception: ExceptionUtils::getStackTrace error.)
  ```

  `CLASSPATH` is not set correctly, or your hadoop installation is incorrect. To set `CLASSPATH`:

  ```shell
  export CLASSPATH=$(find $HADOOP_HOME -iname "*.jar" | xargs echo | tr ' ' ':'):${CLASSPATH}
  ```
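Putting the steps above together, a minimal environment setup might look like the sketch below. The installation paths are assumptions for illustration; adjust them for your system:

```shell
# Assumed installation paths; adjust for your system.
export JAVA_HOME=/usr/lib/jvm/java-11-openjdk
export HADOOP_HOME=/opt/hadoop

# Let the dynamic linker find libjvm.so.
export LD_LIBRARY_PATH=${JAVA_HOME}/lib/server:${LD_LIBRARY_PATH}

# Collect all hadoop jars onto the CLASSPATH.
export CLASSPATH=$(find $HADOOP_HOME -iname "*.jar" | xargs echo | tr ' ' ':'):${CLASSPATH}
```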
```rust
use anyhow::Result;
use opendal::services::Hdfs;
use opendal::Operator;

#[tokio::main]
async fn main() -> Result<()> {
    // Create fs backend builder.
    let mut builder = Hdfs::default();
    // Set the name node for hdfs.
    builder.name_node("hdfs://127.0.0.1:9000");
    // Set the root for hdfs, all operations will happen under this root.
    //
    // NOTE: the root must be an absolute path.
    builder.root("/tmp");

    // `Accessor` provides the low level APIs, we will use `Operator` normally.
    let op: Operator = Operator::new(builder)?.finish();

    Ok(())
}
```