How to configure Flink to use Hdfs for backend state and checkpoints

I have a Flink v1.2 setup with 3 JobManagers and 2 TaskManagers. I want to use HDFS for the backend state and checkpoints, and for the ZooKeeper storageDir:

state.backend: filesystem

state.backend.fs.checkpointdir: hdfs:///[ip:port]/flink-checkpoints
state.checkpoints.dir: hdfs:///[ip:port]/external-checkpoints
high-availability: zookeeper
high-availability.zookeeper.storageDir: hdfs:///[ip:port]/recovery

In the JobManager log I have:

2017-03-22 17:41:43,559 INFO  org.apache.flink.configuration.GlobalConfiguration            - Loading configuration property: high-availability.zookeeper.client.acl, open
2017-03-22 17:41:43,680 ERROR org.apache.flink.runtime.jobmanager.JobManager                - Error while starting up JobManager
java.io.IOException: The given HDFS file URI (hdfs:///ip:port/recovery/blob) did not describe the HDFS NameNode. The attempt to use a default HDFS configuration, as specified in the 'fs.hdfs.hdfsdefault' or 'fs.hdfs.hdfssite' config parameter failed due to the following problem: Either no default file system was registered, or the provided configuration contains no valid authority component (fs.default.name or fs.defaultFS) describing the (hdfs namenode) host and port.
        at org.apache.flink.runtime.fs.hdfs.HadoopFileSystem.initialize(HadoopFileSystem.java:298)
        at org.apache.flink.core.fs.FileSystem.getUnguardedFileSystem(FileSystem.java:288)
        at org.apache.flink.core.fs.FileSystem.get(FileSystem.java:310)
        at org.apache.flink.runtime.blob.FileSystemBlobStore.<init>(FileSystemBlobStore.java:67)
        at org.apache.flink.runtime.blob.BlobServer.<init>(BlobServer.java:114)
        at org.apache.flink.runtime.jobmanager.JobManager$.createJobManagerComponents(JobManager.scala:2488)
        at org.apache.flink.runtime.jobmanager.JobManager$.startJobManagerActors(JobManager.scala:2643)
        at org.apache.flink.runtime.jobmanager.JobManager$.startJobManagerActors(JobManager.scala:2595)
        at org.apache.flink.runtime.jobmanager.JobManager$.startActorSystemAndJobManagerActors(JobManager.scala:2242)
        at org.apache.flink.runtime.jobmanager.JobManager$.liftedTree3$1(JobManager.scala:2020)
        at org.apache.flink.runtime.jobmanager.JobManager$.runJobManager(JobManager.scala:2019)
        at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$2.apply$mcV$sp(JobManager.scala:2098)
        at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$2.apply(JobManager.scala:2076)
        at org.apache.flink.runtime.jobmanager.JobManager$$anonfun$2.apply(JobManager.scala:2076)
        at scala.util.Try$.apply(Try.scala:192)
        at org.apache.flink.runtime.jobmanager.JobManager$.retryOnBindException(JobManager.scala:2131)
        at org.apache.flink.runtime.jobmanager.JobManager$.runJobManager(JobManager.scala:2076)
        at org.apache.flink.runtime.jobmanager.JobManager$$anon$9.call(JobManager.scala:1971)
        at org.apache.flink.runtime.jobmanager.JobManager$$anon$9.call(JobManager.scala:1969)
        at org.apache.flink.runtime.security.HadoopSecurityContext$1.run(HadoopSecurityContext.java:43)
        at java.security.AccessController.doPrivileged(Native Method)
        at javax.security.auth.Subject.doAs(Subject.java:422)
        at org.apache.hadoop.security.UserGroupInformation.doAs(UserGroupInformation.java:1548)
        at org.apache.flink.runtime.security.HadoopSecurityContext.runSecured(HadoopSecurityContext.java:40)
        at org.apache.flink.runtime.jobmanager.JobManager$.main(JobManager.scala:1969)
        at org.apache.flink.runtime.jobmanager.JobManager.main(JobManager.scala)
2017-03-22 17:41:43,694 WARN  org.apache.hadoop.security.UserGroupInformation               - PriviledgedActionException as:ubuntu (auth:SIMPLE) cause:java.io.IOException: The given HDFS file URI (hdfs:///ip:port/recovery/blob) did not describe the HDFS NameNode. The attempt to use a default HDFS configuration, as specified in the 'fs.hdfs.hdfsdefault' or 'fs.hdfs.hdfssite' config parameter failed due to the following problem: Either no default file system was registered, or the provided configuration contains no valid authority component (fs.default.name or fs.defaultFS) describing the (hdfs namenode) host and port.
2017-03-22 17:41:43,694 ERROR org.apache.flink.runtime.jobmanager.JobManager                - Failed to run JobManager.
java.io.IOException: The given HDFS file URI (hdfs:///ip:port/recovery/blob) did not describe the HDFS NameNode. The attempt to use a default HDFS configuration, as specified in the 'fs.hdfs.hdfsdefault' or 'fs.hdfs.hdfssite' config parameter failed due to the following problem: Either no default file system was registered, or the provided configuration contains no valid authority component (fs.default.name or fs.defaultFS) describing the (hdfs namenode) host and port.
        [stack trace identical to the one above]
2017-03-22 17:41:43,697 INFO  akka.remote.RemoteActorRefProvider$RemotingTerminator         - Shutting down remote daemon.
2017-03-22 17:41:43,704 INFO  akka.remote.RemoteActorRefProvider$RemotingTerminator         - Remote daemon shut down; proceeding with flushing remote transports.

Hadoop is installed as a single-node cluster on the VM, which I set up in the settings. Why is Flink asking me to configure extra parameters? (They are not mentioned in the official documentation, by the way.)

Asked Oct 18 '22 by razvan

1 Answer

I think you have to use the URI pattern hdfs://ip:port/flink-checkpoints (two slashes, not three) to access HDFS with a host:port specification. With hdfs:///[ip:port]/..., the authority component is empty and the ip:port part is parsed as part of the path, which is why Flink cannot find the NameNode.

If fs.defaultFS is set in your Hadoop configuration and Flink can find it, you don't need to put the NameNode details in the URI at all.
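As a sketch of the corrected flink-conf.yaml, assuming the NameNode is reachable at namenode-host:8020 (both the hostname and the port are placeholders; substitute the values from your own cluster):

```yaml
# flink-conf.yaml -- note the double slash: host:port is the URI
# authority (the NameNode address), not the first path segment.
state.backend: filesystem
state.backend.fs.checkpointdir: hdfs://namenode-host:8020/flink-checkpoints
state.checkpoints.dir: hdfs://namenode-host:8020/external-checkpoints
high-availability: zookeeper
high-availability.zookeeper.storageDir: hdfs://namenode-host:8020/recovery
```

Alternatively, if fs.defaultFS is set in the cluster's core-site.xml and Flink picks up the Hadoop configuration (e.g. via HADOOP_CONF_DIR), an authority-less URI such as hdfs:///flink-checkpoints should also resolve against the default filesystem.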

Answered Oct 21 '22 by Robert Metzger