Read timed out: HttpFS on HDFS

I have set up access to HDFS using HttpFS in Kubernetes, since I need access to the HDFS data nodes and not only to the metadata on the name node. I can connect to HDFS through the NodePort service with telnet; however, when I try to get some information from HDFS (reading files, checking whether files exist), I get this error:

[info]   java.net.SocketTimeoutException: Read timed out
[info]   at java.net.SocketInputStream.socketRead0(Native Method)
[info]   at java.net.SocketInputStream.socketRead(SocketInputStream.java:116)
[info]   at java.net.SocketInputStream.read(SocketInputStream.java:171)
[info]   at java.net.SocketInputStream.read(SocketInputStream.java:141)
[info]   at java.io.BufferedInputStream.fill(BufferedInputStream.java:246)
[info]   at java.io.BufferedInputStream.read1(BufferedInputStream.java:286)
[info]   at java.io.BufferedInputStream.read(BufferedInputStream.java:345)
[info]   at sun.net.www.http.HttpClient.parseHTTPHeader(HttpClient.java:735)
[info]   at sun.net.www.http.HttpClient.parseHTTP(HttpClient.java:678)
[info]   at sun.net.www.protocol.http.HttpURLConnection.getInputStream0(HttpURLConnection.java:1587)

What could be the reason for this error? Here is the source code that sets up the connection to the HDFS file system and checks file existence:

import org.apache.hadoop.conf.Configuration
import org.apache.hadoop.fs.{FileSystem, Path}

val url = "webhdfs://192.168.99.100:31400"   // HttpFS NodePort, webhdfs:// scheme
val fs = FileSystem.get(new java.net.URI(url), new Configuration())
val check = fs.exists(new Path(dirPath))

The directory referenced by the dirPath argument exists on HDFS.
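To narrow down where the timeout happens, the HttpFS REST endpoint can be queried directly, bypassing the Hadoop client entirely; HttpFS exposes the standard WebHDFS REST API under /webhdfs/v1. A minimal sketch, assuming pseudo authentication and a hypothetical user root (the path /path/to/dir is a placeholder for dirPath):

# Does HttpFS answer HTTP at all? (telnet only proves the TCP port accepts connections)
curl -i "http://192.168.99.100:31400/webhdfs/v1/?op=LISTSTATUS&user.name=root"

# Check a specific path, analogous to fs.exists(new Path(dirPath))
curl -i "http://192.168.99.100:31400/webhdfs/v1/path/to/dir?op=GETFILESTATUS&user.name=root"

If these hang the same way, the problem is in the HttpFS service or the Kubernetes networking rather than in the Scala client.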

The HDFS setup on Kubernetes looks like this:

apiVersion: v1
kind: Service
metadata:
  name: namenode
spec:
  type: NodePort
  ports:
    - name: client
      port: 8020
    - name: hdfs
      port: 50070
      nodePort: 30070
    - name: httpfs
      port: 14000
      nodePort: 31400
  selector:
    hdfs: namenode
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: namenode
spec:
  replicas: 1
  template:
    metadata:
      labels:
        hdfs: namenode
    spec:
      containers:
        - env:
            - name: CLUSTER_NAME
              value: test
          image: bde2020/hadoop-namenode:2.0.0-hadoop2.7.4-java8
          name: namenode
          args:
            - "/run.sh &"
            - "/opt/hadoop-2.7.4/sbin/httpfs.sh start"
          envFrom:
            - configMapRef:
                name: hive-env
          ports:
            - containerPort: 50070
            - containerPort: 8020
            - containerPort: 14000
          volumeMounts:
            - mountPath: /hadoop/dfs/name
              name: namenode
      volumes:
        - name: namenode
          emptyDir: {}
---
apiVersion: v1
kind: Service
metadata:
  name: datanode
spec:
  ports:
    - name: hdfs
      port: 50075
      targetPort: 50075
  selector:
    hdfs: datanode
---
apiVersion: v1
kind: ReplicationController
metadata:
  name: datanode
spec:
  replicas: 1
  template:
    metadata:
      labels:
        hdfs: datanode
    spec:
      containers:
        - env:
            - name: SERVICE_PRECONDITION
              value: namenode:50070
          image: bde2020/hadoop-datanode:2.0.0-hadoop2.7.4-java8
          envFrom:
            - configMapRef:
                name: hive-env
          name: datanode
          ports:
            - containerPort: 50075
          volumeMounts:
            - mountPath: /hadoop/dfs/data
              name: datanode
      volumes:
        - name: datanode
          emptyDir: {}
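A quick way to verify the deployment itself, assuming the manifests above are applied unchanged (the pod name placeholder is hypothetical):

kubectl get pods -l hdfs=namenode                # the namenode pod should be Running
kubectl logs -l hdfs=namenode --tail=50          # look for HttpFS startup output
kubectl port-forward <namenode-pod> 14000:14000  # reach HttpFS without the NodePort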

UPD: Ping returns the following results (192.168.99.100 is the minikube IP, 31400 the service node port):

ping 192.168.99.100  -M do -s 28
PING 192.168.99.100 (192.168.99.100) 28(56) bytes of data.
36 bytes from 192.168.99.100: icmp_seq=1 ttl=64 time=0.845 ms
36 bytes from 192.168.99.100: icmp_seq=2 ttl=64 time=0.612 ms
36 bytes from 192.168.99.100: icmp_seq=3 ttl=64 time=0.347 ms
36 bytes from 192.168.99.100: icmp_seq=4 ttl=64 time=0.287 ms
36 bytes from 192.168.99.100: icmp_seq=5 ttl=64 time=0.547 ms
36 bytes from 192.168.99.100: icmp_seq=6 ttl=64 time=0.357 ms
36 bytes from 192.168.99.100: icmp_seq=7 ttl=64 time=0.544 ms
36 bytes from 192.168.99.100: icmp_seq=8 ttl=64 time=0.702 ms
36 bytes from 192.168.99.100: icmp_seq=9 ttl=64 time=0.307 ms
36 bytes from 192.168.99.100: icmp_seq=10 ttl=64 time=0.346 ms
36 bytes from 192.168.99.100: icmp_seq=11 ttl=64 time=0.294 ms
36 bytes from 192.168.99.100: icmp_seq=12 ttl=64 time=0.319 ms
36 bytes from 192.168.99.100: icmp_seq=13 ttl=64 time=0.521 ms
^C
--- 192.168.99.100 ping statistics ---
13 packets transmitted, 13 received, 0% packet loss, time 12270ms
rtt min/avg/max/mdev = 0.287/0.463/0.845/0.173 ms

And with the node port appended to the command:

ping 192.168.99.100 31400 -M do -s 28
PING 31400 (0.0.122.168) 28(96) bytes of data.
^C
--- 31400 ping statistics ---
27 packets transmitted, 0 received, 100% packet loss, time 26603ms
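Note that ping cannot probe a TCP port: in the second run above, 31400 is parsed as another destination address (hence "PING 31400 (0.0.122.168)"), not as a port. A port-level reachability check needs something like netcat, assuming it is available:

nc -vz 192.168.99.100 31400   # succeeds only if the NodePort accepts TCP connections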
asked Nov 06 '22 by Cassie

1 Answer

My colleague found out that the problem was with Docker inside minikube. Running this before setting up HDFS on Kubernetes solved the problem:

minikube ssh "sudo ip link set docker0 promisc on"
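Two caveats, as far as I can tell: promiscuous mode on docker0 is what lets NodePort traffic that hairpins through the host reach the pod network inside the minikube VM, and the setting does not survive a VM restart, so it has to be re-applied after every minikube start:

# promisc on docker0 is not persistent; re-apply it after each restart
minikube start
minikube ssh "sudo ip link set docker0 promisc on"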
answered Nov 15 '22 by Cassie