 

PrematureCloseException: Connection prematurely closed

I'm using WebFlux/Reactive and WebClient, running on Tomcat with Spring Boot.

Everything works fine, functionally. I've read a lot about this. The problem seems to be that whenever you use WebClient, you have to return or consume the response; otherwise it closes the connection with the body unconsumed, and you see a lot of log messages saying the connection closed prematurely. If a 404 status were an error in my scenario, I could just use onStatus and throw an exception. But in my scenario, when the upstream service returns a 404 I have to manually return an empty Mono, so I don't use the response body from the WebClient request at all; I only use the ClientResponse from .exchange() to check the status and handle it. My immediate problem is the log messages, because they are just garbage you don't want cluttering your logs. I've also read somewhere that when this happens the connection can't be re-used, which sounds really bad, but I don't know for sure. I only get this message on a 404; when the response is 200, the object is returned and no log messages are printed.

I tried to use clientResponse.bodyToMono(Void.class), but it doesn't work either; the log messages keep appearing.

 @Bean
  public WebClient webClient(
      @Value("${http.client.connection-timeout-millis}") final int connectionTimeoutMillis,
      @Value("${http.client.socket-timeout-millis}") final int socketTimeoutMillis,
      @Value("${http.client.wire-tap-enabled}") final boolean wireTapEnabled,
      final ObjectMapper objectMapper) {

    Consumer<Connection> doOnConnectedConsumer = connection ->
        connection
            .addHandler(new ReadTimeoutHandler(socketTimeoutMillis, MILLISECONDS))
            .addHandler(new WriteTimeoutHandler(connectionTimeoutMillis, MILLISECONDS));

    TcpClient tcpClient = TcpClient.newConnection()
        .wiretap(wireTapEnabled)
        .option(CONNECT_TIMEOUT_MILLIS, connectionTimeoutMillis)
        .doOnConnected(doOnConnectedConsumer);

    return WebClient.builder()
        .clientConnector(new ReactorClientHttpConnector(HttpClient.from(tcpClient).compress(true)))
        .exchangeStrategies(customExchangeStrategies(objectMapper))
        .build();
  } 
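For reference, a sketch of the imports this bean presumably relies on (an assumption, since they are not shown; these match the Reactor Netty 0.8.x API used above):

    import static io.netty.channel.ChannelOption.CONNECT_TIMEOUT_MILLIS;
    import static java.util.concurrent.TimeUnit.MILLISECONDS;

    import com.fasterxml.jackson.databind.ObjectMapper;
    import io.netty.handler.timeout.ReadTimeoutHandler;
    import io.netty.handler.timeout.WriteTimeoutHandler;
    import java.util.function.Consumer;
    import org.springframework.http.client.reactive.ReactorClientHttpConnector;
    import org.springframework.web.reactive.function.client.WebClient;
    import reactor.netty.Connection;
    import reactor.netty.http.client.HttpClient;
    import reactor.netty.tcp.TcpClient;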

// ..........

    MultiValueMap<String, String> params = getParams(t1, t2);

    return webClient.get()
        .uri(HttpUtils.buildUrl(serviceUrl, params, name))
        .exchange()
        .flatMap(this::handleClientResponse)
        .onErrorMap(Exception.class, ex -> handleUnexpectedEx(ex, name, params));
  }
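As an aside: Spring Framework 5.3+ deprecates exchange() in favor of exchangeToMono(), which automatically releases any body the handler doesn't consume. A minimal sketch of the same call under that newer API (hedged, since this question targets an older version):

    // exchangeToMono() releases an unconsumed body for you, so a 404 can
    // simply map to an empty Mono without leaking the connection.
    return webClient.get()
        .uri(HttpUtils.buildUrl(serviceUrl, params, name))
        .exchangeToMono(response -> {
            if (response.statusCode() == HttpStatus.NOT_FOUND) {
                return Mono.empty();
            }
            return response.bodyToMono(JsonNode.class);
        });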

Log entries:

2019-07-08 11:56:51.972  WARN [-,,,] 1504 --- [ctor-http-nio-3] reactor.netty.channel.FluxReceive        : [id: 0x66c8568c, L:/127.0.0.1:62319 ! R:localhost/127.0.0.1:8990] An exception has been observed post termination

reactor.netty.http.client.PrematureCloseException: Connection prematurely closed DURING response

2019-07-08 11:56:52.013 DEBUG [-,,,] 1504 --- [ctor-http-nio-2] reactor.netty.ReactorNetty               : [id: 0xf50bdf8d, L:/127.0.0.1:62324 ! R:localhost/127.0.0.1:8990] Non Removed handler: ReadTimeoutHandler, context: ChannelHandlerContext(ReadTimeoutHandler, [id: 0xf50bdf8d, L:/127.0.0.1:62324 ! R:localhost/127.0.0.1:8990]), pipeline: DefaultChannelPipeline{(reactor.left.httpCodec = io.netty.handler.codec.http.HttpClientCodec), (reactor.left.decompressor = io.netty.handler.codec.http.HttpContentDecompressor), (WriteTimeoutHandler = io.netty.handler.timeout.WriteTimeoutHandler), (ReadTimeoutHandler = io.netty.handler.timeout.ReadTimeoutHandler), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
2019-07-08 11:56:52.014 DEBUG [-,,,] 1504 --- [ctor-http-nio-2] reactor.netty.ReactorNetty               : [id: 0xf50bdf8d, L:/127.0.0.1:62324 ! R:localhost/127.0.0.1:8990] Non Removed handler: WriteTimeoutHandler, context: ChannelHandlerContext(WriteTimeoutHandler, [id: 0xf50bdf8d, L:/127.0.0.1:62324 ! R:localhost/127.0.0.1:8990]), pipeline: DefaultChannelPipeline{(reactor.left.httpCodec = io.netty.handler.codec.http.HttpClientCodec), (reactor.left.decompressor = io.netty.handler.codec.http.HttpContentDecompressor), (WriteTimeoutHandler = io.netty.handler.timeout.WriteTimeoutHandler), (ReadTimeoutHandler = io.netty.handler.timeout.ReadTimeoutHandler), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
2019-07-08 11:56:52.014 DEBUG [-,,,] 1504 --- [ctor-http-nio-2] r.netty.resources.NewConnectionProvider  : [id: 0xf50bdf8d, L:/127.0.0.1:62324 ! R:localhost/127.0.0.1:8990] onStateChange([response_incomplete], GET{uri=/service/TWFDHF?T1=1.0.0&T2=1, connection=SimpleConnection{channel=[id: 0xf50bdf8d, L:/127.0.0.1:62324 ! R:localhost/127.0.0.1:8990]}})
2019-07-08 11:56:52.014  WARN [-,,,] 1504 --- [ctor-http-nio-2] reactor.netty.channel.FluxReceive        : [id: 0xf50bdf8d, L:/127.0.0.1:62324 ! R:localhost/127.0.0.1:8990] An exception has been observed post termination

Log entries with wiretap enabled:

2019-07-10 14:51:19.295 DEBUG [-,,,] 2940 --- [ctor-http-nio-2] reactor.netty.tcp.TcpClient              : [id: 0x677da0d4, L:/127.0.0.1:62385 ! R:localhost/127.0.0.1:8990] UNREGISTERED
2019-07-10 14:51:19.541 DEBUG [-,,,] 2940 --- [ctor-http-nio-4] reactor.netty.tcp.TcpClient              : [id: 0xa7f41d1e, L:/127.0.0.1:62384 - R:localhost/127.0.0.1:8990] CLOSE
2019-07-10 14:51:19.542 DEBUG [-,,,] 2940 --- [ctor-http-nio-4] r.n.resources.PooledConnectionProvider   : [id: 0xa7f41d1e, L:/127.0.0.1:62384 ! R:localhost/127.0.0.1:8990] Channel cleaned, now 0 active connections and 9 inactive connections
2019-07-10 14:51:19.542 DEBUG [-,,,] 2940 --- [ctor-http-nio-4] r.n.resources.PooledConnectionProvider   : [id: 0xa7f41d1e, L:/127.0.0.1:62384 ! R:localhost/127.0.0.1:8990] Channel closed, now 0 active connections and 8 inactive connections
2019-07-10 14:51:19.542 DEBUG [-,,,] 2940 --- [ctor-http-nio-4] reactor.netty.ReactorNetty               : [id: 0xa7f41d1e, L:/127.0.0.1:62384 ! R:localhost/127.0.0.1:8990] Non Removed handler: ReadTimeoutHandler, context: ChannelHandlerContext(ReadTimeoutHandler, [id: 0xa7f41d1e, L:/127.0.0.1:62384 ! R:localhost/127.0.0.1:8990]), pipeline: DefaultChannelPipeline{(reactor.left.loggingHandler = io.netty.handler.logging.LoggingHandler), (reactor.left.httpCodec = io.netty.handler.codec.http.HttpClientCodec), (WriteTimeoutHandler = io.netty.handler.timeout.WriteTimeoutHandler), (ReadTimeoutHandler = io.netty.handler.timeout.ReadTimeoutHandler), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
2019-07-10 14:51:19.543 DEBUG [-,,,] 2940 --- [ctor-http-nio-4] reactor.netty.ReactorNetty               : [id: 0xa7f41d1e, L:/127.0.0.1:62384 ! R:localhost/127.0.0.1:8990] Non Removed handler: WriteTimeoutHandler, context: ChannelHandlerContext(WriteTimeoutHandler, [id: 0xa7f41d1e, L:/127.0.0.1:62384 ! R:localhost/127.0.0.1:8990]), pipeline: DefaultChannelPipeline{(reactor.left.loggingHandler = io.netty.handler.logging.LoggingHandler), (reactor.left.httpCodec = io.netty.handler.codec.http.HttpClientCodec), (WriteTimeoutHandler = io.netty.handler.timeout.WriteTimeoutHandler), (ReadTimeoutHandler = io.netty.handler.timeout.ReadTimeoutHandler), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
2019-07-10 14:51:19.543 DEBUG [-,,,] 2940 --- [ctor-http-nio-4] reactor.netty.tcp.TcpClient              : [id: 0xa7f41d1e, L:/127.0.0.1:62384 ! R:localhost/127.0.0.1:8990] INACTIVE
2019-07-10 14:51:19.544 DEBUG [-,,,] 2940 --- [ctor-http-nio-4] r.n.resources.PooledConnectionProvider   : [id: 0xa7f41d1e, L:/127.0.0.1:62384 ! R:localhost/127.0.0.1:8990] onStateChange(GET{uri=/dataviews/TWN_EMPLBENINFO_FIXED?version=1.0.0&sequence=1, connection=PooledConnection{channel=[id: 0xa7f41d1e, L:/127.0.0.1:62384 ! R:localhost/127.0.0.1:8990]}}, [response_incomplete])
2019-07-10 14:51:19.544  WARN [-,,,] 2940 --- [ctor-http-nio-4] reactor.netty.channel.FluxReceive        : [id: 0xa7f41d1e, L:/127.0.0.1:62384 ! R:localhost/127.0.0.1:8990] An exception has been observed post termination

And my handler:

protected Mono<Optional<JsonNode>> handleClientResponseError(final ClientResponse clientResponse) {
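    // this Mono is created but never returned or subscribed, so the body is never drained: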
    clientResponse.bodyToMono(Void.class);
    Mono<ErrorResponse> errorResponse = clientResponse.body(BodyExtractors.toMono(ErrorResponse.class));

    return errorResponse.flatMap(err -> {
      log.debug("Received HttpStatusCodeException when calling {} Registry: {}", getGatewayName(),
          err.getErrorEnvelope().getMessage());

      return Mono.error(new UpStreamServiceHttpException(err, clientResponse.rawStatusCode()));
    }).switchIfEmpty(Mono.error(() -> {
      log.debug("Received HttpStatusCodeException when calling {} Registry: {}", getGatewayName());

      return new UpStreamServiceHttpException("Bad Gateway", clientResponse.rawStatusCode());
    }));
  }

I already tried clientResponse.bodyToMono(Void.class); to complete the response, but it still doesn't work. I need a way to finish the response from WebClient, so that the pooled connection can be re-used and those log messages go away.

I've created an issue on GitHub, but it was closed, so I'm creating this question here: https://github.com/spring-projects/spring-framework/issues/23249

asked Jul 09 '19 by Douglas Santos


2 Answers

So I finally found a way to finish the response, and I'll share it here for anyone who comes across this problem.

Before, I had this:

 private Mono<Optional<JsonNode>> handleHttpErrorStatus(final ClientResponse clientResponse) {
    if (clientResponse.statusCode().equals(HttpStatus.NOT_FOUND)) {
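      // never returned or subscribed, so the 404 body is never consumed: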
      clientResponse.bodyToMono(Void.class);
      return Mono.just(Optional.empty());
    } else {
      return handleClientResponseError(clientResponse);
    }
  }

But it didn't work, apparently because I wasn't returning (and therefore never subscribing to) that Mono. After some digging, I tried this:

private Mono<Optional<JsonNode>> handleHttpErrorStatus(final ClientResponse clientResponse) {
    if (clientResponse.statusCode().equals(HttpStatus.NOT_FOUND)) {
      return clientResponse.bodyToMono(Void.class).thenReturn(Optional.empty());
    } else {
      return handleClientResponseError(clientResponse);
    }
  }

And it works fine: because the Mono is now returned, it actually gets subscribed, the 404 body is drained, and the connection can be handed back cleanly. I checked the wiretap logs and everything looks fine now.

2019-07-10 16:44:39.096 DEBUG [-,,,] 16260 --- [ctor-http-nio-2] r.n.http.client.HttpClientOperations     : [id: 0x461d8170, L:/127.0.0.1:64298 - R:localhost/127.0.0.1:8990] Received response (auto-read:false) : [Content-Type=application/json;charset=UTF-8, Transfer-Encoding=chunked, Date=Wed, 10 Jul 2019 15:44:38 GMT]
2019-07-10 16:44:39.096 DEBUG [-,,,] 16260 --- [ctor-http-nio-2] r.n.resources.PooledConnectionProvider   : [id: 0x461d8170, L:/127.0.0.1:64298 - R:localhost/127.0.0.1:8990] onStateChange(GET{uri=/dataviews/TWN_EMPLPIM_AVRO?version=1.0.0&sequence=1, connection=PooledConnection{channel=[id: 0x461d8170, L:/127.0.0.1:64298 - R:localhost/127.0.0.1:8990]}}, [response_received])
2019-07-10 16:44:39.098 DEBUG [-,bdd6433efa276f10,bdd6433efa276f10,false] 16260 --- [ctor-http-nio-2] reactor.netty.channel.FluxReceive        : [id: 0x461d8170, L:/127.0.0.1:64298 - R:localhost/127.0.0.1:8990] Subscribing inbound receiver [pending: 0, cancelled:false, inboundDone: false]
2019-07-10 16:44:39.099 DEBUG [-,bdd6433efa276f10,bdd6433efa276f10,false] 16260 --- [ctor-http-nio-2] reactor.netty.tcp.TcpClient              : [id: 0x461d8170, L:/127.0.0.1:64298 - R:localhost/127.0.0.1:8990] CLOSE
2019-07-10 16:44:39.099 DEBUG [-,bdd6433efa276f10,bdd6433efa276f10,false] 16260 --- [ctor-http-nio-2] r.n.resources.PooledConnectionProvider   : [id: 0x461d8170, L:/127.0.0.1:64298 ! R:localhost/127.0.0.1:8990] Channel cleaned, now 1 active connections and 1 inactive connections
2019-07-10 16:44:39.099 DEBUG [-,bdd6433efa276f10,bdd6433efa276f10,false] 16260 --- [ctor-http-nio-2] r.n.resources.PooledConnectionProvider   : [id: 0x461d8170, L:/127.0.0.1:64298 ! R:localhost/127.0.0.1:8990] Channel closed, now 1 active connections and 0 inactive connections
2019-07-10 16:44:39.099 DEBUG [-,bdd6433efa276f10,bdd6433efa276f10,false] 16260 --- [ctor-http-nio-2] reactor.netty.ReactorNetty               : [id: 0x461d8170, L:/127.0.0.1:64298 ! R:localhost/127.0.0.1:8990] Non Removed handler: ReadTimeoutHandler, context: ChannelHandlerContext(ReadTimeoutHandler, [id: 0x461d8170, L:/127.0.0.1:64298 ! R:localhost/127.0.0.1:8990]), pipeline: DefaultChannelPipeline{(reactor.left.loggingHandler = io.netty.handler.logging.LoggingHandler), (reactor.left.httpCodec = io.netty.handler.codec.http.HttpClientCodec), (WriteTimeoutHandler = io.netty.handler.timeout.WriteTimeoutHandler), (ReadTimeoutHandler = io.netty.handler.timeout.ReadTimeoutHandler), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
2019-07-10 16:44:39.099 DEBUG [-,bdd6433efa276f10,bdd6433efa276f10,false] 16260 --- [ctor-http-nio-2] reactor.netty.ReactorNetty               : [id: 0x461d8170, L:/127.0.0.1:64298 ! R:localhost/127.0.0.1:8990] Non Removed handler: WriteTimeoutHandler, context: ChannelHandlerContext(WriteTimeoutHandler, [id: 0x461d8170, L:/127.0.0.1:64298 ! R:localhost/127.0.0.1:8990]), pipeline: DefaultChannelPipeline{(reactor.left.loggingHandler = io.netty.handler.logging.LoggingHandler), (reactor.left.httpCodec = io.netty.handler.codec.http.HttpClientCodec), (WriteTimeoutHandler = io.netty.handler.timeout.WriteTimeoutHandler), (ReadTimeoutHandler = io.netty.handler.timeout.ReadTimeoutHandler), (reactor.right.reactiveBridge = reactor.netty.channel.ChannelOperationsHandler)}
2019-07-10 16:44:39.100 DEBUG [-,,,] 16260 --- [ctor-http-nio-2] reactor.netty.tcp.TcpClient              : [id: 0x461d8170, L:/127.0.0.1:64298 ! R:localhost/127.0.0.1:8990] READ COMPLETE
2019-07-10 16:44:39.102 DEBUG [-,,,] 16260 --- [ctor-http-nio-2] reactor.netty.tcp.TcpClient              : [id: 0x461d8170, L:/127.0.0.1:64298 ! R:localhost/127.0.0.1:8990] INACTIVE
2019-07-10 16:44:39.102 DEBUG [-,,,] 16260 --- [ctor-http-nio-2] reactor.netty.tcp.TcpClient              : [id: 0x461d8170, L:/127.0.0.1:64298 ! R:localhost/127.0.0.1:8990] UNREGISTERED
2019-07-10 16:44:39.122 DEBUG [-,,,] 16260 --- [ctor-http-nio-2] reactor.netty.tcp.TcpClient              : [id: 0xbe2cb147, L:/127.0.0.1:64295 - R:localhost/127.0.0.1:8990] READ: 398B
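As a side note: on Spring Framework 5.2+ the same draining can reportedly be done with ClientResponse.releaseBody() (or toBodilessEntity()), which consumes and discards the body explicitly. A minimal sketch, assuming that newer API:

    private Mono<Optional<JsonNode>> handleHttpErrorStatus(final ClientResponse clientResponse) {
      if (clientResponse.statusCode().equals(HttpStatus.NOT_FOUND)) {
        // releaseBody() consumes and discards the body, so the connection
        // can be returned to the pool cleanly
        return clientResponse.releaseBody().thenReturn(Optional.empty());
      }
      return handleClientResponseError(clientResponse);
    }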
answered Nov 02 '22 by Douglas Santos


I had the same issue and kept battling with it for five days, wondering what exactly I had done wrong after deploying a new feature. I finally had my Eureka moment.

So in my Dockerfile I had exposed the default port 8080.

This meant the Tomcat server in my application image would always be available on that port. My container also had to be running on port 8080 (inside my .yml file) before I applied the port mapping directive that maps the external port 9003 on my Linux host to 8080 in the container.

What I missed was that, during deployment of the services using the docker-compose file, I forgot to change server.port in the application.yml to 8080 so that the port mapping could happen. I had left it as 9003 (because I was using 9003 on my local machine to test the app), which meant that anytime I started the containers using docker-compose, my service port was mapped as

0.0.0.0:9003 -> 9003/tcp instead of 0.0.0.0:9003 -> 8080/tcp.

This was why the api-gateway kept throwing the

PrematureCloseException: Connection prematurely closed

because there was actually no internal port 9003 in the container instance of my microservice for port 9003 on the Linux host to map to.

I just had to change the 9003 in my application.yml to 8080 and that was all.
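For clarity, a reconstruction of the corrected setup (illustrative, since the original screenshots are not available; the service name is made up):

    # Dockerfile
    EXPOSE 8080

    # docker-compose.yml
    services:
      my-service:
        ports:
          - "9003:8080"   # host port 9003 -> container port 8080

    # application.yml (inside the container)
    server:
      port: 8080          # must match the container-side port above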

answered Nov 02 '22 by Louis