When uploading a CSV file and a JSON object to the following endpoint:
@PostMapping(value = "dataset/rows/query", consumes = MediaType.MULTIPART_FORM_DATA_VALUE)
Mono<List<Integer>> getRowsByQuery(@RequestPart("dataset") Mono<FilePart> file,
                                   @RequestPart("query") QueryDTO query) {
    return Mono.just(new ArrayList<>());
}
I get the following error:
2020-12-17 12:25:05.142 ERROR 195281 --- [or-http-epoll-3] a.w.r.e.AbstractErrorWebExceptionHandler : [d418565e-17] 500 Server Error for HTTP POST "/dataset/rows/query"
org.springframework.core.io.buffer.DataBufferLimitException: Part headers exceeded the memory usage limit of 8192 bytes
at org.springframework.http.codec.multipart.MultipartParser$HeadersState.onNext(MultipartParser.java:360) ~[spring-web-5.3.1.jar:5.3.1]
Suppressed: reactor.core.publisher.FluxOnAssembly$OnAssemblyException:
Error has been observed at the following site(s):
|_ checkpoint ⇢ org.springframework.boot.actuate.metrics.web.reactive.server.MetricsWebFilter [DefaultWebFilterChain]
|_ checkpoint ⇢ HTTP POST "/dataset/rows/query" [ExceptionHandlingWebHandler]
Stack trace:
at org.springframework.http.codec.multipart.MultipartParser$HeadersState.onNext(MultipartParser.java:360) ~[spring-web-5.3.1.jar:5.3.1]
at org.springframework.http.codec.multipart.MultipartParser.hookOnNext(MultipartParser.java:104) ~[spring-web-5.3.1.jar:5.3.1]
at org.springframework.http.codec.multipart.MultipartParser.hookOnNext(MultipartParser.java:46) ~[spring-web-5.3.1.jar:5.3.1]
at reactor.core.publisher.BaseSubscriber.onNext(BaseSubscriber.java:160) ~[reactor-core-3.4.0.jar:3.4.0]
at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:120) ~[reactor-core-3.4.0.jar:3.4.0]
at reactor.core.publisher.FluxPeek$PeekSubscriber.onNext(FluxPeek.java:199) ~[reactor-core-3.4.0.jar:3.4.0]
at reactor.core.publisher.FluxMap$MapSubscriber.onNext(FluxMap.java:120) ~[reactor-core-3.4.0.jar:3.4.0]
at reactor.netty.channel.FluxReceive.drainReceiver(FluxReceive.java:265) ~[reactor-netty-core-1.0.1.jar:1.0.1]
at reactor.netty.channel.FluxReceive.onInboundNext(FluxReceive.java:371) ~[reactor-netty-core-1.0.1.jar:1.0.1]
at reactor.netty.channel.ChannelOperations.onInboundNext(ChannelOperations.java:381) ~[reactor-netty-core-1.0.1.jar:1.0.1]
at reactor.netty.http.server.HttpServerOperations.onInboundNext(HttpServerOperations.java:535) ~[reactor-netty-http-1.0.1.jar:1.0.1]
at reactor.netty.channel.ChannelOperationsHandler.channelRead(ChannelOperationsHandler.java:94) ~[reactor-netty-core-1.0.1.jar:1.0.1]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-transport-4.1.54.Final.jar:4.1.54.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[netty-transport-4.1.54.Final.jar:4.1.54.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[netty-transport-4.1.54.Final.jar:4.1.54.Final]
at reactor.netty.http.server.HttpTrafficHandler.channelRead(HttpTrafficHandler.java:229) ~[reactor-netty-http-1.0.1.jar:1.0.1]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-transport-4.1.54.Final.jar:4.1.54.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[netty-transport-4.1.54.Final.jar:4.1.54.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[netty-transport-4.1.54.Final.jar:4.1.54.Final]
at io.netty.channel.CombinedChannelDuplexHandler$DelegatingChannelHandlerContext.fireChannelRead(CombinedChannelDuplexHandler.java:436) ~[netty-transport-4.1.54.Final.jar:4.1.54.Final]
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:324) ~[netty-codec-4.1.54.Final.jar:4.1.54.Final]
at io.netty.handler.codec.ByteToMessageDecoder.fireChannelRead(ByteToMessageDecoder.java:311) ~[netty-codec-4.1.54.Final.jar:4.1.54.Final]
at io.netty.handler.codec.ByteToMessageDecoder.callDecode(ByteToMessageDecoder.java:425) ~[netty-codec-4.1.54.Final.jar:4.1.54.Final]
at io.netty.handler.codec.ByteToMessageDecoder.channelRead(ByteToMessageDecoder.java:276) ~[netty-codec-4.1.54.Final.jar:4.1.54.Final]
at io.netty.channel.CombinedChannelDuplexHandler.channelRead(CombinedChannelDuplexHandler.java:251) ~[netty-transport-4.1.54.Final.jar:4.1.54.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-transport-4.1.54.Final.jar:4.1.54.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[netty-transport-4.1.54.Final.jar:4.1.54.Final]
at io.netty.channel.AbstractChannelHandlerContext.fireChannelRead(AbstractChannelHandlerContext.java:357) ~[netty-transport-4.1.54.Final.jar:4.1.54.Final]
at io.netty.channel.DefaultChannelPipeline$HeadContext.channelRead(DefaultChannelPipeline.java:1410) ~[netty-transport-4.1.54.Final.jar:4.1.54.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:379) ~[netty-transport-4.1.54.Final.jar:4.1.54.Final]
at io.netty.channel.AbstractChannelHandlerContext.invokeChannelRead(AbstractChannelHandlerContext.java:365) ~[netty-transport-4.1.54.Final.jar:4.1.54.Final]
at io.netty.channel.DefaultChannelPipeline.fireChannelRead(DefaultChannelPipeline.java:919) ~[netty-transport-4.1.54.Final.jar:4.1.54.Final]
at io.netty.channel.epoll.AbstractEpollStreamChannel$EpollStreamUnsafe.epollInReady(AbstractEpollStreamChannel.java:795) ~[netty-transport-native-epoll-4.1.54.Final-linux-x86_64.jar:4.1.54.Final]
at io.netty.channel.epoll.EpollEventLoop.processReady(EpollEventLoop.java:480) ~[netty-transport-native-epoll-4.1.54.Final-linux-x86_64.jar:4.1.54.Final]
at io.netty.channel.epoll.EpollEventLoop.run(EpollEventLoop.java:378) ~[netty-transport-native-epoll-4.1.54.Final-linux-x86_64.jar:4.1.54.Final]
at io.netty.util.concurrent.SingleThreadEventExecutor$4.run(SingleThreadEventExecutor.java:989) ~[netty-common-4.1.54.Final.jar:4.1.54.Final]
at io.netty.util.internal.ThreadExecutorMap$2.run(ThreadExecutorMap.java:74) ~[netty-common-4.1.54.Final.jar:4.1.54.Final]
at io.netty.util.concurrent.FastThreadLocalRunnable.run(FastThreadLocalRunnable.java:30) ~[netty-common-4.1.54.Final.jar:4.1.54.Final]
at java.base/java.lang.Thread.run(Thread.java:832) ~[na:na]
I tried to customize defaultCodecs().maxInMemorySize() with
@Component
public class ServerConfiguration implements WebFluxConfigurer {

    @Override
    public void configureHttpMessageCodecs(ServerCodecConfigurer configurer) {
        configurer.defaultCodecs().maxInMemorySize(16 * 1024 * 1024);
    }
}
and the following application.yml:
server:
  port: ${SERVER_PORT:8080}
  max-http-header-size: 900000000

spring:
  codec:
    max-in-memory-size: 900000000
but neither seems to have any effect.
Moreover, what is strange is that the error on the server side only occurs when the API is called from Angular, not from Postman. The request from Angular carries the following headers:
POST /dataset/rows/query HTTP/1.1
Host: localhost:4200
Connection: keep-alive
Content-Length: 496570
Pragma: no-cache
Cache-Control: no-cache
Accept: */*
DNT: 1
User-Agent: Mozilla/5.0 (X11; Linux x86_64) AppleWebKit/537.36 (KHTML, like Gecko) Chrome/83.0.4103.61 Safari/537.36
Content-Type: multipart/form-data; boundary=----WebKitFormBoundarySBh6gJvnTeDzB43Y
Origin: http://localhost:4200
Sec-Fetch-Site: same-origin
Sec-Fetch-Mode: cors
Sec-Fetch-Dest: empty
Referer: http://localhost:4200/app
Accept-Encoding: gzip, deflate, br
Accept-Language: it-IT,it;q=0.9,en-GB;q=0.8,en;q=0.7,ru-RU;q=0.6,ru;q=0.5,en-US;q=0.4
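To take the browser out of the equation, an equivalent multipart request can also be sent from a WebFlux WebClient. This is a minimal sketch; the base URL, the dataset.csv path, and the inline query JSON are illustrative assumptions:

import java.util.List;

import org.springframework.core.ParameterizedTypeReference;
import org.springframework.core.io.FileSystemResource;
import org.springframework.http.MediaType;
import org.springframework.http.client.MultipartBodyBuilder;
import org.springframework.web.reactive.function.BodyInserters;
import org.springframework.web.reactive.function.client.WebClient;

public class UploadClient {

    public static void main(String[] args) {
        // Build the two parts: the CSV file and the JSON query.
        MultipartBodyBuilder builder = new MultipartBodyBuilder();
        builder.part("dataset", new FileSystemResource("dataset.csv")); // illustrative path
        builder.part("query", "{\"predicates\":{}}")                    // illustrative QueryDTO payload
                .contentType(MediaType.APPLICATION_JSON);

        List<Integer> rows = WebClient.create("http://localhost:8080")
                .post()
                .uri("/dataset/rows/query")
                .contentType(MediaType.MULTIPART_FORM_DATA)
                .body(BodyInserters.fromMultipartData(builder.build()))
                .retrieve()
                .bodyToMono(new ParameterizedTypeReference<List<Integer>>() {})
                .block(); // blocking only because this is a standalone example

        System.out.println(rows);
    }
}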
Finally, this is the corresponding OpenAPI YAML for the endpoint:
openapi: 3.0.1
info:
  title: OpenAPI definition
  version: v0
servers:
- url: http://localhost:8080
  description: Generated server url
paths:
  /dataset/rows/query:
    post:
      tags:
      - dataset-controller
      operationId: getRowsByQuery
      requestBody:
        content:
          multipart/form-data:
            schema:
              type: object
              properties:
                dataset:
                  type: string
                  format: binary
                query:
                  $ref: '#/components/schemas/QueryDTO'
      responses:
        "200":
          description: OK
          content:
            '*/*':
              schema:
                type: array
                items:
                  type: integer
                  format: int32
components:
  schemas:
    PredicateDTO:
      type: object
      properties:
        value:
          type: object
        key:
          type: string
        operator:
          type: string
          enum:
          - EQUAL
          - NOT_EQUAL
          - BELONGING
          - NOT_BELONGING
          - GREATER_THAN
          - GREATER_THAN_EQUAL
          - LESS_THAN
          - LESS_THAN_EQUAL
    QueryDTO:
      type: object
      properties:
        predicates:
          type: object
          additionalProperties:
            $ref: '#/components/schemas/PredicateDTO'
Here are my dataset and my JSON object. How can I increase the part headers memory usage limit?
UPDATE:
As of Spring 5.3.13, the default maxHeadersSize in the DefaultPartHttpMessageReader class is 10 KiB instead of the previous 8 KiB; see this GitHub commit. Try updating the version of Spring you are using to at least this version, and hopefully your issue will be resolved.
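If it is unclear which spring-core actually ends up on the classpath (a Spring Boot parent may pin an older version), a quick sanity check is to print the framework version at startup; a minimal sketch:

import org.springframework.core.SpringVersion;

public class VersionCheck {

    public static void main(String[] args) {
        // Prints the Spring Framework version resolved on the classpath,
        // which should be 5.3.13 or later to get the 10 KiB default.
        System.out.println("Spring Framework: " + SpringVersion.getVersion());
    }
}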
Original answer below:
You were on the right track.
According to the Spring docs, you have to provide your own MultipartHttpMessageReader if you want to customize any of the pre-defined limits:
For Multipart parsing the maxInMemorySize property limits the size of non-file parts. For file parts, it determines the threshold at which the part is written to disk. For file parts written to disk, there is an additional maxDiskUsagePerPart property to limit the amount of disk space per part. There is also a maxParts property to limit the overall number of parts in a multipart request. To configure all three in WebFlux, you’ll need to supply a pre-configured instance of MultipartHttpMessageReader to ServerCodecConfigurer.
The docs don't seem to mention anything about the part header size limit referred to in the stack trace. However, for me that was the only property I needed to increase from the defaults to fix the issue; you can just as easily customize any of the other default limits of the DefaultPartHttpMessageReader.
import org.springframework.context.annotation.Configuration;
import org.springframework.http.codec.ServerCodecConfigurer;
import org.springframework.http.codec.multipart.DefaultPartHttpMessageReader;
import org.springframework.http.codec.multipart.MultipartHttpMessageReader;
import org.springframework.web.reactive.config.WebFluxConfigurer;

@Configuration
public class CodecConfig implements WebFluxConfigurer {

    @Override
    public void configureHttpMessageCodecs(ServerCodecConfigurer configurer) {
        // Raise the per-part header limit that the stack trace complains about.
        DefaultPartHttpMessageReader partReader = new DefaultPartHttpMessageReader();
        partReader.setMaxHeadersSize(9216); // 9 KiB, default is 8 KiB
        partReader.setEnableLoggingRequestDetails(true);

        // Wrap the part reader so that @RequestPart arguments still work,
        // and register it in place of the default multipart reader.
        MultipartHttpMessageReader multipartReader = new MultipartHttpMessageReader(partReader);
        multipartReader.setEnableLoggingRequestDetails(true);
        configurer.defaultCodecs().multipartReader(multipartReader);
    }
}
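For completeness, the other limits mentioned in the quoted docs live on the same reader, so they can be raised in the same place before wrapping it. A sketch with illustrative values, not recommendations:

import org.springframework.http.codec.multipart.DefaultPartHttpMessageReader;

public class PartReaderLimits {

    // Returns a part reader with all of the pre-defined limits customized;
    // the values here are illustrative, tune them to your payloads.
    static DefaultPartHttpMessageReader customizedPartReader() {
        DefaultPartHttpMessageReader partReader = new DefaultPartHttpMessageReader();
        partReader.setMaxHeadersSize(9216);                    // per-part header limit (the one from the stack trace)
        partReader.setMaxParts(10);                            // overall number of parts per request
        partReader.setMaxInMemorySize(16 * 1024 * 1024);       // non-file parts; disk threshold for file parts
        partReader.setMaxDiskUsagePerPart(100L * 1024 * 1024); // disk space cap per file part
        return partReader;
    }
}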