We are trying to handle a huge volume of HTTP POST requests. With the Netty server below I was only able to handle ~50K requests/sec, which is far too low.
My question is: how can I tune this server so it handles more than 1.5 million requests/second?
Netty4 Server
// Configure the server.
EventLoopGroup bossGroup = new NioEventLoopGroup();
EventLoopGroup workerGroup = new NioEventLoopGroup();
try {
    ServerBootstrap b = new ServerBootstrap();
    b.option(ChannelOption.SO_BACKLOG, 1024);
    b.group(bossGroup, workerGroup)
     .channel(NioServerSocketChannel.class)
     .handler(new LoggingHandler(LogLevel.INFO))
     .childHandler(new HttpServerInitializer(sslCtx));
    Channel ch = b.bind(PORT).sync().channel();
    System.err.println("Open your web browser and navigate to " +
            (SSL ? "https" : "http") + "://127.0.0.1:" + PORT + '/');
    ch.closeFuture().sync();
} finally {
    bossGroup.shutdownGracefully();
    workerGroup.shutdownGracefully();
}
Initializer
public class HttpServerInitializer extends ChannelInitializer<SocketChannel> {

    private final SslContext sslCtx;

    public HttpServerInitializer(SslContext sslCtx) {
        this.sslCtx = sslCtx;
    }

    @Override
    public void initChannel(SocketChannel ch) {
        ChannelPipeline p = ch.pipeline();
        if (sslCtx != null) {
            p.addLast(sslCtx.newHandler(ch.alloc()));
        }
        p.addLast(new HttpServerCodec());
        p.addLast("aggregator", new HttpObjectAggregator(Integer.MAX_VALUE));
        p.addLast(new HttpServerHandler());
    }
}
Handler
public class HttpServerHandler extends ChannelInboundHandlerAdapter {

    private static final String CONTENT = "SUCCESS";

    @Override
    public void channelReadComplete(ChannelHandlerContext ctx) {
        ctx.flush();
    }

    @Override
    public void channelRead(ChannelHandlerContext ctx, Object msg) {
        if (msg instanceof HttpRequest) {
            HttpRequest req = (HttpRequest) msg;
            final FullHttpRequest fReq = (FullHttpRequest) req;
            Charset utf8 = CharsetUtil.UTF_8;
            final ByteBuf buf = fReq.content();
            String in = buf.toString(utf8);
            System.out.println(" In ==> " + in);
            buf.release();
            in = null;
            if (HttpHeaders.is100ContinueExpected(req)) {
                ctx.write(new DefaultFullHttpResponse(HTTP_1_1, CONTINUE));
            }
            boolean keepAlive = HttpHeaders.isKeepAlive(req);
            FullHttpResponse response = new DefaultFullHttpResponse(HTTP_1_1, OK,
                    Unpooled.wrappedBuffer(CONTENT.getBytes()));
            response.headers().set(CONTENT_TYPE, "text/plain");
            response.headers().set(CONTENT_LENGTH, response.content().readableBytes());
            if (!keepAlive) {
                ctx.write(response).addListener(ChannelFutureListener.CLOSE);
            } else {
                response.headers().set(CONNECTION, Values.KEEP_ALIVE);
                ctx.write(response);
            }
        }
    }

    @Override
    public void exceptionCaught(ChannelHandlerContext ctx, Throwable cause) {
        cause.printStackTrace();
        ctx.close();
    }
}
Your question is very generic, but I'll try to give you an answer covering Netty-specific optimizations as well as improvements to your own code.
Your code issues:
- System.out.println(" In ==> " + in): you shouldn't use this in a high-load concurrent handler. Why? Because the code inside the println method is synchronized and therefore penalizes your performance.
- You cast the message to HttpRequest and then to FullHttpRequest; you may use just the last one (a minimal sketch follows this list).
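For illustration, a sketch of the single-cast channelRead. It assumes you keep the HttpObjectAggregator in the pipeline so that a FullHttpRequest actually arrives; the body handling is elided:

@Override
public void channelRead(ChannelHandlerContext ctx, Object msg) {
    if (msg instanceof FullHttpRequest) {
        // One cast is enough; FullHttpRequest is already an HttpRequest.
        FullHttpRequest req = (FullHttpRequest) msg;
        try {
            // work directly with req.headers() and req.content() here
        } finally {
            req.release(); // aggregated requests are reference-counted
        }
    }
}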
Netty-specific issues in your code:
- EventLoopGroup bossGroup = new NioEventLoopGroup(): you need to size the bossGroup and workerGroup correctly for your test scenario. You didn't provide any info about your test cases, so I can't give you concrete numbers here; a sizing sketch follows this list.
- new HttpObjectAggregator(Integer.MAX_VALUE): you don't actually need this handler in your code, so you may remove it for better performance.
- new HttpServerHandler(): you don't need to create this handler for every channel. Since it holds no state, it can be shared across all pipelines; search for @Sharable in Netty (see the handler sketch after this list).
- new LoggingHandler(LogLevel.INFO): you don't need this handler for high-load tests because it logs a lot. Add your own logging only where necessary.
- buf.toString(utf8): this is very wrong. You convert the incoming bytes to a String, which makes no sense because the data has already been decoded by Netty's HttpServerCodec, so you do the work twice.
- Unpooled.wrappedBuffer(CONTENT.getBytes()): you wrap the constant message on every request, which is unnecessary work per request. Create the ByteBuf only once and retain() or duplicate() it, depending on how you use it (also shown in the handler sketch below).
- ctx.write(response): you may consider ctx.write(response, ctx.voidPromise()) in order to allocate less.

This is not all, but fixing the issues above would be a good start. Two sketches of the points above follow.