Why is the .NET OData serializer so slow?

I have an OData endpoint (tested with both .NET Core and .NET 4.7.1) which exposes 2,500 objects built in memory. The OData GET call takes 30-40 seconds. The equivalent ASP.NET Web API call that returns raw JSON takes 1 second. It feels as though the OData framework is not as efficient as Json.NET. Any suggestions on how to improve performance?

Really slow.

    [EnableQuery(EnsureStableOrdering = false)]
    public ActionResult<IEnumerable<Person>> Get()
    {
        var list = new List<Person>();
        for (var i = 0; i < 2500; i++)
        {
            list.Add(new Person());
        }

        return list;
    }

Really fast.

public IHttpActionResult Get()
{
    var list = new List<Person>();
    for (var i = 0; i < 2500; i++)
    {
        list.Add(new Person());
    }

    var json = JsonConvert.SerializeObject(list);
    return Ok(json);
}
asked Jan 31 '20 by Kye


1 Answer

Well, the answer is not the serializer. When you add EnableQuery, you allow by default the different query options that OData supports: $select, $count, $skip, $top, $expand. But most importantly, EnableQuery is an ActionFilterAttribute, which means:

Filters in ASP.NET Core allow code to be run before or after specific stages in the request processing pipeline.

Check the ASP.NET Core documentation on filters for more info about ActionFilters.
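
As an aside, if your API does not need every one of those default query options, they can be narrowed down on the attribute itself. A minimal sketch, assuming the Microsoft.AspNet.OData package and the Person type from the question (controller name is just for illustration):

    using System.Linq;
    using Microsoft.AspNet.OData;
    using Microsoft.AspNet.OData.Query;
    using Microsoft.AspNetCore.Mvc;

    public class PeopleController : ControllerBase
    {
        // Only $select, $top and $skip are accepted; a request using anything else
        // (e.g. $expand) fails validation, and results are capped so a single call
        // cannot pull everything at once.
        [HttpGet]
        [EnableQuery(
            AllowedQueryOptions = AllowedQueryOptions.Select | AllowedQueryOptions.Top | AllowedQueryOptions.Skip,
            MaxTop = 100,
            PageSize = 100,
            EnsureStableOrdering = false)]
        public IQueryable<Person> Get()
        {
            return Enumerable.Range(0, 2500).Select(_ => new Person()).AsQueryable();
        }
    }

This doesn't change how the filter works internally, but it limits what a single request can ask the pipeline to do.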

Coming back to the ActionFilterAttribute point: EnableQuery overrides two of its methods, one that runs before and one that runs after the action:

public override void OnActionExecuting(ActionExecutingContext actionExecutingContext)

and

public override void OnActionExecuted(ActionExecutedContext actionExecutedContext)

The first one is where the query options are created and validated; this involves a lot of reflection work. The second one checks whether the result is set and successful. For example, if you are returning an IQueryable, this is where it gets executed: the query is materialized at this level. It first tries to retrieve the IQueryable from the returned response message, then validates the query from the URI based on the validation settings on EnableQueryAttribute, and finally applies the query and sets the result back on the response message.
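
You can see the same steps if you do the work yourself instead of relying on the attribute. A simplified sketch, assuming the Microsoft.AspNet.OData package, a registered EDM model that includes Person, and model binding of ODataQueryOptions (the controller and route names here are just for illustration):

    using System.Linq;
    using Microsoft.AspNet.OData.Query;
    using Microsoft.AspNetCore.Mvc;

    [Route("api/people")]
    public class PeopleController : ControllerBase
    {
        [HttpGet]
        public IActionResult Get(ODataQueryOptions<Person> queryOptions)
        {
            var people = Enumerable.Range(0, 2500)
                                   .Select(_ => new Person())
                                   .AsQueryable();

            // Step 1: validate the query options parsed from the request URI
            // ($filter, $orderby, $top, ...) against the validation settings.
            queryOptions.Validate(new ODataValidationSettings());

            // Step 2: apply the options, which builds the LINQ expression tree
            // (reflection-heavy) and executes it when the result is enumerated.
            var result = queryOptions.ApplyTo(people);

            return Ok(result);
        }
    }

[EnableQuery] performs essentially these two steps for you around your action, plus the bookkeeping on the response message described above.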

As you can see, all of this extra logic is more complex than just serializing the result to JSON, which is only the final step.

I took your example:

    [ApiController]
    [Route("api/test")]
    public class WeatherForecastController : ControllerBase
    {
        [Route("/get1")]
        [HttpGet]
        [EnableQuery(EnsureStableOrdering = false)]
        public ActionResult<IEnumerable<Person>> Get1()
        {
            var list = new List<Person>();
            for (var i = 0; i < 2500; i++)
            {
                list.Add(new Person());
            }

            return list;
        }

        [Route("/get2")]
        [HttpGet]
        public IActionResult Get2()
        {
            var list = new List<Person>();
            for (var i = 0; i < 2500; i++)
            {
                list.Add(new Person());
            }

            var json = JsonConvert.SerializeObject(list);
            return Ok(json);
        }

        [Route("/get3")]
        [HttpGet]
        public IActionResult Get3()
        {
            var list = new List<Person>();
            for (var i = 0; i < 2500; i++)
            {
                list.Add(new Person());
            }

            return Ok(list);
        }
    }

and I ran a performance test with 20 different threads making the same request to each of these endpoints (get1, get2, get3). The result is the following:

The average response times were 433 ms (get1, OData), 355 ms (get2) and 337 ms (get3), which in my opinion is not bad; the OData endpoint is only 96 ms slower than the last one in this load test. I'm not sure why your example takes 30-40 seconds, because I used your same code and JMeter for the load test, and the maximum time I saw was 900 ms, and only for the very first request, which makes sense because the app pool spins up on the first request if it was idle.

IMHO, if you wanted to implement yourself all the operations that OData gives you (shaping, ordering, paging, filtering), you would need to do a lot of reflection work as well, at least for the shaping and the filtering, not to mention all the binary operators that are available; the sketch below illustrates just a small piece of that. Creating your own grammar is not an option for me: you would need to write a lexer and a parser just to support your own syntax. So I think the benefit you get from OData is huge, unless your API is never going to require these complex operators. One thing to consider is how you scale your API so performance does not suffer, but that depends on the infrastructure you use to host it. Hope this helps.
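
For illustration, a minimal hypothetical sketch (the helper name is my own, not from any library) of just the dynamic ordering you would have to hand-roll; a full $filter grammar with its binary operators and $select shaping would be far more work on top of this:

    using System.Linq;
    using System.Linq.Expressions;

    public static class NaiveQueryHelper
    {
        // Orders an IQueryable<T> by a property whose name is only known at runtime,
        // e.g. OrderByProperty(people, "LastName"). Even this tiny piece already needs
        // reflection/expression-tree work; OData's parsers and binders do this for you.
        public static IQueryable<T> OrderByProperty<T>(IQueryable<T> source, string propertyName)
        {
            var parameter = Expression.Parameter(typeof(T), "x");
            var property = Expression.Property(parameter, propertyName); // throws if the property does not exist
            var keySelector = Expression.Lambda(property, parameter);

            var call = Expression.Call(
                typeof(Queryable),
                nameof(Queryable.OrderBy),
                new[] { typeof(T), property.Type },
                source.Expression,
                Expression.Quote(keySelector));

            return source.Provider.CreateQuery<T>(call);
        }
    }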

answered Nov 09 '22 by Zinov