
Ember fewer requests for hasMany and belongsTo lookups

I have the following three models:

ConversationApp.Post = DS.Model.extend(
  body: DS.attr()
  replies: DS.hasMany('reply', async: true)
  author: DS.belongsTo('user', async: true)
  likedBy: DS.hasMany('user', async: true)
)

ConversationApp.Reply = DS.Model.extend(
  body: DS.attr()
  post: DS.belongsTo('post')
  author: DS.belongsTo('user', async: true)
  likedBy: DS.hasMany('user', async: true)
)

ConversationApp.User = DS.Model.extend(
  firstName: DS.attr()
  lastName: DS.attr()
)

And my index route makes this call:

ConversationApp.IndexRoute = Em.Route.extend(
  model: (args) ->
    @store.find('post', page: 1) # => /api/v1/posts?page=1
)

After that call is made, Ember starts fetching all the users needed for the first page - a total of 17(!) different requests for users on the first page (with 10 posts). Here are 3 examples of the requests Ember makes to the server:

  • /api/v1/users/11375
  • /api/v1/users/4383
  • /api/v1/users?ids[]=34588&ids[]=7442&ids[]=10294

I would like Ember to only make one request, that requests all the required users for the first page:

  • /api/v1/users?ids[]=34588&ids[]=7442&ids[]=10294&ids[]=11375&ids[]=4383
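
For context, the posts payload references users by ID only, with no sideloaded user records, which is why Ember resolves each async relationship with its own lookup. Roughly, trimmed to a single post (post and reply IDs are made up; the user IDs are the ones from the requests above):

{
  "posts": [
    {
      "id": 1,
      "body": "...",
      "author": 11375,
      "replies": [201, 202],
      "likedBy": [34588, 7442, 10294]
    }
  ],
  "meta": { "page": 1 }
}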

The Handlebars template looks like this:

{{#each}}
  {{author.firstName}}
  {{#each likedBy}}
    [... removed for brevity ...]
  {{/each}}

  {{#each replies}}
    {{author.firstName}}
    {{#each likedBy}}
      [... removed for brevity ...]
    {{/each}}
  {{/each}}
{{/each}}

How can I accomplish that?

asked by Martin

2 Answers

I know that this is an old thread, but here is a solution using the DS.RESTAdapter.

It's pretty easy to accomplish this. The only thing you have to do is set coalesceFindRequests to true, like so:

App.ApplicationAdapter = DS.RESTAdapter.extend({
    coalesceFindRequests: true
});

Or, if you are using Ember CLI:

// app/adapters/application.js
import DS from 'ember-data';

export default DS.RESTAdapter.extend({
    coalesceFindRequests: true
});

You can read more about it in the DS.RESTAdapter API docs.
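
One thing to keep in mind: with coalescing enabled, Ember Data collects the individual lookups into a single GET with ids[] query parameters, so the backend has to support that kind of filtering. Roughly (the exact response shape depends on your serializer):

GET /api/v1/users?ids[]=34588&ids[]=7442&ids[]=11375

{
  "users": [
    { "id": 34588, "firstName": "...", "lastName": "..." },
    { "id": 7442, "firstName": "...", "lastName": "..." },
    { "id": 11375, "firstName": "...", "lastName": "..." }
  ]
}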

Good luck! :)

answered by Mirko Akov

So I took a stab at this. Unfortunately, I don't believe Ember Data provides this kind of functionality out of the box, but it doesn't seem too difficult to implement. My approach was to use a debouncer: every time a request for users is made, it's put into a pool. The pool keeps filling until a long enough period (50ms in my code) goes by without another request. At that point, all of the pooled requests are sent together and the pool is emptied. Then, when the combined response comes back, it's broken up into the smaller pieces needed to fulfill the requests that were originally in the pool.

Keep in mind, I haven't tested this yet, but this should show the general idea.

ConversationApp.UserAdapter = DS.RESTAdapter.extend({
    _findMany: null,

    find: function(store, type, id) {
        return this.findMany(store, type, [id]);
    },

    findMany: function(store, type, ids) {
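        // Stash the parent class's findMany while this._super is valid
        // (it only refers to the parent implementation during this call),
        // so the debounced callback below can use it for the real request.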
        this._findMany = this._super;

        // Create a promise, but keep the resolve function so we can call it later
        var resolve;
        var promise = new Ember.RSVP.Promise(function(r) {
            resolve = r;
        });

        // Let our debouncer know about this request
        this.concatenateRequest(store, ids, resolve);

        // Return a promise as usual
        return promise;
    },

    /**
     * The number of milliseconds after a request to wait before sending it.
     * Tweak this as necessary for performance.
     */
    debounceTimeout: 50,

    concatenateRequest: (function() {
        // All of the IDs currently requested in the pool
        var allIds = new Em.Set();
        // The pool of promises that is currently awaiting fulfillment
        var allPromises = [];

        // The ID of the last `setTimeout` call
        var timeout = null;

        // Takes the specified users out of the payload
        // We do this to break the giant payload into the small ones that were requested
        var extractUsers = function(payload, ids) {
            // Filter out the users that weren't requested
            // Note: assuming payload = { users: [], linked: {}, meta: {} }
            var users = payload.users.filter(function(user) {
                return (ids.indexOf(user.id.toString()) >= 0);
            });

            // Return payload in form that store is expecting
            return { users: users };
        };

        return function(store, ids, resolve) {
            // clear the timeout (if it's already cleared, no effect)
            clearTimeout(timeout);

            // Add the current promise to the list of promises to resolve
            allIds.addObjects(ids);
            allPromises.push({ ids: ids, resolve: resolve });

            // Set our timeout function up in case another request doesn't come in time
            timeout = setTimeout(function() {
                // Get the IDs and promises store so far so we can resolve them
                var ids = allIds.toArray();
                var promises = allPromises;

                // Clear these so the next request doesn't resolve the same promises twice
                allIds = new Em.Set();
                allPromises = [];

                // Send off a single request for all of the users we need
                this._findMany(store, ConversationApp.User, ids).then(function(payload) {
                    // Resolve each promise individually
                    promises.forEach(function(promise) {
                        // extract the correct users from the payload
                        var users = extractUsers(payload, promise.ids);
                        // resolve the promise with the users it requested
                        promise.resolve(users);
                    });
                });
            }.bind(this), this.get('debounceTimeout'));
        };
    })()
});
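
With this in place as the user adapter, lookups that happen close together should collapse into one request. For example (hypothetical calls from a route or controller):

// Both of these go into the pool and should resolve from a single
// GET /api/v1/users?ids[]=11375&ids[]=4383
this.store.find('user', 11375);
this.store.find('user', 4383);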

EDIT: I set up a quick JSBin with a unit test and it seems to function OK. It's a pretty fragile test, but it shows that the idea works well enough.

answered by GJK