Ajax GET: multiple data-specific calls, or fewer less specific calls?

I'm developing a web app using a Node.js/express backend and MongoDB as a database.

The example below is for an admin dashboard page where I will display cards with different information about the site's users. On that page I might want to show, for example:

  1. The number of each type of user
  2. The most common location for each user type
  3. How many signups there are by month
  4. Most popular job titles

I could do this all in one route, where I have a controller that performs all of these tasks and bundles the results into a single object served at one URL, which I can then pull data from using ajax. Or, I could split each task into its own route/controller, with a separate ajax call to each. What I'm trying to decide is: what are the best practices around making multiple ajax calls on a single page?


Example:

I am building up a page where I will make an interactive table using DataTables for different types of user (I currently have two: mentors and mentees). This example requires just two data requests (one per user type), but my final page will need more like 10.

I make an ajax GET call for each user type, and build the table from the returned data:

User type 1 - Mentees

$.get('/admin/' + id + '/mentees')
    .done(data => {
        $('#menteeTable').DataTable( {
            data: data,
            "columns": [
                { "data": "username"},
                { "data": "status"}
            ]
        });
    })

User type 2 - Mentors

$.get('/admin/' + id + '/mentors')
    .done(data => {
        $('#mentorTable').DataTable( {
            data: data,
            "columns": [
                { "data": "username"},
                { "data": "position"}
            ]
        });
    })

This then requires two routes in my Node.js backend:

router.get("/admin/:id/mentors", getMentors);
router.get("/admin/:id/mentees", getMentees);

And two controllers that are structured identically (but filter for different user types):

getMentees(req, res, next){
    console.log("Controller: getMentees");
    let query = { accountType: 'mentee', isAdmin: false };
    Profile.find(query)
        .lean()
        .then(users => {
            return res.json(users);
        })
        .catch(err => {
            console.log(err);
            next(err); // pass the error on so the request doesn't hang
        })
}
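As an aside, since the two controllers differ only in the `accountType` filter, they could be generated from one factory. This is just a sketch: `makeListController` is a name I've made up, and the model is passed in as a parameter to keep the example self-contained.

```javascript
// Sketch: one factory in place of two near-identical controllers.
// `Profile` stands in for the Mongoose model from the question.
function makeListController(Profile, accountType) {
    return function (req, res, next) {
        Profile.find({ accountType, isAdmin: false })
            .lean()
            .then(users => res.json(users))
            .catch(next); // hand the error to Express instead of only logging it
    };
}

// Both routes could then share the implementation:
// router.get("/admin/:id/mentors", makeListController(Profile, 'mentor'));
// router.get("/admin/:id/mentees", makeListController(Profile, 'mentee'));
```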

This works great. However, as I need to make multiple data requests, I want to make sure that I'm building this the right way. I can see several options:

  1. Make individual ajax calls for each data type, and do any heavy lifting on the backend (e.g. tally user types and return) - as above
  2. Make individual ajax calls for each data type, but do the heavy lifting on the frontend. In the above example I could just as easily have filtered out isAdmin users from the data returned by my ajax call
  3. Make fewer ajax calls that request less refined data. In the above example I could have made one call (requiring only one route/controller) for all users, and then filtered data on the frontend to build two tables

I would love some advice on which strategy is most efficient in terms of time spent sourcing data.


UPDATE

To clarify the question, I could have achieved the same result as above using a controller setup something like this:

Profile.find(query)
    .lean()
    .then(users => {
        let mentors = [],
            mentees = [];

        users.forEach(user => {
            if(user.accountType === 'mentee') {
                mentees.push(user);
            } else if (user.accountType === 'mentor') {
                mentors.push(user);
            }
        });
        return res.json({mentees, mentors});
    })

And then make one ajax call, and split the data accordingly. My question is: which is the preferred option?
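For comparison, option 3's client-side counterpart would split a single response in the browser instead. A sketch (the `splitByAccountType` helper is an illustrative name, not from my code):

```javascript
// Sketch: client-side split for the single-request approach (option 3).
// Takes the raw user list from one ajax response and partitions it.
function splitByAccountType(users) {
    const mentees = [];
    const mentors = [];
    users.forEach(user => {
        if (user.accountType === 'mentee') {
            mentees.push(user);
        } else if (user.accountType === 'mentor') {
            mentors.push(user);
        }
    });
    return { mentees, mentors };
}
```

A single `$.get` to a combined endpoint could then feed both DataTables from the same response.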

fugu asked Jul 05 '19 10:07

1 Answer

TL;DR: Option 1

IMO I wouldn't serve unprocessed data to the front-end: things can go wrong, you can reveal too much, and it could take a lot for an unspecified client machine to process (it could be a low-power device with limited bandwidth and battery, for example). You want a smooth user experience, and JavaScript on the client churning through a mass of data would detract from that.

My rule of thumb: use the back-end for the processing (prepare the information exactly how you need it), JS for retrieving and placing the information (AJAX) on the page and things like switching element states, and CSS for anything moving around (animations, transitions etc.) as much as possible before resorting to JS.

As for the routes, my approach would be that each distinct package of information (each DataTable) gets its own route, so you're not overloading a method with too many purposes. Keep it simple and maintainable. You can always abstract away anything that's identical and repeated often.

So to answer your question, I'd go with Option 1. You could also offer a single 'page-load' endpoint, then if anything changes update the individual tables later using their distinct endpoints. This initial 'page-load' call could collate the information from the endpoints on the back-end and serve it as one package of data to populate all the tables initially: one initial request with one lot of well-defined data, then the ability to update an individual table if the user requests it (or a push, if you get into that).
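A minimal sketch of that 'page-load' collation, assuming the per-table queries are already factored into their own functions (the names `getDashboard`, `fetchMentees`, and `fetchMentors` are illustrative):

```javascript
// Sketch: collate the per-table queries into one initial payload.
// The fetchers stand in for whatever each distinct endpoint already does.
function getDashboard(fetchMentees, fetchMentors) {
    // Run the queries in parallel and bundle the results into one object.
    return Promise.all([fetchMentees(), fetchMentors()])
        .then(([mentees, mentors]) => ({ mentees, mentors }));
}

// In an Express route this might be wired up as:
// router.get("/admin/:id/dashboard", (req, res, next) => {
//     getDashboard(queryMentees, queryMentors)
//         .then(payload => res.json(payload))
//         .catch(next);
// });
```

Each table keeps its distinct endpoint for later refreshes; this endpoint only serves the initial page load.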

Rack Sinchez answered Oct 10 '22 15:10