I asked this question on Server Fault around website scalability, but that discussion was focused more on hardware configuration, increasing memory, etc. So I thought I would ask it here as well, since there is also a development side to my question.
As per that question, I have a perfectly good working asp.net-mvc site with a SQL Server backend, using NHibernate and SysCache as a second-level cache. I have a request to increase the user base from about 1000 to 7000, and I am trying to figure out where I should focus my development energies in terms of things that work perfectly fine now but are going to cause issues at scale. I have been doing a lot of reading, and so far here is what seems most relevant from a coding point of view:
My SQL Server database today is about 4 GB. In terms of data, I would expect a few of the tables to grow roughly linearly with the increase in users (such as a person table growing from 1000 rows to 7000 rows), but most of the other tables (reference data, etc.) should see only marginal growth (a table like location might double).
The architecture you describe is not scalable, but based on the numbers you provided, maybe scalability is not a necessity for you at all. Be pragmatic before you design for scalability; don't invest in it unless you need it.
Anyway, if you want to go for it, you need to scale as follows.
First, distinguish between commands and queries. Commands modify data, queries retrieve data.
For commands, you can use a message broker (e.g. RabbitMQ) or a service bus (e.g. NServiceBus). The idea is that the web server can quickly place a command on the queue and return a response to the user. Scalability is achieved by scaling out the number of command handlers without touching the web server. Obviously, if you then want to inform the user of the outcome, you need a push technology such as SignalR.
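A minimal sketch of the publishing side, using the RabbitMQ.Client NuGet package; the "commands" queue name and the serialized command payload are illustrative assumptions, not something from the question:

```csharp
using System.Text;
using RabbitMQ.Client;

public class CommandPublisher
{
    // Called from a controller action: enqueue the command and return at once.
    public void Publish(string commandJson)
    {
        var factory = new ConnectionFactory { HostName = "localhost" };
        using (var connection = factory.CreateConnection())
        using (var channel = connection.CreateModel())
        {
            // Durable queue: the queue definition survives a broker restart.
            channel.QueueDeclare(queue: "commands", durable: true,
                                 exclusive: false, autoDelete: false,
                                 arguments: null);

            // Persistent messages survive a broker restart as well.
            var props = channel.CreateBasicProperties();
            props.Persistent = true;

            channel.BasicPublish(exchange: "", routingKey: "commands",
                                 basicProperties: props,
                                 body: Encoding.UTF8.GetBytes(commandJson));
        }
    }
}
```

A separate pool of command-handler processes consumes the queue, so you scale the handlers, not the web tier.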
For queries, you need to understand that they do not scale out as naturally as commands do, so you need to be creative with them.
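For example, since you already run NHibernate with SysCache, one option is to serve read-mostly reference data from the second-level cache instead of hitting SQL Server on every request. A sketch, assuming a Location entity mapped as cacheable:

```csharp
using System.Collections.Generic;
using NHibernate;

public static IList<Location> GetLocations(ISession session)
{
    return session.CreateQuery("from Location")
                  .SetCacheable(true)          // keep the result set in the query cache
                  .SetCacheRegion("reference") // example region name; wire it up in your SysCache config
                  .List<Location>();
}
```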
You really need a solid system for backing up data. The chance of a critical error grows with the number of users, so a good backup system is very important; otherwise an outage can mean lost data.
Data needs a home. Building a robust system for storing data is very important, precisely for those times when you go offline for whatever reason.
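As a concrete starting point, a full backup can be scripted and scheduled; this sketch uses System.Data.SqlClient, with a placeholder database name and file path:

```csharp
using System.Data.SqlClient;

public static void BackupDatabase(string connectionString)
{
    const string sql = @"BACKUP DATABASE [MyAppDb]
                         TO DISK = N'D:\Backups\MyAppDb.bak'
                         WITH INIT, CHECKSUM";

    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(sql, connection))
    {
        command.CommandTimeout = 0; // backups can outlive the default 30-second timeout
        connection.Open();
        command.ExecuteNonQuery();
    }
}
```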
Stress testing will help you know what to fix. Fill the database with 10,000 random items and exercise every function you can, including searches for specific id numbers.
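For instance, once the database is seeded, time the lookups your hottest pages perform; the Person entity and id lookup below are stand-ins for your own screens:

```csharp
using System;
using System.Diagnostics;
using NHibernate;

public static void MeasureLookup(ISession session, int personId)
{
    var stopwatch = Stopwatch.StartNew();
    var person = session.Get<Person>(personId); // a search for a specific id
    stopwatch.Stop();
    Console.WriteLine("Lookup of {0} took {1} ms", personId, stopwatch.ElapsedMilliseconds);
}
```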
Make sure the server's bandwidth will be able to handle the growth. Increasing the user base increases the load on the server: the more users you have, the more are on at any one time.
As the amount of data being requested increases, so does the chance of deadlocks. You may want to read this article.
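A common mitigation is to retry work that SQL Server kills as a deadlock victim (error 1205); this is a generic sketch, not taken from the article:

```csharp
using System;
using System.Data.SqlClient;
using System.Threading;

public static void ExecuteWithDeadlockRetry(Action work, int maxAttempts = 3)
{
    for (var attempt = 1; ; attempt++)
    {
        try
        {
            work();
            return;
        }
        catch (SqlException ex) when (ex.Number == 1205 && attempt < maxAttempts)
        {
            // Deadlock victim: back off briefly, then retry; the competing
            // transaction will have completed by then.
            Thread.Sleep(100 * attempt);
        }
    }
}
```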
I would approach this problem in multiple steps to achieve the best results.
Generally in my projects, some of the biggest performance pitfalls come from issuing multiple DB queries per page load. Try loading pages and reviewing the database query logs; if there are redundant queries, consolidate the requests to lighten the load on the DB.
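With NHibernate this usually means eager-fetching associations so that a list page issues one query instead of N+1; Person and its Location association here are assumed from the question's own table names:

```csharp
using System.Collections.Generic;
using System.Linq;
using NHibernate;
using NHibernate.Linq;

public static List<Person> GetPeopleWithLocations(ISession session)
{
    return session.Query<Person>()
                  .Fetch(p => p.Location) // join-fetch instead of one query per row
                  .ToList();
}
```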
Also, ensure all your stylesheets and JavaScript assets are compiled into minified single files in your production environment. This will reduce the load on your web server.
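If you are on ASP.NET MVC 4 or later, the built-in System.Web.Optimization bundling does this; the bundle names and file paths below are examples only:

```csharp
using System.Web.Optimization;

public static class BundleConfig
{
    public static void RegisterBundles(BundleCollection bundles)
    {
        bundles.Add(new ScriptBundle("~/bundles/site").Include(
            "~/Scripts/jquery-{version}.js",
            "~/Scripts/site.js"));

        bundles.Add(new StyleBundle("~/Content/css").Include(
            "~/Content/site.css"));

        // Minified single files are served when optimizations are on
        // (or when <compilation debug="false"> is set in web.config).
        BundleTable.EnableOptimizations = true;
    }
}
```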
Review your database logs and see which transactions are causing the most latency or triggering full table scans. Add good indexes in these problem areas and watch your application performance take off.
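For example, a frequent lookup on a foreign key column can be covered with a nonclustered index; the table and column names here are illustrative, applied from a simple migration-style script:

```csharp
using System.Data.SqlClient;

public static void AddPersonLocationIndex(string connectionString)
{
    const string sql = @"CREATE NONCLUSTERED INDEX IX_Person_LocationId
                         ON dbo.Person (LocationId)
                         INCLUDE (LastName, FirstName)";

    using (var connection = new SqlConnection(connectionString))
    using (var command = new SqlCommand(sql, connection))
    {
        connection.Open();
        command.ExecuteNonQuery();
    }
}
```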
In a test environment (!!!), use a database entry faker tool such as Faker (it's Ruby, but you get the idea). Test common transactions against a much larger table size than usual, and the performance bottlenecks will begin to show themselves more prominently.
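The closest C# equivalent is the Bogus NuGet package; the Person properties below are assumptions about your schema:

```csharp
using System.Collections.Generic;
using Bogus;

public static List<Person> GenerateFakePeople(int count)
{
    var faker = new Faker<Person>()
        .RuleFor(p => p.FirstName, f => f.Name.FirstName())
        .RuleFor(p => p.LastName,  f => f.Name.LastName())
        .RuleFor(p => p.City,      f => f.Address.City());

    return faker.Generate(count); // e.g. GenerateFakePeople(10000) for the stress test
}
```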