Client side javascript driven websites [closed]

Is it possible to build dynamic web applications using client-side JavaScript as the pivotal point? I'm not talking about server-side JavaScript (like Node), I'm talking about handling most of the site with JavaScript: templating, form handling, etc.

Of course, the short answer is "yes, it is possible". But my main concern is about database data manipulation and security when the database is, traditionally, located on a server. A client-side JavaScript driven application would ideally talk almost directly to the database. I know CouchDB allows this, but how do you prevent users from submitting queries meant to see data they should not be allowed to see? How do you handle input validation, considering that the main validation would also be client-side and therefore easily forged?
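
To make the idea concrete, this is the kind of thing I mean. A minimal sketch, assuming a CouchDB instance at http://localhost:5984 with a "blog" database and CORS enabled for the page's origin; the browser reads documents straight from the database, with no application server in between:

    // Fetch the five most recently listed documents directly from CouchDB's HTTP API.
    // Host, database name, and the "title" field are assumptions for this sketch.
    fetch('http://localhost:5984/blog/_all_docs?include_docs=true&limit=5')
        .then(function (res) { return res.json(); })
        .then(function (data) {
            data.rows.forEach(function (row) {
                console.log(row.doc.title);   // assumes each document has a "title" field
            });
        });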

This seems very interesting to me but not really doable; maybe there are solutions I'm not aware of, thin security layers to wrap around some db, or projects I simply don't know about.

I'm aware of CouchDB standalone apps (couchapp); it's a technology close to what I'm after, but it enforces an open approach that makes it not ideal for every scenario I can think of.

Any suggestion on this topic is welcome.

EDIT: As examples are required, consider the simplest blog. I want to show the last five posts on the front page. If someone "hacks" the page in a very simple way, they could retrieve older posts. That's fine. But what about when I want to insert a new post? If JavaScript has open access to the database, anyone can also write posts on my blog - I don't want that. Also, anyone could delete my posts or other users' comments, a privilege I want to keep for myself. And what if I want to avoid comments longer than 500 characters or containing bad words? Again, with validation on the client side, users can bypass it.
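
For instance, even if my page's JavaScript enforces those rules, nothing stops a visitor from opening the browser console and writing to the database directly. A minimal sketch, assuming an openly writable CouchDB database named "blog" (the URL and document fields are just hypothetical):

    // The visitor issues the write request themselves, skipping every client-side check.
    var comment = {
        type: 'comment',
        post_id: 'my-latest-post',
        body: new Array(10000).join('x')   // far past any client-side length limit
    };
    fetch('http://localhost:5984/blog', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify(comment)
    });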

asked by pistacchio

2 Answers

First, running code on your clients to directly access the database sounds like trouble; it seems to go against the very idea of information hiding that has been so instrumental in designing more complex systems.

So, I assume most of this is an academic exercise. :)

Most centralized database systems have multiple users or roles themselves; typically, we use a single "user account" per application and let the application define users in its own way. (This has always bothered me; it has always felt like the admin vs. user split should also exist in the database. But I appear to be alone in my opinion. :)

However, you could define an admin role and a user role in your database, GRANT SELECT privileges to your user role, and GRANT SELECT, UPDATE, INSERT privileges to your admin role.

PostgreSQL at least has a predefined PUBLIC keyword that stands for all users and can take the place of a named user; the following example is copied from http://www.postgresql.org/docs/9.0/static/sql-grant.html:

GRANT SELECT ON mytable TO PUBLIC;
GRANT SELECT, UPDATE, INSERT ON mytable TO admin;

Correct configuration of the pg_hba.conf file could allow admin users to log in only from specific IPs, and ordinary users to log in from everywhere else, so you could restrict UPDATE and INSERT to just those specific IPs.
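
A sketch of what those pg_hba.conf entries might look like (the database name, role names, and addresses are placeholders, not a recommendation):

    # admins may connect only from the internal subnet
    host    mydb    admin      192.168.1.0/24    md5
    # everyone else connects as the read-only role, from anywhere
    host    mydb    webuser    0.0.0.0/0         md5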

Now the difficulty is in writing a PostgreSQL client library in JavaScript. :) I honestly haven't got a clue whether browser-based JavaScript virtual machines are flexible enough to allow arbitrary socket communication with a remote host. (Given how heartily WebSockets have been embraced, I'm guessing browser JavaScript is very much stuck in the HTTP world.)

Of course, you have to serve your HTML pages to the web browsers somehow, and that means having an HTTP server. You might as well ask that server to sit between the clients and the database. And you might as well do some processing on the server to avoid sending redundant or unnecessary data to the clients... which is exactly how we arrived at the situation we have today. :)

answered by sarnold


Yes, it's possible to have a web application that depends heavily on JavaScript; however, most of the time this layer is just additional, not a replacement. Most developers think of JavaScript as a convenience layer that makes transactions "easier" for the end user, and you could say that's the best approach security-wise. Using JavaScript as the "definitive" tool for getting malleable data safely into a trusted database is just not efficient, unless you want to treat all your data as insecure and sanitize it every single time you display it or deal with it, from JavaScript itself.

Fancy animations, AJAX, validations, and calculations are usually there only for convenience, on the thinking that sometimes it's better to use the client's processor rather than the server's. And of course, making things "faster" is something everyone has wanted to achieve since the days of 56k internet connections.

Security-wise, putting the security logic where the end user can reach it, without any extra layer of protection, is just insane. Think of JavaScript as an extra hand: what JavaScript can read, the user can read. You wouldn't store database credentials or password-hashing keys there, would you? Especially since JavaScript can be modified on the fly with a debugger, and pretty much any obfuscated code can be reduced to something readable by a human.

Leaving insertion and 'secure' data aside, you shouldn't have much trouble fetching publicly available information, as long as your source is secure.

For the JavaScript front end I would recommend Backbone.js, which will give you a solid base on the organization and interaction side:

Backbone supplies structure to JavaScript-heavy applications by providing models with key-value binding and custom events, collections with a rich API of enumerable functions, views with declarative event handling, and connects it all to your existing application over a RESTful JSON interface.
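
A rough sketch of the pieces involved; the /api/comments endpoint is an assumed server-side resource, not something Backbone provides for you:

    // A model and collection wired to a RESTful endpoint.
    var Comment = Backbone.Model.extend({
        urlRoot: '/api/comments',
        validate: function (attrs) {               // client-side convenience only
            if (attrs.body && attrs.body.length > 500) {
                return 'Comments are limited to 500 characters';
            }
        }
    });

    var Comments = Backbone.Collection.extend({
        model: Comment,
        url: '/api/comments'
    });

    var comments = new Comments();
    comments.fetch();                                     // GET  /api/comments
    comments.create({ post_id: 1, body: 'Nice post!' });  // POST /api/comments -- the server must still validate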

The only thing that would be left is a thin layer resting on top of your DB (or even in the db itself) to sanitize the data upon insertion and to store/compute sensitive information that can't be exposed to the client under ANY circumstances.
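
CouchDB, for example, lets you push part of that thin layer into the database itself through a validate_doc_update function; a hedged sketch, where the document fields ("type", "body") and the 'admin' role are assumptions:

    // Stored (as a string) under "validate_doc_update" in a design document such as
    // _design/blog. CouchDB runs it on every write; throwing {forbidden: ...} rejects
    // the document no matter what the client-side code did.
    function (newDoc, oldDoc, userCtx) {
        if (newDoc.type === 'post' && userCtx.roles.indexOf('admin') === -1) {
            throw({ forbidden: 'Only admins may create or edit posts' });
        }
        if (newDoc.type === 'comment' && newDoc.body && newDoc.body.length > 500) {
            throw({ forbidden: 'Comments are limited to 500 characters' });
        }
    }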

UPDATE

The Blog Example

The 3 real requirements for doing what you want:

  1. Authentication (backend): needed to access the db, to erase comments, and so on (see the sketch after this list).
  2. Validation & Sanitization (backend): limit the number of characters, escape malicious code, ban words.
  3. Sensitive Data Handling (backend): hashing passwords, for example.
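
As a rough illustration of point 1 (Express and the session check here are assumptions about the stack, not part of the requirement itself), the decision to allow a deletion only ever happens on the server:

    var express = require('express');
    var app = express();

    // However sessions are actually implemented, the authorization decision
    // happens here, on the server, where the client can't tamper with it.
    function requireAdmin(req, res, next) {
        if (req.session && req.session.role === 'admin') return next();
        res.status(403).send('Forbidden');
    }

    app.delete('/posts/:id', requireAdmin, function (req, res) {
        // ... remove the post with id req.params.id from the database here ...
        res.sendStatus(204);
    });

    app.listen(3000);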

Note: You can pretty much skip "Validation & Sanitization" by treating all your data as insecure when you display it; as I mentioned, that's highly inefficient, since you would need the client to parse it somehow to make it safe, and it's still highly likely to be bypassed. The other two really require you to have a secure way to interact with your valuable db.

Usually, if your back-end fails, your front-end fails, and vice versa. An XSS can do a heavy amount of damage on a JavaScript-ruled site (like Facebook, for instance).

answered by MarioRicalde