I have read quite a bit about client-side JavaScript apps and how search engine bots crawl them. I found two general approaches:
1. Graceful degradation. Precondition: the whole web app degrades gracefully and is usable without JavaScript, so search engine bots can crawl it directly.
2. Server-side snapshots. Precondition: the server backend follows Google's AJAX crawling guide ( https://developers.google.com/webmasters/ajax-crawling ) and returns plain HTML for _escaped_fragment_ URLs (e.g. www.example.com/ajax.html?_escaped_fragment_=key=value ). As far as I understand, something like http://phantomjs.org/ could be used to generate that HTML, so there is no frontend code duplication; a rough sketch of such a setup follows below.
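To make approach #2 concrete, here is a minimal, hypothetical sketch. The file name `snapshot.js`, the port, and the two-second timeout are my own assumptions, not anything from Google's guide; a real setup would wait for an app-specific "rendering done" signal and cache the snapshots. First, a PhantomJS script that loads a page and prints the DOM once the client-side app has had time to render:

```js
// snapshot.js -- run as: phantomjs snapshot.js "http://localhost:3000/#!/posts"
// Loads the URL in headless WebKit and prints the rendered DOM as static HTML.
var page = require('webpage').create();
var system = require('system');
var url = system.args[1];

page.open(url, function (status) {
    if (status !== 'success') {
        console.log('Failed to load ' + url);
        phantom.exit(1);
        return;
    }
    // Crude: give the JS app time to render. A real implementation
    // would poll for a flag the app sets when rendering is complete.
    window.setTimeout(function () {
        console.log(page.content);  // the fully rendered markup
        phantom.exit(0);
    }, 2000);
});
```

The server then only needs to detect `_escaped_fragment_` requests and return the snapshot instead of the empty app shell. A sketch using Express (again an assumption, since the question leaves the backend open):

```js
// Serve PhantomJS snapshots to crawlers; everyone else gets the normal JS app.
var express = require('express');
var exec = require('child_process').exec;
var app = express();

app.use(function (req, res, next) {
    var fragment = req.query._escaped_fragment_;
    if (fragment === undefined) return next();  // normal users: serve the JS app

    // Map ?_escaped_fragment_=/posts back to the hash-bang URL the app uses.
    var target = 'http://localhost:3000/#!' + fragment;
    exec('phantomjs snapshot.js "' + target + '"', function (err, stdout) {
        if (err) return res.status(500).send('Snapshot failed');
        res.send(stdout);  // plain HTML for the crawler
    });
});

app.use(express.static(__dirname + '/public'));
app.listen(3000);
```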
What should a crawlable Ember.js application stack look like to offer server-side rendering for search engine bots alongside frontend JS-framework goodness? What do the Ember.js core developers recommend to achieve this? (E.g. Node + Ember.js + PhantomJS + x, Rails + Ember.js + y, or Play Framework + z?)
I know there might be many ways to get there, but I feel it would be good to use Stack Overflow to filter out the common approaches.
I have already looked at some JS frameworks that aim to provide such a full stack out of the box.
I am asking specifically about Ember.js because I like its approach and I think the team behind it is definitely capable of building one of the best frameworks.
I have yet to see anything pre-existing like this built for Ember.js. There are, however, early attempts to integrate Ember as a server-side module for Node.
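For a sense of what such a server-side module might look like, here is a rough sketch of the idea using jsdom to simulate a browser DOM in Node. Everything here is an assumption for illustration: the bundle name `app.js`, the timeout, and whether a given Ember version even boots under jsdom is exactly what those early attempts are wrestling with.

```js
// server-render.js -- hypothetical: run the app's compiled bundle in a
// simulated DOM (jsdom) and serialize whatever it renders.
var fs = require('fs');
var JSDOM = require('jsdom').JSDOM;

var html = '<!doctype html><html><body><script>' +
           fs.readFileSync('app.js', 'utf8') +   // assumed: the built Ember app
           '</script></body></html>';

var dom = new JSDOM(html, { runScripts: 'dangerously' });

// Crude: wait for the app to render into the simulated document,
// then print the markup a crawler would receive.
setTimeout(function () {
    console.log(dom.serialize());
}, 1000);
```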
Something to check out is Derby.js, which actually does workflow #1. You might want to look at their codebase and, if you are up to the task, adapt it for Ember.