
Ajax Crawling: old way vs new way (#!)

Tags: jquery, ajax, seo

Old way

When I used to load pages asynchronously in projects that required the content to be indexed by search engines, I used a really simple technique:

<a href="page.html" id="example">Page</a>
<script type="text/javascript">
    $('#example').click(function(e){
        e.preventDefault(); // keep the browser from following the link
        $.ajax({
            url: 'ajax/page.html',
            success: function(data){
                $('#content').html(data);
            }
        });
    });
</script>

edit: I used to implement the hashchange event to support bookmarking for JavaScript users.
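The hashchange approach mentioned above can be sketched roughly like this (plain JavaScript; the helper name and element ids are mine, and the jQuery wiring is shown only as a comment since it needs a browser):

```javascript
// Map a location hash to the name of the fragment page to load.
// "#page" and "#!page" both become "page"; an empty hash falls back
// to a default page name.
function pageFromHash(hash) {
    var name = hash.replace(/^#!?/, '');
    return name || 'index';
}

// Browser wiring (hypothetical ids, not runnable outside a page):
// $(window).on('hashchange', function () {
//     $('#content').load('ajax/' + pageFromHash(location.hash) + '.html');
// });
```

With this in place, navigating to `page.html#other` (or using the back button) re-triggers the load, so bookmarks keep working for JavaScript users.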

New way

Recently Google came up with the idea of AJAX crawling; read about it here:

http://code.google.com/web/ajaxcrawling/

http://www.asual.com/jquery/address/samples/crawling/

Basically, they suggest changing "website.com/#page" to "website.com/#!page" and adding a page that serves the fragment's content at "website.com/?_escaped_fragment_=page".
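The URL mapping they describe can be sketched as a small function (plain JavaScript; the function name is mine, the scheme is Google's):

```javascript
// Rewrite a hashbang URL the way the crawler does:
// "website.com/#!page" -> "website.com/?_escaped_fragment_=page".
// URLs without "#!" are returned unchanged.
function escapedFragmentUrl(url) {
    var i = url.indexOf('#!');
    if (i === -1) return url;
    var base = url.slice(0, i);
    var frag = url.slice(i + 2);
    var sep = base.indexOf('?') === -1 ? '?' : '&';
    return base + sep + '_escaped_fragment_=' + encodeURIComponent(frag);
}
```

This is the transformation the crawler applies before requesting the page from your server.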

What's the benefit of using the new way?

To me it seems that the new way adds a lot of work and complexity to something I used to do simply: I designed the website to work without AJAX, then added AJAX and the hashchange event (to support the back button and bookmarking) as a final stage.

From an SEO perspective, what are the benefits of using the new way?

asked Nov 13 '10 by nemesisdesign


2 Answers

The idea is to make AJAX applications crawlable. According to the URI specification, URLs refer to the same document regardless of the fragment identifier (the part after the hash mark). Therefore search engines ignore the fragment identifier: if you have a link to www.example.com/page#content, the crawler will simply request www.example.com/page.

With the new schemes, when you use the #! notation the crawler knows that the link refers to additional content. The crawler transforms the URL into another (ugly) URL and requests it from your web server. The web server is supposed to respond with static HTML representing the AJAX content.
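On the server side, this amounts to detecting the `_escaped_fragment_` parameter and serving a static snapshot for it. A minimal sketch (plain JavaScript, assuming a Node-style server; the function name is mine):

```javascript
// Extract the decoded fragment from a query string, or return null
// if the crawler's _escaped_fragment_ parameter is absent.
function escapedFragment(queryString) {
    var m = /(?:^|&)_escaped_fragment_=([^&]*)/.exec(queryString);
    return m ? decodeURIComponent(m[1]) : null;
}

// Hypothetical server usage:
// var frag = escapedFragment(req.url.split('?')[1] || '');
// if (frag !== null) {
//     // respond with a static HTML snapshot of the AJAX content for `frag`
// } else {
//     // serve the normal JavaScript-driven page
// }
```

When the parameter is present, the server should respond with plain HTML representing what the AJAX version would have rendered.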

EDIT: Regarding the original question: if you already had regular links to static pages, then this scheme doesn't help you.

answered Sep 21 '22 by Amnon


The advantage is not really applicable to you, because you are using progressive enhancement. The new Google feature is for applications written entirely in JavaScript, which therefore can't be read by the crawler. I don't think you need to do anything here.

answered Sep 20 '22 by lonesomeday