I use jQuery to retrieve content from the database with a JSON request. It then replaces a wildcard in the HTML (like %title%) with the actual content. This works great, and this way I can maintain my multi-language texts in a database, but Googlebot only sees the wildcards, not the actual content. I know Googlebot can see pages without JavaScript, but is there a way to deal with this? Thanks!
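For context, the wildcard-replacement approach described in the question can be sketched roughly as follows. This is a minimal illustration, not the asker's actual code; the function name `renderTemplate` and the `%key%` pattern are assumptions based on the `%title%` example.

```javascript
// Replace every %key% wildcard in an HTML string with the matching
// value from a data object (e.g. one fetched via a JSON request).
// Unknown wildcards are left untouched.
function renderTemplate(html, data) {
  return html.replace(/%(\w+)%/g, function (match, key) {
    return Object.prototype.hasOwnProperty.call(data, key) ? data[key] : match;
  });
}

// Example: fill in the %title% wildcard from a fetched payload.
var rendered = renderTemplate('<h1>%title%</h1>', { title: 'Hello' });
```

In a real page, `data` would come from something like `$.getJSON(...)`, and the rendered string would be injected into the DOM, which is exactly the step Googlebot must execute JavaScript to see.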
We ran a series of tests that verified Google is able to execute and index JavaScript with a multitude of implementations. We also confirmed Google is able to render the entire page and read the DOM, thereby indexing dynamically generated content.
Even though Google can usually index dynamic AJAX content, it's not always that simple.
Googlebot processes JavaScript web pages in three phases: crawling, rendering, and indexing.
Yes, Google crawls dynamic content created using JavaScript. It can recognize the DOM after loading, including modifications to the title tag. It can also follow links created with the onclick event handler.
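To make the claim above concrete, here is a minimal sketch of a post-load title modification, the kind of DOM change the answer says Google can pick up. The `doc` stand-in object and `applyContent` helper are hypothetical, used only so the snippet runs outside a browser; in a real page you would assign to `document.title` directly once the JSON payload arrives.

```javascript
// Stand-in for the browser's `document` object (hypothetical).
var doc = { title: 'Loading...' };

// Mimics updating the page title after dynamic content is fetched;
// Googlebot indexes the DOM state after such scripts have run.
function applyContent(doc, data) {
  doc.title = data.title;
}

applyContent(doc, { title: 'Red Widgets - Product Page' });
```

Similarly, an onclick-driven link such as `<a href="#" onclick="location.href='/products'; return false;">` is reported to be followable, though a plain crawlable `href` remains the safer choice.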
Google appears to have a near-fully or fully functional JavaScript-crawling bot at the time of this answer:
In 2009 Google proposed a solution for making AJAX crawlable: https://webmasters.googleblog.com/2009/10/proposal-for-making-ajax-crawlable.html
In 2015 Google deprecated the above approach: https://webmasters.googleblog.com/2015/10/deprecating-our-ajax-crawling-scheme.html
I have successfully built multiple single-page applications that are correctly rendered in Google Webmaster Tools.
There are lots of resources on the web if you want to dive deeper.