JavaScript is a great way to make web pages more interactive and less boring.
But it’s also a good way to kill a website’s SEO if implemented incorrectly.
Here’s a simple truth: Even the best things in the world need a way to be found.
No matter how great your website is, if Google can’t index it due to JavaScript issues, you’re missing out on traffic opportunities.
In this post, you’ll learn everything you need to know about JavaScript SEO best practices as well as the tools you can use to debug JavaScript issues.
Why JavaScript Is Dangerous for SEO: Real-World Examples
“Since redesigning our website in React, our traffic has dropped drastically. What happened?”
This is just one of the many questions I’ve heard or seen on forums.
You can replace React with any other JS framework; it doesn’t matter. Any of them can hurt a website if implemented without consideration for the SEO implications.
Here are some examples of what can potentially go wrong with JavaScript.
Example 1: Website Navigation Is Not Crawlable
What’s wrong here:
The links in the navigation don’t follow web standards. As a result, Google can’t see or follow them.
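For illustration, here’s the kind of markup that causes this. It’s a hypothetical menu with a made-up goTo() handler; the key point is that there’s no <a href> anywhere for Googlebot to extract:

```html
<!-- Non-crawlable navigation: no <a href>, so Googlebot finds no URLs here -->
<nav>
  <span onclick="goTo('/products/')">Products</span>
  <span onclick="goTo('/pricing/')">Pricing</span>
</nav>
<script>
  // Works only when JavaScript runs and a user actually clicks
  function goTo(path) {
    window.location.href = path;
  }
</script>
```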
Why it’s wrong:
- It makes it harder for Google to discover the internal pages.
- The authority within the website is not properly distributed.
- There’s no clear indication of relationships between the pages within the website.
As a result, a website with links that Googlebot can’t follow will not be able to utilize the power of internal linking.
Example 2: Image Search Traffic Decreased After an Improper Lazy Load Implementation
What’s wrong here:
While lazy loading is a great way to decrease page load time, it can also be dangerous if implemented incorrectly.
In this example, lazy loading prevented Google from seeing the images on the page.
Why it’s wrong:
- The content “hidden” under lazy loading might not be discovered by Google (when implemented incorrectly).
- Content that Google doesn’t discover can’t be ranked.
As a result, image search traffic can suffer a lot, which is especially critical for any business that relies heavily on visual search.
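To make this concrete, here’s a sketch of a scroll-triggered lazy load (the class name and image path are made up). Since Googlebot doesn’t scroll, the scroll event may never fire during rendering, so the real image URL never makes it into the src attribute:

```html
<img class="lazy" data-src="/images/product-photo.jpg" alt="Product photo">
<script>
  // Swap the real URL into src only after the user scrolls.
  // A crawler that never scrolls never triggers this handler.
  window.addEventListener('scroll', function () {
    document.querySelectorAll('img.lazy').forEach(function (img) {
      img.src = img.dataset.src;
      img.classList.remove('lazy');
    });
  });
</script>
```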
Example 3: The Website Was Switched to React With No Consideration of SEO
What’s wrong here:
This is my favorite example, from a website I audited a while ago. The owner came to me after their traffic had tanked. It was as if they had unintentionally tried to kill their website:
- The URLs were not crawlable.
- The images were not crawlable.
- The title tags were the same across all website pages.
- There was no text content on the internal pages.
Why it’s wrong:
- If Google doesn’t see any content on the page, it won’t rank this page.
- If multiple pages look the same to Googlebot, it can choose just one of them and canonicalize the rest to it.
In this example, the website’s pages looked exactly the same to Google, so it deduplicated them and chose the homepage as the canonical version.
A Few Things You Need to Know About the Google–JavaScript Relationship
When it comes to how Google treats your content, there are a few main things you should know.
Google Doesn’t Interact With Your Content
Googlebot can’t click buttons on your pages, expand or collapse content, and so on.
Googlebot sees only the content that is available in the rendered HTML without any additional interaction.
For example, if you have an expandable text section, and its text is available in the source code or rendered HTML, Google will index it.
By contrast, if you have a section whose content is not initially available in the page source code or DOM and loads only after a user interacts with it (e.g., clicks a button), Google won’t see this content.
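Here’s a simplified sketch of both cases. The FAQ text, element IDs, and the /api/faq-answer endpoint are all made up for illustration:

```html
<!-- Indexable: the text is already in the rendered HTML, just visually hidden -->
<div id="faq-1" hidden>Our return policy lasts 30 days...</div>
<button onclick="document.getElementById('faq-1').hidden = false">
  Read more
</button>

<!-- Not indexable: the text doesn't exist anywhere in the HTML
     until a click triggers a network request -->
<div id="faq-2"></div>
<button onclick="loadAnswer()">Read more</button>
<script>
  function loadAnswer() {
    fetch('/api/faq-answer') // hypothetical endpoint
      .then(function (res) { return res.text(); })
      .then(function (text) {
        document.getElementById('faq-2').textContent = text;
      });
  }
</script>
```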
Google Doesn’t Scroll
Googlebot does not behave like a typical user on a website; it doesn’t scroll through pages. So if your content is “hidden” behind an endless scroll, Google won’t see it.
See: Google’s Martin Splitt on Indexing Pages with Infinite Scroll.
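In line with the guidance in that video, one safe pattern is progressive enhancement: keep regular paginated links in the HTML and layer infinite scroll on top, so crawlers always have a followable path to the content. A rough sketch, with made-up URLs:

```html
<ul id="product-list">
  <!-- more items get appended here as the user scrolls -->
</ul>
<!-- Crawlable fallback: Googlebot can follow this link without scrolling -->
<a id="next-page" href="/products?page=2">Next page</a>
<script>
  // Load the next page for users when the link scrolls into view
  var nextLink = document.getElementById('next-page');
  new IntersectionObserver(function (entries) {
    if (entries[0].isIntersecting) {
      // fetch('/products?page=2') and append the items here
    }
  }).observe(nextLink);
</script>
```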
Google Doesn’t See Content Rendered Only in the Browser
If your content is rendered only in the user’s browser and never on the server, Googlebot may never see it. That’s why client-side rendering is a bad idea if you want Google to index and rank your website (and you do want that if you need traffic and sales).
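A simplified illustration of the difference, with hypothetical markup: with client-side rendering, the initial HTML the server sends is an empty shell; with server-side rendering, the same content arrives ready to index.

```html
<!-- Client-side rendering: everything depends on /app.js executing -->
<body>
  <div id="root"></div>
  <script src="/app.js"></script>
</body>

<!-- Server-side rendering: the content is already in the HTML response -->
<body>
  <div id="root">
    <h1>Blue Widgets</h1>
    <p>Our blue widgets are handmade and ship worldwide...</p>
  </div>
  <script src="/app.js"></script>
</body>
```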
OK, so is JavaScript really that bad?
Not if JavaScript is implemented on a website using best practices.
And that’s exactly what I’m going to cover below.
JavaScript SEO Best Practices
Add Links According to Web Standards
While “web standards” can sound intimidating, in reality it just means you should link to internal pages using an <a> tag with the href attribute. For example (with a placeholder URL and anchor text):
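```html
<!-- A standard crawlable link (URL and anchor text are placeholders) -->
<a href="/blog/javascript-seo/">JavaScript SEO best practices</a>
```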
This way, Google can easily find the links and follow them (unless you add a nofollow attribute to them, but that’s a different story).
Don’t use techniques like the following to add internal links on your website. This is a sketch of the patterns Google warns about; the URLs and the goTo() handler are placeholders:
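```html
<!-- Link patterns Google generally can't parse or follow (placeholders): -->
<a onclick="goTo('/pricing/')">Pricing</a>
<a href="javascript:goTo('/pricing/')">Pricing</a>
<span onclick="goTo('/pricing/')">Pricing</span>
<a href="#pricing">Pricing</a>
```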
By the way, the last option (a link with “#” in the href) can still be successfully used on a page if you want to bring people to a specific part of that page.
But Google won’t index individual variations of your URL with “#” added to it as separate pages, since it typically ignores everything after the “#”.
See: Google SEO 101: Do’s and Don’ts of Links & JavaScript.
Add Images According to Web Standards
As with internal links, image usage should also follow web standards so that Googlebot can easily discover and index images.
To be discovered, an image should be referenced in the src attribute of an <img> tag. For example (with a placeholder URL and alt text):
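```html
<!-- A standard crawlable image (URL and alt text are placeholders) -->
<img src="/images/blue-widget.jpg" alt="Blue widget">
```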
Many JavaScript-based lazy loading libraries use a data-src attribute to store the real image URL and replace the value of src with a lightweight placeholder image or GIF that loads fast.
For example (a sketch; the class name and paths are placeholders):
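```html
<!-- The real URL lives in data-src; src holds a tiny placeholder.
     The class name follows a common lazy-loading library convention,
     and the paths are placeholders. Without a fallback, Googlebot
     may only ever see the placeholder image. -->
<img class="lazyload"
     data-src="/images/blue-widget.jpg"
     src="/images/placeholder.gif"
     alt="Blue widget">
```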