This is another SEO consulting edition, drawn from a client consultation in which he asked why Google indexed a page with a noindex meta tag. In Google Webmaster Tools, under the HTML Improvements section, he saw pages flagged for duplicate meta descriptions, but when he checked, those other pages actually carried a noindex meta tag. So why is Google indexing pages with a noindex meta tag?
First, let’s recap what the noindex meta tag is:
It is a tag that prevents a page from being indexed by Google, so the page does not appear in the web index. It lets you control which pages of your website end up in Google’s index. But if the noindex meta tag prevents pages from being indexed, how come my client’s pages with the noindex meta tag are still indexed by Google?
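For reference, here is what the tag looks like. It is a single line in the page’s head section, shown with the standard “robots” name that applies to all compliant crawlers:

```html
<head>
  <!-- Tells compliant crawlers not to include this page in their index -->
  <meta name="robots" content="noindex">
</head>
```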
Explanation from Google:
Google may still show a page in its index even after you add a noindex meta tag, because Google has to crawl the entire page before it can see the tag. According to Google, this usually means the site has not been recrawled since the tag was added. Remember that Googlebot visits pages at “varying time periods”, so it is possible the page with the noindex meta tag simply has not been revisited yet. Also, if you block the page with a robots.txt file, Google may never be able to crawl it and therefore never sees the tag. Google adds this advice: if you still see the page in the index, it will be removed from the web index the next time the page is crawled.
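As a hypothetical illustration of the robots.txt problem Google describes, suppose the noindex page lives under /drafts/ (a made-up path). With a rule like the one below, Googlebot never fetches the page, so the noindex tag inside it is never seen, and an already-indexed copy can linger in the index:

```
User-agent: *
Disallow: /drafts/
```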
On the other hand, if you want the page removed quickly, you can use the URL removal request in Google Webmaster Tools, where you can choose exactly which page to remove from the web index.
Action Plan When Google Indexes a Page with a Noindex Meta Tag
• To keep a page out of Google’s web index, set the robots meta tag in the page’s header section to “noindex”. Just remember not to block the page in robots.txt: bots must be able to crawl the page in order to see the “noindex” meta tag. If robots.txt contains Disallow: / and the page is already indexed, it will simply remain indexed even though it carries a “noindex” meta tag.
• Wait until Googlebot revisits the page with the noindex meta tag.
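The two bullets above boil down to one check: the page must carry the noindex tag, and it must not be blocked by robots.txt. Here is a rough Python sketch of that check; the function name and the simplified regex are my own for illustration, not anything from Google, and a real audit tool would use a proper HTML parser rather than a regex:

```python
import re
from urllib import robotparser

def noindex_conflict(robots_txt: str, page_url: str, page_html: str) -> str:
    """Report whether a page's noindex directive is actually visible to a crawler.

    Takes the site's robots.txt contents, the page URL, and the page HTML
    as plain strings so the check needs no network access.
    """
    rp = robotparser.RobotFileParser()
    rp.parse(robots_txt.splitlines())
    crawlable = rp.can_fetch("Googlebot", page_url)

    # Simplified pattern for <meta name="robots" content="...noindex...">
    has_noindex = bool(re.search(
        r'<meta[^>]+name=["\']robots["\'][^>]+content=["\'][^"\']*noindex',
        page_html, re.IGNORECASE))

    if has_noindex and not crawlable:
        return "conflict: robots.txt blocks the crawl, so noindex is never seen"
    if has_noindex:
        return "ok: page is crawlable and noindex can be honored"
    return "no noindex tag found"
```

Running it against a blocked page reproduces exactly the situation my client ran into: the tag is there, but the crawler is never allowed to read it.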
How can the “noindex” meta tag be helpful?
Google recommends that website owners add the noindex directive to pages with thin or duplicate content, since it keeps Googlebot from indexing them. This is especially useful when the website is newly built and the content has not yet been polished. However, what is the point of building a website if you don’t want Google to index it? Adding the noindex tag just to avoid suffering from algorithm updates is not the right move: if those articles remain thin or duplicated, they will still hurt your users’ experience of the site and lead to negative effects. The tag should not be an excuse to keep publishing low-quality content. You won’t need the noindex meta tag if you create content that has substance, is unique, and is worth sharing. Valuable content on your website is your best defense against Google algorithm refreshes.
Here’s another risk of depending on the noindex meta tag. Remember that it is not just Googlebot that visits your site, but humans too. Even if Google never adds the page to its web index, your visitors, who may well be potential customers, can still land on it. So the noindex meta tag can be helpful at times, but it should never be the reason you skip building a quality website. You may escape Googlebot, but not real visitors.
Insights? Share them below.