Google error with robots.txt might prevent site indexing

Shawn Hogan, operator of DigitalPoint.com, had his blog (www.digitalpoint.com/~shawn) removed from Google’s index. He originally wrote about it on November 17th, 2005: “Google Broke up with ME”.

Now there are many reasons why this could happen, which I won’t get into here. However, on March 21st, 2006, Shawn wrote in a little detail that Google was not interpreting his robots.txt file correctly: “Google Not Interpreting robots.txt Consistently”.

This drew the attention of Matt Cutts, an employee of Google. Google then started indexing and sending good old Shawn visitors, and he wrote about it here: “And Google Said Let There be Shawn”.

Basically, the problem arose when a robots.txt file had both a generic User-agent section (User-agent: *) and a Google-specific one (User-agent: Googlebot). If a Googlebot section was present, Google would obey only that section and ignore the generic one entirely, instead of merging the two. Please read Shawn’s awesome post to get more info.
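If you want to see that “specific group wins, no merging” behavior for yourself, here’s a minimal sketch using Python’s standard-library urllib.robotparser, which also applies only the most specific matching group. The paths and crawler names are hypothetical examples, not anything from Shawn’s actual file.

```python
# A minimal sketch, assuming Python 3's standard-library robots.txt parser.
# The paths below are made up for illustration only.
from urllib.robotparser import RobotFileParser

robots_txt = """\
User-agent: *
Disallow: /private/

User-agent: Googlebot
Disallow: /drafts/
"""

parser = RobotFileParser()
parser.parse(robots_txt.splitlines())

# Googlebot matches its own group, so the generic Disallow: /private/
# does not apply to it -- the two groups are NOT merged.
print(parser.can_fetch("Googlebot", "http://example.com/private/page"))  # True
print(parser.can_fetch("Googlebot", "http://example.com/drafts/page"))   # False

# Any other crawler falls back to the generic group.
print(parser.can_fetch("SomeOtherBot", "http://example.com/private/page"))  # False
```

The practical takeaway: if you add a crawler-specific section, repeat any generic rules inside it that you still want that crawler to honor.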

Now, I’m not normally one to summarize somebody else’s blog postings, but I really enjoy Shawn’s random blog. Also, consider this a Public Service Announcement to keep an eye on your robots.txt file.
