You’re too busy for SEO. You’ve got a hundred things in the queue to get your latest site off the ground, and time spent optimizing for a robot is at the bottom of the list. That’s fair.

Search engines shouldn’t require us to do anything. Just build a good website, write some content, get incoming links (so people can find you!), and, if it’s interesting enough, it should appear on the first page. Sounds reasonable, right?

Well, unfortunately, neither of those things is true. Here’s the story of how we discovered we weren’t appearing in Google searches for ‘Bindle’, and the journey to find a solution. Along the way we learned the importance of restraint in keeping the GoogleBot happy.

The problem: where are we?

Just three months ago, when you tried to find Bindle via a Google search, we were absent from the results… until page 38. The only hint that our site existed was a passing reference from blogs that had featured us. Bing, on the other hand, proudly displayed us at #6 in its results. What was going on?

Where are we?

Two relatively minor blogs link to our homepage, but there's no sign of the homepage itself.

We're there on Bing

On Bing, on the other hand, we appear at #6.


Webmaster Tools shows we are result #380

Here Google's Webmaster Tools says we should appear on page 38, although I never actually found us there.


After remedying most of the obvious SEO missteps documented widely on the internet, our rank was unchanged. For a creator, it’s absolutely frightening to feel that your product is out of your control. So you might imagine we were both stunned and worried that somehow the almighty GoogleBot had forgotten about Bindle altogether. Could this be the infamous Panda update in action?

Loading time

That left the Bindle team delving into the smoke-and-mirrors optimizations that are far from well defined. The first course of action was to resolve any issues with our loading times.

Google cares about the speed of the internet: pages need to download quickly. We thought our load times were already fairly fast. Nevertheless, the challenge was undertaken, and we now fully utilize caches, CDNs, gzip, everything you can think of. Did it fix the problem? Nope, but the upside is that Bindle is faster than ever.
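To make that concrete, here’s a sketch of what “caches, CDNs, gzip” can look like at the web-server level. This is a hypothetical nginx fragment for illustration, not Bindle’s actual configuration (the paths and values are assumptions):

```nginx
# Compress text responses so they download faster over the wire
gzip on;
gzip_types text/css application/javascript application/json;
gzip_min_length 1024;   # don't bother compressing tiny responses

# Long-lived cache headers for fingerprinted static assets, so browsers
# and CDN edge nodes can serve them without re-fetching from the origin
location /assets/ {
    expires 1y;
    add_header Cache-Control "public, immutable";
}
```

The key design choice is to cache aggressively only on assets whose filenames change when their contents do, so a year-long `expires` can never serve stale code.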

Duplicate Content

Google doesn’t like it when more than one URL serves similar content. The previous iteration of Bindle had invitation links that rendered custom copy depending on the invite. We couldn’t have these invites showing up in the index, so we needed a link[rel=canonical]; otherwise our friendly gBot might think we were up to no good.
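For instance, an invite page (the `/invite/abc123` URL and the `example.com` domain below are placeholders, not Bindle’s real paths) can point crawlers back at the page it duplicates with a canonical link in its `<head>`:

```html
<head>
  <!-- Tells crawlers this URL is a variant of the homepage, so the
       invite link itself shouldn't be indexed as a separate page -->
  <link rel="canonical" href="https://example.com/">
</head>
```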

Bindle has two web-accessible servers: staging and production. We assumed staging was obscure because there were no inbound links, sitemaps or anything else that noted its existence. So when we learned that our staging server had somehow been found and indexed by Google, it was clear that a staging-specific robots.txt was in order.
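A staging-specific robots.txt is tiny; serving something like this (only from the staging server) asks all crawlers to stay out:

```
# robots.txt served only by the staging server
User-agent: *
Disallow: /
```

One caveat worth knowing: robots.txt is advisory and stops future crawling, so pages that are already indexed may also need a noindex meta tag or a removal request before they disappear from results.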

Did either of these solutions fix it? Nope, but we can rest easy knowing we’ve resolved our duplicate content problem.

Keyword limit culprit

We stumbled upon the solution serendipitously. Without informing us of the fact (or why), the gBot had decided we were spamming the keyword “bindle” and that it would be best not to show us in the results for any query involving it. Which, obviously, made it kind of hard to find us.
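Google doesn’t publish a threshold for what counts as stuffing, but checking how often a keyword appears on a page is easy to script before deleting anything. A minimal sketch (the sample text and the simple word-ratio “density” metric are illustrative assumptions, not anything Google documents):

```python
import re


def keyword_density(text, keyword):
    """Fraction of words in `text` that are `keyword` (case-insensitive)."""
    words = re.findall(r"[a-z']+", text.lower())
    if not words:
        return 0.0
    return words.count(keyword.lower()) / len(words)


# Hypothetical over-optimized homepage copy
page = "Bindle is the best bindle for your bindle needs. Try Bindle today."
print(f"{keyword_density(page, 'bindle'):.0%}")  # 4 of 12 words: 33%
```

Running this over each revision of the homepage makes the “mindless deletions” a little less mindless: you can watch the ratio fall as you trim repetitions.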

After some fairly mindless deletions of the word “bindle” from the homepage, we achieved the holy grail:

There we are! #6 baby

Finally, after some spurious deletions of ‘bindle’ from the homepage, we appear in our rightful spot at #6 in Google!

Tom Coleman

Co-creator, searching for simplicity, quality and elegance in technology, products and code.


3 Responses to “When the Panda strikes — Why you can’t afford to ignore SEO”

  1. Joe Reynolds says:

    The Panda update completely devastated my traffic and the traffic of some of my clients. After about 9 months I was finally able to recover my traffic to pre-Panda levels. The main things I did were remove any internal duplicate content using the rel=”canonical” tag and rewrite any external duplicate content (I have an ecommerce website and was using the manufacturer’s default product descriptions). The traffic increase didn’t come overnight but gradually increased with each Panda “refresh.” It was a painstaking process to get my traffic back, but in the end it was worth it. There are 100s if not 1,000s of articles out there about how to recover from Panda… some good, some not so good. If anyone’s interested, here’s the guide that helped me personally recover from the Panda penalty:

  2. Richard Haussmann says:

    Hi Tom-
    How did you figure out it was keyword stuffing? I would figure that Panda would have hit you on duplicate content and the canonical and robots.txt would have covered it.

    • Tom Coleman says:

      Based on the timing of our reappearing in the index. We don’t know for sure (this is part of the black art), but we made the duplicate-content changes weeks before considering keyword stuffing without seeing any improvement, whereas the keyword-stuffing fix seemed to have a pretty quick effect.
