
Google Webmaster Accounts – Your Permanent Record

August 24, 2010


In case it hadn’t crossed your mind, your Google Webmaster Tools account (and most likely your other Google services) is a permanent record of your activity.

Delete a site on Webmaster Tools and it’s not really gone. Don’t believe me? Try attaching your Google Account to show stats for a site on somewhere like Digital Point and you’ll magically see site names that you deleted long ago.

What do you think happens when you keep doing dodgy stuff and all your accounts are connected on Google Webmaster Tools (or Analytics)?

I wonder how long it will be before there’s a market for Google Accounts in “good standing”. I’m certainly not the only person who’s noticed that launching almost identical sites on different accounts produces noticeably different results.

Like this article? Then subscribe to the feed!



Posted by Mark at 9:55pm
12 Comments »

A boring post on SEO hypocrisy

April 23, 2010

Like some kind of super-hero (or super-villain?), I live something of a double life, blogging on both here (my personal blog) and working and blogging at a search marketing agency that I just gave an anchor text link to.

Yesterday, I posted an article highlighting some rather high-profile link selling that I don’t agree with.

I received a message (or actually a comment that I didn’t approve because it didn’t fit in with my evil propaganda regime) saying that I was a hypocrite. On the one hand, I blog here about blackhat SEO and on the other, I blog somewhere else and denounce link buying and say it’s improper practice.

I don’t feel this is the case, but I don’t like to duck or ignore questions, as I think it’s bad form – so I’m more than prepared to handle any questions here, on my personal blog, if you want to give me a good slagging off.

Why do you blog about blackhat on Digerati?

1) There are already a billion white hat “best practice” SEO blogs, which is great – but I don’t see the point in me (personally at least) adding to this fray. There are very few blogs that give detailed guides on blackhat topics or effective guides to making money online. Yes, I can guarantee you the “Get Rich With BlackHat SEOs” E-Book you bought is a pile of crap.

2) I find search engines really bloody interesting. I like understanding how they work – and sometimes the best way to do that is to try turning some of the dials to 11 and see what impact it has on search results. A lot of effective white-hat knowledge and white-hat techniques can be derived from what might be deemed “blackhat experiments”. So, by knowing that “x tactic” or “x type of links” produces great results, you can work on getting these types of links into your white-hat strategies. The devil is in the detail with SEO, and experimentation can prove the key to success: knowing exactly which metrics can tip the balance in your favour.

This is almost a “reverse engineering” method of trying to fathom which techniques work well and which work great; and of course, you have to understand that correlation does not equal causation.

[Chart: Pirates vs. Global Warming]
It really doesn’t

So this method of collection, as Ryan will constantly remind me, is flawed.

Increasingly, there are more reliable ways to make these ranking correlations with data from people like MajesticSEO and SEOmoz’s Linkscape. (SEOmoz actually did this really cool post on PageRank correlation recently)

So in some ways, I’ve been doing fewer experiments and more analysis of existing data – perhaps the reason for fewer posts in the last 12 months. My goal here isn’t to be a blackhat SEO or a whitehat SEO; it’s to learn and become a great SEO – which is what everyone should be doing.

3) Blackhat SEO is fun, just stay away from cracking – that’s not SEO, that’s just you being a very naughty boy (or girl (: )

Information I give on this blog and advice I give to people that pay
There’s a big difference between these two. On this blog, I try to give people information about various SEO techniques that they may not have come across before – or tutorials on how to do neat/interesting things. If any of these things stray into techniques that search engines currently don’t like, I put them in a “blackhat SEO” category.

I think I’m being clear on this: if it’s filed under “blackhat SEO” and you do it, there is a risk of being slapped around by Google (or perhaps Bing or Yahoo). I’m not sure how much cross-over in readership there is, but most of the feedback I get on this blog is positive and I am helping individuals learn, experiment and sometimes (I have the e-mails to prove it!) make some money.

If you come to me, with a “real” business, a brand – and you have long-term ambitions to make money online, interact and engage your customers, I’m not going to advise you to do anything blackhat, because while it may be effective in the short term, the results you get are not compatible with traditional business models.

If you can make money sitting at home with affiliates, tricking Google into ranking your website better – that’s fine: your overheads allow you to be flexible and rebuild from the ground up in a short amount of time if the worst happens. I’ve written about lone SEOs vs Big Business in the SEO Guerilla Warfare post previously.

If you’re a business, you employ people, you work hard to be the best at what you do, you invest money in training, premises, stock and a reputation that puts value to a brand, getting banned in Google is really going to piss on your parade.

Why don’t you like paid links?
I’m well aware of the underground (and sometimes not) links trade on the web – and by links, I am specifically referring to the trade of links that are aimed to influence rankings. Most of the highly competitive niches are full of link buyers – I’ve got no problem with this.

While link buying does usually positively affect rankings, my specific grumble is that it is not scalable. Google has its hands tied somewhat, as banning well-known brands would lower result quality for the end user, so it tackles the problem by trying to devalue the links from those selling them. This means that you might be spending vast amounts of money on links that are providing you with no SEO value – a complete waste, apart from a trickle of referral traffic.

Why bring up this specific case?
That being said – I don’t mind people buying and selling links. If anything, it gives me the long-term advantage of not having the overheads and wastage. The problem is when people (whoever they may be – some SEOs, brokers, etc.) try to pass link selling off as a non-blackhat strategy that carries no risk.

To be very clear, take Google’s own words:

“Links purchased for advertising should be designated as such. This can be done in several ways, such as: adding a rel=”nofollow” attribute to the a tag”

“Google works hard to ensure that it fully discounts links intended to manipulate search engine results, such as excessive link exchanges and purchased links that pass PageRank”

I really don’t think that leaves too much to the imagination.

In summary:
The information on this blog isn’t the same information you’d receive from me in a professional capacity. Digerati is not an SEO agency and I don’t do work for clients here – don’t try this at home.


Posted by Mark at 9:18pm
17 Comments »

Autostumble Source Code Released

February 01, 2010

As long-time readers will know, I released Autostumble back in April 2008 (wow, yes, coming up on two years ago). At the time, it absolutely wtfpwned StumbleUpon and our network delivered just under half a million vote swaps.

The program went through a few versions, having various features added along the way and multiple fixes as StumbleUpon desperately tried to keep its pants up. Version 4 (fixed in such a hurry it came out of the factory with “v3” still plastered on) was the final incarnation of the application.

Unfortunately, the time taken to monitor the network, answer support e-mails and keep the fixes coming was consuming more time than I ever intended. I worked with another dev to get an online version up, but that never got to a stable launch, so the whole thing basically gathered dust.

So three things:

1) I’ve released the Autostumble source code for download for anyone that wants to have a crack at reverse engineering it.

2) For those that can’t code: there is actually a really nice copy of Autostumble which works, and which is currently doing pretty well from what I can see. It also has a function to add reviews, which AutoStumble never had. So, if you still want to swap some stumbles, that’s the place to go.

3) I talked to the guy that runs it, and he has arranged this voucher code for old Autostumble users: AutoStumbleOld

Enter that code and you’ll get a pretty nice $20 off.

I’ve been working hard with the development team at the full-time SEO company I work at to develop some really nice apps for the SEO world, which shall be surfacing in Q2 of 2010. It’s really surprising what you can achieve when you have a team of full-time developers to help.

Anyway for now:

Coders can get Autostumble source code.

Marketers can get a working Autostumble type program.

This is all. [insert vague promise of posting more here].


Posted by Mark at 10:34pm
6 Comments »

Does Blackhat SEO still work?

December 07, 2009

If anyone tells you blackhat SEO doesn’t work, get them to comment on this.


Posted by Mark at 12:30pm
13 Comments »

How to make a Twitter bot with no coding

September 23, 2009

As usual, lazy-man post overview:

With this post you can learn to make a Twitter bot that will automatically retweet users talking about keywords that you specify. You can achieve this with (just about) no coding whatsoever.

Why would you want to do this? Lots of reasons, I guess, ranging from spammy to fairly genuine. Normally, giving somebody a ReTweet is enough to make them follow you, and it keeps your profile active, so you can semi-automate accounts and use it as an aid for making connections. That or you can spam the sh*t out of Twitter, whatever takes your fancy really.

Here we go.

Step 1: Make your Twitter Bot account
Head over to Twitter.com and create a new account for your bot. You shouldn’t really need much help at this stage. Try to pick a nice name and a cute avatar. Or something.

Step 2: Find conversations you want to Retweet
Okay, we’ve got our Twitter account and we’re going to need to scan Twitter for conversations to possibly retweet. To do this, we’re going to use Twitter Search. In this example, we’re going to search for “SEO Tips”, but to stop our bot retweeting itself you want to add a negative keyword of your bot name. So search for SEO Tips -botname, like this:

[Screenshot: Twitter search]
So my bot is called “DigeratiTestBot”. Hit search now, muffin.



Step 3: Getting the feed
The next thing you need to do is get the feed of the results, which isn’t quite as simple as you’d think. Twitter, being a bit of a prude, doesn’t like bots and services like Feedburner or Pipes interacting with it directly, so you’re going to need to repurpose the feed or it’s game over for you.

After you’ve done your search, you need to get the feed location (top right), so copy the URL of the “Feed for this query” link.

[Screenshot: feed link location]
Store that in a safe place, we’ll need it in a second.



Step 4: Making the feed accessible
Okay, so there’s a teeny-tiny bit of code, but this is all, I promise! You’re going to need to republish the feed so it can be accessed later on, but don’t worry – it’s a piece of cake. All we’re going to do is screen scrape the whole feed results page onto our own server.

Make a file called “myfeed.php” and put this in it:

<?php
// myfeed.php: fetch the Twitter search feed server-side and echo it back,
// so Yahoo Pipes can read it from our own domain instead of Twitter's.
$url = "http://search.twitter.com/search.atom?q=seo+tips+-yourbotname";
$ch = curl_init($url);
curl_setopt($ch, CURLOPT_RETURNTRANSFER, true); // return the response rather than printing it immediately
$curl_scraped_page = curl_exec($ch);
curl_close($ch);
echo $curl_scraped_page; // republish the feed verbatim
?>

The only bit you need to change is:

“$url = “http://search.twitter.com/search.atom?q=seo+tips+-yourbotname”;”

which needs to be replaced with the Twitter feed URL that we carefully saved and stored in a safe place earlier. If you’ve already lost that URL, please proceed back to Step 3 and consider yourself a fail.

So, having completed this and uploaded your myfeed.php to your domain, you can now access the real-time Twitter results feed by accessing http://www.yourdomain.com/myfeed.php.

Step 5: Yahoo Pipes!
Now comes the fun bit: we’re going to set up most of the mechanism for our bot in Yahoo Pipes. You’ll need a Yahoo account, so if you don’t have one, get one, log in and click “Create a Pipe” at the top of the screen.

This will give you a blank canvas, so let’s MacGyver us up a god damn Twitter Bot!

Add “Fetch Feed” block from “Sources”
Then in the “URL” field, enter the URL of the feed we repurposed, http://www.yourdomain.com/myfeed.php.

[Screenshot: Fetch Feed block]
Add “Filter” block from “Operators”
Leave the settings as “Block” and “all” then add the following rules:
item.title CONTAINS RT.*RT
item.title CONTAINS @
item.twitter:lang DOES NOT CONTAIN EN


(You click the little green + to add more rules). Once you’ve done that, drag a line between the bottom of the “Fetch Feed” box and the top of the “Filter” box to connect them. Hey presto.
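If you’re curious what those Filter rules actually do, here’s a rough Python equivalent. The dict fields are hypothetical stand-ins for Pipes’ item.title and item.twitter:lang, and I’ve treated each rule as independently blocking an item (which is the behaviour you want here):

```python
import re

def keep(item):
    """Return True if the tweet survives the Filter block's rules."""
    title = item["title"]
    if re.search(r"RT.*RT", title):             # already retweeted at least twice
        return False
    if "@" in title:                            # skip replies and mentions
        return False
    if item.get("lang", "en").lower() != "en":  # skip non-English tweets
        return False
    return True

results = [
    {"title": "Five great SEO tips", "lang": "en"},
    {"title": "RT cool RT stuff", "lang": "en"},
    {"title": "@DigeratiTestBot nice bot", "lang": "en"},
    {"title": "Consejos de SEO", "lang": "es"},
]
print([r["title"] for r in results if keep(r)])  # only the first survives
```

The idea is simple: anything that’s already a chain retweet, a reply, or not in English gets binned before it ever reaches your bot.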

[Screenshot: Filter block connected to Fetch Feed]
Add “Loop” block from “Operators”

Add a “String Builder” from “String” and drag it ONTO the “Loop” block you just added


In the String Builder block you just put inside the Loop block, add these 3 items:
item.author.uri
item.y:published.year
item.content.content

Check the “assign results to” radio button and change this to item.title

Great, now drag a connection between your Filter and Loop blocks. Should look like this now:

[Screenshot: Loop and Filter blocks connected]
Add “Regex” block from “Operators”
Add these two rules:
item.title REPLACE http://twitter.com/ WITH RT @
item.title REPLACE 2009 WITH (space character)

Extra points for anyone who writes “(space character)” instead of using a space. Also don’t miss the trailing slash from twitter.com/
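If you’re wondering why we mashed the author URI, the year and the tweet text together in the String Builder, the Regex step is the payoff: the profile URL becomes “RT @username” and the year acts as a disposable separator. A rough Python sketch of what the two blocks do together (the field names are hypothetical stand-ins for Pipes’ item.author.uri, item.y:published.year and item.content.content):

```python
import re

def build_retweet(item):
    # String Builder: concatenate author URI, published year and tweet text into the title
    title = item["author_uri"] + str(item["year"]) + item["content"]
    # Regex block: turn the profile URL into "RT @username",
    # then replace the year with a space so the parts separate cleanly
    title = re.sub(r"http://twitter\.com/", "RT @", title)
    title = re.sub(r"2009", " ", title)
    return title

item = {"author_uri": "http://twitter.com/DigeratiTestBot",
        "year": 2009,
        "content": "Ten great SEO tips"}
print(build_retweet(item))  # RT @DigeratiTestBot Ten great SEO tips
```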



Drag a connection between Loop Block and Regex Block, then a connection between Regex and Pipe Output blocks.

Finished! Should look something like this:

[Screenshot: finished pipe]
All you need to do now is Save your pipe (name it whatever you like) and Run Pipe (at the top of the screen).

Once you run your pipe, you’ll get an output screen something like this:

[Screenshot: pipe output]
What you need to do here is save the URL of your pipe’s RSS feed and keep it in a safe place. If you didn’t lose your RSS feed from Step 3, then I’d suggest keeping it in the same place as that.



Step 6: TwitterFeed
Almost there, comrades. All we need to do now is whack our feed into our TwitterBot account, which is made really easy with TwitterFeed.com. Get yourself over there and sign up for an account.

To set up your bot in TwitterFeed:

1) I suggest not using OAuth, as it will make it easier to use multiple Twitter accounts. Click the “Having Oauth Problems?” link, enter the username and password for your TwitterBot account and hit “test account details”.

2) Name your feed whatever you like and then enter the URL of your Yahoo Pipes RSS that we carefully saved earlier, then hit “test feed”.

3) Important: click “Advanced Settings”; we need to change some stuff here:

Post Frequency: Every 30mins
Updates at a time: 5
Post Content: Title Only
Post Link: No (uncheck)

Then hit “Create Feed”

[Screenshot: TwitterFeed advanced settings]
All done!

Have fun and please, don’t buy anything from those losers who are peddling $20 “automate this” Twitter scripts. If you really need one, just make it yourself, or if you don’t know how, leave a comment here and I’ll show you how.

Bosh.


Posted by Mark at 10:47pm
115 Comments »