Get Google Search Results With PHP – Google AJAX API And The SEO Perspective

If you’ve ever tried to write a program that fetches search results from Google, you’ll no doubt be familiar with the excruciating annoyances of parsing the results and getting blocked periodically. Run a couple hundred queries in a row and bam! – your script is banned until proven innocent by entering a captcha. Even that would provide only a short reprieve, as you’d soon get blocked again.

Luckily there’s an official Google search API that will let you avoid that hassle. In this post you’ll find an example PHP script and a (mainly) SEO-oriented review of the API.

Using the AJAX API in PHP

I must confess that until yesterday I didn’t know you could use the Google AJAX search API in languages other than JavaScript. The documentation didn’t even mention the possibility when the API was first released. Well, it does now, and PHP is among the supported languages. Oh, the joy.

The API is already pretty well documented, so I won’t waste your time with another lengthy tutorial. Instead, here’s a simple example of how you could use it in PHP:

/**
 * google_search_api()
 * Query the Google AJAX Search API
 *
 * @param array $args URL arguments. For most endpoints only "q" (query) is required.
 * @param string $referer Referer to use in the HTTP header (must be valid).
 * @param string $endpoint API endpoint. Defaults to 'web' (web search).
 * @return object|null Decoded response object, or NULL on failure.
 */
function google_search_api($args, $referer = 'http://localhost/test/', $endpoint = 'web'){
	$url = "http://ajax.googleapis.com/ajax/services/search/".$endpoint;
	
	// The API requires a protocol version argument; 1.0 is the only one available right now
	if ( !array_key_exists('v', $args) )
		$args['v'] = '1.0';
	
	$url .= '?'.http_build_query($args, '', '&');
	
	$ch = curl_init();
	curl_setopt($ch, CURLOPT_URL, $url);
	curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
	// note that the referer *must* be set
	curl_setopt($ch, CURLOPT_REFERER, $referer);
	$body = curl_exec($ch);
	curl_close($ch);
	
	if ( $body === false )
		return null; // the HTTP request itself failed
	
	// decode and return the response (NULL if the body isn't valid JSON)
	return json_decode($body);
}

$rez = google_search_api(array(
	'q' => 'antique shoes',
));

print_r($rez);
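The $endpoint parameter lets you query the API’s other search verticals as well (news, images and so on). I’ve only really tested the web endpoint myself, but assuming the other verticals accept the same basic arguments, a call like this should work the same way:

// Untested sketch: query the news vertical instead of web search
$news = google_search_api(array('q' => 'antique shoes'), 'http://localhost/test/', 'news');
print_r($news);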

That’s it for the programming part.

So should we really throw away our lovingly crafted SERP scrapers and embrace the “official” API? Perhaps not. There are some peculiar things I’ve noticed after trying out the new API.

The Good

Let’s start with the positive aspects. First, it looks like you can indeed safely use the API without getting blocked – I successfully ran about 1800 API queries in ~2 hours. Due to my crappy connection I was unable to test how it would behave if you turn it up to eleven and send hundreds of requests per second, but the rate limiter is definitely more lenient on API users than on plain SERP scrapers. This is a major plus for people who don’t like throttling their software to one request per minute or hunting for working proxies to get around bans.

The API also makes it easy to parse the results. All queries return JSON-encoded data, so you just json_decode() it and go. No need to invent complicated regexps that must be rewritten every time Google changes the HTML structure of the search results page.
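For example, to pull just the URLs and titles out of the decoded object, something like this should do (the property names below are the ones I saw while testing; run print_r() on the response if yours looks different):

// Iterate over the decoded results and print title + URL
$rez = google_search_api(array('q' => 'antique shoes'));

if ( $rez && isset($rez->responseData->results) ) {
	foreach ( $rez->responseData->results as $result ) {
		// titleNoFormatting = title without <b> tags, unescapedUrl = plain result URL
		echo $result->titleNoFormatting . "\n";
		echo $result->unescapedUrl . "\n\n";
	}
}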

The Bad

Of course, with a cliché megacorporation like Google it’s never all fun and games. You can only get 8 search results at a time, and no more than 64 results in total for any particular keyword. Whether this is a problem depends on what you intend to do with the API, but it’s certainly an unpleasant limitation.
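If you do need all 64 results, you have to page through them yourself, eight at a time. Here’s a rough, untested sketch of how that could be done with the helper function above, using the 'rsz' (result set size) and 'start' (result offset) arguments:

$all_results = array();

// 8 results per request ('rsz' => 'large'), 8 requests = the 64-result maximum
for ( $start = 0; $start < 64; $start += 8 ) {
	$rez = google_search_api(array(
		'q'     => 'antique shoes',
		'rsz'   => 'large',
		'start' => $start,
	));
	
	if ( !$rez || empty($rez->responseData->results) )
		break; // no more results (or the request failed)
	
	$all_results = array_merge($all_results, $rez->responseData->results);
}

echo count($all_results) . " results collected\n";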

The really peculiar – nay, insidious – thing is how the search results returned by the API differ from normal SERPs. A site that is #10 in a normal Google search may suddenly turn up as #1 in the API results. The typical #5 result may be moved to the second page. Basically, the API results look like they’ve been shuffled around a bit – the same URLs are returned but in slightly different order. Also, the “estimated result count” provided by the API is consistently much lower than what a normal search shows. All this makes the API useless for rank checking and similar SEO applications.

According to my tests, you can’t just write off these discrepancies as a side effect of geo-targeting.

It Depends

Overall, the API is either great or it kind-of sucks, depending on what you want to do with it.

At the risk of sounding like a conspiracy theorist, I must say the API seems to be cleverly engineered to be useful for “normal” purposes and somewhat useless for SEO. After all, only SEO workers really need accurate ranking data and more than 64 results per keyword phrase. Typical search engine users rarely move beyond the first page of results, so the limitations don’t hurt them. The various mashup makers that cater to the common user are also unaffected. It’s only the SEOs (and the rare academic researcher) that would be dissatisfied with the imposed constraints.

Of course, I’m sure you can still imagine a few interesting uses for the API 😉


86 Responses to “Get Google Search Results With PHP – Google AJAX API And The SEO Perspective”

  1. Alan Williams says:

    This was a very informative article. I will read your blog often.

  2. Areeb says:

    Yea the AJAX API is a pain in the butt because of the huge variation in search result counts. For my software (SEnuke) what I did was I bought a huge proxy list of around 200 proxies (not too expensive) and just switch the proxy every time Google throws back the “automated search” message. Make sure to wipe out the cookie that you send to Google for the search request every time you flip the proxy on them. Works a charm! 😉 Of course there is a little bit of monetary investment involved, but it can work real well if you are desperate for this info.

  3. Cameron Logie says:

    Guess you could achieve the same thing for free if you set up Tor to anonymise your IP… until all the endpoints get blocked. 🙂

    Cam.

  4. White Shadow says:

    I’ve actually tried using it, but in my experience I get blocked even faster with Tor. Switching identities helps for a while, but the new identity is also soon blocked. I’m guessing this is because lots of other people also had the same bright idea and try to use Tor for Google scraping.

  5. kana says:

    This is what I’ve been searching for the past 2 weeks. Thanks.

  6. Umar says:

    Why does this only give 4 results??

    Please help

  7. White Shadow says:

    Read the documentation. The API only returns 4 or 8 results per call.

  8. Umar says:

    Thank you for quick reply!

    How can we get 8 instead of 4? What is that parameter which does this?

  9. White Shadow says:

    I surmise you still didn’t read the documentation.

    Anyway, append “rsz=large” to the request URL to get 8 results.
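
    With the helper function from the post, that just means adding it to the $args array:

    $rez = google_search_api(array(
    	'q' => 'antique shoes',
    	'rsz' => 'large', // 8 results per request instead of the default 4
    ));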

  10. Umar says:

    ah,

    My fault!

    Thanks for quick help.

  11. Nick says:

    Does anyone know whether the results returned from the API would generate revenue from pay per click?

  12. White Shadow says:

    No, I don’t think they generate PPC revenue.

  13. Nick says:

    Pity… thanks to all for the great contributions – got the code up and running in minutes.

  14. Sarah says:

    If the estimated result count is lower than it is on the web, then the API is useless for me. So there is still no good solution for people like me. The API is inaccurate, and when scraping I’m constantly getting banned or having to wait patiently……..

  15. White Shadow says:

    True, but there isn’t much else one can do without resorting to using massive amounts of anonymous proxies.

  16. Oren says:

    Thank you so much!!! You saved me tons of time!

  17. Derek says:

    Very nice article man! This information has helped me a bunch. My brother and I are just beginning some work on a mash up website and I can’t wait to apply this. Thanks again!

  18. op says:

    how i get the estimated result count to keep variable?

  19. White Shadow says:

    I’m afraid I don’t understand the question.

  20. Nilay says:

    This may be related to PHP, but is there a way to get just the URLs? Right now, I get things beginning with

    “stdClass Object ( [responseData] => stdClass Object ( [results] => Array ( [0] => stdClass Object ( [GsearchResultClass] => GwebSearch [unescapedUrl] =”

    and it goes on to display a visible url, a cached url, etc. I guess I need the list of visible urls. Hope you can help.
