How To Extract All URLs From A Page Using PHP

Recently I needed a crawler script that would create a list of all pages on a single domain. As a part of that I wrote some functions that could download a page, extract all URLs from the HTML and turn them into absolute URLs (so that they themselves can be crawled later). Here’s the PHP code.

Extracting All Links From A Page
Here’s a function that will download the specified URL and extract all links from the HTML. It also converts relative URLs to absolute URLs, removes duplicate links and is overall a fine piece of code 🙂 Depending on your goal you may want to comment out some lines (e.g. the part that strips ‘#something’ (in-page links) from URLs).

function crawl_page($page_url, $domain) {
/*  $page_url - page to extract links from
    $domain   - crawl only this domain (and subdomains)
    Returns an array of absolute URLs or false on failure.
*/

/* I'm using cURL to retrieve the page */
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $page_url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER,1);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);

/* Spoof the User-Agent header value; just to be safe */
    curl_setopt($ch, CURLOPT_USERAGENT, 
      'Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)');

/* I set timeout values for the connection and download
because I don't want my script to get stuck 
downloading huge files or trying to connect to 
a nonresponsive server. These are optional. */
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10);
    curl_setopt($ch, CURLOPT_TIMEOUT, 15);

/* This ensures 404 Not Found (and similar) will be 
    treated as errors */
    curl_setopt($ch, CURLOPT_FAILONERROR, true);

/* This might/should help against accidentally 
  downloading mp3 files and such, but it 
  doesn't really work :/  */
    $header[] = "Accept: text/html, text/*";
    curl_setopt($ch, CURLOPT_HTTPHEADER, $header);

/* Download the page */
    $html = curl_exec($ch);
    curl_close($ch);
    
    if(!$html) return false;

/* Extract the BASE tag (if present) for
  relative-to-absolute URL conversions later */
    if(preg_match('/<base[\s]+href=\s*[\"\']?([^\'\" >]+)[\'\" >]/i', $html, $matches)){
        $base_url=$matches[1];
    } else {
        $base_url=$page_url;
    }

    $links=array();
    
    $html = str_replace("\n", ' ', $html);
    preg_match_all('/<a[\s]+[^>]*href\s*=\s*([\"\']+)([^>]+?)(\1|>)/i', $html, $m);
/* this regexp is a combination of numerous 
    versions I saw online; should be good. */
        
    foreach($m[2] as $url) {
        $url = trim($url);

        /* get rid of PHPSESSID, #linkname, &amp; and javascript: */
        $url = preg_replace(
            array('/([\?&]PHPSESSID=\w+)$/i', '/(#[^\/]*)$/i', '/&amp;/', '/^(javascript:.*)/i'),
            array('', '', '&', ''),
            $url);

        /* turn relative URLs into absolute URLs.
           relative2absolute() is defined further down on this page. */
        $url = relative2absolute($base_url, $url);

        // check if in the same (sub-)$domain
        if(preg_match("/^http[s]?:\/\/[^\/]*".str_replace('.', '\.', $domain)."/i", $url)) {
            // save the URL
            if(!in_array($url, $links)) $links[] = $url;
        }
    }
    
    return $links;
}
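
Here’s a quick usage sketch (example.com is just a placeholder domain, not part of the original code):

$links = crawl_page('http://www.example.com/', 'example.com');
if($links === false) {
    echo "Couldn't download the page.";
} else {
    /* print every URL that was found */
    foreach($links as $link) {
        echo $link . "\n";
    }
}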

How To Translate a Relative URL to an Absolute URL
This script is based on a function I found on the web with some small but significant changes.

function relative2absolute($absolute, $relative) {
        $p = @parse_url($relative);
        if(!$p) {
            //$relative is a seriously malformed URL
            return false;
        }
        if(isset($p["scheme"])) return $relative;
        
        $parts=(parse_url($absolute));
        
        if(substr($relative,0,1)=='/') {
            $cparts = (explode("/", $relative));
            array_shift($cparts);
        } else {
            if(isset($parts['path'])){
                 $aparts=explode('/',$parts['path']);
                 array_pop($aparts);
                 $aparts=array_filter($aparts);
            } else {
                 $aparts=array();
            }
           $rparts = (explode("/", $relative));
           $cparts = array_merge($aparts, $rparts);
           foreach($cparts as $i => $part) {
                if($part == '.') {
                    unset($cparts[$i]);
                } else if($part == '..') {
                    unset($cparts[$i]);
                    unset($cparts[$i-1]);
                }
            }
        }
        $path = implode("/", $cparts);
        
        $url = '';
        if(isset($parts['scheme'])) {
            $url = "$parts[scheme]://";
        }
        if(isset($parts['user'])) {
            $url .= $parts['user'];
            if(isset($parts['pass'])) {
                $url .= ":".$parts['pass'];
            }
            $url .= "@";
        }
        if(isset($parts['host'])) {
            $url .= $parts['host']."/";
        }
        $url .= $path;
        
        return $url;
}
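
For illustration, here’s a minimal sketch of how it behaves (the URLs are made up):

/* resolves to http://www.example.com/articles/page2.html */
echo relative2absolute('http://www.example.com/articles/index.html', 'page2.html');

/* an absolute URL is returned unchanged */
echo relative2absolute('http://www.example.com/articles/index.html', 'http://other.example.org/x.html');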

57 Responses to “How To Extract All URLs From A Page Using PHP”

  1. […] By the way, this is not just a theoretical rant. I had this problem with a new project of mine that… well, let’s just say it has to do with del.icio.us and crawling webpages […]

  2. wesley says:

    Thanks for the relative2absolute script, that will come in handy. Also the url regex is nice.

  3. White Shadow says:

    Thanks for the comment. By the way, I think I just spotted a mistake in the relative2absolute function, I’m going to fix it immediately.

  4. Cameron Manderson says:

    Looks good – how about taking the base href into consideration?

  5. White Shadow says:

    Hmm… I actually had to look up documentation for base href.

    I think it would be enough to extract the base URL from $html and use it in place of $page_url when doing the relative to absolute conversion. I’ve modified the function to do that (haven’t tested it though).

    Damn, I just noticed WordPress is messing with the backslashes in my code! I hope I’ve fixed the post for now but I can’t guarantee the regular expressions are displaying correctly.

  6. Mike says:

    Can you make the files available for download? That would give a quick workaround to the backslash problem you’re having.

  7. Mike says:

    After tweaking the script for a few minutes, I found that the section that checks whether the link is in the (sub-)domain always seems to return false for me. So no links ever get returned.

    BTW – Thanks for making this code available. Kudos

  8. White Shadow says:

    I think I’ve fixed the backslashes now.

    This is an early version of the function – my actual app (a del.icio.us linkback counter) uses slightly different code. It extracts the domain name from the URL and compares it with the original domain with the help of this function:

    function extract_domain_name($url){
        if(preg_match('@^(?:http:\/\/)?([^\/]+)@i', $url, $matches)) {
            return trim(strtolower($matches[1]));
        } else {
            return '';
        }
    }
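
    For example (made-up URLs), both of these return 'example.com', so they would compare as equal:

    extract_domain_name('http://example.com/page.html');
    extract_domain_name('example.com/other.html');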
    
  9. Ed says:

    Thanks, it’s a good piece of code, but the portion
    // check if in the same (sub-)$domain
    didn’t work… there is some problem in the regular expression…
    Could anybody fix it & let me know?

  10. White Shadow says:

    Hey Ed,

    It was missing a few backslashes (again!). When I wrote this post there was some kind of problem with my blog because it kept removing backslashes and some other “special” symbols from my posts. It should be fixed now.

    Thanks for letting me know.

  11. Ed says:

    Thanks White Shadow….
    but still it’s giving me a warning:

    Warning: preg_match() [function.preg-match]: Unknown modifier ‘/’

    could you tell me..why it is??!!
    thanks again
    Ed

  12. White Shadow says:

    Are you sure you have used the fixed version? I just tried the code on my local server and it worked fine.

    Check that the $domain parameter you’re passing to the crawl_page() function contains no slashes – it should be a domain name only, like subdomain.domain.com, not a URL like http://subdomain.domain.com/. You can use parse_url() to extract the domain name from an address.
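
    Something like this (just a sketch, using the example domain above):

    $host = parse_url('http://subdomain.domain.com/', PHP_URL_HOST);
    // $host is now 'subdomain.domain.com'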

  13. Ed says:

    Hey!!! Thanks, White Shadow… I was making a mistake…
    I was using ‘/’ in $domain… but now it’s fixed…
    It’s working great…
    THANKS again

  14. Angshuman says:

    This is an excellent function. Congrats..and Thanks a lot

  15. BlackCoder says:

    Thanks for the relative-to-absolute function, but I found a bug in it. It fails with multiple parent-directory references like ../../../Something or ../../
    This is my fix for the problem:

    function constructAbsolutePath($absolute, $relative)
    {
        $p = parse_url($relative);
        if($p["scheme"]) return $relative;
        extract(parse_url($absolute));
        $path = dirname($path);
        if($relative[0] == '/')
        {
            $newPath = array_filter(explode("/", $relative));
        }
        else
        {
            $aparts = array_filter(explode("/", $path));
            $rparts = array_filter(explode("/", $relative));
            $cparts = array_merge($aparts, $rparts);
            $k = 0;
            $newPath = array();
            foreach($cparts as $i => $part)
            {
                if($part == '..')
                {
                    $k = $k - 1;
                    $newPath[$k] = null;
                }
                else
                {
                    $newPath[$k] = $cparts[$i];
                    $k = $k + 1;
                }
            }
            $newPath = array_filter($newPath);
        }
        $path = implode("/", $newPath);
        $url = "";
        if($scheme)
        {
            $url = "$scheme://";
        }
        if($user)
        {
            $url .= "$user";
            if($pass)
            {
                $url .= ":$pass";
            }
            $url .= "@";
        }
        if($host)
        {
            $url .= "$host/";
        }
        $url .= $path;
        return $url;
    }

  16. White Shadow says:

    Okay, I haven’t tested your version, but thanks 🙂

  17. Johan says:

    Wow, exactly what I was looking for.
    It only needed a little tweak to fetch emails as well (don’t worry, I’m no spammer, it’s for an intranet).

  18. Adrian says:

    Nothing works for me. Please give me an example of how to use these functions, e.g. crawl_page(……..); I don’t know if the problem is with my localhost or if the problem is me :)

  19. Kino says:

    Nothing works for me. Please give me an example of how to use these functions, e.g. crawl_page(……..); I don’t know if the problem is with my localhost or if the problem is me :)

    Same problem for me. I use EazyPHP… you?
