How To Extract All URLs From A Page Using PHP

Recently I needed a crawler script that would create a list of all pages on a single domain. As a part of that I wrote some functions that could download a page, extract all URLs from the HTML and turn them into absolute URLs (so that they themselves can be crawled later). Here’s the PHP code.

Extracting All Links From A Page
Here’s a function that will download the specified URL and extract all links from the HTML. It also translates relative URLs into absolute URLs, tries to remove duplicate links, and is overall a fine piece of code 🙂 Depending on your goal you may want to comment out some lines (e.g. the part that strips ‘#something’ (in-page links) from URLs).

function crawl_page($page_url, $domain) {
/* $page_url - page to extract links from, $domain -
    crawl only this domain (and subdomains).
    Returns an array of absolute URLs or false on failure. */

/* I'm using cURL to retrieve the page */
    $ch = curl_init();
    curl_setopt($ch, CURLOPT_URL, $page_url);
    curl_setopt($ch, CURLOPT_RETURNTRANSFER, 1);
    curl_setopt($ch, CURLOPT_FOLLOWLOCATION, 1);

/* Spoof the User-Agent header value; just to be safe */
    curl_setopt($ch, CURLOPT_USERAGENT,
      'Mozilla/4.0 (compatible; MSIE 5.01; Windows NT 5.0)');

/* I set timeout values for the connection and download
because I don't want my script to get stuck
downloading huge files or trying to connect to
a nonresponsive server. These are optional. */
    curl_setopt($ch, CURLOPT_CONNECTTIMEOUT, 10);
    curl_setopt($ch, CURLOPT_TIMEOUT, 15);

/* This ensures 404 Not Found (and similar) will be
    treated as errors */
    curl_setopt($ch, CURLOPT_FAILONERROR, true);

/* This might/should help against accidentally
  downloading mp3 files and such, but it
  doesn't really work :/  */
    $header[] = "Accept: text/html, text/*";
    curl_setopt($ch, CURLOPT_HTTPHEADER, $header);

/* Download the page */
    $html = curl_exec($ch);
    curl_close($ch);
    if(!$html) return false;

/* Extract the BASE tag (if present) for
  relative-to-absolute URL conversions later */
    if(preg_match('/<base[\s]+href=\s*[\"\']?([^\'\" >]+)[\'\" >]/i', $html, $matches)){
        $base_url = $matches[1];
    } else {
        $base_url = $page_url;
    }

    $links = array();
    $html = str_replace("\n", ' ', $html);
    preg_match_all('/<a[\s]+[^>]*href\s*=\s*([\"\']+)([^>]+?)(\1|>)/i', $html, $m);
/* this regexp is a combination of numerous
    versions I saw online; should be good. */
    foreach($m[2] as $url) {
        $url = trim($url);
        /* get rid of PHPSESSID, #linkname, &amp; and javascript: */
        $url = preg_replace(
            array('/([\?&]PHPSESSID=\w+)$/i', '/(#[^\/]*)$/i', '/&amp;/', '/^(javascript:.*)/i'),
            array('', '', '&', ''),
            $url
        );
        if($url != '') {
        /* turn relative URLs into absolute URLs.
          relative2absolute() is defined further down
          below on this page. */
            $url = relative2absolute($base_url, $url);
            // check if in the same (sub-)$domain
            if(preg_match("/^http[s]?:\/\/[^\/]*".str_replace('.', '\.', $domain)."/i", $url)) {
                //save the URL
                if(!in_array($url, $links)) $links[] = $url;
            }
        }
    }
    return $links;
}
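Once both functions are loaded, usage is straightforward. Here's a minimal sketch; the URL and domain are placeholders, not values from the post:

```php
/* Hypothetical usage - example.com is a placeholder.
   Assumes crawl_page() and relative2absolute() are defined as above. */
$links = crawl_page("http://example.com/", "example.com");

if ($links === false) {
    echo "Download failed.\n";
} else {
    foreach ($links as $link) {
        echo $link . "\n";
    }
}
```

Each returned URL is absolute, so it can be fed straight back into crawl_page() to crawl the site recursively.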

How To Translate a Relative URL to an Absolute URL
This script is based on a function I found on the web with some small but significant changes.

function relative2absolute($absolute, $relative) {
    $p = @parse_url($relative);
    if(!$p) {
        //$relative is a seriously malformed URL
        return false;
    }
    //if $relative is already an absolute URL, return it as-is
    if(isset($p["scheme"])) return $relative;

    $parts = parse_url($absolute);

    if(substr($relative, 0, 1) == '/') {
        //$relative is relative to the domain root
        $cparts = explode("/", $relative);
        array_shift($cparts);
    } else {
        //$relative is relative to the current directory
        if(isset($parts['path'])) {
            $aparts = explode('/', $parts['path']);
            array_pop($aparts);
            $aparts = array_filter($aparts);
        } else {
            $aparts = array();
        }
        $rparts = explode("/", $relative);
        $cparts = array_merge($aparts, $rparts);
        //resolve "." and ".." path segments
        foreach($cparts as $i => $part) {
            if($part == '.') {
                unset($cparts[$i]);
            } else if($part == '..') {
                unset($cparts[$i]);
                unset($cparts[$i-1]);
            }
        }
    }
    $path = implode("/", $cparts);

    //reassemble the URL from its components
    $url = '';
    if($parts['scheme']) {
        $url = "$parts[scheme]://";
    }
    if(isset($parts['user'])) {
        $url .= $parts['user'];
        if(isset($parts['pass'])) {
            $url .= ":".$parts['pass'];
        }
        $url .= "@";
    }
    if(isset($parts['host'])) {
        $url .= $parts['host']."/";
    }
    $url .= $path;
    return $url;
}
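A few quick sanity checks of the function above; example.com is just a placeholder domain:

```php
/* Assumes relative2absolute() from above is loaded; example.com is a placeholder */
echo relative2absolute('http://example.com/a/b/page.html', 'image.gif') . "\n";
// http://example.com/a/b/image.gif

echo relative2absolute('http://example.com/a/b/page.html', '/image.gif') . "\n";
// http://example.com/image.gif

echo relative2absolute('http://example.com/a/b/page.html', '../image.gif') . "\n";
// http://example.com/a/image.gif

// already-absolute URLs are returned unchanged
echo relative2absolute('http://example.com/a/b/page.html', 'http://other.example.com/x.gif') . "\n";
// http://other.example.com/x.gif
```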
57 Responses to “How To Extract All URLs From A Page Using PHP”

  1. James says:

    I did that and the weird thing is the array outputs nothing. Just blank when using:

    print_r(crawl_page("", ""));
  2. White Shadow says:

    You could put some echo/print_r statements in the crawl_page function to see where and why it fails. For example, I’d put one after curl_exec to display the loaded HTML, and a print_r($m); after the preg_match_all() call.

  3. Ravi says:

    cud u plz help me by teling how to use these functions in my php page.
    i’m naive to all this.
    but i need it for my project.. plz tell

  4. Neil Trigger says:

    I’m trying to work out how to do this for the whole site… Currently I have the script working with 2 simple variables, but I need this function to work within a preg-replace. most of what I want is working, but finding the absolute path is vital to make sure the output of CSS layouts works properly.

    I’m currently doing this:
    $page = preg_replace($pattern, $replacement, $page);

    How do I add your function in there?

  5. White Shadow says:

    I don’t think there is an easy way to do that. The function isn’t really suited for such use; you’d probably have to figure out how it works and write a custom function for your specific situation.

    Are you, by any chance, trying to download an entire site and rewrite the link paths so that it displays properly, etc? If so, I’m pretty sure there are already existing applications that can do that. Maybe it would be easier to use one of those instead of writing your own.

  6. Neil Trigger says:

    I managed to work this much out:

    0) {
    $url = preg_replace("#(/[a-z0-9-]+){{$num_of_them}}$#iD", '', $url);
    $path = $url . '/'. str_replace('../', '', $path);
    return $path;

    echo '';

    Now I just need to work out how to make this pseudo code work:
    $page = preg_replace($pattern, $new_url, $page);

  7. White Shadow says:

    Well, I still don’t get what you’re trying to do. However, look into preg_replace_callback, it lets you use a function for replacements.
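
    For example, a minimal preg_replace_callback sketch (the pattern, URL, and markup here are made up for illustration; anonymous functions require PHP 5.3+):

    ```php
    /* Illustration only - rewrite every src="..." attribute via a callback */
    $page = '<img src="../logo.gif"> <img src="pic.jpg">';

    $page = preg_replace_callback(
        '/src="([^"]+)"/i',
        function ($matches) {
            // $matches[1] holds the captured path; prefix it with a base URL
            return 'src="http://example.com/' . ltrim($matches[1], './') . '"';
        },
        $page
    );

    echo $page . "\n";
    // <img src="http://example.com/logo.gif"> <img src="http://example.com/pic.jpg">
    ```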

  8. Neil Trigger says:

    Sorry, yes… I’m making an application which can take a website code and render it in the browser, with some changes made to it (kinda like a spell-check).

    I managed to get most of it working, but have an issue with putting a variable into the get_src function at the bottom. I want to replace the hard-coded URL for google with the pre-defined one at the top of the code.
    So far I have this:

    $path = '../../intl/en/images/logo.gif'; #this should display
    $path2 = '../../../intl/en/images/logo.gif'; # this should not
    $page = ' Don\'t Display This: ';
    #————————- Function Below ——————————–
    function get_src($tmp_url, $path){
    $tmp_url = rtrim($tmp_url, '/');
    $path = ltrim($path, '\\');
    if(($num_of_them = substr_count($path, '../')) > 0) {
    $tmp_url = preg_replace("#(/[a-z0-9-]+){{$num_of_them}}$#iD", '', $tmp_url);
    $path = $tmp_url . '/'. str_replace('../', '', $path);
    return $path;
    ?>Display This:

    I’ve been searching for something like this for ages, so hopefully someone may find it useful.

  9. Neil Trigger says:

    This shout box seems to cut off my code. Here’s the last part:

  10. Neil Trigger says:

    function real_links($matches){
    return 'src="'.get_src('', $matches[1]).'"';
    echo $page;

  11. White Shadow says:

    You can do that by defining the $url variable as global in the function, like this :

    function real_links($matches){
        global $url;
        return 'src="'.get_src($url, $matches[1]).'"';
    }
  12. Your fan says:

    You did a great work….
    but plz will you help me in some problem….
    I am using net whose traffic go through proxy server:

    HTTP proxy:

    Above script run fines when i use direct connection with no proxy,but doesnt works when i use it in my college where proxy is needed…
    plz help me how to tunnel traffic of your code through this proxy…….
    Will be greatly thankful….
    You are really gr8,i am using wamp5…

  13. White Shadow says:

    To make the script use a proxy, add

    curl_setopt($ch, CURLOPT_PROXY, proxyip);

    and

    curl_setopt($ch, CURLOPT_PROXYPORT, portnumber);

    after the curl_init() call in the crawl_page() function. More information can be found in the curl_setopt documentation.

  14. sadr1an1 says:

    10x, it workd like a charm

  15. Joshua says:

    I’m sorry for the uneducated question… but how do I point this thing at a webpage to extract urls?

  16. This particular article, “How To Extract All URLs From A Page Using PHP”, ended up being great. I am printing out a
    backup to clearly show my colleagues. I appreciate it, Marcy
