
List:       wget
Subject:    The new HTML parser
From:       Hrvoje Niksic <hniksic () iskon ! hr>
Date:       2000-03-21 15:19:34
[Download RAW message or body]

--=-=-=

For hackers' eyes: this is an improved HTML parser that I wrote for
Wget two years ago (and published here), but never got around to
integrating it.  The comments in the code explain the difference between
this and the old parser, and give some history.

The new parser is in several files:

* html-parse.c: the parser itself.  This part is reusable and not
  specific to Wget.

* html-parse.h: declarations of public functions and data structures
  of the parser.

* html-url.c: code that *uses* the parser to collect all URLs out of
  an HTML file.  This file exports the new parser interface in the way
  that the rest of Wget expects.
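To give a rough idea of the interface before diving into the code: map_html_tags() walks a buffer and invokes a caller-supplied function once per tag it finds.  The sketch below is a hypothetical miniature of that callback pattern -- not the real parser -- and ignores attributes, comments and broken markup entirely:

```c
#include <ctype.h>
#include <string.h>

/* Hypothetical miniature of the map_html_tags() callback interface:
   scan BUF and call MAPFUN with each tag name found and CLOSURE.
   The real parser additionally handles attributes, SGML comments and
   common markup errors; this sketch only picks out tag names.  */
static void
mini_map_tags (const char *buf, int bufsize,
	       void (*mapfun) (const char *name, int len, void *closure),
	       void *closure)
{
  const char *p = buf, *end = buf + bufsize;
  const char *name;

  while (p < end)
    {
      if (*p++ != '<')
	continue;
      if (p < end && *p == '/')	/* step over the end-tag marker */
	++p;
      name = p;
      while (p < end && (isalnum ((unsigned char) *p)
			 || *p == '.' || *p == '-' || *p == '_'))
	++p;
      if (p > name)
	mapfun (name, (int) (p - name), closure);
    }
}

/* Example callback: count the tags seen, like the STANDALONE driver
   in html-parse.c does.  */
static void
count_tags (const char *name, int len, void *closure)
{
  (void) name; (void) len;
  ++*(int *) closure;
}
```

A real caller passes a pointer to its own state structure as CLOSURE, exactly as html-url.c does with its collect_urls_closure.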


--=-=-=
Content-Disposition: attachment; filename=html-parse.c
Content-Description: HTML parser

/* HTML parser for Wget.
   Copyright (C) 1998 Free Software Foundation, Inc.

This file is part of Wget.

This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.  */

/* The only entry point to this module is map_html_tags(), which see.  */

/* TODO:
   - Allow hooks for callers to process contents outside tags, to implement
     handling of <style> and <script>;
   - Create a test suite for regression testing. */

/* HISTORY:

   This is the third HTML parser written for Wget.  The first one was
   written some time during the Geturl 1.0 beta cycle, and was very
   inefficient and buggy.  It also contained some very complex code to
   remember a list of parser states, because it was supposed to be
   reentrant.  The idea was that several parsers would be running
   concurrently, and you'd have to pass the function a unique ID string
   (for example, the URL) by which it found the relevant parser state
   and returned the next URL.  This was unnecessary, memory-consuming,
   and overall bogus.

   The second HTML parser was written for Wget 1.4 (the first version
   by that name), and was a complete rewrite.  Although the new parser
   behaved much better and made no claims of reentrancy, it still
   shared many of the fundamental flaws of the old version -- it only
   regarded HTML in terms of tag-attribute pairs, where the attribute's
   value was a URL to be returned.  Any other property of HTML, such
   as <base href=...>, or a strange way to specify a URL, such as <meta
   http-equiv=Refresh content="0; URL=..."> had to be crudely hacked
   in -- and the caller had to be aware of these hacks.  Like its
   predecessor, this parser did not support HTML comments.

   After Wget 1.5.1 was released, I set out to write a third HTML
   parser.  The objectives of the new parser were to: (1) provide a
   clean way to analyze HTML lexically, (2) separate interpretation of
   the markup from the parsing process, (3) be as correct as possible,
   e.g. correctly skipping comments and other SGML declarations, (4)
   understand the most common errors in markup and skip them, and (5)
   be reasonably efficient (no regexps, minimum unnecessary copying).

   I believe this parser meets all of the above goals.  It is
   reasonably well structured, and could be relatively easily
   separated from Wget and used elsewhere.  However, some of its
   intrinsic properties still limit its value as a general-purpose
   HTML parser.

   The entry point of this parser, map_html_tags(), allows you to
   specify a callback function to be called for each tag, with a
   structure describing the tag, and a pointer argument given to
   map_html_tags.  */

/* To test as standalone, compile with `-DSTANDALONE -I.'.  You'll
   still need Wget headers to compile.  */

#include <config.h>

#include <stdio.h>
#include <stdlib.h>
#include <ctype.h>
#ifdef HAVE_STRING_H
# include <string.h>
#else
# include <strings.h>
#endif

#include "wget.h"
#include "html-parse.h"

#ifdef STANDALONE
# define xmalloc malloc
# define xrealloc realloc
#endif /* STANDALONE */

/* Pool support.  For efficiency, map_html_tags() stores temporary
   string data to a single pool, which is resized as necessary.  */

struct pool {
  char *contents;
  int size, index;
  int alloca_p;
};

#define AP_DOWNCASE		1
#define AP_PROCESS_ENTITIES	2
#define AP_SKIP_BLANKS		4

/* Add text beginning at POS of length SIZE to POOL, optionally
   performing operations specified by FLAGS.  FLAGS may be any
   combination of AP_DOWNCASE, AP_PROCESS_ENTITIES and AP_SKIP_BLANKS
   with the following meaning:

   * AP_DOWNCASE -- downcase all the letters;

   * AP_PROCESS_ENTITIES -- process the SGML entities and write out
   the decoded string.  Recognized entities are &lt, &gt, &amp, &quot,
   &nbsp and the numerical entities.

   * AP_SKIP_BLANKS -- ignore blanks at the beginning and at the end
   of text.  */
static void
add_to_pool (struct pool *pool, const char *pos, int size, int flags)
{
  int old_index = pool->index;

  /* First, skip blanks if required.  We must do this before entities
     are processed, so that blanks can still be inserted as, for
     instance, `&#32;'.  */
  if (flags & AP_SKIP_BLANKS)
    {
      while (size && ISSPACE (*pos))
	++pos, --size;
      while (size && ISSPACE (pos[size - 1]))
	--size;
    }

  if (flags & AP_PROCESS_ENTITIES)
    {
      /* Stack-allocate a copy of text, process entities and copy it
         to the pool.  */
      int newsize;
      char *tmp = (char *)alloca (size);
      const char *from = pos, *end = pos + size;
      char *to = tmp;

      while (from < end)
	{
	  if (*from != '&')
	    *to++ = *from++;
	  else
	    {
	      const char *save = from;
	      int remain;

	      if (++from == end) goto lose;
	      remain = end - from;

	      if (*from == '#')
		{
		  int numeric;
		  ++from;
		  if (from == end || !ISDIGIT (*from)) goto lose;
		  for (numeric = 0; from < end && ISDIGIT (*from); from++)
		    numeric = 10 * numeric + (*from) - '0';
		  if (from < end && ISALPHA (*from)) goto lose;
		  numeric &= 0xff;
		  *to++ = numeric;
		}
#define FROB(x) (remain >= (sizeof (x) - 1)			\
		 && !memcmp (from, x, sizeof (x) - 1)		\
		 && (*(from + sizeof (x) - 1) == ';'		\
		     || remain == sizeof (x) - 1		\
		     || !ISALNUM (*(from + sizeof (x) - 1))))
	      else if (FROB ("lt"))
		*to++ = '<', from += 2;
	      else if (FROB ("gt"))
		*to++ = '>', from += 2;
	      else if (FROB ("amp"))
		*to++ = '&', from += 3;
	      else if (FROB ("quot"))
		*to++ = '\"', from += 4;
	      /* We don't implement the proposed "Added Latin 1"
                 entities (except for nbsp), because it is unnecessary
                 in the context of Wget, and would require hashing to
                 work efficiently.  */
	      else if (FROB ("nbsp"))
		*to++ = 160, from += 4;
	      else
		goto lose;
#undef FROB
	      /* If the entity was followed by `;', we step over the
                 `;'.  Otherwise, it was followed by either a
                 non-alphanumeric or EOB, in which case we do nothing.  */
	      if (from < end && *from == ';')
		++from;
	      continue;

	    lose:
	      /* This was not an entity after all.  Back out.  */
	      from = save;
	      *to++ = *from++;
	    }
	}
      newsize = to - tmp;
      DO_REALLOC_FROM_ALLOCA (pool->contents, pool->size,
			      pool->index + 1 + newsize, pool->alloca_p, char);
      memcpy (pool->contents + pool->index, tmp, newsize);
      pool->index += newsize;
    }
  else
    {
      /* Just copy the text to the pool.  */
      DO_REALLOC_FROM_ALLOCA (pool->contents, pool->size,
			      pool->index + 1 + size, pool->alloca_p, char);
      memcpy (pool->contents + pool->index, pos, size);
      pool->index += size;
    }

  if (flags & AP_DOWNCASE)
    {
      char *p = pool->contents + old_index;
      char *end = pool->contents + pool->index;
      for (; p < end; p++)
	*p = TOLOWER (*p);
    }

  pool->contents[pool->index] = '\0';
  ++pool->index;
}

/* Check whether the contents of [POS, POS+LENGTH) match any of the
   strings in the ARRAY.  */
static int
array_allowed (char **array, const char *pos, int length)
{
  if (array)
    {
      for (; *array; array++)
	if (!memcmp (*array, pos, MINVAL (length, strlen (*array))))
	  break;
      if (!*array)
	return 0;
    }
  return 1;
}

/* Auxiliary functions for advancing over specific portions of the
   HTML buffer.  The functions update buffer pointer and buffer size.
   They return 0 if end-of-buffer is hit.  */

/* Advance over whitespace, placing the resulting point on the first
   non-whitespace character.  If there is no whitespace at BUFP, do
   nothing.  */
static int
advance_whitespace (const char **bufp, int *bufsizep)
{
  while (*bufsizep && ISSPACE (**bufp))
    ++*bufp, --*bufsizep;
  return !!*bufsizep;
}

/* RFC1866: name [of attribute or tag] consists of letters, digits,
   periods, or hyphens.  We also allow _, for compatibility with
   brain-damaged generators.  */
#define IS_NAME_CHAR(x) (ISALPHA (x) || ISDIGIT (x)		\
			 || (x) == '.' || (x) == '-' || (x) == '_')

/* Advance over an SGML declaration (the <!...> form).  In most cases,
   it will be an empty declaration, which happens to be
   the only way to specify an HTML comment.  BUFP should point to the
   first character after `!'.  When the function finishes, BUFP will
   point to the data outside the declaration.

   To recap, an HTML comment is an empty SGML declaration, i.e.:
       <!-- some stuff here -->

   Several comments may be embedded in one comment declaration:
       <!-- have -- -- fun -->

   Whitespace is allowed between and after the comments, but not
   before the first comment.

   Additionally, this function attempts to handle double quotes in
   SGML declarations correctly.  */
static int
advance_comment (const char **bufp, int *bufsizep)
{
  int emptyp, state = 0;
  const char *backout_buf = *bufp;
  int backout_bufsize = *bufsizep;

  if (**bufp == '-')
    emptyp = 1;
  else if (IS_NAME_CHAR (**bufp))
    emptyp = 0;
  else
    /* <! -- foo --> is not a comment. */
    return !!*bufsizep;

  while (*bufsizep)
    {
      char c = **bufp;
      switch (state)
	{
	case 0:
	  if (c == '-')		state = 1;
	  else if (c == '>')
	    {
	      ++*bufp, --*bufsizep;
	      return !!*bufsizep;
	    }
	  else if (ISSPACE (c)) ;
	  else if (IS_NAME_CHAR (c))
	    {
	      if (emptyp)	goto backout_comment;
	    }
	  else if (c == '\"')
	    {
	      if (emptyp)	goto backout_comment;
	      else		state = 4;
	    }
	  else if (c == '\'')
	    {
	      if (emptyp)	goto backout_comment;
	      else		state = 5;
	    }
	  else goto backout_comment;
	  break;
	case 1:
	  if (c == '-')	state = 2;
	  else		state = 0;
	  break;
	case 2:
	  /* If a Netscape comment compatibility flag is ever added, the
             following should be here:
	     if (c == '>') state = 0; */
	  if (c == '-') state = 3;
	  break;
	case 3:
	  if (c == '-')	state = 0;
	  else		state = 2;
	  break;
	case 4:
	  if (c == '\"') state = 0;
	  break;
	case 5:
	  if (c == '\'') state = 0;
	  break;
	default:
	  abort ();
	}
      ++*bufp, --*bufsizep;
    }
  if (!*bufsizep)
    return 0;
  else
    {
    backout_comment:
      *bufp = backout_buf;
      *bufsizep = backout_bufsize;
      return !!*bufsizep;
    }
}

#define ADVANCE (++buf, --bufsize, !!bufsize)

#define ADVANCE_OR_FINISH do { if (!ADVANCE) goto finish; } while (0)

/* Map MAPFUN over HTML tags in BUF.  MAPFUN will be called with two
   arguments: pointer to an initialized struct taginfo, and CLOSURE.

   ALLOWED_TAG_NAMES should be a NULL-terminated array of tag names to
   be processed by this function.  If it is NULL, all the tags are
   allowed.  The same goes for attributes and ALLOWED_ATTRIBUTE_NAMES.  */
void
map_html_tags (const char *buf, int bufsize,
	       char **allowed_tag_names, char **allowed_attribute_names,
	       void (*mapfun) (struct taginfo *, void *),
	       void *closure)
{
  const char *buf_beginning = buf;

  int attr_pair_count = 8;
  int attr_pair_alloca_p = 1;
  struct attr_pair *pairs = ALLOCA_ARRAY (struct attr_pair, attr_pair_count);

  struct pool pool;
  /* #### This should be abstracted away into a separate macro like
     INITIALIZE_POOL(). */
  pool.size = 256;
  pool.contents = ALLOCA_ARRAY (char, pool.size);
  pool.alloca_p = 1;

  if (!bufsize)
    return;

  {
    int nattrs, end_tag;
    const char *tag_name_begin, *tag_name_end;
    const char *backout_buf;
    int backout_bufsize;
    int uninteresting_tag = 0;

  look_for_tag:
    /* #### This should be reset_pool. */
    pool.index = 0;
    nattrs = 0;
    end_tag = 0;

    /* Find beginning of tag. */
    while (*buf != '<')
      ADVANCE_OR_FINISH;

    /* Establish the type of the tag (start-tag, end-tag or
       declaration).  */
    ADVANCE_OR_FINISH;
    backout_buf = buf;
    backout_bufsize = bufsize;
    if (*buf == '!')
      {
	/* This is an SGML declaration -- just skip it.  */
	ADVANCE_OR_FINISH;
	if (!advance_comment (&buf, &bufsize)) goto finish;
	goto look_for_tag;
      }
    else if (*buf == '/')
      {
	end_tag = 1;
	ADVANCE_OR_FINISH;
      }
    tag_name_begin = buf;
    while (IS_NAME_CHAR (*buf))
      ADVANCE_OR_FINISH;
    tag_name_end = buf;
    if (!advance_whitespace (&buf, &bufsize)) goto finish;
    if (!(*buf == '>' || (!end_tag && IS_NAME_CHAR (*buf))))
      goto backout_tag;

    if (!array_allowed (allowed_tag_names, tag_name_begin,
			tag_name_end - tag_name_begin))
      /* We can't just say "goto look_for_tag" here because we need
         the loop below to properly advance over the tag's attributes.  */
      uninteresting_tag = 1;
    else
      add_to_pool (&pool, tag_name_begin, tag_name_end - tag_name_begin,
		   AP_DOWNCASE);

    /* Find the attributes. */
    while (1)
      {
	const char *attr_name_begin, *attr_name_end;
	const char *attr_value_begin, *attr_value_end;
	const char *attr_raw_value_begin, *attr_raw_value_end;
	int operation = AP_DOWNCASE;

	/* Establish bounds of attribute name. */
	if (*buf == '>')
	  break;
	attr_name_begin = buf;
	while (IS_NAME_CHAR (*buf))
	  ADVANCE_OR_FINISH;
	attr_name_end = buf;

	/* Establish bounds of attribute value. */
	if (!advance_whitespace (&buf, &bufsize)) goto finish;
	if (IS_NAME_CHAR (*buf) || *buf == '>')
	  {
	    /* Minimized attribute syntax allows `=' to be omitted.
               For example, <UL COMPACT> is a valid shorthand for <UL
               COMPACT="compact">.  Even if we don't make use of such
               attributes in Wget, we need to support them, so that
               the tags containing them can be parsed correctly. */
	    attr_raw_value_begin = attr_value_begin = attr_name_begin;
	    attr_raw_value_end = attr_value_end = attr_name_end;
	  }
	else if (*buf == '=')
	  {
	    ADVANCE_OR_FINISH;
	    if (!advance_whitespace (&buf, &bufsize)) goto finish;
	    if (*buf == '\"' || *buf == '\'')
	      {
		int newline_seen = 0;
		char delimiter = *buf;
		attr_raw_value_begin = buf;
		ADVANCE_OR_FINISH;
		attr_value_begin = buf;
		while (*buf != delimiter)
		  {
		    if (!newline_seen && *buf == '\n')
		      {
			/* If a newline is seen within the quotes, it
			   is most likely that someone forgot to close
			   the quote.  In that case, we back out to
			   the value beginning, and terminate the tag
			   at either `>' or the delimiter, whichever
			   comes first.  Such a tag terminated at `>'
			   is discarded.  */
			bufsize += buf - attr_value_begin;
			buf = attr_value_begin;
			newline_seen = 1;
			continue;
		      }
		    else if (newline_seen && *buf == '>')
		      break;
		    ADVANCE_OR_FINISH;
		  }
		attr_value_end = buf;
		if (*buf == delimiter)
		  ADVANCE_OR_FINISH;
		else
		  goto look_for_tag;
		attr_raw_value_end = buf;
		/* The AP_SKIP_BLANKS part is not entirely correct,
		   because we don't want to skip blanks for all the
		   attribute values.  */
		operation = AP_PROCESS_ENTITIES | AP_SKIP_BLANKS;
	      }
	    else
	      {
		attr_value_begin = buf;
		/* According to SGML, a name token should consist only
		   of alphanumerics, . and -.  However, this is often
		   violated by, for instance, `%' in `width=75%'.
		   We'll be liberal and allow more or less anything as
		   an attribute value.  */
		while (!ISSPACE (*buf) && (*buf != '>'))
		  ADVANCE_OR_FINISH;
		attr_value_end = buf;
		if (attr_value_begin == attr_value_end)
		  /* <foo bar=> */
		  goto backout_tag;
		attr_raw_value_begin = attr_value_begin;
		attr_raw_value_end = attr_value_end;
	      }
	  }
	else
	  goto backout_tag;
	if (!advance_whitespace (&buf, &bufsize)) goto finish;

	/* If we aren't interested in the attribute, skip it.  We
           cannot do this test any sooner, because our text pointer
           needs to correctly advance over the attribute.  */
	if (uninteresting_tag
	    || (allowed_attribute_names
		&& !array_allowed (allowed_attribute_names, attr_name_begin,
				   attr_name_end - attr_name_begin)))
	  continue;

	DO_REALLOC_FROM_ALLOCA (pairs, attr_pair_count, nattrs + 1,
				attr_pair_alloca_p, struct attr_pair);

	pairs[nattrs].name_pool_index = pool.index;
	add_to_pool (&pool, attr_name_begin,
		     attr_name_end - attr_name_begin, AP_DOWNCASE);

	pairs[nattrs].value_pool_index = pool.index;
	add_to_pool (&pool, attr_value_begin,
		     attr_value_end - attr_value_begin, operation);
	pairs[nattrs].value_raw_beginning =
	  attr_raw_value_begin - buf_beginning;
	pairs[nattrs].value_raw_size =
	  attr_raw_value_end - attr_raw_value_begin;
	++nattrs;
      }

    /* By now, we have a valid tag with a name and zero or more
       attributes.  Fill in the data and call the mapper function.  */
    {
      int i;
      struct taginfo taginfo;

      taginfo.name = pool.contents;
      taginfo.end_tag = end_tag;
      taginfo.nattrs = nattrs;
      /* We fill in the char pointers only now, when pool can no
	 longer get realloc'ed.  If we did that above, we could get
	 hosed by reallocation.  */
      for (i = 0; i < nattrs; i++)
	{
	  pairs[i].name = pool.contents + pairs[i].name_pool_index;
	  pairs[i].value = pool.contents + pairs[i].value_pool_index;
	}
      taginfo.attrs = pairs;
      /* Ta-dam! */
      (*mapfun) (&taginfo, closure);
    }
    goto look_for_tag;

  backout_tag:
    /* The tag wasn't really a tag.  Treat its contents as ordinary
       data characters. */
    buf = backout_buf;
    bufsize = backout_bufsize;
    goto look_for_tag;
  }

 finish:
  if (!pool.alloca_p) free (pool.contents);
  if (!attr_pair_alloca_p) free (pairs);
}

#ifdef STANDALONE
static void
test_mapper (struct taginfo *taginfo, void *arg)
{
  int i;

  printf ("%s%s", taginfo->end_tag ? "/" : "", taginfo->name);
  for (i = 0; i < taginfo->nattrs; i++)
    printf (" %s=%s", taginfo->attrs[i].name, taginfo->attrs[i].value);
  putchar ('\n');
  ++*(int *)arg;
}

int main ()
{
  int size = 256;
  char *x = (char *)xmalloc (size);
  int idx = 0, c;
  int counter = 0;

  while ((c = getchar ()) != EOF)
    {
      if (idx >= size)
	{
	  size <<= 1;
	  x = (char *)xrealloc (x, size);
	}
      x[idx++] = c;
    }

  map_html_tags (x, idx, NULL, NULL, test_mapper, &counter);
  printf ("TAGS: %d\n", counter);
  return 0;
}
#endif /* STANDALONE */

--=-=-=



--=-=-=
Content-Disposition: attachment; filename=html-parse.h
Content-Description: Declarations for html parser

/* Declarations for html-parse.c.
   Copyright (C) 1998 Free Software Foundation, Inc.

This file is part of Wget.

This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.  */

struct attr_pair {
  char *name;			/* attribute name */
  char *value;			/* attribute value */
  /* Needed for URL conversion: */
  int value_raw_beginning, value_raw_size;
  /* Used internally by map_html_tags. */
  int name_pool_index, value_pool_index;
};

struct taginfo {
  char *name;			/* tag name */
  int end_tag;			/* whether this is an end-tag */
  int nattrs;			/* number of attributes */
  struct attr_pair *attrs;	/* attributes */
};

void map_html_tags PARAMS ((const char *, int, char **, char **,
			    void (*) (struct taginfo *, void *), void *));

--=-=-=



--=-=-=
Content-Disposition: attachment; filename=html-url.c
Content-Description: URL extractor

/* Collect URLs from HTML source.
   Copyright (C) 1998 Free Software Foundation, Inc.

This file is part of Wget.

This program is free software; you can redistribute it and/or modify
it under the terms of the GNU General Public License as published by
the Free Software Foundation; either version 2 of the License, or
(at your option) any later version.

This program is distributed in the hope that it will be useful,
but WITHOUT ANY WARRANTY; without even the implied warranty of
MERCHANTABILITY or FITNESS FOR A PARTICULAR PURPOSE.  See the
GNU General Public License for more details.

You should have received a copy of the GNU General Public License
along with this program; if not, write to the Free Software
Foundation, Inc., 675 Mass Ave, Cambridge, MA 02139, USA.  */

#include <config.h>

#include <stdio.h>
#ifdef HAVE_STRING_H
# include <string.h>
#else
# include <strings.h>
#endif
#include <stdlib.h>
#include <ctype.h>
#include <errno.h>

#include "wget.h"
#include "html-parse.h"
#include "url.h"
#include "utils.h"

#ifndef errno
extern int errno;
#endif

enum tagtype { TT_UNKNOWN, TT_URL, TT_STYLESHEET, TT_HANDLE_META, TT_BASE };

static struct {
  char *tag, *attr;
  enum tagtype type;
} tag_handles[] = {
  { "a",	"href",		TT_URL },
  { "applet",	"code",		TT_URL },
  { "area",	"href",		TT_URL },
  { "base",	"href",		TT_BASE },
  { "bgsound",	"src",		TT_URL },
  { "body",	"background",	TT_URL },
  { "embed",	"src",		TT_URL },
  { "fig",	"src",		TT_URL },
  { "frame",	"src",		TT_URL },
  { "iframe",	"src",		TT_URL },
  { "img",	"href",		TT_URL },
  { "img",	"lowsrc",	TT_URL },
  { "img",	"src",		TT_URL },
  { "input",	"src",		TT_URL },
  { "layer",    "src",		TT_URL },
  { "link",	"href",		TT_STYLESHEET },
  { "meta",	"content",	TT_HANDLE_META },
  { "overlay",	"src",		TT_URL },
  { "script",	"src",		TT_URL },
  { "table",	"background",	TT_URL },
  { "td",	"background",	TT_URL },
  { "th",	"background",	TT_URL }
};

/* Lists of tag and attribute names we pay attention to.  The yucky
   thing is that these lists depend on (and can be derived from) the
   contents of the above one.  When new tags and attributes are added
   to the above list, you have to update the two lists below.  */
static char *interesting_tags[] = {
  "a", "applet", "area", "base", "bgsound", "body",
  "embed", "fig", "frame", "iframe", "img", "input",
  "link", "meta", "overlay", "script", "table", "td", "th",
  NULL
};

static char *interesting_attributes[] = {
  "background", "code", "content", "href", "http-equiv", "lowsrc",
  "rel", "src", NULL
};

/* Return type of TAG, as indexed in TAG_HANDLES.  If the tag is
   anything other than TT_UNKNOWN, the index of its attribute will be
   stored to *RELEVANT_ATTRIBUTE.  */
static enum tagtype
tag_type (struct taginfo *tag, int *relevant_attribute)
{
  int i;

  for (i = 0; i < ARRAY_COUNT (tag_handles); i++)
    {
      int cmp = strcmp (tag->name, tag_handles[i].tag);
      if (cmp < 0)
	break;
      else if (cmp == 0)
	{
	  int j;
	  for (j = 0; j < tag->nattrs; j++)
	    {
	      int attrcmp = strcmp (tag->attrs[j].name, tag_handles[i].attr);
	      if (attrcmp < 0)
		break;
	      else if (attrcmp == 0)
		{
		  *relevant_attribute = j;
		  return tag_handles[i].type;
		}
	    }
	}
    }
  return TT_UNKNOWN;
}

/* Return non-zero if attribute named NAME with value VALUE is present
   in TAG.  */
static int
attribute_present (struct taginfo *tag, char *name, char *value)
{
  int i;

  for (i = 0; i < tag->nattrs; i++)
    if (!strcasecmp (tag->attrs[i].name, name)
	&& !strcasecmp (tag->attrs[i].value, value))
      break;
  return (i < tag->nattrs);
}

struct collect_urls_closure {
  char *document_base;		/* Base as specified by the document,
				   normally via <base href=...> */
  urlpos *head, *tail;		/* List of URLs */
  const char *document_url;	/* URL of the current document. */
  const char *document_file;	/* File name of this document. */
  int silent;			/* Whether relative links without a
                                   base should be reported. */
};

extern char *merge_relative PARAMS ((const char *, const char *));

static void
collect_tags_mapper (struct taginfo *tag, void *arg)
{
  struct collect_urls_closure *closure = (struct collect_urls_closure *)arg;

  int attr_index;
  const char *base;
  char *url = NULL, *constr;

  switch (tag_type (tag, &attr_index))
    {
    case TT_URL:
      url = tag->attrs[attr_index].value;
      break;
    case TT_STYLESHEET:
      if (!attribute_present (tag, "rel", "stylesheet"))
	return;
      url = tag->attrs[attr_index].value;
      break;
    case TT_HANDLE_META:
      /* Some pages use a META tag to specify that the page be
	 refreshed by a new page after a given number of seconds.  We
	 need to attempt to extract a URL for the new page from the
	 other garbage present.  The general format for this is:

	 <meta http-equiv=Refresh content="0; URL=index2.html">

	 So we just need to skip past the "0; URL=" garbage to get to
	 the URL.  */
      if (!attribute_present (tag, "http-equiv", "refresh"))
	return;
      url = tag->attrs[attr_index].value;
      while (ISDIGIT (*url))
	++url;
      if (*url++ != ';') return; /* #### Signal some kind of warning. */
      while (ISSPACE (*url))
	++url;
      if (!(*url == 'U' && *(url + 1) == 'R' && *(url + 2) == 'L'
	    && *(url + 3) == '='))
	return;			/* #### Ditto. */
      url += 4;
      break;
    case TT_BASE:
      if (!closure->document_base)
	closure->document_base = xstrdup (tag->attrs[attr_index].value);
      return;
    case TT_UNKNOWN:
      return;
    }

  base = closure->document_base;
  if (!base)
    base = closure->document_url;
  if (!base)
    base = opt.base_href;
  if (!base)
    {
      /* Error condition -- a baseless relative link.  */
      /* #### BOGUS!  This should be printed out only if the link is
         actually relative! */
      if (!closure->silent)
	logprintf (LOG_NOTQUIET,
		   _("Error (%s): Link %s without a base provided.\n"),
		   closure->document_file, url);
      return;
    }
  /* #### TODO: Parse BASE only once, and store it to CLOSURE.  Parse
     URL here instead of calling rfc1808_merge().  */
  constr = merge_relative (base, url);
  if (!constr)
    return;

  DEBUGP (("file %s; this_url %s; base %s\nlink: %s; constr: %s\n",
	   closure->document_file,
	   closure->document_url ? closure->document_url : "(null)",
	   closure->document_base ? closure->document_base : "(null)",
	   url, constr));

  {
    urlpos *new = (urlpos *)xmalloc (sizeof (urlpos));

    memset (new, 0, sizeof (*new));
    new->next = NULL;
    new->url = constr;
    new->pos = tag->attrs[attr_index].value_raw_beginning;
    new->size = tag->attrs[attr_index].value_raw_size;
#if 0
    /* A URL is relative if the host and protocol are not named,
       and the name does not start with `/'.  */
    if (no_proto && *url != '/')
      new->flags |= (URELATIVE | UNOPROTO);
    else if (no_proto)
      new->flags |= UNOPROTO;
#endif

    if (closure->tail)
      {
	closure->tail->next = new;
	closure->tail = new;
      }
    else
      closure->tail = closure->head = new;
  }
}

/* Similar to get_urls_file, but for HTML files.  FILE is scanned as
   an HTML document.  get_urls_html() constructs the URLs from the
   relative href-s.

   If SILENT is non-zero, do not barf on baseless relative links.  */
urlpos *
get_urls_html (const char *file, const char *this_url, int silent)
{
  FILE *fp;
  char *buf;
  long nread;
  struct collect_urls_closure closure;

  if (file && !HYPHENP (file))
    {
      fp = fopen (file, "rb");
      if (!fp)
	{
	  logprintf (LOG_NOTQUIET, "%s: %s\n", file, strerror (errno));
	  return NULL;
	}
    }
  else
    fp = stdin;
  /* Load the file.  */
  load_file (fp, &buf, &nread);
  if (file && !HYPHENP (file))
    fclose (fp);

  closure.head = closure.tail = NULL;
  closure.document_base = NULL;
  closure.document_url = this_url;
  closure.silent = silent;
  closure.document_file = file;

  map_html_tags (buf, nread, interesting_tags, interesting_attributes,
		 collect_tags_mapper, &closure);

  FREE_MAYBE (closure.document_base);
  return closure.head;
}

--=-=-=


I don't remember if this stuff works as a drop-in replacement for Wget
1.5, but adapting it should not be too much work.  Anyone interested?
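As a postscript, the AP_PROCESS_ENTITIES logic in add_to_pool() boils down to something like the miniature below.  This is a hedged sketch, not the attached code: unlike the real parser, it requires the terminating `;' on named entities, and it decodes into a caller-supplied buffer instead of a pool:

```c
#include <ctype.h>
#include <string.h>

/* Hypothetical miniature of entity decoding: expand &lt;, &gt;,
   &amp;, &quot; and numeric &#NNN; references from IN into OUT.
   OUT must hold at least strlen (IN) + 1 bytes.  Anything that is
   not a recognized entity is copied through verbatim, which is also
   what the real parser does when it backs out at `lose:'.  */
static void
mini_decode_entities (const char *in, char *out)
{
  while (*in)
    {
      if (*in != '&')
	{
	  *out++ = *in++;
	  continue;
	}
      if (in[1] == '#' && isdigit ((unsigned char) in[2]))
	{
	  /* Numeric reference: accumulate digits, truncate to a byte
	     as the real code does with `numeric &= 0xff'.  */
	  int code = 0;
	  const char *p = in + 2;
	  while (isdigit ((unsigned char) *p))
	    code = 10 * code + (*p++ - '0');
	  *out++ = (char) (code & 0xff);
	  in = (*p == ';') ? p + 1 : p;
	}
      else if (!strncmp (in, "&lt;", 4))   { *out++ = '<';  in += 4; }
      else if (!strncmp (in, "&gt;", 4))   { *out++ = '>';  in += 4; }
      else if (!strncmp (in, "&amp;", 5))  { *out++ = '&';  in += 5; }
      else if (!strncmp (in, "&quot;", 6)) { *out++ = '"';  in += 6; }
      else
	*out++ = *in++;		/* not an entity after all */
    }
  *out = '\0';
}
```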

--=-=-=--

