
List:       apache-modgzip
Subject:    [Mod_gzip] Re: mod_gzip digest, Vol 1 #390 - 7 msgs
From:       Michael.Schroepl@telekurs.com
Date:       2001-07-27 1:11:26

Hi folks,

> we all know that Netscape has lots of problems decompressing compressed
> content different from HTML: js, jar, swf, images, and so on.
> ...
> What ya think?

The first thing I'd be happy about would be some kind of maintained
list of which things are broken in which browser. Even if such a
list may never be anywhere near complete, it would at least give
hints about what to check before you start using compression.
(I am still reading through the complete mailing list archive -
I would have loved to have been there in fall 2000 - and have not
yet reached the events of the current year, but I have learned a
lot about internal Apache interfaces and about cheaters & liars.
Kevin, you really are able to explain difficult things very well!
Are there German articles about mod_gzip already? Maybe some day
I'll write one ...)

I can live with Netscape not being able to *view the source*
of a gzipped HTML file.
I have put the CSS and JavaScript files which get *included* into
the HTML <head> section on the exclusion list of mod_gzip (and I
have a comment and whitespace stripper "precompiling" those files
on my server); the relevant configuration sketch follows below.
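
For the archives, the relevant part of my httpd.conf looks roughly
like this (mod_gzip 1.3.x directive syntax; the patterns are just
examples, adjust them to your own setup):

    # keep included .js/.css files uncompressed, since Netscape 4.x
    # chokes on them when they arrive gzipped
    mod_gzip_on            Yes
    mod_gzip_item_exclude  file  \.js$
    mod_gzip_item_exclude  file  \.css$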
I am far less happy with Netscape being unable to *print* gzipped
pages, which our customers might consider a real flaw.
But currently I am stumbling most painfully over Netscape 4.06/Win
not even being able to *display* some HTML pages that arrive
gzipped. (The same pages display perfectly in Netscape 4.51 or
4.77 on the same PC.) Currently I am not yet allowed to force our
customers to move on to anything newer than Netscape 4.0 ... hm.
:-( At least Netscape 4.03(de) is 'safe', as it simply does not
yet send an "Accept-Encoding: gzip" header ...
Testing is not quite completed, but I am guessing it might be the
JavaScript code being *defined* explicitly in the <head> section
of these documents (rather than being included). Any known issues
about that would help me, as I might then simply move those
JavaScript routines into separate files and leave them
uncompressed - see the sketch below for what I mean.
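
In case anyone wants to reproduce this: the failing pages contain
something like the first form below, and my planned workaround is
the second one (the file name is made up, of course):

    <!-- suspected trigger: script defined inline in <head> -->
    <head>
      <script type="text/javascript">
        function doSomething() { /* ... */ }
      </script>
    </head>

    <!-- workaround: include it from a file that mod_gzip excludes -->
    <head>
      <script type="text/javascript" src="/js/something.js"></script>
    </head>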

> SEND_AS_IS:HAS_CE
> Means that the initial request has qualified for compression so mod_gzip
> starts the 'capture' phase but when the response arrives from the Server
> it is discovered to ALREADY have a 'Content-Encoding:' field of some kind.
> Is the response actually compressed already?
> If not... then why the heck does it say it's already compressed?

Thanks to Kevin's great explanation I finally found out that the
content of this black box in some rare cases (!) in fact already
*is* compressed. (Which the authors did not tell us; they just
added this feature silently to the version I am currently using ...
these things really do happen!)
It compresses the content of HTTP/1.1 responses (only) if the
request's Accept-Encoding header allows it, which I never came
across while testing with Netscape or M$IE in HTTP/1.0 mode before
I got to know mod_gzip just a few days ago ...
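
For anyone who wants to check this by hand: a raw HTTP/1.1 request
like the one below (sent via telnet to port 80, for example) gets
the pre-compressed answer, while the same request without the
Accept-Encoding line does not (host and path are placeholders):

    GET /some/page.html HTTP/1.1
    Host: www.example.com
    Accept-Encoding: gzip

    HTTP/1.1 200 OK
    Content-Type: text/html
    Content-Encoding: gzip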

> If so... then what's the problem? You should be seeing savings already.

Yep, I now see them when looking at the right place in the
access_log. (mod_gzip of course cannot tell me the ratio for
output that the black box has already compressed itself ...)
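
If anyone else wants those numbers in the access_log, a LogFormat
along these lines should do (assuming your mod_gzip build exports
the mod_gzip_* Apache notes, as mine does):

    LogFormat "%h %l %u %t \"%r\" %>s %b mod_gzip: %{mod_gzip_result}n In:%{mod_gzip_input_size}n Out:%{mod_gzip_output_size}n:%{mod_gzip_compression_ratio}npct." gzip_info
    CustomLog logs/access_log gzip_info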
If anyone keeps records of which document types compress best,
look out for
- generated code,
- heavy use of tables,
- generated links to CGI URLs with long parameter lists and
- images coded with every possible HTML attribute (height,
  width, alt, border, ...).
The documents that come out of the black box I am talking about
combine *all* of those attributes, so none of them compresses by
less than a factor of 10, some even by 20 and more ... (a quick
way to check such ratios offline is shown below).
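
To estimate the ratio for a sample document without going through
the server at all, plain command-line gzip reports it; the file
name and percentage below are just an illustration:

    $ gzip -9 -v -c report.html > /dev/null
    report.html:   90.4%

(a 90% reduction corresponds to the factor of 10 mentioned above)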

Thanks so much, Kevin!



Keep on hacking,

      Michael
