
List:       kde-look
Subject:    Re: Idea for tool preview on mouseover...
From:       Dave Leigh <dave.leigh () cratchit ! org>
Date:       2002-03-06 20:44:50

On Wednesday 06 March 2002 15:15, Sean Pecor wrote:

> 1. Retrieving external link information should only be done at idle. That
> is, when the current page is fully rendered and no input is being made. It
> is at this point when the user may be reading paragraphs, staring blankly,
> etc.

This is actually where it begins to fall apart for me. I find that when I'm 
using my browser I'm rarely AT idle in the sense that's useful for this 
technique. More often, I'm engaged in a search and want to rapidly get to a 
page or content. Then I might spend a significant amount of time on that 
page, but I'm not likely to use links out of it. The point is, I VERY rarely 
"browse"... rather I "search" or "read." My usage is possibly highly 
unusual, but I rather doubt it.

> 2. This "cacheing" would have an additional benefit, in that the links
> requested by the user may already be transferred, thus reducing the delay
> of subsequent requests. Whether or not this would have any perceivable
> benefit would be debatable, except for "slide-show" oriented pages where
> there are a limited number of links.

Hmm. True to an extent, but again, I don't think it's terribly useful at 
second glance. You would very likely transfer the HTML, but you're not likely 
to transfer all of the graphics and applets, etc. associated with the page. 
(Imagine the hit on a portal!!) Therefore what's cached is usually the 
LEAST time-consuming element of the page. Further, you've not just cached 
that link, but every other link, whether you're going there or not. So you 
take something that's a mediocre hit by itself and transform it into a 
significant hit through sheer volume.

> 3. I can think of next-generation uses where the link is an XML document,
> which would be useful for generating complex tool tips for the content of
> the link (rolling over a job on a job site would display the job salary,
> location, etc.).

I see what you're getting at, though I think the example's a bit stretched 
(It's poor page design to hide the salary on a page displaying job links, 
IMHO).  I'm just having a hard time coming up with a better example. And, you 
get into the issue here of what the schema is going to be and how you're 
going to determine what information in the XML is significant for preview, 
how to display it, and what you're going to do with all of the different 
kinds of info that are NOT XML, like graphics, or application documents. It 
turns out to be something much more than a minor patch.
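Just to make the schema problem concrete, here's a rough Python sketch of what the preview side might look like. The <job> element and its field names are entirely invented for illustration; the hard part, as above, is getting everyone to agree on them, and deciding what to do when the document isn't XML at all:

```python
# Hypothetical sketch of the XML-tooltip idea. The <job> schema here is
# made up for illustration -- nothing like it actually exists.
import xml.etree.ElementTree as ET

def job_tooltip(xml_text):
    """Build a one-line tooltip from a (hypothetical) job-posting document.

    Returns None when the document isn't in the schema we know how to
    preview -- which is exactly the common case the browser must handle.
    """
    try:
        root = ET.fromstring(xml_text)
    except ET.ParseError:
        return None  # not well-formed XML (e.g. plain HTML, an image...)
    if root.tag != "job":
        return None  # well-formed, but not a schema we recognize
    parts = []
    for field in ("title", "location", "salary"):
        el = root.find(field)
        if el is not None and el.text:
            parts.append(f"{field}: {el.text}")
    return " | ".join(parts) or None
```

Even this toy version has to bail out twice (bad XML, unknown schema) before it can show anything, which is the point: the preview is only as good as the agreement on what's in the document.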

> If these were http:// links I can think of one benefit, even if you were
> just requesting the HEAD of each link. Broken links could be post-rendered
> in red with a strikethrough (so you would know the link no longer exists).

Now that is a benefit, the best I've seen so far. But by itself it's not 
convincing due to the problem of false hits. It would be nice if we had 
something like a "page ping" that isn't a GET and isn't a POST... just an "is 
it there?" request that doesn't count as a hit. But we don't have it.
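For what it's worth, HEAD is the closest thing HTTP does give us: it transfers only the headers, so it's cheap on bandwidth, though as I say it still shows up in the server's logs as a request. A minimal sketch of the broken-link check (names are mine, not anything in Konqueror):

```python
# Sketch of the broken-link check: issue a HEAD request per link and
# flag anything that fails or comes back with a 4xx/5xx status. HEAD
# transfers headers only, but most servers still log it as a hit.
import urllib.request
import urllib.error

def link_is_broken(url, timeout=5):
    """Return True if a HEAD request for `url` fails or returns >= 400."""
    req = urllib.request.Request(url, method="HEAD")
    try:
        with urllib.request.urlopen(req, timeout=timeout) as resp:
            return resp.status >= 400
    except urllib.error.HTTPError as e:
        return e.code >= 400   # server answered, but with an error status
    except (urllib.error.URLError, OSError):
        return True            # unreachable host counts as broken
```

The false-hit problem is visible even here: a timeout or a transient server error would paint a perfectly good link red, so you'd want to be conservative about what you strike through.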

BTW, I'm not trying to be negative for negativity's sake. As I mention, it's 
not insignificant effort. I'm wondering if it's truly worth it when a big 
part of what's really needed can be gotten from the current page and 
displayed in a tooltip. 

-- 
dave.leigh@cratchit.org
http://www.cratchit.org

It's like deja vu all over again.
		-- Yogi Berra

