
List:       kde-devel
Subject:    Re: [OT] Any plans to keep up with longhorn?
From:       Kuba Ober <kuba () mareimbrium ! org>
Date:       2003-11-17 18:56:14

On Friday 07 November 2003 06:34 pm, Joachim Eibl wrote:
> Of course a filter will never add new information. But when a picture is
> scaled without proper filter you will immediately see that the original
> picture must be small. When a good filter is used, then you might not
> notice if you don't zoom in too much.
>
> I've attached another pic which also shows the result of an even smoother
> resize algorithm.
>
> From left to right:
> Orig: 16x16
> QImage::scale(): 100x100
> QImage::smoothScale()
> My smoothResize() (see my previous mail)
> My smootherResize() (more complicated and time consuming)

You are cheating on the pixel size. The pixel size in smoothResize() seems to 
be about 1/3 of the original pixel size, and you "interpolate" in between 
those. In smootherResize() you treat the original pixel as having zero 
dimensions, or close to zero.

In fact, as has been said, it's a matter of parametrizable upsampling and 
filtering that will fit the final application.

Let me explain:

1. You upsample your image. Trivially that's done by simple pixel replication. 
Less trivially, it's done by replacing each pixel with a function whose 3D 
shape resembles a square or rectangular (depending on the original's aspect 
ratio) plateau with corners rolling off.

2. You filter the upsampled data. By filtering too much you lose detail and 
contrast. Overfiltering is usually the price of sloppy filters (ones with slow 
rolloff): to keep their artifacts down you have to filter more than you'd like.

3. You sample the filtered result at the output grid positions. (A minimal 
sketch of all three steps follows below.)
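
To make the three steps concrete, here is a minimal sketch, assuming a plain 
8-bit grayscale buffer instead of a QImage. The helper names 
(upsampleReplicate, filter121, sampleTo) and the hard-coded 1-2-1 kernel are 
my own, picked only to show the structure, not a real resizer:

#include <algorithm>
#include <cstdint>
#include <vector>

// Step 1: upsample by integer pixel replication (factor f in each direction).
std::vector<uint8_t> upsampleReplicate(const std::vector<uint8_t>& src,
                                       int w, int h, int f)
{
    std::vector<uint8_t> dst(w * f * h * f);
    for (int y = 0; y < h * f; ++y)
        for (int x = 0; x < w * f; ++x)
            dst[y * w * f + x] = src[(y / f) * w + (x / f)];
    return dst;
}

// Step 2: filter with a separable 1-2-1 kernel, one horizontal and one
// vertical pass; a real resizer would size the kernel to match the
// replication factor instead of hard-coding it.
std::vector<uint8_t> filter121(std::vector<uint8_t> img, int w, int h)
{
    auto clamp = [](int v, int hi) { return std::max(0, std::min(v, hi)); };
    std::vector<uint8_t> tmp(img.size());
    for (int y = 0; y < h; ++y)                       // horizontal pass
        for (int x = 0; x < w; ++x)
            tmp[y * w + x] = uint8_t(
                (img[y * w + clamp(x - 1, w - 1)] +
                 2 * img[y * w + x] +
                 img[y * w + clamp(x + 1, w - 1)]) / 4);
    for (int y = 0; y < h; ++y)                       // vertical pass
        for (int x = 0; x < w; ++x)
            img[y * w + x] = uint8_t(
                (tmp[clamp(y - 1, h - 1) * w + x] +
                 2 * tmp[y * w + x] +
                 tmp[clamp(y + 1, h - 1) * w + x]) / 4);
    return img;
}

// Step 3: sample the filtered buffer at the output grid positions
// (nearest grid point here; the smoothing already happened in step 2).
std::vector<uint8_t> sampleTo(const std::vector<uint8_t>& src,
                              int w, int h, int outW, int outH)
{
    std::vector<uint8_t> dst(outW * outH);
    for (int y = 0; y < outH; ++y)
        for (int x = 0; x < outW; ++x)
            dst[y * outW + x] = src[(y * h / outH) * w + (x * w / outW)];
    return dst;
}

In a real resizer the replication factor and the kernel width would of course 
be derived from the input and output sizes rather than fixed.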

A reasonably computationally efficient and extremely memory efficient 
(negligible temporary storage needed) approach is to represent small (say 3x3 
or 4x4) pixel clusters of the original image in some nice parametric way, say 
using polynomials, piecewise harmonic or piecewise cycloidal functions, or 
even piecewise sinc3 functions, whatever fits the bill. Then you apply 
filtering to the parameters of those properly chosen parametric 
representations, and sample them at exactly the right spots. By choosing the 
parametric representation you can cater to e.g. icons, where spatial contrast 
preservation is usually necessary, vs. photographs, where you don't really 
want to see rectangular pixel boundaries in the final upsized image.
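
One concrete (and hedged) instance of that idea: a Catmull-Rom cubic fitted to 
each 4x4 cluster, i.e. a piecewise polynomial representation sampled at the 
exact output position. The function names below are made up for the example; 
swapping the cubic basis for a harmonic, cycloidal or sinc-like one changes 
the look without changing the structure.

#include <algorithm>
#include <cmath>
#include <cstdint>
#include <vector>

// Cubic polynomial through p1..p2, with tangents derived from p0 and p3
// (Catmull-Rom); t is the fractional position between p1 and p2.
static double catmullRom(double p0, double p1, double p2, double p3, double t)
{
    return 0.5 * ((2.0 * p1) +
                  (-p0 + p2) * t +
                  (2.0 * p0 - 5.0 * p1 + 4.0 * p2 - p3) * t * t +
                  (-p0 + 3.0 * p1 - 3.0 * p2 + p3) * t * t * t);
}

// Sample an 8-bit grayscale image (w x h) at the fractional position (fx, fy)
// by fitting the polynomial to the surrounding 4x4 pixel cluster.
uint8_t sampleBicubic(const std::vector<uint8_t>& src, int w, int h,
                      double fx, double fy)
{
    auto pix = [&](int x, int y) {
        x = std::max(0, std::min(x, w - 1));
        y = std::max(0, std::min(y, h - 1));
        return double(src[y * w + x]);
    };
    int x0 = int(std::floor(fx)), y0 = int(std::floor(fy));
    double tx = fx - x0, ty = fy - y0;
    double row[4];
    for (int i = 0; i < 4; ++i)   // interpolate along x in each of the 4 rows
        row[i] = catmullRom(pix(x0 - 1, y0 - 1 + i), pix(x0, y0 - 1 + i),
                            pix(x0 + 1, y0 - 1 + i), pix(x0 + 2, y0 - 1 + i),
                            tx);
    // then interpolate the four row results along y
    double v = catmullRom(row[0], row[1], row[2], row[3], ty);
    return uint8_t(std::max(0.0, std::min(255.0, v)));
}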

As a side note: the nice thing about the forward FFT / filter / reverse FFT 
route is that 1D FFTs several hundred kilosamples long are readily available 
(the GIMPS code). If you sample and window carefully, you can squeeze the 
whole icon into a 1D sample set, feed it through one of those big FFTs, filter 
(i.e. partially zero) the result, apply the reverse FFT and you're done. Even 
if you assume a 4:1 increase in the number of pixels in the 1D representation 
of the upsized 2D icon, you will probably still fit into an FFT of under 100k 
points. And, if this is all for icons, where numerical precision can be 
further sacrificed, the code can get pretty mean and fast. Obviously it 
requires decent knowledge of numerical methods, platform-specific assembly 
programming and overall decent self-esteem just to tackle it :), but that's 
another story.
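
The core trick (transform, partially zero the spectrum, transform back) fits 
in a few lines. Just to show its shape, here is a toy sketch that uses a naive 
O(n^2) DFT on a short 1D signal; a real implementation would obviously 
substitute a proper FFT and careful windowing:

#include <complex>
#include <vector>

typedef std::complex<double> cd;
static const double kPi = 3.14159265358979323846;

// Naive O(n^2) DFT (and inverse); a stand-in for a real FFT.
static std::vector<cd> dft(const std::vector<cd>& in, bool inverse)
{
    const int n = int(in.size());
    const double sign = inverse ? 1.0 : -1.0;
    std::vector<cd> out(n);
    for (int k = 0; k < n; ++k) {
        cd acc(0.0, 0.0);
        for (int j = 0; j < n; ++j)
            acc += in[j] * std::polar(1.0, sign * 2.0 * kPi * k * j / n);
        out[k] = inverse ? acc / double(n) : acc;
    }
    return out;
}

// Low-pass filter: keep the `keep` lowest frequency bins (and their mirrored
// counterparts, so the result stays real), zero everything else, go back.
std::vector<double> lowpass(const std::vector<double>& signal, int keep)
{
    std::vector<cd> spec =
        dft(std::vector<cd>(signal.begin(), signal.end()), false);
    const int n = int(spec.size());
    for (int k = 0; k < n; ++k)
        if (k > keep && k < n - keep)
            spec[k] = cd(0.0, 0.0);
    std::vector<cd> back = dft(spec, true);
    std::vector<double> out(n);
    for (int i = 0; i < n; ++i)
        out[i] = back[i].real();
    return out;
}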

Alas, back to the topic at hand. The implementation you've proposed is rather 
inefficient, and the end result doesn't look too good either. A simple 2D FIR 
filter would be both simpler and better looking. Anyway, the first things I'd 
get rid of (or replace with a DDA if you really need them) are constructs like

min(int(floor(xOrig)),inWidth-1)
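
For what it's worth, a DDA version of that per-pixel floor/clamp could look 
roughly like this (the function name and the error-accumulator layout are just 
my choices for the example): the source column index is advanced incrementally 
with an integer error term, so there is no floor(), no min() clamp and no 
floating point per output pixel.

#include <vector>

// Precompute the source column for every output column with a DDA.
void mapColumnsDDA(int inWidth, int outWidth, std::vector<int>& srcCol)
{
    srcCol.resize(outWidth);
    int x = 0;     // current source column
    int err = 0;   // fractional source position, in units of 1/outWidth
    for (int i = 0; i < outWidth; ++i) {
        srcCol[i] = x;
        err += inWidth;              // advance by inWidth/outWidth columns
        while (err >= outWidth && x < inWidth - 1) {
            err -= outWidth;
            ++x;
        }
    }
}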

Also, the whole shebang with floating point calculations is overkill. Use 
scaled integers (you're running on an at least 32-bit machine, remember?); 
later you can add floating point running in parallel with the integer code to 
fully utilize multiple pipelines. But add the floating point stuff later, and 
only if you can verify that it actually makes the code run faster. In any 
event, conversions between floating point and integers are costly, they may 
stall the pipelines, and you don't really want to have to think about all 
that. Integers are good enough for starters, and if you feel like coding it in 
assembly they are easier to work with, unless you're comfortable with floating 
point assembly and its handful of quirks.
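
As an illustration of the scaled-integer idea, here is a sketch of linear 
resampling of one row in 16.16 fixed point; the format and the function name 
are my choices, not anything from your code. The inner loop is pure 32-bit 
integer arithmetic:

#include <cstdint>
#include <vector>

// Linear resampling of one row: position and blend weights are 16.16
// fixed-point integers (assumes inWidth fits comfortably in 16 bits).
void resizeRowFixedPoint(const std::vector<uint8_t>& srcRow, int inWidth,
                         std::vector<uint8_t>& dstRow, int outWidth)
{
    dstRow.resize(outWidth);
    const uint32_t step =                  // source step per output pixel
        uint32_t((uint64_t(inWidth - 1) << 16) /
                 (outWidth > 1 ? outWidth - 1 : 1));
    uint32_t pos = 0;
    for (int i = 0; i < outWidth; ++i) {
        uint32_t x = pos >> 16;            // integer part: left source pixel
        uint32_t frac = pos & 0xFFFF;      // fractional part: blend weight
        uint32_t x1 = (x + 1 < uint32_t(inWidth)) ? x + 1 : x;
        // integer linear blend; >> 16 divides by the weight scale
        dstRow[i] = uint8_t((srcRow[x] * (0x10000 - frac) +
                             srcRow[x1] * frac) >> 16);
        pos += step;
    }
}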

Cheers, Kuba Ober

 