On Wednesday 20 June 2007, Moritz Moeller wrote:
> I know. But the pushing is done with a tool, which creates a
> brush-stroke (a bezier) that warps the space in which everything is
> rendered (think of writing the position of every pixel into it; a warp
> brush then modifies this position data).
> This means rendering brush strokes is done as implicitly as possible.
>
> I.e. instead of:
>
> 1. Iterate along spline
> 2. Place brush image at resp. position
>
> one would implement it like so:
>
> 1. Iterate over image
> 2. Get position of resp. pixel.
> 3. Warp position applying current warp field (filled in by any brushes
> that do warping)
> 4. Find closest position on spline that is orthogonal to this point
> 5. Use point to look up any varying properties of the spline along its
> length at this position.
> 6. Render pixel of current brush using this information (possibly
> putting an image of the brush into an offscreen brush cache and looking
> that image up)
>
> This is a lot harder to implement (particularly 4. can get tricky, since
> there might be multiple points returned), but it is very powerful. Since
> all you do is look up functions based on positional data, warping etc.
> all becomes a breeze.

Ew, yes -- that looks pretty hard. I think that'll have to wait until we've
learned more, maybe even until Krita 3.0. It would be pretty hard to
reconcile with Krita's design. (I mean, kimageshop started out as a Gimp
clone only using ImageMagick at the core, while keeping one eye on
Photoshop. Lots of internals are geared towards that kind of application,
not towards dynamically recomputed strokes.)
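Just to check that I read that loop correctly, here is a minimal sketch in
C++. Every name in it (WarpField, Spline, Brush) is invented for
illustration and is not anything that exists in Krita; step 4 is reduced to
a brute-force search over spline samples, which dodges the
multiple-closest-points problem you mention, and "rendering" just prints an
opacity per pixel:

#include <cstdio>
#include <vector>

struct Point { float x, y; };

// The warp field: identity here; a real one would sum the displacements
// painted in by any warp brushes.
struct WarpField {
    Point apply(Point p) const { return p; }
};

// A stroke spline stored as densely sampled points.
struct Spline {
    std::vector<Point> samples;

    // Step 4, approximated: parameter t in [0, 1] of the closest sample.
    float closestT(Point p) const {
        float bestT = 0.0f, bestD = 1e30f;
        for (size_t i = 0; i < samples.size(); ++i) {
            float dx = samples[i].x - p.x, dy = samples[i].y - p.y;
            float d = dx * dx + dy * dy;
            if (d < bestD) {
                bestD = d;
                bestT = float(i) / float(samples.size() - 1);
            }
        }
        return bestT;
    }
};

// Step 5: a varying property looked up along the stroke's length.
struct Brush {
    float opacityAt(float t) const { return 1.0f - t; }  // fades out
};

int main() {
    const int w = 8, h = 8;
    WarpField warp;
    Brush brush;
    Spline stroke;
    stroke.samples = { {1, 1}, {3, 2}, {5, 4}, {7, 7} };

    for (int y = 0; y < h; ++y) {              // 1. iterate over the image
        for (int x = 0; x < w; ++x) {
            Point p = { float(x), float(y) };  // 2. position of the pixel
            Point q = warp.apply(p);           // 3. warp the position
            float t = stroke.closestT(q);      // 4. closest point on spline
            float a = brush.opacityAt(t);      // 5. varying property at t
            std::printf("%.2f ", a);           // 6. "render" the pixel
        }
        std::printf("\n");
    }
    return 0;
}

A real version would of course also use the distance to the curve to look
up the actual brush mask, and something much smarter than a linear search
for step 4 -- which is where the hard part is.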
> If you want to deposit something -- that's merely having a 'paper' layer
> and special brush stroke properties that brushes can put into such a
> layer. The layer then uses them somehow (i.e. diffuses them, if it's
> watercolor etc.).
> Since that paper layer would have certain properties, one can alter them
> at any time and have the layer update (i.e. how much water it absorbs
> and whatnot).

Yes, that sounds very cool, but it's too advanced for what we can achieve
right now. We'll have the dynamic filter and transformation masks and
adjustment layers. Dynamic layer effects (like drop shadow) probably aren't
going to make it -- which means that dynamically recomputing the effect of
brush strokes using natural media simulation and physics filters is even
further away.

The problem with natural media is that it's very computationally intensive,
even for a small area -- the area most simulations work on, even if they
use the GPU to full effect, is just 500x500 pixels.

> See my proposal above. The problem with the system I suggest is of
> course any kind of convolving filter that needs neighbouring
> information. But since you can feed any position into the function chain
> and get the result, if a filter needs neighbouring data, you just need
> to cache pixel samples of neighbouring pixels and provide them to the
> filter. Lastly you need to write filters so that they antialias
> themselves using the filter area of the pixel.

Hm... For that to work, we would need to redesign a couple of classes. And
it would be dependent on Bart's future work on making the tile backend more
efficient, I think.

Am I right in thinking that scaling in these cases doesn't use
interpolation or sampling, but that there's a blur filter or something like
that as a final step to make the image appear smooth?

> We do this stuff when writing RenderMan shaders all the time... people
> have done crazy stuff. There's e.g. a DSO that allows one to render
> Illustrator files as textures at arbitrary resolutions. It uses a tile &
> sample cache (see
> http://renderman.ru/i.php?p=Projects/VtextureEng&v=rdg). The interesting
> bit is that this is using explicit rendering but makes it accessible
> through an implicit interface (feed in a position, return a color). The
> caching of the rendering into an explicit image buffer happens behind
> the scenes.
>
> And since Krita itself would render the image, the access of such a
> buffer would rarely be random. Coherence can be maximized in such a
> situation.

-- 
Boudewijn Rempt
http://www.valdyas.org/fading/index.cgi
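P.S. A toy sketch of that "implicit interface over explicit rendering"
idea as I understand it: callers only ever ask for the colour at a
position, and behind that call a whole tile is rendered on demand and
cached. Everything here is invented for illustration -- it is not how that
DSO works, and none of these classes exist in Krita:

#include <cstdio>
#include <map>
#include <utility>
#include <vector>

struct Color { float r, g, b; };

// Hypothetical implicit texture: ask for a colour at (x, y); tiles are
// rendered explicitly on demand and kept in a cache.
class ImplicitTexture {
public:
    Color colorAt(int x, int y) {
        const int tx = x / TileSize, ty = y / TileSize;
        std::vector<Color> &tile = tileFor(tx, ty);  // cached, or rendered now
        return tile[(y % TileSize) * TileSize + (x % TileSize)];
    }

private:
    static const int TileSize = 64;
    typedef std::map<std::pair<int, int>, std::vector<Color> > TileCache;

    std::vector<Color> &tileFor(int tx, int ty) {
        std::pair<int, int> key(tx, ty);
        TileCache::iterator it = m_cache.find(key);
        if (it != m_cache.end())
            return it->second;                       // cache hit
        // Cache miss: render the whole tile explicitly, then keep it.
        std::vector<Color> tile(TileSize * TileSize);
        for (int j = 0; j < TileSize; ++j) {
            for (int i = 0; i < TileSize; ++i) {
                tile[j * TileSize + i] =
                    renderPixel(tx * TileSize + i, ty * TileSize + j);
            }
        }
        return m_cache[key] = tile;
    }

    // Stand-in for the real explicit renderer (e.g. rasterising an
    // Illustrator file at whatever resolution is needed).
    Color renderPixel(int x, int y) {
        Color c;
        c.r = c.g = c.b = ((x ^ y) & 1) ? 1.0f : 0.0f;  // dummy pattern
        return c;
    }

    TileCache m_cache;
};

int main() {
    ImplicitTexture tex;
    Color a = tex.colorAt(130, 70);  // first access: renders tile (2, 1)
    Color b = tex.colorAt(131, 70);  // second access: cache hit, same tile
    std::printf("%.1f %.1f\n", a.r, b.r);
    return 0;
}

And as you say, when access is coherent most of these lookups end up being
cache hits, so the explicit rendering cost is paid only once per tile.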