From kwin Tue Jul 25 21:47:36 2006
From: Zack Rusin
Date: Tue, 25 Jul 2006 21:47:36 +0000
To: kwin
Subject: Re: Compositing manager
Message-Id: <200607251747.36612.zack () kde ! org>
X-MARC-Message: https://marc.info/?l=kwin&m=115386402831954

On Saturday 22 July 2006 10:40, Thomas Lübking wrote:
> given this insight, (X has no design problem on pixmap usage, i'm so
> happy that i'm stupid =) i see no good reason to exclude XRender from
> KWin FX stuff (as soon as XRender sanely HW accelerated by the
> driver, GL cannot be faster) And if you need to store a pixmap and a
> texture for each drawable (so they don't cover the same bitarray
> internally), you'll double the RAM demand (what's crap, as we just
> learned ;) or need to convert them on the fly for each frame (what's
> not efficient as well)

There's actually a huge number of issues in each one of those. I'm not
sure how good a job I can do of explaining them in an email right
before going to sleep, but I'll try :)

We extensively discussed the semantics of texture_from_pixmap and we
decided that it makes more sense to leave the bindings immutable,
otherwise serious synchronization issues occur.

> (i don't think that pixmaps are stored as textures internally, as
> textures used to need to be x^2 to be efficient while there's no such
> restriction on pixmaps.

That's actually not an issue: all modern cards support NPOT (non power
of two) textures.

> also the nVidia CSS xform wasn't hw
> accelerated (at least not comparable to GL) while the blending was,
> what wouldn't make sense if pixmaps allready where textures
> internally - again: correct me if i'm terribly wrong)

That's (again :) ) a little bit more complicated.
There are two ways in which basic XRender acceleration can be
implemented: on top of the 2D engine (where you give up trying to
accelerate many operations, but you don't have to do expensive engine
switches - from 2D to 3D and back), or directly on top of the 3D
engine (a lot more complicated, but it can support everything). Now if
you picked the 3D engine (which of course makes more technical sense),
step #2 is figuring out how lazy you are and how much you want to
implement. The number of Porter-Duff composition operators in Render
is rather large, and once you combine them with the three ways of
handling the alpha channel and with support for masks, you get a large
permutation of operations to implement.

Now what we do in the Open Source DRI drivers is that on
initialization we statically divide the available vram into dedicated
segments - the start is the framebuffer, after that usually, but not
always, follows the backbuffer, the stencil buffer, a possible scratch
area, then the pixmap area and finally the texture segment. Textures
usually get at least half of the available vram. This allocation is
static (meaning it never changes), so even if you run only 2D
applications the available vram will be pretty small, because the
texture segment will still occupy half of vram (even though it's
completely unused). Just the opposite happens when running only an
OpenGL app (Xgl for example), when the pixmap region is unused
(although the pixmap region usually occupies a lot less than the
texture segment). Yes, it's one of the issues we need to solve in the
drivers.

> So the key issue seems to be to get
> 1. accelerated blending
> 2. accelerated scaling

That's trivial.

> 3. shader usage (to get blurrage etc., just flew over glsl: cool
> stuff ;) into XRender to make it usefull (in doubt by setting X upon
> GL)

XRender will never get shaders. I don't even want to think about it :)
It just begs for trouble. I have some plans for a new rendering
pipeline for X, but they're beyond the scope of this document.
XRender does support filters though, including convolution filters, so
implementing e.g. a Gaussian blur is trivial - but that's something
that no one accelerates right now (mostly because it has been unused).

Hope that helps.
z

-- 
A computer scientist is someone who, when told to 'Go to Hell', sees
the 'go to', rather than the destination, as harmful.

_______________________________________________
Kwin mailing list
Kwin@kde.org
https://mail.kde.org/mailman/listinfo/kwin