List:       haiku-bugs
Subject:    [haiku-bugs] Re: [Haiku] #10877: ClipToPicture is too slow
From:       "pulkomandy" <trac@haiku-os.org>
Date:       2015-11-24 21:03:14
Message-ID: 061.7aa610fb4fe560cb4b4ed31595e1a764@haiku-os.org

#10877: ClipToPicture is too slow
----------------------------------+----------------------------
   Reporter:  pulkomandy          |      Owner:  stippi
       Type:  enhancement         |     Status:  closed
   Priority:  normal              |  Milestone:  R1
  Component:  Servers/app_server  |    Version:  R1/Development
 Resolution:  fixed               |   Keywords:
 Blocked By:                      |   Blocking:
Has a Patch:  0                   |   Platform:  All
----------------------------------+----------------------------
Changes (by pulkomandy):

 * status:  new => closed
 * resolution:   => fixed


New description:

 We now have a working implementation of ClipToPicture. It is the only way
 to perform clipping in the transformed view space (ConstrainClippingRegion
 does not work with transforms). However, ClipToPicture hit tests work by
 alpha-blending pixels, which is slow.
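
 For reference, a minimal sketch of client-side usage (MyView is a
 hypothetical BView subclass; BeginPicture(), EndPicture() and
 ClipToPicture() are the real BView calls):

 #include <Picture.h>
 #include <View.h>

 // Hypothetical view that clips its drawing to an ellipse via
 // ClipToPicture(), the code path this ticket is about.
 class MyView : public BView {
 public:
     MyView(BRect frame)
         : BView(frame, "clip-demo", B_FOLLOW_ALL, B_WILL_DRAW) {}

     virtual void Draw(BRect updateRect)
     {
         // Record the clipping shape into a BPicture.
         BPicture picture;
         BeginPicture(&picture);
         FillEllipse(Bounds());
         EndPicture();

         // Only pixels the picture touched with nonzero alpha remain
         // drawable from here on.
         ClipToPicture(&picture);

         SetHighColor(51, 102, 255);
         FillRect(Bounds()); // clipped to the ellipse
     }
 };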

 WebKit makes heavy use of this feature, which makes it painfully slow. We
 have to make it faster.

 One idea is to compute the untransformed bounds of the area covered by the
 clipping picture, and use rectangle clipping to exclude everything outside
 that area from the drawing. With a big view and a small clipping picture,
 this could cut the time down a lot. This extra clipping rectangle needs to
 be recomputed whenever the AlphaMask is generated, as changes in the view
 state could lead to different results.
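
 A rough sketch of that idea (the Transform type and ComputeClipBounds()
 are illustrative stand-ins, not the actual app_server code):

 #include <Point.h>
 #include <Rect.h>
 #include <SupportDefs.h>

 // Illustrative 2x3 affine matrix standing in for the view transform.
 struct Transform {
     double sx, shx, tx;
     double shy, sy, ty;

     BPoint Apply(BPoint p) const
     {
         return BPoint(sx * p.x + shx * p.y + tx,
             shy * p.x + sy * p.y + ty);
     }
 };

 // Map the clipping picture's bounds through the view transform and
 // return the axis-aligned box around the result. Intersecting the
 // clipping region with this rectangle rejects everything outside the
 // picture before any per-pixel alpha mask lookup happens.
 static BRect
 ComputeClipBounds(BRect pictureBounds, const Transform& transform)
 {
     BPoint corners[4] = {
         transform.Apply(pictureBounds.LeftTop()),
         transform.Apply(pictureBounds.RightTop()),
         transform.Apply(pictureBounds.LeftBottom()),
         transform.Apply(pictureBounds.RightBottom())
     };

     BRect result(corners[0], corners[0]);
     for (int i = 1; i < 4; i++) {
         result.left = min_c(result.left, corners[i].x);
         result.top = min_c(result.top, corners[i].y);
         result.right = max_c(result.right, corners[i].x);
         result.bottom = max_c(result.bottom, corners[i].y);
     }
     return result;
 }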

 Maybe there are other ways to improve this on the drawing side: for
 example, avoiding the alpha blending when the clipping picture is made
 only of fully opaque or fully transparent pixels. This may need changes to
 the AGG rasterizer code.
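
 A rough sketch of such a fast path (MaskIsBinary() and PixelVisible()
 are hypothetical helpers, not the actual AlphaMask or AGG code):

 #include <stddef.h>
 #include <SupportDefs.h>

 // Scan a rendered 8-bit alpha mask once. If every pixel is fully
 // opaque or fully transparent, the rasterizer could skip the alpha
 // blending and simply accept or reject each pixel.
 static bool
 MaskIsBinary(const uint8* mask, size_t pixelCount)
 {
     for (size_t i = 0; i < pixelCount; i++) {
         if (mask[i] != 0 && mask[i] != 255)
             return false; // partial coverage still needs blending
     }
     return true;
 }

 // With a binary mask, the per-pixel clip test reduces to a simple
 // comparison instead of a coverage multiplication.
 static inline bool
 PixelVisible(const uint8* mask, size_t index)
 {
     return mask[index] == 255;
 }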

--

Comment:

 Jua solved this. Thanks!

--
Ticket URL: <https://dev.haiku-os.org/ticket/10877#comment:1>
Haiku <https://dev.haiku-os.org>
Haiku - the operating system.
