List:       fop-dev
Subject:    [jira] [Updated] (FOP-2860) BreakingAlgorithm causes high memory consumption
From:       "Simon Steiner (Jira)" <jira () apache ! org>
Date:       2022-09-08 14:59:00
Message-ID: JIRA.13228264.1555398402000.278015.1662649140036 () Atlassian ! JIRA


     [ https://issues.apache.org/jira/browse/FOP-2860?page=com.atlassian.jira.plugin.system.issuetabpanels:all-tabpanel ]

Simon Steiner updated FOP-2860:
-------------------------------
    Attachment: memory6.patch

> BreakingAlgorithm causes high memory consumption
> ------------------------------------------------
> 
> Key: FOP-2860
> URL: https://issues.apache.org/jira/browse/FOP-2860
> Project: FOP
> Issue Type: Bug
> Affects Versions: 2.3
> Reporter: Raman Katsora
> Priority: Critical
> Attachments: image-2019-04-16-10-07-53-502.png, memory6.patch, test-1500000.fo, test-250000.fo, test-300000.fo
> 
> When a single element (e.g. {{<fo:block>}}) contains a sufficiently large amount of
> text, the FO-to-PDF transformation causes very high memory consumption. For
> instance, transforming a document whose {{<fo:block>}} contains 1.5 million
> characters (~1.5 MB, [^test-1500000.fo]) requires about 3 GB of RAM. The heap dump
> shows 27.5 million {{org.apache.fop.layoutmgr.BreakingAlgorithm.KnuthNode}}
> instances (~2.6 GB). We start to observe the issue at about 300 thousand characters
> in a single element ([^test-300000.fo]), but the high memory consumption is not
> observed when processing 250 thousand characters ([^test-250000.fo]).



--
This message was sent by Atlassian Jira
(v8.20.10#820010)
