
List:       xsl-list
Subject:    Re: [xsl] The time to execute a function is 625 times greater than the sum of the times to execute the statements in the function
From:       Michael Kay <mike@saxonica.com> (via xsl-list-service@lists.mulberrytech.com)
Date:       2020-08-01 21:04:21
Message-ID: 20200801170350.84727@lists.mulberrytech.com

Roger, since everyone is speculating about the causes, if you can package up the test cases you are measuring so I can run them, then I'll take a look and see if I can come up with an explanation. My own guess is that it's likely to be something none of us have thought of (based on my experience that this is usually the case).

I'll also try to record the progress of my investigation, since the blind alleys I go down are often as revealing as the final conclusion.

Michael Kay
Saxonica

> On 1 Aug 2020, at 20:03, Roger L Costello costello@mitre.org <xsl-list-service@lists.mulberrytech.com> wrote:
> Hi Folks,
> 
> I am having great difficulty determining why my XSLT neural network program runs so \
> slowly. I am currently focusing on the f:train function. The Saxon profile tool \
> reports the gross total time for one call to the f:train function as 5,460.326 ms.
> 
> To see how much time each statement in f:train takes, Michael Müller-Hillebrand \
> suggested that I put each statement into its own function. Great idea! I did so. I \
> ran the Saxon profile tool on the modified f:train; below you can see the gross \
> total time required for each statement (each statement is now in its own function). \
> I summed their times and the total came to 8.742 ms. So the f:train function \
> should take 8.742 ms, but instead it takes 5,460.326 ms. How can the time to \
> execute the f:train function be 625 times greater than the sum of the times to \
> execute the statements in the function? Any suggestions?
> /Roger
> 
> Function                                                    Time (ms)
> f:count-inputs-list                                             0.016
> f:create-inputs                                                 0.039
> f:count-targets-list                                            0.004
> f:create-targets                                                0.520
> f:create-hidden-inputs                                          0.060
> f:create-hidden-outputs                                         0.058
> f:create-final-inputs                                           0.054
> f:create-final-outputs                                          0.008
> f:compute-output-errors                                         0.025
> f:compute-hidden-layer-errors                                   2.612
> f:compute-Ek_times_Ok                                           0.030
> f:compute-One_minus_Ok                                          0.017
> f:compute-updated-layer-values                                  0.003
> f:compute-output-transposed                                     0.003
> f:compute-weight-changes                                        0.003
> f:compute-learning-rate-multiplied-by-weight-changes            0.100
> f:compute-updated-who                                           0.034
> f:compute-Ek_times_Ok-v2                                        0.006
> f:compute-One_minus_Ok-v2                                       0.008
> f:compute-updated-layer-values-v2                               0.006
> f:compute-output-transposed-v2                                  0.004
> f:compute-weight-changes-v2                                     0.004
> f:compute-learning-rate-multiplied-by-weight-changes-v2         0.060
> f:compute-updated-wih                                           0.027
> f:compute-neural-network-with-new-wih                           4.959
> f:compute-neural-network-with-new-wih-and-new-who               0.085
> Total Time:                                                     8.742
> 
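A quick sanity check of the 625× figure quoted above (a minimal sketch; the two times are taken directly from the profile figures in the message):

```python
# Compare the gross time reported for one call to f:train against the
# summed per-statement times from the Saxon profile listed above.
f_train_ms = 5460.326          # gross total time for one f:train call
sum_of_statements_ms = 8.742   # sum of the per-statement function times

ratio = f_train_ms / sum_of_statements_ms
print(f"f:train takes about {ratio:.0f}x the sum of its parts")  # about 625x
```

So the "625 times greater" claim is consistent with the two reported numbers; the open question is where the unattributed ~5,450 ms actually goes.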
--~----------------------------------------------------------------
XSL-List info and archive: http://www.mulberrytech.com/xsl/xsl-list
EasyUnsubscribe: http://lists.mulberrytech.com/unsub/xsl-list/651070
or by email: xsl-list-unsub@lists.mulberrytech.com
--~--


