
List:       velocity-user
Subject:    Re: passing parameters to #parse'ed files
From:       "Geir Magnusson Jr." <geirm () optonline ! net>
Date:       2001-10-24 23:04:16

On 10/24/01 9:13 AM, "Darin Kelkhoff" <darink@sportingnews.com> wrote:

> [snip]
>>>> Why is the VM solution below different than that of your parameterized
>>>> tags?
>>>> 
>>>> <table>
>>>>   #foreach( $item in $itemset )
>>>>      #foo( $item.foo $item.bar )
>>>>   #end
>>>> </table>
>>>> 
>>>> (or whatever you do to get params)
>>>> 
>>>> This seems a direct parallel to your tag approach.
>>>> 
>>>> What are the drawbacks of this for you?
>>> 
>>> so, if we would have all our old "tags" now exist as macros in a macro
>>> library file.  good, then we would still have a consistent calling method
>>> (whether or not params were needed).  i guess the only trick now is the
>>> management of the vm library file(s).  we can have as many as we want,
>>> correct?
>> 
>> Yes.  They don't even have to be in a file, technically.  Could come from
>> anywhere, like a jar, database or gerbil, assuming you write a
>> GerbilResourceLoader ...
> [snip]
> 
> okay, so right now i'm pretty well down to considering 2 solutions.  the
> first would be just defining all our old tags to be macros and placing them
> in vm libraries. 
> 
> the second would be a macro that simulates the behaviour i was looking for in
> #parse (thanks, bill burton):
> 
> <table>
>  #foreach($i in $iterate)
>     ## Generic way to specify named parameters
>     #set($params = [ "param1=$bar1", "param2=$bar2" ])
>     ## The #parsetag macro calls a helper tool in the context
>     ## to parse parameter names out of the $params ArrayList
>     ## putting them into the Context.  It then calls #parse.
>     #parsetag("foo.vm" $params)
>  #end
> </table>
> 
> so the question i pose to the list now is, which of these methods would bear
> the least expense at run time?
> 
> bearing in mind that we have around 400 tags in production at all times,
> (that is, 4 sites each with ~40 for layout, and about 100 distinct templates
> each with 2 or 3 for data) with edits being made and new tags installed
> daily, our macro library files would be fairly large and updated quite often.
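
As a concrete illustration of the first option, a library version of the
hypothetical #foo macro from the example above might look like this (the
markup inside the macro is purely illustrative):

```
## In a VM library file, e.g. VM_global_library.vm
#macro( foo $left $right )
  <tr>
    <td>$left</td>
    <td>$right</td>
  </tr>
#end
```

Each #foo( $item.foo $item.bar ) call in a template then expands with the
two arguments bound to $left and $right.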

Would you push the macro libs out to production ad-hoc?  How many times a
day?
 
> does performance using macro libs degrade when they reach sizes like this?

The honest truth is "I don't know", but I'm not frightened:

- management performance should scale like a map, since a map is what is used
to hold and look these things up.

- when a template is used for the first time, it is parsed, initialized and
cached, and any VMs it uses are also parsed, initialized and cached, so even
if the management of the VM lib didn't scale well (and I think it will...),
once a template is parsed and cached, it doesn't go back to the VM management
system again.

> how does the caching work with a macro in a library?

For the most part, all macros are treated the same.  If they come from a
library, there is a switch that lets them be reloaded when the template that
they come from is changed.

For this to work, the templates that *use* the macro also can't be cached,
as once they are cached, they don't ever go back to the VM system.
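
In velocity.properties terms, that combination would be something like the
following (property names as in Velocity 1.x; the resource-loader prefix
depends on your configuration):

```
# Reload macro libraries when their source file changes
velocimacro.library.autoreload = true

# Required for autoreload: the loader must not cache templates
file.resource.loader.cache = false
```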

> if a template 
> containing one of these macros is cached, is the library rechecked every time
> the template is used?

No.

> 
> the second solution seems to be more costly every time it's used, as (i
> assume) it redefines the macro used by foo.vm every time foo.vm is parsed.

Yes, it could very well be.  There might be some optimization tricks, but I'm
not sure.
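
For what it's worth, the helper tool behind a #parsetag-style macro probably
does little more than split name=value pairs into the context before #parse
runs.  A plain-Java sketch of that idea (a Map stands in for Velocity's
Context here, and the class and method names are made up):

```java
import java.util.HashMap;
import java.util.List;
import java.util.Map;

public class TagParamTool {
    // Splits entries like "param1=someValue" on the first '=' and stores
    // them in the context map, so the #parse'ed template can use $param1.
    public static void putParams(Map<String, Object> context, List<String> params) {
        for (String p : params) {
            int eq = p.indexOf('=');
            if (eq < 0) continue;  // skip malformed entries
            context.put(p.substring(0, eq), p.substring(eq + 1));
        }
    }

    public static void main(String[] args) {
        Map<String, Object> ctx = new HashMap<>();
        putParams(ctx, List.of("param1=red", "param2=blue"));
        System.out.println(ctx.get("param1") + " " + ctx.get("param2"));
    }
}
```

The per-call cost is the string splitting and context puts, on top of
whatever #parse itself costs for the target template.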
 
> if runtime efficiency were even, i'd like to go with sol. 1 (the libraries),
> as it is the most intuitive for developers compared to what we currently
> have.  if anyone can give me reason why it's a bad solution though, i'd
> appreciate hearing it now.

I think libraries are the way to go.  The fact that it is 'conventional'
means that improvements to the VM system will benefit you, and your app may
also influence "large-scale features and practice".
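
And multiple library files are just a comma-separated list in
velocity.properties, so you can split them however makes sense for your
sites (the filenames here are made up):

```
velocimacro.library = VM_global_library.vm, layout_macros.vm, data_macros.vm
```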

In the end, I think what will matter is how often changes are deployed to
production, and what you expect to happen when changes are deployed.
Restarting the webapp would be the best thing, but you might not be able to
afford that.  Adding autoload and some kind of cache dump might be the way
to go.
 
> thanks much,
> --darin

-- 
Geir Magnusson Jr.                       geirm@optonline.net
System and Software Consulting
You're going to end up getting pissed at your software
anyway, so you might as well not pay for it. Try Open Source.
