
List:       kde-devel
Subject:    Re: Neural network window placement policy!
From:       Andreas Schlapbach <schlpbch () bluewin ! ch>
Date:       1999-12-13 18:43:42

Nicolas Brodu wrote:
> 
> Cristian Tibirna wrote:
> >
> > On Sun, 12 Dec 1999, Nicolas Brodu wrote:
> >
> > > I'm studying neural networks here, and as a practical experiment I
> > > implemented a neural network window placement policy.
> >
> > Nice.
> 
> As I said, just a practical experiment. Here are more details, as I didn't
> want the first mail to be too technical. I shouldn't even have spoken
> about this dimension stuff or the training methods.
> It's just a 2-layer perceptron, with 23 inputs, 10 tanh hidden neurons, and
> 2 outputs. I wanted to code something we saw in the lessons, to practice,
> and the 2 classical structures are multi-layer perceptrons and radial basis
> functions. Due to the number of inputs (23), an MLP should be more efficient.
> The inputs are:
> - Position and size of the 5 largest windows (20 inputs)
> - Size of the window to place (2 inputs)
> - Total number of windows on this desktop (1 input)
> All coordinates are relative, between 0 and 1. Thus, it's resolution
> independent, and I simply multiply by the current resolution to get the
> actual position from the outputs.
> If you have more/better ideas for the inputs, please tell.
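
For the curious, the encoding described above could be sketched like this
(function and variable names are my guesses for illustration, not the actual
kwin code; the scaling of the window count is also an assumption):

```python
def encode_inputs(windows, new_size, screen, n_windows):
    """Build the 23-float input vector, all values in [0, 1].
    windows:   [(x, y, w, h)] of up to 5 largest windows, in pixels.
    new_size:  (w, h) of the window to place.
    screen:    (width, height) resolution.
    n_windows: total window count on this desktop."""
    sw, sh = screen
    inputs = []
    for x, y, w, h in windows[:5]:              # 5 windows x 4 values = 20 inputs
        inputs += [x / sw, y / sh, w / sw, h / sh]
    while len(inputs) < 20:                     # pad if fewer than 5 windows
        inputs += [0.0, 0.0, 0.0, 0.0]
    nw, nh = new_size
    inputs += [nw / sw, nh / sh]                # 2 inputs: size of window to place
    inputs.append(min(n_windows / 10.0, 1.0))   # 1 input: window count (scaled)
    return inputs
```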
> 
> > 1) How's the performance hit? Smart placement and snap to window/border
> > are slowing down kwin *a lot*. I am planning a code optimization session
> > some times before KDE-2, but I doubt I can gain much.
> 
> I doubt it also. Of course, I don't say this doesn't take any time at all,
> but still it must be quite fast and I didn't notice any slowdown on my
> K6-233 (from the calculations, saving with KConfig is another problem).
> Of course, I use backpropagation, and the training is just stochastic
> gradient descent. It couldn't be faster (which doesn't imply it's ultra-fast,
> of course).
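To make the structure concrete, here is a rough sketch of a 23-10-2 perceptron
with tanh hidden units and linear outputs, trained by stochastic gradient
descent backpropagation as described above. The weight initialization,
learning rate, and linear output layer are my assumptions, not details from
the actual implementation:

```python
import math
import random

class MLP:
    """23-10-2 perceptron, tanh hidden layer, linear outputs.
    Trained per-sample by stochastic gradient descent (illustrative sketch)."""

    def __init__(self, n_in=23, n_hid=10, n_out=2, seed=0):
        rng = random.Random(seed)
        # +1 column per row for the bias weight
        self.w1 = [[rng.uniform(-0.5, 0.5) for _ in range(n_in + 1)]
                   for _ in range(n_hid)]
        self.w2 = [[rng.uniform(-0.5, 0.5) for _ in range(n_hid + 1)]
                   for _ in range(n_out)]

    def forward(self, x):
        xb = x + [1.0]                                  # bias input
        self.h = [math.tanh(sum(w * v for w, v in zip(row, xb)))
                  for row in self.w1]
        hb = self.h + [1.0]
        self.y = [sum(w * v for w, v in zip(row, hb)) for row in self.w2]
        return self.y

    def train_step(self, x, target, lr=0.1):
        """One stochastic gradient step on squared error; returns the loss."""
        y = self.forward(x)
        err_out = [t - yi for t, yi in zip(target, y)]  # output-layer deltas
        hb = self.h + [1.0]
        # Hidden deltas, using tanh'(a) = 1 - tanh(a)^2; computed with the
        # *old* output weights, before they are updated below.
        err_hid = [(1 - h * h) * sum(self.w2[k][j] * err_out[k]
                                     for k in range(len(err_out)))
                   for j, h in enumerate(self.h)]
        xb = x + [1.0]
        for k, row in enumerate(self.w2):
            for j in range(len(row)):
                row[j] += lr * err_out[k] * hb[j]
        for j, row in enumerate(self.w1):
            for i in range(len(row)):
                row[i] += lr * err_hid[j] * xb[i]
        return sum(e * e for e in err_out)
```

Repeated calls to `train_step` on collected (inputs, chosen position) pairs
would drive the outputs toward the user's preferred placements.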

From my experience with neural networks, I know that they need a lot of
training data (>500 samples) in order to produce useful results. What are
your expectations? When do you think your net will converge?

What about using a partially recurrent network (Jordan/Elman)? I think the
previous positions of windows have a _lot_ of influence on the placement
of the next window. With the above-mentioned networks this can be taken
into account. IMHO an MLP can't do so because it is feed-forward only.
 
Hack on,
Andreas

--
> Linux - Where do you want to go tomorrow? <<

Andreas Schlapbach
mailto:schlpbch@iam.unibe.ch
http://iamexwiwww.unibe.ch/studenten/schlpbch
