List:       kde-devel
Subject:    Re: Neural network window placement policy!
From:       Nicolas Brodu <brodun () aston ! ac ! uk>
Date:       1999-12-13 12:30:48

Cristian Tibirna wrote:
> 
> On Sun, 12 Dec 1999, Nicolas Brodu wrote:
> 
> > I'm studying neural networks here, and as a practical experiment I
> > implemented a neural network window placement policy.
> 
> Nice.

As I said, it's just a practical experiment. Here are more details; I didn't
want the first mail to be too technical, and I probably shouldn't even have
mentioned the dimension stuff or the training methods.
It's just a 2-layer perceptron, with 23 inputs, 10 tanh hidden neurons, and
2 outputs. I wanted to code something we saw in the lectures, to practice,
and the two classical architectures are multi-layer perceptrons and radial
basis functions. Given the number of inputs (23), an MLP should be the more
efficient of the two.
The inputs are:
- Position and size of the 5 largest windows (20 inputs)
- Size of the window to place (2 inputs)
- Total number of windows on this desktop (1 input)
All coordinates are relative, between 0 and 1. Thus it's resolution
independent, and I simply multiply by the current resolution to get the
actual position from the outputs.
If you have more/better ideas for the inputs, please tell.
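
For illustration, here is roughly how the input vector could be assembled
(the names and the WinInfo struct are made up for this sketch, not the
actual kwin code):

  // Build the 23 normalized inputs from the window geometries.
  struct WinInfo { double x, y, w, h; };  // pixel geometry of one window

  // 'largest' holds the 5 largest windows on the desktop (zero-padded if
  // there are fewer), 'newWin' is the window to place.
  void buildInputs(const WinInfo largest[5], const WinInfo& newWin,
                   int windowCount, double screenW, double screenH,
                   double inputs[23])
  {
      int k = 0;
      for (int i = 0; i < 5; ++i) {          // 5 windows * 4 values = 20
          inputs[k++] = largest[i].x / screenW;
          inputs[k++] = largest[i].y / screenH;
          inputs[k++] = largest[i].w / screenW;
          inputs[k++] = largest[i].h / screenH;
      }
      inputs[k++] = newWin.w / screenW;      // size of the window to place
      inputs[k++] = newWin.h / screenH;
      inputs[k++] = windowCount;             // count, possibly rescaled too
  }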
 
> 1) How's the performance hit? Smart placement and snap to window/border
> are slowing down kwin *a lot*. I am planning a code optimization session
> some times before KDE-2, but I doubt I can gain much.

I doubt it too. I'm not saying it takes no time at all, but it must be quite
fast: I didn't notice any slowdown on my K6-233 from the calculations
themselves (saving with KConfig is another problem).
I use backpropagation, and the training is just stochastic gradient descent,
so it couldn't really be simpler or faster (which doesn't mean it's
ultra-fast, of course).
And it's the user's choice anyway...
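
For concreteness, one stochastic backpropagation step on such a 23-10-2 net
looks roughly like this (a sketch assuming a squared error and a linear
output layer; the real code may differ):

  #include <cmath>

  const int NI = 23, NH = 10, NO = 2;
  double w1[NH][NI], b1[NH];   // layer 1: input -> hidden
  double w2[NO][NH], b2[NO];   // layer 2: hidden -> output

  // One on-line gradient descent step on a single (input, target) pair.
  void trainOne(const double x[NI], const double target[NO], double eta)
  {
      double h[NH], y[NO];

      // Forward pass: tanh hidden layer, linear output.
      for (int j = 0; j < NH; ++j) {
          double s = b1[j];
          for (int i = 0; i < NI; ++i) s += w1[j][i] * x[i];
          h[j] = std::tanh(s);
      }
      for (int o = 0; o < NO; ++o) {
          double s = b2[o];
          for (int j = 0; j < NH; ++j) s += w2[o][j] * h[j];
          y[o] = s;
      }

      // Backward pass: error deltas for the squared error.
      double dOut[NO];
      for (int o = 0; o < NO; ++o) dOut[o] = y[o] - target[o];
      double dHid[NH];
      for (int j = 0; j < NH; ++j) {
          double s = 0;
          for (int o = 0; o < NO; ++o) s += dOut[o] * w2[o][j];
          dHid[j] = s * (1 - h[j] * h[j]);   // derivative of tanh
      }

      // Gradient descent updates, learning rate eta.
      for (int o = 0; o < NO; ++o) {
          for (int j = 0; j < NH; ++j) w2[o][j] -= eta * dOut[o] * h[j];
          b2[o] -= eta * dOut[o];
      }
      for (int j = 0; j < NH; ++j) {
          for (int i = 0; i < NI; ++i) w1[j][i] -= eta * dHid[j] * x[i];
          b1[j] -= eta * dHid[j];
      }
  }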

> 2) How do you treat overtraining? I wouldn't like my windows start to pop
> up on the screen of the neighbour after 2-3 months :-)

I don't. This is on-line training, and overfitting would only happen on a
finite set of examples, with repetition. Here I just take each new position
from the user. After 2-3 months, if the user is consistent in his/her
choices, we might expect a good approximation to be reached, and then he/she
doesn't have to move the window so often... If the user keeps changing
his/her placement policy (from the neural network's limited point of view,
that is), then the network will just try to fit the new positions.
But I insist once more: neural networks are just approximations, and in
practice the result might not be completely satisfying. Still, it cannot be
worse than the 'Random' placement policy, and it's a lot of fun.
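
So in effect every placement the user corrects just becomes one more
training example, something like this (hypothetical hook, not the actual
kwin interface):

  // Called when the user has finished moving a newly placed window.
  void onUserMovedWindow(const double inputs[23],
                         double finalX, double finalY,
                         double screenW, double screenH)
  {
      const double target[2] = { finalX / screenW, finalY / screenH };
      trainOne(inputs, target, 0.05);   // one stochastic step, small rate
  }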
 
> > Problems:
> > - The training was difficult, and gave poor results. So 'smart' isn't very
> > well approximated. (If someone has a good code to minimize a function in 351
> > dimensions, please contact me*). At least it cannot be worse than 'Random'...
> 
> Wow! I will have to look at the code. I'm almost sure you don't need more
> than three nodes per window. That makes 30-40 nodes most of the time.

Actually, I first added the desktop number and the name of the application
to the input parameters, but since I didn't know how to generate a relevant
training set for those, I reduced the inputs to the 23 presented above, so
it's no longer 351.
Still, 23*10 weights + 10 biases for layer 1, and 10*2 weights + 2 biases
for layer 2, gives 262 parameters to adjust in the initial batch training.
Hence minimizing the error function in a 262-dimensional space, with only
10 hidden nodes.
Of course, adding more nodes might decrease the error further, but then it's
even more difficult and slower to train. I tried 15 and 30 hidden neurons,
and from the results I got it's not worth the hassle.
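(In general, with H hidden neurons this net has 23*H weights + H biases in
layer 1 and 2*H weights + 2 biases in layer 2, i.e. 26*H + 2 parameters, so
15 hidden neurons already means a 392-dimensional search space and 30 means
782.)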

> > - KConfig did corrupt the config file when called from kwin constructor. I
> > don't know why, but this means that all the parameters are hardcoded for now,
> > and anything the network learns will be lost the next session.
> 
> Hmmm! Rather dishabilitating.

Yep. That's one reason why the code isn't committed yet (together with the
upcoming Krash release, and the fact that I wanted to talk about it here
first). Perhaps I could hack up an fstream-based solution...
But really, if someone knows about this KConfig problem (and also why it is
so slow when saving), the code is simple and clean and could be committed at
any time.
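
The fstream fallback could be as simple as dumping the raw weights to a text
file and reading them back at startup, roughly like this (sketch only, the
file path would be something to decide):

  #include <fstream>

  // Save the weights of the 23-10-2 net as plain text.
  void saveWeights(const char* path)
  {
      std::ofstream f(path);
      for (int j = 0; j < NH; ++j) {
          for (int i = 0; i < NI; ++i) f << w1[j][i] << ' ';
          f << b1[j] << '\n';
      }
      for (int o = 0; o < NO; ++o) {
          for (int j = 0; j < NH; ++j) f << w2[o][j] << ' ';
          f << b2[o] << '\n';
      }
  }

  // Load them back; keep the hardcoded defaults if the file is missing.
  void loadWeights(const char* path)
  {
      std::ifstream f(path);
      if (!f) return;
      for (int j = 0; j < NH; ++j) {
          for (int i = 0; i < NI; ++i) f >> w1[j][i];
          f >> b1[j];
      }
      for (int o = 0; o < NO; ++o) {
          for (int j = 0; j < NH; ++j) f >> w2[o][j];
          f >> b2[o];
      }
  }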

Cheers,
Nicolas

-- 
Life is a sexually transmitted fatal disease. (W. Allen?)
