Following a corruption issue in my bayes database I have deleted all the
bayes* files and set about relearning. This looks promising:

# /kolab/bin/sa-learn --dbpath /kolab/var/amavisd/.spamassassin --ham /kolab/var/imapd/spool/domain/e/example.com/s/shared^learn-ham
Learned tokens from 962 message(s) (963 message(s) examined)

but immediately afterwards I get this:

# /kolab/bin/sa-learn --dbpath /var/kolab/amavisd/.spamassassin --dump magic
netset: cannot include 127.0.0.1/32 as it has already been included
0.000          0          3          0  non-token data: bayes db version
0.000          0        321          0  non-token data: nspam
0.000          0          1          0  non-token data: nham
0.000          0      13493          0  non-token data: ntokens
0.000          0 1257931035          0  non-token data: oldest atime
0.000          0 1257932330          0  non-token data: newest atime
0.000          0 1257932333          0  non-token data: last journal sync atime
0.000          0 1257931036          0  non-token data: last expiry atime
0.000          0          0          0  non-token data: last expire atime delta
0.000          0          0          0  non-token data: last expire reduction count

So what happened to the other 961 pieces of ham I've supposedly learnt
from? Or have I misunderstood what --dump magic returns? How do I get up
to the required 200 in these circumstances?

Ta
--
Chris
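One detail that may or may not matter, re-reading the two invocations above: the learn command and the dump command don't pass the same --dbpath (/kolab/var/amavisd/... vs /var/kolab/amavisd/...). A trivial shell check, with both paths copied verbatim from the commands above (the comparison itself is just illustrative):

```shell
# Paths as given to the two sa-learn invocations above
learn_db=/kolab/var/amavisd/.spamassassin   # used with --ham
dump_db=/var/kolab/amavisd/.spamassassin    # used with --dump magic

# Similar-looking, but are they the same directory?
if [ "$learn_db" = "$dump_db" ]; then
    echo "same dbpath"
else
    echo "different dbpath: $learn_db vs $dump_db"
fi
```

If those really are two different directories on this box, the dump would be reading a different bayes database from the one the ham was just learnt into.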