Re: [Evolution] Problem with junk and spam filters



On Sun, 2005-12-04 at 01:38 +0100, guenther wrote:
Slight correction: SpamAssassin's built-in /Bayes/ filter needs to be
trained. The default SA rules work from the very first message, as do
the network tests. Bayes needs to learn 200 Spam and Ham /each/ before
kicking in (default install).

<snip>

[1] Actually, "mails" are never learned. Their contents, the individual
words, are learned and associated with the overall classification of the
message. So according to the Bayes filter, some words are strong signs of
being either Ham or Spam, whereas others aren't. (Basically, just keep in
mind that words are learned, rather than entire mails.)
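
If I follow that, the quickest way past the 200-message threshold should be
to feed sa-learn my already-sorted folders by hand and then check the token
database. Something along these lines, I think (the mbox paths are just
examples, not necessarily where Evolution keeps its folders):

    # teach Bayes from mail I have already sorted (paths are examples)
    sa-learn --spam --mbox ~/mail/spam-folder.mbox
    sa-learn --ham  --mbox ~/mail/inbox.mbox

    # show how many ham/spam messages Bayes has learned so far
    sa-learn --dump magic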

I find that the "learning" is still producing too many false negatives: I'm
getting repeat spam that is similar to already "learned" messages.
I found my SpamAssassin settings in Webmin, and the "Hits above which a
message is considered spam" setting is at 5.  I don't know what a good
value for this is.  I'd like to make it more discriminating in reasonable
steps, but I need some practical pointers on how to fine-tune it.  Is there
a generally accepted starting value for this?  Are any of the other
settings more relevant here?  I currently have:
Hits above which a message is considered spam:  5
Whitelist score factor:                         0.5
Number of times to check From: address MX:      2
Seconds to wait between MX checks:              2
Skip RBL open-relay check?                      No
Seconds to wait for RBL queries:                30
Number of Received: headers to check with RBL:  2
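
From what I can tell (I may have the mapping wrong), that first Webmin
field corresponds to required_score (formerly required_hits) in
SpamAssassin's local.cf, so adjusting it in small steps would look roughly
like this; the numbers are only illustrative, not recommendations:

    # /etc/mail/spamassassin/local.cf -- location may differ per distro

    # lower = stricter; the shipped default is 5.0
    required_score 4.5

    # make sure Bayes is enabled at all
    use_bayes 1

    # let very high- and low-scoring mail train Bayes automatically
    bayes_auto_learn 1

    # optionally give a near-certain Bayes hit more weight
    # score BAYES_99 4.0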






