It is widely recognized that, when learning prediction models from data, the use of relevant prior knowledge can substantially improve performance. In many application areas of machine learning and data mining, prior knowledge
concerning the sign of influence (positive or negative) of predictor variables on the response variable is available.
Such a priori knowledge can be translated to the requirement that the predicted response should be a (partially)
monotone function of the predictors.
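As a minimal sketch of what this requirement means in practice, the hypothetical helper below checks whether a prediction function is non-decreasing in one chosen predictor while the others are held fixed (the function name, toy scorer, and evaluation grid are all illustrative assumptions, not part of any specific method discussed at the workshop):

```python
# Illustrative (hypothetical) check of partial monotonicity: is the
# prediction non-decreasing in predictor `feature` at each base point?
def is_monotone_in(predict, points, feature, grid):
    for base in points:
        prev = None
        for v in grid:
            x = list(base)
            x[feature] = v  # vary one predictor, hold the rest fixed
            y = predict(x)
            if prev is not None and y < prev:
                return False  # found a monotonicity violation
            prev = y
    return True

# Toy linear scorer: positive influence of x[0], negative influence of x[1].
score = lambda x: 2.0 * x[0] - 1.5 * x[1]

base_points = [(0.0, 0.3), (0.5, 0.8)]
grid = [0.0, 0.25, 0.5, 0.75, 1.0]

print(is_monotone_in(score, base_points, 0, grid))  # True: increasing in x[0]
print(is_monotone_in(score, base_points, 1, grid))  # False: decreasing in x[1]
```

A model is partially monotone in this sense when such a check passes for every predictor with a known positive sign of influence (and the mirror-image check passes for the negative ones).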
There are also many applications where a non-monotone model would be considered unfair or unreasonable.
Try explaining to a rejected job applicant why someone who scored worse on all application criteria got the job!
The same holds for many other application areas, such as credit rating and university entrance selection.
These considerations have motivated the development of learning algorithms that are guaranteed to produce
(or have a bias towards) monotone models.
Examples are monotone versions of: classification trees, neural networks, rule learning, Bayesian networks,
nearest neighbor methods and rough sets.
Work on this subject has, however, been scattered across different research communities (machine learning, data mining,
neural networks, statistics, and operations research), and our aim is to bring researchers from these different
fields together in this workshop.
We aim to provide a forum for the discussion of recent advances in the use of machine learning and
data mining methods for learning monotone models, and to offer researchers and practitioners an opportunity
to identify promising new research directions.
MoMo 2009 is a full-day workshop organized at ECML PKDD 2009 in Bled, Slovenia.
It will take place on Monday September 7, the first day of the conference.
The workshop is part of the main conference, and is free of charge for attendees of the conference.