References

Altman N.S. An Introduction to Kernel and Nearest-Neighbor Nonparametric Regression. The American Statistician. 1992;46(3):175–185.

Baradaran Hashemi H, Yazdani N, Shakery A, Pakdaman Naeini M. Application of ensemble models in web ranking. 2010 5th International Symposium on Telecommunications. 2010:726–731. doi: 10.1109/ISTEL.2010.5734118.

Breiman L. Random Forests. Machine Learning. 2001;45:5–32.

Breiman L, Friedman J.H, Olshen R.A, Stone C.J. Classification and Regression Trees. Chapman and Hall; 1984.

Cohen W.W. Fast Effective Rule Induction. Machine Learning: Proceedings of the Twelfth International Conference. 1995.

Cortes C, Vapnik V. Support-Vector Networks. Machine Learning. 1995;20(3):273–297.

Cover T.M, Thomas J.A. Entropy, Relative Entropy, and Mutual Information. In: Elements of Information Theory. John Wiley and Sons; 1991:12–49.

Dietterich T.G. Ensemble Methods in Machine Learning. 2000. Retrieved from http://www.eecs.wsu.edu/~holder/courses/CptS570/fall07/papers/Dietterich00.pdf.

Fisher R.A. The use of multiple measurements in taxonomic problems. Annals of Eugenics. 1936;7(2):179–188. doi: 10.1111/j.1469-1809.1936.tb02137.x.

Fletcher R. Practical Methods of Optimization. New York: John Wiley; 1987.

Gashler M, Giraud-Carrier C, Martinez T. Decision Tree Ensemble: Small Heterogeneous Is Better Than Large Homogeneous. 2008 Seventh International Conference on Machine Learning and Applications. 2008:900–905. doi: 10.1109/ICMLA.2008.154.

Grabusts P. The Choice of Metrics for Clustering Algorithms. Proceedings of the 8th International Scientific and Practical Conference. 2011;II(1):70–76.

Haapanen R, Lehtinen K, Miettinen J, Bauer M.E, Ek A.R. Progress in adapting k-NN methods for forest mapping and estimation using the new annual Forest Inventory and Analysis data. In: Third Annual Forest Inventory and Analysis Symposium; 2001:87.

Hill T, Lewicki P. Statistics: Methods and Applications. Tulsa, OK: StatSoft; 2007:815.

Hsu C.-W, Chang C.-C, Lin C.-J. A practical guide to support vector classification. Taipei: Department of Computer Science, National Taiwan University; 2003.

Laine A. Neural Networks. In: Encyclopedia of Computer Science. 4th ed. John Wiley and Sons Ltd; 2003:1233–1239.

Langley P, Simon H.A. Applications of machine learning and rule induction. Communications of the ACM. 1995;38(11):54–64. doi: 10.1145/219717.219768.

Li E.Y. Artificial neural networks and their business applications. Information & Management. 1994;27(5):303–313. doi: 10.1016/0378-7206(94)90024-8.

Matan O, et al. Handwritten Character Recognition Using Neural Network Architectures. 4th USPS Advanced Technology Conference. 1990:1003–1011.

McInerney D. Remote Sensing Applications: k-NN Classification. Remote Sensing Workshop. 2005. Retrieved April 27, 2014, from http://www.forestry.gov.uk/pdf/DanielMcInerneyworkshop.pdf/$FILE/DanielMcInerneyworkshop.pdf.

Meagher P. Calculating Entropy for Data Mining. O’Reilly OnLamp.com PHP Dev Center. 2005. Retrieved from http://www.onlamp.com/pub/a/php/2005/01/06/entropy.html?page=1.

Mierswa I, Wurst M, Klinkenberg R, Scholz M, Euler T. YALE: Rapid prototyping for complex data mining tasks. Proceedings of the ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 2006:935–940. doi: 10.1145/1150402.1150531.

Montgomery J.M, Hollenbach F.M, Ward M.D. Improving predictions using ensemble Bayesian model averaging. Political Analysis. 2012;20(3):271–291.

Patel P. Predicting the Future of Drought Prediction. IEEE Spectrum. 2012. Retrieved April 26, 2014, from http://spectrum.ieee.org/energy/environment/predicting-the-future-of-drought-prediction.

Peterson L. K-Nearest Neighbor. Scholarpedia. 2009. Retrieved from http://www.scholarpedia.org/article/K-nearest_neighbor.

Polikar R. Ensemble Based Systems in Decision Making. IEEE Circuits and Systems Magazine. 2006;6(3):21–45.

Predicting Drought. National Drought Mitigation Center; 2013. Retrieved April 26, 2014, from http://drought.unl.edu/DroughtBasics/PredictingDrought.aspx.

Process Software. Introduction to Bayesian Filtering. PreciseMail Whitepapers. 2013:1–8. Retrieved from www.process.com.

Quinlan J.R. Induction of Decision Trees. Machine Learning. 1986;1:81–106.

Rish I. An empirical study of the naive Bayes classifier. IBM Research Report. 2001.

Sahami M, Dumais S, Heckerman D, Horvitz E. A Bayesian Approach to Filtering Junk E-Mail. Learning for Text Categorization: Papers from the 1998 Workshop. AAAI Technical Report. 1998.

Saian R, Ku-Mahamud K.R. Hybrid Ant Colony Optimization and Simulated Annealing for Rule Induction. 2011 UKSim 5th European Symposium on Computer Modeling and Simulation. 2011:70–75. doi: 10.1109/EMS.2011.17.

Shannon C.E. A Mathematical Theory of Communication. Bell System Technical Journal. 1948;27:379–423.

Smola A.J, Schölkopf B. A tutorial on support vector regression. Statistics and Computing. 2004;14(3):199–222.

Tan P.-N, Steinbach M, Kumar V. Classification and Classification: Alternative Techniques. In: Introduction to Data Mining. Boston, MA: Addison-Wesley; 2005:145–315.

Zdziarski J.A. Ending spam: Bayesian content filtering and the art of statistical language classification. No Starch Press; 2005.


1 http://archive.ics.uci.edu/ml/datasets/Statlog+%28German+Credit+Data%29. All data sets used in this book are available at the companion website.

2 This is just a basic sampling technique; the sampling itself can involve considerable work to validate and to produce a correct sample.

3 Performance criteria such as these are explained in Chapter 8 on evaluation.

4 http://commons.wikimedia.org/wiki/File:Neuron_Hand-tuned.svg
