Agrawal R, Srikant R (1994) Fast algorithms for mining association rules. In: Proceedings of the 20th VLDB conference, pp 487–499
Ahmed S, Coenen F and Leng PH (2006). Tree-based partitioning of data for association rule mining. Knowl Inf Syst 10(3): 315–331
Banerjee A, Merugu S, Dhillon I and Ghosh J (2005). Clustering with Bregman divergences. J Mach Learn Res 6: 1705–1749
Bezdek JC, Chuah SK, Leep D (1986) Generalized k-nearest neighbor rules. Fuzzy Sets Syst 18(3):237–256. http://dx.doi.org/10.1016/0165-0114(86)90004-7
Bloch DA, Olshen RA and Walker MG (2002). Risk estimation for classification trees. J Comput Graph Stat 11: 263–288
Bonchi F and Lucchese C (2006). On condensed representations of constrained frequent patterns. Knowl Inf Syst 9(2): 180–201
Breiman L (1968) Probability theory. Addison-Wesley, Reading. Republished (1991) in Classics of mathematics. SIAM, Philadelphia
Breiman L (1999). Prediction games and arcing classifiers. Neural Comput 11(7): 1493–1517
Breiman L, Friedman JH, Olshen RA and Stone CJ (1984). Classification and regression trees. Wadsworth, Belmont
Brin S and Page L (1998). The anatomy of a large-scale hypertextual Web search engine. Comput Networks 30(1–7): 107–117
Chen JR (2007). Making clustering in delay-vector space meaningful. Knowl Inf Syst 11(3): 369–385
Cheung DW, Han J, Ng V, Wong CY (1996) Maintenance of discovered association rules in large databases: an incremental updating technique. In: Proceedings of the ACM SIGMOD international conference on management of data, pp 13–23
Chi Y, Wang H, Yu PS and Muntz RR (2006). Catch the moment: maintaining closed frequent itemsets over a data stream sliding window. Knowl Inf Syst 10(3): 265–294
Cost S, Salzberg S (1993) A weighted nearest neighbor algorithm for learning with symbolic features. Mach Learn 10:57–78 (PEBLS: Parallel Exemplar-Based Learning System)
Cover T and Hart P (1967). Nearest neighbor pattern classification. IEEE Trans Inform Theory 13(1): 21–27
Dasarathy BV (ed) (1991) Nearest neighbor (NN) norms: NN pattern classification techniques. IEEE Computer Society Press
Dempster AP, Laird NM and Rubin DB (1977). Maximum likelihood from incomplete data via the EM algorithm (with discussion). J Roy Stat Soc B 39: 1–38
Devroye L, Gyorfi L, Lugosi G (1996) A probabilistic theory of pattern recognition. Springer, New York. ISBN 0-387-94618-7
Dhillon IS, Guan Y, Kulis B (2004) Kernel k-means: spectral clustering and normalized cuts. KDD 2004, pp 551–556
Dietterich TG (1997). Machine learning: Four current directions. AI Mag 18(4): 97–136
Domingos P (1999) MetaCost: A general method for making classifiers cost-sensitive. In: Proceedings of the fifth international conference on knowledge discovery and data mining, pp 155–164
Domingos P and Pazzani M (1997). On the optimality of the simple Bayesian classifier under zero-one loss. Mach Learn 29: 103–130
Fix E, Hodges JL, Jr (1951) Discriminatory analysis, nonparametric discrimination. USAF School of Aviation Medicine, Randolph Field, Tex., Project 21-49-004, Rept. 4, Contract AF41(128)-31, February 1951
Freund Y and Schapire RE (1997). A decision-theoretic generalization of on-line learning and an application to boosting. J Comput Syst Sci 55(1): 119–139
Friedman JH, Bentley JL, Finkel RA (1977) An algorithm for finding best matches in logarithmic time. ACM Trans Math Software 3, 209. Also available as Stanford Linear Accelerator Center Rep. SLAC-PUB-1549, February 1975
Friedman JH, Kohavi R, Yun Y (1996) Lazy decision trees. In: Proceedings of the thirteenth national conference on artificial intelligence, San Francisco, CA. AAAI Press/MIT Press, pp 717–724
Friedman N, Geiger D and Goldszmidt M (1997). Bayesian network classifiers. Mach Learn 29: 131–163
Fung G and Stoeckel J (2007). SVM feature selection for classification of SPECT images of Alzheimer’s disease using spatial information. Knowl Inf Syst 11(2): 243–258
Gates GW (1972). The reduced nearest neighbor rule. IEEE Trans Inform Theory 18: 431–433
Golub GH, Van Loan CF (1983) Matrix computations. The Johns Hopkins University Press
Gondek D and Hofmann T (2007). Non-redundant data clustering. Knowl Inf Syst 12(1): 1–24
Han E (1999) Text categorization using weight adjusted k-nearest neighbor classification. PhD thesis, University of Minnesota, October 1999
Hand DJ and Yu K (2001). Idiot’s Bayes—not so stupid after all? Int Stat Rev 69: 385–398
Gray RM and Neuhoff DL (1998). Quantization. IEEE Trans Inform Theory 44(6): 2325–2384
Hart P (1968). The condensed nearest neighbor rule. IEEE Trans Inform Theory 14: 515–516
Han J, Pei J, Yin Y (2000) Mining frequent patterns without candidate generation. In: Proceedings of ACM SIGMOD international conference on management of data, pp 1–12
Hastie T and Tibshirani R (1996). Discriminant adaptive nearest neighbor classification. IEEE Trans Pattern Anal Mach Intell 18(6): 607–616
Friedman J, Hastie T and Tibshirani R (2000). Additive logistic regression: a statistical view of boosting with discussions. Ann Stat 28(2): 337–407
Herbrich R, Graepel T, Obermayer K (2000) Large margin rank boundaries for ordinal regression. Adv Large Margin Classif, pp 115–132
Hu T and Sung SY (2006). Finding centroid clusterings with entropy-based criteria. Knowl Inf Syst 10(4): 505–514
Hunt EB, Marin J and Stone PJ (1966). Experiments in induction. Academic Press, New York
Inokuchi A, Washio T and Motoda H (2005). General framework for mining frequent subgraphs from labeled graphs. Fundament Inform 66(1–2): 53–82
Jain AK and Dubes RC (1988). Algorithms for clustering data. Prentice-Hall, Englewood Cliffs
Jin R, Goswami A and Agrawal G (2006). Fast and exact out-of-core and distributed k-means clustering. Knowl Inf Syst 10(1): 17–40
Kobayashi M and Aono M (2006). Exploring overlapping clusters using dynamic re-scaling and sampling. Knowl Inf Syst 10(3): 295–313
Koga H, Ishibashi T and Watanabe T (2007). Fast agglomerative hierarchical clustering algorithm using Locality-Sensitive Hashing. Knowl Inf Syst 12(1): 25–53
Kukar M (2006). Quality assessment of individual classifications in machine learning and data mining. Knowl Inf Syst 9(3): 364–384
Kuramochi M and Karypis G (2005). Gene classification using expression profiles: a feasibility study. Int J Artif Intell Tools 14(4): 641–660
Langville AN and Meyer CD (2006). Google’s PageRank and beyond: the science of search engine rankings. Princeton University Press, Princeton
Leung CW-k, Chan SC-f and Chung F-L (2006). A collaborative filtering framework based on fuzzy association rules and multiple-level similarity. Knowl Inf Syst 10(3): 357–381
Li T, Zhu S and Ogihara M (2006). Using discriminant analysis for multi-class classification: an experimental investigation. Knowl Inf Syst 10(4): 453–472
Liu B (2007). Web data mining: exploring hyperlinks, contents and usage data. Springer, Heidelberg
Lloyd SP (1957) Least squares quantization in PCM. Unpublished Bell Lab. Tech. Note, portions presented at the Institute of Mathematical Statistics Meeting Atlantic City, NJ, September 1957. Also, IEEE Trans Inform Theory (Special Issue on Quantization), vol IT-28, pp 129–137, March 1982
Leung CK-S, Khan QI, Li Z and Hoque T (2007). CanTree: a canonical-order tree for incremental frequent-pattern mining. Knowl Inf Syst 11(3): 287–311
McLachlan GJ (1987). On bootstrapping the likelihood ratio test statistic for the number of components in a normal mixture. Appl Stat 36: 318–324
McLachlan GJ and Krishnan T (1997). The EM algorithm and extensions. Wiley, New York
McLachlan GJ and Peel D (2000). Finite mixture models. Wiley, New York
Messenger RC and Mandell ML (1972). A model search technique for predictive nominal scale multivariate analysis. J Am Stat Assoc 67: 768–772
Morishita S, Sese J (2000) Traversing itemset lattices with statistical metric pruning. In: Proceedings of PODS’00, pp 226–236
Olshen R (2001). A conversation with Leo Breiman. Stat Sci 16(2): 184–198
Page L, Brin S, Motwani R, Winograd T (1999) The PageRank citation ranking: bringing order to the Web. Technical Report 1999–0120, Computer Science Department, Stanford University
Quinlan JR (1979) Discovering rules by induction from large collections of examples. In: Michie D (ed) Expert systems in the micro-electronic age. Edinburgh University Press, Edinburgh
Quinlan JR (1989) Unknown attribute values in induction. In: Proceedings of the sixth international workshop on machine learning, pp 164–168
Quinlan JR (1993). C4.5: Programs for machine learning. Morgan Kaufmann Publishers, San Mateo
Reyzin L, Schapire RE (2006) How boosting the margin can also boost classifier complexity. In: Proceedings of the 23rd international conference on machine learning, Pittsburgh, PA, pp 753–760
Ridgeway G, Madigan D and Richardson T (1998). Interpretable boosted naive Bayes classification. In: Agrawal R, Stolorz P, Piatetsky-Shapiro G (eds) Proceedings of the fourth international conference on knowledge discovery and data mining, pp 101–104. AAAI Press, Menlo Park
Schapire RE (1990). The strength of weak learnability. Mach Learn 5(2): 197–227
Schapire RE, Freund Y, Bartlett P and Lee WS (1998). Boosting the margin: A new explanation for the effectiveness of voting methods. Ann Stat 26(5): 1651–1686
Schapire RE and Singer Y (1999). Improved boosting algorithms using confidence-rated predictions. Mach Learn 37(3): 297–336
Scholkopf B, Smola AJ (2002) Learning with kernels. MIT Press
Seidl T and Kriegel H (1998). Optimal multi-step k-nearest neighbor search. In: Tiwary A, Franklin M (eds) Proceedings of the 1998 ACM SIGMOD international conference on management of data, Seattle, Washington, United States, 1–4 June, 1998, pp 154–165. ACM Press, New York
Srikant R, Agrawal R (1995) Mining generalized association rules. In: Proceedings of the 21st VLDB conference, pp 407–419
Steinbach M, Karypis G, Kumar V (2000) A comparison of document clustering techniques. In: Proceedings of the KDD Workshop on Text Mining
Steinbach M and Kumar V (2007). Generalizing the notion of confidence. Knowl Inf Syst 12(3): 279–299
Tan P-N, Steinbach M, Kumar V (2006) Introduction to data mining. Pearson Addison-Wesley
Tao D, Li X, Wu X, Hu W and Maybank SJ (2007). Supervised tensor learning. Knowl Inf Syst 13(1): 1–42
Thabtah FA, Cowling PI and Peng Y (2006). Multiple labels associative classification. Knowl Inf Syst 9(1): 109–129
Ting KM (2002). An instance-weighting method to induce cost-sensitive trees. IEEE Trans Knowl Data Eng 14: 659–665
Toussaint GT (2002) Proximity graphs for nearest neighbor decision rules: recent progress. In: Interface-2002, 34th symposium on computing and statistics (theme: Geoscience and Remote Sensing). Ritz-Carlton Hotel, Montreal, Canada, 17–20 April, 2002
Toussaint GT (2002) Open problems in geometric methods for instance-based learning. JCDCG 273–283
Tsang IW, Kwok JT and Cheung P-M (2005). Core vector machines: Fast SVM training on very large data sets. J Mach Learn Res 6: 363–392
Uno T, Asai T, Uchida Y, Arimura H (2004) An efficient algorithm for enumerating frequent closed patterns in transaction databases. In: Proceedings of the 7th international conference on discovery science. LNAI, vol 3245. Springer, Heidelberg, pp 16–30
Vapnik V (1995). The nature of statistical learning theory. Springer, New York
Viola P, Jones M (2001) Rapid object detection using a boosted cascade of simple features. In: Proceedings of the IEEE computer society conference on computer vision and pattern recognition, Kauai, HI, pp 511–518
Washio T, Nakanishi K, Motoda H (2005) Association rules based on levelwise subspace clustering. In: Proceedings of the 9th European conference on principles and practice of knowledge discovery in databases. LNAI, vol 3721. Springer, Heidelberg, pp 692–700
Wasserman S and Faust K (1994). Social network analysis. Cambridge University Press, Cambridge
Wettschereck D, Aha D and Mohri T (1997). A review and empirical evaluation of feature weighting methods for a class of lazy learning algorithms. Artif Intell Rev 11: 273–314
Wilson DL (1972). Asymptotic properties of nearest neighbor rules using edited data. IEEE Trans Syst Man Cybern 2: 408–420
Yang Q and Wu X (2006). 10 challenging problems in data mining research. Int J Inform Technol Decis Making 5(4): 597–604
Yan X, Han J (2002) gSpan: Graph-based substructure pattern mining. In: Proceedings of ICDM’02, pp 721–724
Yu PS, Li X, Liu B (2005) Adding the temporal dimension to search—a case study in publication search. In: Proceedings of Web Intelligence (WI’05)
Zhang J, Kang D-K, Silvescu A and Honavar V (2006). Learning accurate and concise naïve Bayes classifiers from attribute value taxonomies and data. Knowl Inf Syst 9(2): 157–179