
UniDis: a universal discretization technique


Abstract

Discretization techniques have played an important role in machine learning and data mining, as most methods in these areas require that the training data set contain only discrete attributes. Data discretization unification (DDU), one of the state-of-the-art discretization techniques, trades off classification errors against the number of discretized intervals and unifies existing discretization criteria. However, it suffers from two deficiencies. First, DDU is inefficient: it must search over a large number of parameter settings to find good results, and even then it is not guaranteed to obtain an optimal solution. Second, DDU does not take into account the number of inconsistent records produced by discretization, which leads to unnecessary information loss. To overcome these deficiencies, this paper presents a Universal Discretization technique, namely UniDis. We first develop a non-parametric normalized discretization criterion which avoids the effect of the relatively large difference between classification errors and the number of discretized intervals on discretization results. In addition, we define a new entropy-based measure of inconsistency for multi-dimensional variables to effectively control information loss while producing a concise summarization of continuous variables. Finally, we propose a heuristic algorithm that guarantees better discretization based on the non-parametric normalized criterion and the entropy-based inconsistency. Besides theoretical analysis, experimental results with the J4.8 decision tree and the Naive Bayes classifier demonstrate that our approach is statistically comparable to DDU under a popular statistical test, and that it yields a better discretization scheme, significantly improving classification accuracy over the other previously known discretization methods apart from DDU.


References

  • Biba, M., Esposito, F., Ferilli, S., Mauro, N.D., Basile, T. (2007). Unsupervised discretization using kernel density estimation. In: Proceedings of Twentieth International Joint Conference on Artificial Intelligence (IJCAI) (pp. 696–701).

  • Bondu, A., Boulle, M., Lemaire, V., Loiseau, S., Duval, B. (2008). A Non-parametric semi-supervised discretization method. In: Proceedings of Eighth IEEE International Conference on Data Mining (ICDM) (pp. 53–62).

  • Boulle, M. (2004). Khiops: a statistical discretization method of continuous attributes. Machine Learning, 55, 53–69.


  • Boulle, M. (2006). MODL: a Bayes optimal discretization method for continuous attributes. Machine Learning, 65, 131–165.


  • Ching, J.Y., Wong, A.K.C., Chan, K.C.C. (1995). Class-dependent discretization for inductive learning from continuous and mixed-mode data. IEEE Transactions on Pattern Analysis and Machine Intelligence, 17(7), 641–651.


  • Cios, K.J., & Kurgan, L.A. (2007). CLIP4: hybrid inductive machine learning algorithm that generates inequality rules. Information Sciences, 177(17), 3592–3612.


  • Cover, T.M., & Thomas, J.A. (2006). Elements of information theory (2nd ed.). New York: Wiley.


  • Demsar, J. (2006). Statistical comparisons of classifiers over multiple data sets. Journal of Machine Learning Research, 7, 1–30.


  • Dougherty, J., Kohavi, R., Sahami, M. (1995). Supervised and unsupervised discretization of continuous features. In: Proceedings of Twelfth International Conference on Machine Learning (pp. 194–202).

  • Fayyad, U., & Irani, K. (1993). Multi-interval discretization of continuous-valued attributes for classification learning. In: Proceedings of thirteenth international joint conference on artificial intelligence (pp. 1022–1027). San Mateo, CA: Morgan Kaufmann.


  • Hand, D., Mannila, H., Smyth, P. (2001). Principles of data mining. MIT Press.

  • Hansen, M.H., & Yu, B. (2001). Model selection and the principle of minimum description length. Journal of the American Statistical Association, 96(545), 746–774.


  • Hettich, S., & Bay, S.D. (1999). The UCI KDD Archive [DB/OL]. http://kdd.ics.uci.edu/. Accessed 12 Aug 2010.

  • Jin, R.M., Breitbart, Y., Muoh, C. (2007). Data discretization unification. In: Proceedings of seventh IEEE International Conference on Data Mining (ICDM Best Paper) (pp. 183–192).

  • Jin, Y.W., & Qu, W.Y. (2009). Multi-dimension multi-objective fuzzy optimum dynamic programming method with complicated information based on a maximal-sum-rule of decision sequence priority. In: Eighth IEEE international conference on embedded computing; IEEE international conference on scalable computing and communications (pp. 656–660). Dalian, China.


  • Kerber, R. (1992). ChiMerge: discretization of numeric attributes. In: Proceedings of ninth national conference on artificial intelligence (pp. 123–128). AAAI Press.

  • Kurgan, L.A., & Cios, K.J. (2004). CAIM discretization algorithm. IEEE Transactions on Knowledge and Data Engineering, 16(2), 145–153.


  • Ling, C.X., & Zhang, H.J. (2002). The representational power of discrete Bayesian networks. Journal of Machine Learning Research, 3, 709–721.


  • Liu, L.L., Wong, A.K.C., Wang, Y. (2004). A global optimal algorithm for class-dependent discretization of continuous data. Intelligent Data Analysis, 8(2), 151–170.


  • Liu, H., Hussain, F., Tan, C.L., Dash, M. (2002). Discretization: an enabling technique. Journal of Data Mining and Knowledge Discovery, 6(4), 393–423.


  • Liu, H., & Setiono, R. (1997). Feature selection via discretization. IEEE Transactions on Knowledge and Data Engineering, 9(4), 642–645.


  • Mahady, H., Muhammad, A.C., Qu, W.Y., Lin, X.M. (2010). Efficient algorithms to monitor continuous constrained k nearest neighbor queries. In: Database systems for advanced applications (pp. 233–249). Tsukuba, Japan.


  • Mussard, S., Seyte, F., Terraza, M. (2003). Decomposition of Gini and the generalized entropy inequality measures. Economics Bulletin, 4(7), 1–6.


  • Pawlak, Z. (1982). Rough sets. International Journal of Computer and Information Sciences, 11(5), 341–356.


  • Quinlan, J.R. (1986). Induction of decision trees. Machine Learning, 1, 81–106.


  • Quinlan, J.R. (1993). C4.5: Programs for machine learning. San Mateo, California: Morgan Kaufmann.


  • Roweis, S.T., & Saul, L.K. (2000). Nonlinear dimensionality reduction by locally linear embedding. Science, 290(5500), 2323–2326.


  • Schmidberger, G., & Frank, E. (2005). Unsupervised discretization using tree-based density estimation. In: Proceedings of The European Conference on Machine Learning and Principles and Practice of Knowledge Discovery in Databases (ECML PKDD) (pp. 240–251).

  • Su, C.T., & Hsu, J.H. (2005). An extended Chi2 algorithm for discretization of real value attributes. IEEE Transactions on Knowledge and Data Engineering, 17(3), 437–441.


  • Tay, E.H., & Shen, L. (2002). A modified Chi2 algorithm for discretization. IEEE Transactions on Knowledge and Data Engineering, 14(3), 666–670.


  • Tsai, C.J., Lee, C.I., Yang, W.P. (2008). A discretization algorithm based on class-attribute contingency coefficient. Information Sciences, 178, 714–731.


  • Wang, H.X., & Zaniolo, C. (2000). CMP: a fast decision tree classifier using multivariate predictions. In: 16th International Conference on Data Engineering (ICDE00) (pp. 449–460).

  • Weka 3: Data mining software in Java (2007). http://www.cs.waikato.ac.nz/ml/weka. Accessed 26 Nov 2010.

  • Witten, I.H., & Frank, E. (2000). Data mining: Practical machine learning tools and techniques with Java implementations. San Francisco, CA: Morgan Kaufmann.


  • Zar, J.H. (1998). Biostatistical analysis (4th ed.). Englewood Cliffs, New Jersey: Prentice Hall.


  • Ziarko, W. (1993). Variable precision rough set model. Journal of Computer and System Science, 46, 39–59.



Acknowledgements

This work is supported by the NSFC under Grant Nos. 60973115, 60973117, 61173160, 61173162, and 61173165, and by the New Century Excellent Talents in University (NCET) program of the Ministry of Education of China.

Author information


Corresponding author

Correspondence to Keqiu Li.

Appendix

In this section, we show the monotonicity of f(β) and \(H_{\beta}(R_{i})\) with respect to β.

Theorem 1

f(β) and \(H_{\beta}(R_{i})\) are monotonically decreasing functions of β on the interval (0,1].

Proof

Let \(\beta_1\) and \(\beta_2\) be two values in the interval (0,1] with \(\beta_1 < \beta_2\). For f(β) in (1), we have

$$\begin{array}{rll} f(\beta_1)-f(\beta_2)&=& \frac{1-(\frac{1}{N})^{\beta_1}}{\beta_1}-\frac{1-(\frac{1}{N})^{\beta_2}}{\beta_2}\\ &=& \frac{\beta_2-\beta_1+\beta_1\big(\frac{1}{N}\big)^{\beta_2}-\beta_2\big(\frac{1}{N}\big)^{\beta_1}}{\beta_1\beta_2} \end{array}$$
  • \(\because ~ 0<\beta_1< \beta_2\leq 1\)

  • \(\therefore \beta_2-\beta_1>0,~~~~ \beta_1\beta_2>0, ~~~~\beta_1\big(\frac{1}{N}\big)^{\beta_2}-\beta_2\big(\frac{1}{N}\big)^{\beta_1}<0\)

  • \(\because N \gg 1\)

  • \(\therefore~ \beta_2-\beta_1 > \mid \beta_1\big(\frac{1}{N}\big)^{\beta_2}-\beta_2\big(\frac{1}{N}\big)^{\beta_1} \mid \)

  • \(\therefore f(\beta_1)>f(\beta_2)\)

Therefore, f(β) increases with the reduction of β in the interval (0,1]. Similarly, \(H_{\beta_1}(R_{i})>H_{\beta_2}(R_{i})\). Therefore, H β (R i ) is also monotone decreasing with regard to β in the interval (0,1].□
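As a quick numerical illustration of Theorem 1 (our own sketch, not part of the original proof), the following Python snippet evaluates f(β) = (1 − (1/N)^β)/β on a grid over (0,1] for an assumed record count N and checks that it is strictly decreasing:

```python
import numpy as np

# f(beta) = (1 - (1/N)**beta) / beta, as in (1); N is an assumed record count.
def f(beta, N):
    return (1.0 - (1.0 / N) ** beta) / beta

N = 1000                                   # assumption: N >> 1
betas = np.linspace(1e-4, 1.0, 10_000)     # grid over (0, 1]
values = f(betas, N)

# Theorem 1: f is monotonically decreasing in beta on (0, 1]
assert np.all(np.diff(values) < 0)

print(f"f(1)       = {f(1.0, N):.4f}  (minimum, equals 1 - 1/N = {1 - 1/N:.4f})")
print(f"f(beta->0) ~ {values[0]:.4f}  (approaches ln N = {np.log(N):.4f})")
```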

Theorem 2

\(f(\beta)\in \big[1-\frac{1}{N}, \ln N \big]\), and \(H_{1}(R_{i}) \leq H_{\beta}(R_{i}) \leq \log S\).

Proof

According to Theorem 1, f(β) attains its minimum at β = 1 and approaches its supremum as β → 0. Then, we have

$$ 1-\frac{1}{N} \leq f(\beta)<\displaystyle \lim\limits_{\beta\rightarrow 0}~\frac{1-(\frac{1}{N})^{\beta}}{\beta}=\ln N $$

Similarly,

$$\begin{array}{rll} H_{1}(R_{i}) \leq H_{\beta}(R_{i})&<&\lim\limits_{\beta\rightarrow 0} H_{\beta}(R_{i})\\ &=&\lim\limits_{\beta\rightarrow 0}\sum\limits_{j=1}^{S}\frac{N_{ij}}{N_{i\cdot}}\Big[1-\left(\frac{N_{ij}}{N_{i\cdot}}\right)^{\beta}\Big]\Big/\beta \\&=& \sum\limits_{j=1}^{S} \frac{N_{ij}}{N_{i\cdot}} \log \frac{N_{i\cdot}}{N_{ij}}\\&=&H(R_{i}) \end{array}$$

where \(H(R_{i})\) is Shannon’s entropy (Cover and Thomas 2006) of interval \(R_{i}\). According to the extremum property of entropy, \(H(R_{i}) \leq \log S\). Therefore, the theorem is proven.□
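In the same spirit, a small self-contained check (with assumed class counts N_ij for a single interval R_i) illustrates Theorem 2: \(H_{\beta}(R_{i})\) stays between \(H_{1}(R_{i})\) and log S, and approaches Shannon’s entropy \(H(R_{i})\) as β → 0 (natural logarithms throughout, matching the ln N bound above):

```python
import numpy as np

def h_beta(counts, beta):
    # Generalized entropy H_beta(R_i) = sum_j p_j * (1 - p_j**beta) / beta,
    # with p_j = N_ij / N_i. (natural-log convention).
    p = counts / counts.sum()
    return float(np.sum(p * (1.0 - p ** beta) / beta))

def shannon(counts):
    # Shannon entropy H(R_i) = sum_j p_j * log(1 / p_j)
    p = counts / counts.sum()
    return float(np.sum(p * np.log(1.0 / p)))

counts = np.array([30.0, 12.0, 5.0, 3.0])  # assumed class counts N_ij in one interval R_i
S = len(counts)
h1 = h_beta(counts, 1.0)

for beta in (1.0, 0.5, 0.1, 0.01, 0.001):
    h = h_beta(counts, beta)
    # Theorem 2: H_1(R_i) <= H_beta(R_i) <= log S
    assert h1 - 1e-12 <= h <= np.log(S) + 1e-12
    print(f"beta = {beta:<6}  H_beta(R_i) = {h:.4f}")

print(f"H(R_i) = {shannon(counts):.4f}  (limit of H_beta as beta -> 0)")
print(f"log S  = {np.log(S):.4f}  (upper bound)")
```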


About this article

Cite this article

Sang, Y., Jin, Y., Li, K. et al. UniDis: a universal discretization technique. J Intell Inf Syst 40, 327–348 (2013). https://doi.org/10.1007/s10844-012-0228-1
