[1] WEI W, LIANG J Y, GUO X Y, et al. Hierarchical division clustering framework for categorical data[J]. Neurocomputing, 2019,341:118-134.
[2] NAOUALI S, BEN SALEM S, CHTOUROU Z. Clustering categorical data: A survey[J]. International Journal of Information Technology & Decision Making, 2020,19(1):49-96.
[3] XIANG Z R. Partition-based transfer clustering for categorical data[D]. Hangzhou: Zhejiang University, 2014.
[4] TSEKOURAS G E, PAPAGEORGIOU D, KOTSIANTIS S B, et al. Fuzzy clustering of categorical attributes and its use in analyzing cultural data[C]// International Conference on Computational Intelligence. 2004:202-206.
[5] HUANG Z X, NG M K. A fuzzy k-modes algorithm for clustering categorical data[J]. IEEE Transactions on Fuzzy Systems, 1999,7(4):446-452.
[6] HSU C C, CHEN C L, SU Y W. Hierarchical clustering of mixed data based on distance hierarchy[J]. Information Sciences, 2007,177(20):4474-4492.
[7] HUANG Z X. Extensions to the k-means algorithm for clustering large data sets with categorical values[J]. Data Mining and Knowledge Discovery, 1998,2(3):283-304.
[8] NG M K, LI M J, HUANG J Z, et al. On the impact of dissimilarity measure in k-modes clustering algorithm[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2007,29(3):503-507.
[9] CAO F Y, LIANG J Y, LI D Y, et al. A dissimilarity measure for the k-modes clustering algorithm[J]. Knowledge-Based Systems, 2012,26:120-127.
[10] GANTI V, GEHRKE J, RAMAKRISHNAN R. CACTUS—Clustering categorical data using summaries[C]// Proceedings of the 5th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. 1999:73-83.
[11] AHMAD A, DEY L. A method to compute distance between two categorical values of same attribute in unsupervised learning for categorical data set[J]. Pattern Recognition Letters, 2007,28(1):110-118.
[12] ZHANG K, WANG Q J, CHEN Z Z, et al. From categorical to numerical: Multiple transitive distance learning and embedding[C]// Proceedings of the 2015 SIAM International Conference on Data Mining. 2015:46-54.
[13] IENCO D, PENSA R G, MEO R. From context to distance: Learning dissimilarity for categorical data clustering[J]. ACM Transactions on Knowledge Discovery from Data (TKDD), 2012,6(1):1-25.
[14] WANG C, DONG X J, ZHOU F, et al. Coupled attribute similarity learning on categorical data[J]. IEEE Transactions on Neural Networks and Learning Systems, 2014,26(4):781-797.
[15] JIAN S L, CAO L B, PANG G S, et al. Embedding-based representation of categorical data by hierarchical value coupling learning[C]// Proceedings of the 26th International Joint Conference on Artificial Intelligence. 2017:1937-1943.
[16] VINCENT P, LAROCHELLE H, LAJOIE I, et al. Stacked denoising autoencoders: Learning useful representations in a deep network with a local denoising criterion[J]. Journal of Machine Learning Research, 2010,11(12):3371-3408.
[17] DONAHUE J, KRÄHENBÜHL P, DARRELL T. Adversarial feature learning[J]. arXiv preprint arXiv:1605.09782, 2016.
[18] KINGMA D P, WELLING M. Auto-encoding variational bayes[J]. arXiv preprint arXiv:1312.6114, 2013.
[19] BANERJEE A, PUJARI A K, RANI PANIGRAHI C, et al. A new method for weighted ensemble clustering and coupled ensemble selection[J]. Connection Science, 2021,33(3):623-644.
[20] JIA Y H, LIU H, HOU J H, et al. Clustering ensemble meets low-rank tensor approximation[J]. arXiv preprint arXiv:2012.08916, 2020.
[21] FRED A L N, JAIN A K. Combining multiple clusterings using evidence accumulation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2005,27(6):835-850.
[22] BEZDEK J C, HATHAWAY R J. VAT: A tool for visual assessment of (cluster) tendency[C]// Proceedings of the 2002 International Joint Conference on Neural Networks. IEEE, 2002,3:2225-2230.
[23] CHANG H, YEUNG D Y. Robust path-based spectral clustering[J]. Pattern Recognition, 2008,41(1):191-203.
[24] SHI J B, MALIK J. Normalized cuts and image segmentation[J]. IEEE Transactions on Pattern Analysis and Machine Intelligence, 2000,22(8):888-905.
[25] SOKOLOVA M, JAPKOWICZ N, SZPAKOWICZ S. Beyond accuracy, F-score and ROC: A family of discriminant measures for performance evaluation[C]// Australasian Joint Conference on Artificial Intelligence. Springer, 2006:1015-1021.
[26] CHICCO D, JURMAN G. The advantages of the Matthews correlation coefficient (MCC) over F1 score and accuracy in binary classification evaluation[J]. BMC Genomics, 2020,21(1):1-13.
[27] DEMŠAR J. Statistical comparisons of classifiers over multiple data sets[J]. The Journal of Machine Learning Research, 2006,7:1-30.