Volume 2, Issue 3, 2014
K. Revathi, T. Kalai Selvi
Processing high-dimensional data is a major challenge in data mining and machine learning applications. Feature selection is the process of identifying the subset of features that yields results comparable to those obtained with the complete feature set. Feature selection reduces dimensionality and time complexity and also improves classifier accuracy. In this paper, we use an alternative approach, the affinity propagation algorithm, for effective and efficient feature selection and clustering. The goal is to improve performance in terms of accuracy and time complexity.
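The idea can be sketched as follows: treat each feature as a point, measure pairwise feature similarity, cluster the features with affinity propagation, and keep only each cluster's exemplar feature. This is a minimal illustrative sketch, not the paper's exact method; it assumes scikit-learn's `AffinityPropagation` with a precomputed absolute-correlation similarity matrix as the similarity measure.

```python
# Sketch: feature selection via affinity propagation.
# Assumptions (not from the paper): scikit-learn's AffinityPropagation,
# absolute Pearson correlation as the feature-to-feature similarity.
import numpy as np
from sklearn.cluster import AffinityPropagation

rng = np.random.default_rng(0)
# Synthetic data: 100 samples, 3 latent signals, each copied 4 times with
# small noise -> 12 features containing heavy redundancy.
base = rng.normal(size=(100, 3))
X = np.hstack([base + 0.05 * rng.normal(size=(100, 3)) for _ in range(4)])

# Similarity between features: absolute Pearson correlation (12 x 12 matrix).
S = np.abs(np.corrcoef(X, rowvar=False))

# Cluster the features themselves; each cluster's exemplar is a selected feature.
ap = AffinityPropagation(affinity="precomputed", random_state=0).fit(S)
selected = ap.cluster_centers_indices_

print("selected feature indices:", selected)
print("reduced from", X.shape[1], "to", len(selected), "features")
```

Because affinity propagation chooses exemplars from the data points themselves, the selected indices are actual features of the original set, so the reduced feature space needs no transformation before classifier training.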
Computer Science and Engineering, Erode Sengunthar Engineering College, Anna University Chennai, Tamil Nadu