Feature selection involves identifying a subset of the most useful features that produces results compatible with those of the original feature set. A feature selection algorithm can be evaluated from the points of view of both efficiency and effectiveness: efficiency refers to the time required to find a subset of features, while effectiveness relates to the quality of that subset. Based on these criteria, a fast clustering-based feature selection algorithm (FAST) is proposed and evaluated experimentally in this paper. The FAST algorithm works in two steps. In the first step, features are divided into clusters using graph-theoretic clustering methods. In the second step, the most representative feature, i.e., the one most strongly related to the target classes, is selected from each cluster to form the final subset of features. Because features in different clusters are relatively independent, the clustering-based strategy of FAST has a high probability of producing a subset of useful and independent features. To ensure the efficiency of FAST, we adopt an efficient minimum spanning tree (MST) clustering method. The efficiency and effectiveness of the FAST algorithm are evaluated through an empirical study. Extensive experiments compare FAST with several representative feature selection algorithms, namely FCBF, ReliefF, CFS, Consist, and FOCUS-SF, with respect to four types of well-known classifiers, namely the probability-based Naive Bayes, the tree-based C4.5, the instance-based IB1, and the rule-based RIPPER, before and after feature selection.
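The two-step scheme described above can be sketched in a minimal, self-contained way. The sketch below substitutes Pearson correlation for the symmetric-uncertainty measure and uses a fixed edge-cut `threshold` — both are simplifying assumptions, not the paper's exact procedure: it builds a complete graph over features weighted by dissimilarity, extracts its MST with Prim's algorithm, cuts long edges to form feature clusters, and keeps the feature most relevant to the target from each cluster.

```python
import numpy as np

def mst_feature_selection(X, y, threshold=0.5):
    """Sketch of MST-based two-step feature selection.

    Step 1: cluster features by cutting long edges of the MST of the
    complete feature graph (edge weight = 1 - |correlation|).
    Step 2: keep one representative per cluster, the feature most
    correlated with the target y.
    NOTE: Pearson correlation and the fixed `threshold` are
    simplifying assumptions for this illustration.
    """
    n = X.shape[1]
    # Pairwise feature dissimilarities: 1 - |Pearson correlation|.
    D = 1.0 - np.abs(np.corrcoef(X, rowvar=False))

    # Prim's algorithm over the complete feature graph.
    in_tree = np.zeros(n, dtype=bool)
    in_tree[0] = True
    best_dist = D[0].copy()          # cheapest edge into the tree per node
    best_from = np.zeros(n, dtype=int)
    edges = []                        # (u, v, weight) of MST edges
    for _ in range(n - 1):
        v = int(np.argmin(np.where(in_tree, np.inf, best_dist)))
        edges.append((best_from[v], v, best_dist[v]))
        in_tree[v] = True
        closer = D[v] < best_dist
        best_dist[closer] = D[v][closer]
        best_from[closer] = v

    # Cut edges longer than the threshold; remaining connected
    # components (via union-find) are the feature clusters.
    parent = list(range(n))
    def find(a):
        while parent[a] != a:
            parent[a] = parent[parent[a]]
            a = parent[a]
        return a
    for u, v, w in edges:
        if w <= threshold:
            parent[find(u)] = find(v)

    clusters = {}
    for f in range(n):
        clusters.setdefault(find(f), []).append(f)

    # Keep the feature most relevant to the target from each cluster.
    relevance = np.abs([np.corrcoef(X[:, f], y)[0, 1] for f in range(n)])
    return sorted(max(c, key=lambda f: relevance[f])
                  for c in clusters.values())
```

With two groups of near-duplicate features, the sketch returns one representative per group, illustrating how redundant features collapse into a single selection.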