Classifying the Iris Plant Dataset with the OneR Algorithm


Dataset introduction

Iris is a plant classification dataset containing 150 samples. Each sample has four features: sepal length, sepal width, petal length and petal width (the lengths and widths of the sepal and petal), all measured in cm.
The dataset contains three classes: Iris Setosa, Iris Versicolour and Iris Virginica. Our goal here is to infer the species of a plant from these features.

The dataset ships with sklearn, so we start by loading it:

import numpy as np
# Load our dataset
from sklearn.datasets import load_iris

dataset = load_iris()
X = dataset.data
y = dataset.target
print(dataset.DESCR)
n_samples, n_features = X.shape

Part of the output looks like this:

Iris Plants Database

Notes

Data Set Characteristics:
    :Number of Instances: 150 (50 in each of three classes)
    :Number of Attributes: 4 numeric, predictive attributes and the class
    :Attribute Information:
        - sepal length in cm
        - sepal width in cm
        - petal length in cm
        - petal width in cm
        - class:
                - Iris-Setosa
                - Iris-Versicolour
                - Iris-Virginica
    :Summary Statistics:

    ============== ==== ==== ======= ===== ====================
                    Min  Max   Mean    SD   Class Correlation
    ============== ==== ==== ======= ===== ====================
    sepal length:   4.3  7.9   5.84   0.83    0.7826
    sepal width:    2.0  4.4   3.05   0.43   -0.4194
    petal length:   1.0  6.9   3.76   1.76    0.9490  (high!)
    petal width:    0.1  2.5   1.20   0.76    0.9565  (high!)
    ============== ==== ==== ======= ===== ====================

Data preprocessing

To apply the OneR algorithm, we first need to do some preprocessing.

The feature values in this dataset are continuous, meaning they have an infinite number of possible values. That is the nature of measured data: a measurement might come out as 1, 1.2, 1.25 and so on. Another property of continuous values is that two values close to each other imply high similarity: a plant with a sepal length of 1.2 cm is very much like one with a sepal length of 1.25 cm.

Class values, by contrast, are discrete. Although classes are commonly encoded as numbers, those numbers cannot be compared by magnitude to judge similarity. The Iris dataset uses a different number for each class: 0, 1 and 2 stand for Iris Setosa, Iris Versicolour and Iris Virginica respectively.

Our dataset has continuous features, but the algorithm we are about to use expects categorical ones, so we need to convert the continuous values into categorical values. This process is called discretization.

The simplest discretization algorithm is to pick a threshold: any feature value below the threshold becomes 0, and any value at or above it becomes 1.

# Compute the mean of each attribute; use it as the discretization threshold
attribute_means = X.mean(axis=0)
assert attribute_means.shape == (n_features,)
# Values at or above the per-feature mean become 1, values below it become 0
X_d = np.array(X >= attribute_means, dtype='int')

attribute_means is an array of length 4, one entry per feature: its first item is the mean of the first feature, and so on. We then use these means as thresholds to discretize the dataset, converting the continuous feature values into categorical ones. The result, X_d, is a (150, 4) array of rows such as [0, 1, 0, 0] and [1, 1, 1, 0].
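As a quick sanity check (a minimal sketch using the X, attribute_means and X_d computed above), we can print the thresholds and compare a few samples before and after discretization:

print(attribute_means)   # the four per-feature means used as thresholds
print(X[:3])             # the first three samples, continuous values
print(X_d[:3])           # the same samples after discretization, 0/1 values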

Implementing the OneR algorithm

The idea behind OneR is simple: it predicts the class of a sample from the class that samples with the same feature value most often belong to in the training data. OneR is short for One Rule, meaning we pick just the one feature, out of the four, that classifies best and use it as the sole basis for classification.

The algorithm starts by iterating over every value of every feature. For each feature value, it counts how many times that value appears in each class, finds the class in which it appears most often, and also records how often it appears in the other classes.

For example, suppose a feature in the dataset takes the two values 0 and 1, and the dataset has three classes. Among the samples with feature value 0, class A has 20 individuals, class B has 60 and class C has 20. A sample with feature value 0 is therefore most likely to belong to class B, but 40 samples with that feature value do not. Predicting class B whenever the feature value is 0 thus has an error rate of 40%, because 40 of those individuals actually belong to classes A and C. Feature value 1 is handled the same way.
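The arithmetic in this toy example can be sketched in a few lines (the class counts below are just the hypothetical numbers from the paragraph above):

# Hypothetical counts of each class among samples whose feature value is 0
class_counts = {'A': 20, 'B': 60, 'C': 20}
prediction = max(class_counts, key=class_counts.get)                # -> 'B'
error = sum(n for c, n in class_counts.items() if c != prediction)  # -> 40
error_rate = error / sum(class_counts.values())                     # -> 0.4, i.e. 40%
print(prediction, error, error_rate)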

After tallying all feature values and their occurrences in each class, we compute the total error of each feature by summing the errors of its individual values, and select the feature with the lowest total error as the single classification rule (the One Rule) used for all subsequent predictions.

Let's implement this in code:

from collections import defaultdict
from operator import itemgetter


def train(X, y_true, feature):
    """Computes the predictors and error for a given feature using the OneR algorithm

    Parameters
    ----------
    X: array [n_samples, n_features]
        The two dimensional array that holds the dataset. Each row is a sample,
        each column is a feature.
    y_true: array [n_samples,]
        The one dimensional array that holds the class values. Corresponds to X,
        such that y_true[i] is the class value for sample X[i].
    feature: int
        An integer corresponding to the index of the feature we wish to test.
        0 <= feature < n_features

    Returns
    -------
    predictors: dictionary of tuples: (value, prediction)
        For each item in the array, if the feature has a given value, make the
        given prediction.
    error: float
        The number of training samples that this rule incorrectly predicts.
    """
    # Check that feature is a valid index
    n_samples, n_features = X.shape
    assert 0 <= feature < n_features
    # Get all of the unique values that this feature takes
    values = set(X[:, feature])
    # Stores the predictors dictionary that is returned
    predictors = dict()
    errors = []
    for current_value in values:
        most_frequent_class, error = train_feature_value(X, y_true, feature, current_value)
        predictors[current_value] = most_frequent_class
        errors.append(error)
    # The total error of classifying on this feature is the sum over its values
    total_error = sum(errors)
    return predictors, total_error


def train_feature_value(X, y_true, feature, value):
    # Parameters: the dataset, the class array, the chosen feature index and a feature value
    # Count how frequently each class appears among samples with this feature value
    class_counts = defaultdict(int)
    for sample, y in zip(X, y_true):
        if sample[feature] == value:
            class_counts[y] += 1
    # Get the best class by sorting (highest count first) and choosing the first item
    sorted_class_counts = sorted(class_counts.items(), key=itemgetter(1), reverse=True)
    most_frequent_class = sorted_class_counts[0][0]
    # The error is the number of samples that have this feature value but do not
    # belong to the most frequent class
    error = sum(class_count for class_value, class_count in class_counts.items()
                if class_value != most_frequent_class)
    return most_frequent_class, error

Testing the algorithm

# Now, we split into a training and test set
# (older sklearn versions import this from sklearn.cross_validation)
from sklearn.model_selection import train_test_split

# Set the random state to the same number to get the same results as in the book
random_state = 14

X_train, X_test, y_train, y_test = train_test_split(X_d, y, random_state=random_state)
print("There are {} training samples".format(y_train.shape[0]))
print("There are {} testing samples".format(y_test.shape[0]))

# Compute the predictors and total error for every feature
all_predictors = {variable: train(X_train, y_train, variable)
                  for variable in range(X_train.shape[1])}
errors = {variable: error for variable, (mapping, error) in all_predictors.items()}
# Now choose the best and save that as "model"
# Sort by error
best_variable, best_error = sorted(errors.items(), key=itemgetter(1))[0]
print("The best model is based on variable {0} and has error {1:.2f}".format(best_variable, best_error))

# Choose the best model
model = {'variable': best_variable,
         'predictor': all_predictors[best_variable][0]}


def predict(X_test, model):
    variable = model['variable']
    predictor = model['predictor']
    y_predicted = np.array([predictor[int(sample[variable])] for sample in X_test])
    return y_predicted


y_predicted = predict(X_test, model)

# Compute the accuracy as the fraction of test samples where y_predicted equals y_test
accuracy = np.mean(y_predicted == y_test) * 100
print("The test accuracy is {:.1f}%".format(accuracy))
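To classify a brand-new measurement with the trained model, it first has to be discretized with the same attribute_means thresholds (a minimal sketch; the flower measurements below are made up for illustration):

# A hypothetical new flower: sepal length, sepal width, petal length, petal width (cm)
new_flower = np.array([5.0, 3.4, 1.5, 0.2])
new_flower_d = np.array(new_flower >= attribute_means, dtype='int')
print(predict(np.array([new_flower_d]), model))  # predicted class index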

References
《Python数据挖掘入门与实践》 (Learning Data Mining with Python, Robert Layton)
