[AI] Sklearn


Updated over the long term; feel free to follow, bookmark, and like.

Related posts:
Math in AI: linear algebra, calculus, probability, optimization
Python
numpy_pandas_matplotlib_scipy

Suggested path: machine learning -> deep learning -> reinforcement learning


Contents

  • Preprocessing
  • Model selection
  • Classification
    • Example: binary classification competition + grid search
    • Example: MNIST digit classification
  • Regression
  • Clustering
  • Dimensionality reduction
  • Combined example 1: the Iris dataset
  • Combined example 2: eight different algorithms


Sklearn (full name: Scikit-Learn) is a machine learning toolkit for Python. It is built on top of NumPy, SciPy, Pandas, and Matplotlib. Its API is very well designed: every object exposes the same simple interface, which makes it very beginner-friendly.

Official documentation: sklearn
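
Every estimator follows the same small fit/predict/score workflow. A minimal sketch of that shared interface (using the bundled iris data purely as an illustration; any estimator could be dropped in):

from sklearn.datasets import load_iris
from sklearn.model_selection import train_test_split
from sklearn.neighbors import KNeighborsClassifier

X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)

clf = KNeighborsClassifier()       # any other estimator could be swapped in here
clf.fit(X_train, y_train)          # learn from the training data
print(clf.predict(X_test[:5]))     # predict on new samples
print(clf.score(X_test, y_test))   # mean accuracy on the held-out data

Swapping in a different model only changes the constructor line; fit, predict, and score stay the same.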

Preprocessing

Model selection

Classification

Example: binary classification competition + grid search

import numpy as np
import pandas as pd
train_data=pd.read_csv('train_data.csv')
train_data.head()
# train_data
train_data.drop(['ID'],inplace=True,axis=1)
train_data.head()

#split the training data into features and the target to predict
train_X=train_data.iloc[:,train_data.columns!='y']
print(train_X.head())
train_y=train_data.iloc[:,train_data.columns=='y']
print(train_y.head())

test_data=pd.read_csv('test_set.csv')
test_data.head()
test_data.drop(['ID'],inplace=True,axis=1)
test_data.head()

#feature extraction / encoding

#LabelEncoder
#pd.Categorical().codes directly gives the integer code of each original value; see https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Categorical.html
#i.e. it acts as an encoder
c = ['A','A','A','B','B','C','C','C','C']
category = pd.Categorical(c)
#now just inspect the codes of the category object

print(category.codes)  #[0 0 0 1 1 2 2 2 2]
print(category.dtype) #category

#pd.factorize also performs this kind of encoding
job_feature=train_X['job'].unique() #unique values
# print(job_feature)
len(job_feature)
example=train_X
example['job'],uniques=pd.factorize(example['job'])
#pd.factorize:Encode the object as an enumerated type or categorical variable.
print(pd.factorize(example['job']))
# print(example['job'])
# example.head()

train_X['job']=train_X['job']+1

marital_feature=train_X['marital'].unique()
print(marital_feature)
len(marital_feature)

train_X['marital'],unique=pd.factorize(train_X['marital'])
train_X['marital']=train_X['marital']+1
train_X.head()

education_feature=train_X['education'].unique()
print(education_feature)
len(education_feature)

train_X['education'],unique=pd.factorize(train_X['education'])
train_X['education']=train_X['education']+1
train_X.head()

contact_feature=train_X['contact'].unique()
print(contact_feature)
len(contact_feature)

train_X['contact'],unique=pd.factorize(train_X['contact'])
train_X['contact']=train_X['contact']+1
train_X.head()

month_feature=train_X['month'].unique()
print(month_feature)
len(month_feature)

train_X['month'],unique=pd.factorize(train_X['month'])
train_X['month']=train_X['month']+1
train_X.head()

poutcome_feature=train_X['poutcome'].unique()
print(poutcome_feature)
len(poutcome_feature)

train_X['poutcome'],unique=pd.factorize(train_X['poutcome'])
train_X['poutcome']=train_X['poutcome']+1
train_X.head()

default_feature=train_X['default'].unique()
print(default_feature)
len(default_feature)

train_X['default'],unique=pd.factorize(train_X['default'])
train_X['default']=train_X['default']+1
train_X.head()

housing_feature=train_X['housing'].unique()
print(housing_feature)
len(housing_feature)
train_X['housing'],unique=pd.factorize(train_X['housing'])
train_X['housing']=train_X['housing']+1
train_X.head()

loan_feature=train_X['loan'].unique()
print(loan_feature)
len(loan_feature)
train_X['loan'],unique=pd.factorize(train_X['loan'])
train_X['loan']=train_X['loan']+1
train_X.head()

#encode the test-set columns the same way
test_data.head()
test_data['job'],jnum=pd.factorize(test_data['job'])
test_data['job']=test_data['job']+1
test_data.head()

test_data['marital'],jnum=pd.factorize(test_data['marital'])
test_data['marital']=test_data['marital']+1

test_data['education'],jnum=pd.factorize(test_data['education'])
test_data['education']=test_data['education']+1

test_data['default'],jnum=pd.factorize(test_data['default'])
test_data['default']=test_data['default']+1

test_data['housing'],jnum=pd.factorize(test_data['housing'])
test_data['housing']=test_data['housing']+1

test_data['loan'],jnum=pd.factorize(test_data['loan'])
test_data['loan']=test_data['loan']+1

test_data['contact'],jnum=pd.factorize(test_data['contact'])
test_data['contact']=test_data['contact']+1

test_data['month'],jnum=pd.factorize(test_data['month'])
test_data['month']=test_data['month']+1

test_data['poutcome'],jnum=pd.factorize(test_data['poutcome'])
test_data['poutcome']=test_data['poutcome']+1

test_data.head()

#LogisticRegression
from sklearn.linear_model import LogisticRegression
LR=LogisticRegression()
LR.fit(train_X,train_y)
#predict on the test set
test_y=LR.predict(test_data)
test_y
df_test=pd.read_csv('test_set.csv')
df_test['pred']=test_y.tolist()
df_result=df_test.loc[:,['ID','pred']]#save res
df_result.to_csv('LR.csv',index=False)

#SVM
from sklearn.svm import LinearSVC
classifierSVM=LinearSVC()
classifierSVM.fit(train_X,train_y)
test_ySVM=classifierSVM.predict(test_data)
df_test=pd.read_csv('test_set.csv')
df_test['pred']=test_ySVM.tolist()
df_result=df_test.loc[:,['ID','pred']]
df_result.to_csv('LSVM.csv',index=False)

#knn (filled in to define test_yKNN used in the average below; default hyperparameters assumed)
from sklearn.neighbors import KNeighborsClassifier
classifierKNN=KNeighborsClassifier()
classifierKNN.fit(train_X,train_y.values.ravel())#flatten labels to a 1-d array
test_yKNN=classifierKNN.predict(test_data)

#decision tree (filled in to define test_yTree used in the average below; default hyperparameters assumed)
from sklearn.tree import DecisionTreeClassifier
classifierTree=DecisionTreeClassifier()
classifierTree.fit(train_X,train_y.values.ravel())
test_yTree=classifierTree.predict(test_data)

#average prediction
test_yAver=(test_y+test_ySVM+test_yKNN+test_yTree)/4
test_yAver #array([0.  , 0.  , 0.  , ..., 0.25, 0.  , 0.25])
df_test=pd.read_csv('test_set.csv')
df_test['pred']=test_yAver.tolist()
df_result=df_test.loc[:,['ID','pred']]
df_result.to_csv('Aver.csv',index=False)

#improve generalization by tuning hyperparameters
'''
GridSearchCV
Exhaustive search over specified parameter values for an estimator.
The parameters of the estimator used to apply these methods are
optimized by cross-validated grid-search over a parameter grid.

param_grid:
e.g. {'n_estimators':list(range(10,401,10))}
In each round, params takes one value from the list in order, e.g. {'n_estimators': x}.
Dictionary with parameter names (str) as keys and lists of parameter settings to try as values, or a list of such dictionaries, in which case the grids spanned by each dictionary in the list are explored. This enables searching over any sequence of parameter settings.

scoring: strategy to evaluate the performance of the cross-validated model on the test set.

cv: determines the cross-validation splitting strategy.

n_estimators: the number of boosting stages (trees) to perform.
Gradient boosting is fairly robust to over-fitting,
so a large number usually results in better performance.
Values must be in the range [1, inf).

min_samples_split:
The minimum number of samples required to split an internal node.

min_samples_leaf:
The minimum number of samples required to be at a leaf node.
A split point at any depth will only be considered if it
leaves at least min_samples_leaf training samples in each of the left
and right branches.
This may have the effect of smoothing the model, especially in regression.
--------------
GradientBoostingClassifier
A boosted ensemble of decision trees (DT).
subsample: the fraction of samples to be used for fitting the individual base learners.

max_features: the number of features to consider when looking for the best split.
Choosing max_features < n_features leads to a reduction of variance and an increase in bias.
The search for a split does not stop until at least one valid partition of the node samples is found,
even if it requires effectively inspecting more than max_features features.

random_state: controls the random seed given to each tree estimator at each boosting iteration. In addition, it controls the random permutation of the features at each split (see Notes for more details).
'''
from sklearn.model_selection import GridSearchCV
from sklearn.ensemble import GradientBoostingClassifier
from sklearn import metrics
train_y2=train_y.values.ravel()#flattened label array (assumed here; train_y2 is used below but was not defined in the original)

param_test1={'n_estimators':list(range(10,401,10))}#grid search over the number of boosting stages
gsearch1=GridSearchCV(estimator=GradientBoostingClassifier(learning_rate=0.1,max_features=None,
                                                          subsample=0.8,random_state=10),param_grid=param_test1,
                                                          scoring='roc_auc',cv=3)#the old iid argument has been removed from sklearn
gsearch1.fit(train_X.values,train_y2)
gsearch1.cv_results_,gsearch1.best_params_,gsearch1.best_score_#grid_scores_ in old sklearn versions
##{'n_estimators': 350}, 0.8979275309747781)
## with a suitable number of boosting stages fixed, start tuning the tree parameters
'''
cv_results_ (grid_scores_ in very old sklearn versions):
mean/std/params for every candidate setting

best_params_:
e.g. {'n_estimators': 350}, i.e. 350 boosting stages here
Parameter setting that gave the best results on the hold out data.

best_score_:
Mean cross-validated score of the best_estimator
'''
param_test2={'max_depth':list(range(3,14,2)),'min_samples_split':list(range(20,100,10))}#grid search over max_depth (with a coarse min_samples_split)
gsearch2=GridSearchCV(estimator=GradientBoostingClassifier(learning_rate=0.1,n_estimators=350,min_samples_leaf=20,
                                                           max_features=None,subsample=0.8,random_state=10),
                      param_grid=param_test2,scoring='roc_auc',cv=3)
gsearch2.fit(train_X.values,train_y2)
gsearch2.cv_results_,gsearch2.best_params_,gsearch2.best_score_
#{'max_depth': 3, 'min_samples_split': 90}, 0.8973756708021962)

'''
The tree depth found above can be fixed,
but min_samples_split (the minimum number of samples required to split a node) cannot be settled yet,
because it interacts with the other tree parameters.

Next, tune min_samples_split together with min_samples_leaf (the minimum number of samples at a leaf).
'''
param_test3={'min_samples_split':list(range(80,1080,100)),'min_samples_leaf':list(range(60,101,10))}
gsearch3=GridSearchCV(estimator=GradientBoostingClassifier(learning_rate=0.1,n_estimators=350,max_depth=3,
                                                          max_features=None,subsample=0.8,random_state=10),
                     param_grid=param_test3,scoring='roc_auc',cv=3)
gsearch3.fit(train_X.values,train_y2)
gsearch3.cv_results_,gsearch3.best_params_,gsearch3.best_score_
##{'min_samples_leaf': 60, 'min_samples_split': 280}, 0.8976660805899851)

##after tuning, plug the parameters into a GBDT and check the results
gbm1=GradientBoostingClassifier(learning_rate=0.1,n_estimators=350,max_depth=3,min_samples_leaf=60,min_samples_split=280,
                                max_features=None,subsample=0.8,random_state=10)
gbm1.fit(train_X.values,train_y2)
y_pred=gbm1.predict(train_X)
y_predprob=gbm1.predict_proba(train_X)[:,1]
print("Accuracy : %.4g" % metrics.accuracy_score(train_y.values,y_pred))
print("AUC score(Train):%f" % metrics.roc_auc_score(train_y,y_predprob))

## grid search over max_features
param_test4={'max_features':list(range(4,16,2))}
gsearch4=GridSearchCV(estimator=GradientBoostingClassifier(learning_rate=0.1,n_estimators=350,max_depth=3,min_samples_leaf=60,
                                                          min_samples_split=280,subsample=0.8,random_state=10),
                     param_grid=param_test4,scoring='roc_auc',cv=3)
gsearch4.fit(train_X.values,train_y2)
gsearch4.cv_results_,gsearch4.best_params_,gsearch4.best_score_
## {'max_features': 14}, 0.8971037288653009)

## grid search over the subsample fraction
param_test5={'subsample':[0.6,0.7,0.75,0.8,0.85,0.9]}
gsearch5=GridSearchCV(estimator=GradientBoostingClassifier(learning_rate=0.1,n_estimators=350,max_depth=3,min_samples_leaf=60,
                                                          min_samples_split=280,max_features=14,random_state=10),
                     param_grid=param_test5,scoring='roc_auc',cv=3)
gsearch5.fit(train_X.values,train_y2)
gsearch5.cv_results_,gsearch5.best_params_,gsearch5.best_score_
##{'subsample': 0.85}, 0.8976770026809427)

#with essentially all tuned parameters in hand, halve the learning rate and double the number of boosting stages to improve generalization
gbm2=GradientBoostingClassifier(learning_rate=0.05,n_estimators=350,max_depth=3,min_samples_leaf=60,min_samples_split=280,
                               max_features=14,subsample=0.85,random_state=10)
gbm2.fit(train_X.values,train_y2)
y_pred=gbm2.predict(train_X)
y_predprob=gbm2.predict_proba(train_X)[:,1]
print("Accuracy : %.4g" % metrics.accuracy_score(train_y.values,y_pred))
print("AUC Score(Train): %f" % metrics.roc_auc_score(train_y,y_predprob))

gbm5=GradientBoostingClassifier(learning_rate=0.05,n_estimators=700,max_depth=3,min_samples_leaf=60,min_samples_split=280,
                               max_features=14,subsample=0.85,random_state=10)
gbm5.fit(train_X.values,train_y2)
y_pred=gbm5.predict(train_X)
y_predprob=gbm5.predict_proba(train_X)[:,1]
print("Accuracy : %.4g" % metrics.accuracy_score(train_y.values,y_pred))
print("AUC Score(Train): %f" % metrics.roc_auc_score(train_y,y_predprob))

#keep shrinking the learning rate and increasing the number of stages
gbm3=GradientBoostingClassifier(learning_rate=0.01,n_estimators=350,max_depth=3,min_samples_leaf=60,min_samples_split=280,
                               max_features=14,subsample=0.85,random_state=10)
gbm3.fit(train_X.values,train_y2)
y_pred=gbm3.predict(train_X)
y_predprob=gbm3.predict_proba(train_X)[:,1]
print("Accuracy : %.4g" % metrics.accuracy_score(train_y.values,y_pred))
print("AUC Score(Train): %f" % metrics.roc_auc_score(train_y,y_predprob))

#keep shrinking the learning rate and increasing the number of stages
gbm4=GradientBoostingClassifier(learning_rate=0.01,n_estimators=600,max_depth=3,min_samples_leaf=60,min_samples_split=280,
                               max_features=14,subsample=0.85,random_state=10)
gbm4.fit(train_X.values,train_y2)
y_pred=gbm4.predict(train_X)
y_predprob=gbm4.predict_proba(train_X)[:,1]
print("Accuracy : %.4g" % metrics.accuracy_score(train_y.values,y_pred))
print("AUC Score(Train): %f" % metrics.roc_auc_score(train_y,y_predprob))

#keep shrinking the learning rate and increasing the number of stages
gbm6=GradientBoostingClassifier(learning_rate=0.005,n_estimators=1200,max_depth=3,min_samples_leaf=60,min_samples_split=280,
                               max_features=14,subsample=0.85,random_state=10)
gbm6.fit(train_X.values,train_y2)
y_pred=gbm6.predict(train_X)
y_predprob=gbm6.predict_proba(train_X)[:,1]
print("Accuracy : %.4g" % metrics.accuracy_score(train_y.values,y_pred))
print("AUC Score(Train): %f" % metrics.roc_auc_score(train_y,y_predprob))

gbm7=GradientBoostingClassifier(learning_rate=0.05,n_estimators=1200,max_depth=3,min_samples_leaf=60,min_samples_split=280,
                               max_features=14,subsample=0.85,random_state=10)
gbm7.fit(train_X.values,train_y2)
y_pred=gbm7.predict(train_X)
y_predprob=gbm7.predict_proba(train_X)[:,1]
print("Accuracy : %.4g" % metrics.accuracy_score(train_y.values,y_pred))
print("AUC Score(Train): %f" % metrics.roc_auc_score(train_y,y_predprob))

gbm8=GradientBoostingClassifier(learning_rate=0.01,n_estimators=1200,max_depth=3,min_samples_leaf=60,min_samples_split=280,
                               max_features=14,subsample=0.85,random_state=10)
gbm8.fit(train_X.values,train_y2)
y_pred=gbm8.predict(train_X)
y_predprob=gbm8.predict_proba(train_X)[:,1]
print("Accuracy : %.4g" % metrics.accuracy_score(train_y.values,y_pred))
print("AUC Score(Train): %f" % metrics.roc_auc_score(train_y,y_predprob))

#after all the tuning, gbm7 has the highest accuracy (0.954668), so keep that one and save its predictions
test_y_predprob=gbm7.predict_proba(test_data)[:,1]
df_test['pred']=test_y_predprob.tolist()
df_result=df_test.loc[:,['ID','pred']]
df_result.to_csv('GBDToptimiza.csv',index=False)

Example: MNIST digit classification

This example uses logistic regression.
Note that the accuracy of this l1-penalized linear model is significantly below what can be reached with an l2-penalized linear model or a non-linear multi-layer perceptron on this dataset (a sketch of those two alternatives follows the example below).

# Authors: The scikit-learn developers
# SPDX-License-Identifier: BSD-3-Clause

import time

import matplotlib.pyplot as plt
import numpy as np
import pandas as pd
from sklearn.datasets import fetch_openml
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler
from sklearn.utils import check_random_state

# Turn down for faster convergence
t0 = time.time()
train_samples = 10000

# Load data from https://www.openml.org/d/554
X, y = fetch_openml("mnist_784", version=1, return_X_y=True, as_frame=False)
#X, y are ndarrays: X holds the 70000 flattened images (784 pixels each), y holds the labels

random_state = check_random_state(0)#returns a numpy RandomState instance
permutation = random_state.permutation(X.shape[0])#a random permutation of the 70000 indices
X = X[permutation]#shuffle images and labels with the same permutation
y = y[permutation]
#X = X.reshape((X.shape[0], -1)) #not actually needed; X is already 70000 x 784

X_train, X_test, y_train, y_test = train_test_split(
    X, y, train_size=train_samples, test_size=10000
)

scaler = StandardScaler()#standardize both sets (fit on the training set, then transform both)
X_train = scaler.fit_transform(X_train)
X_test = scaler.transform(X_test)

# Turn up tolerance for faster convergence
clf = LogisticRegression(C=50.0 / train_samples, penalty="l1", solver="saga", tol=0.1)
#C: inverse of regularization strength; smaller C means stronger regularization
#solver: algorithm to use in the optimization problem; saga suits larger datasets
#tol: tolerance for the stopping criterion
clf.fit(X_train, y_train)
#print(clf.coef_.shape)#the number == 7840
print(np.mean(clf.coef_==0))#fraction of zero weights in coef_; True/False count as 1/0 in the mean
#print(np.sum(clf.coef_==0))
#print(np.sum(clf.coef_!=0))

sparsity = np.mean(clf.coef_ == 0) * 100 #percentage of zero entries in the weight matrix coef_
#used here as a measure of sparsity
#equivalent to np.sum(clf.coef_==0)/(clf.coef_.shape[0]*clf.coef_.shape[1])

score = clf.score(X_test, y_test)
# print('Best C % .4f' % clf.C_)
print("Sparsity with L1 penalty: %.2f%%" % sparsity)
print("Test score with L1 penalty: %.4f" % score)

coef = clf.coef_.copy()
plt.figure(figsize=(10, 5))
scale = np.abs(coef).max()#largest absolute weight, used to set a symmetric color scale

for i in range(10):
    l1_plot = plt.subplot(2, 5, i + 1)#place subplot i+1
    l1_plot.imshow(#the learned weights, reshaped to 28x28, roughly trace each digit's outline
        coef[i].reshape(28, 28),
        interpolation="nearest",#interpolation method for rendering
        cmap=plt.cm.RdBu,
        vmin=-scale,
        vmax=scale,
    )
    l1_plot.set_xticks(())
    l1_plot.set_yticks(())
    l1_plot.set_xlabel("Class %i" % i)
plt.suptitle("Classification vector for...")

run_time = time.time() - t0
print("Example run in %.3f s" % run_time)
plt.show()

(Figure: the learned weight image for each digit class, as plotted by the code above.)
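
As a rough comparison with the note above, here is a minimal sketch (not from the original post; the hyperparameters are illustrative and untuned) that reruns the same pipeline with an l2 penalty and with a small multi-layer perceptron:

from sklearn.neural_network import MLPClassifier

# same data and scaling as above, only the penalty changes
clf_l2 = LogisticRegression(C=50.0 / train_samples, penalty="l2", solver="saga", tol=0.1)
clf_l2.fit(X_train, y_train)
print("Test score with L2 penalty: %.4f" % clf_l2.score(X_test, y_test))

# a small non-linear model for reference (illustrative settings)
mlp = MLPClassifier(hidden_layer_sizes=(100,), max_iter=50, random_state=0)
mlp.fit(X_train, y_train)
print("Test score with MLP: %.4f" % mlp.score(X_test, y_test))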

Regression

Clustering

Dimensionality reduction

Combined example 1: the Iris dataset

#load the iris dataset (ships with seaborn)
import seaborn as sns
iris = sns.load_dataset("iris")

#inspect the data
type(iris)#pandas.core.frame.DataFrame
iris.shape#(150, 5)
iris.head()
iris.info()
iris.describe()
iris.species.value_counts()#number of samples in each of the 3 classes
sns.pairplot(data=iris, hue="species")#pairwise scatter plots of all attributes, colored by species

#data cleaning
iris_simple = iris.drop(["sepal_length", "sepal_width"], axis=1)
iris_simple.head()
#the two sepal columns have been dropped

#label encoding
from sklearn.preprocessing import LabelEncoder
encoder = LabelEncoder()
iris_simple["species"] = encoder.fit_transform(iris_simple["species"])
#encode the species strings as integers

#standardize the dataset
from sklearn.preprocessing import StandardScaler
import pandas as pd
trans = StandardScaler()
_iris_simple = trans.fit_transform(iris_simple[["petal_length", "petal_width"]])
_iris_simple = pd.DataFrame(_iris_simple, columns = ["petal_length", "petal_width"])
_iris_simple.describe()

#build the training and test sets
from sklearn.model_selection import train_test_split
train_set, test_set = train_test_split(iris_simple, test_size=0.2)
test_set.head()

iris_x_train = train_set[["petal_length", "petal_width"]]
iris_x_train.head()

iris_y_train = train_set["species"].copy()
iris_y_train.head()

iris_x_test = test_set[["petal_length", "petal_width"]]
iris_x_test.head()

iris_y_test = test_set["species"].copy()
iris_y_test.head()

Now apply several different machine learning algorithms to the dataset prepared above.

  • k-nearest neighbors (KNN)
from sklearn.neighbors import KNeighborsClassifier
clf = KNeighborsClassifier()#construct a classifier object
clf
clf.fit(iris_x_train, iris_y_train)#train
res = clf.predict(iris_x_test)#predict
print(res)
print(iris_y_test.values)#print for comparison

#inverse transform: decode the ints back to the original class strings
encoder.inverse_transform(res)

#evaluate
accuracy = clf.score(iris_x_test, iris_y_test)
print("Accuracy: {:.0%}".format(accuracy))

#save the results
out = iris_x_test.copy()
out["y"] = iris_y_test
out["pre"] = res #prediction
out
out.to_csv("iris_predict.csv")

#visualization
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt

def draw(clf):
    # build a grid over the feature space
    M, N = 500, 500
    x1_min, x2_min = iris_simple[["petal_length", "petal_width"]].min(axis=0)
    x1_max, x2_max = iris_simple[["petal_length", "petal_width"]].max(axis=0)
    t1 = np.linspace(x1_min, x1_max, M)
    t2 = np.linspace(x2_min, x2_max, N)
    x1, x2 = np.meshgrid(t1, t2)#turn the two vectors into coordinate grids

    # predict on every grid point
    x_show = np.stack((x1.flat, x2.flat), axis=1)#stack the flattened grids column-wise
    y_predict = clf.predict(x_show)
    
    # color maps
    cm_light = mpl.colors.ListedColormap(["#A0FFA0", "#FFA0A0", "#A0A0FF"])
    cm_dark = mpl.colors.ListedColormap(["g", "r", "b"])
    
    # plot the predicted regions
    plt.figure(figsize=(10, 6))
    plt.pcolormesh(t1, t2, y_predict.reshape(x1.shape), cmap=cm_light)
    #Create a pseudocolor plot with a non-regular rectangular grid.
    
    # plot the original data points
    plt.scatter(iris_simple["petal_length"], iris_simple["petal_width"], label=None,
                c=iris_simple["species"], cmap=cm_dark, marker='o', edgecolors='k')
    plt.xlabel("petal_length")
    plt.ylabel("petal_width")
    
    # build the legend
    color = ["g", "r", "b"]
    species = ["setosa", "virginica", "versicolor"]
    for i in range(3):
        plt.scatter([], [], c=color[i], s=40, label=species[i])    # empty scatter calls used only to build the legend
    plt.legend(loc="best")#place the legend at the best position
    plt.title('iris_classifier')

draw(clf)
  • Naive Bayes
    Idea: given X=(x1, x2), find the class yk with the highest posterior probability.
#same steps as before
from sklearn.naive_bayes import GaussianNB
clf = GaussianNB()#construct the classifier
clf.fit(iris_x_train, iris_y_train)#train
res = clf.predict(iris_x_test)#predict
print(res)
print(iris_y_test.values)
accuracy = clf.score(iris_x_test, iris_y_test)#evaluate
print("Accuracy: {:.0%}".format(accuracy))
draw(clf)#visualize
  • Decision tree
    CART: at each step, use one feature to split the data into two groups that are as pure as possible, and recurse.
from sklearn.tree import DecisionTreeClassifier
clf = DecisionTreeClassifier()
clf.fit(iris_x_train, iris_y_train)
res = clf.predict(iris_x_test)
print(res)
print(iris_y_test.values)
accuracy = clf.score(iris_x_test, iris_y_test)
print("预测正确率:{:.0%}".format(accuracy))
draw(clf)
  • Logistic regression
    Training: map the features X=(x1, x2) to probabilities P(y=ck), and find the parameters of that mapping that maximize the product of these probabilities (the likelihood).
    Prediction: compute P(y=ck) for each class and assign the sample to the class with the highest probability.
from sklearn.linear_model import LogisticRegression
clf = LogisticRegression(solver='saga', max_iter=1000)
'''
solver: Algorithm to use in the optimization problem.
Default is ‘lbfgs’.
For small datasets, ‘liblinear’ is a good choice, whereas ‘sag’ and ‘saga’ are faster for large ones;

For multiclass problems, only ‘newton-cg’, ‘sag’, ‘saga’ and ‘lbfgs’ handle multinomial loss;

‘liblinear’ and ‘newton-cholesky’ can only handle binary classification by default. To apply a one-versus-rest scheme for the multiclass setting one can wrap it with the OneVsRestClassifier.

‘newton-cholesky’ is a good choice for n_samples >> n_features, especially with one-hot encoded categorical features with rare categories. Be aware that the memory usage of this solver has a quadratic dependency on n_features because it explicitly computes the Hessian matrix.
'''
clf.fit(iris_x_train, iris_y_train)
res = clf.predict(iris_x_test)
print(res)
print(iris_y_test.values)
accuracy = clf.score(iris_x_test, iris_y_test)
print("预测正确率:{:.0%}".format(accuracy))
draw(clf)
  • Support vector machine
    Taking binary classification as an example, and assuming the data are perfectly separable:
    find a hyperplane that separates the two classes completely, with the largest possible distance from the nearest points to the plane.
from sklearn.svm import SVC   
clf = SVC()
clf #print to inspect the default parameters
clf.fit(iris_x_train, iris_y_train)
res = clf.predict(iris_x_test)
print(res)
print(iris_y_test.values)
accuracy = clf.score(iris_x_test, iris_y_test)
print("预测正确率:{:.0%}".format(accuracy))
draw(clf)
  • Ensemble method: random forest
    From a training set of m samples, draw m samples with replacement to form one bootstrap set; repeat to obtain n such sets.
    Train one weak learner on each of the n sets (the weak learners are usually decision trees or neural networks).
    Combine the n weak learners into a strong classifier.
from sklearn.ensemble import RandomForestClassifier
clf = RandomForestClassifier()
clf
clf.fit(iris_x_train, iris_y_train)
res = clf.predict(iris_x_test)
print(res)
print(iris_y_test.values)
accuracy = clf.score(iris_x_test, iris_y_test)
print("预测正确率:{:.0%}".format(accuracy))
draw(clf)
  • Ensemble method: AdaBoost
    Train the first weak learner with the initial sample weights, compute its coefficient from its error rate, and update the sample weights.
    Train the second weak learner with the new weights, and so on.
    Finally, combine all weak learners into a strong classifier as a weighted sum of their outputs, using their coefficients.
from sklearn.ensemble import AdaBoostClassifier
clf = AdaBoostClassifier()
clf
clf.fit(iris_x_train, iris_y_train)
res = clf.predict(iris_x_test)
print(res)
print(iris_y_test.values)
accuracy = clf.score(iris_x_test, iris_y_test)
print("预测正确率:{:.0%}".format(accuracy))
draw(clf)
  • Ensemble method: gradient boosted decision trees (GBDT)
    Fit a first weak learner on the training set, compute the residuals, and then keep fitting new learners to the residuals.
    The sum of all weak learners is the strong classifier.
    (In statistics, a residual is the difference between an observed value and the estimated/fitted value.)
from sklearn.ensemble import GradientBoostingClassifier
clf = GradientBoostingClassifier()
clf
clf.fit(iris_x_train, iris_y_train)
res = clf.predict(iris_x_test)
print(res)
print(iris_y_test.values)
accuracy = clf.score(iris_x_test, iris_y_test)
print("预测正确率:{:.0%}".format(accuracy))
draw(clf)
  • More common alternative models
    [1] XGBoost
    GBDT expands the loss function only to the negative gradient of the error (a first-order Taylor expansion);
    XGBoost expands the loss to second order, which is more accurate and converges faster.

[2] LightGBM
Microsoft's fast, distributed, high-performance gradient boosting framework based on decision trees; even faster.

[3] Stacking
Also called model fusion:
first train several simple models; a second-level learner is then trained on the predictions of the first-level models (a minimal sketch follows this list).

[4] Neural networks
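
A minimal stacking sketch on the iris split built earlier (assuming iris_x_train / iris_y_train / iris_x_test / iris_y_test from the cells above; the base and final estimators here are illustrative choices, not a recommendation):

from sklearn.ensemble import StackingClassifier, RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier

# first-level (base) learners
estimators = [("knn", KNeighborsClassifier()), ("rf", RandomForestClassifier())]
# the second-level learner is fitted on cross-validated predictions of the base learners
clf = StackingClassifier(estimators=estimators, final_estimator=LogisticRegression())
clf.fit(iris_x_train, iris_y_train)
print("Accuracy: {:.0%}".format(clf.score(iris_x_test, iris_y_test)))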

Combined example 2: eight different algorithms

Apply 8 different algorithms to the same dataset.

import pandas as pd
import numpy as np
import matplotlib as mpl
import matplotlib.pyplot as plt
import pandas_profiling as ppf
import seaborn as sns

def load_data(file_path):
    '''
    Load the data.
    :param file_path: path to the data file
    :return: the data as a list of rows
    '''
    f = open(file_path)
    data = []
    for line in f.readlines():
        row = []  # holds one row of the file
        lines = line.strip().split("\t")
        for x in lines:
            row.append(x)
        data.append(row)
    f.close()
    return data
    
data = load_data('datingTestSet.txt')
# data
data = pd.DataFrame(data, columns=['每年的飞行距离', '玩视频游戏所耗时间的百分比', '每周消费冰激凌的公升数', '喜欢的程度'])

data = data.astype(float)
# data['喜欢的程度'] = data['喜欢的程度'].astype(int)

data['喜欢的程度'].value_counts()#how many rows each label value has

ppf.ProfileReport(data)#generate a profiling report

# on Windows, register a Chinese font so sns.pairplot() can display the Chinese column names
from matplotlib.font_manager import FontProperties
myfont=FontProperties(fname=r'C:\Windows\Fonts\simhei.ttf',size=14)
sns.set(font=myfont.get_name())

sns.pairplot(data=data, hue='喜欢的程度')

#preprocessing: label encoding, missing-value handling, standardization
#this dataset needs no label encoding and has no missing values; only standardization is required
from sklearn.preprocessing import StandardScaler
trans = StandardScaler()
data_simple = trans.fit_transform(data[['每年的飞行距离', '玩视频游戏所耗时间的百分比', '每周消费冰激凌的公升数']])
data_simple = pd.DataFrame(data, columns=['每年的飞行距离', '玩视频游戏所耗时间的百分比', '每周消费冰激凌的公升数'])
data_simple.head(10)

#build the training and test sets
from sklearn.model_selection import train_test_split
train_set, test_set = train_test_split(data, test_size=0.2)
train_set.head()

data_x_train = train_set[['每年的飞行距离', '玩视频游戏所耗时间的百分比', '每周消费冰激凌的公升数']]
data_y_train = train_set['喜欢的程度'].copy()
# data_x_train.head()
data_y_train.head()

data_x_test = test_set[['每年的飞行距离', '玩视频游戏所耗时间的百分比', '每周消费冰激凌的公升数']]
data_y_test = test_set['喜欢的程度'].copy()

# Train the data with 8 different algorithms, evaluate each model on the test set, and save the predictions to local files (a sketch of this loop follows below)
#k-nearest neighbors
#naive Bayes
#decision tree
#logistic regression
#support vector machine
#ensemble: random forest
#ensemble: AdaBoost
#ensemble: gradient boosted trees (GBDT)
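
# A minimal sketch of that loop (illustrative only: default hyperparameters, and the
# output file names below are made up here, not taken from the original post)
from sklearn.neighbors import KNeighborsClassifier
from sklearn.naive_bayes import GaussianNB
from sklearn.tree import DecisionTreeClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC
from sklearn.ensemble import RandomForestClassifier, AdaBoostClassifier, GradientBoostingClassifier

models = {
    'knn': KNeighborsClassifier(),
    'naive_bayes': GaussianNB(),
    'decision_tree': DecisionTreeClassifier(),
    'logistic_regression': LogisticRegression(max_iter=1000),
    'svm': SVC(),
    'random_forest': RandomForestClassifier(),
    'adaboost': AdaBoostClassifier(),
    'gbdt': GradientBoostingClassifier(),
}
for name, model in models.items():
    model.fit(data_x_train, data_y_train)          # train on the training split
    pred = model.predict(data_x_test)              # predict on the test split
    print(name, "accuracy: {:.0%}".format(model.score(data_x_test, data_y_test)))
    out = data_x_test.copy()                       # store features, true label and prediction
    out['y'] = data_y_test
    out['pre'] = pred
    out.to_csv(name + '_predict.csv')              # hypothetical output file name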

#pick one of the better-performing algorithms and compare its performance with and without an unimportant feature
data = data.drop(['每周消费冰激凌的公升数'], axis=1)
data_simple = trans.fit_transform(data[['每年的飞行距离', '玩视频游戏所耗时间的百分比']])
data_simple = pd.DataFrame(data, columns=['每年的飞行距离', '玩视频游戏所耗时间的百分比'])
data_simple.head(10)
# data.head()

train_set, test_set = train_test_split(data, test_size=0.2)
train_set.head()

data_x_train = train_set[['每年的飞行距离', '玩视频游戏所耗时间的百分比']]
data_y_train = train_set['喜欢的程度'].copy()
data_y_train.head()

data_x_test = test_set[['每年的飞行距离', '玩视频游戏所耗时间的百分比']]
data_y_test = test_set['喜欢的程度'].copy()

clf = GradientBoostingClassifier()
clf.fit(data_x_train, data_y_train)
res = clf.predict(data_x_test)#predicted results
accuracy = clf.score(data_x_test, data_y_test)
print("Accuracy: {:.0%}".format(accuracy))

#visualization
def draw(clf):

    # build a grid over the feature space
    M, N = 500, 500
    x1_min, x2_min = data_simple[['每年的飞行距离', '玩视频游戏所耗时间的百分比']].min(axis=0)
    x1_max, x2_max = data_simple[['每年的飞行距离', '玩视频游戏所耗时间的百分比']].max(axis=0)
    t1 = np.linspace(x1_min, x1_max, M)
    t2 = np.linspace(x2_min, x2_max, N)
    x1, x2 = np.meshgrid(t1, t2)
    
    # predict on every grid point
    x_show = np.stack((x1.flat, x2.flat), axis=1)
    y_predict = clf.predict(x_show)
    
    # color maps
    cm_light = mpl.colors.ListedColormap(["#A0FFA0", "#FFA0A0", "#A0A0FF"])
    cm_dark = mpl.colors.ListedColormap(["g", "r", "b"])
    
    # plot the predicted regions
    plt.figure(figsize=(10, 6))
    plt.pcolormesh(t1, t2, y_predict.reshape(x1.shape), cmap=cm_light)
    
    # plot the original data points
    plt.scatter(data_simple["每年的飞行距离"], data_simple["玩视频游戏所耗时间的百分比"], label=None,
                c=data["喜欢的程度"], cmap=cm_dark, marker='o', edgecolors='k')
    plt.xlabel("每年的飞行距离")
    plt.ylabel("玩视频游戏所耗时间的百分比")
    
    # build the legend
    color = ["g", "r", "b"]
    species = ["1", "2", "3"]
    for i in range(3):
        plt.scatter([], [], c=color[i], s=40, label=species[i])    # empty scatter calls used only to build the legend
        #s:The marker size in points**2 (typographic points are 1/72 in.)
    plt.legend(loc="best")
    plt.title('data_classifier')
