Machine Learning: Gradient Boosting Algorithms | Python and R Code Implementations


Gradient Boosting Algorithms

10.1 GBM

GBM (Gradient Boosting Machine) is a boosting algorithm used when we deal with plenty of data and need predictions with high predictive power. Boosting is an ensemble of learning algorithms that combines the predictions of several base estimators in order to improve robustness over a single estimator; it combines multiple weak or average predictors into one strong predictor. These boosting algorithms consistently perform well in data science competitions like Kaggle, AV Hackathon, and CrowdAnalytix.
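To make the residual-fitting idea concrete, here is a minimal from-scratch sketch of gradient boosting for regression under squared loss; the data and variable names are purely illustrative, not part of any library API:

# A minimal gradient boosting sketch: each stage fits a shallow tree
# to the current residuals (the negative gradient of squared loss).
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.RandomState(0)
X = rng.rand(200, 1)
y = np.sin(4 * X[:, 0]) + rng.normal(scale=0.1, size=200)

learning_rate = 0.1
pred = np.full_like(y, y.mean())  # stage 0: constant prediction
for _ in range(100):
    residual = y - pred                       # what the next tree must explain
    stump = DecisionTreeRegressor(max_depth=1).fit(X, residual)
    pred += learning_rate * stump.predict(X)  # stagewise additive update

print("training MSE:", np.mean((y - pred) ** 2))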

Python Code

#Import library
from sklearn.ensemble import GradientBoostingClassifier
#Assumes you have X (predictors) and y (target) for the training set and x_test (predictors) for the test set
# Create a Gradient Boosting Classifier object
model = GradientBoostingClassifier(n_estimators=100, learning_rate=1.0, max_depth=1, random_state=0)
# Train the model using the training set
model.fit(X, y)
#Predict output
predicted = model.predict(x_test)

R Code

library(caret)
x <- cbind(x_train, y_train)
# Fitting the model
fitControl <- trainControl(method = "repeatedcv", number = 4, repeats = 4)
fit <- train(y ~ ., data = x, method = "gbm", trControl = fitControl, verbose = FALSE)
predicted <- predict(fit, x_test, type = "prob")[,2]

Gradient Boosting Classifier and Random Forest are two different tree-based ensemble classifiers (the former uses boosting, the latter bagging), and people often ask about the difference between the two algorithms.
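The difference is easiest to see side by side; below is a sketch on synthetic data (the dataset and settings are illustrative only) contrasting boosting, where trees are built sequentially to correct earlier errors, with bagging, where independently grown trees are averaged:

# Boosting (sequential error correction) vs. bagging (averaging) on the same data
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier, RandomForestClassifier
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1000, n_features=20, random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

gbm = GradientBoostingClassifier(n_estimators=100, max_depth=3).fit(X_tr, y_tr)
rf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

print("Gradient boosting accuracy:", gbm.score(X_te, y_te))
print("Random forest accuracy:   ", rf.score(X_te, y_te))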

10.2 XGBoost

Another classic gradient boosting algorithm, well known as the decisive choice in many Kaggle competitions.

XGBoost has extremely high predictive power, which makes it a top choice where accuracy matters. It provides both a linear model solver and a tree learning algorithm, and its implementation is nearly ten times faster than earlier gradient boosting machines.

It supports a variety of objective functions, including regression, classification, and ranking.

One of the most interesting things about XGBoost is that it is also known as a regularized boosting technique. This helps reduce overfitting, and the library offers broad language support, including Scala, Java, R, Python, Julia, and C++.

It supports distributed training across many machines, including GCE, AWS, Azure, and YARN clusters. XGBoost can also be integrated with Spark, Flink, and other cloud dataflow systems, and has built-in cross-validation at each iteration of the boosting process.
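As a sketch of that built-in cross-validation (all parameter values here are illustrative, not tuned recommendations; 'lambda' and 'alpha' are the L2/L1 regularization terms behind the "regularized boosting" label above):

# k-fold cross-validation built into XGBoost's native API
import numpy as np
import xgboost as xgb

X = np.random.rand(500, 10)
y = np.random.randint(2, size=500)
dtrain = xgb.DMatrix(X, label=y)

params = {'objective': 'binary:logistic', 'max_depth': 3, 'eta': 0.1,
          'lambda': 1.0, 'alpha': 0.0}  # L2 and L1 regularization
cv_results = xgb.cv(params, dtrain, num_boost_round=100, nfold=5,
                    metrics='auc', early_stopping_rounds=10, seed=0)
print(cv_results.tail())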

To learn more about XGBoost and parameter tuning, visit www.analyticsvidhya.com/blog/2016/0…

Python Code:

from xgboost import XGBClassifier
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score
# Assumes `dataset` is already loaded as a NumPy array with
# 10 feature columns followed by the target column
X = dataset[:, 0:10]
Y = dataset[:, 10]
seed = 1

X_train, X_test, y_train, y_test = train_test_split(X, Y, test_size=0.33, random_state=seed)

model = XGBClassifier()
model.fit(X_train, y_train)

# Make predictions for the test data and check accuracy
y_pred = model.predict(X_test)
print(accuracy_score(y_test, y_pred))

R Code

require(caret)
x <- cbind(x_train, y_train)

# Fitting the model
TrainControl <- trainControl(method = "repeatedcv", number = 10, repeats = 4)
model <- train(y ~ ., data = x, method = "xgbLinear", trControl = TrainControl, verbose = FALSE)

# or, using the tree booster:
model <- train(y ~ ., data = x, method = "xgbTree", trControl = TrainControl, verbose = FALSE)

predicted <- predict(model, x_test)

10.3 LightGBM

LightGBM is a gradient boosting framework that uses tree-based learning algorithms. It is designed to be distributed and efficient, with the following advantages:

  1. Faster training speed and higher efficiency
  2. Lower memory usage
  3. Better accuracy
  4. Support for parallel and GPU learning
  5. Capable of handling large-scale data

The framework is a fast, high-performance gradient boosting framework based on decision tree algorithms, used for ranking, classification, and many other machine learning tasks. It was developed under Microsoft's Distributed Machine Learning Toolkit project.

Since LightGBM is based on decision tree algorithms, it splits the tree leaf-wise with the best fit, whereas other boosting algorithms split the tree level-wise (depth by depth) rather than leaf by leaf. When growing on the same leaf, the leaf-wise algorithm can therefore reduce more loss than the level-wise algorithm, which yields accuracy that existing boosting algorithms rarely match.
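In LightGBM's scikit-learn wrapper, this leaf-wise growth is controlled mainly through num_leaves, optionally capped by max_depth; a minimal sketch with illustrative values on random data:

# num_leaves is the main complexity knob for leaf-wise growth; keeping it
# well below 2**max_depth is the usual guard against overfitting.
import numpy as np
import lightgbm as lgb

X = np.random.rand(500, 10)
y = np.random.randint(2, size=500)

clf = lgb.LGBMClassifier(num_leaves=31, max_depth=-1,  # -1 means no depth cap
                         n_estimators=100, learning_rate=0.1)
clf.fit(X, y)
print(clf.predict(X[:5]))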

It is also surprisingly fast, hence the word "Light".

Refer to this article to learn more about LightGBM: www.analyticsvidhya.com/blog/2017/0…

Python Code

import numpy as np
import lightgbm as lgb

data = np.random.rand(500, 10)  # 500 entities, each containing 10 features
label = np.random.randint(2, size=500)  # binary target

train_data = lgb.Dataset(data, label=label)
# Create validation data from a file (assumes 'test.svm' exists on disk)
test_data = train_data.create_valid('test.svm')

param = {'num_leaves': 31, 'num_trees': 100, 'objective': 'binary'}
param['metric'] = 'auc'

num_round = 10
bst = lgb.train(param, train_data, num_round, valid_sets=[test_data])

bst.save_model('model.txt')

# 7 entities, each containing 10 features
data = np.random.rand(7, 10)
ypred = bst.predict(data)
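The saved model can later be reloaded without retraining; a brief sketch of that round trip:

# Reload the booster written by save_model above and predict with it
import lightgbm as lgb

bst = lgb.Booster(model_file='model.txt')  # file written by bst.save_model above
ypred = bst.predict(data)  # `data` as defined in the snippet above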

R Code

library(RLightGBM)
data(example.binary)
#Parameters

num_iterations <- 100
config <- list(objective = "binary", metric = "binary_logloss,auc", learning_rate = 0.1,
               num_leaves = 63, tree_learner = "serial", feature_fraction = 0.8,
               bagging_freq = 5, bagging_fraction = 0.8, min_data_in_leaf = 50,
               min_sum_hessian_in_leaf = 5.0)

#Create data handle and booster
handle.data <- lgbm.data.create(x)

lgbm.data.setField(handle.data, "label", y)

handle.booster <- lgbm.booster.create(handle.data, lapply(config, as.character))

#Train for num_iterations iterations and eval every 5 steps

lgbm.booster.train(handle.booster, num_iterations, 5)

#Predict
pred <- lgbm.booster.predict(handle.booster, x.test)

#Test accuracy
sum(y.test == (pred > 0.5)) / length(y.test)

#Save model (can be loaded again via lgbm.booster.load(filename))
lgbm.booster.save(handle.booster, filename = "/tmp/model.txt")

If you're familiar with the caret package in R, here is another way of implementing LightGBM:

require(caret)
require(RLightGBM)
data(iris)

model <- caretModel.LGBM()

fit <- train(Species ~ ., data = iris, method = model, verbosity = 0)
print(fit)
y.pred <- predict(fit, iris[,1:4])

library(Matrix)
model.sparse <- caretModel.LGBM.sparse()

#Generate a sparse matrix
mat <- Matrix(as.matrix(iris[,1:4]), sparse = T)
fit <- train(data.frame(idx = 1:nrow(iris)), iris$Species, method = model.sparse, matrix = mat, verbosity = 0)
print(fit)

10.4 CatBoost

CatBoost is an open-source machine learning algorithm from Yandex, Russia's largest search engine company. It can easily integrate with deep learning frameworks such as Google's TensorFlow and Apple's Core ML.

The best part about CatBoost is that it does not require extensive data training like other ML models, and it can work on a variety of data formats without undermining its robustness.

Before implementing it, be sure to handle missing data well.

CatBoost can automatically deal with categorical variables without throwing type conversion errors, which helps you focus on tuning your model rather than sorting out trivial errors.
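To see that categorical handling in isolation, here is a minimal sketch on a toy DataFrame (the column names and values are made up for illustration):

# CatBoost consumes raw string categories directly via cat_features;
# no one-hot or label encoding step is required.
import pandas as pd
from catboost import CatBoostClassifier

df = pd.DataFrame({'color': ['red', 'blue', 'red', 'green'] * 10,
                   'size': [1.0, 2.0, 3.0, 4.0] * 10,
                   'label': [0, 1, 0, 1] * 10})
model = CatBoostClassifier(iterations=10, verbose=False)
model.fit(df[['color', 'size']], df['label'], cat_features=['color'])
print(model.predict(df[['color', 'size']][:5]))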

Learn more about CatBoost from this article: www.analyticsvidhya.com/blog/2017/08…

Python Code

import pandas as pd
import numpy as np

from catboost import CatBoostRegressor

#Read training and testing files
train = pd.read_csv("train.csv")
test = pd.read_csv("test.csv")

#Impute missing values for both train and test
train.fillna(-999, inplace=True)
test.fillna(-999, inplace=True)

#Creating a training set for modeling and validation set to check model performance
X = train.drop(['Item_Outlet_Sales'], axis=1)
y = train.Item_Outlet_Sales

from sklearn.model_selection import train_test_split

X_train, X_validation, y_train, y_validation = train_test_split(X, y, train_size=0.7, random_state=1234)
categorical_features_indices = np.where(X.dtypes != np.float64)[0]

#Building the model (CatBoostRegressor was imported above)
model = CatBoostRegressor(iterations=50, depth=3, learning_rate=0.1, loss_function='RMSE')

model.fit(X_train, y_train, cat_features=categorical_features_indices,
          eval_set=(X_validation, y_validation), plot=True)

submission = pd.DataFrame()

submission['Item_Identifier'] = test['Item_Identifier']
submission['Outlet_Identifier'] = test['Outlet_Identifier']
submission['Item_Outlet_Sales'] = model.predict(test)

R Code

set.seed(1)
require(titanic)
require(caret)
require(catboost)

tt <- titanic::titanic_train[complete.cases(titanic::titanic_train),]
data <- as.data.frame(as.matrix(tt), stringsAsFactors = TRUE)
drop_columns = c("PassengerId", "Survived", "Name", "Ticket", "Cabin")
x <- data[, !(names(data) %in% drop_columns)]
y <- data[, c("Survived")]

fit_control <- trainControl(method = "cv", number = 4, classProbs = TRUE)

grid <- expand.grid(depth = c(4, 6, 8), learning_rate = 0.1, iterations = 100,
                    l2_leaf_reg = 1e-3, rsm = 0.95, border_count = 64)

report <- train(x, as.factor(make.names(y)), method = catboost.caret,
                verbose = TRUE, preProc = NULL, tuneGrid = grid, trControl = fit_control)

print(report)

importance <- varImp(report, scale = FALSE)

print(importance)