LightGBM Parameter Analysis and Finding Regression Hyperparameters
References: the LightGBM docs on GitHub: https://github.com/Microsoft/LightGBM/blob/master/docs/Parameters.rst. The code comes from another post of mine: https://blog.csdn.net/ssswill/article/details/85217702

Grid search for hyperparameters:

```python
import lightgbm as lgb
from sklearn.model_selection import (cross_val_score, train_test_split,
                                     GridSearchCV, RandomizedSearchCV)
from sklearn.metrics import r2_score
from lightgbm.sklearn import LGBMRegressor

hyper_space = {'n_estimators': [1000, 1500, 2000, 2500],
               'max_depth': [4, 5, 8, -1],
               'num_leaves': [15, 31, 63, 127],
               'subsample': [0.6, 0.7, 0.8, 1.0],
               'colsample_bytree': [0.6, 0.7, 0.8, 1.0],
               'learning_rate': [0.01, 0.02, 0.03]}

est = lgb.LGBMRegressor(n_jobs=-1, random_state=2018)
gs = GridSearchCV(est, hyper_space, scoring='r2', cv=4, verbose=1)
gs_results = gs.fit(train_X, train_y)
print("BEST PARAMETERS: " + str(gs_results.best_params_))
print("BEST CV SCORE: " + str(gs_results.best_score_))
```

K-fold cross-validated training:

```python
import numpy as np
import pandas as pd
import lightgbm as lgb
from sklearn.model_selection import KFold
from sklearn.metrics import mean_squared_error

lgb_params = {"objective": "regression",
              "metric": "rmse",
              "max_depth": 7,
              "min_child_samples": 20,
              "reg_alpha": 1,
              "reg_lambda": 1,
              "num_leaves": 64,
              "learning_rate": 0.01,
              "subsample": 0.8,
              "colsample_bytree": 0.8,
              "verbosity": -1}

FOLDs = KFold(n_splits=5, shuffle=True, random_state=42)

oof_lgb = np.zeros(len(train_X))          # out-of-fold predictions
predictions_lgb = np.zeros(len(test_X))   # averaged test-set predictions

features_lgb = list(train_X.columns)
feature_importance_df_lgb = pd.DataFrame()

for fold_, (trn_idx, val_idx) in enumerate(FOLDs.split(train_X)):
    trn_data = lgb.Dataset(train_X.iloc[trn_idx], label=train_y.iloc[trn_idx])
    val_data = lgb.Dataset(train_X.iloc[val_idx], label=train_y.iloc[val_idx])

    print("-" * 20 + "LGB Fold:" + str(fold_) + "-" * 20)
    num_round = 10000
    clf = lgb.train(lgb_params, trn_data, num_round,
                    valid_sets=[trn_data, val_data],
                    verbose_eval=1000, early_stopping_rounds=50)
    oof_lgb[val_idx] = clf.predict(train_X.iloc[val_idx],
                                   num_iteration=clf.best_iteration)

    # per-fold feature importances, stacked into one DataFrame
    fold_importance_df_lgb = pd.DataFrame()
    fold_importance_df_lgb["feature"] = features_lgb
    fold_importance_df_lgb["importance"] = clf.feature_importance()
    fold_importance_df_lgb["fold"] = fold_ + 1
    feature_importance_df_lgb = pd.concat([feature_importance_df_lgb,
                                           fold_importance_df_lgb], axis=0)

    # average the test-set predictions across folds
    predictions_lgb += clf.predict(test_X,
                                   num_iteration=clf.best_iteration) / FOLDs.n_splits

print("Best RMSE: ", np.sqrt(mean_squared_error(oof_lgb, train_y)))
```

Output:

"n_estimators": as the parameter documentation shows, n_estimators is an alias for num_iterations, which defaults to 100. It is the number of boosting iterations, i.e. the number of trees. A note in the docs adds: for multiclass problems, the number of trees built is the number of classes times the number you set.

"max_depth": the maximum depth of each tree.

Now look at these two lines of code again:

```python
est = lgb.LGBMRegressor(n_jobs=-1, random_state=2018)
gs = GridSearchCV(est, hyper_space, scoring='r2', cv=4, verbose=1)
```
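Before running a grid like the one above, it is worth counting how many fits it implies: GridSearchCV trains one model per parameter combination per CV fold. A quick back-of-the-envelope count in plain Python (no LightGBM needed), using the same `hyper_space` and `cv=4` as above:

```python
# Count the model fits GridSearchCV will perform for the hyper_space above.
hyper_space = {'n_estimators': [1000, 1500, 2000, 2500],
               'max_depth': [4, 5, 8, -1],
               'num_leaves': [15, 31, 63, 127],
               'subsample': [0.6, 0.7, 0.8, 1.0],
               'colsample_bytree': [0.6, 0.7, 0.8, 1.0],
               'learning_rate': [0.01, 0.02, 0.03]}

# one combination per element of the Cartesian product of all value lists
n_combos = 1
for values in hyper_space.values():
    n_combos *= len(values)

cv_folds = 4
n_fits = n_combos * cv_folds
print(n_combos)  # 3072 parameter combinations
print(n_fits)    # 12288 model fits in total
```

With thousands of fits, RandomizedSearchCV (already imported above) with a fixed `n_iter` budget is often the more practical choice for a first pass.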
Below is a similar walkthrough, though not of the LightGBM parameters. Update (2018-12-27):