Matlab: trainingOptions() Explained (Training Options)
trainingOptions is used to set the training strategy and hyperparameters of a neural network.

Usage: `options = trainingOptions(solverName,Name,Value)`

Here `solverName` is the optimizer, and the Name-Value arguments are key-value pairs; the function returns a TrainingOptions object. An example makes this clearer:

```matlab
>> options = trainingOptions('sgdm', ...
    'MiniBatchSize',512, ...
    'MaxEpochs',2, ...
    'InitialLearnRate',1e-3, ...
    'Shuffle','every-epoch', ...
    'ValidationData',imdsValidation, ...
    'ValidationFrequency',3, ...
    'Verbose',false, ...
    'Plots','training-progress');
>> options

options = 
  TrainingOptionsSGDM - Properties:

                    Momentum: 0.9000
            InitialLearnRate: 1.0000e-03
           LearnRateSchedule: 'none'
         LearnRateDropFactor: 0.1000
         LearnRateDropPeriod: 10
            L2Regularization: 1.0000e-04
     GradientThresholdMethod: 'l2norm'
           GradientThreshold: Inf
                   MaxEpochs: 2
               MiniBatchSize: 512
                     Verbose: 0
            VerboseFrequency: 50
              ValidationData: [1×1 matlab.io.datastore.ImageDatastore]
         ValidationFrequency: 3
          ValidationPatience: Inf
                     Shuffle: 'every-epoch'
              CheckpointPath: ''
        ExecutionEnvironment: 'auto'
                  WorkerLoad: []
                   OutputFcn: []
                       Plots: 'training-progress'
              SequenceLength: 'longest'
        SequencePaddingValue: 0
    SequencePaddingDirection: 'right'
        DispatchInBackground: 0
     ResetInputNormalization: 1
```

As you can see, there are many tunable parameters. Here is a breakdown of the main ones:

- solverName: the optimizer; one of 'sgdm', 'rmsprop', 'adam'
- Momentum: momentum, a value in [0,1]
- InitialLearnRate: the initial learning rate
- LearnRateSchedule: the learning-rate schedule, 'none' or 'piecewise'; 'none' keeps the rate constant, while 'piecewise' drops it in steps
- LearnRateDropFactor: the drop factor, a value in [0,1]; after each drop, the new rate is the current rate multiplied by this factor
- LearnRateDropPeriod: the drop period, i.e. the number of epochs between learning-rate drops
- L2Regularization: the L2 regularization factor
- GradientThresholdMethod: the method used to clip gradients that exceed the threshold; one of 'l2norm', 'global-l2norm', 'absolute-value'
- GradientThreshold: the gradient threshold; gradients above it are handled by the method set in GradientThresholdMethod
- MaxEpochs: the maximum number of training epochs, a positive integer; default 20
- MiniBatchSize: the batch size, i.e. the amount of data used per iteration; a positive integer
- Verbose: whether to print live training progress in the command window (if enabled, the command window shows what training is currently doing); default true

In practice, only a handful of these actually need tuning; the rest can usually be left at their defaults. The code above shows the ones used most often. For more detail, see the official documentation.
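To make the piecewise schedule and gradient clipping described above concrete, here is a small sketch (the specific values 1e-2, 0.5, 5, and 1 are arbitrary illustrations, not recommendations): the learning rate starts at 1e-2 and is multiplied by 0.5 every 5 epochs, and any gradient whose L2 norm exceeds 1 is clipped.

```matlab
% Sketch only: illustrates the LearnRateSchedule and GradientThreshold
% options discussed above, with arbitrary example values.
opts = trainingOptions('sgdm', ...
    'InitialLearnRate',1e-2, ...         % starting learning rate
    'LearnRateSchedule','piecewise', ... % drop the rate in steps
    'LearnRateDropFactor',0.5, ...       % new rate = current rate * 0.5
    'LearnRateDropPeriod',5, ...         % drop once every 5 epochs
    'GradientThresholdMethod','l2norm', ...
    'GradientThreshold',1, ...           % clip gradients with L2 norm > 1
    'MaxEpochs',20);
% With these settings the learning rate is 1e-2 for epochs 1-5,
% 5e-3 for epochs 6-10, 2.5e-3 for 11-15, and 1.25e-3 for 16-20.
```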