9 Statistics and Neural Networks

9.1 Linear and nonlinear regression

Feed-forward networks are used to find the best functional fit for a set of input-output examples. Changes to the network weights allow fine-tuning of the network function in order to detect the optimal configuration. However, two complementary motivations determine our perception of what optimal means in this context. On the one hand we expect the network to map the known inputs as exactly as possible to the known outputs. But on the other hand the network must be capable of generalizing, that is, unknown inputs are to be compared to the known ones and the output produced is a kind of interpolation of learned values. However, good generalization and minimal reproduction error of the learned input-output pairs can become contradictory objectives.

9.1.1 The problem of good generalization

Figure 9.1 shows the problem from another perspective. The dots in the graphic represent the training set. We are looking for a function capable of mapping the known inputs into the known outputs. If linear approximation is used, as in the figure, the error is not excessive and new unknown values of the input x are mapped to the regression line. Figure 9.2 shows another kind of functional approximation using linear splines which can reproduce the training set without error. However, when the training set consists of experimental points, normally there is some noise in the data. Reproducing the training set exactly is not the best strategy, because the noise will also be reproduced. A linear approximation as in Figure 9.1 could be a better alternative than the exact fit of the training data shown in Figure 9.2. This simple example illustrates the two contradictory objectives of functional approximation: minimization of the training error but also minimization of the error of yet unknown inputs. Whether or not the training set can be

R. Rojas: Neural Networks, Springer-Verlag, Berlin, 1996
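The trade-off described above can be made concrete with a small numerical experiment. The following Python sketch is not from the original text; the underlying "true" function, the noise level, and the sample sizes are assumptions chosen only to make the effect visible. It compares a linear least-squares fit (as in Figure 9.1) with an exact piecewise-linear interpolation of the training points (as in Figure 9.2), measuring the error on the training set and on previously unseen inputs.

    # Illustrative sketch (not from Rojas): linear regression vs. exact
    # linear-spline interpolation of noisy samples. All constants below
    # (true slope/intercept, noise scale, sample sizes) are assumptions.
    import numpy as np

    rng = np.random.default_rng(0)

    def true_function(x):
        return 0.5 * x + 1.0  # assumed noise-free underlying relationship

    # Training set: noisy observations of the true function
    x_train = np.linspace(0.0, 10.0, 15)
    y_train = true_function(x_train) + rng.normal(scale=0.5, size=x_train.shape)

    # Test set: unseen inputs, compared against the noise-free function
    x_test = rng.uniform(0.0, 10.0, size=200)
    y_test = true_function(x_test)

    # Model 1: linear regression (Figure 9.1 style), two free parameters
    slope, intercept = np.polyfit(x_train, y_train, deg=1)
    lin_train_mse = np.mean((slope * x_train + intercept - y_train) ** 2)
    lin_test_mse = np.mean((slope * x_test + intercept - y_test) ** 2)

    # Model 2: linear spline through every training point (Figure 9.2 style);
    # it reproduces the training set exactly, noise included
    spl_train_mse = np.mean((np.interp(x_train, x_train, y_train) - y_train) ** 2)
    spl_test_mse = np.mean((np.interp(x_test, x_train, y_train) - y_test) ** 2)

    print(f"linear fit   : train MSE = {lin_train_mse:.3f}, test MSE = {lin_test_mse:.3f}")
    print(f"linear spline: train MSE = {spl_train_mse:.3f}, test MSE = {spl_test_mse:.3f}")

Under these assumptions the spline achieves zero training error but typically a larger test error than the linear fit, because it reproduces the noise in the training data rather than the underlying function.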
