Convergence Proof for the Perceptron Algorithm
Michael Collins

Figure 1 shows the perceptron learning algorithm, as described in lecture. In this note we give a convergence proof for the algorithm (also covered in lecture). The convergence theorem is as follows:

Theorem 1. Assume that there exists some parameter vector $\theta^*$ such that $||\theta^*|| = 1$, and some $\gamma > 0$ such that for all $t = 1 \ldots n$,

$$y_t (x_t \cdot \theta^*) \geq \gamma$$

Assume in addition that for all $t = 1 \ldots n$, $||x_t|| \leq R$. Then the perceptron algorithm makes at most $\frac{R^2}{\gamma^2}$ errors. (The definition of an error is as follows: an error occurs whenever we have $y' \neq y_t$ for some $(j, t)$ pair in the algorithm.)

Note that for any vector $x$, we use $||x||$ to refer to the Euclidean norm of $x$, i.e., $||x|| = \sqrt{\sum_i x_i^2}$.

Proof: First, define $\theta^k$ to be the parameter vector when the algorithm makes its $k$'th error. Note that we have

$$\theta^1 = 0$$

Next, assuming the $k$'th error is made on example $t$, we have

$$
\begin{aligned}
\theta^{k+1} \cdot \theta^* &= (\theta^k + y_t x_t) \cdot \theta^* &\quad (1)\\
&= \theta^k \cdot \theta^* + y_t x_t \cdot \theta^* &\quad (2)\\
&\geq \theta^k \cdot \theta^* + \gamma &\quad (3)
\end{aligned}
$$

Eq. 1 follows by the definition of the perceptron updates. Eq. 3 follows because, by the assumptions of the theorem, we have $y_t x_t \cdot \theta^* \geq \gamma$.
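
Figure 1 (the algorithm itself) is referenced above but not reproduced in this extract. For concreteness, the following is a minimal sketch of the standard mistake-driven perceptron in the note's notation ($\theta^1 = 0$, update $\theta \leftarrow \theta + y_t x_t$ on each error); the function name perceptron, the NumPy representation, and the fixed number of passes num_passes are illustrative assumptions rather than details taken from the note.

import numpy as np

def perceptron(X, y, num_passes=10):
    # X: array of shape (n, d), one example x_t per row.
    # y: array of shape (n,), labels y_t in {-1, +1}.
    n, d = X.shape
    theta = np.zeros(d)                 # theta^1 = 0, as in the proof
    mistakes = 0
    for j in range(num_passes):         # pass index j over the data
        for t in range(n):              # example index t
            # Treat x_t . theta = 0 as an error (a common convention),
            # so the very first example is always an error under theta^1 = 0.
            if y[t] * np.dot(X[t], theta) <= 0:   # prediction y' != y_t
                theta = theta + y[t] * X[t]       # perceptron update
                mistakes += 1
    return theta, mistakes

On data satisfying the theorem's assumptions (margin $\gamma$ under some unit-norm $\theta^*$ and $||x_t|| \leq R$), the mistakes counter returned by this sketch can never exceed $R^2/\gamma^2$, no matter how many passes are made, which is exactly what the theorem guarantees.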