
๋ชฉ๋ก์ตœ์ ํ™” (6)

DATA101

[๋”ฅ๋Ÿฌ๋‹] Grid Search, Random Search, Bayesian Optimization

๐Ÿ‘จ‍๐Ÿ’ป Introduction: This post looks at three hyperparameter optimization methods used in deep learning: Grid Search, Random Search, and Bayesian Optimization. ๐Ÿ“š Contents 1. Grid Search 2. Random Search 3. Bayesian Optimization 1. Grid Search Grid Search is a technique that finds the best hyperparameter by varying it at fixed intervals. As in Figure 1 below, suppose the horizontal axis is a hyperparameter, the vertical axis is the objective function, and we need to find the hyperparameter value that maximizes the objective function. Grid search exhaustively changes the hyperparameter by a fixed step within a given range and compares the resulting outputs..
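The idea in the excerpt translates directly into a few lines of code. Below is a minimal Python sketch of grid search, assuming a hypothetical objective function and candidate values (none of these come from the post): every combination in the grid is evaluated exhaustively and the best-scoring one is kept.

```python
import itertools

def objective(learning_rate, batch_size):
    # Hypothetical stand-in for a validation metric obtained after training a model.
    return -((learning_rate - 0.01) ** 2) - ((batch_size - 64) ** 2) / 1e4

# Candidate values placed at fixed intervals for each hyperparameter.
grid = {
    "learning_rate": [0.001, 0.01, 0.1],
    "batch_size": [32, 64, 128],
}

best_score, best_params = float("-inf"), None
for lr, bs in itertools.product(grid["learning_rate"], grid["batch_size"]):
    score = objective(lr, bs)  # evaluate every combination exhaustively
    if score > best_score:
        best_score, best_params = score, {"learning_rate": lr, "batch_size": bs}

print(best_params, best_score)
```

The number of evaluations grows multiplicatively with each added hyperparameter, which is the main practical drawback compared with Random Search and Bayesian Optimization.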

[Deep Learning] ์ตœ์ ํ™”(Optimizer): (4) Adam

1. ๊ฐœ๋…Adaptive Moment Estimation(Adam)์€ ๋”ฅ๋Ÿฌ๋‹ ์ตœ์ ํ™” ๊ธฐ๋ฒ• ์ค‘ ํ•˜๋‚˜๋กœ์จ Momentum๊ณผ RMSProp์˜ ์žฅ์ ์„ ๊ฒฐํ•ฉํ•œ ์•Œ๊ณ ๋ฆฌ์ฆ˜์ž…๋‹ˆ๋‹ค. ์ฆ‰, ํ•™์Šต์˜ ๋ฐฉํ–ฅ๊ณผ ํฌ๊ธฐ(=Learning rate)๋ฅผ ๋ชจ๋‘ ๊ฐœ์„ ํ•œ ๊ธฐ๋ฒ•์œผ๋กœ ๋”ฅ๋Ÿฌ๋‹์—์„œ ๊ฐ€์žฅ ๋งŽ์ด ์‚ฌ์šฉ๋˜์–ด "์˜ค๋˜" ์ตœ์ ํ™” ๊ธฐ๋ฒ•์œผ๋กœ ์•Œ๋ ค์ ธ ์žˆ์Šต๋‹ˆ๋‹ค. ์ตœ๊ทผ์—๋Š” RAdam, AdamW๊ณผ ๊ฐ™์ด ๋”์šฑ ์šฐ์ˆ˜ํ•œ ์„ฑ๋Šฅ์„ ๋ณด์ด๋Š” ์ตœ์ ํ™” ๊ธฐ๋ฒ•์ด ์ œ์•ˆ๋˜์—ˆ์ง€๋งŒ, ๋ณธ ํฌ์ŠคํŒ…์—์„œ๋Š” ๋”ฅ๋Ÿฌ๋‹ ๋ถ„์•ผ ์ „๋ฐ˜์„ ๊ณต๋ถ€ํ•˜๋Š” ๋งˆ์Œ๊ฐ€์ง์œผ๋กœ Adam์— ๋Œ€ํ•ด ์•Œ์•„๋ด…๋‹ˆ๋‹ค.2. ์ˆ˜์‹์ˆ˜์‹๊ณผ ํ•จ๊ป˜ Adam์— ๋Œ€ํ•ด ์ž์„ธํžˆ ์•Œ์•„๋ณด๊ฒ ์Šต๋‹ˆ๋‹ค. $$ m_{t} = \beta_{1} m_{t-1} + (1 - \beta_{1}) \nabla f(x_{t-1}) $$$$ g_{t} = \beta_{..

[Deep Learning] ์ตœ์ ํ™”(Optimizer): (3) RMSProp

1. ๊ฐœ๋…RMSProp๋Š” ๋”ฅ๋Ÿฌ๋‹ ์ตœ์ ํ™” ๊ธฐ๋ฒ• ์ค‘ ํ•˜๋‚˜๋กœ์จ Root Mean Sqaure Propagation์˜ ์•ฝ์ž๋กœ, ์•Œ์— ์—์Šคํ”„๋กญ(R.M.S.Prop)์ด๋ผ๊ณ  ์ฝ์Šต๋‹ˆ๋‹ค.โœ‹๋“ฑ์žฅ๋ฐฐ๊ฒฝ์ตœ์ ํ™” ๊ธฐ๋ฒ• ์ค‘ ํ•˜๋‚˜์ธ AdaGrad๋Š” ํ•™์Šต์ด ์ง„ํ–‰๋  ๋•Œ ํ•™์Šต๋ฅ (Learning rate)์ด ๊พธ์ค€ํžˆ ๊ฐ์†Œํ•˜๋‹ค ๋‚˜์ค‘์—๋Š” \(0\)์œผ๋กœ ์ˆ˜๋ ดํ•˜์—ฌ ํ•™์Šต์ด ๋” ์ด์ƒ ์ง„ํ–‰๋˜์ง€ ์•Š๋Š”๋‹ค๋Š” ํ•œ๊ณ„๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. RMSProp์€ ์ด๋Ÿฌํ•œ ํ•œ๊ณ„์ ์„ ๋ณด์™„ํ•œ ์ตœ์ ํ™” ๊ธฐ๋ฒ•์œผ๋กœ์จ ์ œํ”„๋ฆฌ ํžŒํŠผ ๊ต์ˆ˜๊ฐ€ Coursea ๊ฐ•์˜ ์ค‘์— ๋ฐœํ‘œํ•œ ์•Œ๊ณ ๋ฆฌ์ฆ˜์ž…๋‹ˆ๋‹ค.๐Ÿ›  ์›๋ฆฌRMSProp์€ AdaGrad์™€ ๋งˆ์ฐฌ๊ฐ€์ง€๋กœ ๋ณ€์ˆ˜(feature)๋ณ„๋กœ ํ•™์Šต๋ฅ ์„ ์กฐ์ ˆํ•˜๋˜ ๊ธฐ์šธ๊ธฐ ์—…๋ฐ์ดํŠธ ๋ฐฉ์‹์—์„œ ์ฐจ์ด๊ฐ€ ์žˆ์Šต๋‹ˆ๋‹ค. ์ด์ „ time step์—์„œ์˜ ๊ธฐ์šธ๊ธฐ๋ฅผ ๋‹จ์ˆœํžˆ ๊ฐ™์€ ๋น„์œจ๋กœ ๋ˆ„์ ํ•˜์ง€ ์•Š๊ณ  ์ง€์ˆ˜์ด๋™..

[Deep Learning] ์ตœ์ ํ™”(Optimizer): (2) AdaGrad

๐Ÿ“š Contents 1. Concept 2. Advantages 3. Disadvantages 1. Concept AdaGrad is a deep learning optimization technique whose name is short for Adaptive Gradient. Since each feature differs in importance and scale, applying the same learning rate to every feature is inefficient. AdaGrad was proposed from this point of view: it adjusts the learning rate adaptively, i.e., differently, for each feature. AdaGrad can be written as follows. $$ g_{t} = g_{t-1} + (\nabla f(x_{t-1}))^{2} $$ $$ x_{t} = x_{t-1} - \frac{\eta}{\sqrt{g_{t} + \epsi..
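Here is a minimal NumPy sketch of the update above on a toy two-feature loss; the cut-off denominator is implemented in the standard \(\sqrt{g_{t} + \epsilon}\) form, and the learning rate and loss are placeholder choices, not values from the post.

```python
import numpy as np

def adagrad_step(x, g, grad, lr=0.5, eps=1e-8):
    g = g + grad ** 2                      # accumulate squared gradients per feature
    x = x - lr * grad / np.sqrt(g + eps)   # larger accumulated gradients -> smaller steps for that feature
    return x, g

x, g = np.array([5.0, -3.0]), np.zeros(2)  # two features with different scales
for _ in range(1000):
    grad = 2 * x                           # gradient of the toy loss f(x) = x1^2 + x2^2
    x, g = adagrad_step(x, g, grad)
print(x)                                   # moves toward the minimum at (0, 0) with ever-shrinking steps
```

The ever-growing sum g is exactly what makes the effective learning rate shrink over long runs, the limitation the RMSProp post above sets out to fix.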

[Deep Learning] ์ตœ์ ํ™”(Optimizer): (1) Momentum

๋ณธ ํฌ์ŠคํŒ…์—์„œ๋Š” ๋”ฅ๋Ÿฌ๋‹ ์ตœ์ ํ™”(optimizer) ๊ธฐ๋ฒ• ์ค‘ ํ•˜๋‚˜์ธ Momentum์˜ ๊ฐœ๋…์— ๋Œ€ํ•ด ์•Œ์•„๋ด…๋‹ˆ๋‹ค. ๋จผ์ €, Momentum ๊ธฐ๋ฒ•์ด ์ œ์•ˆ๋œ ๋ฐฐ๊ฒฝ์ธ ๊ฒฝ์‚ฌ ํ•˜๊ฐ•๋ฒ•(Gradient Descent)์˜ ํ•œ๊ณ„์ ์— ๋Œ€ํ•ด ๋‹ค๋ฃจ๊ณ  ์•Œ์•„๋ณด๋„๋ก ํ•˜๊ฒ ์Šต๋‹ˆ๋‹ค.๐Ÿ“š ๋ชฉ์ฐจ1. ๊ฒฝ์‚ฌ ํ•˜๊ฐ•๋ฒ•์˜ ํ•œ๊ณ„ 1.1. Local Minimum ๋ฌธ์ œ 1.2. Saddle Point ๋ฌธ์ œ2. Momentum 2.1. ๊ฐœ๋… 2.2. ์ˆ˜์‹1. ๊ฒฝ์‚ฌ ํ•˜๊ฐ•๋ฒ•์˜ ํ•œ๊ณ„๊ฒฝ์‚ฌ ํ•˜๊ฐ•๋ฒ•(Gradient Descent)์€ ํฌ๊ฒŒ 2๊ฐ€์ง€ ํ•œ๊ณ„์ ์ด ์žˆ์Šต๋‹ˆ๋‹ค. ์ฒซ์งธ, Local Minimum์— ๋น ์ง€๊ธฐ ์‰ฝ๋‹ค๋Š” ์ . ๋‘˜์งธ, ์•ˆ์žฅ์ (Saddle point)๋ฅผ ๋ฒ—์–ด๋‚˜์ง€ ๋ชปํ•œ๋‹ค๋Š” ์ . ๊ฐ๊ฐ์— ๋Œ€ํ•ด ์•Œ์•„๋ด…๋‹ˆ๋‹ค.1.1. Local Minimum..

[Deep Learning] ์ตœ์ ํ™” ๊ฐœ๋…๊ณผ ๊ฒฝ์‚ฌ ํ•˜๊ฐ•๋ฒ•(Gradient Descent)

๐Ÿ“š Contents 1. The Concept of Optimization 2. The Concept of a Gradient 3. The Concept of Gradient Descent 4. Limitations of Gradient Descent 1. The Concept of Optimization In deep learning, optimization means finding the parameters that minimize the value of the loss function (see Figure 1 below). In deep learning, training data is fed through the network to obtain a prediction (\(\hat{y}\)). The loss function compares this prediction with the ground truth (\(y\)). In other words, optimization is the process of finding the network parameters (weights) that minimize the difference between the model's predictions and the actual values. There are many optimization techniques; this post covers gradient descent. 2. The Concept of a Gradient..
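For concreteness, here is a minimal NumPy sketch of plain gradient descent on a hypothetical one-dimensional loss; the loss, starting point, and learning rate are placeholders, not values from the post.

```python
import numpy as np

def loss(x):
    return (x - 3.0) ** 2        # toy loss with its minimum at x = 3

def grad(x):
    return 2 * (x - 3.0)         # derivative of the toy loss

x = np.array([0.0])              # initial parameter value
lr = 0.1                         # learning rate (step size)
for _ in range(100):
    x = x - lr * grad(x)         # step opposite to the gradient to decrease the loss
print(x, loss(x))                # converges to x = 3, where the loss is 0
```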