[Lec1] Neural Networks & Deep Learning
This post covers Week 2 of 'Neural Networks & Deep Learning', the first course of the Deep Learning Specialization.
Week 2 Topics
- Forward propagation and backpropagation
- Logistic regression
- Cost function
- Gradient descent
Goal
- Build a logistic regression model structured as a shallow neural network
- Build the general architecture of a learning algorithm, including parameter initialization, cost function and gradient calculation, and optimization implementation (gradient descent)
- Implement computationally efficient and highly vectorized versions of models
- Compute derivatives for logistic regression, using a backpropagation mindset
- Use NumPy functions and NumPy matrix/vector operations
- Work with IPython Notebooks
- Implement vectorization across multiple training examples
Logistic Regression
- Why? - Why do we use logistic regression?
- What? - What is logistic regression?
- How? - How do we implement it?
- Answer
  - Why?
    - Logistic regression was devised to solve supervised learning problems, specifically binary classification.
  - What?
    - What is the goal of logistic regression?
    - Given a problem of classifying into 0 and 1 (y ∈ {0, 1}), predict the class: cat vs. not cat, spam vs. not spam, infected vs. not infected, and so on.
    - Reduce the error between the training labels and the model's predictions.
  - How?
    - Wrap a linear function in a sigmoid so that the output always lands between 0 and 1 (see the sketch after this list).
    - Why sigmoid? Its output lies in (0, 1), so it can be read as the probability that y = 1.
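A minimal sketch of that idea in NumPy (the weights, bias, and input below are arbitrary illustrative values, not from the course):

```python
import numpy as np

def sigmoid(z):
    """Squash any real number into the interval (0, 1)."""
    return 1.0 / (1.0 + np.exp(-z))

w = np.array([0.5, -0.3])   # example weights (arbitrary)
b = 0.1                     # example bias (arbitrary)
x = np.array([1.0, 2.0])    # one input example

z = np.dot(w, x) + b        # linear part: can be any real number
y_hat = sigmoid(z)          # prediction: always in (0, 1)
print(y_hat)                # 0.5 here, read as P(y = 1 | x)
```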
Logistic Regression Cost Function
What is the difference between the cost function and the loss function for logistic regression?
→ You should be able to explain the difference between the Loss Function and the Cost Function.
The key difference between the two is whether the gap between the true value and the prediction is measured on a single data point or over the entire training set (see the formulas below). In summary:
- The Loss Function is computed on a single training example,
- while the Cost Function is the average of that loss over the entire training set.
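In formulas, as defined in the lecture: the cross-entropy loss for one example, and the cost as its average over all m training examples.

$$\mathcal{L}(\hat{y}, y) = -\bigl(y \log \hat{y} + (1 - y)\log(1 - \hat{y})\bigr)$$

$$J(w, b) = \frac{1}{m} \sum_{i=1}^{m} \mathcal{L}\bigl(\hat{y}^{(i)}, y^{(i)}\bigr)$$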
From the lecture: "So the terminology I'm going to use is that the loss function is applied to just a single training example like so. And the cost function is the cost of your parameters. So in training your logistic regression model, we're going to try to find parameters w and b that minimize the overall cost function J written at the bottom."
Derivatives
- Intuition for the derivative: think of it as the slope of the function.
- If the function is a straight line, the derivative is the same at every point; on a curve, the derivative can change depending on where you evaluate it (see the numeric check below).
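A quick numeric check of this, using the lecture's f(a) = 3a and f(a) = a² examples (the finite-difference helper is my own sketch, not from the course):

```python
def slope(f, a, h=1e-6):
    """Numerically approximate f'(a) with a small forward step h."""
    return (f(a + h) - f(a)) / h

line  = lambda a: 3 * a        # straight line: slope is 3 everywhere
curve = lambda a: a ** 2       # curve: slope changes with position

print(slope(line, 2), slope(line, 5))    # ~3.0 and ~3.0
print(slope(curve, 2), slope(curve, 5))  # ~4.0 and ~10.0, since f'(a) = 2a
```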
Logistic Regression Gradient Descent
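The lecture derives the gradients for one example on the computation graph; the key result is dz = a - y. A minimal sketch of a single update step (the data, initial weights, and learning rate here are my own illustrative choices):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

x = np.array([1.0, 2.0])   # one training example (arbitrary values)
y = 1.0                    # its label
w = np.zeros(2)
b = 0.0
alpha = 0.1                # learning rate (my choice)

# Forward pass
z = np.dot(w, x) + b
a = sigmoid(z)             # prediction y_hat

# Backward pass (derivatives from the lecture)
dz = a - y                 # dL/dz for the cross-entropy loss
dw = x * dz                # dL/dw
db = dz                    # dL/db

# One gradient descent step
w -= alpha * dw
b -= alpha * db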
Logistic Regression on m examples
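The course first writes this un-vectorized: loop over all m examples, accumulate the loss and gradients, then average at the end. A sketch with made-up toy data (two features, three examples):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy data, invented for illustration: m = 3 examples, n = 2 features
X = np.array([[1.0, 2.0], [0.5, -1.0], [2.0, 0.0]])
Y = np.array([1.0, 0.0, 1.0])
m, n = X.shape
w, b = np.zeros(n), 0.0

J, dw, db = 0.0, np.zeros(n), 0.0
for i in range(m):                       # explicit loop over examples
    z = np.dot(w, X[i]) + b
    a = sigmoid(z)
    J  += -(Y[i] * np.log(a) + (1 - Y[i]) * np.log(1 - a))
    dz  = a - Y[i]
    dw += X[i] * dz                      # accumulate per-feature gradients
    db += dz
J, dw, db = J / m, dw / m, db / m        # average over the training set
```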
More Vectorization Examples
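The recurring advice in this part of the course: whenever you are tempted to write a for-loop, check whether a NumPy built-in does it in one call. A couple of sketches:

```python
import numpy as np

v = np.random.rand(1000)

# Element-wise exponential: loop version vs. vectorized version
u_loop = np.zeros_like(v)
for i in range(len(v)):
    u_loop[i] = np.exp(v[i])
u_vec = np.exp(v)                        # one call, same result, far faster

# Matrix-vector product: one call instead of a double for-loop
A = np.random.rand(100, 1000)
u = A @ v

# Other element-wise helpers mentioned in the lecture
np.log(v); np.abs(v); np.maximum(v, 0)   # log, |v|, element-wise max
```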
Vectorizing Logistic Regression
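Stacking the m examples as columns of a matrix X removes the loop over examples entirely: one matrix product computes every prediction, and another computes every gradient. A sketch with random placeholder data (shapes follow the lecture's column-per-example convention):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

n, m = 2, 100                          # features, examples (arbitrary sizes)
X = np.random.randn(n, m)              # each column is one example
Y = (np.random.rand(1, m) > 0.5) * 1.0 # random 0/1 labels for illustration
w, b = np.zeros((n, 1)), 0.0

# Forward pass for all m examples at once
Z = np.dot(w.T, X) + b                 # shape (1, m); b is broadcast
A = sigmoid(Z)                         # predictions for every example

# Backward pass, also fully vectorized
dZ = A - Y                             # shape (1, m)
dw = np.dot(X, dZ.T) / m               # shape (n, 1)
db = np.sum(dZ) / m                    # scalar

# One gradient descent step
alpha = 0.01
w -= alpha * dw
b -= alpha * db
```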
Code: Steve-YJ/Data-Science-scratch, Learning Data Science from scratch (github.com)
Reference
- Deep Learning Specialization, deeplearning.ai (www.coursera.org)
✔ End! -20.11.03.Tue- :)
✔ Update! -20.11.15.Sun am 7:00- :)