1. Neurons
1-2. Artificial Neurons
- In 1943, Warren McCulloch and Walter Pitts published a simplified concept of the brain cell
- They described the nerve cell as a simple logic gate with binary output
- An artificial neuron is a mathematical function based on this biological model: each neuron receives its inputs, multiplies each input by an individual weight, sums the results, and passes the sum through a nonlinear function to produce an output (see the sketch below)
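A minimal sketch of that computation; the input, weight, and bias values are illustrative assumptions, not part of the original post:
In [ ]:
import torch

# One artificial neuron: multiply each input by its weight, sum the
# results, then squash the sum with a nonlinear function (here, a sigmoid).
x = torch.tensor([0.5, -1.0, 2.0])   # inputs (illustrative values)
w = torch.tensor([0.8, 0.2, -0.4])   # one weight per input
b = torch.tensor(0.1)                # bias

z = torch.dot(w, x) + b              # weighted sum: w·x + b = -0.5
output = torch.sigmoid(z)            # nonlinear activation
print(output)                        # tensor(0.3775)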
2. Perceptron
- The most basic form of an artificial neural network, first introduced by Frank Rosenblatt in 1957
- Based on a single-neuron model with inputs and an output
- Originally designed as one of the early machine learning algorithms for solving binary classification problems (a minimal sketch follows this list)
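A minimal sketch of the classic perceptron as a binary classifier: a step activation rather than a sigmoid, with hand-picked (assumed) weights that happen to implement logical AND:
In [ ]:
import torch

def perceptron(x, w, b):
    # Step activation: output 1 if the weighted sum crosses 0, else 0
    return (torch.dot(w, x) + b > 0).float()

w = torch.tensor([1.0, 1.0])    # assumed weights
b = torch.tensor(-1.5)          # assumed bias (acts as the threshold)
for x in [[0.0, 0.0], [0.0, 1.0], [1.0, 0.0], [1.0, 1.0]]:
    print(x, '->', int(perceptron(torch.tensor(x), w, b)))
# only [1.0, 1.0] -> 1, matching logical AND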
2-1. Solving the AND Problem with Logistic Regression (a Single-Layer Perceptron)
In [1]:
import torch
import torch.nn as nn
import torch.optim as optim
In [3]:
X = torch.FloatTensor([[0, 0], [0, 1], [1, 0], [1, 1]])
y = torch.FloatTensor([[0], [0], [0], [1]])

# Single-layer perceptron: one linear layer followed by a sigmoid
model = nn.Sequential(
    nn.Linear(2, 1),
    nn.Sigmoid()
)

optimizer = optim.SGD(model.parameters(), lr=1)
criterion = nn.BCELoss()

epochs = 1000
for epoch in range(epochs + 1):
    y_pred = model(X)
    loss = criterion(y_pred, y)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    if epoch % 100 == 0:
        y_bool = (y_pred >= 0.5).float()
        accuracy = (y == y_bool).float().sum() / len(y) * 100
        print(f'epoch: {epoch:4d}/{epochs} Loss: {loss:.6f} Accuracy: {accuracy:.2f}%')
epoch:    0/1000 Loss: 0.581481 Accuracy: 75.00%
epoch:  100/1000 Loss: 0.139236 Accuracy: 100.00%
epoch:  200/1000 Loss: 0.080120 Accuracy: 100.00%
epoch:  300/1000 Loss: 0.055768 Accuracy: 100.00%
epoch:  400/1000 Loss: 0.042592 Accuracy: 100.00%
epoch:  500/1000 Loss: 0.034378 Accuracy: 100.00%
epoch:  600/1000 Loss: 0.028783 Accuracy: 100.00%
epoch:  700/1000 Loss: 0.024735 Accuracy: 100.00%
epoch:  800/1000 Loss: 0.021674 Accuracy: 100.00%
epoch:  900/1000 Loss: 0.019279 Accuracy: 100.00%
epoch: 1000/1000 Loss: 0.017357 Accuracy: 100.00%
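To sanity-check the trained model, we can compare its raw sigmoid outputs against the 0.5 threshold for all four input pairs (a usage sketch, assuming the cells above have just been run):
In [ ]:
with torch.no_grad():                       # no gradients needed for inference
    probs = model(X)                        # sigmoid outputs in (0, 1)
    preds = (probs >= 0.5).float()          # threshold at 0.5
print(torch.cat([X, probs, preds], dim=1))  # columns: x1, x2, probability, prediction
# for AND, only the row [1, 1] should cross the threshold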
2-2. Solving the OR Problem with Logistic Regression (a Single-Layer Perceptron)
In [4]:
X = torch.FloatTensor([[0, 0], [0, 1], [1, 0], [1, 1]])
y = torch.FloatTensor([[0], [1], [1], [1]])

# Same single-layer model, now trained on OR labels
model = nn.Sequential(
    nn.Linear(2, 1),
    nn.Sigmoid()
)

optimizer = optim.SGD(model.parameters(), lr=1)
criterion = nn.BCELoss()

epochs = 1000
for epoch in range(epochs + 1):
    y_pred = model(X)
    loss = criterion(y_pred, y)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    if epoch % 100 == 0:
        y_bool = (y_pred >= 0.5).float()
        accuracy = (y == y_bool).float().sum() / len(y) * 100
        print(f'epoch: {epoch:4d}/{epochs} Loss: {loss:.6f} Accuracy: {accuracy:.2f}%')
epoch:    0/1000 Loss: 0.977533 Accuracy: 25.00%
epoch:  100/1000 Loss: 0.087511 Accuracy: 100.00%
epoch:  200/1000 Loss: 0.046364 Accuracy: 100.00%
epoch:  300/1000 Loss: 0.031188 Accuracy: 100.00%
epoch:  400/1000 Loss: 0.023405 Accuracy: 100.00%
epoch:  500/1000 Loss: 0.018697 Accuracy: 100.00%
epoch:  600/1000 Loss: 0.015550 Accuracy: 100.00%
epoch:  700/1000 Loss: 0.013302 Accuracy: 100.00%
epoch:  800/1000 Loss: 0.011618 Accuracy: 100.00%
epoch:  900/1000 Loss: 0.010310 Accuracy: 100.00%
epoch: 1000/1000 Loss: 0.009264 Accuracy: 100.00%
2-3. Solving the XOR Problem with a Multilayer Perceptron
- XOR is not linearly separable: no single line separates {(0, 1), (1, 0)} from {(0, 0), (1, 1)}, so a single linear layer followed by a sigmoid cannot solve it (a quick sketch of that failure follows)
- Stacking several linear layers with nonlinear activations between them, a multilayer perceptron, gives the network the capacity to learn the nonlinear decision boundary XOR requires
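A quick sketch (not in the original post) of the single-layer failure: retraining the same one-layer model on XOR labels. Because a line can put at most 3 of the 4 XOR points on the correct side, accuracy should stay at 75% or below no matter how long it trains. It reuses the imports from In [1]:
In [ ]:
X = torch.FloatTensor([[0, 0], [0, 1], [1, 0], [1, 1]])
y = torch.FloatTensor([[0], [1], [1], [0]])   # XOR labels

# Same architecture as the AND/OR models above
single = nn.Sequential(nn.Linear(2, 1), nn.Sigmoid())
optimizer = optim.SGD(single.parameters(), lr=1)
criterion = nn.BCELoss()

for epoch in range(1001):
    loss = criterion(single(X), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

accuracy = ((single(X) >= 0.5).float() == y).float().mean() * 100
print(f'single-layer accuracy on XOR: {accuracy:.2f}%')   # at most 75%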
In [6]:
# A deeper network: sigmoid activations between the linear layers provide
# the nonlinearity that the XOR problem requires
model = nn.Sequential(
    nn.Linear(2, 64),
    nn.Sigmoid(),
    nn.Linear(64, 32),
    nn.Sigmoid(),
    nn.Linear(32, 16),
    nn.Sigmoid(),
    nn.Linear(16, 1),
    nn.Sigmoid()
)
print(model)
Sequential(
  (0): Linear(in_features=2, out_features=64, bias=True)
  (1): Sigmoid()
  (2): Linear(in_features=64, out_features=32, bias=True)
  (3): Sigmoid()
  (4): Linear(in_features=32, out_features=16, bias=True)
  (5): Sigmoid()
  (6): Linear(in_features=16, out_features=1, bias=True)
  (7): Sigmoid()
)
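As a quick check of the printed structure, the per-layer parameter counts can be tallied; the four Linear layers hold 192 + 2,080 + 528 + 17 = 2,817 trainable parameters in total:
In [ ]:
# Weight and bias shapes, and the parameter count per layer
for name, p in model.named_parameters():
    print(name, tuple(p.shape), p.numel())
print('total:', sum(p.numel() for p in model.parameters()))   # 2817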
In [7]:
X = torch.FloatTensor([[0, 0], [0, 1], [1, 0], [1, 1]])
y = torch.FloatTensor([[0], [1], [1], [0]])

optimizer = optim.SGD(model.parameters(), lr=1)
criterion = nn.BCELoss()

epochs = 5000
for epoch in range(epochs + 1):
    y_pred = model(X)
    loss = criterion(y_pred, y)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()

    if epoch % 100 == 0:
        y_bool = (y_pred >= 0.5).float()
        accuracy = (y == y_bool).float().sum() / len(y) * 100
        print(f'epoch: {epoch:4d}/{epochs} Loss: {loss:.6f} Accuracy: {accuracy:.2f}%')
epoch:    0/5000 Loss: 0.711879 Accuracy: 50.00%
epoch:  100/5000 Loss: 0.693142 Accuracy: 50.00%
epoch:  200/5000 Loss: 0.693138 Accuracy: 50.00%
epoch:  300/5000 Loss: 0.693134 Accuracy: 50.00%
epoch:  400/5000 Loss: 0.693129 Accuracy: 50.00%
epoch:  500/5000 Loss: 0.693125 Accuracy: 75.00%
epoch:  600/5000 Loss: 0.693120 Accuracy: 75.00%
epoch:  700/5000 Loss: 0.693115 Accuracy: 75.00%
epoch:  800/5000 Loss: 0.693110 Accuracy: 75.00%
epoch:  900/5000 Loss: 0.693104 Accuracy: 75.00%
epoch: 1000/5000 Loss: 0.693097 Accuracy: 75.00%
epoch: 1100/5000 Loss: 0.693090 Accuracy: 75.00%
epoch: 1200/5000 Loss: 0.693082 Accuracy: 50.00%
epoch: 1300/5000 Loss: 0.693073 Accuracy: 50.00%
epoch: 1400/5000 Loss: 0.693063 Accuracy: 50.00%
epoch: 1500/5000 Loss: 0.693051 Accuracy: 50.00%
epoch: 1600/5000 Loss: 0.693038 Accuracy: 50.00%
epoch: 1700/5000 Loss: 0.693022 Accuracy: 50.00%
epoch: 1800/5000 Loss: 0.693003 Accuracy: 50.00%
epoch: 1900/5000 Loss: 0.692981 Accuracy: 50.00%
epoch: 2000/5000 Loss: 0.692955 Accuracy: 50.00%
epoch: 2100/5000 Loss: 0.692922 Accuracy: 50.00%
epoch: 2200/5000 Loss: 0.692882 Accuracy: 50.00%
epoch: 2300/5000 Loss: 0.692831 Accuracy: 50.00%
epoch: 2400/5000 Loss: 0.692766 Accuracy: 50.00%
epoch: 2500/5000 Loss: 0.692681 Accuracy: 50.00%
epoch: 2600/5000 Loss: 0.692564 Accuracy: 50.00%
epoch: 2700/5000 Loss: 0.692401 Accuracy: 50.00%
epoch: 2800/5000 Loss: 0.692159 Accuracy: 50.00%
epoch: 2900/5000 Loss: 0.691778 Accuracy: 50.00%
epoch: 3000/5000 Loss: 0.691128 Accuracy: 50.00%
epoch: 3100/5000 Loss: 0.689865 Accuracy: 50.00%
epoch: 3200/5000 Loss: 0.686881 Accuracy: 50.00%
epoch: 3300/5000 Loss: 0.676731 Accuracy: 50.00%
epoch: 3400/5000 Loss: 0.697920 Accuracy: 50.00%
epoch: 3500/5000 Loss: 0.632354 Accuracy: 50.00%
epoch: 3600/5000 Loss: 0.426935 Accuracy: 75.00%
epoch: 3700/5000 Loss: 0.011883 Accuracy: 100.00%
epoch: 3800/5000 Loss: 0.004427 Accuracy: 100.00%
epoch: 3900/5000 Loss: 0.002612 Accuracy: 100.00%
epoch: 4000/5000 Loss: 0.001822 Accuracy: 100.00%
epoch: 4100/5000 Loss: 0.001386 Accuracy: 100.00%
epoch: 4200/5000 Loss: 0.001113 Accuracy: 100.00%
epoch: 4300/5000 Loss: 0.000926 Accuracy: 100.00%
epoch: 4400/5000 Loss: 0.000791 Accuracy: 100.00%
epoch: 4500/5000 Loss: 0.000688 Accuracy: 100.00%
epoch: 4600/5000 Loss: 0.000609 Accuracy: 100.00%
epoch: 4700/5000 Loss: 0.000545 Accuracy: 100.00%
epoch: 4800/5000 Loss: 0.000493 Accuracy: 100.00%
epoch: 4900/5000 Loss: 0.000449 Accuracy: 100.00%
epoch: 5000/5000 Loss: 0.000413 Accuracy: 100.00%
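As with the single-layer models, the trained network's outputs can be inspected for the four XOR inputs (a usage sketch, assuming the cell above has just been run):
In [ ]:
with torch.no_grad():
    probs = model(X)                    # sigmoid outputs in (0, 1)
    preds = (probs >= 0.5).float()      # threshold at 0.5
for xi, pi, yi in zip(X, probs, preds):
    print(xi.tolist(), f'{pi.item():.4f}', '->', int(yi.item()))
# expected predictions: 0, 1, 1, 0 (XOR)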