
[Winter Blog Challenge] How to Use TensorBoard

egahyun 2025. 6. 12. 19:26

HelloπŸ˜Š I'm GDGoC member Gahyun Lee.

 

In this post I'd like to introduce TensorBoard, a tool for monitoring the training process of a deep learning project at a glance.

I'll first introduce TensorBoard itself, then walk through one example of using it for deep learning and another for machine learning!


What is TensorBoard?

TensorBoard is a visualization tool provided by TensorFlow that displays the various logs produced during training, such as loss, accuracy, and gradients, as real-time graphs.

 

ν•™μŠ΅ μ§€ν‘œλ“€μ„ κ·Έλž˜ν”„λ‘œ 확인 ν•˜λ©΄μ„œ, ν•™μŠ΅μ΄ μ œλŒ€λ‘œ 이루어지고 μžˆλŠ”μ§€ 확인할 수 있고,

ν•˜μ΄νΌνŒŒλΌλ―Έν„° νŠœλ‹ ν›„, 이전 결과와 λΉ„κ΅ν•¨μœΌλ‘œμ¨ 효과적인 μ„€μ • 쑰합을 νƒμƒ‰ν•˜λŠ”λ° 도움을 μ€λ‹ˆλ‹€.

 

또, λͺ¨λΈ ꡬ쑰 및 λ‚΄λΆ€ 정보λ₯Ό 뢄석할 수 μžˆμ–΄, νŒŒλΌλ―Έν„°μ˜ 뢄포와 λ³€ν™”, gradient λ³€ν™”λ₯Ό 좔적할 수 μžˆμŠ΅λ‹ˆλ‹€.

λ˜ν•œ μ—°μ‚° κ·Έλž˜ν”„λ₯Ό μ‹œκ°ν™” ν•¨μœΌλ‘œμ¨ λͺ¨λΈ ꡬ쑰λ₯Ό νŒŒμ•…ν•  μˆ˜λ„ μžˆμŠ΅λ‹ˆλ‹€.

 

These days, many models are built on high-dimensional data such as images, audio, and text. TensorBoard can project such high-dimensional embeddings to visualize the relationships between data points, and you can log the resulting predictions as well.

 


Hands-on: TensorBoard with PyTorch - Deep Learning

For this exercise I loaded the Titanic dataset and defined a simple model.

 

Dataset

import torch
from torch.utils.data import Dataset, DataLoader
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import train_test_split

# df is the preprocessed Titanic DataFrame: 6 numeric feature columns plus "Survived"
X = df.drop("Survived", axis=1).values
y = df["Survived"].values
X = StandardScaler().fit_transform(X)

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

class TitanicDataset(Dataset):
    def __init__(self, X, y):
        self.X = torch.tensor(X, dtype=torch.float32)
        self.y = torch.tensor(y, dtype=torch.long)
    def __len__(self): return len(self.X)
    def __getitem__(self, idx): return self.X[idx], self.y[idx]

train_loader = DataLoader(TitanicDataset(X_train, y_train), batch_size=32, shuffle=True)
test_loader = DataLoader(TitanicDataset(X_test, y_test), batch_size=32)

 

 

λͺ¨λΈ μ •μ˜

import torch.nn as nn

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

class TitanicNet(nn.Module):
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, 16),   # 6 input features
            nn.ReLU(),
            nn.Linear(16, 2)    # 2 classes: survived / not survived
        )
    def forward(self, x): return self.net(x)

model = TitanicNet().to(device)

 

 

Setting up the SummaryWriter

: Create a writer by passing SummaryWriter the directory path where the logs should be stored.

from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter("runs/nn_titanic")

 

 

Logging

: Add writer.add_scalar() inside the training code to record the metrics you want.

criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=0.001)

for epoch in range(100):
    model.train()
    total_loss = 0
    correct = 0

    for X_batch, y_batch in train_loader:
        X_batch, y_batch = X_batch.to(device), y_batch.to(device)
        output = model(X_batch)
        loss = criterion(output, y_batch)

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()

        total_loss += loss.item()
        pred = output.argmax(1)
        correct += (pred == y_batch).sum().item()

    acc = correct / len(train_loader.dataset)
    
    # log metrics with the writer
    writer.add_scalar("Loss/train", total_loss / len(train_loader), epoch)
    writer.add_scalar("Accuracy/train", acc, epoch)
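Earlier I mentioned tracking parameter and gradient distributions; here is a small self-contained sketch of how that looks with add_histogram (the tiny model and log directory are illustrative, not part of the Titanic experiment):

```python
import torch
import torch.nn as nn
from torch.utils.tensorboard import SummaryWriter

# After a backward pass, log weight and gradient histograms per step;
# they show up in TensorBoard's "Histograms" and "Distributions" tabs.
model = nn.Linear(6, 2)
writer = SummaryWriter("runs/hist_demo")
loss = model(torch.randn(8, 6)).sum()
loss.backward()   # populate .grad on each parameter
for name, param in model.named_parameters():
    writer.add_histogram(f"Weights/{name}", param, 0)
    writer.add_histogram(f"Gradients/{name}", param.grad, 0)
writer.close()
```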

 

 

Closing the writer

: After training finishes, run the code below to finish writing the logs.

writer.close()

 

Running TensorBoard

: Run the commands below to view the logs!

%load_ext tensorboard
%tensorboard --logdir=runs

 

When you run it, you'll see something like this!


Hands-on: TensorBoard with LightGBM - Machine Learning

이번 μ‹€μŠ΅μ—μ„œλ„ titanic 데이터셋을 뢈러온 λ‹€μŒ λ¨Έμ‹ λŸ¬λ‹μ˜ LightCBM λͺ¨λΈμ„ μ‚¬μš©ν•˜μ—¬ μ˜ˆμΈ‘ν•  수 μžˆλ„λ‘ μ§„ν–‰ν•˜μ˜€μŠ΅λ‹ˆλ‹€.

λ¨Έμ‹ λŸ¬λ‹ λͺ¨λΈμ—μ„œλŠ” ν•™μŠ΅ 둜그 기둝이 μ €μ ˆλ‘œ λ„˜μ–΄κ°€μ§„ μ•Šμ§€λ§Œ 둜그 기둝을 κ΅¬ν˜„ν•˜μ—¬ λ™μΌν•˜κ²Œ μ‹œκ°ν™”ν•  수 μžˆμŠ΅λ‹ˆλ‹€!

 

Dataset

import lightgbm as lgb

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=42)

train_data = lgb.Dataset(X_train, label=y_train)
valid_data = lgb.Dataset(X_test, label=y_test)

 

 

Setting up the SummaryWriter

: Create a writer by passing SummaryWriter the directory path where the logs should be stored.

writer = SummaryWriter("runs/lgbm_titanic")  # separate log dir so these logs don't mix with the NN run

 

 

Defining a callback function for logging

def tb_callback(env):
    epoch = env.iteration
    # each entry of evaluation_result_list is (dataset_name, metric_name, value, is_higher_better)
    train_loss = env.evaluation_result_list[0][2]
    val_loss = env.evaluation_result_list[1][2]
    writer.add_scalar("Loss/train", train_loss, epoch)
    writer.add_scalar("Loss/valid", val_loss, epoch)
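If you later add more valid_sets, indexing positions 0 and 1 becomes fragile. A slightly more general variant (a sketch, with its own illustrative writer so it stands alone) loops over every entry instead:

```python
from torch.utils.tensorboard import SummaryWriter

writer = SummaryWriter("runs/lgbm_generic_demo")   # illustrative log dir

def tb_callback_generic(env):
    # each entry is (dataset_name, metric_name, value, is_higher_better)
    for data_name, metric_name, value, _ in env.evaluation_result_list:
        writer.add_scalar(f"{metric_name}/{data_name}", value, env.iteration)
```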

 

 

λͺ¨λΈ μ •μ˜

: λͺ¨λΈ train μ½”λ“œ μ•ˆμ— 콜백 ν•¨μˆ˜λ₯Ό λ„£μ–΄, 둜그 기둝을 κ°€λŠ₯토둝 ν•©λ‹ˆλ‹€.

## λͺ¨λΈ νŒŒλΌλ―Έν„°
params = {
    "objective": "binary",
    "metric": "binary_logloss",
    "learning_rate": 0.1,
    "verbosity": -1
}

model = lgb.train(
    params,
    train_data,
    num_boost_round=100,
    valid_sets=[train_data, valid_data],
    valid_names=["train", "valid"],
    ## callbacks
    callbacks=[
        tb_callback,
        lgb.log_evaluation(period=1)
    ]
)

 

 

Logging predictions

: To record metrics computed from the model's predictions, call model.predict and then log the result; this lets you check the test results too!

from sklearn.metrics import accuracy_score

y_pred = model.predict(X_test)
acc = accuracy_score(y_test, (y_pred >= 0.5).astype(int))
writer.add_scalar("Accuracy/test", acc, 0)
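Beyond a single accuracy scalar, TensorBoard can also draw a precision-recall curve from predicted probabilities. A sketch with synthetic labels and scores (not the actual predictions from this post):

```python
import torch
from torch.utils.tensorboard import SummaryWriter

# add_pr_curve takes binary labels and predicted probabilities and renders
# an interactive precision-recall curve in the "PR Curves" tab.
y_true = torch.tensor([0, 1, 1, 0, 1, 0, 1, 1])
y_score = torch.tensor([0.1, 0.8, 0.6, 0.4, 0.9, 0.2, 0.7, 0.55])
writer = SummaryWriter("runs/pr_demo")
writer.add_pr_curve("PR/test", y_true, y_score, 0)
writer.close()
```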

 

 

Closing the writer

: After training finishes, run the code below to finish writing the logs.

writer.close()

 

 

Running TensorBoard

: Run the commands below to view the logs!

%load_ext tensorboard
%tensorboard --logdir=runs

 

When you run it, you'll see something like this!


Used well, TensorBoard is a great tool for improving a model's performance quickly and for steering repeated experiments in the right direction!

Whether you're building machine learning or deep learning models, give it a try and get great resultsπŸ˜ŽπŸŽΆ

κ°μ‚¬ν•©λ‹ˆλ‹€πŸ™Œ