Gradient Descent Playground
Step: 0 | x: -1.5 | y: 1.5
Start Optimization / Stop Optimization
Speed: 50 steps/s
Function: Rosenbrock, Gaussian1, Gaussian2, Saddle Point, Beale
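
Of these test surfaces, Rosenbrock and Beale have well-known closed forms. Below is a minimal sketch of both using the textbook coefficients; the playground's exact scaling (and the two Gaussian surfaces) is not shown on the page, so treat the constants as assumptions.

```ts
/** Rosenbrock: f(x, y) = (1 - x)^2 + 100 (y - x^2)^2.
 *  Global minimum 0 at (1, 1); the curved valley makes it a
 *  classic stress test for first-order optimizers. */
function rosenbrock(x: number, y: number): number {
  return (1 - x) ** 2 + 100 * (y - x * x) ** 2;
}

/** Beale: global minimum 0 at (3, 0.5). */
function beale(x: number, y: number): number {
  return (
    (1.5 - x + x * y) ** 2 +
    (2.25 - x + x * y * y) ** 2 +
    (2.625 - x + x * y ** 3) ** 2
  );
}
```
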
Initial X: -1.5
Initial Y: 1.5
Optimizer: Normal, Momentum, Nesterov, AdaGrad, RMSProp, AdaDelta, Adam, Metropolis, Newton-Raphson
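
The playground's internals are not shown on the page, so the following is a hedged sketch of how two of the listed update rules are standardly defined: vanilla ("Normal") gradient descent and Adam. The parameter names lr, beta1, beta2, and eps mirror the panel's fields but are assumptions, as is the Grad helper type.

```ts
type Grad = (x: number, y: number) => [number, number];

/** One step of vanilla gradient descent: move against the gradient. */
function sgdStep(
  x: number, y: number, grad: Grad, lr: number,
): [number, number] {
  const [gx, gy] = grad(x, y);
  return [x - lr * gx, y - lr * gy];
}

/** Per-run state for Adam: first/second moment estimates and step count. */
interface AdamState { m: [number, number]; v: [number, number]; t: number }

/** One step of Adam (Kingma & Ba, 2015) with bias-corrected moments. */
function adamStep(
  x: number, y: number, grad: Grad, s: AdamState,
  lr = 0.001, beta1 = 0.9, beta2 = 0.999, eps = 1e-8,
): [number, number] {
  const [gx, gy] = grad(x, y);
  s.t += 1;
  // Exponential moving averages of the gradient and its square.
  s.m[0] = beta1 * s.m[0] + (1 - beta1) * gx;
  s.m[1] = beta1 * s.m[1] + (1 - beta1) * gy;
  s.v[0] = beta2 * s.v[0] + (1 - beta2) * gx * gx;
  s.v[1] = beta2 * s.v[1] + (1 - beta2) * gy * gy;
  // Bias-correction factors for the zero-initialized moments.
  const c1 = 1 - beta1 ** s.t;
  const c2 = 1 - beta2 ** s.t;
  return [
    x - lr * (s.m[0] / c1) / (Math.sqrt(s.v[0] / c2) + eps),
    y - lr * (s.m[1] / c1) / (Math.sqrt(s.v[1] / c2) + eps),
  ];
}
```
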
Hyperparameters: Learning Rate, Momentum, DecayRate, Beta1, Beta2, Temperature (1.0), Decay (0.99)
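
The Temperature and Decay fields most plausibly drive the Metropolis optimizer as a simulated-annealing step: a random proposal is always accepted if it lowers f, otherwise accepted with probability exp(-Δf / T), with T multiplied by the decay factor after each step. A sketch under that assumption; stepSize and the uniform proposal distribution are hypothetical details.

```ts
/** One Metropolis step at temperature T. Returns the new point
 *  (the proposal if accepted, the current point otherwise). */
function metropolisStep(
  x: number, y: number,
  f: (x: number, y: number) => number,
  T: number,
  stepSize = 0.1,
): [number, number] {
  // Uniform random proposal in a small box around the current point.
  const nx = x + (Math.random() - 0.5) * stepSize;
  const ny = y + (Math.random() - 0.5) * stepSize;
  const df = f(nx, ny) - f(x, y);
  // Metropolis criterion: downhill always accepted; uphill accepted
  // with probability exp(-df / T), so high T explores more freely.
  if (df <= 0 || Math.random() < Math.exp(-df / T)) return [nx, ny];
  return [x, y];
}

// Annealing schedule matching the panel's defaults:
// T starts at 1.0 and is multiplied by 0.99 after every step.
let T = 1.0;
const decay = 0.99;
// ...after each metropolisStep call: T *= decay;
```
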