Neural Network with Backpropagation - One Hidden Layer
One input, a two-neuron hidden layer with ReLU activation, and a single-value output.
y    Desired output
x    Input number
W11  Layer 1, first weight
W12  Layer 1, second weight
W21  Layer 2, first weight
W22  Layer 2, second weight

Learning rate
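The forward pass implied by the table columns below (H1 = ReLU(W11·x), H2 = ReLU(W12·x), output = W21·H1 + W22·H2) can be sketched as follows; the function and variable names are illustrative, not from the sheet:

```python
def relu(z):
    """ReLU activation: max(0, z)."""
    return max(0.0, z)

def forward(x, w11, w12, w21, w22):
    """One input -> two ReLU hidden neurons -> one linear output."""
    h1 = relu(w11 * x)          # first hidden neuron
    h2 = relu(w12 * x)          # second hidden neuron
    return w21 * h1 + w22 * h2  # single output value
```

With the first training row (x = 50, W11 = 3, W12 = -1, W21 = 2, W22 = 4) this gives H1 = 150, H2 = 0, and output 300, matching the table.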
 
Training data set (H1 = ReLU(W11·x), H2 = ReLU(W12·x), y = W21·H1 + W22·H2)
x    W11  W12  H1   H2   W21  W22  y
50   3    -1   150  0    2    4    300
-46  3    -1   0    46   2    4    184
48   3    -1   144  0    2    4    288
55   3    -1   165  0    2    4    330
-2   3    -1   0    2    2    4    8
6    3    -1   18   0    2    4    36
-18  3    -1   0    18   2    4    72
63   3    -1   189  0    2    4    378
-96  3    -1   0    96   2    4    384
22   3    -1   66   0    2    4    132
-88  3    -1   0    88   2    4    352
-38  3    -1   0    38   2    4    152
94   3    -1   282  0    2    4    564
41   3    -1   123  0    2    4    246
31   3    -1   93   0    2    4    186
-28  3    -1   0    28   2    4    112
73   3    -1   219  0    2    4    438
-76  3    -1   0    76   2    4    304
71   3    -1   213  0    2    4    426
11   3    -1   33   0    2    4    66
-5   3    -1   0    5    2    4    20
-61  3    -1   0    61   2    4    244
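One way to realize the backpropagation step for this architecture is per-sample gradient descent on a squared-error loss (ŷ − y)². The sheet does not record its loss function, initial weights, learning-rate value, or number of passes, so the defaults below are illustrative assumptions:

```python
def relu(z):
    """ReLU activation: max(0, z)."""
    return max(0.0, z)

def train(data, lr=1e-6, epochs=200, w0=(0.5, -0.5, 0.5, 0.5)):
    """Per-sample gradient descent on squared error (y_hat - y)**2.

    data: list of (x, y) pairs
    w0:   initial (W11, W12, W21, W22) -- assumed values, not from the sheet
    """
    w11, w12, w21, w22 = w0
    for _ in range(epochs):
        for x, y in data:
            # Forward pass
            h1 = relu(w11 * x)
            h2 = relu(w12 * x)
            y_hat = w21 * h1 + w22 * h2
            # Backward pass: d = dLoss/dy_hat
            d = 2.0 * (y_hat - y)
            g21 = d * h1                               # dLoss/dW21
            g22 = d * h2                               # dLoss/dW22
            g11 = d * w21 * x if w11 * x > 0 else 0.0  # ReLU gates the
            g12 = d * w22 * x if w12 * x > 0 else 0.0  # hidden gradients
            # Gradient descent step
            w11 -= lr * g11
            w12 -= lr * g12
            w21 -= lr * g21
            w22 -= lr * g22
    return w11, w12, w21, w22
```

Because the data are exactly realizable, training drives the products W21·W11 toward 6 and W22·|W12| toward 4. Many individual weight settings share those products, so a run need not land on exactly the weights reported under Optimal Weights.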


Optimal Weights
W11   3.883655516    W21   1.54493078
W12  -1.499246109    W22   2.667997907
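As a quick check (not part of the sheet), these fitted weights reproduce the training targets almost exactly: W21·W11 ≈ 6.000 recovers y = 6x for x > 0, and W22·|W12| ≈ 4.000 recovers y = 4|x| for x < 0.

```python
def relu(z):
    return max(0.0, z)

def predict(x, w11=3.883655516, w12=-1.499246109,
            w21=1.54493078, w22=2.667997907):
    """Evaluate the network at the fitted (optimal) weights."""
    return w21 * relu(w11 * x) + w22 * relu(w12 * x)

# (x, y) pairs from the training table
rows = [(50, 300), (-46, 184), (48, 288), (55, 330), (-2, 8), (6, 36),
        (-18, 72), (63, 378), (-96, 384), (22, 132), (-88, 352),
        (-38, 152), (94, 564), (41, 246), (31, 186), (-28, 112),
        (73, 438), (-76, 304), (71, 426), (11, 66), (-5, 20), (-61, 244)]
max_err = max(abs(predict(x) - y) for x, y in rows)   # worst-case error
```

The worst-case absolute error over all 22 rows is well under 0.01.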