
Sunday, April 14, 2019

Training a neural network on the Pythagorean theorem

I want to teach a neural network the Pythagorean theorem. I seem to have done everything right: I normalized the data and the model looks correct, but I can't figure out where the mistake is.
import numpy as np
from keras.models import Sequential
from keras.layers.core import Dense, Activation
from keras.utils import np_utils

np.random.seed()

NB_EPOCH = 500
VERBOSE = 1

X_in = [[0, 44], [0, 18], [38, 0], [48, 14], [0, 36], [14, 0], [34, 0], [0, 0], [0, 38], [32, 0], [28, 0], [36, 0],
        [20, 48], [0, 6], [0, 20], [0, 42], [0, 8], [24, 32], [4, 0], [6, 8], [24, 10], [0, 22], [16, 12], [30, 40],
        [0, 32], [0, 32], [16, 0], [48, 20], [0, 8], [32, 0], [0, 46], [0, 22], [0, 8], [10, 24], [0, 36], [14, 0],
        [0, 22], [42, 0], [16, 12], [40, 30], [44, 0], [40, 0], [34, 0], [0, 32], [40, 30], [32, 0], [0, 30], [24, 18],
        [0, 26], [22, 0], [0, 4], [16, 0], [10, 0], [0, 32], [0, 42], [2, 0], [0, 38], [32, 24], [48, 0], [20, 0],
        [0, 18], [0, 38], [14, 48], [40, 42], [16, 12], [26, 0], [0, 20], [40, 30], [16, 30], [36, 48], [36, 0], [18, 24],
        [34, 0], [16, 0], [0, 24], [0, 24], [0, 18], [38, 0], [28, 0], [0, 34], [0, 36], [24, 32], [16, 30], [40, 30],
        [24, 0], [0, 14], [8, 6], [12, 0], [16, 0], [16, 30], [48, 14], [0, 30], [38, 0], [38, 0], [0, 8], [36, 48],
        [0, 32], [10, 24], [46, 0], [24, 10], [30, 0], [0, 48], [40, 0], [42, 0], [32, 24], [32, 0], [12, 16], [0, 4],
        [0, 28], [32, 0], [40, 42], [46, 0], [0, 24], [30, 16], [36, 48], [40, 0], [24, 0], [0, 22], [40, 42], [10, 24],
        [0, 16], [14, 48], [22, 0], [0, 22], [30, 0], [0, 2], [48, 20], [6, 0], [6, 0], [28, 0], [20, 0], [0, 40],
        [42, 0], [48, 36], [14, 0], [10, 24], [0, 30], [48, 20], [40, 30], [0, 0], [42, 40], [0, 48], [32, 24]]

X_answer = [[44], [18], [38], [50], [36], [14], [34], [0], [38], [32], [28], [36], [52], [6], [20], [42], [8], [40], [4], [10], [26], [22], [20], [50],
            [32], [32], [16], [52], [8], [32], [46], [22], [8], [26], [36], [14], [22], [42], [20], [50], [44], [40], [34], [32], [50], [32], [30], [30],
            [26], [22], [4], [16], [10], [32], [42], [2], [38], [40], [48], [20], [18], [38], [50], [58], [20], [26], [20], [50], [34], [60], [36], [30],
            [34], [16], [24], [24], [18], [38], [28], [34], [36], [40], [34], [50], [24], [14], [10], [12], [16], [34], [50], [30], [38], [38], [8], [60],
            [32], [26], [46], [26], [30], [48], [40], [42], [40], [32], [20], [4], [28], [32], [58], [46], [24], [34], [60], [40], [24], [22], [58], [26],
            [16], [50], [22], [22], [30], [2], [52], [6], [6], [28], [20], [40], [42], [60], [14], [26], [30], [52], [50], [0], [58], [48], [40]]

X_in = np.asarray(X_in, dtype=np.float32)
X_answer = np.asarray(X_answer, dtype=np.float32)

X_in /= np.amax(X_in)
X_answer /= np.amax(X_answer)

model = Sequential()
model.add(Dense(10, input_dim=2, activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(1, activation='softmax'))

model.compile(loss='mean_squared_error', optimizer='adam', metrics=['accuracy'])

history = model.fit(X_in, X_answer, epochs=NB_EPOCH, verbose=VERBOSE)
Whether I train for 100 epochs or for 500, I get the same result:
Epoch 1/100
143/143 [==============================] - 0s 2ms/step - loss: 0.2966 - acc: 0.0280
Epoch 2/100
143/143 [==============================] - 0s 52us/step - loss: 0.2966 - acc: 0.0280
...
Epoch 100/100
143/143 [==============================] - 0s 38us/step - loss: 0.2966 - acc: 0.0280

(epochs 3 through 99, all with the identical loss 0.2966 and acc 0.0280, omitted)


Answer

As @L.Murashov has already said, the softmax activation function is used in multi-class classification problems: it computes the probabilities of a sample belonging to each of the classes. With a single output neuron, softmax always returns exactly 1, so the model's output is constant and the loss cannot decrease, which is exactly what your training log shows.
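To make that concrete, here is a minimal illustration (my own sketch, not part of the original answer): softmax over a one-element vector is always [1.0], whatever the logit.

import numpy as np

def softmax(z):
    # Numerically stable softmax: shift by the max before exponentiating.
    e = np.exp(z - np.max(z))
    return e / e.sum()

# Over a single output unit the result is always [1.], regardless of input:
print(softmax(np.array([0.3])))    # -> [1.]
print(softmax(np.array([-42.0])))  # -> [1.]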
There are a few more points worth paying attention to:
- The training set contains far too many zeros (about half of the samples). That is the degenerate case of the Pythagorean theorem, where one side of the triangle has zero length. To train the model better, take a larger sample; in your case it is no problem to generate as much data as you need (see the sketch right after this list).
- For this (simple) problem, a single hidden layer is enough.
- As the activation of the output layer, you can use linear.
- As the loss function and the metric, you can choose mean_squared_error.
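A hedged sketch of such a generator (my own addition; the lower bound of 1 instead of 0 is a deliberate choice to exclude the degenerate zero-length sides, whereas the full example below keeps the lower bound of 0):

import numpy as np

# Generate N random leg pairs (a, b) with strictly positive lengths
# and the exact hypotenuse as the regression target.
N = 5000
rng = np.random.RandomState(42)
X = rng.randint(1, 50, size=(N, 2)).astype(np.float32)
y = np.sqrt((X ** 2).sum(axis=1))  # same as np.linalg.norm(X, axis=1)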

Example:
import numpy as np
from keras.models import Sequential
from keras.layers.core import Dense, Activation

N = 5000
np.random.seed(1234)
X = np.random.randint(0, 50, size=(N, 2))
y = np.linalg.norm(X, axis=1)

NB_EPOCHS = 100
VERBOSE = 1

model = Sequential()
model.add(Dense(20, input_dim=2, activation='relu'))
model.add(Dense(1, activation='linear'))
model.compile(loss='mse', optimizer='adam', metrics=['mean_squared_error'])
model.fit(X, y, epochs=NB_EPOCHS, verbose=VERBOSE)
Training:
...
5000/5000 [==============================] - 0s 22us/step - loss: 0.0057 - mean_squared_error: 0.0057
Epoch 98/100
5000/5000 [==============================] - 0s 22us/step - loss: 0.0048 - mean_squared_error: 0.0048
Epoch 99/100
5000/5000 [==============================] - 0s 22us/step - loss: 0.0045 - mean_squared_error: 0.0045
Epoch 100/100
5000/5000 [==============================] - 0s 22us/step - loss: 0.0043 - mean_squared_error: 0.0043
Prediction:
In [71]: model.predict(np.array([[3, 4], [10, 10], [5, 6]]))
Out[71]:
array([[ 5.018393],
       [14.130004],
       [ 7.841759]], dtype=float32)
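The fit is easy to sanity-check by comparing the predictions with the exact hypotenuses; a small sketch, assuming the model from the example above is still in scope:

import numpy as np

test = np.array([[3, 4], [10, 10], [5, 6]], dtype=np.float32)
pred = model.predict(test).ravel()   # the model's estimates
true = np.linalg.norm(test, axis=1)  # exact values: 5.0, ~14.142, ~7.810
print(np.c_[pred, true, np.abs(pred - true)])  # prediction, truth, abs error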
