RUL - Building the Neural Network Model

The time to failure (TTF) is different for every unit. A more useful representation of TTF is the fraction of life remaining: how many more readings the unit will send back, or how long until the returned data reaches the failure threshold, scaled to the interval [0, 1]. It is computed as:$$\mathrm{fTTF}_i=\frac{TTF_i-\min(TTF)}{\max(TTF)-\min(TTF)}$$where the minimum and maximum are taken within each unit, so fTTF falls from 1 (new) to 0 (failure). We can express this as a Python function:

In [1]:
from rul_code import *

def fractionTTF(dat, q):
    # Fraction of life remaining at row q, min-max scaled to [0, 1] within this unit
    return (dat.TTF[q] - dat.TTF.min()) / float(dat.TTF.max() - dat.TTF.min())

fTTF = []

# Compute the fractional TTF for every row, one unit at a time
for i in range(train['unit'].min(), train['unit'].max() + 1):
    dat = train[train.unit == i]
    dat = dat.reset_index(drop=True)
    for q in range(len(dat)):
        fTTF.append(fractionTTF(dat, q))

ntrain['fTTF'] = fTTF
Using TensorFlow backend.
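The explicit double loop mirrors the formula directly. The same per-unit min-max scaling can also be done in one vectorized step; a minimal sketch, assuming train and ntrain come from rul_code as above and share the same row order:

# Hypothetical vectorized equivalent of the loop above: min-max scale TTF within
# each unit. groupby(...).transform applies the lambda per unit and keeps row
# alignment, so the result can be assigned straight back to ntrain.
ntrain['fTTF'] = train.groupby('unit')['TTF'].transform(
    lambda s: (s - s.min()) / float(s.max() - s.min())
)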

The figure below compares the two representations for the first four units: time to failure in cycles, and the scaled fraction. Note that the number of cycles each unit takes to go from 1 to 0 is different.

In [2]:
mx = cyclestrain.iloc[0:4, 1].sum()   # total rows (cycles) spanned by the first four units

fig = plt.figure(figsize=(8, 8))
# Left panel: raw time to failure, in cycles
fig.add_subplot(1, 2, 1)
plt.plot(ntrain.TTF[0:mx])
plt.legend(['Time to failure (in cycles)'], bbox_to_anchor=(0., 1.02, 1., .102), loc=3, mode="expand", borderaxespad=0)
plt.ylabel('Original unit')
# Right panel: the same units, scaled to the fraction of life remaining
fig.add_subplot(1, 2, 2)
plt.plot(ntrain.fTTF[0:mx])
plt.legend(['Time to failure (fraction)'], bbox_to_anchor=(0., 1.02, 1., .102), loc=3, mode="expand", borderaxespad=0)
plt.ylabel('Scaled unit')
plt.show()
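[figure: time to failure in cycles (left) and as a fraction (right) for the first four units]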
In [3]:
ntrain['fTTF'].describe()
Out[3]:
count    20631.000000
mean         0.500000
std          0.290085
min          0.000000
25%          0.248718
50%          0.500000
75%          0.751282
max          1.000000
Name: fTTF, dtype: float64

The summary confirms that fTTF spans exactly [0, 1] with a mean of 0.5, as expected for a quantity that falls linearly from 1 to 0 within every unit. Next, let's take a look at the columns of the dataset we have selected for training the model.

In [4]:
pd.DataFrame(ntrain.columns).transpose()
Out[4]:
0 1 2 3 4 5 6 7 8 9 ... 12 13 14 15 16 17 18 19 20 21
0 unit cycles op_setting1 op_setting2 s2 s3 s4 s6 s7 s8 ... s12 s13 s14 s15 s17 s20 s21 maxcycles TTF fTTF

1 rows × 22 columns

At this point the data is ready and we can move on to training; the tool used here is Keras. One point to emphasize: it is the fractional TTF (fTTF, column 21 above) that we use as the training target Y_train. The arrays we will feed to the model are below.

In [5]:
X_train = ntrain.values[:, 1:19]   # columns 1-18: cycles, op settings and the selected sensors
Y_train = ntrain.values[:, 21]     # column 21: fTTF, the fraction of life remaining
X_test = ntest.values[:, 1:19]
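Positional slicing works, but it silently breaks if the columns are ever reordered. An equivalent selection by column name is a small safeguard; a sketch, assuming ntest carries the same feature columns as the layout shown in Out[4]:

# Hypothetical name-based equivalent of the positional slices above:
# every column except the bookkeeping ones is a feature, fTTF is the target.
feature_cols = [c for c in ntrain.columns
                if c not in ('unit', 'maxcycles', 'TTF', 'fTTF')]
X_train = ntrain[feature_cols].values
Y_train = ntrain['fTTF'].values
X_test = ntest[feature_cols].values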

For the neural network in Keras we use the ReLU activation function and the Adam optimizer. The model has not been tuned, but better results can be obtained by adjusting its hyperparameters. The network used here has 18 input nodes, one hidden layer with 6 nodes, and a single output node.

[figure: nn-18-6-1.png — the 18-6-1 network architecture]

In [6]:
# 18 inputs -> 6 hidden units (ReLU) -> 1 linear output (the predicted fTTF)
model = Sequential()
model.add(Dense(6, input_dim=18, kernel_initializer='normal', activation='relu'))
model.add(Dense(1, kernel_initializer='normal'))
model.compile(loss='mean_squared_error', optimizer='adam')

model.fit(X_train, Y_train, epochs=20)
Epoch 1/20
20631/20631 [==============================] - 1s 52us/step - loss: 0.0493
Epoch 2/20
20631/20631 [==============================] - 1s 43us/step - loss: 0.0091
Epoch 3/20
20631/20631 [==============================] - 1s 41us/step - loss: 0.0083
Epoch 4/20
20631/20631 [==============================] - 1s 42us/step - loss: 0.0081
Epoch 5/20
20631/20631 [==============================] - 1s 54us/step - loss: 0.0080
Epoch 6/20
20631/20631 [==============================] - 1s 60us/step - loss: 0.0078
Epoch 7/20
20631/20631 [==============================] - 1s 54us/step - loss: 0.0077
Epoch 8/20
20631/20631 [==============================] - 1s 49us/step - loss: 0.0075
Epoch 9/20
20631/20631 [==============================] - 1s 49us/step - loss: 0.0070
Epoch 10/20
20631/20631 [==============================] - 1s 49us/step - loss: 0.0064
Epoch 11/20
20631/20631 [==============================] - 1s 51us/step - loss: 0.0053
Epoch 12/20
20631/20631 [==============================] - 1s 49us/step - loss: 0.0048
Epoch 13/20
20631/20631 [==============================] - 1s 49us/step - loss: 0.0047
Epoch 14/20
20631/20631 [==============================] - 1s 51us/step - loss: 0.0047
Epoch 15/20
20631/20631 [==============================] - 1s 52us/step - loss: 0.0047
Epoch 16/20
20631/20631 [==============================] - 1s 64us/step - loss: 0.0047
Epoch 17/20
20631/20631 [==============================] - 1s 53us/step - loss: 0.0047
Epoch 18/20
20631/20631 [==============================] - 1s 53us/step - loss: 0.0046
Epoch 19/20
20631/20631 [==============================] - 1s 55us/step - loss: 0.0047
Epoch 20/20
20631/20631 [==============================] - 1s 52us/step - loss: 0.0046
Out[6]:
<keras.callbacks.History at 0x1a4058433c8>
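The loss levels off around 0.0046 after roughly a dozen epochs. With the model trained, scoring the test set is a single call; a minimal sketch (fTTF_pred is a name introduced here; converting the predicted fraction back into remaining cycles also requires each test unit's total life, which is not covered in this section):

# Predicted fraction of life remaining for every row of the test set.
# model.predict returns an (n, 1) array; flatten it to a 1-D vector.
fTTF_pred = model.predict(X_test).flatten()
print(fTTF_pred[:10])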