1. multivariate input
For multivariate input, simply set n_features to the number of parallel input series and shape the input array accordingly, i.e. (samples, n_steps, n_features).
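As a minimal sketch of shaping the array (the toy series and variable names are my own), two input series can be stacked column-wise and split into windows of n_steps so that X has shape (samples, n_steps, n_features):

```python
import numpy as np

# Two hypothetical input series and one output series (toy data).
in_seq1 = np.array([10, 20, 30, 40, 50, 60, 70, 80, 90])
in_seq2 = np.array([15, 25, 35, 45, 55, 65, 75, 85, 95])
out_seq = in_seq1 + in_seq2

# Stack columns so each row is one time step across all features.
dataset = np.stack([in_seq1, in_seq2, out_seq], axis=1)  # shape (9, 3)

n_steps = 3
X, y = [], []
for i in range(len(dataset) - n_steps + 1):
    X.append(dataset[i:i + n_steps, :-1])   # window of input features
    y.append(dataset[i + n_steps - 1, -1])  # output at the end of the window
X, y = np.array(X), np.array(y)

n_features = X.shape[2]   # 2 -> this is what input_shape uses
print(X.shape, y.shape)   # (7, 3, 2) (7,)
```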
2. multistep output
2.1. for normal multistep
model = Sequential()
model.add(LSTM(100, activation='relu', return_sequences=True, input_shape=(n_steps_in, n_features)))
model.add(LSTM(100, activation='relu'))
model.add(Dense(n_steps_out))  # a Dense layer with n_steps_out outputs
model.compile(optimizer='adam', loss='mse')
The model outputs a vector of size n_steps_out per sample.
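A minimal usage sketch, assuming the tensorflow.keras API and toy sizes of my own choosing; an untrained forward pass just confirms the output shape:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense

# Toy sizes (illustrative only).
n_steps_in, n_steps_out, n_features = 3, 2, 2

model = Sequential()
model.add(LSTM(100, activation='relu', return_sequences=True,
               input_shape=(n_steps_in, n_features)))
model.add(LSTM(100, activation='relu'))
model.add(Dense(n_steps_out))  # one output per future step
model.compile(optimizer='adam', loss='mse')

x = np.zeros((1, n_steps_in, n_features), dtype='float32')
yhat = model.predict(x, verbose=0)
print(yhat.shape)  # (1, 2): n_steps_out values per sample
```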
2.2. for encoder/decoder multistep
model = Sequential()
model.add(LSTM(100, activation='relu', input_shape=(n_steps_in, n_features)))
model.add(RepeatVector(n_steps_out))  # repeat the encoded vector n_steps_out times as input to the decoder
model.add(LSTM(100, activation='relu', return_sequences=True))
model.add(TimeDistributed(Dense(1)))  # a TimeDistributed Dense output of size 1 at each step
model.compile(optimizer='adam', loss='mse')
The model outputs one value per output step, i.e. shape (n_steps_out, 1) per sample.
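The same kind of shape check for the encoder/decoder variant (again a sketch with my own toy sizes, assuming tensorflow.keras) shows that its output is 3-D, unlike the plain vector-output model:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import LSTM, Dense, RepeatVector, TimeDistributed

# Toy sizes (illustrative only).
n_steps_in, n_steps_out, n_features = 3, 2, 2

model = Sequential()
model.add(LSTM(100, activation='relu', input_shape=(n_steps_in, n_features)))
model.add(RepeatVector(n_steps_out))   # encoder vector repeated per output step
model.add(LSTM(100, activation='relu', return_sequences=True))
model.add(TimeDistributed(Dense(1)))   # one value per output step
model.compile(optimizer='adam', loss='mse')

x = np.zeros((1, n_steps_in, n_features), dtype='float32')
yhat = model.predict(x, verbose=0)
print(yhat.shape)  # (1, 2, 1): one value per output step
```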
3. side notes
3.1. for CNN/LSTM - a time-distributed CNN before passing into the LSTM
input shape: (sample, subsequence, steps, features)
model = Sequential()
model.add(TimeDistributed(Conv1D(64, 1, activation='relu'), input_shape=(None, n_steps, n_features)))
model.add(TimeDistributed(MaxPooling1D()))
model.add(TimeDistributed(Flatten()))
model.add(LSTM(50, activation='relu'))
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
3.2. for ConvLSTM
input shape: (sample, steps, rows, columns, features)
model = Sequential()
model.add(ConvLSTM2D(64, (1,2), activation='relu', input_shape=(n_steps, 1, n_seq, n_features)))
model.add(Flatten())
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')
We use a single row here because a univariate time series is naturally one-dimensional.
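As a sketch of how the 5-D ConvLSTM input is prepared (toy sizes and the reshape are my own, following the (sample, steps, rows, columns, features) layout used above): each univariate window is split into n_steps subsequences of n_seq values, and a forward pass confirms the shapes:

```python
import numpy as np
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import ConvLSTM2D, Flatten, Dense

# Toy sizes (mine): 8 samples, each a univariate window of 4 values,
# split into n_steps = 2 subsequences of n_seq = 2 values each.
n_samples, n_steps, n_seq, n_features = 8, 2, 2, 1
X = np.zeros((n_samples, n_steps * n_seq, n_features), dtype='float32')
X = X.reshape(n_samples, n_steps, 1, n_seq, n_features)  # rows = 1

model = Sequential()
model.add(ConvLSTM2D(64, (1, 2), activation='relu',
                     input_shape=(n_steps, 1, n_seq, n_features)))
model.add(Flatten())
model.add(Dense(1))
model.compile(optimizer='adam', loss='mse')

yhat = model.predict(X, verbose=0)
print(X.shape, yhat.shape)  # (8, 2, 1, 2, 1) (8, 1)
```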