Domain-invariant feature learning with recurrent autoencoders for time series prediction
|Title|Domain-invariant feature learning with recurrent autoencoders for time series prediction|
|Organization|Software Competence Center Hagenberg GmbH|
Transfer learning (TL) for time-series prediction with neural networks is considered. In particular, the research domain around an industrial project (TRUMPF) guided by the SCCH is examined, and possible improvements to the current approaches are discussed. This report aims at providing evidence regarding two aspects: (a) the goals and directions for the coming years, and (b) experiments carried out during the reporting period. Concerning (a), the review of the state of the art for TL with neural networks shows that the primary key findings of these works are measures for the similarity between the hidden-activation distributions of neural networks with respect to different learning tasks. This fact, together with the time-series aspect, motivates possible future directions. Concerning (b), different regression algorithms are tested on the TRUMPF data, including elastic net regression, support vector machines, long short-term memory (LSTM) networks, gated recurrent neural networks, autoencoders, and combinations thereof. In particular, a new L2-regularization technique for multi-task neural networks is proposed and tested on the data.
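The abstract does not spell out the proposed L2 regularizer, so as a hedged illustration only, the following sketch shows one standard form of L2 coupling for multi-task networks: each task's weight matrix is penalized for deviating from the mean of the weights across tasks, encouraging the tasks to share a common representation. The function name and the penalty form are assumptions for illustration, not the report's actual method.

```python
# Illustrative sketch only: one common multi-task L2 coupling penalty,
# lam * sum_t ||W_t - mean(W)||_F^2, which pulls per-task weights toward
# a shared "consensus" without forcing them to be identical.
import numpy as np

def multitask_l2_penalty(task_weights, lam=0.1):
    """L2 coupling penalty over a list of per-task weight arrays."""
    W = np.stack(task_weights)     # shape: (num_tasks, *weight_shape)
    W_mean = W.mean(axis=0)        # shared consensus weights
    return lam * float(np.sum((W - W_mean) ** 2))

# Identical task weights incur zero penalty; diverging weights are penalized.
same = [np.ones((2, 2)), np.ones((2, 2))]
diff = [np.zeros((2, 2)), np.ones((2, 2))]
print(multitask_l2_penalty(same))   # 0.0
print(multitask_l2_penalty(diff))   # 0.2
```

In training, such a penalty would simply be added to the sum of the per-task prediction losses; the strength `lam` controls how strongly tasks are tied together.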