Big / Stream Data Processing

In the area of “Big / Stream Data Processing” we conduct research together with our scientific partners (e.g. JKU-FAW) on the following topics, among others:

Secure and Efficient Distributed Algorithms for Big Data

Distributed and secure deep learning algorithms are based on the idea that the parameters of deep learning models are updated locally and then merged into a global model. In this way, no training data has to be transferred from the machine operators to the central model provider. In practice, this mechanism makes it very hard to infer sensitive information about individual data sources (e.g. the users of a machine in the field).
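A minimal sketch of this local-update-then-merge scheme, in the style of federated averaging on a linear model; the three-client setup, learning rates, and function names are illustrative assumptions, not the algorithms developed in the project:

    import numpy as np

    def local_update(weights, X, y, lr=0.01, epochs=5):
        # One operator's local training: full-batch gradient descent on a
        # linear model. Only the updated weights leave the machine, never
        # the raw training data.
        w = weights.copy()
        for _ in range(epochs):
            grad = X.T @ (X @ w - y) / len(y)  # gradient of the mean squared error
            w -= lr * grad
        return w

    def federated_average(global_w, client_data):
        # Merge the locally updated models into a new global model.
        local_models = [local_update(global_w, X, y) for X, y in client_data]
        return np.mean(local_models, axis=0)

    # Hypothetical setup: three operators, each with a private data set.
    rng = np.random.default_rng(0)
    clients = [(rng.normal(size=(50, 3)), rng.normal(size=50)) for _ in range(3)]
    w = np.zeros(3)
    for _ in range(10):  # communication rounds
        w = federated_average(w, clients)

Only the weight vectors are communicated in each round; the training data itself stays with the respective operator.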

Heterogeneous Online Transfer Learning

Based on our developments in online transfer learning (Generalized Online Transfer Learning), we are researching and developing online transfer learning methods for distributed learning on heterogeneous data sources. In contrast to comparable methods, our algorithms are applicable to arbitrary loss functions and guarantee convergence to the best combination of a source and a target scenario.
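To illustrate the general idea (a generic sketch, not the published Generalized Online Transfer Learning algorithm), the following combines a fixed source predictor with an online-trained target predictor via exponentially weighted updates; the Hedge-style weighting and the squared loss are assumptions chosen for brevity:

    import numpy as np

    def online_transfer(source_predict, stream, dim, eta=0.5, lr=0.1):
        # Combine a fixed source predictor with an online-trained target
        # predictor; alpha holds the ensemble weights for [source, target].
        alpha = np.array([0.5, 0.5])
        w_target = np.zeros(dim)
        for x, y in stream:
            preds = np.array([source_predict(x), float(w_target @ x)])
            yield float(alpha @ preds)               # combined prediction
            losses = (preds - y) ** 2                # squared loss as a stand-in
            alpha *= np.exp(-eta * losses)           # down-weight the worse predictor
            alpha /= alpha.sum()
            w_target -= lr * 2 * (preds[1] - y) * x  # gradient step on the target model

Because the combination weights depend only on observed losses, the same scheme can be run with any bounded loss function in place of the squared loss used here.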

Data Quality and Data & Model Management

We focus on automated data quality (DQ) monitoring, i.e. the investigation of computational methods for continuously monitoring the DQ of real-world information systems. The aim is an approach that is as generic as possible and can be applied to assess different data models, including NoSQL databases. The research also includes recommending suitable DQ metrics that are particularly meaningful for making statements (and possibly predictions) about the temporal development of data quality.
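As a minimal sketch of what such continuous monitoring can look like, the following computes a simple completeness metric per time window, turning the temporal development of DQ into a time series that can be inspected or extrapolated; the column names and the pandas-based setup are illustrative assumptions:

    import pandas as pd

    def completeness(df, columns):
        # Fraction of non-null cells in the given columns: a simple DQ metric.
        return df[columns].notna().mean().mean()

    def dq_timeseries(df, timestamp_col, columns, freq="D"):
        # Completeness per time window, so the temporal development of DQ
        # becomes visible as a time series.
        grouped = df.set_index(timestamp_col).groupby(pd.Grouper(freq=freq))
        return grouped.apply(lambda g: completeness(g, columns))

    # Hypothetical sensor table with occasional missing values.
    data = pd.DataFrame({
        "ts": pd.date_range("2021-01-01", periods=6, freq="12h"),
        "temp": [20.1, None, 19.8, 20.3, None, 20.0],
        "pressure": [1.01, 1.02, None, 1.00, 1.01, 1.02],
    })
    print(dq_timeseries(data, "ts", ["temp", "pressure"]))

The same per-window pattern applies to other DQ metrics (e.g. validity or timeliness), and the resulting series can feed a forecasting model to predict DQ degradation.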

Selected publications are summarized in the strategic project dasRES.
