Project

Resilient Big Data Architectures (RES-BID)

The realization of Big Data relies on disruptive technologies such as Cloud Computing, the Internet of Things, and Data Analytics. Using these new technologies to design and construct Big Data systems creates new challenges for software architects that cannot be resolved using existing approaches. In particular, the massive volume of data, together with its velocity and variety, has many implications for software architecture. Big Data has become a very important driver for innovation and growth in various industries such as health, administration, agriculture, defense, and education.
Storing and analyzing petabytes of data are becoming increasingly common in many of these application areas. These systems represent major, long-term investments requiring considerable financial commitments and massive-scale software and system deployments.

Since an increasing number of applications depend on Big Data systems, it is important to build systems that are resilient to failures. The increasing size and complexity of Big Data systems have increased the number of potential failures that can occur in these systems. In general, coping with the novel challenges of Big Data systems requires systematic system engineering approaches that explicitly address the fundamental concerns of scale as well as the underlying technologies. To develop resilient Big Data systems, appropriate reliability techniques must be applied to detect and recover from potential failures. Reliability techniques can be based on fault prevention, fault tolerance, fault removal, or fault forecasting.
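As a concrete illustration of one of these categories, the following minimal Python sketch shows a simple fault-tolerance mechanism: retrying an operation with exponential backoff so that transient failures (e.g. a temporarily unreachable storage node) are masked rather than propagated. The `flaky_read` task and its failure pattern are hypothetical, chosen only to demonstrate the mechanism; a real Big Data system would combine such retries with replication, checkpointing, and other techniques.

```python
import time

def with_retries(operation, max_attempts=3, base_delay=0.01):
    """Run `operation`, retrying with exponential backoff on failure.

    Retrying is a fault-tolerance technique: transient faults are
    masked from the caller; only persistent faults are reported.
    """
    for attempt in range(1, max_attempts + 1):
        try:
            return operation()
        except Exception:
            if attempt == max_attempts:
                raise  # fault persisted; stop masking and report it
            time.sleep(base_delay * 2 ** (attempt - 1))

# Hypothetical flaky task: fails twice with a transient error, then succeeds.
calls = {"n": 0}
def flaky_read():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("transient node failure")
    return "block-42"

result = with_retries(flaky_read)
print(result)  # the two transient failures were masked
```

The design choice here is that fault tolerance is applied at the caller, so individual components may fail independently without compromising the overall computation.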
In this project, we aim to provide fault-tolerant Big Data architectures.