Big Data Architecture Overview Cheat Sheet (DRAFT)

Approach to define a suitable Big Data architecture

This is a draft cheat sheet. It is a work in progress and is not finished yet.

Patterns in Big Data Architecture

Patterns are everywhere and indicate best practices; therefore, patterns are not invented but found. Software design patterns emerged in the early 1990s, inspired by patterns in building architecture, and were first published as the famous Gang of Four design patterns. Since then the idea has grown into a collection of thousands of patterns across nearly every discipline, no longer limited to software development. Patterns can also be brought together in a compound to create new, higher-level patterns.

There are currently pattern collections (or pattern languages) for several modern fields of software development, such as cloud computing, microservices, and Big Data solutions. With them you can design a complete solution that incorporates the lessons learned, pitfalls, and best practices of others from the start.

Integration of Big Data solutions

Big Data solutions are always integrated with existing enterprise solutions. They solve the problem of processing a huge volume of data in as short a time as possible while coping with many different data structures.

Big Data Pipeline compound pattern

Big Data solutions therefore always form a data pipeline consisting of several steps, where each step identifies its input, performs one or more operations, and produces an output.
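The pipeline idea above can be sketched as a chain of steps in which each step's output becomes the next step's input. This is a minimal illustration only; the step names (`ingest`, `clean`, `aggregate`) and the sample data are assumptions, not part of any specific Big Data framework.

```python
def ingest(records):
    # Input: raw records; output: parsed dictionaries.
    return [dict(r) for r in records]

def clean(rows):
    # Operation: drop rows missing the 'value' field.
    return [r for r in rows if r.get("value") is not None]

def aggregate(rows):
    # Output: reduce the cleaned rows to a single summary.
    return {"count": len(rows), "total": sum(r["value"] for r in rows)}

def run_pipeline(data, steps):
    # Chain the steps: each step's output is the next step's input.
    for step in steps:
        data = step(data)
    return data

result = run_pipeline(
    [{"value": 3}, {"value": None}, {"value": 4}],
    [ingest, clean, aggregate],
)
print(result)  # {'count': 2, 'total': 7}
```

Real Big Data pipelines distribute these steps across a cluster, but the compound pattern is the same: clearly identified inputs, operations, and outputs per step.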

Existing Big Data Compound Patterns


Logical Big Data Archit­ecture


Big Data Mechanisms


Further Reading

This overview is an excerpt of the approach to defining a suitable Big Data solution environment architecture. For further investigation, check out the courses of Arcitura's Big Data School. A summary of all the presented information is available in the Big Data Patterns Overview by Arcitura.