Project description
Today the DLR institutes produce large data sets, for example by capturing planetary surfaces with high-resolution multispectral cameras or by numerically simulating complex fluid dynamics on supercomputers. Such datasets can reach terabytes or even petabytes in size, which makes it difficult to analyse them on a local workstation alone. Efficient interactive analysis therefore requires a framework for distributed post-processing.
Distributed post-processing splits the rendering pipeline and distributes its stages, so that each stage runs on specialised HPC resources with large local memory and a multi-processor architecture. For interactive analysis, significant characteristics are extracted from the raw data on the remote side and then displayed on the scientist's local client with sufficient response times.
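The core idea of this pipeline split can be sketched as follows. This is a minimal illustration, not part of the project's actual software: the function name, the block size, and the use of a block-wise mean as the "significant characteristic" are all assumptions chosen for the example. The point is only that the HPC side reduces the raw field before anything is sent to the client.

```python
# Hypothetical sketch of server-side data reduction: instead of shipping the
# raw simulation field to the scientist's workstation, the HPC side extracts a
# compact characteristic (here: a coarse block-wise mean) and transfers only that.

def extract_characteristics(raw, block=8):
    """Reduce a 2-D field by averaging over block x block tiles (runs on the HPC side)."""
    h, w = len(raw), len(raw[0])
    reduced = []
    for bi in range(0, h - h % block, block):
        row = []
        for bj in range(0, w - w % block, block):
            total = sum(raw[i][j]
                        for i in range(bi, bi + block)
                        for j in range(bj, bj + block))
            row.append(total / (block * block))
        reduced.append(row)
    return reduced

# Stand-in for one time step of a large simulation result.
raw_field = [[(i * j) % 7 for j in range(256)] for i in range(256)]

# The local client only ever receives the reduced 32 x 32 array,
# i.e. 64x fewer values than the raw 256 x 256 field.
reduced = extract_characteristics(raw_field)
print(len(reduced), len(reduced[0]))  # 32 32
```

In a real deployment the reduction step would of course be far more sophisticated (feature extraction, iso-surfaces, level-of-detail rendering), but the bandwidth argument is the same: only the reduced representation crosses the network to the client.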
Another challenge posed by large datasets is their storage and transport. Large file servers are located close to the supercomputer because of high-bandwidth and low-latency requirements. However, when user groups at different DLR institutes need access to these data, efficient transport becomes an important factor.
Hence, the goal of the project is to develop a software infrastructure that enables:
Project partners
Runtime
Since 01.01.2010