research

The group's research covers systems from core to cloud and spans aspects from pointers to patterns. For novel compute, memory, and interconnect technologies, we currently focus on profiling, programming models, and operating system integration; recent fields of study include memory disaggregation, accelerated computing, and carbon awareness. For new applications built on distributed systems, we concentrate on co-simulations, hybrid test beds, and model-supported approaches. Regarding system composition, our interests include dependability, safety, security, and real-time properties. Beyond that, our work also addresses the effects of systems engineering on operations. The scenarios range from embedded IoT to scale-up machine learning workloads.

Academic activities, current trends, and new ideas are discussed in our research seminar.

current projects

FlexiDug
Flexible digital systems for rail transport in growth regions.
IoT Lab
A versatile facility for our research and teaching activities.

current open source projects

Metal FS
Near-storage compute-aware file system and FPGA operator pipelines.
marvis (formerly cohydra)
Co-simulation and Hybrid Testbed for Distributed Applications.
pgasus
A C++ parallel programming framework for NUMA systems, based on PGAS semantics.

For further projects, please refer to the OSM group's pages on GitHub and GitLab.

current topics

  • resource management
    • dynamic workload management in distributed systems
    • dynamic, workload-dependent resource scaling (e.g., dynamic LPARs)
    • memory optimizations for virtual machines (e.g., memory migration benchmarks)
    • NUMA-aware programming with the PGASUS framework: applications, case studies, benchmarks
    • architectures of future server systems
    • lock-free data structures (see the sketch after this list)
    • multicore/NUMA
    • advanced topology discovery with bandwidth and latency measurements to find shared interconnects
  • accelerator programming / heterogeneous computing
    • high-level programming facilities for distributed GPU computing (e.g., CloudCL)
    • high-level programming facilities for FPGAs (e.g., CAPI SNAP, OpenCL)
    • virtualization / containerization for GPUs or FPGAs
    • FPGAs as IaaS cloud resources (e.g., Amazon EC2 F1 instances)
    • hardware-accelerated memory compression (e.g., DEFLATE, 842 compression)
    • evaluation of integrated GPUs and APUs for latency-critical workloads (e.g., audio processing)
    • new programming languages & frameworks: CAPI SNAP, Radeon Open Compute, etc.
  • dependability
    • assessment and benchmarking
    • operation of heterogeneous infrastructures
    • fault tolerance of service-oriented architectures
    • microservice architectures
    • software fault injection (network, OS, communication, etc.)
    • automation (lab, operation, assessment, etc.)
    • error detection – coverage vs. complexity
    • decentralized architectures (e.g., distributed ledgers)
    • Internet of Things (e.g., Rail2X, IEEE 802.11p)
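
The lock-free data structures item lends itself to a brief illustration. The following is a minimal sketch of a Treiber stack in C++17; the class name and test harness are illustrative only and not taken from any group project, and safe memory reclamation (hazard pointers or epochs, and with it ABA protection) is deliberately omitted for brevity.

    #include <atomic>
    #include <cstdio>
    #include <thread>
    #include <utility>
    #include <vector>

    // Minimal Treiber stack: push/pop synchronize on a single atomic head
    // pointer via compare-and-swap retries, so no mutex is required.
    template <typename T>
    class LockFreeStack {
        struct Node {
            T value;
            Node* next;
        };
        std::atomic<Node*> head{nullptr};

    public:
        void push(T value) {
            Node* node = new Node{std::move(value), head.load(std::memory_order_relaxed)};
            // Retry until the new node is linked in front of the current head;
            // on failure, compare_exchange updates node->next to the new head.
            while (!head.compare_exchange_weak(node->next, node,
                                               std::memory_order_release,
                                               std::memory_order_relaxed)) {
            }
        }

        // Returns false if the stack is empty, otherwise moves the top value to 'out'.
        bool pop(T& out) {
            Node* node = head.load(std::memory_order_acquire);
            while (node && !head.compare_exchange_weak(node, node->next,
                                                       std::memory_order_acquire,
                                                       std::memory_order_relaxed)) {
            }
            if (!node) return false;
            out = std::move(node->value);
            delete node;  // NOTE: safe reclamation (hazard pointers, epochs) omitted here.
            return true;
        }
    };

    int main() {
        LockFreeStack<int> stack;
        std::vector<std::thread> producers;
        for (int t = 0; t < 4; ++t)
            producers.emplace_back([&stack, t] {
                for (int i = 0; i < 1000; ++i) stack.push(t * 1000 + i);
            });
        for (auto& p : producers) p.join();

        int value, count = 0;
        while (stack.pop(value)) ++count;
        std::printf("popped %d elements\n", count);  // expect 4000
        return 0;
    }

In a multicore/NUMA setting, the interesting questions are then contention on the shared head pointer and the placement of the allocated nodes, which ties this item to the NUMA-aware programming and topology discovery topics above.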

annual reports

(German language only)

past projects