Professorship for Operating Systems and Middleware

At the Operating Systems and Middleware group led by Prof. Dr. Andreas Polze, work is done "close to the metal". One kind of metal is new hardware: the group investigates programming paradigms, design patterns, and description techniques for large, distributed component systems and data centers, including how to measure and control their energy consumption. The other kind of metal involves rails: several projects explore new architectures for flexible, distributed rail control systems that operate signals and switches. We examine real-time capability, fault tolerance, and reliability through simulated experiments, in our laboratory, and directly on the track.
Prof. Polze also serves as spokesperson of the HPI Research School, HPI's international research college, and as a member of the steering committee of HPI's Future SOC Lab.

current projects

Future SOC Lab
Free of charge access for researchers to a powerful IT infrastructure.
FlexiDug
Flexible, digital systems for rail-bound transport in growth regions.
RailChain
Transfer of distributed ledger technologies to the railway sector.
Telemed5000
Developing a telemedicine system for the support of cardiological patients.

current open source projects

Metal FS
Near-storage compute-aware file system and FPGA operator pipelines.
marvis (formerly cohydra)
Co-simulation and Hybrid Testbed for Distributed Applications.
pgasus
A C++ parallel programming framework for NUMA systems, based on PGAS semantics.

For further projects, please refer to the OSM group's pages on GitHub and GitLab.

current topics

  • resource management from core to cloud
    • dynamic workload management in distributed systems
    • dynamic, workload-dependent resource scaling (e.g., dynamic LPARs)
    • memory optimizations for virtual machines (e.g., memory migration benchmarks)
    • NUMA-aware programming with the PGASUS framework: applications, case studies, benchmarks
    • architectures of future server systems
    • lock-free data structures
    • multicore/NUMA
    • advanced topology discovery with bandwidth and latency measurements to find shared interconnects
  • accelerator programming / heterogeneous computing
    • high-level programming facilities for distributed GPU computing (e.g., CloudCL)
    • high-level programming facilities for FPGAs (e.g., CAPI SNAP, OpenCL)
    • virtualization/containerization for GPUs or FPGAs
    • FPGAs as IaaS cloud resources (e.g., Amazon EC2 F1 instances)
    • hardware-accelerated memory compression (e.g., DEFLATE, 842 compression)
    • evaluation of integrated GPUs and APUs for latency-critical workloads (e.g., audio processing)
    • new programming languages & frameworks: CAPI SNAP, Radeon Open Compute, etc.
  • dependability
    • assessment and benchmarking
    • operation of heterogeneous infrastructures
    • fault tolerance of service-oriented architectures
    • microservice architectures
    • software fault injection (network, OS, communication, etc.)
    • automation (lab, operation, assessment, etc.)
    • error detection – coverage vs. complexity
    • decentralized architectures (e.g., distributed ledgers)
    • Internet of Things (e.g., Rail2X, IEEE 802.11p)