Computing Resource Offerings

With our data centre in Potsdam, we provide a heterogeneous infrastructure for artificial intelligence. This allows us to explore the requirements and needs of different stakeholders such as academic institutions, start-ups, and small and medium-sized enterprises. In this way, we can compare the costs and performance of different types of hardware, investigate edge applications, and provide flexible solutions for the development, optimization and deployment of complex AI models.

We will gladly help you use our infrastructure. To do so, please send us a brief project description and a signed copy of the user agreement, by which you accept our terms of use.

We will also be happy to support you in selecting suitable resources and to inform you about the advantages and disadvantages of cloud, on-premise and hybrid solutions.

Infrastructure Specifics

Our most powerful hardware comprises 8 NVIDIA BasePODs, each with 8 H100 GPUs (80 GB VRAM each), which can be used for training AI models. The H100 pods communicate with each other via 400 Gb/s InfiniBand or 200 Gb/s Ethernet.

For inference, 5 NVIDIA pods with 8 A30 GPUs each are available, which communicate via 25 Gb/s InfiniBand or 40 Gb/s Ethernet. In addition, we offer access to an NVIDIA Jetson AGX module, an ARM server, and a mid-range server, which are also relevant for SMEs.
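To put the pod counts above in perspective, here is a minimal back-of-the-envelope sketch of the aggregate GPU capacity. The pod and GPU counts and the 80 GB H100 figure come from the description above; the 24 GB of VRAM per A30 is NVIDIA's published specification and is an assumption not stated in this text.

```python
# Aggregate GPU capacity of the pods described above.
# Counts and the 80 GB H100 figure are from the text;
# 24 GB per A30 is NVIDIA's published spec (assumption).

H100_PODS, H100_PER_POD, H100_VRAM_GB = 8, 8, 80
A30_PODS, A30_PER_POD, A30_VRAM_GB = 5, 8, 24

training_gpus = H100_PODS * H100_PER_POD          # 64 H100 GPUs
training_vram_gb = training_gpus * H100_VRAM_GB   # 5120 GB total

inference_gpus = A30_PODS * A30_PER_POD           # 40 A30 GPUs
inference_vram_gb = inference_gpus * A30_VRAM_GB  # 960 GB total

print(f"Training:  {training_gpus} x H100, {training_vram_gb} GB VRAM in total")
print(f"Inference: {inference_gpus} x A30, {inference_vram_gb} GB VRAM in total")
```

This is only a capacity summary; actual usable memory per job depends on scheduling and framework overhead.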

Procedure: Access Infrastructure

  • Request via form or email

  • Confirmation of resources 

  • Signing of the user agreement and setting up the prerequisites for use

  • Access to computing resources 

  • Documentation of the results and findings obtained