With the advent of more powerful and effective scientific instrumentation (such as next-generation sequencers in genomics), the amount and complexity of the data being generated are exploding. The ability to process, analyse and ultimately act on this data is becoming a critical element of research success.
In such data-intensive science, High Performance Computing (HPC) is an indispensable tool. The system gives researchers access to compute power that was previously hard to come by. This leads to a faster time to market, cost reductions, better quality of products and, above all, the opportunity to explore data and models in a more sophisticated way.
The HPC is designed as a highly available and versatile platform. Whether your job needs lots of memory, raw compute power or enormous amounts of storage, this HPC can handle your request. The current system contains 900 cores, 600 TB of parallel storage and a fast internal InfiniBand network.
It is prepared for growth to at least double the current capacity. Depending on future customer demands and technical innovation, the HPC can, thanks to its flexible architecture, support all kinds of CPU types. As a shared facility, the system is used by multiple users. Users submit their jobs to a queueing system, and the workload manager schedules these requests according to priority and available hardware.
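The submission workflow described above can be sketched as a minimal Slurm batch script. The job name, resource amounts and executable below are illustrative assumptions, not cluster defaults:

```shell
#!/bin/bash
# Minimal Slurm job script (illustrative values; adjust to your workload).
#SBATCH --job-name=my-analysis   # name shown in the queue
#SBATCH --ntasks=16              # request 16 cores
#SBATCH --mem=32G                # request 32 GB of memory
#SBATCH --time=04:00:00          # wall-clock limit of 4 hours

# The commands below run on the allocated node(s) once the
# workload manager grants the requested resources.
echo "Job started on $(hostname)"
srun ./my_model input.dat        # hypothetical executable and input
```

A script like this would typically be submitted with `sbatch job.sh` and monitored with `squeue -u $USER`; the workload manager then decides when it runs based on priority and free hardware.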
The HPC is housed in a professional datacentre, in a secure environment with constant temperature and a stable power supply. Different backup solutions are available to safeguard data from loss. All data on the home drives is backed up daily; in addition, users can choose to back up their compute data as well.
- 48x compute nodes: 16 cores, 64 GB memory, Intel Xeon, 2.2 GHz
- 2x Fat nodes: 64 cores, 1 TB memory, AMD Opteron, 2.3 GHz
- 600 TB Lustre parallel file system (15 GB/s)
- 56 Gb/s FDR InfiniBand network
- Connected to a fast internet connection (SURFnet)
- Scientific Linux on the nodes
- Job scheduling is done with Slurm
- Bright Cluster Manager software
- Compilers: Intel Parallel Studio XE 2017, Open64, GCC
- Software: R, ASReml, Python, Spark, Java, Octave, GEOS, Galaxy, MATLAB, JupyterHub
- File format libraries: NetCDF, HDF5, GDAL
- MPI: MPICH, MVAPICH, Open MPI
The software stack is subject to continuous upgrades and to the installation of new packages.
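On clusters of this kind, installed software such as the packages listed above is commonly made available through environment modules; a session might look like the following. The module names are assumptions for illustration and may differ on this system:

```shell
module avail           # list software installed on the cluster
module load R          # put R on the PATH for this session
module load python     # module names here are illustrative
R --version            # confirm the loaded version
```

Loading software per session keeps different (and possibly conflicting) versions available side by side, which suits a shared facility with many users.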
- Fast computing: the HPC cluster has a computing power of 13 teraflops. Because of its size, it offers users the possibility to run jobs in parallel and to use multiple nodes for a single job.
- Service: the HPC is a full-service platform; maintenance and technical management are taken care of.
- Community: the user groups and wiki give users the opportunity to share tips and tricks, experiences, knowledge and developed software. In this way your research can be accelerated.
- Pay only for what you use: compute power is charged per core per second of use; storage is charged by actual usage.
- Easy start-up: users can easily access the HPC and run their own models. In addition, a wide range of software is already installed and ready to use.
- Prepared for growth: the HPC is equipped for future expansion of capacity, which opens up possibilities for more applications.
The HPC can be used for a variety of research applications within the domain of the life sciences and living environment. It can be used for modelling, simulation, analysis and visualisation of data. Examples include:
- Genomics research and bioinformatics
- Climate models
- Modelling of river beds
- Modelling of food structures