One of the main problems of computing clusters is their power consumption and, consequently, the heat they release.
Current HPC systems are especially dense, with heat loads that range from roughly 10 kW per rack for the least power-hungry clusters to 30 kW per rack for GPU-based clusters.
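As a rough, back-of-the-envelope illustration of where figures like these come from (the per-node power draws and node counts below are hypothetical, not measurements of any particular system), a few lines of Python suffice:

    # Back-of-the-envelope rack heat load, using hypothetical figures
    # (essentially all electrical power drawn by the nodes ends up as heat).
    cpu_node_watts = 350        # assumed draw of one CPU compute node (W)
    gpu_node_watts = 3000       # assumed draw of one multi-GPU node (W)
    cpu_nodes_per_rack = 30     # assumed density of a CPU-only rack
    gpu_nodes_per_rack = 10     # assumed density of a GPU rack

    cpu_rack_kw = cpu_node_watts * cpu_nodes_per_rack / 1000
    gpu_rack_kw = gpu_node_watts * gpu_nodes_per_rack / 1000

    print(f"CPU rack heat load: {cpu_rack_kw:.1f} kW")   # 10.5 kW
    print(f"GPU rack heat load: {gpu_rack_kw:.1f} kW")   # 30.0 kW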
This implies very high cooling requirements, and in practice we usually find one of two environments.
In the first, a small data center (CPD) is built specifically for the cluster. Conditioning the room then involves piping, raised technical flooring, hot- and cold-aisle containment and so on if we want a facility with all the guarantees; without them, the air-conditioning equipment will not be effective.
In the second, the cluster is introduced into an existing data center that has generally housed communications systems, management servers or web servers, and whose cooling installation was sized for those loads. The high density and thermal dissipation of the HPC system then upset the whole environment and raise the room temperature, making the new equipment unsustainable.
With SIE Ladon neutral-heat-emission clusters, this problem disappears: the heat is neutralized at the rear of the rack, where the computing nodes expel air at temperatures that can be around 60 °C. Because the temperature differential between this exhaust air and the outside is much larger than the differential between the room air and the air-conditioning system, heat rejection becomes much more efficient. A conventional cooling installation needs between 1 and 1.5 W of cooling power for every W of heat released by the equipment, whereas neutral-heat-emission systems work at 1.2 to 1.3 W per W.
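To make the difference concrete, here is a minimal sketch of what those ratios mean in practice, taking the upper end of the conventional range (1.5 W per W), the 1.2-1.3 W per W quoted above, and a hypothetical 30 kW rack as the heat load:

    # Cooling power required per watt of heat, under the ratios quoted above.
    # The 30 kW load is a hypothetical GPU rack, not a measured system.
    heat_load_kw = 30.0

    conventional_ratio = 1.5        # W of cooling power per W of heat
    neutral_ratios = (1.2, 1.3)     # W per W for neutral heat emission

    conventional_kw = heat_load_kw * conventional_ratio
    neutral_kw = [heat_load_kw * r for r in neutral_ratios]

    print(f"Conventional cooling:  {conventional_kw:.0f} kW")                        # 45 kW
    print(f"Neutral heat emission: {neutral_kw[0]:.0f}-{neutral_kw[1]:.0f} kW")      # 36-39 kW
    print(f"Saving: {(1 - neutral_kw[1] / conventional_kw):.0%} or more")            # 13% or more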
In addition, because the temperature difference with the outside air is even greater during the cold months, the system's consumption is minimal in winter. It is no longer necessary to plan hot and cold aisles or to study how the racks are laid out: each rack dissipates its own heat and keeps the working environment at an ideal temperature (20-25 °C). It is also much more pleasant for the cluster operators, who no longer have to endure the low temperatures of a conventional data center.
You can find more information in our presentation.