Sizing a System for Memory Requirements

May 20, 2017 | By Michael Schulman

I’ve been thinking recently about how to size a system for a specific workload, and in particular how to purchase a system with the memory it actually needs. Rather than just buying a lot of memory and hoping it is the right amount, why not define the size of the model and the physics of the simulation in advance and use an algorithm to determine the right amount of memory? What is the correct way to size a system?

For example, if a CAE analyst knows that the desired model size in terms of degrees of freedom will be X, then an appropriately sized system could be purchased with Y amount of memory. There would need to be a mapping from the degrees of freedom, X, to the needed memory, Y: Y = f(X). It gets more complicated, however, depending on the types and amounts of physics that are added to the simulation. Are mode shapes to be determined? Is heating or cooling of the solid to be included? Is the material behavior non-linear? Are dynamic forces to be dealt with? These and many more real-world physics effects all consume more memory per node and element than a simple static analysis would, as the sketch below illustrates.
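To make the idea concrete, here is a minimal sketch in Python of what such a mapping Y = f(X) might look like. Everything in it is an assumption for illustration: the function name, the baseline bytes per degree of freedom, and the per-physics increments are invented numbers, not measured values for any particular solver. Real coefficients would have to come from your solver vendor or from profiling your own runs.

```python
# A minimal sketch, not a vendor formula: the bytes-per-DOF figures below
# are illustrative assumptions, not measurements from any real solver.

def estimate_memory_gb(dofs,
                       modal=False,
                       thermal=False,
                       nonlinear=False,
                       dynamic=False,
                       headroom=1.5):
    """Rough memory estimate Y = f(X) for a model with `dofs` degrees of freedom.

    Each physics option adds an assumed per-DOF cost; `headroom` leaves room
    for the OS, I/O buffers, and tomorrow's larger models.
    """
    bytes_per_dof = 1_000          # baseline linear static analysis (assumed)
    if modal:
        bytes_per_dof += 400       # mode-shape extraction (assumed)
    if thermal:
        bytes_per_dof += 200       # heating/cooling of the solid (assumed)
    if nonlinear:
        bytes_per_dof += 600       # non-linear material state (assumed)
    if dynamic:
        bytes_per_dof += 500       # dynamic forces / time history (assumed)
    return dofs * bytes_per_dof * headroom / 1e9


# Example: a 20-million-DOF model with non-linear materials and dynamic loads
print(f"{estimate_memory_gb(20_000_000, nonlinear=True, dynamic=True):.0f} GB")
```

The point of the sketch is the shape of the calculation, not the numbers: each added physics effect raises the per-node and per-element cost, so the same model size X can demand very different amounts of memory Y depending on what is being simulated.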

Other types of simulations likewise require more memory as the model size increases. How do you size your compute servers? Randomly, or by closely analyzing the work that needs to be done? And don’t forget about your model sizes tomorrow! Let us know how much RAM your application needs, for tomorrow’s simulations or big analytics.