Quality
When discussing the difference between license-bound simulation software and open source simulation software, one of the first concerns I normally hear is: "Is the quality of the open source alternatives high enough?"
Then I am very tempted to quip: "Define quality." I believe that relevant quality issues may be classified this way:
1. Confidence that the most frequently executed parts of the code have been exercised so many times that they cannot possibly be in error.
2. Confidence that your installation behaves exactly as the application behaves at the software supplier's installation.
3. Confidence that a new user is guided through a flow of operations which creates the simulation that the user expects.
4. Confidence that a new user is appropriately warned if a solution does not fulfil criteria with respect to equilibrium, convergence etc.
5. Confidence that an experienced user is appropriately warned if a solution does not fulfil criteria with respect to equilibrium, convergence etc.
6. Confidence that trivial solution areas (rigid body behavior, for instance) will exhibit trivial solution properties (a special but important case of 4 and 5; a minimal numerical check is sketched after this list).
7. Confidence that important solution aspects are properly highlighted during postprocessing.
8. Confidence that if a simulation is rerun with slight changes, the difference between the new solution and the old one will not be significantly affected by obscure differences in the numerics.
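Point 6 lends itself to automation. As a minimal sketch, assuming nothing beyond numpy (the stiffness value is an arbitrary illustration): a free-free chain of springs can translate freely as a rigid body, so its stiffness matrix must have exactly one near-zero eigenvalue. A solver whose matrices fail this trivial test fails the trivial-solution criterion.

```python
import numpy as np

# Free-free chain: three nodes joined by two springs of stiffness k.
# Nothing is fixed, so the chain can translate as a rigid body, and the
# stiffness matrix must be singular: exactly one (numerically) zero eigenvalue.
k = 1.0e6  # spring stiffness in N/m, arbitrary illustration value
K = k * np.array([[ 1.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  1.0]])

eigenvalues = np.linalg.eigvalsh(K)
rigid_body_modes = int(np.sum(np.abs(eigenvalues) < 1e-9 * k))
print("eigenvalues:", eigenvalues)
print(f"rigid body modes found: {rigid_body_modes} (expected: 1)")
```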
Here you can read one story indicating that the quality of open source software may even exceed that of license-bound software.
The only point where license-bound software may have a clear advantage is, in my opinion, number 3. New users of open source simulation software should consider calling in somebody else for a sanity check; many consulting bureaus are available for exactly that purpose.
For the remaining points, the quality you get depends much more on you than on the software you use. And if you want to put really tricky questions to your simulation, the open source world is much easier to deal with, simply because the source is open.
When it comes to quipping, I recall the legal preamble at each startup of a well-established license-bound software product: "Users must verify their own results." That is so true for any kind of mechanical engineering simulation.
There is, by the way, a case against overemphasizing quality issues for most simulations:
You won't need significantly better quality for your virtual prototypes than for your physical ones.
If you rely on calculated properties that you will never be able to validate through an experiment, you have put yourself in dangerous waters. However, if a calculated property can be compared with a measured one, there is little point in making the calculation considerably more accurate than the experiment.
Is it possible to quantify the concept of "quality"?
Concerning item number 8 on the list above, you can to a certain extent. By varying the input to a CAE simulation and studying the variations of the output, you obtain results that can be subjected to statistical analysis. Read more here.
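As a minimal sketch of such a study, assuming only the Python standard library: the `run_simulation` function below is a hypothetical stand-in (a textbook cantilever-deflection formula with made-up dimensions), not any particular solver. Replace it with a call to your own tool chain.

```python
import random
import statistics

def run_simulation(youngs_modulus, load):
    """Hypothetical stand-in for a real CAE run: tip deflection of a
    cantilever beam, delta = F * L^3 / (3 * E * I), with made-up geometry."""
    length = 1.0             # m
    second_moment = 8.33e-6  # m^4, roughly a 100 x 100 mm square section
    return load * length**3 / (3.0 * youngs_modulus * second_moment)

random.seed(1)  # reproducible scatter
E_nominal, F_nominal = 210e9, 10e3  # Pa, N

# Vary the inputs by a few percent and collect the corresponding outputs.
deflections = []
for _ in range(1000):
    E = random.gauss(E_nominal, 0.03 * E_nominal)  # 3 % scatter in stiffness
    F = random.gauss(F_nominal, 0.05 * F_nominal)  # 5 % scatter in load
    deflections.append(run_simulation(E, F))

mean = statistics.mean(deflections)
spread = statistics.stdev(deflections) / mean
print(f"tip deflection: {mean * 1e3:.2f} mm, relative spread: {spread:.1%}")
```

The relative spread of the output (here roughly the root-sum-square of the input scatters) is exactly the kind of number a statistical treatment of simulation quality can build on.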
How about quality and safety?
My concise answer is: Think more than twice before using simulation to validate safety. If at all possible, validate safety by either
- testing a prototype under upscaled loading conditions, or
- dividing a force by an area (a back-of-the-envelope check sketched right after this list)
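The second rule of thumb is deliberately primitive, and that is its strength. A minimal sketch (cross-section, load, and material values are illustrations, not taken from any design code):

```python
# Back-of-the-envelope safety check: nominal stress = force / area,
# compared against an allowable stress. All values are illustration only.
force = 40e3             # N, applied axial load
area = 0.020 * 0.010     # m^2, a 20 mm x 10 mm cross-section
yield_strength = 355e6   # Pa, a common structural steel grade
safety_factor = 1.5

stress = force / area                       # 200 MPa
allowable = yield_strength / safety_factor  # ~237 MPa
print(f"stress {stress / 1e6:.0f} MPa vs allowable {allowable / 1e6:.0f} MPa:",
      "OK" if stress <= allowable else "NOT OK")
```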
To these rules of thumb, there are of course exceptions:
- Pressure vessel codes like ASME VIII-2 or EN 13445 were among the earliest adopters of finite element analysis for safety calculations.
- A carefully executed elastic or elastic-plastic analysis may be used to validate the safety of welded structures.
- Calculated natural vibration frequencies may be necessary for seismic analyses and similar situations. Still, there is a case against delving too deep into intricacies: when the going gets tough, boundary conditions should not be expected to remain foreseeable. It may be a wise choice to create a hierarchy of simplified models and check that the main frequency is reasonably insensitive to the choice of model granularity, as sketched below.
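As a minimal sketch of such a model hierarchy, assuming only numpy and made-up bar data: the first axial natural frequency of a fixed-free bar, computed from lumped spring-mass chains of increasing refinement, should settle quickly as the chain is refined. If it does not, the choice of model granularity deserves suspicion.

```python
import numpy as np

def first_frequency(n, E=210e9, rho=7850.0, area=1e-4, length=1.0):
    """First axial natural frequency [Hz] of a fixed-free bar, modelled
    as a chain of n lumped masses joined by n springs (illustration data)."""
    k = n * E * area / length    # stiffness of each bar segment
    m = rho * area * length / n  # mass of each bar segment
    # Tridiagonal stiffness matrix for the chain, with node 0 fixed.
    K = k * (2.0 * np.eye(n) - np.eye(n, k=1) - np.eye(n, k=-1))
    K[-1, -1] = k                # the free end node carries only one spring
    M = m * np.eye(n)
    M[-1, -1] = m / 2.0          # half a segment mass lumped at the free end
    omega_squared = np.linalg.eigvals(np.linalg.solve(M, K)).real
    return np.sqrt(omega_squared.min()) / (2.0 * np.pi)

# The coarse-to-fine hierarchy: the first frequency should converge quickly.
for n in (2, 4, 8, 16, 64):
    print(f"{n:3d} segments: {first_frequency(n):8.1f} Hz")
# Analytic reference for the continuous bar: f = sqrt(E / rho) / (4 L)
print(f"analytic:     {np.sqrt(210e9 / 7850.0) / 4.0:8.1f} Hz")
```

If the column of numbers drifts instead of converging, the simplified models disagree among themselves, and none of them should be trusted for the seismic case.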