One of the parameters driving the age-old question of “is my mesh refined enough?” is the impact mesh density has on model size, and the knock-on effect on runtime and computer specification. This comes into starkest relief when you have a dimensionally very large structure but need a fine local mesh around some detailed areas in order to resolve stresses accurately for, e.g., fatigue life prediction.
Almost all the solid element meshes I have come across in the last decade or so have used second-order 10-noded tetrahedral elements, and for most applications they are perfectly fine. There are, however, some applications in some FEA solvers that require a hex8 element – for example some elastomer models, magneto- and electrostatic solvers, and some acoustic solutions. How do you mesh an irregular part with hex elements?
It is for good reason that designing laminated composite structures is sometimes known as a ‘black art’. It is not easy to intuit, from the topology of a component and the loads applied to it, what a good ply layup should be. Many companies rely on the wisdom of veteran engineers’ hard-won experience, but sometimes it is necessary to take a step back and ask “what else could we try?”.
Often the design of a composite layup starts with the definition of zones within a part. The layup on each of these zones can then be fettled using FEA to arrive at a stacking sequence which can then be used to define plies.
But how do you choose the zones? Is it arbitrary, based on the topology of the part? Do you just chequer-board your panel into regular squares? Or you could use a technique developed with MSC Nastran for one of the F1 companies.
Modelling Cracks the Easy Way
In my previous blog I talked about the advantages of automatic re-meshing in the analysis of rubbers for improving the accuracy and stability of a simulation. One advanced application of this capability that was not touched upon is crack propagation.
In many industries it is sufficient to use your analysis to predict that a crack could initiate, and to redesign the part to avoid this occurrence. In others, though, a crack may be identified during an in-service inspection, whereupon it becomes necessary to understand whether it will propagate under the applied loads, and how quickly, so that a replacement can be introduced in a timely manner.
Predicting crack growth in materials with finite elements can seem more art than science.
As an example, in some codes you may need to construct a very precise ‘rosette’ mesh at the crack tip. A series of angular perturbations to the crack tip node are then simulated to look at the energy released by extending the tip, on the assumption that the crack grows in the direction of greatest energy release.
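The perturbation idea can be sketched in a few lines. This is a toy illustration of a maximum-energy-release criterion, not any particular solver’s algorithm, and the energy values are made-up placeholders; in practice each would come from a finite element solve with the tip extended by a small increment at that angle.

```python
# Toy sketch of choosing a crack growth direction: perturb the crack tip
# through a set of candidate kink angles, evaluate the energy released by
# each virtual extension, and grow the crack in the direction that
# releases the most energy.
def pick_growth_angle(energy_release, angles_deg):
    """Return the candidate angle with the greatest energy release."""
    best = max(zip(angles_deg, energy_release), key=lambda pair: pair[1])
    return best[0]

# Candidate kink angles (degrees) and the energy released by a small
# virtual extension at each angle (hypothetical placeholder numbers).
angles = [-60, -30, 0, 30, 60]
released = [0.8, 1.4, 2.1, 1.3, 0.7]

print(pick_growth_angle(released, angles))  # → 0 (straight-ahead growth wins here)
```

In a real analysis the loop around this selection re-solves the model after each increment, so the crack path emerges step by step.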
Coping with Large Strain and Large Deformation in FEA
One of the challenges of analysing the performance of large-strain materials like rubbers and synthetic elastomers is how the finite element mesh distorts as the part deforms. You may well start out with a lovely mesh where every element meets your quality standards, but as the part distorts the element quality gets worse and worse, degrading your results until excessive distortion can actually end the analysis prematurely.
This is not an uncommon problem.
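To make the distortion problem concrete, here is a minimal sketch that tracks one simple quality metric for a single quad element – the worst corner angle’s deviation from 90 degrees – before and after the element is sheared. Both the metric and the node coordinates are illustrative assumptions, not any particular solver’s distortion check.

```python
# Track a simple quality metric for a quad element as it deforms:
# the largest deviation of any corner angle from 90 degrees
# (0 = perfect square; bigger = more distorted).
import math

def corner_angle_quality(nodes):
    """Worst corner-angle deviation from 90 deg for a 2D quad."""
    worst = 0.0
    n = len(nodes)
    for i in range(n):
        ax, ay = nodes[i - 1]          # previous node
        bx, by = nodes[i]              # corner being measured
        cx, cy = nodes[(i + 1) % n]    # next node
        v1 = (ax - bx, ay - by)
        v2 = (cx - bx, cy - by)
        dot = v1[0] * v2[0] + v1[1] * v2[1]
        norm = math.hypot(*v1) * math.hypot(*v2)
        angle = math.degrees(math.acos(dot / norm))
        worst = max(worst, abs(angle - 90.0))
    return worst

square = [(0, 0), (1, 0), (1, 1), (0, 1)]
sheared = [(0, 0), (1, 0), (1.8, 1), (0.8, 1)]  # same element after shearing

print(corner_angle_quality(square))   # → 0.0
print(corner_angle_quality(sheared))  # large deviation: quality has degraded
```

Automatic re-meshing schemes monitor metrics like this and rebuild the mesh on the deformed shape once quality drops below a threshold.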
The use of FEA to design ‘optimal’ components has been around for nearly two decades. In general terms it works by meshing an available volume for a part and then iteratively eating away at the space, leaving just those parts of the mesh that are doing work while aiming at a target mass, as in the examples below.
Used ‘raw’, this method can easily produce un-manufacturable designs, so software developers have invested much effort in placing manufacturing constraints on the optimisation process – for example, eliminating voids or undercuts in moulded parts.
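The ‘eat away at the under-worked material’ loop can be sketched as a simple hard-kill scheme. This is a toy illustration on made-up element strain energies, not the algorithm any commercial optimiser uses: a real loop would re-run the FE solve after each removal pass to update the energies.

```python
# Toy hard-kill topology optimisation loop: iteratively remove the
# least-worked elements until a target mass fraction is reached.
def optimise(strain_energy, target_fraction, kill_per_pass=2):
    """Return the set of surviving element ids."""
    alive = set(strain_energy)
    target = max(1, int(target_fraction * len(strain_energy)))
    while len(alive) > target:
        # Rank surviving elements by how hard they are working...
        ranked = sorted(alive, key=lambda e: strain_energy[e])
        # ...and remove the laziest few, never dropping below target.
        for elem in ranked[:min(kill_per_pass, len(alive) - target)]:
            alive.remove(elem)
        # A real optimiser would re-solve the FE model here and
        # recompute strain_energy before the next pass.
    return alive

# Ten elements with hypothetical strain energies:
energy = {i: e for i, e in enumerate([5, 1, 9, 2, 8, 3, 7, 4, 6, 0])}
print(sorted(optimise(energy, target_fraction=0.4)))  # → [2, 4, 6, 8]
```

The surviving elements are exactly the hardest-working ones, which is the intuition behind the pictures of ‘bone-like’ optimised structures.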
Working with long-duration transient events in a finite element world can be extremely computationally expensive. If those events are very long – like the wheel hub forces over the lifespan of a vehicle – they become impossible to simulate directly. One technique to overcome this limitation is something called Random Loading, or Random Analysis.
If a time signal can be considered properly random then it can be transformed from the time domain to the frequency domain, where it is known as a Power Spectral Density plot, or PSD. A quick check of randomness is that any section of the time history transformed in this way should give the same outcome as the whole signal. These PSDs are best thought of as a statistical representation of the amount of energy in the signal as a function of frequency.
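The “any section looks like the whole” check can be demonstrated without computing a full PSD: by Parseval’s theorem the area under the PSD equals the mean-square value of the time signal, so comparing mean squares of a section and the whole is a quick stationarity sanity check. The signal below is synthetic Gaussian noise, purely for illustration.

```python
# Quick randomness/stationarity check: for a properly random signal,
# any section should carry the same statistics as the whole record.
# The mean-square value used here equals the area under the PSD
# (Parseval's theorem), so matching mean squares implies matching
# total PSD energy.
import random

random.seed(42)
signal = [random.gauss(0.0, 1.0) for _ in range(100_000)]

def mean_square(x):
    return sum(v * v for v in x) / len(x)

whole = mean_square(signal)
section = mean_square(signal[20_000:40_000])

# For stationary noise the two estimates agree closely.
print(abs(whole - section) < 0.1)
```

A signal with a drift or a one-off transient in it would fail this comparison, which is exactly why such events need different treatment from random loading.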
There are three key inputs to any finite element analysis process: the geometry represented as a mesh, the materials data and the loads. The validity of any decisions made from an analysis depends on capturing all three accurately.
Accurate loading can be difficult to obtain. Some years ago we were involved in a project where the loads were provided by an OEM through a chain of suppliers. The stress results on the assembly indicated a catastrophic failure, and no amount of tinkering with geometry and materials could go any way towards mitigating it.
Additive Manufacturing has been around in some form or other since the 1990s, when stereolithography (SLA) and selective laser sintering (SLS) techniques were used for rapid prototyping of components, but with little strength or physical integrity those parts had limited utility.
3D printing of plastic parts can now be done on the desktop with a printer costing hundreds of pounds, but printing metal parts fit for load-bearing applications has really come to the fore in the last few years, with focus from industry and government on it as a cost-effective, short-lead-time manufacturing technique.