Modelling Cracks the Easy Way
In my previous blog I talked about the advantages of automatic re-meshing in improving the accuracy and stability of rubber simulations. One advanced application of this capability that I did not touch upon is crack propagation.
In many industries it is sufficient to use your analysis to predict that a crack could initiate, and then to redesign the part to avoid this occurrence. In others, though, a crack may be identified during an in-service inspection, whereupon it becomes necessary to understand whether it will propagate under the applied loads, and how quickly, so that a replacement can be introduced in a timely manner.
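The post doesn't name a growth model, but a common engineering answer to the "how quickly" question is the Paris law, which relates crack growth per cycle to the stress intensity range. The sketch below integrates it numerically; the material constants and crack sizes are illustrative assumptions, not values from any particular application.

```python
import math

def cycles_to_grow(a0, ac, dsigma, C, m, Y=1.0, n_steps=10000):
    """Numerically integrate the Paris law da/dN = C * (dK)^m,
    with dK = Y * dsigma * sqrt(pi * a), from initial crack
    length a0 to critical length ac (fixed-step integration)."""
    da = (ac - a0) / n_steps
    N = 0.0
    a = a0
    for _ in range(n_steps):
        dK = Y * dsigma * math.sqrt(math.pi * a)
        N += da / (C * dK ** m)   # dN = da / (C * dK^m)
        a += da
    return N

# Illustrative values only: steel-like C and m, stress range in MPa,
# crack lengths in metres, so dK comes out in MPa*sqrt(m).
N = cycles_to_grow(a0=1e-3, ac=10e-3, dsigma=100.0, C=1e-11, m=3.0)
```

The output is the number of load cycles available before the crack reaches the critical size, which is exactly the quantity needed to schedule a replacement.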
Predicting crack growth in materials with finite elements can seem more art than science.
As an example, some codes require you to construct a very precise 'rosette' mesh at the crack tip.
A series of angular perturbations of the crack-tip node is then simulated to examine the energy released by extending the tip, on the assumption that the crack grows in the direction of greatest energy release.
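The "perturb and compare" step boils down to sampling the energy release rate over a set of candidate kink angles and taking the maximum. A minimal sketch of that selection, with a made-up smooth G(theta) profile standing in for the real finite element evaluations:

```python
import math

def growth_direction(energy_release, angles):
    """Pick the kink angle giving the maximum energy release rate,
    mirroring the 'perturb the tip and compare' procedure."""
    return max(angles, key=energy_release)

# Hypothetical G(theta): in reality each sample would come from an
# FE solution with the tip node perturbed by that angle; here we
# just use an invented profile peaking at theta = 0.3 rad.
def G(theta):
    return math.cos(theta - 0.3) ** 2

angles = [i * 0.05 - 1.0 for i in range(41)]   # -1.0 .. 1.0 rad
best = growth_direction(G, angles)             # angle the crack 'takes'
```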
Coping with Large Strain and Large Deformation in FEA
One of the challenges of analysing the performance of large strain materials like rubbers and synthetic elastomers is how the finite element mesh distorts as the part deforms. You may well start out with a lovely mesh in which every element meets your quality standards, but as the part distorts the element quality degrades until it gives you poor results, or even ends the analysis prematurely because of excessive distortion.
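One common way to flag this degradation is to check element Jacobians as the mesh deforms: an inverted or near-degenerate element is exactly what aborts a large-strain run. A small sketch for 2D quads (the node coordinates are made-up examples):

```python
def quad_corner_jacobians(xy):
    """Signed corner Jacobians of a 4-node quad (nodes in CCW order).
    Negative or near-zero values flag the inverted or degenerate
    elements that can prematurely end a large-strain analysis."""
    jac = []
    for i in range(4):
        x0, y0 = xy[i]
        x1, y1 = xy[(i + 1) % 4]   # next node round the quad
        x3, y3 = xy[(i - 1) % 4]   # previous node
        # cross product of the two edge vectors leaving node i
        jac.append((x1 - x0) * (y3 - y0) - (y1 - y0) * (x3 - x0))
    return jac

square = [(0, 0), (1, 0), (1, 1), (0, 1)]        # healthy element
warped = [(0, 0), (1, 0), (0.1, 0.1), (0, 1)]    # badly distorted

ok = all(j > 0 for j in quad_corner_jacobians(square))
bad = any(j <= 0 for j in quad_corner_jacobians(warped))
```

Automatic re-meshing schemes effectively monitor measures like this and rebuild the mesh before any element crosses the threshold.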
This is not an uncommon problem.
In a previous article I discussed using random analysis to predict failure of components in a vibration environment. Random analysis is a quick way of ensuring that statistically the maximum stress due to a vibration loading will not exceed a set level, but the most common mode of failure in such an environment is not due to one single load spike but due to the summation of damage from all the load cycles – known as fatigue failure.
Classically, fatigue analysis for a transient load was performed quasi-statically. A known load history was combined with stresses from a unit load in the FEA model to create a stress time history. Rainflow cycle counting was used to extract the stress cycles, damage was then calculated from these using classical theories such as Goodman's, and the results were summed using Miner's rule. This method had its problems, though.
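The Goodman-plus-Miner step can be sketched in a few lines. This assumes the rainflow counting has already been done (each bin is an amplitude, mean, count triple), and the S-N curve constants below are illustrative placeholders, not data for any real material:

```python
def goodman_equivalent(s_amp, s_mean, s_ult):
    """Goodman mean-stress correction: equivalent fully reversed amplitude."""
    return s_amp / (1.0 - s_mean / s_ult)

def cycles_to_failure(s_eq, sf=900.0, b=-0.1):
    """Basquin-type S-N curve N = (s_eq / sf)^(1/b).
    sf and b are made-up material constants for illustration."""
    return (s_eq / sf) ** (1.0 / b)

def miner_damage(counted_cycles, s_ult=600.0):
    """Sum damage n_i / N_i over rainflow-counted (amp, mean, count) bins."""
    damage = 0.0
    for s_amp, s_mean, n in counted_cycles:
        s_eq = goodman_equivalent(s_amp, s_mean, s_ult)
        damage += n / cycles_to_failure(s_eq)
    return damage

# (amplitude MPa, mean MPa, count) bins as a rainflow counter might emit
bins = [(200.0, 50.0, 1e4), (100.0, 0.0, 1e6)]
D = miner_damage(bins)   # Miner's rule predicts failure as D approaches 1
```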
The use of FEA to design ‘optimal’ components has been around for nearly two decades. In general terms it works by meshing an available volume for a part and then eating away at the space iteratively to leave just those bits of the mesh that are doing work while aiming at a target mass for the part, as in the examples below.
Using this method ‘raw’ it is easy to see how un-manufacturable designs can result, so much effort has been invested by software developers to place manufacturing constraints on the optimisation process to, for example, eliminate voids or undercuts in moulded parts.
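The "eating away at the space" loop can be caricatured in a few lines. The sketch below removes the least-worked elements until a target mass fraction remains; the strain-energy values are invented, and a real optimiser would re-solve the FE model after every removal rather than rank once:

```python
def eat_away(strain_energy, target_fraction, removal_per_step=0.05):
    """Toy evolutionary-style topology loop: iteratively drop the
    elements doing the least work until only target_fraction of the
    mesh remains. strain_energy maps element id -> energy measure."""
    alive = dict(strain_energy)
    target = max(1, int(len(strain_energy) * target_fraction))
    while len(alive) > target:
        k = max(1, int(len(alive) * removal_per_step))
        k = min(k, len(alive) - target)          # don't overshoot
        for eid in sorted(alive, key=alive.get)[:k]:
            del alive[eid]                        # remove least-loaded
    return set(alive)

energies = {i: (i % 10) + 1 for i in range(100)}  # made-up energy field
kept = eat_away(energies, target_fraction=0.3)
```

It is easy to see from a loop like this why manufacturing constraints are needed: nothing here stops the surviving elements forming voids or undercuts.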
In an earlier article I talked in general terms about the benefits of using CAE tools to model the physics of manufacturing processes. In this article I will show a case study example using welding simulation to decide on the best strategy for welding a two-part swing arm together.
The objective is to compare two different clamping schemes with a third iteration that uses tack welds to hold the parts together.
The two parts were made of carbon steel and welded along a length of roughly 80 mm using arc welding, with an estimated energy input of around 2,000 J per centimetre.
The simulation tool used for this analysis uses a state-of-the-art multi-physics finite element program as the underlying solver, but has a custom user interface that speaks to the manufacturing engineer about their process flow, not about the minutiae of setting up an FEA job.
Working with long duration transient events in a finite element world can be extremely computationally expensive. If those events are very long, like the wheel hub forces over the lifespan of a vehicle, then simulating them directly is impractical. One technique to overcome this limitation is to use something called Random Loading, or Random Analysis.
If a time signal can be considered properly random, it can be transformed from the time domain to the frequency domain; the result is known as a Power Spectral Density plot, or PSD. A quick check of randomness is that any section of the time history, transformed in this way, should give the same outcome as the whole signal. A PSD is best thought of as a statistical representation of the amount of energy in the signal as a function of frequency.
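A minimal PSD estimate needs nothing beyond an FFT. The sketch below builds a one-sided periodogram with NumPy and confirms the key property that makes a PSD a faithful energy representation: integrating it over frequency recovers the signal's mean-square value (Parseval's theorem). The 50 Hz tone and noise level are arbitrary test inputs:

```python
import numpy as np

def one_sided_psd(x, fs):
    """One-sided periodogram estimate of the PSD (units^2 per Hz).
    Integrating it over frequency recovers the signal's mean-square."""
    n = len(x)
    X = np.fft.rfft(x)
    psd = (np.abs(X) ** 2) / (n * fs)
    psd[1:-1] *= 2.0                     # fold in negative frequencies (n even)
    f = np.fft.rfftfreq(n, d=1.0 / fs)
    return f, psd

fs = 1024.0
t = np.arange(8192) / fs
rng = np.random.default_rng(0)
x = np.sin(2 * np.pi * 50 * t) + 0.3 * rng.standard_normal(t.size)

f, psd = one_sided_psd(x, fs)
df = f[1] - f[0]
mean_square = psd.sum() * df             # should equal np.mean(x**2)
```

The randomness check described above amounts to running `one_sided_psd` on any slice of the history and verifying the spectrum (after suitable averaging) matches that of the whole record.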
There are three key inputs to any finite element analysis process: the geometry represented as a mesh, the material data, and the loads. The validity of the decisions you make from an analysis depends on capturing all three accurately.
Accurate loading can be difficult to obtain. Some years ago we were involved in a project where the loads were provided by an OEM through a chain of suppliers. The stress results on the assembly indicated a catastrophic failure and no amount of tinkering with geometry and materials could go any way towards mitigating it.
Additive Manufacturing has been around in some form or other since the 1990s, when stereolithography (SLA) and selective laser sintering (SLS) techniques were used for rapid prototyping of components, but with little strength or physical integrity those parts had limited utility.
3D printing of plastic parts can now be achieved on the desktop with a printer costing hundreds of pounds, but printing metal parts that could be used for load bearing applications has really come to the fore in the last few years with focus from industry and government on it as a cost-effective and short lead time manufacturing technique.
Delamination is one of the critical failure modes in the design of laminates and any bonded joint. Modelling it with simulation software is possible, but can be very fiddly and labour intensive in many finite element codes. It is often necessary to pre-determine where delamination will occur and then create interface elements that capture the cohesive behaviour of the bond. Doing this manually within a GUI environment is not fun.
The Marc finite element solver, from MSC Software, makes this very easy by having the solver detect where delamination is likely to occur between distinct materials (or within a homogeneous material) and then create the interface elements itself. This represents a massive saving in time for the FEA engineer creating this type of model.
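The interface elements these codes insert carry a cohesive traction-separation law. A commonly used shape is the bilinear law sketched below; this is a generic illustration with invented numbers, not Marc's specific implementation:

```python
def bilinear_traction(delta, delta0=0.01, delta_f=0.1, t_max=30.0):
    """Bilinear cohesive traction-separation law (illustrative units):
    linear ramp to peak traction t_max at opening delta0, then linear
    softening to zero traction at delta_f (fully delaminated)."""
    if delta <= 0.0:
        return 0.0
    if delta <= delta0:
        return t_max * delta / delta0           # elastic ramp
    if delta < delta_f:
        return t_max * (delta_f - delta) / (delta_f - delta0)  # softening
    return 0.0                                  # interface fully failed

# The fracture toughness is the area under the curve: 0.5 * t_max * delta_f
G_c = 0.5 * 30.0 * 0.1
```

In practice the engineer supplies the peak traction and toughness from bond tests, and the solver manages where and when each interface element softens.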
In the last simulation blog the benefits of flexible body dynamics simulation were introduced.
The method used to represent a component flexibly requires an FEA solution that captures the vibration characteristics (normal modes) and the connectivity stiffness between the interface points (constraint modes). These linear modes are used to represent any state of the component by a linear superposition. But what if your component doesn't behave linearly?
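The superposition idea is easy to demonstrate on a toy model. Below, a 3-DOF spring-mass chain stands in for a reduced component: its full set of normal modes is computed, a static load is expanded in modal coordinates, and the superposition exactly reproduces the direct solution. The stiffness value is an arbitrary assumption. This exactness is precisely what is lost when the component behaves nonlinearly:

```python
import numpy as np

# 3-DOF spring-mass chain (unit masses), fixed at one end; k is illustrative
k = 1000.0
K = k * np.array([[ 2.0, -1.0,  0.0],
                  [-1.0,  2.0, -1.0],
                  [ 0.0, -1.0,  1.0]])
M = np.eye(3)   # unit masses, so the eigenproblem is just K phi = w^2 phi

# Normal modes: eigenvalues are the squared natural frequencies
w2, Phi = np.linalg.eigh(K)

# Any response is a superposition u = Phi @ q; for a static tip load F,
# the modal coordinates are q_i = (phi_i . F) / w_i^2
F = np.array([0.0, 0.0, 1.0])
q = (Phi.T @ F) / w2
u_modal = Phi @ q

u_direct = np.linalg.solve(K, F)   # matches the modal sum when all modes are kept
```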