I just thought I'd nick this quote from solar scientist Douglas Hoyt on another blog, as it is a succinct summary of what can go wrong with models that have too many adjustable and vague parameters and that rely on hindcasts for validation - something I touched on before with respect to fatigue calculations.

"The climate modelers introduced a large upward trend in global aerosols because, without them, their models ran too hot, predicting a global warming of circa 2C in the 20th century, as opposed to the observed 0.6C warming. As I have pointed out, there is no evidence that the claimed global trend in aerosols existed. At best there were a few regional aerosol clouds covering less than 1% of the globe. The proper solution to their problem would have been to lower the climate sensitivity to 1C or less. In fact, Lindzen has a convincing paper out recently showing the climate sensitivity is about 0.6C for a CO2 doubling. The scientific solution to the problem: no large global trend in aerosols and low climate sensitivity. The political “solution” is: unsupported claims of large aerosol increases, which allows the fiction of a high climate sensitivity to be maintained, leading to alarming and false predictions of catastrophic future warming." Douglas Hoyt

I added: "I wonder if a 3rd party review might have fixed it. There are times when you get a weird modeling result and you can't find the problem, so you rationalize it or add a fiddle factor. Only later do you see where the mistake was. Also, sometimes throwing money at a group to investigate a problem can fail due to an overriding need to justify the money and claim more of it."

There's no conspiracy here - just hubris, group-think and self-preservation. Normal science, in fact. Just to be fair, I'll pick on a few other fields later.

Update: Here is a quote from Lindzen on modelling where he describes the 0.5C sensitivity result:

////////beginning of extract

"IPCC ‘Consensus.’ It is likely that most of the warming over the past 50 years is due to man’s emissions. How was this arrived at? What was done was to take a large number of models that could not reasonably simulate known patterns of natural behavior (such as ENSO, the Pacific Decadal Oscillation, the Atlantic Multidecadal Oscillation), claim that such models nonetheless accurately depicted natural internal climate variability, and use the fact that these models could not replicate the warming episode from the mid seventies through the mid nineties to argue that forcing was necessary and that the forcing must have been due to man. The argument makes arguments in support of intelligent design sound rigorous by comparison. It constitutes a rejection of scientific logic, while widely put forward as being ‘demanded’ by science. Equally ironic, the fact that the global mean temperature anomaly ceased increasing by the mid nineties is acknowledged by modeling groups as contradicting the main underlying assumption of the so-called attribution argument (Smith et al, 2007, Keenlyside et al, 2008, Lateef, 2009). Yet the iconic statement continues to be repeated as authoritative gospel, and as implying catastrophe. Now, all projections of dangerous impacts hinge on climate sensitivity. (To be sure, the projections of catastrophe also depend on many factors besides warming itself.) Embarrassingly, the estimates of the equilibrium response to a doubling of CO2 have basically remained unchanged since 1979. They are that models project a sensitivity of 1.5-5C.
Is simply running models the way to determine this? Why hasn’t the uncertainty diminished? There follows a much more rigorous determination using physics and satellite data.

We have a 16-year (1985–1999) record of the earth radiation budget from the Earth Radiation Budget Experiment (ERBE; Barkstrom 1984) nonscanner edition 3 dataset. This is the only stable long-term climate dataset based on broadband flux measurements and was recently altitude-corrected (Wong et al. 2006). Since 1999, the ERBE instrument has been replaced by the better CERES instrument. From the ERBE/CERES monthly data, we calculated anomalies of LW-emitted, SW-reflected, and the total outgoing fluxes. We also have a record of sea surface temperature for the same period from the National Center for Environmental Prediction. Finally, we have the IPCC model calculated radiation budget for models forced by observed sea surface temperature from the Atmospheric Model Intercomparison Program at the Lawrence Livermore Laboratory of the DOE.

The idea now is to take fluxes observed by satellite and produced by models forced by observed sea surface temperatures, and see how these fluxes change with fluctuations in sea surface temperature. This allows us to evaluate the feedback factor. Remember, we are ultimately talking about the greenhouse effect. It is generally agreed that doubling CO2 alone will cause about 1C warming due to the fact that it acts as a ‘blanket.’ Model projections of greater warming absolutely depend on positive feedbacks from water vapor and clouds that will add to the ‘blanket’ – reducing the net cooling of the climate system. We see that for models, the uncertainty in radiative fluxes makes it impossible to pin down the precise sensitivity because they are so close to unstable ‘regeneration.’ This, however, is not the case for the actual climate system, where the sensitivity is about 0.5C for a doubling of CO2. From the brief SST record, we see that fluctuations of that magnitude occur all the time." Richard Lindzen, Professor of Atmospheric Sciences, MIT

/////////////////////end of extract

Now just to summarise:

1) It is universally agreed that the basic warming from a doubling of CO2 should theoretically be 1 degree C.

2) Model runs say it could be between 1 and 6 degrees C. The extra above the 1 degree comes from a supposed positive feedback, where evaporated water vapour from the sea adds to the CO2 'blanket'.

3) This extra warming was needed to match up the recent warming in the Hadley temperature graph with Hadley model output.

4) For this match-up, Hadley modellers had assumed that natural variability had minimal warming effect over the period of study, hence any remaining anomaly must be man-made warming. However, they couldn't actually model natural variability - they just pretended that they could.

5) Subsequent non-warming has been blamed on natural variability by those same Hadley scientists, which is an admission that the previous assumption had no foundation.

6) So Lindzen checks whether this supposed positive feedback from water vapour is present in the real, measured satellite data. He finds a sensitivity of 0.5C from real data, indicating that there must be a negative feedback, not a positive one. One might postulate that this is due to the formation of clouds (as Lindzen suggested and as Dr Roy Spencer has observations and published papers to back up).

That's what the real science tells us, i.e. comparison of theory with observations - remember that? It's how science used to work.
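To make the feedback arithmetic in that summary concrete, here is a minimal sketch using the standard amplification relation dT = dT0/(1 - f), with the 1C no-feedback figure quoted above; the example feedback values are purely illustrative and not taken from Lindzen's paper.

```python
# Minimal sketch of how a feedback factor f maps to climate sensitivity,
# using the standard amplification relation dT = dT0 / (1 - f).
# The 1.0 C no-feedback response is the figure quoted in the post;
# the example f values are illustrative only.

def sensitivity(dT0, f):
    """Equilibrium warming for a no-feedback response dT0 and feedback factor f (f < 1)."""
    return dT0 / (1.0 - f)

def feedback_factor(dT0, dT):
    """Invert the relation: what f is implied by an observed sensitivity dT?"""
    return 1.0 - dT0 / dT

dT0 = 1.0  # C per CO2 doubling with no feedbacks, as stated in the post

for f in (-1.0, 0.0, 0.5, 0.8):
    print(f"f = {f:+.1f}  ->  sensitivity = {sensitivity(dT0, f):.2f} C")

# Working backwards: a measured sensitivity of 0.5 C implies f = -1,
# i.e. a net negative feedback; 3 C would imply f of about +0.67.
for dT in (0.5, 3.0):
    print(f"sensitivity = {dT} C  ->  implied f = {feedback_factor(dT0, dT):+.2f}")
```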
It's not true to say that the models couldn't achieve the same result. In fact they could - all they need to do is adjust the natural variation to reflect real-world observations. However, the latter number implicitly assumes that the extra warming is from CO2 and is not actually another long-term natural trend. This is fair, but not necessarily correct, because there is a well-known descent into a "Little Ice Age", at least in Northern Europe, which has not been adequately explained. If that cooling was natural and we started from that natural low point, then the heating can be natural too. That said, Northern Europe is quite tiny, so it's politically correct and probably sensible to assume that man is warming the planet by a small amount. In any event the planet has warmed by a mostly natural 0.4 degrees C since 1950, the IPCC cutoff date. So what are the implications? For a future post.
Sunday, October 25, 2009
Perils of modelling: initial assumptions 1
Posted by jgdes at 6:17 AM | 0 comments
Saturday, October 24, 2009
Where do we go from here?
Let's not get melancholy about what should have been done or about the grave injustices of the system. The revolution isn't coming to replace the stupid with the wise, so we need to learn to adapt to continued stupidity. In times like this we really need to know where things are going so we can plan ahead. My only qualification for this is that I managed to predict the crash was coming (and warned everyone I possibly could, some of whom are even grateful), which puts me well ahead of the world's PhD economists who praise the god of the invisible hand, and also seriously well ahead of Alan Greenspan, that latter-day Oracle of Delphi whose inscrutable pronouncements were enough to move entire markets.

Mind you, I didn't manage to see then how exactly to prepare for it, apart from moving to a country that didn't see the boom in the first place. The plan was to hole up in this safe haven until other real estate markets corrected to a sensible level and then pick up a bargain. I really didn't expect France to nosedive as well, because it had laws to prevent bank speculation. Those sneaky bankers got around the laws that were meant to protect them. Never underestimate the stupidity of the greedy! Happily it was only a few of the big banks. Unhappily they still don't quite realise that the first role of a bank is to lend money to small businesses, not to gamble with their savers' money. Sarko needs to step in here. From the rather pathetic selection of world leaders, he's likely the best man for the job.
Posted by jgdes at 8:36 AM | 0 comments
Monday, April 21, 2008
Check those initial assumptions!
I bought a science magazine (Science et Vie No. 102) last month with an article about a radical new ecofuel which would apparently deliver the benefits of a Diesel engine without running on Diesel fuel. It was being developed by engineers at Mercedes-Benz (the DiesOtto) and at Opel (the CAI engine). The idea? A fuel that can be compressed until it ignites, and so approach the efficiency of the Diesel engine - which, the journalist reported, is caused by better mixing - but without the attendant high temperatures and pressures which cause high NOx emissions. The new fuel could stand the higher pressures without pre-ignition but would not be as inert as traditional Diesel fuel, which needs much higher pressures to ignite.
At this point I hope I'm not alone in spotting the obvious error in thinking. The Diesel cycle is efficient because of those high pressures and temperatures, not because of better mixing. But remember, this is Mercedes talking here - they made the first Diesel engine - so what was the result? A table was produced, and this is the really good bit. They showed a 15% reduction in fuel consumption, a 15% reduction in hydrocarbon emissions, an 80% reduction in CO2 emissions, and an almost complete elimination of NOx emissions. But wait, let's revise those numbers from an engineering perspective rather than a marketing one and point out the bits that weren't mentioned. They reduced fuel use by 15% compared to SI engines, but their new engine is still less efficient than CI engines. This much was mentioned in the text but not in the headline graphic. So the laws of thermodynamics are safe after all. The hydrocarbon reduction would likewise be smaller than that of the Diesel engine. With the emissions figures the marketing men were even trickier. The NOx reductions were impressive, except that this time they were comparing emission levels against Diesel engines, not petrol, which would likely have the same level of NOx emissions as the new engine. Worse still, the stated reduction in CO2 seems utterly incredible unless the laws of chemistry are to be broken too: the amount of CO2 out is directly tied to the amount of fuel in, so they can't be comparing that result with either engine. The only way to reduce carbon dioxide for a given amount of fuel is to increase carbon monoxide - its poisonous cousin - and soot. I doubt that is what they did, though, because that would be the very definition of an inefficient engine. So the number must be a convenient error in transcription, or they are capturing carbon in some other unmentioned way.
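To see why the 80% CO2 figure cannot follow from a 15% fuel saving, here is a rough back-of-the-envelope check, assuming complete combustion and using octane as a stand-in for petrol; the 15% and 80% figures are the ones quoted in the article, everything else is basic stoichiometry.

```python
# Back-of-the-envelope check: CO2 emitted scales directly with fuel burned.
# Assumes complete combustion of octane (C8H18) as a stand-in for petrol:
#   C8H18 + 12.5 O2 -> 8 CO2 + 9 H2O

M_OCTANE = 8 * 12.011 + 18 * 1.008   # g/mol, about 114.2
M_CO2 = 12.011 + 2 * 15.999          # g/mol, about 44.0

co2_per_kg_fuel = 8 * M_CO2 / M_OCTANE   # kg CO2 per kg fuel, about 3.08

fuel_baseline = 1.0                  # arbitrary unit of fuel for the old engine
fuel_new = 0.85 * fuel_baseline      # the claimed 15% fuel saving

co2_baseline = fuel_baseline * co2_per_kg_fuel
co2_new = fuel_new * co2_per_kg_fuel

print(f"kg CO2 per kg fuel: {co2_per_kg_fuel:.2f}")
print(f"CO2 reduction implied by a 15% fuel saving: "
      f"{100 * (1 - co2_new / co2_baseline):.0f}%   (the article claimed 80%)")
```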
So what we are left with, after spending several million euros, is a 15% reduction in fuel use - achieved with a new fuel which may very well cost more than 15% extra to make and buy. Actually, if they limited their top speed they could have achieved the same result for zero cost. Or they could have reduced the total weight, as the Loremo car designers did to achieve 150 mpg. I find it difficult to know what to make of this saga. Did these auto engineers really know nothing about the meaning of the Diesel and Otto cycles? Are they just paying lip service to fuel efficiency, trying to falsely show us that it's not really that easy? Is it just another demonstration of the many easy ways that dumb ideas can take root? Or was it poor journalism, and the actual aims of these companies were not really as high as reported?
Update: On second thoughts, despite falling well short of Diesel-engine efficiency, I have to admit that getting rid of spark plugs and removing throttling losses is, after all, a good idea. Since ignition problems account for pretty much 80% of all IC engine problems, they have made the engine more reliable while retaining the acceleration of petrol/gasoline and avoiding the NOx emissions of Diesel. Nevertheless, the PR men who advised the journalist concerned crossed well over the line of truthfulness - especially regarding the extent of the CO2 savings.
Posted by jgdes at 7:20 AM | 0 comments
Saturday, February 02, 2008
Solid Modeling Software
Updated 6th March 08 because the price of CADMAI has increased for the new version 3.
FEMdesigner works best via its import iges command. The other model-creation methods are a little out of date by now, though perhaps still OK for teaching purposes. So the next question you may ask is: which solid modeler should I use? My own criterion is basically "power without the price" (as the old Atari slogan went). The one I've been using for day-to-day work on prototypes is CADMAI, which costs a very reasonable 499 euros and (so far) includes all upgrades and email user support. It is a very easy program to use. In fact I don't think I've needed to read the manual yet. My one regret is that the iges models produced from it don't seem to be very compatible with FEMdesigner. However, very soon we will have fully integrated FEMdesigner with CADMAI, so there will be no need to switch programs; the total package price will be around the same because CADMAI already has an FEM mesher. Meanwhile I've discovered a better alternative called "Moment of Inspiration", or MOI for short, which has the most fantastic user interface and is even easier to use. Better still, its iges models import perfectly. Even better still, it is only 138 euros, which is well within anyone's budget. Lastly there is the free solid modeler called Alibre Xpress, which I haven't used myself yet, but one of my users reports that its iges output works well with FEMdesigner. Of course, if you already have the thinkdesign software then the Stressout plugin is clearly the way to go, since it is fully integrated into your thinkdesign environment.
Once you have your solid modeling tool you really won't want to do FE modeling any other way. Say you have tested your part in FEMdesigner and you want to alter a fillet or change a dimension: you just make the changes and re-import the model. If the changes haven't been too drastic then your old loads and restraints will still be valid for the new shape.
Posted by jgdes at 9:11 AM | 0 comments
Labels: budget solid modeler, CADMAI, import iges, MOI, Solid modelling
Friday, January 25, 2008
Simple is efficient
This paradigm applies in many spheres. I've just read a blog rant from Steve Yegge about how he is among the small minority who think that code bloat is a major problem. So I'm not alone! Yes, it's true: people actually boast about the number of lines of code they have written. I well remember a Southampton lecturer doing that very thing while describing his Fortran monster of an optimising fluid-structure code. I really found it difficult to decide what expression to show him, as a sneer would have been too impolite. However, he must have been used to people falling over with delight when he described his code bloat, so my passive face made him leave shortly after. I hadn't cooed over his baby, I guess. Actually, maybe he was psychic, because I was thinking at the time: "if you can't even see the need to optimise your code, then what the heck do you know about optimisation?"
Though all engineers are really amateurs at code churning, the professional programmers are just as lame on this point. I remember being told that programmers used to be paid per line of code written, so maybe it's the old survival instinct. Anyway, if you went into any programmers' forum a few years ago and said you wanted to use mainly C rather than C++ to avoid code bloat, you were treated as a complete idiot. The standard responses were roughly:
a) A programmer's time is much more costly than that extra RAM or processing power.
b) Do you want to go back to using a Sinclair Spectrum or a Citroen 2CV instead of using modern tools?
c) Don't you realise that object orientation, data-hiding, multiple inheritance, blah, blah... pick your buzzword... is essential to code readability, security, reuse, sharing, blah, blah, blah.
Of course it's funny that the same people have now brought the same arguments to bear, en masse, on C#. For example, you say: "I don't want my users to have to download 60 megs of crap just to use my software, I'll stick to C++ thank you", and these same guys who had previously hurled abuse at you for daring not to drink the wondrous elixir of C++ now apparently find that C++ is a veritable pile of steaming dung compared to the new religion of C#. Fashions eh!
In the same vein I lament the death of the Atari ST, which started instantaneously just like in the films (but not real life). An operating system in ROM - what a great idea. Like the QL, it was doomed by the rise of the Amstrad and other clones. Another triumph of mediocrity over elegance! There was even an alternative pre-emptive multitasking operating system called SMS2, based on Sinclair's QDOS, which actually fitted in a 250k pluggable ROM cartridge - I bet it would still give most operating systems a run for their money. And why can't we even now have our operating system in ROM or RAM? I have 1 Gig of fast memory in my pocket, the size of a postage stamp, and it even plugs into the computer, but I still cannot store an operating system on it to override Windows. Too simple an idea?
Anyway, the same paradigm applies to engineering design, and this time I'm paying no heed to the fashionistas. Fewer parts mean less maintenance, less chance of a fault and more chance of the thing working in the first place. When applied to the motor car this obviously means the best engine is no engine at all. Well, OK, I've never liked the conventional IC engine: it's like one big collection of kludges and bad ideas wrapped together in a needlessly complex and self-defeating system. Controversial, but I'll explain what I mean in a future blog article. The rise of battery and fuel-cell cars is at last coming - 100 years late, but it's coming, and unbelievably I'm involved in it. More of that later too.
Posted by jgdes at 2:36 PM | 0 comments
Labels: Amstrad, Atari ST, C++, code bloat, engineering design, optimisation, professional programmers, Steve Yegge
Friday, January 04, 2008
Designing against fatigue failure
I had a useful comment on my previous fatigue post to the effect that, regardless of how inadequate the traditional methods are, the commenter - an expert in failure analysis - had never seen failures in components designed using these methods. Failures happen when no actual fatigue calculations are performed, or when over-reliance is placed on fatigue crack growth calculations. As a designer who has also used these methods extensively and not had any failures either, I can endorse that observation, and it is an important point to make. However, it raises the question: how do we get the right answer from poor methods? That is worthy of a new post, especially since it's been over a year since the last :)
That the designer is aware fatigue may be a problem is really half the battle. This awareness may come by accident - that is, from having had parts fail during testing - or from being a better engineer in the first place. I have lamented that many design engineers actually don't have that much savvy. My experiences in industry indeed suggest that many engineers actively try to forget whatever they learned at university, and others seem to have gained a degree just by turning up, because they often don't demonstrate enough intelligence to design a box. However, and fortunately, most if not all engineers are honest people and are more than willing to admit their deficiencies. The ones who don't are usually pushed into management, where lying is considered a positive asset, or into paper-shuffling jobs where they cannot do any real harm. Those honest engineers will always go to more experienced or cleverer individuals for advice and guidance. That is indeed one of the really nice things about engineers - they are never too proud to ask for help, because everyone is aware that being wrong can cost lives. So, getting back to the point, how do we produce good results - parts that don't break - from quite poor methods?
If you are aware that fatigue failure may be a problem, then you know that sharp discontinuities are where it will occur, so you always use generous radii and long tapers. That insight saves many designs, especially in weaker materials like plastics. In fact, in cast parts you often have to do this anyway just to make manufacture possible. The next thing a savvy designer will do is find the local stresses by FE analysis. If the stresses are too high then you are immediately forced to redesign the component, either by redistributing the load over a wider area or by stiffening up the area concerned. This is the second fallback. Hence, by the time you come around to doing the fatigue assessment, you have already eliminated many of the problems which would cause fatigue. Lastly, a savvy designer will always try to use steel. It isn't just that it is cheaper; it is also very forgiving of bad designs because it will plastically flow to redistribute the high stresses. Many engineering materials don't have that property, so you will find a high proportion of unexpected failures occur in more exotic materials. Steel's plasticity automatically gives you a safety factor of up to 50%. Et voila, this is how so many parts fail to fail: not because of the methods themselves but because of the savvy and experience of the lead design engineer.
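To illustrate why generous radii matter, here is a minimal sketch using the classical Inglis-type approximation Kt = 1 + 2*sqrt(a/rho) for a notch of half-depth a and root radius rho; it is an idealised 2D result, not a substitute for a Peterson chart or an FE run, and the dimensions are invented.

```python
# Minimal sketch: stress concentration at an elliptical notch, using the
# classical Inglis-type approximation Kt = 1 + 2*sqrt(a/rho), where
# a is the notch half-depth and rho the root radius. Idealised 2D result,
# shown only to illustrate why generous radii pay off. Dimensions invented.

import math

def kt_elliptical_notch(a, rho):
    """Approximate stress concentration factor for notch half-depth a, root radius rho."""
    return 1.0 + 2.0 * math.sqrt(a / rho)

a = 5.0  # notch half-depth in mm (illustrative)
for rho in (0.1, 0.5, 1.0, 2.0, 5.0):
    print(f"root radius {rho:4.1f} mm  ->  Kt ~ {kt_elliptical_notch(a, rho):.1f}")

# Doubling the root radius cuts the *extra* stress (Kt - 1) by roughly 30%,
# which is why the first instinct of a savvy designer is a bigger fillet.
```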
The big fatigue problems usually stem from welded connections. Either there are fillet welds, where the crack is inherent, or there are inclusions or porosity. The last two can be spotted by X-ray or ultrasound (and then repaired), and the first has a well-established fatigue analysis procedure originating from the UK Welding Institute and the US Welding Research Council. In these procedures every type of weld joint has been extensively tested and charts have been produced for different weld categories, steel types and heat-treatment conditions. It is really a marvelous body of work which has undoubtedly saved many engineering structures from failing. If you follow these procedures you simply cannot go wrong. However, they are only for steel structures. Aluminium and its alloys, which are being used more frequently, need the same treatment, but I believe they are on the case.
Posted by jgdes at 5:00 AM | 0 comments
Labels: design, engineering, fatigue, fatigue analysis, FE analysis, FEA, FEM, femdesigner
Thursday, September 28, 2006
Great Design: Money versus Savvy
I love to see great design. I love small and efficient design and I just hate waste in any form. One engineering concept that shows the striking difference between good design and bad design is traditional manned space travel versus the new Virgin-backed concept. With the Saturn V we had enormous rockets on a launch pad, a design based on dumping big chunks of metal in space and expending huge quantities of fuel. These were necessary because of the problems of launch and re-entry. The launch idea was based on the launch of the unmanned V2 rockets in WW2, mainly because the designer of both was Wernher von Braun. But manned spaceflight doesn't actually need a launch pad - it has people to guide it. Take away the launch pad and piggy-back the rocket on a standard aircraft and much less fuel is needed, and no parts need to be thrown away. The re-entry idea was even better! The amount of money NASA wasted on a badly designed reusable shuttle should be enough to disband NASA. Worse still, they actually blamed the shuttle disasters on a lack of funding. Eh? In fact the shuttle, and the rockets before it, were designed by an enormous complex of people with PhDs but no imagination. The main problems with the shuttle are the enormous launch boosters and the heat of re-entry on the fixed-wing design. But we already knew that a dropped capsule worked just fine without all these hugely expensive thermal tiles, so why not have a movable-wing design, just drop back through the atmosphere, and then glide again once we've dropped? Totally brilliant in its simplicity, and the result of just one inspired designer with a background in flight. What are the lessons learned here?
1. Unlimited money leads to an expensive design. Limiting money forces better design.
2. One person with savvy is worth more than several thousand Phd scientists.
3. You probably need experience of flight if you are going to design flying machines.
On a related note, you may have noticed that Western weapons technology is generally outperformed by Russian technology. In fact the West is usually playing catch-up in capability terms (MiG aircraft, T-34 tanks in WW2, Sunburn missiles, etc.). Of course the Western stuff costs billions and the Russian stuff costs peanuts. Western companies are much better at marketing than technology, it seems. A glaring example is the Patriot missile, whose projected 99% success rate was discovered to be in reality about 0%. There are many, many other examples of this type of unjustified hype! However, now that the cold war is over and the Russians are clearly our friends, perhaps someone should suggest that the West could save a fortune by buying its weapons from Russia. Just a thought, in case anyone was wondering how to fund all these retiring baby boomers.
Posted by jgdes at 5:32 AM | 0 comments
Labels: design, FE analysis, FEA, FEM, femdesigner, NASA, rockets, saturn
Tuesday, September 26, 2006
Fatigue Stress Assessment
Introduction
Resistance to fatigue was long thought to be a property of the metal, because some metals react worse than others. Steel is good against fatigue; titanium is not. Titanium, though, is preferred by engineers because it is lighter and more exotic. So much for designing against fatigue failure! Lucky for us punters, titanium is too scarce.
Total Life Fatigue Design
Fatigue, we are constantly reminded, is responsible for about 80% of failures. Straight from the department of guesswork - or should I say from the sales department. All we really know is that all structures are dynamic and undergo cycling, so unexplained failures are probably caused by a fatigue mechanism. Just as well we have well-defined and long-standing techniques for calculating the onset of fatigue. Ho ho ho! Let us recap. Actually, the reason that fatigue is blamed so often is that it is the most likely miscalculation in the design path. Yes, I know the software salesmen tell you that their fatigue software can predict failure to incredible accuracy. As you will see, though, when you know the end result it's very easy to fiddle the in-between calculations to get there. In fact fatigue calculations are an endless series of fiddle factors.
Try this one: Kf = Kt*Ks*Ke*Km*K...
Kf is the fatigue strength reduction factor, introduced because the mathematics of the Kt calculation doesn't actually match real life. So you need a material factor, an environment factor, a shape factor, a temperature factor, etc. This equation alone is enough to invalidate your results.
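To show how quickly these multiplicative corrections compound, here is a minimal sketch of the kind of factor-stacking described above; the individual factor values are purely illustrative and not taken from any handbook.

```python
# Minimal sketch of how multiplicative correction factors stack up in a
# fatigue strength reduction estimate. All factor values below are
# purely illustrative - the point is the compounding, not the numbers.

Kt = 2.5          # geometric stress concentration factor (from a chart)
factors = {
    "surface finish": 0.85,
    "size":           0.90,
    "environment":    0.80,
    "temperature":    0.95,
    "reliability":    0.87,
}

combined = 1.0
for name, k in factors.items():
    combined *= k

# Effective reduction applied to the endurance limit (or, equivalently,
# inflation applied to the local stress): each guess compounds the others.
print(f"combined correction factor: {combined:.2f}")
print(f"effective Kf if applied on top of Kt={Kt}: {Kt / combined:.1f}")

# A +/-10% uncertainty on each of five factors alone spans roughly
# 0.9**5 to 1.1**5, i.e. about 0.59x to 1.61x - nearly a 3x spread in the answer.
```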
Try this one too - Miner's cumulative damage rule:
n1/N1 + n2/N2 + n3/N3 + ... <= 1.0
Where n is the number of cycles applied in each cyclic load condition and N is the corresponding number of cycles to failure. Simple and universally used but, unfortunately, it doesn't compare well with real-life situations. Instead of 1 you can substitute C. Wikipedia says "C is experimentally found to be between 0.7 and 2.2. Usually for design purposes, C is assumed to be 1", which Miner suggested on the basis of logic. I have seen data, though, which suggests C can be as low as 0.1. Ergo this calculation is thoroughly useless. Furthermore, N has to be adjusted for temperature by yet another fiddle factor. Why do we use this formula, you may ask? Because no one has come up with anything better. There is really no mystery about why it doesn't work: it ignores the effect of a previous cycle on the next cycle, and it assumes uniaxial loading, which only ever happens in a laboratory tensile test.
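For concreteness, here is a minimal sketch of the rule as written above; the load blocks and S-N lives are invented for illustration.

```python
# Minimal sketch of Miner's cumulative damage rule as written above:
#   D = sum(n_i / N_i), failure predicted when D reaches C (usually taken as 1).
# The load blocks and S-N lives below are invented for illustration.

blocks = [
    # (applied cycles n_i, cycles-to-failure N_i at that stress level)
    (1_000_000, 5_000_000),
    (  200_000,   800_000),
    (    5_000,    40_000),
]

damage = sum(n / N for n, N in blocks)
print(f"Miner damage sum D = {damage:.3f}")

for C in (1.0, 0.7, 0.1):
    verdict = "fails" if damage >= C else "passes"
    print(f"with C = {C}: part {verdict}")
# Note how the verdict flips depending on which experimentally observed C
# you choose - exactly the arbitrariness complained about in the text.
```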
As if the level of guesswork wasn't enough, we come across the "what you see is what you fiddled" miracle of rainflow counting, in which you can manipulate the number of peaks and troughs of the load cycle by changing the sampling amount or "bucket size". Rainflow counting is not even logical, because once you have opened a crack in one cycle, a further cycle which opens it half as much has no actual effect. So adjusting the bucket size is just a technique to magically reproduce already-known results, which is why those fatigue computer programs seem so accurate. It is a lot easier to predict failure when you already know what happened, but we really want to predict failure before it happens.
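This is not a real rainflow implementation, but a toy sketch showing how the choice of "bucket" size alone changes how many small reversals survive the preprocessing before any cycle counting is done; the load history is invented.

```python
# Toy sketch (NOT a real rainflow implementation): it only shows how the
# choice of "bucket" size changes how many small reversals survive the
# quantisation step before any cycle counting is done. Load history invented.

def count_reversals(signal, bucket):
    """Quantise the signal to the given bucket size, then count turning points."""
    q = [round(x / bucket) * bucket for x in signal]
    # drop consecutive duplicates produced by the quantisation
    q = [x for i, x in enumerate(q) if i == 0 or x != q[i - 1]]
    reversals = 0
    for prev, cur, nxt in zip(q, q[1:], q[2:]):
        if (cur - prev) * (nxt - cur) < 0:   # sign change of slope = turning point
            reversals += 1
    return reversals

# A load history with large cycles plus small ripples riding on top
load = [0, 10, 8, 12, 9, 50, 47, 52, 48, 5, 7, 3, 60, 58, 62, 2, 0]

for bucket in (1, 3, 10, 25):
    print(f"bucket size {bucket:>2}: {count_reversals(load, bucket)} reversals counted")
```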
Useless fatigue calculation summary: find your SCF based on a fillet radius and thickness from Peterson's handbook of hopelessly limited 2D shapes, then multiply it by a variety of guess factors for environment, loading type, etc. Next reduce the factor by the fracture toughness factor, which corrects for the fact that SCFs hopelessly overpredict the onset of fatigue in real life. Of course this value is only of use if the test was done on your actual structure, which it wasn't. Apply the final factor to your field stress, which is the general stress away from the discontinuity. The field stress concept is also based on simple 2D shapes and simple loading; it is not possible to obtain a field stress in any real situation unless you linearise the highly nonlinear stresses - a technique which is controversial and idiosyncratic, even impossible in a 3D situation. Finally, find the expected life from a suitable endurance curve. This curve was produced for simple 1D specimens under simplistic loading and originally had a phenomenal scatter, which someone plotted on a logarithmic scale and drew a couple of straight lines through: they could just as easily have drawn a dancing bear through it. Of course you must further adjust the lines according to the extent of compression in the load cycle. For every cycle find a life fraction and add these fractions to get a total life value for the part using Miner's cumulative damage rule, which has long been proven to be total nonsense. If you have a load history which is not conveniently sinusoidal then use rainflow counting to capture each mini-cycle within the larger cycles, then disregard the majority of these cycle "buckets" so as not to be too conservative (because rainflow counting doesn't represent real life). Finally you will arrive at the conclusion you need. If it is someone else's design you must fail it by including every eventuality, but if it is your own design you must reconsider all the unnecessary assumptions until you manage to pass it.
Well, that procedure was for high-cycle, elastic stresses. There is a low-cycle, strain-based calculation for materials in the plastic regime. However, it is currently carried out by using elastic stresses and assuming a Neuber plasticity curve. NAFEMS adroitly points out the inadequacy of this approach in a book on its website, wherein it is pointed out that fully plastic computations would not only be possible but far more desirable. For me this approximation alone is enough of a fudge to render the calculation useless, so I will avoid discussing the remaining fiddle factors. However, this technique is universally used in current fatigue software. In fact, the prostitution of several prominent academics in presenting this approximation technique as the last word in fatigue design is really quite disagreeable. I suggest you avoid said software and instead get the AFGROW program from the net. It's probably no better, but it is well documented, well used and free. We may do a user interface for it one day.
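For reference, the Neuber correction mentioned above can be sketched as follows: Neuber's rule sets the product of local stress and strain equal to (Kt*S)^2/E, with the local response usually taken from a Ramberg-Osgood curve. The material constants below are invented for illustration, and this is only a sketch of the approximation, not of any particular software's implementation.

```python
# Sketch of the Neuber correction mentioned above: the product of local
# stress and strain is set equal to (Kt*S)^2 / E, with the local response
# taken from a Ramberg-Osgood curve  eps = sig/E + (sig/K')**(1/n').
# Material constants below are invented for illustration only.

E = 200_000.0   # MPa, elastic modulus (steel-like)
Kp = 1_000.0    # MPa, cyclic strength coefficient K' (illustrative)
np_ = 0.15      # cyclic strain hardening exponent n' (illustrative)

def ramberg_osgood_strain(sig):
    return sig / E + (sig / Kp) ** (1.0 / np_)

def neuber_local_stress(Kt, S_nominal):
    """Solve sig * eps(sig) = (Kt*S)^2 / E for the local stress sig (bisection)."""
    target = (Kt * S_nominal) ** 2 / E
    lo, hi = 1e-6, Kt * S_nominal        # local stress lies below the elastic estimate
    for _ in range(100):
        mid = 0.5 * (lo + hi)
        if mid * ramberg_osgood_strain(mid) < target:
            lo = mid
        else:
            hi = mid
    return 0.5 * (lo + hi)

Kt, S = 3.0, 150.0                        # nominal stress in MPa (illustrative)
sig = neuber_local_stress(Kt, S)
print(f"elastic estimate Kt*S = {Kt * S:.0f} MPa")
print(f"Neuber-corrected local stress ~ {sig:.0f} MPa, "
      f"local strain ~ {ramberg_osgood_strain(sig):.4f}")
```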
Meantime, there is a module in FEMdesigner to plot the fatigue strength of the material, but we use stress ratios instead of life predictions. Here there is only a maximum and minimum stress, an endurance strength (obtained with consideration of the design life) and a fatigue strength reduction factor. The software reads all the stresses in the current output file, calculates the maximum and minimum equivalent stresses, makes them tensile or compressive (+/-) according to the sign of the largest principal stress, applies the Goodman mean stress correction, then finds the endurance limit, adjusts it for the temperature of the material, and presents the result as a red/green contour plot of actual-to-allowable stress at each point in the model. It repeats this for every load step in this file and in selected other files, presenting the worst values over all cycles. This approach was developed and tested with performance forged pistons and it works well.

Assuming only one maximum and one minimum stress for all cycles is a perfectly valid, well-used technique, and it avoids the discredited rainflow-counting technique, Miner's rule and log-log plots. Goodman's mean stress correction has a lot of actual tests to back it up. The temperature correction is crucial, as temperature seriously degrades fatigue life in many materials; in FEMdesigner that is made easy, in other fatigue codes it is not. In the nuclear engineering field we also used to prepare low-cycle, strain-based fatigue curves, for which this technique becomes usable. Fatigue tests are best done with the actual component under the actual loading, of course. That may seem nonsensical, since you may think you then don't need the computational test, but the key idea is to computationally replicate the actual results for the old design, identify the failure areas, modify the design to improve it, then compare the old design to the new one.
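Here is a minimal sketch of that kind of Goodman-based stress-ratio check at a single point. It is not FEMdesigner's actual code, the material numbers are illustrative, and treating a compressive mean stress as benign is just one common simplification.

```python
# Minimal sketch of a Goodman-based stress-ratio check at a single point,
# in the spirit of the module described above (NOT FEMdesigner's actual code).
# Material properties are illustrative.

def goodman_ratio(sig_max, sig_min, Se, Su):
    """Return actual/allowable ratio using the Goodman mean-stress correction.

    sig_max, sig_min : signed max/min equivalent stresses over the cycle
    Se               : endurance limit for the design life (temperature-adjusted)
    Su               : ultimate tensile strength
    A ratio <= 1 would plot green (acceptable), > 1 red.
    """
    sig_a = 0.5 * (sig_max - sig_min)          # alternating component
    sig_m = 0.5 * (sig_max + sig_min)          # mean component
    sig_m = max(sig_m, 0.0)                    # compressive mean treated as benign (simplification)
    return sig_a / Se + sig_m / Su             # Goodman line: equals 1 at the limit

Se = 250.0    # MPa, endurance limit already knocked down for temperature (illustrative)
Su = 600.0    # MPa, ultimate strength (illustrative)

for sig_max, sig_min in [(200.0, -200.0), (300.0, 50.0), (400.0, 0.0)]:
    r = goodman_ratio(sig_max, sig_min, Se, Su)
    verdict = "green" if r <= 1.0 else "red"
    print(f"max {sig_max:6.1f}, min {sig_min:6.1f}  ->  ratio {r:.2f}  ({verdict})")
```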
The alternative to total life calculation is fatigue from Fracture Mechanics considerations.
Fracture Mechanics and Fatigue
Fatigue, it is now accepted, is really just fracture in disguise. Hence fatigue is really the initiation of a crack from a material or geometric imperfection and the propagation of that crack. This should have seemed obvious, but for many years fatigue was regarded as something that happens through repeated cycling of a body in an elastic state. Although it was material-related, it was considered load-controlled. Fracture mechanics grew up separately, by looking at material behaviour at low temperature and then at what happens to notched specimens. We had realised that fatigue and fracture both happen at sharp corners, so we invented stress concentration factors for fatigue and stress intensity factors for fracture. Still the penny never dropped, because fatigue and fracture calculations were done separately. For both types of calculation there are so many assumptions that we still ultimately fall back on material testing of the actual component whenever possible.
There are two official stages: crack initiation and crack growth. A crack growth calculation seems silly. You know there is a crack, but instead of repairing it you calculate how long it will take before catastrophic failure occurs under a variable multiaxial load. When it does come, the crack apparently proceeds at a third of the speed of sound, hence the term "catastrophic". Now I ask you, in all seriousness, would you get on an aircraft if you knew it had a crack in the wing? The answer is obvious, so a crack propagation calculation is largely an academic exercise. In practice, if you see a crack you should stop using the component and repair it. Unfortunately, that was the easier calculation of the two. Crack growth calculations are summarised below:
Useless fracture calculation summary: you receive your NDT report, which either shows you a crack in an X-ray taken from one angle, from which it is impossible to tell the real shape, or gives an ultrasonic "indication of size below 3mm". As you don't then know the crack shape you must do a "sensitivity analysis" - that is, try every pertinent shape from the list of impossibly clean and mathematically perfect crack shapes to get your SIF, ignoring the extreme unlikelihood of the real crack being a perfect ellipse. Then guess the positive residual stresses adjacent to the crack (because cracks in compression won't grow), assuming some fraction of yield. The final report will state that the crack will undoubtedly grow, and hence the structure will fail, under at least some of the fake scenarios you have been forced to use. Again, you can happily pass your own design but fail someone else's, depending on your assumptions and your degree of malevolence.
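For what such a calculation actually looks like, here is a minimal Paris-law integration sketch, with dK = Y*d_sigma*sqrt(pi*a); the material constants, geometry factor and failure criterion are illustrative and not tied to any particular alloy or code case.

```python
# Minimal crack-growth sketch: integrate the Paris law da/dN = C * (dK)^m
# with dK = Y * d_sigma * sqrt(pi * a). Constants below are illustrative,
# not tied to any particular alloy, geometry or code case.

import math

C = 1e-11         # Paris coefficient (m/cycle when dK is in MPa*sqrt(m))
m = 3.0           # Paris exponent
Y = 1.12          # geometry factor (edge-crack-like, illustrative)
d_sigma = 100.0   # MPa, stress range
K_IC = 30.0       # MPa*sqrt(m), toughness used as the crude failure criterion

a = 0.001         # m, initial crack depth (a 1 mm "indication")
cycles = 0
block = 1_000     # integrate in blocks of cycles for speed

while True:
    dK = Y * d_sigma * math.sqrt(math.pi * a)
    if dK >= K_IC:          # crude failure criterion: K reaches the toughness
        break
    a += block * C * dK ** m
    cycles += block

print(f"predicted life ~ {cycles:,} cycles, final crack size {a*1000:.1f} mm")
```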
You can use your FE code to calculate the SIF, and a certain Dr Pook wrote a paper on doing so using FEMdesigner. For the future, though, we hope to compute the crack path itself consistently.
Posted by jgdes at 9:44 AM | 6 comments
Labels: design, fatigue, fatigue analysis, fatigue calculation, Fracture mechanics, NAFEMS, titanium
Monday, September 25, 2006
Analyst or designer?
You will find many people ready to say that novices should not be allowed near FEA software. "It's too easy to make a big mistake", goes the argument. It is always left unsaid that it is much easier to screw up using good old-fashioned hand calculations. You remember those, don't you? How to reduce your fancy design to a plain cantilever beam! Ignore holes, fillets and anything else inconveniently stuck on your free-body diagram. Remember to check the text of those 1933 reference papers in Roark, with their quaint, uneven and unreadable graphs conveniently converted to number format.
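To make the point concrete, here is the sort of hand calculation being mocked: a tip-loaded rectangular cantilever reduced to sigma = 6*F*L/(b*h^2), with the real geometry bolted on afterwards as a guessed stress concentration factor. All numbers are invented for illustration.

```python
# The sort of hand calculation being mocked above: a tip-loaded rectangular
# cantilever reduced to sigma = 6*F*L / (b*h**2), with the real geometry
# (holes, fillets) reduced to a guessed stress concentration factor.
# All numbers are invented for illustration.

F = 2_000.0   # N, tip load
L = 0.300     # m, beam length
b = 0.040     # m, section width
h = 0.020     # m, section depth

sigma_nominal = 6 * F * L / (b * h ** 2) / 1e6   # MPa, bending stress at the root

for Kt_guess in (1.0, 2.0, 3.0):                 # "ignore the hole" ... "maybe it's 3?"
    print(f"assumed Kt = {Kt_guess}:  peak stress ~ {Kt_guess * sigma_nominal:.0f} MPa")

# The answer swings by a factor of 3 depending on a judgment call that an FE
# model would simply resolve - which is the point being made above.
```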
So why the big downer on FEA from academics? Well, it's true that FEA is rubbish in, rubbish out. However, since you can actually see the mesh, the applied loads, the stress plots and the displacements, a reasonably capable supervisor should be able to spot any howlers. There's the rub! Your boss usually didn't get there by being good at design: he doesn't know a von Mises stress plot from a hole in the wall. In fact stress = force/area is the sum total of his engineering knowledge. This is something many managers (and engineers) freely admit. Somehow knowledge is not trendy. Who wants to be a geek? Hence if a boss needs an FEA system he will choose the most expensive one on the planet, with fully comprehensive user support, so that his lack of knowledge is never exposed.
Well, I have gone down the whole route. I learned about discontinuity analysis before I used FEA, and I was thrilled when the results matched up. However, I also noticed that FEA pointed out a few things that I hadn't thought about. This is the true value of FEA. Over the years I have seen many FEA-averse engineers make complete howlers because they used over-simplified hand calculations instead of building a computer model and looking at the results. Hence, the philosophy I have is that everyone becomes a better designer by using FEA. Ignore the naysayers, most of whom couldn't design a box, and get out there and use it. No, you don't need to know all about the maths behind it, and the little you do need to know I will tell you - in plain English.
Let me be clear: to me there is no distinction between analysis and design. Analysis is an integral part of design and it should be used at the start, in the middle and at the end of the design process. Either you can do design by analysis or you shouldn't be in the design department. If you are FEA-averse then go find another job: carry around bits of paper from office to office, attend pointless meetings, write quality specs, and stay away from our design area, where the real work is done.
As a postscript to this, I once showed the engineering manager an ANSYS contour plot of yield fronts in two different annular seal designs. I pointed at the bad design and said that the yield zone (showing a plastic hinge) was in orange. For the other design, the better one, I said that the yield zone was in red. He immediately said - "but you said that the yield zone was orange". There is no moral to this tale except that we soon parted company. I felt if I was going to work for an idiot I might as well be self-employed.
Posted by jgdes at 6:50 AM | 0 comments
FEMdesigner
It's about time to introduce some feedback to the website. So I'd like anyone with comments on the site or on the use of FEMdesigner to add them to this thread. Feel free to praise or condemn. All feedback is welcome, or at least until it starts to hurt sales :)
Posted by jgdes at 5:48 AM | 0 comments
Labels: design, engineering, FEA, FEM, femdesigner