I've only used global sensitivity studies, but maybe I should investigate how to use local sensitivity as well:
- Global Sensitivity — Calculates the changes in your model's measures when you vary a design variable over a specified range. Mechanica does this by calculating measure values at regular intervals across a design variable's range. You can create a Global Sensitivity design study from the Sensitivity Study Definition dialog box.
- Local Sensitivity — Calculates the sensitivity of your model's measures to slight changes in one or more design variables. You can create a Local Sensitivity design study from the Sensitivity Study Definition dialog box.
From reading further in the help file, it looks as though Local Sensitivity identifies the slope of a measure's response to a parameter, possibly to several parameters at once, so it could provide a good pointer for the direction of changes.
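To make the distinction concrete, here's a rough sketch in plain Python (nothing to do with the Mechanica API; the toy `measure` function stands in for a real analysis run): a global study samples the measure at regular intervals across the range, while a local study just estimates the slope at the current design point.

```python
# Illustrative sketch only -- not the Mechanica API. The toy "measure"
# (think: a stress measure as a function of a wall-thickness variable)
# stands in for a full regen + static analysis.
def measure(t):
    # hypothetical measure: one term falls with t, the other rises
    return 1.0 / t + 0.5 * t

def global_sensitivity(f, lo, hi, steps):
    """Evaluate the measure at regular intervals across the range,
    as a Global Sensitivity study does (steps intervals => steps+1 runs)."""
    return [(lo + (hi - lo) * i / steps, f(lo + (hi - lo) * i / steps))
            for i in range(steps + 1)]

def local_sensitivity(f, t, h=1e-4):
    """Central-difference slope of the measure at the current design
    point -- the kind of number a Local Sensitivity study reports."""
    return (f(t + h) - f(t - h)) / (2 * h)

sweep = global_sensitivity(measure, 1.0, 5.0, 8)   # 9 analyses, like an 8-step study
slope = local_sensitivity(measure, 2.0)            # sign tells you which way to move
```

The slope's sign is the useful part for design direction: positive means the measure grows as the variable increases, so a global sweep to confirm the trend over the whole range is the natural follow-up.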
I usually use Global Sensitivity on one parameter at a time, and use the graph to try to find the optimum value. If I'm exploring more than one parameter, I will sometimes go back and re-check the first parameter after I make a large change to the second.
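That one-parameter-at-a-time workflow is essentially a coordinate search. A minimal sketch, again with a hypothetical toy measure rather than anything Mechanica-specific: sweep one variable with the other held fixed, take the best value, sweep the second, then re-check the first.

```python
# Hedged sketch of the one-variable-at-a-time workflow described above.
# Toy measure to minimise; not a Mechanica feature or API.
def measure(a, b):
    # hypothetical two-variable measure with a mild interaction term
    return (a - 2.0) ** 2 + (b - 3.0) ** 2 + 0.1 * a * b

def best_over_range(f, fixed, lo, hi, steps=8):
    """Sweep one variable at regular intervals (a 'global sensitivity'
    pass) with the other held fixed, and return the best value found."""
    candidates = [lo + (hi - lo) * i / steps for i in range(steps + 1)]
    return min(candidates, key=lambda x: f(x, fixed))

a = best_over_range(lambda a, b: measure(a, b), 3.0, 0.0, 5.0)  # sweep a, b fixed
b = best_over_range(lambda b, a: measure(a, b), a, 0.0, 5.0)    # sweep b, a fixed
a = best_over_range(lambda a, b: measure(a, b), b, 0.0, 5.0)    # re-check a
```

With a strong interaction between the parameters this can need several more passes (or a proper optimization study), which is exactly why re-checking the first parameter after a large change to the second is worth the extra runs.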
Does anyone else have preferred techniques for Sensitivity studies?
More of a general question to PTC, really: why are Sensitivity studies so slow?
I've got a fairly simple part here, which regenerates in the blink of an eye, and a simple static analysis in 2D Axisymmetric runs in 4.01(!) seconds elapsed.
Yet an 8-step sensitivity study (9 analyses total) has just taken 257 sec - that's 28.5 sec for each regen and static analysis. It's running on a RAM disk, so it shouldn't be waiting to write data to a disk or anything like that...
Interestingly, the total CPU time is still only 0.83 seconds, compared to the 0.42 for a single static run.
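Back-of-envelope arithmetic on those numbers makes the overhead explicit: dividing the study's elapsed time by the number of analyses and subtracting the standalone run time leaves the per-step cost that isn't solver work.

```python
# Back-of-envelope check of the timings quoted above.
total_study = 257.0   # elapsed time for the 8-step study (9 analyses), sec
single_run = 4.01     # elapsed time for one standalone static analysis, sec
analyses = 9

per_analysis = total_study / analyses   # elapsed time per analysis in the study
overhead = per_analysis - single_run    # non-solver time per step (regen, I/O, setup)
```

That's roughly 28.6 sec per analysis, of which about 24.5 sec is overhead rather than solving, consistent with the tiny CPU-time figures.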
This is just an illustration - the principle extends to much longer analyses too.
It's difficult to draw conclusions from an analysis that runs in 4 sec. For this sensitivity study, loading and unloading Pro/E to do regenerations in the background must take most of the time. This is something we addressed to a large extent in WF5 by keeping Pro/E live in the background during the whole run.
Of course, this effect will be smaller for larger models as each analysis takes longer to complete. For medium to large models (and assuming you don't ask for hundreds of steps), this effect should be very minor, even in WF4 and before.
If you have a case of a large model / long analysis where you see a large discrepancy between the run times of each analysis vs. the sensitivity study, please file an SPR with Technical Support and we will take a look at it.
Stephen: I'm on WF4, as you probably suspected.
Christos: thanks, that makes some sense now. Obviously it depends on the definition of 'large model' but with my current machine (quad-core Xeon with 24 GB, temp files on a RAM disk) most of my analyses are pretty quick, no more than a handful of minutes and often less than one minute, so it's just a little irritating that a 9-step study takes way more than 9 times as long.
Good to hear it's been addressed in WF5 - presumably there's a memory penalty, but that's not likely to bother me!