A few months back I was at a conference when I received an unexpected treat: One of my concrete analysis heroes was attending from Singapore and I was able to interrogate him for an hour about my research. This professor is probably the world leader on my specific topic, and his research papers have become my general playbook. I had no idea he was going to be there—I was as starstruck as you can get at an academic conference on concrete mechanics.
So anyway, we were sitting there chatting, and I was flying through a series of slides that gave an overview of my approach to the problem. At one point he stopped me and asked:
“Why are you using these models? Grad students always want to use the most complicated models. I used to do this too, but now I can tell you that using a more complicated model makes no difference in the final result. I have done it both ways—use a simpler, more stable model. Try it and you’ll see.”
Your mileage may vary with this idea, but in my area (nonlinear computational mechanics), I think this is good advice. I have been splitting hairs on numerical calibration experiments—perfectionism, to be sure—and frankly moving forward far too slowly. These component models are gross approximations anyway.
So why not go with a simpler, more computationally stable curve that captures the essence of the component, and then get on with it?
You have to verify your results anyway. If a simpler model proves too crude, you can always introduce more complexity when the problem requires it.
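To make the idea concrete, here is a toy sketch of what “a simpler, more computationally stable curve” can mean in practice. Everything below is hypothetical: the numbers are made-up, steel-ish values, the “measured” data is a synthetic smooth hardening curve, and the simple model is a one-parameter elastic-perfectly-plastic (bilinear) curve calibrated by a coarse grid search. The point is only that the simple model lands close to the data where it matters downstream, not that this is anyone's actual calibration procedure.

```python
import math

# Hypothetical material constants for illustration only (MPa).
E, sigma_y = 200e3, 350.0

# Synthetic "calibration data": a smooth saturating hardening curve.
strains = [i * 1e-4 for i in range(1, 101)]          # 1e-4 .. 1e-2
measured = [sigma_y * math.tanh(E * e / sigma_y) for e in strains]

def bilinear(e, sy):
    """The 'simpler, more stable' model: linear to yield, flat after."""
    return min(E * e, sy)

def rmse(sy):
    """Root-mean-square misfit of the bilinear curve against the data."""
    return math.sqrt(sum((bilinear(e, sy) - m) ** 2
                         for e, m in zip(strains, measured)) / len(strains))

# Calibrate the single free parameter with a coarse grid search.
best_sy = min(range(300, 401), key=rmse)

# The quantity that matters downstream: stress at a working strain,
# which the one-parameter model reproduces within a few percent.
e_work = 0.008
print(f"fitted yield: {best_sy} MPa, rmse: {rmse(best_sy):.1f} MPa")
print(f"stress at e={e_work}: model {bilinear(e_work, best_sy):.1f}, "
      f"data {measured[-1]:.1f}")
```

The bilinear curve is wrong in detail around the knee, but the output the rest of the analysis consumes barely notices, which is exactly the professor's point.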