Imagine you are writing software someone else will use for one of the following life-critical scenarios:
- Design of a nuclear core containment structure
- Design of an emergency backup power system for a hospital
- Stress analysis of an artificial heart valve prototype
You obviously don’t want your software to be misused. What I mean is not that someone uses it with malicious intent, but rather someone uses it inappropriately—either because they don’t possess the expertise in the problem domain or they do possess the expertise but perhaps don’t accurately understand what your software is computing—and Something Bad Happens. Human life is threatened. Money is lost. Property is damaged. Lawsuits abound. You get the idea.
When that happens, is it your fault as the software developer, or not? It wouldn't be that hard to absolve yourself of responsibility because, after all, your software shipped with a detailed disclaimer about how it's not even necessarily correct, and that person using it obviously should have known better. But that doesn't mean you shouldn't try to minimize the possibility.
Technical software is easier to use than it's ever been. Users of such software would certainly agree this is a good thing and has improved productivity. But on the other hand—and I am speaking for my industry of structural engineering design software—the unprecedented ease with which you can click a couple of buttons and design a building… is actually quite terrifying when you stop to think about it.
Fancy dialogs. Beautiful contour plot animations that make your clients cry.
And results that might be completely wrong.
How can we set the bar higher? Here is my idea for developers:
Insert roadblocks that demand expertise into software workflows.
Make it so the unqualified user can’t get to a final result without their ignorance being challenged.
I’m not saying to send engineers back to the olden days. You’d be crazy not to take advantage of pandas if you are a Python programmer or automated mesh generation algorithms in FEA, for example. But at some point—and preferably where a lot of assumptions reside—stop holding the user’s hand. Don’t pre-fill certain dialogs with default parameter values. Treat certain parts of the model definition like a technical challenge—solve this riddle to proceed. (What is your quest? What is your favorite color?) Point to resources that guide parameter or algorithm selection, but do not do it for them.
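To make the "no pre-filled defaults" idea concrete, here is a minimal sketch in Python. Everything in it is hypothetical—the class name, the parameters, and the bounds check are illustrations, not any real library's API. The point is simply that every parameter is required, and obviously unphysical input is rejected with a pointer to the literature rather than silently accepted:

```python
from dataclasses import dataclass


@dataclass(frozen=True)
class NonlinearMaterial:
    """Hypothetical nonlinear material definition.

    No field has a default value, so the user cannot construct the
    model without explicitly committing to every parameter.
    """
    dilation_angle_deg: float   # must be chosen from the source literature
    eccentricity: float         # flow potential shape parameter
    viscosity: float            # regularization parameter

    def __post_init__(self) -> None:
        # Refuse obviously unphysical input instead of proceeding;
        # direct the user to the references rather than guessing for them.
        if not 0.0 < self.dilation_angle_deg < 90.0:
            raise ValueError(
                "dilation_angle_deg must lie in (0, 90); consult the "
                "model's source papers before choosing a value."
            )
```

Calling `NonlinearMaterial()` with no arguments raises a `TypeError`—the riddle must be solved to proceed.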
I’ll end this with an example from my personal experience where I think software developers got it exactly right.
Several years ago I had to solve a concrete failure problem in Abaqus, one of the standard finite element codes in the mechanical, aerospace, automotive, biotech, and defense arenas. I was using the so-called concrete damaged plasticity model (Lubliner et al. 1989 and Lee and Fenves 1998). Abaqus is probably best-in-class in terms of model visualization and creation—I don’t know a faster path to a detailed and accurate finite element mesh. I love Abaqus.
But when it came time to define the nonlinear material model, I could not move past this stage without deep immersion in the source journal articles. It took me weeks to understand the formulations well enough to define the material input. Yes, they could have pre-populated the material definition dialog with default values. But I think Abaqus got it exactly right here. Their “limitation” forced me to take responsibility in a way that I couldn’t circumvent. My results were all the better for it. (As a side note, there are multiple peer-reviewed publications simply exploring how to determine the input parameters for this particular model—it’s not just me.)
Monumental pain? Yes, but that’s my job as an engineer or scientist.
And it’s your user’s job as well.