There is something nostalgic about getting together with folks to study. The benefits of working through academic ideas with colleagues are many, but who should you study with?
In the past, I’ve encouraged students to work together with people who are evenly matched intellectually. I have always figured: if you are in the same boat, it prevents any one person from “carrying the group” and forces everyone to dig deep.
But today I read something that might have changed my mind. Dr. Drang penned a really interesting piece titled Learning that explores how teaching, programming, or writing about a topic is the best way to fully learn something.
In the comments, a gentleman named Clark offers the following food for thought:
This is a great truth that applies to students too. The classes I did best in were those where we found the student struggling most in the class and brought them into our study group. If we could teach the subject so that they could understand then we would fully understand it. The greatest self-deception is thinking you know something when you don’t really comprehend it. Yet that’s a natural mistake to make. It’s easy to feel like you intuitively understand something only to find when it’s in a different form you don’t. When teaching someone struggling with it you have to understand it so well that you understand all the nuances. It’s the same teaching things. (emphasis added)
What fascinates me about this perspective is that the strong student benefits from the weak student, and the weak student benefits from the strong student. I had always assumed the arrangement was zero-sum.
The obvious downside to this philosophy is that (assuming you are the stronger student) you will probably create a tremendous amount of extra work and inconvenience for yourself, work you could avoid if you just wanted to slip by in the course.
But you just might walk away with incredible insight into the material.
I want to try to codify a few things I’ve been learning as I streamline my schedule on the home stretch of my PhD. Recently I’ve been trying to incorporate a new practice: not only saying no to more things, but…
Saying no faster.
In the past, I developed a habit of either not responding, or responding only loosely, to requests for my time or attention. Why? Because directly saying no to people is uncomfortable. I would end up doing some things out of a sense of obligation, or weaseling my way out of a pseudo-commitment, one where I had expressed an interest but my involvement was never clearly agreed upon.
These situations are unproductive for me and for the other people involved. I accrue expectational debt, the other party may be counting on me for something, and it reflects poorly on my character when I pull out. Stress and guilt abound.
All that to say: I’ve been trying to be more honest when people make requests of me. I have adopted a difficult work/family schedule during the final push of graduate school, with limited opportunities for deviation. When a request comes along, I evaluate it according to what my wife and I have agreed are our priorities for this season, and if it doesn’t fit, I try to tell the other party ASAP that as much as I would love to XYZ, it’s simply impossible.
I feel much better, and it honors other people more.
And of course, there will always be people in my life for whom I will drop absolutely anything at a moment’s notice, but that’s not really what this post is about.
Today I want to talk about what is—in my opinion—some of the most interesting structural engineering research currently underway anywhere.
Sherif El-Tawil, professor at the University of Michigan, is a leader in progressive collapse simulation. In addition to having won a slew of awards, he is editor-in-chief of the ASCE Journal of Structural Engineering, arguably the top journal in the field. I was able to attend a seminar lecture given by Dr. El-Tawil several months ago and I was blown away by what he demonstrated. Frankly, I thought it could be reworked into a TED talk.
Progressive collapse modeling involves high-resolution simulation of structures subjected to collapse-inducing events, such as an earthquake or a terrorist bomb blast. These simulations run on massive supercomputers, and the same technology is behind many of the animations you have seen in the popular media of the World Trade Center and Murrah Federal Building (Oklahoma City bombing) collapses.
What is so fascinating about El-Tawil’s work is that he is partnering with social scientists to develop evacuation/egress models that can be coupled with progressive collapse simulation. Great. So what does that mean?
Imagine a building structure with a network of people (agents), each with their own personality (AI)—all stochastically distributed. When an alarm goes off or vibrations are felt in the structure, panic ensues and people begin trying to evacuate. Egress models can be employed here to estimate, at the time of collapse, the locations of people throughout the structure.
Coupling the distribution of agents from the egress model with the progressive collapse simulation opens the door to predicting the most likely locations of survivors. Not surprisingly, most post-collapse survivors are found in voids in the rubble. The collapse simulation aims to identify these voids and correlate them with the predicted locations of people from the building occupancy model. Virtual reality can then be used to explore the simulated rubble piles.
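To make the idea a bit more concrete, here is a minimal toy sketch in Python, entirely my own illustration and not El-Tawil’s actual code. It scatters agents stochastically across a floor plan, gives each one a slightly different walking speed as a crude stand-in for “personality,” and steps everyone toward the nearest exit. The positions of agents still inside at an assumed collapse time are exactly the kind of occupancy snapshot a collapse simulation could couple with. The floor dimensions, exit locations, and speed distribution are all made-up placeholders.

```python
import random
import math

# Toy egress sketch: my own illustration, not El-Tawil's model.
# All dimensions, exit locations, and speeds are hypothetical.

FLOOR_W, FLOOR_H = 50.0, 30.0        # assumed floor-plan dimensions (m)
EXITS = [(0.0, 15.0), (50.0, 15.0)]  # assumed exit locations
DT = 0.5                             # time step (s)

class Agent:
    def __init__(self):
        # Stochastically distributed starting position.
        self.x = random.uniform(0, FLOOR_W)
        self.y = random.uniform(0, FLOOR_H)
        # Per-agent walking speed, a crude stand-in for "personality".
        self.speed = max(0.2, random.gauss(1.3, 0.3))  # m/s
        self.escaped = False

    def step(self):
        if self.escaped:
            return
        # Walk straight toward the nearest exit.
        ex, ey = min(EXITS, key=lambda e: math.hypot(e[0] - self.x,
                                                     e[1] - self.y))
        dx, dy = ex - self.x, ey - self.y
        dist = math.hypot(dx, dy)
        if dist < self.speed * DT:
            self.escaped = True
            return
        self.x += self.speed * DT * dx / dist
        self.y += self.speed * DT * dy / dist

def positions_at(t_collapse, n_agents=200, seed=42):
    """Where people are when the structure fails: the occupancy
    snapshot a collapse simulation would couple with."""
    random.seed(seed)
    agents = [Agent() for _ in range(n_agents)]
    t = 0.0
    while t < t_collapse:
        for a in agents:
            a.step()
        t += DT
    return [(a.x, a.y) for a in agents if not a.escaped]

if __name__ == "__main__":
    still_inside = positions_at(t_collapse=20.0)
    print(f"{len(still_inside)} agents still inside at collapse")
```

A real egress model adds congestion, route choice, and panic behavior, but even this toy version shows where the coupling happens: the collapse simulation only needs to know who is where at the moment of failure.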
I found a video of a lecture by El-Tawil that demonstrates some of this progress. The video is about an hour long and well worth watching if you are interested, but I have noted timestamps for several key points in the talk:
32:25 — Progressive collapse video example
36:38 — Search and rescue with virtual reality
41:05 — Egress modeling and collapse simulation
This type of research is a great example of challenging and fascinating work aimed at addressing the human condition. No doubt, it’s great to develop an accurate simulation strategy, but the point here is ultimately preventing loss of human life.
Ray Clough, emeritus professor at UC Berkeley, is credited with coining the term “finite elements” and is considered by some the father of the finite element method. At a minimum, he played a significant role in formalizing it.
In 1980, Clough published a 25-year retrospective on the finite element method, recounting his personal perspective during its early development. I consider this a classic paper and would recommend it to anyone, including non-FEM-practitioners, simply for his prescient insights into the challenges faced by numerical analysts.
Consider the following quote—every bit as relevant today as when it was published 32 years ago:
At present it probably is fair to say that the state-of-the-art has advanced to the point where solution of any structural engineering problem can be contemplated, but there may be a wide variation in the quality of the result obtained. Depending on the validity of the assumptions made in reducing the physical problem to a numerical algorithm, the computer output may provide a detailed picture of the true physical behavior or it may not even remotely resemble it. A controlling influence on where the final result lies along this scale is the skill of the engineer who prepares the mathematical idealization; when dealing with complex and unusual structures, this phase of the analysis is an art and the program cannot be treated merely as a “black box”. Because of the significant possibility that the analysis may have totally overlooked or misjudged some important aspects of the mechanical behaviour, experimental verification should be incorporated into the analytical process whenever it steps beyond the borders of experience and established practice.
This quote was a major inspiration for the name of this blog (Only A Model). It encapsulates much of what I observe to be a serious problem among modern engineers: an over-reliance on computers and a blind trust in numerical results.
(If you have trouble accessing that article, Clough wrote another article together with Ed Wilson—Early Finite Element Research at Berkeley—that covers a lot of similar history.)