Winter is coming (Moving to Boston)

[Image: ATL-BOS]

I’m very excited to announce that I’ve accepted a position at a structural engineering consulting firm in Boston. I’ll be working in the engineering mechanics division on structural failures, seismic evaluation of nuclear facilities, and other interesting things. I start at the beginning of April. So there it is: I’m moving to Boston.

Vortex shedding around skyscrapers

[Photos: hancock-vortex-shedding, hancock-vortex-shedding-2]

The other day I attended a really interesting doctoral defense (congrats Mustafa!) that used computational fluid dynamics to study vortex shedding in flow around variously shaped objects with prescribed motions. Check out this animated gif to get an idea of what I’m talking about.

Vortex shedding is an interesting phenomenon in skyscraper design because, depending on the building’s aerodynamic characteristics, moderate wind can excite torsional modes or cause other serious problems. Nowadays any landmark skyscraper is evaluated in wind tunnels to attempt to detect this effect ahead of time.
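
As a back-of-the-envelope illustration (my own sketch, not from the linked work): the dominant shedding frequency of a bluff body follows the Strouhal relation f = St·U/D, where St is roughly 0.2 for many bluff cross sections. The numbers below are made up purely to show the order of magnitude:

```python
# Rough estimate of the vortex shedding frequency behind a bluff body
# using the Strouhal relation f = St * U / D. All numbers are illustrative.

STROUHAL = 0.2  # typical value for many bluff cross sections


def shedding_frequency(wind_speed_m_s: float, width_m: float,
                       strouhal: float = STROUHAL) -> float:
    """Approximate vortex shedding frequency in Hz."""
    return strouhal * wind_speed_m_s / width_m


# Example: a ~50 m wide tower in a 20 m/s wind sheds vortices at
# roughly 0.08 Hz, the same order as the fundamental frequencies of
# very tall buildings, which is exactly why designers worry about it.
print(f"{shedding_frequency(20.0, 50.0):.3f} Hz")
```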

It reminded me that a year or so ago I got interested in building a collection of photos showing vortex shedding around skyscrapers. I found two decent ones (shown above) of the John Hancock Center in Chicago, one of my favorite skyscrapers. These photos were really hard to find because most tourists taking photos from the base of a skyscraper don’t tend to tag their photos with “vortex shedding” on Flickr. The next time I’m in a big city on a windy+cloudy day, I’m going to see if I can recreate a photo like these.

Responding to skepticism toward your model

Several years ago I wrote about how researchers are skeptical towards numerical models and their results. I understand where this comes from. How do you convince people that you didn’t just tune the inputs of your model until it matched empirical data or yielded some other result you wanted?

I encountered this frequently during graduate school, from informal meetings to conference talks. I’d show a result, someone would question it, and I’d assume they thought the result wasn’t good enough, until they finally revealed the opposite concern: the model looked too good.

This is a fair question, and it shouldn’t offend you. Don’t get defensive: their skepticism is valid.

The solution is simple: Share your code or simulation input files. My thesis included an appendix with a Python script for generating material model inputs as well as the relevant material definition blocks for all major finite element models I ran. I am considering posting the entire model definitions once all the papers are submitted.
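
I can’t reproduce the real thing here, but a minimal sketch of the idea, with made-up material names, property values, and a generic keyword format, might look something like this:

```python
# Hypothetical sketch of a script that generates material definition
# blocks for a finite element input file. The material names, property
# values, and keyword format are invented for illustration only.

MATERIALS = {
    "steel_a992": {"density": 7850.0, "youngs_modulus": 200e9, "poisson": 0.3},
    "concrete_4ksi": {"density": 2400.0, "youngs_modulus": 25e9, "poisson": 0.2},
}


def material_block(name: str, props: dict) -> str:
    """Format one material as a keyword block (format is illustrative)."""
    return (
        f"*MATERIAL, NAME={name}\n"
        f"*DENSITY\n{props['density']}\n"
        f"*ELASTIC\n{props['youngs_modulus']}, {props['poisson']}\n"
    )


if __name__ == "__main__":
    # Print every block so the full set of inputs is inspectable.
    for name, props in MATERIALS.items():
        print(material_block(name, props))
```

The point is less the script itself than that anyone can rerun it and see exactly where every number in the model came from.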

So the next time you are giving a scientific talk and a listener is concerned that your results are too good to be true, simply point them to where you have made the code or simulation files publicly available, and invite them to verify your inputs and play with your model.

Otherwise, what are you going to say? “I really, really promise these numbers are legit”?

Preventing Instapaper bankruptcy

It’s no exaggeration to say that Instapaper transformed my reading life. But I run into trouble when life gets busy, because I start throwing any link that looks interesting into Instapaper. When I finally get around to reading my Instapaper backlog, I have an overwhelming list of random links, most of which I can’t remember saving in the first place. I regularly delete everything and start over.

The best solution I know to this problem is this: never “blind add” an article to Instapaper. If I see a headline that looks interesting, I open the article long enough to skim it and make sure it’s something I want to read. If I’m still intrigued, it goes into Instapaper. If not, I forget about it and move on. I don’t know how to measure the impact, but qualitatively it has greatly reduced the number of false positives: articles that looked interesting but weren’t worth reading. Like a lot of things, the solution is in the human filter layer.

Commenting Excel files

Even though I usually prefer another tool, Excel is part of my life as an engineer. Jeff Davis writes about an unconventional way to comment Excel formulas. The post covers some functionality I didn’t know about, like the N function: N("some text") evaluates to zero, so appending +N("explanation") to a formula embeds a note without changing the result. Even so, except for simple tabular data processing, I don’t think Excel is a transparent way to communicate with collaborators.

How many times have you tried to decode a spreadsheet created by someone else, clicked in a cell to see the underlying computation, inadvertently clicked on another cell outside the current focus, and overwritten part or all of an expression? You can undo, but that isn’t exactly a selling point, is it?

This is exactly why I prefer simple Python scripts: they are transparent, plain text, and it’s much harder to accidentally break the flow of logic.
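
To show what I mean, here’s a toy script (the numbers and the calculation are made up purely for illustration) doing the kind of simple tabular processing a spreadsheet is usually used for:

```python
# A toy example of spreadsheet-style processing as a plain script.
# Every step and assumption is visible and commented; nothing is
# hidden behind a cell reference. All numbers are made up.

loads_kn = [12.0, 18.5, 9.75, 22.0]  # applied loads in kN
area_m2 = 0.005                      # cross-sectional area in m^2
safety_factor = 1.5

# Stress in MPa for each load case: (kN / m^2) gives kPa; divide by 1000 for MPa.
stresses_mpa = [load / area_m2 / 1000.0 for load in loads_kn]

# Apply the safety factor and report each case.
for load, stress in zip(loads_kn, stresses_mpa):
    print(f"load = {load:5.2f} kN -> design stress = {stress * safety_factor:6.2f} MPa")
```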

(by way of Ben Brooks)