Well, day 3, where do we start? Alan Robock and I were up to bat to talk about outdoor experiments - I am not sure our remit was fleshed out much beyond that. Controversy was desirable but never really likely given our relative positions - there are things we certainly don't agree on, but at our core, perhaps for different reasons, we hold similar values. Actually, I suspect that is true of most of the school, now I think about it.
Anyway, the general consensus was that Alan focused a bit too much on model uncertainty and I focused too much on governance. Others wanted to explore likely experiments (I did touch on one) and to think about governance once the experiments were detailed. I think that makes sense, but it fills me with unease to present potential outdoor experiments without some discussion of both environmental and social impact.
One really interesting question posed to me was 'imagine if you had an identical world to manipulate, what experiments would you conduct?'. I was surprised, alarmed and, actually, subsequently relieved that I struggled to answer. The question itself is not hard - 'what do you want to know from experiments?' - and I eventually answered, but the construct of the question threw me. Initially I felt a little ashamed for not being nimble enough mentally to circumvent the absurdity of the framing but, on reflection, I think my struggle reflects a change of mindset, and one I am comfortable with. The truth is, you cannot decouple impacts on the planet and its people from climate perturbation and, I believe, nor should you. In the end I had to construct a slightly altered framework in which the system could be reset without harm (some form of time travel, I suppose), so that no lasting impacts were felt. The answer I gave is, I think, correct - it is impacts on the things you value (water, crops, biodiversity) that need to be at the centre of any investigative effort that would cause a climatological response.
Alan and David Keith then presented on GeoMIP5 and the hypothetical experimental suite (solicited from a meeting at Harvard) respectively.
On the walk home I wondered about an extension to the trolley problem, which I think derived directly from my unease this morning. What if the current position of the points was somehow your fault (i.e. you had set, or had instructed to be set, the points incorrectly)? Would that make you more likely to intervene in the system, switch the points and reduce, but not eliminate, the death toll? (The trolley problem is described here if you've no idea what I am talking about.)