Some people make the argument that doing a cost-benefit analysis on the optimal societal response to COVID-19 involves so many important moving pieces that we should leave it to experienced experts who can try to include them all in a big model.
This is a good point.
But an alternative angle would be to say that the situation is so complicated that we can't produce a reliable and robust model in the time we actually have.
There's a high risk of being led to some radical course of action by the value we happen to choose for some highly uncertain parameter, or of the outcome being determined by some consideration we happen to leave out of the model to keep it manageable.
So while we give some weight to these huge modelling projects, we should consider giving a lot of weight to heuristics like 'maintain option value', or 'copy what has worked in the past', or 'copy what seems to be working elsewhere', that are more easily understood and scrutinized by everyone.
On that front, it would be better to have lots of people competing to build models that flesh out important parameters and considerations, and then use applied information economics/value of information calculations to adjust the models and set policy.
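To make the value-of-information idea concrete, here is a minimal sketch of an expected-value-of-perfect-information (EVPI) calculation. The two policies, the two epidemic states, and all payoffs are made up purely for illustration; EVPI just tells you the most that further research could be worth before choosing.

```python
# Toy two-state, two-action decision; all payoffs are hypothetical.
p_high = 0.5  # assumed probability the epidemic is severe
payoffs = {
    # action: (payoff if severe, payoff if mild)
    "lockdown": (80, -20),
    "open":     (-100, 30),
}

# Expected payoff of each action under current uncertainty
expected = {a: p_high * s + (1 - p_high) * m for a, (s, m) in payoffs.items()}
best_now = max(expected.values())

# Expected payoff if we learned the true state before choosing
with_info = (p_high * max(s for s, m in payoffs.values())
             + (1 - p_high) * max(m for s, m in payoffs.values()))

evpi = with_info - best_now
print(f"EVPI = {evpi}")  # prints EVPI = 25.0
```

If EVPI is large relative to the cost of better modelling or data collection, more research is worth funding; if it's small, you may as well act on a heuristic now.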
Monte Carlo simulation and sensitivity analysis can prevent you from taking ridiculous courses of action just because a few parameters happen to be wrong.
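As a rough illustration of that point, here is a minimal Monte Carlo sketch (using only the standard library; the cost model and all parameter ranges are invented for the example) showing how sampling an uncertain parameter, rather than fixing a point estimate, reveals whether a recommendation actually hinges on it:

```python
import random

random.seed(0)  # reproducible illustration

def net_benefit(strictness, fatality_rate, economic_cost_rate):
    # Toy model: benefits scale with strictness and fatality rate;
    # economic costs scale with strictness. Numbers are illustrative only.
    lives_saved_value = strictness * fatality_rate * 1000
    economic_cost = strictness * economic_cost_rate
    return lives_saved_value - economic_cost

def simulate(n=10_000):
    # Sample the uncertain parameters instead of fixing point estimates.
    favourable = 0
    for _ in range(n):
        fatality_rate = random.uniform(0.001, 0.02)  # highly uncertain
        economic_cost_rate = random.uniform(5, 15)   # less uncertain
        if net_benefit(1.0, fatality_rate, economic_cost_rate) > 0:
            favourable += 1
    return favourable / n

share = simulate()
print(f"Strict policy beats baseline in {share:.0%} of sampled worlds")
```

If the share comes out near 50%, the "optimal" policy is really being decided by whichever point estimate of the uncertain parameter the modeller happened to pick, which is exactly the failure mode described above.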
Heuristics make sense for areas where you haven't yet done a significant amount of research. You can always update policies as you go, and keep modelling the costs of policy change.
Copying can make sense, and was actually a pretty useful pre-scientific technique for cultural evolution and societal survival. But when there is concern that things are fundamentally unpredictable, there should also be higher uncertainty about how well strategies generalise from one society or culture to another.