1. Model configurations yielding equivalent likelihoods will generally not yield equivalent decisions. Robust decision making requires quantifying _all_ of the model configurations consistent with the data, which is really hard when likelihood functions are poorly identified. https://twitter.com/j_jason_bell/status/1248934172711964674
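A minimal sketch of that point (the model and decision rule here are hypothetical, chosen purely for illustration): when observations inform only the sum theta1 + theta2, configurations with identical likelihoods can still imply opposite decisions.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)

# Hypothetical model: the data inform only the sum theta1 + theta2.
y = rng.normal(loc=1.0, scale=1.0, size=50)

def log_lik(theta1, theta2):
    return stats.norm(loc=theta1 + theta2, scale=1.0).logpdf(y).sum()

# Two configurations with exactly equal likelihoods...
config_a = (2.0, -1.0)   # theta1 + theta2 = 1
config_b = (-1.0, 2.0)   # theta1 + theta2 = 1
assert np.isclose(log_lik(*config_a), log_lik(*config_b))

# ...but a decision rule that depends on theta1 alone disagrees between them.
decide = lambda theta1: "act" if theta1 > 0 else "don't act"
print(decide(config_a[0]), "vs", decide(config_b[0]))  # act vs don't act
```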
2. Asymptotic nonidentifiability typically implies preasymptotic nonidentifiability, but asymptotic identifiability does not imply preasymptotic identifiability. Asymptotic analyses can help find practical problems but can't demonstrate that they don't exist.
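A sketch of that asymmetry under an assumed setup (nearly collinear regression covariates, my choice of example, not from the thread): the full-rank design guarantees asymptotic identifiability, yet for small n the likelihood is nearly flat along the collinear direction.

```python
import numpy as np

rng = np.random.default_rng(2)

def loglik_drop(n, delta=1.0):
    # Nearly collinear covariates: the design is full rank, so the
    # coefficients (b1, b2) are identified in the large-n limit.
    x1 = rng.normal(size=n)
    x2 = x1 + 0.05 * rng.normal(size=n)
    y = x1 + x2 + rng.normal(size=n)
    X = np.column_stack([x1, x2])
    b_hat = np.linalg.lstsq(X, y, rcond=None)[0]
    # Move a unit step along the near-degenerate direction (1, -1) and
    # record how far the Gaussian log likelihood (sigma = 1) drops.
    sse = lambda b: np.sum((y - X @ b) ** 2)
    return 0.5 * (sse(b_hat + delta * np.array([1.0, -1.0])) - sse(b_hat))

for n in (10, 100, 10_000):
    print(n, round(loglik_drop(n), 2))
# The drop is negligible for small n -- a flat ridge, i.e. preasymptotic
# non-identifiability -- and grows roughly linearly with n.
```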
3. How well identified a likelihood function/posterior is depends on the specific details of an observation. Sometimes one gets lucky with a particularly informative observation, sometimes one gets unlucky with an uninformative one. To get a full picture one has to consider many possible observations.
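To make that concrete, a simulation sketch with an assumed example (the log odds of a Bernoulli probability from a handful of trials): the same model is well or poorly identified depending on which dataset happens to be observed.

```python
import numpy as np

rng = np.random.default_rng(3)

# n = 10 Bernoulli trials with true p = 0.9; the parameter of interest
# is the log odds alpha = logit(p).
n, p, n_sims = 10, 0.9, 100_000
successes = rng.binomial(n, p, size=n_sims)

# When a dataset happens to contain all successes (or all failures) the
# likelihood in alpha is monotone -- it keeps increasing toward infinity --
# so that particular observation does not identify alpha at all.
degenerate = np.mean((successes == 0) | (successes == n))
print(f"{degenerate:.1%} of simulated datasets leave alpha unidentified")
# ~35%: identical model, drastically different identifiability by luck.
```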
4. For most realistic models in the preasymptotic regime we can explore identifiability only empirically, which requires algorithms capable of fully exploring problematic model configuration spaces, or at least informing us when they can't.
5. Just because a fragile algorithm doesn't say anything doesn't mean that it is accurately quantifying the identifiability of a model. Conversely, a robust algorithm like Hamiltonian Monte Carlo complaining doesn't mean that it is worse than a silent algorithm that can't complain.
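For a sense of what such a complaint looks like in practice, here is a sketch using PyMC (the thread names no software; both the tool and the Neal's funnel example are my assumptions): Hamiltonian Monte Carlo flags the pathological geometry with divergent transitions instead of silently returning an answer.

```python
import pymc as pm

# Neal's funnel: the scale of x depends exponentially on v, producing a
# narrow neck that gradient-based samplers struggle to enter.
with pm.Model():
    v = pm.Normal("v", mu=0.0, sigma=3.0)
    x = pm.Normal("x", mu=0.0, sigma=pm.math.exp(v / 2.0), shape=9)
    idata = pm.sample(draws=1000, tune=1000, chains=4)

# HMC "complains": divergent transitions mark the region of configuration
# space the sampler could not explore.
n_div = int(idata.sample_stats["diverging"].sum())
print(f"{n_div} divergent transitions out of 4000 post-warmup draws")
```

A fragile method run on the same model, say a simple optimizer or a silent approximate sampler, would typically return numbers without any comparable warning.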