Non-parametric Inference: With this type of inference, we replace heavy modelling assumptions with much lighter ones. In frequentist theory, this often means no longer assuming the data are normally distributed and instead making use of the binomial distribution in *1/6*
some way. This is obviously appealing, but in my opinion the conservative nature of the resulting inferences makes the usual behavioural justification of frequentist methods even less adequate. *2/6*
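To make the binomial remark concrete: the classic example is the distribution-free confidence interval for a median built from order statistics, whose exact coverage depends only on the Binomial(n, 1/2) distribution. The sketch below is purely illustrative (the function name, the 95% target and the simulated data are my own choices), and it shows where the conservatism comes from: attainable coverages jump in discrete steps, so an interval guaranteeing at least 95% usually delivers more.

    # Illustrative sketch only: order-statistic CI for a median. The names,
    # the 0.95 target and the simulated data are arbitrary choices.
    import numpy as np
    from scipy.stats import binom

    def median_ci(x, target=0.95):
        """Distribution-free CI for the median via order statistics."""
        x = np.sort(np.asarray(x))
        n = len(x)
        # Exact coverage of [X_(j), X_(n+1-j)] is P(j <= Bin(n, 1/2) <= n - j);
        # take the narrowest symmetric interval whose coverage reaches the target.
        for j in range(n // 2, 0, -1):
            coverage = binom.cdf(n - j, n, 0.5) - binom.cdf(j - 1, n, 0.5)
            if coverage >= target:
                return x[j - 1], x[n - j], coverage
        return x[0], x[-1], 1.0 - 2.0 ** (1 - n)  # fall back to the widest interval

    rng = np.random.default_rng(0)
    lo, hi, cov = median_ci(rng.standard_normal(30))
    print(lo, hi, cov)  # with n = 30 the exact coverage is about 0.957, not 0.95
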
In particular, why should we stick to a decision rule in the long run (e.g. always asserting that a parameter lies in the reported confidence interval) when non-parametric methods are likely to overlook important information that implies it would be rational to at least occasionally break such a rule? For some approaches to inference that directly condition on the observed data, *3/6*
e.g. Fisherian inference, such non-parametric methods are a little easier to justify, since we can try to turn a blind eye to the information loss, but this issue still leaves foundational holes. It is also rather difficult, overall, to justify such methods from a Bayesian *4/6*
perspective, since Bayesian methods generally rely on having a full description of how the data were generated, so that we can form a complete likelihood function. For this reason, in Bayesian inference a non-parametric method usually just means a method based on a very flexible *5/6*
sampling model with many parameters. However, if eliciting a meaningful prior density from an expert is difficult even for a model with a single parameter, what concrete meaning can a prior density really have when it is a joint density over >100 parameters? *6/6*
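To put a number on that last point, here is a toy version of one kind of flexible sampling model of this sort: a mixture of K = 50 normals, so the prior is a joint density over 3K = 150 parameters. Everything in the sketch (K, the priors on the weights, means and scales) is an arbitrary illustrative choice; the only point is how quickly the dimension of the joint prior grows.

    # Illustrative sketch only: a flexible sampling model (mixture of K normals)
    # whose prior is a joint density over 3*K parameters. K and all the
    # hyperparameters below are arbitrary choices.
    import numpy as np

    rng = np.random.default_rng(1)
    K = 50  # 50 components -> 150 parameters in total

    def draw_from_prior():
        """One joint draw of all mixture parameters from a made-up prior."""
        weights = rng.dirichlet(np.ones(K))    # K mixing weights
        means = rng.normal(0.0, 3.0, size=K)   # K component means
        scales = rng.gamma(2.0, 0.5, size=K)   # K component scales
        return weights, means, scales

    def prior_predictive(n):
        """Data implied by a single draw of the 150-dimensional prior."""
        weights, means, scales = draw_from_prior()
        z = rng.choice(K, size=n, p=weights)   # latent component labels
        return rng.normal(means[z], scales[z])

    w, m, s = draw_from_prior()
    print("parameters in the joint prior:", w.size + m.size + s.size)  # 150
    print("a few prior predictive observations:", prior_predictive(5))

The prior_predictive function is included because looking at the data a prior implies is one natural way to inspect it; whether that gives the joint density a concrete meaning is exactly the question being asked above.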