Poster
Finite-Sample Maximum Likelihood Estimation of Location
Shivam Gupta · Jasper Lee · Eric Price · Paul Valiant
Hall J (level 1) #822
Abstract:
We consider 1-dimensional location estimation, where we estimate a parameter $\lambda$ from $n$ samples $\lambda + \eta_i$, with each $\eta_i$ drawn i.i.d. from a known distribution $f$. For fixed $f$ the maximum-likelihood estimate (MLE) is well-known to be optimal in the limit as $n \to \infty$: it is asymptotically normal with variance matching the Cramér-Rao lower bound of $\frac{1}{n\mathcal{I}}$, where $\mathcal{I}$ is the Fisher information of $f$. However, this bound does not hold for finite $n$, or when $f$ varies with $n$. We show for arbitrary $f$ and $n$ that one can recover a similar theory based on the Fisher information of a smoothed version of $f$, where the smoothing radius decays with $n$.
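To make the smoothed-Fisher-information idea concrete, here is a minimal numerical sketch (not the authors' code) that compares the classical Cramér-Rao quantity $\frac{1}{n\mathcal{I}}$ with its analogue computed from a Gaussian-smoothed density. The Laplace noise distribution, the grid resolution, and the decay rate $r = 1/\sqrt{n}$ for the smoothing radius are illustrative assumptions, not the paper's prescribed choices.

```python
# Illustrative sketch: Fisher information of a density vs. its smoothed version,
# with a smoothing radius that shrinks as n grows (rate assumed for illustration).
import numpy as np
from scipy import integrate, ndimage

def fisher_information(density, grid):
    """Fisher information of the location family x -> density(x - lambda)."""
    f = density(grid)
    df = np.gradient(f, grid)
    mask = f > 1e-12                      # avoid dividing by ~0 in the tails
    return integrate.trapezoid(df[mask] ** 2 / f[mask], grid[mask])

def smoothed_density(density, grid, r):
    """Convolve the density with a Gaussian of standard deviation r (in x units)."""
    dx = grid[1] - grid[0]
    f_r = ndimage.gaussian_filter1d(density(grid), sigma=r / dx)
    return lambda x: np.interp(x, grid, f_r)

# Example noise: Laplace, whose score function is discontinuous at 0
laplace = lambda x: 0.5 * np.exp(-np.abs(x))
grid = np.linspace(-20, 20, 20001)

I = fisher_information(laplace, grid)
for n in [10, 100, 1000]:
    r = 1.0 / np.sqrt(n)                  # assumed decay rate, for illustration only
    I_r = fisher_information(smoothed_density(laplace, grid, r), grid)
    print(f"n={n:5d}  1/(n*I)={1/(n*I):.5f}  smoothed 1/(n*I_r)={1/(n*I_r):.5f}")
```

Smoothing can only reduce Fisher information, so the smoothed bound $\frac{1}{n\mathcal{I}_r}$ is at least as large as $\frac{1}{n\mathcal{I}}$; as $n$ grows and the radius $r$ shrinks, the two quantities converge, consistent with the asymptotic optimality of the MLE.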