Automatica, Vol.58, 22-27, 2015
On the end-performance metric estimator selection
It is well known that appropriately biasing an estimator can potentially lead to a lower mean square error (MSE) than the MSE achievable within the class of unbiased estimators. Nevertheless, the choice of an appropriate bias is generally unclear, and only recently have there been attempts to systematize such a selection. These systematic approaches aim at introducing MSE bounds that are lower than the unbiased Cramér-Rao bound (CRB) for all values of the unknown parameters, and at choosing biased estimators that beat the standard maximum-likelihood (ML) and/or least squares (LS) estimators in the finite-sample case. In this paper, we take these approaches one step further and investigate the same problem from the perspective of an end-performance metric different from the classical MSE. This study is motivated by recent advances in the area of system identification indicating that optimal experiment design should be carried out with respect to the end-performance metric of interest, rather than by quantifying a quadratic distance of the unknown model from the true one. (C) 2015 Elsevier Ltd. All rights reserved.
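The following is a standard textbook illustration, not taken from the paper, of how a deliberately biased (shrinkage) estimator can achieve an MSE below the unbiased CRB for a Gaussian mean; the symbols (sample mean, shrinkage factor c) are assumptions chosen for the sketch.

\documentclass{article}
\usepackage{amsmath}
\begin{document}
% Bias--variance decomposition of the MSE for any estimator $\hat{\theta}$ of $\theta$:
\begin{equation}
  \operatorname{MSE}(\hat{\theta})
  = \operatorname{Var}(\hat{\theta}) + \bigl(\operatorname{bias}(\hat{\theta})\bigr)^{2}.
\end{equation}
% Example: $x_1,\dots,x_N \sim \mathcal{N}(\theta,\sigma^2)$ i.i.d. The sample mean
% $\bar{x}$ is unbiased and attains the CRB $\sigma^2/N$. The shrinkage estimator
% $\hat{\theta}_c = c\,\bar{x}$ with $0<c<1$ is biased, and
\begin{equation}
  \operatorname{MSE}(\hat{\theta}_c)
  = \frac{c^{2}\sigma^{2}}{N} + (1-c)^{2}\theta^{2},
\end{equation}
% which is below the unbiased CRB whenever $\theta^{2} < \tfrac{1+c}{1-c}\,\tfrac{\sigma^{2}}{N}$.
% The best $c$ depends on the unknown $\theta$, which is precisely why a systematic
% rule for selecting the bias is needed.
\end{document}

This sketch only illustrates the MSE trade-off that motivates the paper; the paper itself studies bias selection with respect to an end-performance metric other than the MSE.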