Why Machine Learning Models Systematically Underestimate Extreme Values II: How to Fix It with LatentNN
Yuan-Sen Ting
Submitted to OJAp (2025)
Attenuation bias — the systematic underestimation of regression coefficients due to measurement errors in input variables — affects astronomical data-driven models. For linear regression, this problem was solved by treating the true input values as latent variables to be estimated alongside model parameters. In this paper, we show that neural networks suffer from the same attenuation bias and that the latent variable solution generalizes directly to neural networks. We introduce LatentNN, a method that jointly optimizes network parameters and latent input values by maximizing the joint likelihood of observing both inputs and outputs. LatentNN reduces attenuation bias across a range of signal-to-noise ratios where standard neural networks show large bias.
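The joint-likelihood idea described in the abstract is easiest to see in the linear case, where it reduces to the classical errors-in-variables fit. The sketch below is a minimal illustration in pure Python, not the paper's LatentNN implementation (which replaces the linear map with a neural network); all variable names and the alternating-update scheme are illustrative. It jointly fits a slope `a` and per-point latent inputs `z_i` by minimizing the negative joint log-likelihood of observing both the noisy inputs and the outputs:

```python
import random

random.seed(0)
n = 200
a_true, sigma_x, sigma_y = 2.0, 0.5, 0.2

# Simulate noisy inputs: the fit only ever sees x_obs, never x_true.
x_true = [random.gauss(0.0, 1.0) for _ in range(n)]
x_obs = [x + random.gauss(0.0, sigma_x) for x in x_true]
y_obs = [a_true * x + random.gauss(0.0, sigma_y) for x in x_true]

# Naive least squares on (x_obs, y): the slope is attenuated by roughly
# var(x_true) / (var(x_true) + sigma_x^2) = 1 / 1.25 = 0.8.
a_naive = sum(x * y for x, y in zip(x_obs, y_obs)) / sum(x * x for x in x_obs)

# Latent-variable fit: jointly optimize the slope a and latent inputs z_i by
# minimizing the negative joint log-likelihood
#   sum_i (y_i - a z_i)^2 / sigma_y^2 + (x_obs_i - z_i)^2 / sigma_x^2
# via alternating closed-form updates (each step is a 1D least-squares solve).
a, z = a_naive, list(x_obs)
for _ in range(200):
    # Optimal z_i given a: precision-weighted combination of the two
    # measurements of each latent input (x_obs_i directly, y_i through a).
    z = [(xo / sigma_x**2 + a * y / sigma_y**2)
         / (1.0 / sigma_x**2 + a**2 / sigma_y**2)
         for xo, y in zip(x_obs, y_obs)]
    # Optimal a given the current latent inputs z.
    a = sum(zi * y for zi, y in zip(z, y_obs)) / sum(zi * zi for zi in z)

a_latent = a
print(f"true slope {a_true:.2f}  naive {a_naive:.3f}  latent {a_latent:.3f}")
```

The latent-variable slope lands much closer to the true value than the naive fit. For a neural network, the same objective is optimized over the network weights and the latent inputs simultaneously, which is what the abstract describes LatentNN doing.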