Fiducial Matching: Differentially Private Inference for Categorical Data

Jul 15, 2025
Ogonnaya Michael Romanus
Younes Boulaguiem
Roberto Molinari
Abstract
Statistical inference under differential privacy (DP) remains challenging because the data-generation process includes multiple layers of randomness: sampling noise and privacy-inducing perturbations. This convolution of noise sources makes it difficult to characterize the behavior of estimators and tests derived from DP-protected data, especially for categorical outcomes common in large surveys. We propose a fiducial matching approach that reconstructs an approximate distribution of the parameters by simulating the entire data pipeline—including the DP mechanism—and matching it to the observed privatized output. Our method adapts principles from the fiducial framework to the DP setting, providing a general simulation-based engine for uncertainty quantification. We establish its theoretical validity, examine coverage properties, and demonstrate strong computational efficiency across a variety of inferential tasks in both simulated and applied categorical-data settings.
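As a rough illustration of the simulate-and-match idea described in the abstract—not the authors' actual algorithm—the sketch below performs fiducial-style matching for privatized multinomial counts. All concrete choices here are assumptions for illustration: the Laplace mechanism as the DP perturbation, a flat Dirichlet proposal over the probability simplex, and an L1 matching tolerance.

```python
import numpy as np

rng = np.random.default_rng(0)

def privatize(counts, epsilon, rng):
    # Laplace mechanism (assumed DP mechanism): L1 sensitivity of a
    # histogram is 2, since one record can change two cells by 1 each.
    return counts + rng.laplace(scale=2.0 / epsilon, size=counts.shape)

def fiducial_matching(z_obs, n, epsilon, n_draws=5000, tol=None, rng=None):
    """Draw candidate parameters, push each through the full data
    pipeline (sampling + privacy noise), and keep those whose simulated
    privatized output matches the observed one. ABC-style sketch."""
    rng = rng if rng is not None else np.random.default_rng()
    k = len(z_obs)
    tol = tol if tol is not None else 0.15 * n  # matching tolerance (assumption)
    accepted = []
    for _ in range(n_draws):
        p = rng.dirichlet(np.ones(k))                # flat proposal on the simplex
        counts = rng.multinomial(n, p).astype(float)  # sampling randomness
        z_sim = privatize(counts, epsilon, rng)       # privacy randomness
        if np.abs(z_sim - z_obs).sum() <= tol:        # match privatized outputs
            accepted.append(p)
    return np.array(accepted)

# Hypothetical setup: three categories, n = 500 respondents, epsilon = 1.
n, p_true, eps = 500, np.array([0.5, 0.3, 0.2]), 1.0
z_obs = privatize(rng.multinomial(n, p_true).astype(float), eps, rng)
fid_sample = fiducial_matching(z_obs, n, eps, rng=rng)
```

The accepted draws in `fid_sample` form an approximate fiducial sample for the category probabilities, from which point estimates and interval estimates can be read off; the tolerance trades off matching fidelity against acceptance rate.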
Type
Publication
arXiv