- How do you release an optimal solution privately, without revealing much about the underlying data?
- Easy! Add calibrated noise to the optimization results.
- But how do you ensure feasibility?
- Well…
That’s where we started off with our new work. See the thread
https://arxiv.org/pdf/2006.12338.pdf
(1) Standard differentially private algorithms add carefully calibrated noise to the result of a computation to keep the underlying dataset private. For constrained optimization problems, this simply may not work: the noisy output can violate feasibility requirements.
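To see the feasibility issue concretely, here is a minimal sketch in Python (the paper's companion code is in Julia/JuMP; the toy problem, sensitivity, and epsilon below are illustrative assumptions, not the paper's setup):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy constrained problem: minimize (x - 2)^2 subject to x <= 1.
# The optimum sits on the constraint boundary: x* = 1.
x_opt = 1.0

# Standard output perturbation: add Laplace noise calibrated to the
# query sensitivity (sensitivity = 1 and epsilon = 1 are assumptions
# chosen only for illustration).
sensitivity, epsilon = 1.0, 1.0
noisy = x_opt + rng.laplace(scale=sensitivity / epsilon, size=10_000)

# Because the optimum is on the boundary and the noise is symmetric,
# roughly half of the released solutions violate x <= 1.
infeasible_share = np.mean(noisy > 1.0)
print(f"infeasible share: {infeasible_share:.2f}")
```

Whenever the private optimum is active at a constraint, symmetric perturbation pushes the released solution outside the feasible set about half the time.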
(2) We propose to model the optimization variables as affine functions of the noise, providing both privacy (by calibrating the noise) and feasibility guarantees (by enforcing chance constraints).
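A rough sketch of the idea on the same toy problem, again in Python with illustrative numbers (noise scale, 95% feasibility target, and the simple shift-only affine policy are assumptions for exposition, not the paper's formulation):

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy problem: minimize (x - 2)^2 subject to x <= 1, released privately.
# Instead of perturbing x* = 1 after the fact, treat the released
# solution as an affine function of the noise, x(xi) = x_bar + xi,
# and pick x_bar so the chance constraint P(x_bar + xi <= 1) >= 1 - eta
# holds. Here eta = 0.05 and Laplace scale b = 1 are assumptions.
b = 1.0    # Laplace scale, calibrated for the desired privacy level
eta = 0.05

# Laplace tail: P(xi <= t) = 1 - 0.5*exp(-t/b) for t >= 0,
# so t = b*ln(1/(2*eta)) gives P(xi <= t) = 1 - eta.
t = b * np.log(1.0 / (2.0 * eta))
x_bar = 1.0 - t  # shift the nominal solution inward by the noise margin

released = x_bar + rng.laplace(scale=b, size=10_000)
feasible_share = np.mean(released <= 1.0)
print(f"feasible share: {feasible_share:.3f}")
```

The affine policy trades a little optimality (the nominal point moves off the boundary) for a tunable feasibility guarantee on the released output.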
(3) The pic shows that the standard output perturbation method for a constrained optimization problem is likely to produce infeasible solutions. With the affine policy, we optimize the output distribution to increase the likelihood of feasible outcomes.
(4) Check out our code companion - thanks to @JuMPjl and @JuliaLanguage, we tried our best to make our model accessible to everyone: https://github.com/wdvorkin/DP_CO_FG