Revisiting Uncertainty in Graph Cut Solutions

IEEE Conference on Computer Vision and Pattern Recognition (CVPR), Providence, RI (2012)


Graph cuts is a popular algorithm for finding the MAP assignment of many large-scale graphical models that are common in computer vision. While graph cuts is powerful, it does not provide information about the marginal probabilities associated with the solution it finds. To assess uncertainty, we are forced to fall back on less efficient and inexact inference algorithms such as loopy belief propagation, or use less principled surrogate representations of uncertainty such as the min-marginal approach of Kohli & Torr (2008).
In this work, we give new justification for using min-marginals to compute the uncertainty in conditional random fields, framing the min-marginal outputs as exact marginals under a specially-chosen generative probabilistic model. We leverage this view to learn properly calibrated marginal probabilities as the result of straightforward maximization of the training likelihood, showing that the necessary subgradients can be computed efficiently using dynamic graph cut operations. We also show how this approach can be extended to compute multi-label marginal distributions, where again dynamic graph cuts enable efficient marginal inference and maximum likelihood learning. We demonstrate empirically that — after proper training — uncertainties based on min-marginals provide better-calibrated probabilities than baselines and that these distributions can be exploited in a decision-theoretic way for improved segmentation in low-level vision.

