
Setting clear bounds on uncertainty

In science and technology, there has been a long and steady movement toward improving the accuracy of measurements of all kinds, along with parallel efforts to improve image resolution. A related objective is to reduce the uncertainty in the estimates that can be made and the inferences drawn from the data (visual or otherwise) that have been collected. Yet uncertainty can never be completely eliminated. And since we have to live with it, at least to some degree, there’s a lot to be gained by quantifying uncertainty as precisely as possible.

In other words, we would like to know how uncertain our uncertainty is.

This question was taken up in a new study led by Swami Sankaranarayanan, a postdoctoral fellow at MIT’s Computer Science and Artificial Intelligence Laboratory (CSAIL), and his co-authors – Anastasios Angelopoulos and Stephen Bates of the University of California, Berkeley; Yaniv Romano of Technion, the Israel Institute of Technology; and Phillip Isola, associate professor of electrical engineering and computer science at MIT. These researchers not only succeeded in obtaining precise measures of uncertainty, they also found a way to display that uncertainty in a manner the average person could grasp.



Their paper, which was presented in December at the Neural Information Processing Systems conference in New Orleans, is about computer vision – a field of artificial intelligence that involves training computers to glean information from digital images. This research focuses on images that are partially blurred or corrupted (owing to missing pixels, for instance), as well as on the methods – computer algorithms, in particular – designed to recover the part of the signal that is degraded or otherwise hidden. An algorithm of this type, explains Sankaranarayanan, “takes the blurry image as input and gives you a sharp image as output” – a process that usually occurs in a few steps.

First, there’s an encoder, a kind of neural network specifically trained by the researchers for the task of de-blurring fuzzy images. The encoder takes a distorted image and, from it, creates an abstract (or “latent”) representation of a clean image in a form – consisting of a list of numbers – that is intelligible to a computer but wouldn’t make sense to most humans. The next step is a decoder, of which there are several types, and which again is usually a neural network. Sankaranarayanan and his colleagues worked with a kind of decoder called a generative model. In particular, they used an off-the-shelf version called StyleGAN, which takes the numbers from the encoded representation (of a cat, for instance) as input and then constructs a complete, cleaned-up image (of that particular cat). So the whole process, including the encoding and decoding stages, yields a clean image from an originally muddied one.
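To make those steps concrete, here is a minimal sketch of such an encode-then-decode pipeline in PyTorch. The module names, layer sizes, and the 512-dimensional latent code are assumptions chosen for illustration; the two toy modules stand in for the researchers’ trained encoder and for a StyleGAN-like generator rather than reproducing either one.

```python
# Minimal sketch of the encode-then-decode restoration pipeline described above.
# All module names, shapes, and sizes are illustrative stand-ins, not the authors'
# actual encoder or the real StyleGAN generator.
import torch
import torch.nn as nn

LATENT_DIM = 512  # assumed size of the latent "list of numbers"

class ToyEncoder(nn.Module):
    """Maps a corrupted image to a latent code (stand-in for the trained encoder)."""
    def __init__(self):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 4, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, LATENT_DIM),
        )
    def forward(self, x):
        return self.net(x)

class ToyGenerator(nn.Module):
    """Maps a latent code back to a full image (stand-in for a StyleGAN-like decoder)."""
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(LATENT_DIM, 64 * 8 * 8)
        self.net = nn.Sequential(
            nn.ConvTranspose2d(64, 32, 4, stride=2, padding=1), nn.ReLU(),
            nn.ConvTranspose2d(32, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )
    def forward(self, z):
        h = self.fc(z).view(-1, 64, 8, 8)
        return self.net(h)

encoder, generator = ToyEncoder(), ToyGenerator()
corrupted = torch.rand(1, 3, 32, 32)   # a blurry / partially missing image
latent = encoder(corrupted)            # abstract "latent" representation (list of numbers)
restored = generator(latent)           # clean image built back from that code
print(latent.shape, restored.shape)    # torch.Size([1, 512]) torch.Size([1, 3, 32, 32])
```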

But how much confidence can we place in the accuracy of the resulting image? And, as discussed in the December 2022 paper, what is the best way to represent the uncertainty in that image? The standard approach is to create a “saliency map,” which assigns a probability value – somewhere between 0 and 1 – to indicate the confidence the model has in the correctness of each pixel, taken one at a time. This strategy has a drawback, according to Sankaranarayanan, “because the prediction is performed independently for each pixel. But meaningful objects occur within groups of pixels, not within an individual pixel,” he adds, which is why he and his colleagues came up with an entirely different way of assessing uncertainty.
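For contrast, here is one common way such a per-pixel confidence map can be built – by decoding several plausible reconstructions and measuring how much they disagree at each pixel. This is an illustrative construction under that assumption, not necessarily the exact saliency-map recipe the paper compares against.

```python
# Build a per-pixel confidence ("saliency") map by sampling several plausible
# reconstructions and converting per-pixel disagreement into a value in [0, 1].
# Illustrative only; not the paper's exact baseline.
import torch

def per_pixel_confidence(reconstructions: torch.Tensor) -> torch.Tensor:
    """reconstructions: (num_samples, channels, H, W) stack of candidate images.
    Returns an (H, W) map where 1.0 marks pixels the samples all agree on."""
    variance = reconstructions.var(dim=0).mean(dim=0)      # (H, W) disagreement
    return 1.0 - variance / (variance.max() + 1e-8)        # high variance -> low confidence

samples = torch.rand(8, 3, 32, 32)      # e.g. 8 decoded images from slightly varied latents
confidence_map = per_pixel_confidence(samples)
print(confidence_map.shape, confidence_map.min().item(), confidence_map.max().item())
```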

Their approach centers on the “semantic attributes” of an image – groups of pixels that, taken together, have meaning, making up a human face, for example, or a dog, or anything else recognizable. The goal, Sankaranarayanan maintains, “is to estimate uncertainty in a way that relates to pixel groupings that humans can easily interpret.”



While the standard method can produce a single image that constitutes the “best estimate” of what the true image should be, the uncertainty in that representation is normally hard to discern. The new paper argues that for real-world use, uncertainty should be presented in a way that makes sense to people who are not experts in machine learning. Rather than producing a single image, the authors devised a procedure for generating a range of images – any one of which could be correct. Moreover, they can set precise bounds on that range, or interval, and provide a probabilistic guarantee that the true depiction lies somewhere within it. A narrower range can be provided if the user is comfortable with, say, 90 percent certainty, and a narrower range still if more risk is acceptable.
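The guarantee described here is the kind provided by split-conformal calibration. The sketch below shows, in that generic setting, how a desired confidence level can be turned into a calibrated interval for a single scalar semantic attribute using a held-out calibration set; it is a simplified stand-in for, not a reproduction of, the authors’ procedure, and the attribute values and data are made up.

```python
# Generic split-conformal sketch: turn a coverage level (1 - alpha) into a
# calibrated interval half-width for a scalar semantic attribute (e.g. a "smile"
# or "age" score of a reconstructed face). Illustrative, not the authors' method.
import torch

def calibrate_interval_width(pred_attr, true_attr, alpha=0.1):
    """pred_attr, true_attr: (n,) attribute values on a held-out calibration set.
    Returns a half-width q such that |true - pred| <= q holds with probability
    at least 1 - alpha on exchangeable future examples."""
    n = pred_attr.numel()
    scores = (true_attr - pred_attr).abs()
    level = min(1.0, (n + 1) * (1 - alpha) / n)   # conformal quantile level
    return torch.quantile(scores, level).item()

# Toy calibration data (stand-ins for attribute predictions on real images).
torch.manual_seed(0)
true_attr = torch.rand(500)
pred_attr = true_attr + 0.05 * torch.randn(500)

q90 = calibrate_interval_width(pred_attr, true_attr, alpha=0.10)  # 90% coverage
q95 = calibrate_interval_width(pred_attr, true_attr, alpha=0.05)  # 95% coverage
print(f"90% half-width: {q90:.3f}, 95% half-width: {q95:.3f}")
```

In this view, accepting more risk (a lower coverage level) shrinks the calibrated quantile and hence the interval, which is the trade-off described above.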

The authors believe their paper offers the first algorithm, designed for a generative model, that can establish uncertainty intervals relating to meaningful (semantically interpretable) features of an image and come with “a formal statistical guarantee.” While that is an important milestone, Sankaranarayanan considers it merely a step toward “the ultimate goal. So far, we’ve been able to do this for simple things, like restoring images of human or animal faces, but we want to extend this approach into more critical domains, such as medical imaging, where our ‘statistical guarantee’ could be especially important.”

Suppose the image from a chest X-ray is blurry, he adds, “and you want to reconstruct the image. If you’re given a range of images, you want to know that the true image is contained within that range, so you’re not missing anything critical” – information that might reveal whether or not a patient has lung cancer or pneumonia. In fact, Sankaranarayanan and his colleagues have already begun working with a radiologist to see whether their algorithm for predicting pneumonia could be useful in a clinical setting.

Their work may also be relevant in the field of law enforcement, he says. “The image from a surveillance camera can be blurry, and you want to improve that. Models for doing that already exist, but it’s not easy to assess the uncertainty. And you don’t want to make a mistake in a life-or-death situation.” The tools he and his colleagues are developing could help identify a guilty person and help exonerate an innocent one as well.

Much of what we do, and much of what happens in the world around us, is shrouded in uncertainty, Sankaranarayanan notes. Getting a firmer grasp of that uncertainty could therefore help us in countless ways. For one thing, it can tell us more about exactly what it is we don’t know.

Angelopoulos was supported by the National Science Foundation. Bates was supported by the Foundations of Data Science Institute and the Simons Institute. Romano was supported by the Israel Science Foundation and a Technion Career Advancement Fellowship. Sankaranarayanan and Isola’s research for this project was sponsored by the USAF Research Laboratory and the USAF Artificial Intelligence Accelerator and was conducted under Cooperative Agreement Number FA8750-19-2-1000. MIT SuperCloud and the Lincoln Laboratory Supercomputing Center also provided resources that contributed to the results reported in this work.
