Foveated Metamers Browser

Search model metamers from Foveated metamers of the early visual system, Broderick et al. 2023

Foveated Metamers Browser contains a searchable database of the model metamers synthesized for the paper Foveated Metamers of the Early Visual System.

This project investigates how human perception changes across the visual field using behavioral experiments and computational models of early stages of visual processing. We use these models to investigate what people cannot see, an approach with a long history in vision science. If we know what information people are insensitive to, we can discard or randomize it, and the resulting image should appear unchanged from the original.

Our models approximate the early visual system by averaging image statistics (such as brightness) in regions of space across the image. These regions grow with distance from the center of gaze, and the rate at which they grow, called scaling, is the model's only free parameter.
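As a rough illustration of what scaling means, here is a minimal sketch (not the paper's code; the function name and the example scaling values are hypothetical) assuming the pooling-window width grows linearly with eccentricity, with scaling as the constant of proportionality:

```python
import numpy as np

def pooling_window_width(eccentricity_deg, scaling):
    """Approximate pooling-window width (in degrees) at a given
    eccentricity (degrees from the center of gaze), assuming the
    width is simply scaling * eccentricity."""
    return scaling * np.asarray(eccentricity_deg)

# Hypothetical example: window widths at 2, 10, and 20 degrees eccentricity
# for two illustrative scaling values (larger scaling = coarser pooling).
for scaling in (0.1, 0.5):
    widths = pooling_window_width([2.0, 10.0, 20.0], scaling)
    print(f"scaling={scaling}: window widths (deg) = {widths}")
```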

In this experiment, we take 20 large images of natural scenes and generate synthetic images that our models (with a range of scaling values) consider identical to the natural ones. We then show these images to humans to find the largest scaling value at which humans are unable to distinguish the two; that is, where humans and the models are discarding the same information. This allows us to reason about how similar processing steps happen in the human visual system.
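As a toy illustration of that logic (not the paper's analysis; the scaling values and proportion-correct numbers below are invented, and chance is assumed to be 50% as in a two-alternative task), one could look for the largest tested scaling at which discrimination performance stays near chance:

```python
# All numbers below are made up for illustration only.
proportion_correct = {0.01: 0.50, 0.05: 0.52, 0.10: 0.58, 0.50: 0.80, 1.50: 0.95}

chance, tolerance = 0.5, 0.05  # treat "near chance" as within 5 points of 50%
indistinguishable = [s for s, pc in proportion_correct.items() if pc <= chance + tolerance]
largest_indistinguishable_scaling = max(indistinguishable) if indistinguishable else None
print(largest_indistinguishable_scaling)  # 0.05 with these made-up numbers
```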

See the preprint or the Vision Science Society 2023 conference poster for more details about the project.

This website will allow you to browse all of the synthetic images generated for this project: you can filter by model, scaling value, target image, and more. Please open an issue if you have any difficulties.

Please see the project's OSF page if you wish to download all the images in bulk (as well as other associated files, such as the behavioral data), and the project's GitHub page for more details about using the code or data.

NOTE: The "Result sets" below correspond to different experimental conditions: they specify the model, the comparison the participant was performing (whether they were comparing two synthesized images to each other, or a synthesized image to a natural one), and how the synthesized images were initialized. See the poster for more details.

You may find it helpful to look at the "Sherlock" result set, which was not included in the experiment but contains example model metamers based on a block of text; these may make the differences between natural images and our model metamers more obvious.

WARNING: Depending on your screen resolution and zoom level, the full-sized synthesized image might have display artifacts due to aliasing. The zoomed-in image will not have any aliasing, so if the two differ, trust the zoomed-in version.

Images

  • Model: -
  • Target Image: -
  • Scaling Value: -
  • File Path: -
Select an Image
Result sets
Filters

scaling

From:
To:

target image

all
none

initialization type

all
none
model name | psychophysics comparison | downsampled | scaling | target image | initialization type | random seed