1. Results Format Overview

This page describes the results format and the saliency evaluation code used by SALICON. The evaluation code provided here can be used to obtain results on the publicly available SALICON validation set. It computes multiple common metrics, including AUC, Shuffled AUC, NSS and CC. Submitting algorithm results on SALICON for evaluation requires using the formats described below.

2. Results Format

The results format used by SALICON closely mimics the format of the ground truth as described on the download page. We suggest reviewing the ground truth format before proceeding.

Each algorithmically produced saliency map is stored separately in its own result struct. This singleton result struct must contain the id of the image from which the result was generated (note that a single image has exactly one associated saliency map). Results across the whole dataset are aggregated in an array of such result structs. Finally, the entire result struct array is saved to disk as a single JSON file (e.g., via json.dump in Python).

The data struct for each result type is described below (for details see the download page); a minimal sketch of producing such a file follows the struct.

[{
    "image_id"     : int,
    "saliency_map" : base64-encoded string
}]
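
For concreteness, here is a minimal sketch of building and saving such a result file in Python. The PNG encoding of the saliency map, the helper name encode_saliency_map, and the file names are illustrative assumptions; see the demo on the SALICON GitHub page for the authoritative encoding.

import base64
import json
from io import BytesIO

import numpy as np
from PIL import Image

def encode_saliency_map(saliency):
    # Scale a 2D saliency map with values in [0, 1] to 8-bit grayscale,
    # write it as a PNG into a memory buffer, and base64-encode the bytes.
    # NOTE: the PNG container is an assumption made for this sketch.
    img = Image.fromarray((saliency * 255).astype(np.uint8))
    buf = BytesIO()
    img.save(buf, format="PNG")
    return base64.b64encode(buf.getvalue()).decode("ascii")

# Hypothetical per-image predictions: {image_id: 2D numpy array}.
predictions = {42: np.random.rand(480, 640)}

results = [{"image_id": image_id,
            "saliency_map": encode_saliency_map(sal_map)}
           for image_id, sal_map in predictions.items()]

# One JSON file holding the whole array of result structs.
with open("my_algorithm_val2014_results.json", "w") as f:
    json.dump(results, f)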
3. Storing and Browsing Results

A demo of the algorithm results format is available on the SALICON GitHub page. In addition to the demo, example result JSON files are available in ./results/ as part of the GitHub package.
The results format is similar to the ground truth annotation format. As such, the SALICON API for accessing the ground truth can also be used to visualize and browse algorithm results. The only difference is that the ground truth is given as an array of fixation points, whereas a result is given as an image (saliency map). The showAnn function detects the annotation type: if it is "fixations", it first builds a fixation map and then shows it; if it is "saliency_map", it shows the saliency map directly, without any change. For details please see saliconResDemo and loadRes in the SALICON API; a brief usage sketch is given below.
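
A minimal usage sketch, assuming the SALICON API follows COCO API conventions: only loadRes and showAnn are named above, while the import path, getAnnIds, loadAnns, and the file names are assumptions for illustration.

from salicon.salicon import SALICON  # import path is an assumption

# Load the ground truth, then wrap the algorithm results in the same API.
salicon = SALICON("fixations_val2014.json")
saliconRes = salicon.loadRes("my_algorithm_val2014_results.json")

# Browse the result for one image exactly as for ground truth annotations;
# getAnnIds/loadAnns are hypothetical COCO-style accessors.
annIds = saliconRes.getAnnIds(imgIds=[42])
anns = saliconRes.loadAnns(annIds)
saliconRes.showAnn(anns)  # type "saliency_map": shown directly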

4. Saliency Evaluation Code

Evaluation tools can be obtained from the salicon-evaluation GitHub page. Running the evaluation code produces two data structures that summarize saliency map quality. The two structs are evalImgs and eval, which summarize saliency map quality per image and aggregated across the entire test set, respectively. Details for the two data structures are given below, followed by a sketch of invoking the evaluation. We recommend running the Python saliency evaluation demo for more details.

evalImgs[{
    "image_id" : int,
    "SAUC"     : float,
    "AUC"      : float,
    "NSS"      : float,
    "CC"       : float
}]
eval[{
    "SAUC" : float,
    "AUC"  : float,
    "NSS"  : float,
    "CC"   : float
}]
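
A minimal sketch of invoking the evaluation: the import paths and the class name SALICONEval are assumptions modeled on the COCO evaluation API; see the Python saliency evaluation demo for the actual interface.

from salicon.salicon import SALICON       # assumed import path
from saliconeval.eval import SALICONEval  # assumed import path and class

salicon = SALICON("fixations_val2014.json")
saliconRes = salicon.loadRes("my_algorithm_val2014_results.json")

# Evaluate the results against the ground truth fixations.
saliconEval = SALICONEval(salicon, saliconRes)
saliconEval.evaluate()

# eval: metrics aggregated across all evaluated images.
for metric, score in saliconEval.eval.items():
    print("%s: %.3f" % (metric, score))

# evalImgs: one struct of per-image scores for each image.
for img in saliconEval.evalImgs:
    print(img["image_id"], img["NSS"])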

To obtain results on the SALICON test set, for which ground truth annotations are hidden, generated results must be submitted to the evaluation server. For instructions on submitting results to the evaluation server, please see the upload page. The exact same evaluation code is used to evaluate saliency maps on the test set.