Welcome to the SALICON Challenge 2015!

1. Introduction

The SALICON Challenge is designed to evaluate the performance of algorithms that predict visual saliency in natural images. The challenge has three motivations: (1) to facilitate the study of attention in context and with non-iconic views, (2) to provide larger-scale human attentional data, and (3) to encourage the development of methods that leverage the multiple annotation modalities of MS COCO. Saliency prediction results could in turn benefit other tasks such as recognition and captioning, since humans make multiple fixations to understand the visual input in natural scenes. Teams compete against each other by training their algorithms on the SALICON / MS COCO dataset; their results are then compared against human behavioral data.

2. Rules to Participate

Please submit through http://lsun.cs.princeton.edu/ before May 29, 2015. After the deadline, we will continue to host the challenge on CodaLab, where we follow the practice of the MS COCO Captioning Challenge and extend its tools and formats for the SALICON Challenge. Participants are encouraged, but not restricted, to train their algorithms on the SALICON dataset with all available annotations, including fixations, instances, and captions. Please specify any use of external training data in the “method description” field when uploading results to the evaluation server. We encourage submitting results on both the validation and test sets; the results will be public and used for performance diagnosis and visualization.
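
Because the evaluation server inherits the MS COCO Captioning Challenge formats, results are presumably uploaded as a single JSON array with one entry per image. The sketch below shows one plausible way to serialize predictions in that style; the key names "image_id" and "saliency_map" and the flattened-list encoding are illustrative assumptions, not the official schema, so please defer to the upload instructions for the exact format.

    # A minimal sketch of serializing predictions into a COCO-style JSON
    # results file. ASSUMPTION: the keys "image_id" and "saliency_map"
    # and the flattened-list encoding are hypothetical, for illustration.
    import json
    import numpy as np

    def save_results(predictions, out_path="results.json"):
        """predictions: dict mapping image_id (int) -> 2-D saliency map."""
        results = []
        for image_id, sal_map in predictions.items():
            sal_map = np.asarray(sal_map, dtype=np.float64)
            sal_map = sal_map / (sal_map.max() + 1e-8)  # normalize to [0, 1]
            results.append({
                "image_id": int(image_id),
                "saliency_map": sal_map.flatten().tolist(),
            })
        with open(out_path, "w") as f:
            json.dump(results, f)

    # Example: one random 480 x 640 prediction for image 1
    save_results({1: np.random.rand(480, 640)})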

3. Tools and Instructions

Please follow the instructions on the Download, Evaluation, and Upload pages for the data format and the recommended way to upload results. The SALICON API and evaluation tools have been released. The software provides the evaluation API and the most common saliency metrics, including AUC, Shuffled AUC, NSS, and CC, for algorithm development. MATLAB implementations of these metrics can be found at the MIT saliency benchmark.
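
For concreteness, the sketch below implements two of the listed metrics from their standard definitions: NSS (the z-scored predicted saliency averaged over human fixation locations) and CC (the Pearson correlation between the predicted and ground-truth saliency maps). It is an illustrative numpy version, not the released SALICON evaluation API; the function names and the binary fixation-map input format are our own conventions for this example.

    # Illustrative numpy implementations of NSS and CC, following their
    # standard definitions (consistent with the MIT saliency benchmark).
    # This is a sketch, not the released SALICON evaluation API.
    import numpy as np

    def nss(sal_map, fixation_map):
        """Normalized Scanpath Saliency: mean of the z-scored predicted
        saliency map at fixated pixels (fixation_map is binary)."""
        s = (sal_map - sal_map.mean()) / (sal_map.std() + 1e-8)
        return s[fixation_map.astype(bool)].mean()

    def cc(sal_map, gt_map):
        """Linear Correlation Coefficient between the predicted saliency
        map and the ground-truth fixation-density map."""
        s = (sal_map - sal_map.mean()) / (sal_map.std() + 1e-8)
        g = (gt_map - gt_map.mean()) / (gt_map.std() + 1e-8)
        return (s * g).mean()

    # Toy usage with random data
    pred = np.random.rand(480, 640)
    fix = np.zeros((480, 640)); fix[240, 320] = 1
    gt = np.random.rand(480, 640)
    print("NSS:", nss(pred, fix), "CC:", cc(pred, gt))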