SALIENCY IN CONTEXT
SALICON is an ongoing effort to understand and predict visual attention. Through an innovative experimental paradigm and crowdsourced human behavioral data, we offer new possibilities for advancing the ultimate goal of visual understanding.
Explore: /explore
Challenge: /challenge-2017
SALICON – DATASET
Eye tracking is widely used in visual neuroscience and cognitive science to study questions such as visual attention and decision making. Computational models that predict where people look have direct applications to a variety of computer vision tasks. Because both the stimuli and the underlying cognitive processes are inherently complex, we envision that larger-scale eye-tracking data can advance the understanding of these questions and of how humans see. The scale of current eye-tracking experiments, however, is limited, since accurate gaze tracking requires specialized equipment. With our novel psychophysical and crowdsourcing paradigm, the SALICON dataset offers a large set of saliency annotations on the popular Microsoft Common Objects in Context (MS COCO) image database. These data complement the task-specific annotations of MS COCO to advance the ultimate goal of visual understanding.
Visit MS COCO: https://cocodataset.org/
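For readers who want a concrete picture of what saliency annotations look like, here is a minimal loading sketch in Python. The file name and JSON fields below are assumptions for illustration (a COCO-style layout with per-viewer fixation lists), not the official format or API; consult the actual release for details.

```python
import json
import numpy as np

# Hypothetical file name and field names -- for illustration only;
# the official release may use a different layout.
with open("salicon_fixations_train.json") as f:
    data = json.load(f)

# Render the fixations for one image as a binary fixation map,
# assuming fixations are stored as (row, col) pixel coordinates.
img = data["images"][0]
fix_map = np.zeros((img["height"], img["width"]), dtype=np.float32)
for ann in data["annotations"]:
    if ann["image_id"] == img["id"]:
        for r, c in ann["fixations"]:
            fix_map[int(r) - 1, int(c) - 1] = 1.0  # 1-based coords assumed
```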
Download our research paper
SALICON: Saliency in Context
Ming Jiang*, Shengsheng Huang*, Juanyong Duan*, Qi Zhao
CVPR 2015 (* indicates equal contribution)
PDF: http://www-users.cs.umn.edu/~qzhao/publications/pdf/salicon_cvpr15.pdf
BibTeX: http://www-users.cs.umn.edu/~qzhao/publications/bib/jiang2015salicon.txt
FAQ
Why use images from MS COCO?
MS COCO is a new large-scale image dataset that highlights non-iconic views and objects in context. It provides a rich set of task-specific annotations for image recognition, segmentation, and captioning. This rich contextual information enables joint studies of image saliency and semantics. For example, by highlighting important objects, our data naturally rank the existing object categories and suggest new categories of interest.

How was the data collected?
We designed a new mouse-contingent, multi-resolution paradigm, based on neurophysiological and psychophysical studies of peripheral vision, to simulate the natural viewing behavior of humans. The paradigm allows a general-purpose mouse, rather than an eye tracker, to record viewing behavior. The experiment was deployed on Amazon Mechanical Turk to enable large-scale data collection. Aggregating the mouse trajectories of many viewers yields a probability distribution of visual attention. (See the sketch below for a simplified illustration of the paradigm.)
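The following Python sketch illustrates the idea behind the mouse-contingent display: the image stays sharp around the cursor and is blurred in the periphery. It is a deliberate two-level simplification of the multi-resolution scheme described above, and the radius and blur parameters are illustrative values, not those used in the actual experiment.

```python
import numpy as np
import cv2

def foveate(image, mouse_xy, fovea_radius=100.0, sigma=10.0):
    """Two-level approximation of a mouse-contingent display:
    sharp near the cursor, blurred in the periphery. Parameter
    values are illustrative, not those of the real paradigm."""
    blurred = cv2.GaussianBlur(image, (0, 0), sigma)
    h, w = image.shape[:2]
    ys, xs = np.mgrid[0:h, 0:w]
    dist = np.hypot(xs - mouse_xy[0], ys - mouse_xy[1])
    # Blend weight rises smoothly from 0 (fovea) to 1 (periphery).
    alpha = np.clip((dist - fovea_radius) / fovea_radius, 0.0, 1.0)
    if image.ndim == 3:
        alpha = alpha[..., None]  # broadcast over color channels
    out = (1.0 - alpha) * image + alpha * blurred
    return out.astype(image.dtype)
```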
Are the annotations equivalent to eye fixations recorded with an eye tracker?
The paradigm was validated with controlled laboratory data as well as large-scale online data. Comparisons on the OSIE dataset (700 natural images) show that the two systems produce highly similar attention maps. Given this similarity, the new method provides reasonable ground truth for saliency prediction and other computer vision tasks. For saliency benchmarking, model rankings have proven consistent across datasets (OSIE and SALICON) and across scenarios (eye tracking, and mouse tracking both in a laboratory setting and on Amazon Mechanical Turk).

Are all the annotations publicly available?
We plan to provide more annotations for the MS COCO dataset by expanding the database periodically. In this first release, we provide annotations for 10,000 training images; the next 10,000 for validation and test will be available soon. The test data will be used only for on-demand evaluation of saliency algorithms on this website.

How can I test my saliency algorithm with the data?
In addition to the data, we will offer a MATLAB toolkit to assist with data processing and model evaluation. (An illustrative sketch of such an evaluation follows below.)
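The MATLAB toolkit is the intended route for evaluation. Purely as an illustration of what such an evaluation involves, the Python sketch below aggregates fixations into a smoothed ground-truth map and scores a predicted saliency map with two standard metrics, NSS and CC. The smoothing bandwidth is an assumed value, not the toolkit's.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def fixations_to_map(fix_points, shape, sigma=19.0):
    """Aggregate (row, col) fixations from many viewers into a smooth
    attention map. The bandwidth sigma is an assumed value."""
    fmap = np.zeros(shape, dtype=np.float64)
    for r, c in fix_points:
        fmap[r, c] += 1.0
    fmap = gaussian_filter(fmap, sigma)
    return fmap / fmap.max()

def nss(pred, fix_points):
    """Normalized Scanpath Saliency: mean z-scored prediction value
    at the ground-truth fixation locations."""
    z = (pred - pred.mean()) / pred.std()
    return float(np.mean([z[r, c] for r, c in fix_points]))

def cc(pred, gt_map):
    """Pearson correlation between predicted and ground-truth maps."""
    return float(np.corrcoef(pred.ravel(), gt_map.ravel())[0, 1])
```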