Some benchmark eye movement datasets (from "State-of-the-Art in Visual Attention Modeling")

1. Some benchmark eye movement datasets over still images often used to evaluate visual attention models. Reference numbers are those of the survey's bibliography; a minimal evaluation sketch follows the table.

Year | Authors / dataset | Ref. | Paper | URL
2009 | Kienzle | 165 | Center-Surround Patterns Emerge as Optimal Predictors for Human Saccade Targets | -
2008 | Einhauser | 84 | Objects Predict Fixations Better Than Early Saliency | -
2003 | Ouerhani | 210 | Empirical Validation of Saliency-Based Model of Visual Attention | -
2005 | Bruce and Tsotsos | 144 | Saliency Based on Information Maximization | www-sop.inria.fr/members/Neil.Bruce
1996 | Stark and Choi | 211 | Experimental Metaphysics: The Scanpath as an Epistemological Mechanism | -
2010 | Chikkerur | 154 | What and Where: A Bayesian Inference Theory of Visual Attention | www.sharat.org
2003 | Torralba | 92 | Modeling Global Scene Factors in Attention | people.csail.mit.edu/torralba/GlobalFeaturesAndAttention
2009 | Judd | 166 | Learning to Predict Where Humans Look | people.csail.mit.edu/tjudd/WherePeopleLook/index.html
2007 | Cerf | 167 | Predicting Human Gaze Using Low-Level Saliency Combined with Face Detection | www.fifadb.com
2005 | Peters | 134 | Components of Bottom-Up Gaze Allocation in Natural Images | ilab.usc.edu
1999 | Reinagel and Zador | 212 | Natural Scenes at the Center of Gaze | zadorlab.cshl.edu
2008 | Hwang and Pomplun | 86 | A Model of Top-Down Control of Attention during Visual Search in Real-World Scenes | www.cs.umb.edu/~marc
2008 | Kootstra | 136 | Paying Attention to Symmetry | www.csc.kth.se/~kootstra
2007 | Tatler | 123 | The Central Fixation Bias in Scene Viewing: Selecting an Optimal Viewing Position Independently of Motor Biases and Image Feature Distributions | www.activevisionlab.org
2009 | Engmann | 182 | Saliency on a Natural Scene Background: Effects of Color and Luminance Contrast Add Linearly | -
2009 | Engelke | 213 | Visual Attention Modeling: Region-of-Interest Versus Fixation Patterns | -
2006 | Le Meur | 41 | A Coherent Computational Approach to Model Bottom-Up Visual Attention | www.irisa.fr/temics/staff/lemeur
2009 | Ehinger | 87 | Modeling Search for People in 900 Scenes: A Combined Source Model of Eye Guidance | cvcl.mit.edu/searchmodels/
2008 | Rajashekar | 174 | GAFFE: A Gaze-Attentive Fixation Finding Engine | live.ece.utexas.edu/research/doves/
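
As a concrete illustration of how these still-image datasets are used, the sketch below scores a saliency map against recorded fixation locations with the Normalized Scanpath Saliency (NSS) metric associated with Peters et al. [134]. It is a minimal sketch only: the (row, col) pixel format of the fixations and the array shapes are assumptions for illustration, not the on-disk format of any particular dataset above.

```python
# Minimal sketch: NSS scoring of one saliency map against one set of fixations.
# Assumptions (not from the survey): saliency_map is a 2-D float array for one
# image, fixations is an (N, 2) integer array of (row, col) pixel coordinates.
import numpy as np

def nss(saliency_map: np.ndarray, fixations: np.ndarray) -> float:
    """Mean saliency, in z-score units, at the recorded fixation locations."""
    s = (saliency_map - saliency_map.mean()) / (saliency_map.std() + 1e-8)
    rows, cols = fixations[:, 0], fixations[:, 1]
    return float(s[rows, cols].mean())

# Synthetic example: a random "model" scored on random fixations.
rng = np.random.default_rng(0)
sal = rng.random((480, 640))
fix = np.column_stack([rng.integers(0, 480, 20), rng.integers(0, 640, 20)])
print(f"NSS = {nss(sal, fix):.3f}")  # close to 0 for an uninformative map
```

Higher NSS means the fixated locations received above-average saliency; an uninformative map scores near zero.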

2. Some benchmark eye movement datasets over video stimuli for evaluating visual attention prediction. A sketch of frame-level gaze alignment follows the table.

Year | Authors / dataset | Ref. | Paper | URL
2005 | CRCNS-ORIG | 145 | Bayesian Surprise Attracts Human Attention | crcns.org/data-sets/eye/eye-1
2005 | CRCNS-MTV | 145 | Bayesian Surprise Attracts Human Attention | crcns.org/data-sets/eye/eye-1
2010 | Jia Li | 133 | Probabilistic Multi-Task Learning for Visual Saliency Estimation in Video | www.jdl.ac.cn/user/jiali/
2007 | Peters and Itti | 101 | Beyond Bottom-Up: Incorporating Task-Dependent Influences into a Computational Model of Spatial Attention | ilab.usc.edu/rjpeters/
2007 | Shic and Scassellati | 74 | A Behavioral Analysis of Computational Models of Visual Attention | sites.google.com/site/fredshic/home
2007 | Marat | 49 | Video Summarization Using a Visual Attention Model | star1g.ovh.net/~qgsmabaq/sophie/index.php
2007 | Le Meur | 138 | Predicting Visual Fixations on Video Based on Low-Level Visual Features | www.irisa.fr/temics/staff/lemeur
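
For the video datasets, gaze is recorded continuously while a clip plays, so prediction is usually evaluated frame by frame. The sketch below shows one common preprocessing step under assumed formats (gaze sample times in seconds, (row, col) pixel positions, a fixed frame rate): binning gaze samples onto video frames by timestamp to get one fixation map per frame, which can then be scored with the same metrics used for still images. The field names and sampling rates are illustrative, not those of any dataset in the table.

```python
# Minimal sketch: align eye-tracker samples to video frames by timestamp.
import numpy as np

def gaze_to_frames(timestamps_s, gaze_xy, fps, n_frames, frame_shape):
    """Return one binary fixation map per video frame.

    timestamps_s : (N,) gaze-sample times in seconds.
    gaze_xy      : (N, 2) gaze positions as (row, col) pixels.
    """
    h, w = frame_shape
    maps = np.zeros((n_frames, h, w), dtype=np.uint8)
    # Assign each gaze sample to the frame being displayed at that time.
    frame_idx = np.clip((np.asarray(timestamps_s) * fps).astype(int), 0, n_frames - 1)
    for f, (r, c) in zip(frame_idx, gaze_xy):
        if 0 <= r < h and 0 <= c < w:
            maps[f, int(r), int(c)] = 1
    return maps

# Synthetic example: 2 seconds of 240 Hz gaze data over a 30 fps clip.
rng = np.random.default_rng(1)
t = np.arange(0, 2.0, 1 / 240)
xy = np.column_stack([rng.integers(0, 480, t.size), rng.integers(0, 640, t.size)])
fix_maps = gaze_to_frames(t, xy, fps=30, n_frames=60, frame_shape=(480, 640))
print(fix_maps.shape, fix_maps.sum())  # (60, 480, 640), up to 480 cells set
```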
