Collective Activity Dataset.
Wongun Choi, Khuram Shahid, Silvio Savarese
Department of Electrical Engineering, University of Michigan, Ann Arbor.
This page describes the Collective Activity Dataset. The dataset contains 5 collective activities (crossing, walking, waiting, talking, and queueing) and 44 short video sequences, some of which were recorded with consumer hand-held digital cameras from varying viewpoints.
Example Images
Crossing | Waiting | Queueing | Walking | Talking
Annotation
Every 10th frame in all video sequences was annotated with the image location of each person, an activity ID, and a pose direction. Each annotation line has the format:
FRAME NUMBER, X, Y, WIDTH, HEIGHT, CLASS ID, POSE ID
ex. 001 366 168 106 212 5 3
001 512 190 98 195 5 3
001 440 187 84 167 5 3
001 339 191 83 165 5 3
CLASS ID
1. NA 2. Crossing 3. Waiting 4. Queueing 5. Walking 6. Talking
POSE ID
1. Right 2. Front-right 3. Front 4. Front-left 5. Left 6. Back-left 7. Back 8. Back-right
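For convenience, below is a minimal Python sketch for reading one annotation file into a list of records. It assumes whitespace-separated fields as in the example above; the file name "annotations.txt" is only a placeholder.

from collections import namedtuple

CLASS_NAMES = ["NA", "Crossing", "Waiting", "Queueing", "Walking", "Talking"]
POSE_NAMES = ["Right", "Front-right", "Front", "Front-left",
              "Left", "Back-left", "Back", "Back-right"]

# One record per annotated person in one frame.
Box = namedtuple("Box", "frame x y width height class_id pose_id")

def load_annotations(path):
    """Read an annotation file and return a list of Box records."""
    boxes = []
    with open(path) as f:
        for line in f:
            fields = line.split()
            if len(fields) != 7:
                continue  # skip blank or malformed lines
            frame, x, y, w, h, cls, pose = (int(v) for v in fields)
            boxes.append(Box(frame, x, y, w, h, cls, pose))
    return boxes

if __name__ == "__main__":
    # "annotations.txt" is a placeholder; CLASS ID and POSE ID are 1-indexed.
    for box in load_annotations("annotations.txt"):
        print(box.frame, CLASS_NAMES[box.class_id - 1], POSE_NAMES[box.pose_id - 1])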
Dataset download
You can download dataset.ver1 here.
Our Results
Our results on this dataset are presented in the following paper:
What are they doing? : Collective Activity Classification Using Spatio-Temporal Relationship Among People. (PDF)
W. Choi, K. Shahid, S. Savarese. 9th International Workshop on Visual Surveillance (VSWS09), in conjunction with ICCV 2009.
Augmented Dataset
We augmented the dataset by adding two more categories (dancing and jogging). Since Walking is more of an isolated activity than a collective one, we recommend removing it and including the following two categories instead. The annotation files have the same format as the original dataset, and the dancing/jogging activities are labeled 7 and 8, respectively (see the sketch after the download link below).
You can download the Augmented Dataset (including the Dancing/Jogging activities) here.
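As a reference for the extended labels, here is a small sketch of the augmented CLASS ID table and the recommended filtering of Walking. It builds on the parsing sketch above; the file name is again a placeholder.

# IDs 1-6 match the original CLASS ID list; 7 and 8 are the new categories.
AUGMENTED_CLASS_NAMES = {
    1: "NA", 2: "Crossing", 3: "Waiting", 4: "Queueing", 5: "Walking",
    6: "Talking", 7: "Dancing", 8: "Jogging",
}

# Drop Walking (ID 5), as recommended above; load_annotations is the helper
# from the Annotation section, and the path is a placeholder.
WALKING_ID = 5
boxes = [b for b in load_annotations("augmented_annotations.txt")
         if b.class_id != WALKING_ID]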
Target Trajectories
You can also download the trajectories we used for classification here.
Pose Classification
[NEW] A simple HoG-based 4/8-viewpoint classifier can be found here.