The challenge proposes four tasks addressing different aspects of the activity recognition problem. We provide 18 labelled sessions from 4 subjects that can be used as a training dataset, as described in the dataset description. In order to emulate realistic online conditions, we provide the data of the whole recordings without any segmentation.
Four different tasks are considered: modes of locomotion recognition (Task A), activity spotting (Task B1), gesture recognition (Task B2), and robustness to noise (Task C).
For the first three tasks, we provide data from the motion jacket (5 IMUs), the 12 Bluetooth body-worn accelerometers and the 2 inertial sensors placed on the feet. For Task C we provide only the data from the motion jacket sensors. Nevertheless, for all tasks, participants are free to use only a subset of the provided sensors.
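Because the recordings are unsegmented, a typical first step for any of the tasks is to slide a fixed-length window over the continuous sensor stream and compute one feature vector per window. The sketch below is only an illustration of that idea, not part of the challenge tools; the array names and the 30 Hz rate in the usage comment are assumptions.

```python
import numpy as np

def sliding_windows(data, labels, win_len, step):
    """Cut a continuous recording (samples x channels) into fixed-length
    windows, with simple statistics as features and a majority-vote label."""
    X, y = [], []
    for start in range(0, len(data) - win_len + 1, step):
        seg = data[start:start + win_len]
        # per-channel mean and standard deviation as window features
        X.append(np.concatenate([seg.mean(axis=0), seg.std(axis=0)]))
        # label the window by its most frequent sample label
        vals, counts = np.unique(labels[start:start + win_len],
                                 return_counts=True)
        y.append(vals[np.argmax(counts)])
    return np.array(X), np.array(y)

# Hypothetical usage, assuming 30 Hz data: 1 s windows with 50% overlap.
# X, y = sliding_windows(data, labels, win_len=30, step=15)
```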
The goal of the first task (Task A) is to classify modes of locomotion from body-worn sensors.
Classes
Stand | Walk | Sit | Lie
Testing Dataset: Subjects 2,3 (ADL4, ADL5)
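As a rough illustration of what a baseline for this task might look like (not an official reference implementation), windowed features such as those sketched above can be fed to any standard classifier. The synthetic arrays below stand in for real features; in practice they would be computed from the 18 training sessions and from the test recordings.

```python
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

# Synthetic stand-ins for windowed features; in practice X_train/y_train
# would come from the 18 training sessions and X_test from ADL4/ADL5
# of subjects 2 and 3.
rng = np.random.default_rng(0)
X_train = rng.normal(size=(200, 10))
y_train = rng.choice(["Stand", "Walk", "Sit", "Lie"], size=200)
X_test = rng.normal(size=(20, 10))

clf = KNeighborsClassifier(n_neighbors=5)
clf.fit(X_train, y_train)
pred = clf.predict(X_test)  # predictions in {Stand, Walk, Sit, Lie}
```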
Typically, activity recognition methods are evaluated on recordings that have already been segmented into the different target classes. However, realistic deployments must also detect when no relevant action is being performed (i.e. the null class). This task (Task B1) involves locating the time points at which relevant actions begin and end within a continuous recording.
The data for this task correspond to right-arm gestures performed in a daily-activities scenario (see Task B2 for the list of gestures). Labels denote whether one of the considered gestures is being executed at each point in time.
The full set of sensors is considered for this task, including the motion jacket, the 12 Bluetooth body-worn accelerometers and the inertial sensors on the feet.
Classes
Null | Activity
Testing Dataset: Subjects 2,3 (ADL4, ADL5)
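Since the task asks for start and end points rather than per-frame labels, frame-wise Null/Activity predictions eventually have to be mapped back to time intervals. A minimal sketch of that post-processing step, assuming a binary prediction array (0 = Null, 1 = Activity):

```python
import numpy as np

def frames_to_events(pred):
    """Turn a binary frame-wise prediction into a list of
    (start, end) index pairs, end exclusive."""
    # pad with zeros so every activity run has a rising and a falling edge
    padded = np.concatenate(([0], pred, [0]))
    edges = np.flatnonzero(np.diff(padded))
    return list(zip(edges[::2], edges[1::2]))

# e.g. frames_to_events(np.array([0, 1, 1, 0, 0, 1, 0])) -> [(1, 3), (5, 6)]
```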
This task (Task B2) concerns the recognition of right-arm gestures performed in a daily-activities scenario, as described above. We provide unsegmented, labelled data sets for the gesture classes listed below.
The full set of sensors is considered for this task, including the motion jacket, the 12 Bluetooth body-worn accelerometers and the inertial sensors on the feet.
Classes
Null | clean_Table | open_Drawer1 | close_Drawer1
open_Dishwasher | close_Dishwasher | open_Drawer2 | close_Drawer2
open_Fridge | close_Fridge | open_Drawer3 | close_Drawer3
move_Cup | open_Door1 | close_Door1
open_Door2 | close_Door2
Testing Dataset: Subjects 2,3 (ADL4, ADL5)
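Note that with 16 gesture classes plus Null the class distribution is heavily skewed towards Null, so per-class or frequency-weighted scores are usually more informative than raw accuracy when checking a method on this task. A small illustration using scikit-learn (the metric choice here is ours, not prescribed by the challenge):

```python
from sklearn.metrics import f1_score

# Toy per-window labels; real y_true / y_pred would use the class
# names listed above (e.g. "Null", "open_Fridge", "move_Cup").
y_true = ["Null", "Null", "open_Fridge", "close_Fridge", "Null"]
y_pred = ["Null", "open_Fridge", "open_Fridge", "Null", "Null"]

print(f1_score(y_true, y_pred, average="weighted"))  # frequency-weighted F1
print(f1_score(y_true, y_pred, average=None))        # one score per class
```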
Realistic applications are prone to noise from a variety of sources. This task focuses on methods that are robust to sensor noise; to this end, rotational and additive noise has been added to the testing dataset. The classes to be recognized are the same as for Task B2.
For this task, only the motion jacket sensors are considered.
Testing Dataset: Subject 4 (ADL4, ADL5)
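Participants who want to harden their methods against such perturbations may find it useful to corrupt the training data in a similar way. The organizers' actual noise parameters are not specified here, so the sketch below applies an arbitrary small random rotation plus Gaussian noise to a 3-axis signal purely as an illustration.

```python
import numpy as np

def perturb_triaxial(sig, max_angle_deg=15.0, sigma=0.1, rng=None):
    """Rotate a (samples x 3) signal by a random angle about the z-axis
    and add Gaussian noise. Illustrative only: the challenge's actual
    noise model and parameters are not documented here."""
    rng = rng if rng is not None else np.random.default_rng()
    a = np.deg2rad(rng.uniform(-max_angle_deg, max_angle_deg))
    R = np.array([[np.cos(a), -np.sin(a), 0.0],
                  [np.sin(a),  np.cos(a), 0.0],
                  [0.0,        0.0,       1.0]])
    return sig @ R.T + rng.normal(0.0, sigma, size=sig.shape)

# e.g. noisy = perturb_triaxial(acc)  # acc: (n_samples, 3) accelerometer data
```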
For any questions about the challenge, do not hesitate to contact the consortium.
We develop opportunistic activity recognition systems: goal-oriented sensor assemblies that spontaneously arise and self-organize to achieve a common activity- and context-recognition goal. We develop the algorithms and architectures underlying context recognition in such opportunistic systems.