The objective of this project is to develop mobile systems that recognize human activity and user context with dynamically varying sensor setups, using goal-oriented, cooperative sensing. We refer to such systems as opportunistic, since they take advantage of sensing modalities that just happen to be available, rather than forcing the user to deploy specific, application-dependent sensor systems.
This project is grounded in wearable computing and pervasive/ubiquitous computing or Ambient Intelligence (AmI). The vision of AmI is that of pervasive but transparent technology, always on, always present, that provides the appropriate information, assistance and support to users at appropriate moments, proactively and in a natural way. The key mechanism to achieve this is to recognize the user's activities and the user's context from body-worn and ambient sensor-enabled devices, in order to infer automatically when, how, and by which modality to support the user.
OPPORTUNITY aims to develop a novel paradigm for context and activity recognition that removes the static constraints placed until now on sensor availability, placement, and characteristics. This is in contrast to most state-of-the-art approaches, which assume fixed, narrowly defined sensor configurations dedicated to often equally narrowly defined recognition tasks. Currently, for each application, the user needs to place specific sensors at well-defined locations in the environment and on the body. For widespread use of context awareness and activity recognition, this approach is not realistic.

As the user moves around, he is at times in highly instrumented environments where a lot of information is available; at other times he stays in places with little or no sensor infrastructure. Concerning on-body sensing, the best one can realistically expect is that at any given point in time the user carries a more or less random collection of sensor-enabled devices. Such devices include mobile phones (today often equipped with GPS and a variety of sensors), watches (also available with a wide range of sensors), headsets, or intelligent garments (shoe-worn motion sensors are already commercially available). As the user leaves devices behind, picks up new ones, and changes his outfit, the sensor configuration changes dynamically. In addition, the on-body location of the sensors may change: a mobile phone can be placed in a trouser pocket, in a hip holder, in a backpack, or in the user's hand. Finally, large-scale sensor systems deployed in real-life environments over long time periods are bound to experience failures, again leading to dynamically varying sensor setups.
In summary, considering realistic settings, no static assumptions can be made about the availability, placement, and characteristics of sensors (sensors and other information sources become dynamically available/unavailable at unpredictable points in time).
OPPORTUNITY addresses this challenge by developing generic principles, algorithms and system architecture to reliably recognize complex activities and contexts despite the absence of static assumptions about sensor configurations.
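To make the idea of recognition without static sensor assumptions concrete, the following is a minimal sketch of decision-level fusion over whichever sensors happen to be available at a given moment. All names here (the activity labels, `fuse_predictions`, the example probability vectors) are illustrative assumptions for this sketch, not the project's actual algorithms or API.

```python
# Hypothetical sketch: decision-level fusion over a dynamically varying
# sensor set. Each available sensor contributes a class-probability
# vector; the fused estimate degrades gracefully as sensors appear and
# disappear, rather than relying on a fixed sensor configuration.

ACTIVITIES = ["walking", "sitting", "standing"]  # illustrative label set

def fuse_predictions(available_predictions):
    """Average the probability vectors of the currently available
    sensors and return the most likely activity label, or None if
    no sensor is available (abstain rather than guess)."""
    if not available_predictions:
        return None
    n = len(ACTIVITIES)
    fused = [0.0] * n
    for probs in available_predictions:
        for i in range(n):
            fused[i] += probs[i] / len(available_predictions)
    return ACTIVITIES[max(range(n), key=lambda i: fused[i])]

# Example: a phone and a watch are present; a shoe sensor has dropped out.
phone = [0.7, 0.2, 0.1]
watch = [0.5, 0.3, 0.2]
print(fuse_predictions([phone, watch]))  # -> walking
print(fuse_predictions([]))              # -> None
```

The point of the sketch is only the interface: the fusion step takes *whatever* predictions exist right now, so sensors joining or leaving the ensemble change the input list, not the recognition chain itself.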
Overall, the project is organized into five technical workpackages built around four key functions: efficient large-scale sensing (WP4); the opportunistic context/activity recognition chain (WP1, focusing on sensing and feature extraction, and WP2, focusing on classification and decision fusion); dynamic adaptation and autonomous evolution (WP3); and validation scenarios (WP5).
We develop opportunistic activity recognition systems: goal-oriented sensor assemblies that spontaneously arise and self-organize to achieve a common activity- and context-recognition goal. To this end, we develop the algorithms and architectures underlying context recognition in opportunistic systems.