NIPS 2012


Workshop

Machine Learning Approaches to Mobile Context Awareness

Katherine Ellis · Gert Lanckriet · Tommi Jaakkola · Lenny Grokop

Emerald Bay 3, Harveys Convention Center Floor (CC)

The ubiquity of mobile phones, packed with sensors such as accelerometers, gyroscopes, light and proximity sensors, Bluetooth and WiFi radios, GPS radios, microphones, etc., has brought increased attention to the field of mobile context awareness. This field examines problems relating to inferring some aspect of a user’s behavior, such as their activity, mood, interruptibility, situation, etc., using mobile sensors. There is a wide range of applications for context-aware devices. In the healthcare industry, for example, such devices could provide support for cognitively impaired people, give healthcare professionals simple ways of monitoring patient activity levels during rehabilitation, and perform long-term health and fitness monitoring. In the transportation industry they could be used to predict and redirect traffic flow or to provide telematics for auto-insurers. Context awareness in smartphones can help automate functionality such as redirecting calls to voicemail when the user is uninterruptible, automatically updating status on social networks, etc., and can be used to provide personalized recommendations.

Existing work in mobile context awareness has predominantly come from researchers in the human-computer interaction community. There the focus has been on building custom sensor/hardware solutions to perform social science experiments or solve application-specific problems. The goal of this workshop is to bring the challenging inferential problems of mobile context awareness to the attention of the machine learning community. We believe these problems are fundamentally solvable. We seek to get this community excited about these problems, encourage collaboration between people with different backgrounds, explore how to integrate research efforts, and discuss where future work needs to be done. We are looking for participation from both individuals with machine learning backgrounds, who may or may not have attacked context awareness problems before, and individuals with application-specific backgrounds. Although the dominant mobile sensing platform these days is the smartphone, we also welcome contributions that work with data from a variety of body-worn sensors, including standalone accelerometers, GPS, microphones, EEG, ECG, etc., and custom hardware platforms that combine multiple sensors. We are particularly interested in contributions that deal with inferring context by fusing information from different sensor sources.

In particular, we would like the workshop to address the following topics:

(1) What is the best way to combine heterogeneous data from multiple sensors? Is contextual information encoded in specific correlation patterns, or is there one sensor that “says it all” for each context, and can we learn this automatically? How do we model and analyze correlations between heterogeneous data? (A brief illustrative sketch of one such correlation analysis appears after this list.)

(2) Feature extraction: what are the features that best characterize these new sensor streams for analysis and learning? In video and speech processing, such features have emerged over the years and are now commonly accepted. Are there certain features best suited for accelerometer, audio-environment, and GPS data streams, and can we learn them automatically? (A brief sketch of typical hand-crafted accelerometer features appears after this list.)

(3) A major part of this workshop will be dedicated to the discussion of data. The community has a great need for a shared public dataset that will allow researchers to compare algorithms and improve collaboration. In our panel discussion we will discuss issues such as creating a central data repository, common data collection apps, and unique issues with context-awareness data.
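For topic (1), one standard way to model correlations between two heterogeneous feature streams is canonical correlation analysis. The sketch below is a minimal, hedged illustration, not a method endorsed by the workshop: the accelerometer and audio feature matrices are hypothetical placeholders standing in for per-window features computed from real sensor data.

```python
# Minimal sketch: canonical correlation analysis between two hypothetical
# per-window feature matrices from heterogeneous sensors (accelerometer vs.
# audio environment). The random data below is a placeholder only.
import numpy as np
from sklearn.cross_decomposition import CCA

rng = np.random.default_rng(0)
n_windows = 500
accel_feats = rng.standard_normal((n_windows, 12))   # hypothetical accelerometer features
audio_feats = rng.standard_normal((n_windows, 20))   # hypothetical audio-environment features

cca = CCA(n_components=3)
accel_c, audio_c = cca.fit_transform(accel_feats, audio_feats)

# Correlation of each pair of canonical variates: high values would suggest
# shared contextual structure across the two modalities.
for k in range(3):
    r = np.corrcoef(accel_c[:, k], audio_c[:, k])[0, 1]
    print(f"canonical correlation {k}: {r:.2f}")
```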
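For topic (2), the sketch below shows the kind of hand-crafted, per-window features commonly used as baselines for tri-axial accelerometer streams (per-axis statistics plus frequency-domain energy). The window length, sampling rate, and frequency band are illustrative assumptions, not values prescribed by the workshop.

```python
# Minimal sketch: baseline features for one window of tri-axial accelerometer
# data. Sampling rate, window length, and band limits are assumptions.
import numpy as np

def accel_window_features(window, fs=50.0):
    """window: (n_samples, 3) array of x/y/z acceleration for one window."""
    feats = []
    feats.extend(window.mean(axis=0))      # per-axis mean
    feats.extend(window.std(axis=0))       # per-axis standard deviation
    magnitude = np.linalg.norm(window, axis=1)
    feats.append(magnitude.mean())         # mean acceleration magnitude
    # Energy in the 1-5 Hz band, where most human body motion concentrates.
    spectrum = np.abs(np.fft.rfft(magnitude - magnitude.mean()))
    freqs = np.fft.rfftfreq(len(magnitude), d=1.0 / fs)
    band = (freqs >= 1.0) & (freqs <= 5.0)
    feats.append(np.sum(spectrum[band] ** 2) / len(magnitude))
    return np.asarray(feats)

# Example: one 2.56 s window sampled at 50 Hz (128 samples) of placeholder data.
window = np.random.default_rng(1).standard_normal((128, 3))
print(accel_window_features(window).shape)   # (8,)
```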
