In recent years, breakthroughs in deep learning have transformed how sensor data (e.g., images, audio, and even accelerometer and GPS readings) can be interpreted to extract the high-level information needed by cutting-edge sensor-driven systems such as smartphone apps and wearable devices. Today, the state-of-the-art computational models that, for example, recognize a face, track user emotions, or monitor physical activities are increasingly based on deep learning principles and algorithms. Unfortunately, deep models typically place severe demands on local device resources, which conventionally limits their adoption within mobile and embedded platforms. As a result, in far too many cases existing systems process sensor data with machine learning methods that were superseded by deep learning years ago.

Because the robustness and quality of sensory perception and reasoning are so critical to mobile computing, it is essential for this community to begin a careful study of two core technical questions. First, how should deep learning principles and algorithms be applied to the sensor inference problems that are central to this class of computing? This includes applications of learning, some of which are familiar from other domains (such as the processing of images and audio), in addition to those more uniquely tied to wearable and mobile systems (e.g., activity recognition). Second, what is required for current -- and future -- deep learning innovations to be either simplified or efficiently integrated into a variety of resource-constrained mobile systems? At heart, this MobiSys 2017 co-located workshop aims to consider these two broad themes; more specific topics of interest include, but are not limited to: