Date of Graduation

5-2018

Document Type

Dissertation

Degree Name

Doctor of Philosophy in Computer Science (PhD)

Degree Level

Graduate

Department

Computer Science & Computer Engineering

Advisor

Michael Gashler

Committee Member

John Gauch

Second Committee Member

Xintao Wu

Third Committee Member

Jingxian Wu

Keywords

Computer Vision, Context-aware Computing, Depth-based Positioning, Logical Sensors, Machine Learning

Abstract

In this dissertation, we explore methods for enhancing the context-awareness capabilities of modern computers, including mobile devices, tablets, wearables, and traditional computers. Advancements include proposed methods for fusing information from multiple logical sensors, localizing nearby objects using depth sensors, and building models to better understand the content of 2D images.

First, we propose a system called Unagi, designed to incorporate multiple logical sensors into a single framework that allows context-aware application developers to easily test new ideas and create novel experiences. Unagi is responsible for collecting data, extracting features, and building personalized models for each individual user. We demonstrate the utility of the system with two applications: adaptive notification filtering and a network content prefetcher. We also thoroughly evaluate the system with respect to predictive accuracy, temporal delay, and power consumption.
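The pipeline described above (collect raw data from several logical sensors, extract features, feed a per-user model) can be sketched as follows. This is an illustrative interface only; the `LogicalSensor` and `ContextPipeline` names and their methods are assumptions, not Unagi's actual API.

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class LogicalSensor:
    """A logical sensor: a named data source plus a feature extractor.

    Hypothetical interface for illustration; Unagi's real design is
    not specified in this abstract.
    """
    name: str
    read: Callable[[], object]                 # collects one raw sample
    extract: Callable[[object], List[float]]   # turns it into features

class ContextPipeline:
    """Fuses features from several logical sensors into a single
    vector that a personalized, per-user model can consume."""

    def __init__(self, sensors: List[LogicalSensor]):
        self.sensors = sensors

    def feature_vector(self) -> List[float]:
        features: List[float] = []
        for s in self.sensors:
            features.extend(s.extract(s.read()))
        return features
```

A developer would register each logical sensor once and let the pipeline produce fused feature vectors on demand, keeping application code decoupled from individual sensor details.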

Next, we discuss a set of techniques that can be used to accurately determine the location of objects near a user in 3D space using a mobile device equipped with both depth and inertial sensors. Using a novel chaining approach, we are able to locate objects farther away than the standard range of the depth sensor without compromising localization accuracy. Empirical testing shows our method is capable of localizing objects 30m from the user with an error of less than 10cm.
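The core geometric idea behind chaining can be sketched as composing inertially-estimated relative poses between waypoints, then mapping a depth-sensor reading at the final waypoint back into the user's starting frame. This is a minimal sketch of that composition, not the dissertation's full method (which must also handle drift and error accumulation).

```python
import numpy as np

def localize_chained(relative_poses, depth_point):
    """Localize a point seen by the depth sensor at the end of a chain.

    relative_poses: list of 4x4 homogeneous transforms, each mapping
        the next waypoint's frame into the previous one (assumed to be
        estimated from the inertial sensors between depth captures).
    depth_point: 3-vector of the object in the final waypoint's frame,
        as reported by the depth sensor.
    Returns the object's 3D position in the starting (user) frame.
    """
    T = np.eye(4)
    for rel in relative_poses:
        T = T @ rel          # accumulate the chain of transforms
    p = np.append(depth_point, 1.0)  # homogeneous coordinates
    return (T @ p)[:3]
```

Because each link in the chain is only as long as the depth sensor's native range, objects well beyond that range (e.g. 30m away) can still be expressed in the starting frame by composing several short links.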

Finally, we demonstrate a set of techniques that allow a multi-layer perceptron (MLP) to learn resolution-invariant representations of 2D images, including the proposal of an MCMC-based technique to improve the selection of pixels for mini-batches used for training. We also show that a deep convolutional encoder could be trained to output a resolution-independent representation in constant time, and we discuss several potential applications of this research, including image resampling, image compression, and security.
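The MCMC-based mini-batch idea can be illustrated with a simple Metropolis-style random walk over pixel coordinates, where moves toward pixels with higher training error are accepted more often, concentrating the batch on poorly-fit regions. The proposal distribution, step size, and acceptance rule here are assumptions for illustration; the dissertation's actual technique may differ.

```python
import numpy as np

def mcmc_pixel_batch(error_map, batch_size, rng=None, step=3):
    """Sample pixel coordinates for a training mini-batch.

    error_map: 2D array of per-pixel reconstruction error; pixels with
        higher error are visited (and thus trained on) more often.
    Returns a list of (row, col) coordinates of length batch_size.
    """
    rng = np.random.default_rng() if rng is None else rng
    h, w = error_map.shape
    y, x = int(rng.integers(h)), int(rng.integers(w))
    batch = []
    while len(batch) < batch_size:
        # Propose a nearby pixel.
        ny = int(np.clip(y + rng.integers(-step, step + 1), 0, h - 1))
        nx = int(np.clip(x + rng.integers(-step, step + 1), 0, w - 1))
        # Metropolis acceptance: move with probability
        # min(1, err_new / err_old), so high-error pixels dominate.
        if rng.random() < min(1.0, error_map[ny, nx] / max(error_map[y, x], 1e-12)):
            y, x = ny, nx
        batch.append((y, x))
    return batch
```

Compared with uniform pixel sampling, this biases gradient updates toward the pixels the network currently models worst, which is the stated motivation for improving mini-batch selection.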
