Date of Graduation

5-2018

Document Type

Thesis

Degree Name

Master of Science in Computer Science (MS)

Degree Level

Graduate

Department

Computer Science & Computer Engineering

Advisor/Mentor

Michael Gashler

Committee Member

Xintao Wu

Second Committee Member

John Gauch

Keywords

Continuous Functions, Deep Learning, Generative Models, Machine Learning, Neural Networks

Abstract

Generative models are a class of machine learning models capable of producing digital images with plausibly realistic properties. They are useful in applications such as visualizing designs, rendering game scenes, and enhancing images at higher magnifications. Unfortunately, existing generative models generate only images with a discrete, predetermined resolution. This paper presents the Continuous Space Generative Model (CSGM), a novel generative model capable of generating images as a continuous function rather than as a discrete set of pixel values. Like generative adversarial networks, CSGM trains by alternating between generative and discriminative steps. But unlike generative adversarial networks, CSGM uses a single model for both steps, so that learning can transfer between the two operations. Also, the continuous images that CSGM generates may be sampled at arbitrary resolutions, opening the way for new possibilities with generative models. This paper presents results obtained by training on the MNIST dataset of handwritten digits to validate the method, and it elaborates on the potential applications for continuous generative models.
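The key idea in the abstract, that a generated image is a continuous function which can be sampled on a grid of any resolution, can be sketched as follows. This is an illustrative example, not code from the thesis: the names `sample_continuous_image` and `toy_generator` are hypothetical, and the toy function stands in for a trained CSGM network, which would likewise map continuous (x, y) coordinates to an intensity value.

```python
import numpy as np

def sample_continuous_image(f, resolution):
    """Sample a continuous image function f(x, y) -> intensity
    on a regular grid of the given resolution over [0, 1]^2."""
    coords = (np.arange(resolution) + 0.5) / resolution  # pixel centers
    xs, ys = np.meshgrid(coords, coords)
    return f(xs, ys)

def toy_generator(x, y):
    # Stand-in for a trained continuous generator: any smooth
    # function of continuous coordinates works for illustration.
    return 0.5 + 0.5 * np.sin(6.0 * x) * np.cos(6.0 * y)

# The same underlying "image" sampled at two different resolutions:
low  = sample_continuous_image(toy_generator, 28)   # MNIST-sized grid
high = sample_continuous_image(toy_generator, 112)  # same image, 4x finer
```

Because the generator is queried per coordinate, nothing about the model fixes the output resolution; only the sampling grid does.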
