Author ORCID Identifier

https://orcid.org/0000-0001-8495-5469

Date of Graduation

12-2025

Document Type

Dissertation

Degree Name

Doctor of Philosophy in Computer Science (PhD)

Degree Level

Graduate

Department

Computer Science & Computer Engineering

Advisor/Mentor

Luu, Khoa

Committee Member

Churchill, Hugh

Second Committee Member

Gauch, John

Third Committee Member

Sinha, Pawan

Fourth Committee Member

Gauch, Susan

Keywords

artificial intelligence; computational neurocomputing; computer vision; neuroscience; quantum machine learning

Abstract

In recent years, large-scale learning approaches such as unsupervised and self-supervised learning have revolutionized artificial intelligence. These methods enable machines to learn high-level representations without explicit human supervision, achieving remarkable success across vision, language, and multimodal tasks. However, such advances come at a cost: they rely on massive datasets, billions of parameters, and extensive computational resources. Despite these achievements, artificial systems still fall short of the remarkable learning efficiency of the human brain, which can infer, adapt, and generalize from limited experiences. This gap motivates a deeper exploration of how biological intelligence acquires knowledge and how these principles can inspire the next generation of AI models.

To this end, this thesis focuses on Vision–Brain Understanding (VBU), aiming to bridge the divide between machine perception and human cognition. We investigate how the brain encodes and decodes visual information using functional magnetic resonance imaging (fMRI) data. Our study introduces time-aware and memory-inspired frameworks that model how neural responses evolve across sessions and how visual memories decay over time. By aligning large-scale visual features with brain activity, we show that incorporating temporal dynamics and biological constraints leads to more consistent and interpretable representations of human visual perception. These models not only improve the accuracy of brain decoding and reconstruction but also provide insights into how the brain organizes and recalls visual experiences.

Building upon these findings, this thesis explores the emerging frontier of Quantum–Brain Approaches. Inspired by quantum principles such as superposition and entanglement, we model the distributed and interdependent nature of neural activations. Leveraging quantum machine learning, we introduce representations that capture complex correlations and uncertainty beyond the limits of classical computation. These quantum-inspired formulations offer a new computational perspective for understanding non-deterministic brain processes and advancing neural decoding at scale.

Overall, this research presents a unified perspective connecting large-scale learning, neuroscience, and quantum computation. By moving from classical deep learning to time-aware and quantum-inspired frameworks, this thesis envisions a future of integrative intelligence systems that not only perform like humans but also learn, adapt, and think with the efficiency and complexity of the human brain.
