DS Ph.D. Qualifier Presentation | Shiquan He | Monday, Dec. 9 @ 10:00 AM | Dynamical System Autoencoders

Monday, December 9, 2024
10:00 am to 11:00 am
303 Conference Room

DATA SCIENCE

Ph.D. Qualifier Presentation

Shiquan He

Monday, December 9, 2024 | 10:00 AM EST

Location: Gordon Library 303 Conference Room

 

Committee:

Randy Paffenroth, Ph.D. Advisor, Mathematical Sciences, Computer Science, Data Science

Fabricio Murai, Co-Advisor, Computer Science, Data Science

Xiangnan Kong, Co-Advisor, Computer Science, Data Science

Title: Dynamical System Autoencoders

Abstract: 

Autoencoders are an important class of deep learning models that are widely used for dimensionality reduction. However, standard autoencoders are complicated architectures, typically with several layers and many hyperparameters that require tuning. In this paper, we introduce a new type of autoencoder that we call the dynamical system autoencoder (DSAE). Like classic autoencoders, DSAEs can effectively handle dimensionality reduction and denoising, and they perform strongly on several benchmarks. However, DSAEs have, in some sense, a more flexible architecture than standard AEs; in particular, in this paper we study simple DSAEs that have only a single layer. In addition, DSAEs offer several theoretical and practical advantages arising from their implementation as iterative maps, which have been well studied for decades. Beyond the inherent simplicity of DSAEs, we also demonstrate how to use sparse matrices to reduce the number of parameters in a DSAE without sacrificing performance. Our simulation studies indicate that DSAEs outperform classic autoencoders when the encoding dimension or the training sample size is small. Additionally, we illustrate how DSAEs, and denoising autoencoders in general, can be used for supervised learning tasks.
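The abstract does not give the DSAE update equations, so the snippet below is a purely illustrative sketch, not the author's formulation: one way a single-layer autoencoder implemented as an iterated map might look in PyTorch. The class name IterativeMapAE, the tanh update, reading the code from the first code_dim coordinates, and the zero-padded decoding pass are all assumptions made for illustration.

```python
# Illustrative sketch of a single-layer iterative-map autoencoder.
# NOT the DSAE from the talk; the update rule and code extraction
# scheme here are assumptions chosen for demonstration purposes.
import torch
import torch.nn as nn

class IterativeMapAE(nn.Module):
    def __init__(self, dim, code_dim, n_steps=5):
        super().__init__()
        self.layer = nn.Linear(dim, dim)  # one shared layer, applied repeatedly
        self.n_steps = n_steps
        self.code_dim = code_dim          # first code_dim coordinates act as the code

    def forward(self, x):
        h = x
        for _ in range(self.n_steps):     # iterate the same map: h <- tanh(W h + b)
            h = torch.tanh(self.layer(h))
        code = h[:, :self.code_dim]       # low-dimensional representation
        # Pad the code back to full width and iterate the map again to decode.
        pad = torch.zeros(x.size(0), x.size(1) - self.code_dim, device=x.device)
        h = torch.cat([code, pad], dim=1)
        for _ in range(self.n_steps):
            h = torch.tanh(self.layer(h))
        return h

# Minimal usage: reconstruct a random batch and take one gradient step.
model = IterativeMapAE(dim=20, code_dim=4)
x = torch.randn(8, 20)
loss = nn.functional.mse_loss(model(x), x)
loss.backward()
```

Because the same weight matrix is reused at every step, the parameter count stays that of a single layer regardless of the number of iterations, which is consistent with the abstract's emphasis on simplicity and small parameter budgets.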


 


Department(s):

Data Science
Contact Person:
Kelsey Briggs
