Automatic Evolution of AutoEncoders for Compressed Representations



Developing learning systems is challenging in many ways: often there is the need to optimise the learning algorithm's structure and parameters, and it is necessary to decide which data representation is best to use, i.e., we usually have to design features and select the most representative and useful ones. In this work we focus on the latter and investigate whether or not it is possible to obtain good performance with compressed versions of the original data, possibly reducing the learning times. The process of compressing the data, i.e., reducing its dimensionality, is typically conducted by someone with domain knowledge and expertise, who engineers features in an endless trial-and-error cycle. Our goal is to obtain such compressed versions automatically; for that, we use an Evolutionary Algorithm to generate the structure of AutoEncoders. Instead of targeting the reconstruction of the input images, we focus on the reconstruction of the mean signal of each class, so that the goal is to capture the most representative characteristics of each class. Results on the MNIST dataset show that the proposed approach can not only reduce the original dataset's dimensionality, but also that the performance of classifiers over the compressed representation is superior to their performance on the original uncompressed images.
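The central idea of the abstract, replacing the usual per-image reconstruction target with the mean signal of each sample's class, can be illustrated with a minimal NumPy sketch. This is an assumption-laden toy: it uses synthetic data in place of MNIST, a fixed linear autoencoder trained by plain gradient descent, and omits the Evolutionary Algorithm that the paper uses to evolve the autoencoder structure. All variable names and dimensions are illustrative, not from the paper.

```python
import numpy as np

# Toy stand-in for MNIST: 8-dimensional "images", 3 classes (hypothetical sizes).
rng = np.random.default_rng(0)
n, d, k, h = 120, 8, 3, 2  # samples, input dim, classes, code (compressed) size
y = rng.integers(0, k, size=n)
X = rng.normal(size=(n, d)) + y[:, None]  # class-dependent shift in every feature

# Key idea from the abstract: the target for each sample is the mean signal of
# its class, not the sample itself.
class_means = np.stack([X[y == c].mean(axis=0) for c in range(k)])
T = class_means[y]  # per-sample targets

# Linear autoencoder trained by gradient descent on ||X We Wd - T||^2.
We = rng.normal(scale=0.1, size=(d, h))  # encoder weights
Wd = rng.normal(scale=0.1, size=(h, d))  # decoder weights
lr = 5e-3
for _ in range(3000):
    Z = X @ We                 # compressed representation (n x h)
    R = Z @ Wd                 # reconstruction of the class means
    G = 2.0 * (R - T) / n      # gradient of the mean-squared error w.r.t. R
    gWd = Z.T @ G              # gradient w.r.t. decoder
    gWe = X.T @ (G @ Wd.T)     # gradient w.r.t. encoder
    Wd -= lr * gWd
    We -= lr * gWe

# The encoder output Z is the compressed representation a downstream
# classifier would be trained on.
final_loss = np.mean((X @ We @ Wd - T) ** 2)
```

The point of the sketch is the target `T`: because every sample of a class shares the same target, the encoder is pushed to keep only the class-representative structure of the input, which is why the compressed codes can be good classifier features.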


World Congress on Computational Intelligence 2018
