UW Interactive Data Lab
Latent Space Cartography: Visual Analysis of Vector Space Embeddings
Yang Liu, Eunice Jun, Qisheng Li, Jeffrey Heer
Figure (teaser): Interpreting latent spaces from variational autoencoders trained on emoji images. (a) The user starts with summary metrics for latent space variants, (b) then drills down to an overview distribution of a chosen latent space. (c) To map out a semantic relationship, the user defines an attribute vector, examines the custom projection onto the vector axis, applies analogies, and assesses the relationship's uncertainty.
Abstract
Latent spaces (reduced-dimensionality vector space embeddings of data, fit via machine learning) have been shown to capture interesting semantic properties and support data analysis and synthesis within a domain. Interpretation of latent spaces is challenging because prior knowledge, sometimes subtle and implicit, is essential to the process. We contribute methods for "latent space cartography", the process of mapping and comparing meaningful semantic dimensions within latent spaces. We first perform a literature survey of relevant machine learning, natural language processing, and scientific research to distill common tasks and propose a workflow. Next, we present an integrated visual analysis system supporting this workflow, enabling users to discover, define, and verify meaningful relationships among data points, encoded within latent space dimensions. Three case studies demonstrate how users of our system can compare latent space variants in image generation, challenge existing findings on cancer transcriptomes, and assess a word embedding benchmark.
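The attribute-vector workflow named in the figure caption is commonly realized with simple vector arithmetic over latent embeddings. The sketch below is not the paper's implementation; it is a minimal illustration, with synthetic stand-in data, of the standard construction: an attribute vector as the difference of two group centroids, an analogy as a shift along that vector, and a custom projection as the scalar coordinate of each point on the vector axis.

```python
import numpy as np

# Synthetic stand-ins for latent vectors of two labeled groups,
# e.g. encodings of "unsmiling" vs. "smiling" emoji (hypothetical data).
rng = np.random.default_rng(0)
group_a = rng.normal(0.0, 1.0, size=(50, 16))  # group lacking the attribute
group_b = rng.normal(2.0, 1.0, size=(50, 16))  # group having the attribute

# Attribute vector: difference of the group centroids in latent space.
attr = group_b.mean(axis=0) - group_a.mean(axis=0)

# Analogy: shift a latent point along the attribute vector; decoding
# the shifted point should add the attribute to the original item.
z = group_a[0]
z_analogy = z + attr

# Custom projection: coordinate of each point along the attribute axis.
unit = attr / np.linalg.norm(attr)
coords_a = group_a @ unit
coords_b = group_b @ unit
```

Comparing the distributions of `coords_a` and `coords_b` (and their overlap) gives a simple read on how cleanly the latent space separates the attribute, which is one way to assess the relationship's uncertainty.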
Citation
Yang Liu, Eunice Jun, Qisheng Li, Jeffrey Heer. Latent Space Cartography: Visual Analysis of Vector Space Embeddings. Computer Graphics Forum (Proc. EuroVis), 2019.