Synergy of Physics and Learning-based Models in Computational Imaging and Display

Computational imaging (CI) refers to a class of imaging systems that jointly optimize opto-electronic hardware and computing software to achieve task-specific improvements. Machine and deep learning models have proven effective at learning statistical priors from sufficiently large datasets. When designing computational models for CI problems, however, physics-based models derived from the image formation process (IFP) can also be incorporated into learning-based architectures. In this thesis, we propose a group of synergistic models that combine physics-based and learning-based components, and we apply them to several CI tasks. The core idea is to derive differentiable imaging models that approximate the IFP, enabling automatic differentiation and integration into learning-based models. We demonstrate two synergistic models built on differentiable imaging models. The first combines a differentiable model with residual learning for high-frame-rate video frame synthesis from event cameras. The second integrates a light transport model with an autoencoder for 3D holographic display design. We further demonstrate two synergistic strategies that do not rely on differentiable imaging models. For privacy-preserving action recognition from coded-aperture videos, we show that motion features derived from the IFP improve the performance of deep classifiers. For on-chip holographic microscopy, we achieve space-time super-resolution by applying a sparsely coded bi-level dictionary for hologram super-resolution, followed by a phase retrieval algorithm for 3D localization.
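
As an illustrative aside, the sketch below shows the general pattern described above in JAX: a differentiable physics-based forward model standing in for the IFP (here, a simple Gaussian-blur convolution, chosen only for illustration) is composed with a small learned reconstruction network, so automatic differentiation propagates gradients through both the physics model and the learned parameters. The forward model, network shapes, and all names are assumptions made for this sketch, not the thesis implementation.

# Minimal sketch of the physics/learning synergy (illustrative only):
# a differentiable stand-in for the image formation process (IFP) is
# composed with a small learned decoder, and gradients flow through both.

import jax
import jax.numpy as jnp


def forward_model(scene, psf):
    """Differentiable IFP stand-in: convolve the scene with a known PSF."""
    return jax.scipy.signal.convolve2d(scene, psf, mode="same")


def reconstruct(params, measurement):
    """Tiny learned decoder: two dense layers on the flattened measurement."""
    h = jnp.tanh(params["w1"] @ measurement.ravel() + params["b1"])
    out = params["w2"] @ h + params["b2"]
    return out.reshape(measurement.shape)


def loss_fn(params, scene, psf):
    """Simulate a measurement with the physics model, reconstruct, compare."""
    measurement = forward_model(scene, psf)
    estimate = reconstruct(params, measurement)
    return jnp.mean((estimate - scene) ** 2)


key = jax.random.PRNGKey(0)
n, hidden = 16, 64
k1, k2, k3 = jax.random.split(key, 3)
params = {
    "w1": 0.01 * jax.random.normal(k1, (hidden, n * n)),
    "b1": jnp.zeros(hidden),
    "w2": 0.01 * jax.random.normal(k2, (n * n, hidden)),
    "b2": jnp.zeros(n * n),
}

scene = jax.random.uniform(k3, (n, n))           # toy ground-truth scene
x = jnp.arange(5) - 2.0
psf = jnp.exp(-(x[:, None] ** 2 + x[None, :] ** 2) / 2.0)
psf = psf / psf.sum()                            # normalized Gaussian PSF

# Automatic differentiation through both the physics model and the network,
# followed by one plain gradient-descent update of the learned parameters.
loss, grads = jax.value_and_grad(loss_fn)(params, scene, psf)
params = jax.tree_util.tree_map(lambda p, g: p - 0.1 * g, params, grads)

Because the forward model is written in the same differentiable framework as the network, the same machinery could also be used to optimize physical design parameters (for example, the PSF itself) jointly with the learned weights, which is the spirit of the synergistic models described in the abstract.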
