Detailed Record



EDADepth: Enhanced Data Augmentation for Monocular Depth Estimation


Abstract Owing to their text-to-image synthesis capability, diffusion models have recently been adopted for visual perception tasks such as depth estimation. However, the lack of high-quality datasets makes extracting fine-grained semantic context challenging for diffusion models, and a semantic context lacking detail in turn degrades the text embeddings used as input to the diffusion model. In this paper, we propose EDADepth, a novel enhanced data augmentation method for monocular depth estimation that requires no additional training data. We use Swin2SR, a super-resolution model, to enhance the quality of input images; a pretrained BEiT semantic segmentation model to better extract text embeddings; and the BLIP-2 tokenizer to generate tokens from these embeddings. The novelty of our approach lies in introducing Swin2SR, BEiT, and the BLIP-2 tokenizer into a diffusion-based pipeline for monocular depth estimation. Our model achieves state-of-the-art (SOTA) results on the δ3 metric on the NYUv2 and KITTI datasets, and results comparable to SOTA models on the RMSE and REL metrics. Finally, we also show improvements in the visualization of estimated depth over SOTA diffusion-based monocular depth estimation models. Code: https://github.com/edadepthmde/EDADepth_ICMLA.
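
Since the abstract outlines a concrete pipeline (super-resolution, segmentation-derived semantic context, tokenization, diffusion conditioning), below is a minimal sketch of those stages, assuming off-the-shelf Hugging Face checkpoints (caidas/swin2SR-classical-sr-x2-64, microsoft/beit-base-finetuned-ade-640-640, Salesforce/blip2-opt-2.7b) and an illustrative label-to-prompt step. The authors' exact checkpoints, prompt construction, and diffusion wiring live in the linked repository and may differ.

import torch
from PIL import Image
from transformers import (
    Swin2SRForImageSuperResolution, Swin2SRImageProcessor,   # input enhancement
    BeitForSemanticSegmentation, BeitImageProcessor,         # semantic context
    Blip2Processor,                                          # text tokenization
)

image = Image.open("input.png").convert("RGB")

# 1) Enhance the input image with Swin2SR super-resolution
#    (checkpoint chosen for illustration).
sr_proc = Swin2SRImageProcessor()
sr_model = Swin2SRForImageSuperResolution.from_pretrained(
    "caidas/swin2SR-classical-sr-x2-64")
with torch.no_grad():
    sr_out = sr_model(**sr_proc(image, return_tensors="pt"))
sr_image = sr_out.reconstruction.squeeze(0).clamp(0, 1)  # (3, 2H, 2W)

# 2) Extract semantic context with a pretrained BEiT segmenter and
#    collect the class labels present in the scene.
seg_name = "microsoft/beit-base-finetuned-ade-640-640"
seg_proc = BeitImageProcessor.from_pretrained(seg_name)
seg_model = BeitForSemanticSegmentation.from_pretrained(seg_name)
with torch.no_grad():
    seg_logits = seg_model(**seg_proc(image, return_tensors="pt")).logits
class_ids = seg_logits.argmax(dim=1).unique().tolist()
labels = [seg_model.config.id2label[i] for i in class_ids]

# 3) Turn the semantic context into tokens with the BLIP-2 tokenizer.
#    Joining labels into a comma-separated prompt is an illustrative
#    choice, not necessarily the paper's exact formulation.
blip_proc = Blip2Processor.from_pretrained("Salesforce/blip2-opt-2.7b")
text_tokens = blip_proc.tokenizer(", ".join(labels), return_tensors="pt")

# 4) These tokens would condition the diffusion-based depth estimator
#    (a Stable-Diffusion-style denoising network); that stage is omitted here.
print(labels, text_tokens.input_ids.shape)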
Authors Nischal Khanal (University of Wyoming), Shivanand Venkanna Sheshappanavar (University of Wyoming)
Journal Info Institute of Electrical and Electronics Engineers | 2024 International Conference on Machine Learning and Applications (ICMLA), pages 620-627
Publication Date 3/4/2024
Type article
Open Access Green
DOI https://doi.org/10.1109/icmla61862.2024.00090
Keywords Monocular (Score: 0.6277716)