Role of Deep Learning in Early Detection and Prevention of Diabetic Retinopathy
Abstract
Self-Supervised Learning has become a popular approach for learning image representations from unlabeled images, yet its application to medical image analysis remains underexplored. In this study, we present a saliency-guided self-supervised Image Transformer for grading Diabetic Retinopathy from fundus images. Our method incorporates saliency maps into Self-Supervised Learning to inject domain-specific knowledge into the pre-training process. Specifically, we propose two saliency-guided learning tasks for the Self-Supervised Image Transformer: (1) Saliency-guided contrastive learning: we use saliency maps of fundus images in conjunction with momentum contrast to remove uninformative patches from the input sequences of the momentum-updated key encoder, so that the query encoder is guided to learn meaningful features from the salient regions attended to by the key encoder. (2) Saliency segmentation prediction: the query encoder is trained to predict saliency maps, encouraging it to preserve fine-grained information in the learned representations. We conduct comprehensive experiments on four publicly available fundus image datasets. Our results show that the Self-Supervised Image Transformer significantly outperforms several state-of-the-art Self-Supervised Learning methods across all datasets and evaluation settings, demonstrating the effectiveness of the learned representations.
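To make the first objective more concrete, the sketch below illustrates one way saliency maps could be used to restrict the momentum-updated key encoder to salient patches within a MoCo-style contrastive setup. It is a minimal sketch, not the authors' implementation: the function names (saliency_keep_mask, momentum_update, info_nce), the patch size, the keep ratio, and the momentum coefficient are illustrative assumptions.

```python
import torch
import torch.nn.functional as F


def saliency_keep_mask(saliency, patch_size=16, keep_ratio=0.5):
    """Rank image patches by mean saliency and keep the most salient ones.

    saliency: (B, 1, H, W) saliency maps aligned with the fundus images.
    Returns indices of the kept patches, shape (B, K); these indices would
    select which patch tokens are fed to the key encoder.
    (Illustrative assumption: patch size and keep ratio are hypothetical.)
    """
    # Average saliency within each non-overlapping patch.
    patch_scores = F.avg_pool2d(saliency, kernel_size=patch_size)  # (B, 1, H/p, W/p)
    patch_scores = patch_scores.flatten(1)                         # (B, N)
    k = max(1, int(keep_ratio * patch_scores.shape[1]))
    return patch_scores.topk(k, dim=1).indices                     # (B, K)


@torch.no_grad()
def momentum_update(query_encoder, key_encoder, m=0.999):
    """MoCo-style exponential moving average update of the key encoder."""
    for q, k in zip(query_encoder.parameters(), key_encoder.parameters()):
        k.data.mul_(m).add_(q.data, alpha=1.0 - m)


def info_nce(q, k, queue, temperature=0.2):
    """Standard InfoNCE loss with a memory queue of negative keys.

    q: (B, D) query embeddings; k: (B, D) key embeddings from the
    saliency-filtered patches; queue: (Q, D) stored negative keys.
    """
    q = F.normalize(q, dim=1)
    k = F.normalize(k, dim=1)
    l_pos = (q * k).sum(dim=1, keepdim=True)   # positive logits, (B, 1)
    l_neg = q @ queue.t()                      # negative logits, (B, Q)
    logits = torch.cat([l_pos, l_neg], dim=1) / temperature
    labels = torch.zeros(q.shape[0], dtype=torch.long, device=q.device)
    return F.cross_entropy(logits, labels)
```

In this reading, only the patches selected by saliency_keep_mask contribute to the key embedding, so the contrastive objective pulls the query representation toward features of the salient retinal regions; the second task (saliency segmentation prediction) would add a separate decoding head on the query encoder trained against the same saliency maps.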