Combining human intelligence with artificial intelligence: human-in-the-loop machine learning to detect glaucoma in fundus images

To overcome the artificial intelligence (AI) black box dilemma in diagnosing glaucomatous damage, we applied a new convolutional neural network (CNN) to TrueColor confocal fundus images. With human-in-the-loop (HITL) data annotation, this CNN architecture is useful not only for diagnosing glaucoma but also for predicting and localizing detailed signs of a glaucomatous fundus, e.g., splinter hemorrhage, glaucomatous optic nerve atrophy, vertical glaucomatous cupping, peripapillary atrophy, and retinal nerve fiber layer (RNFL) defects. Training was performed on a carefully curated private dataset of 1,400 high-resolution confocal fundus images, of which 1,120 (80%) were used for training and 280 (20%) for testing. We used a custom-trained object detection model based on You Only Look Once version 5 (YOLOv5) to localize the underlying signs. The 26 predefined conditions were annotated by a human team (two glaucoma specialists and two optometrists) using the Microsoft Visual Object Tagging Tool (VoTT). We divided the 280 test images into three batches (90, 100, and 90 images) and ran three tests, one every 15 days. The test results showed a consistent improvement in predicting the glaucoma diagnosis, from 94.44% to 98.89%, along with the detailed glaucomatous fundus signs. The use of human intelligence in AI to detect glaucoma from fundus images with HITL machine learning has not previously been described in the literature. Not only does this AI model have excellent sensitivity and specificity in making accurate glaucoma predictions, it is also an explainable AI that overcomes the black box dilemma.

Artificial intelligence is becoming increasingly common in glaucoma diagnosis, with convolutional neural networks (CNNs) used to improve patient care. A well-trained CNN can identify various pathologies of the fundus. However, previously reported CNN-based AI models have been criticized by the ophthalmic community for the black box dilemma: such systems analyze data according to their own self-generated rules, and the actual reasoning behind how a prediction is produced is not clearly understood. Our model not only predicts the diagnosis but also predicts the signs visible in the fundus image, such as splinter hemorrhage, glaucomatous optic nerve atrophy, vertical glaucomatous cupping, peripapillary atrophy, and retinal nerve fiber layer (RNFL) defects. A successful AI algorithm starts with data annotation: without annotated data, there is no machine learning algorithm for image detection.
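The preprocessing code behind this pipeline is not published with the article, but the annotation-to-training workflow it describes is straightforward to sketch. The snippet below is a minimal, hypothetical example of converting a VoTT CSV export (assumed columns: image, xmin, ymin, xmax, ymax, label) into YOLO-format label files and splitting the images 80/20 into training and validation sets, as in the study. The file paths and the abbreviated class list are illustrative assumptions, not the authors' actual configuration.

```python
import csv
import random
import shutil
from pathlib import Path

from PIL import Image

# Hypothetical paths and class list; the study's full set of 26 predefined signs is not listed here.
VOTT_CSV = Path("annotations/vott-export.csv")
IMAGES_DIR = Path("images")
OUT_DIR = Path("dataset")
CLASSES = ["splinter_hemorrhage", "optic_nerve_atrophy", "vertical_cupping",
           "peripapillary_atrophy", "rnfl_defect"]  # ... extend to all 26 annotated signs

def to_yolo_line(label, xmin, ymin, xmax, ymax, img_w, img_h):
    """Convert an absolute-pixel bounding box to a normalized YOLO label line."""
    cx = (xmin + xmax) / 2 / img_w
    cy = (ymin + ymax) / 2 / img_h
    w = (xmax - xmin) / img_w
    h = (ymax - ymin) / img_h
    return f"{CLASSES.index(label)} {cx:.6f} {cy:.6f} {w:.6f} {h:.6f}"

# Group bounding boxes by image, then write one YOLO .txt label file per image.
boxes = {}
with open(VOTT_CSV, newline="") as f:
    for row in csv.DictReader(f):
        boxes.setdefault(row["image"], []).append(row)

images = sorted(boxes)
random.seed(42)
random.shuffle(images)
split = int(0.8 * len(images))                     # 80% train / 20% test, as in the study
subsets = {"train": images[:split], "val": images[split:]}

for subset, names in subsets.items():
    (OUT_DIR / "images" / subset).mkdir(parents=True, exist_ok=True)
    (OUT_DIR / "labels" / subset).mkdir(parents=True, exist_ok=True)
    for name in names:
        src = IMAGES_DIR / name
        img_w, img_h = Image.open(src).size
        shutil.copy(src, OUT_DIR / "images" / subset / name)
        lines = [to_yolo_line(r["label"], float(r["xmin"]), float(r["ymin"]),
                              float(r["xmax"]), float(r["ymax"]), img_w, img_h)
                 for r in boxes[name]]
        (OUT_DIR / "labels" / subset / (Path(name).stem + ".txt")).write_text("\n".join(lines))
```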
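Training itself follows the standard YOLOv5 workflow: a small dataset YAML points at the image and label folders, and the repository's train.py is invoked on it. The sketch below shows that standard workflow driven from Python; the image size, batch size, epoch count, and starting weights are assumptions, not the authors' exact settings.

```python
import subprocess
from pathlib import Path

# Dataset config for YOLOv5 (https://github.com/ultralytics/yolov5).
# Class names and counts below are illustrative; the study used 26 annotated signs.
data_yaml = """\
path: dataset
train: images/train
val: images/val
nc: 5
names: [splinter_hemorrhage, optic_nerve_atrophy, vertical_cupping, peripapillary_atrophy, rnfl_defect]
"""
Path("glaucoma.yaml").write_text(data_yaml)

# Standard YOLOv5 training invocation (assumes the yolov5 repository is cloned alongside this script).
subprocess.run(
    ["python", "yolov5/train.py",
     "--img", "1024",          # high-resolution confocal images; the actual size is an assumption
     "--batch", "8",
     "--epochs", "300",
     "--data", "glaucoma.yaml",
     "--weights", "yolov5m.pt",
     "--name", "glaucoma_hitl"],
    check=True,
)
```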
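The per-batch diagnostic accuracies quoted above (94.44% rising to 98.89%) can in principle be reproduced by running the trained detector on each test batch and calling an image glaucomatous when at least one glaucomatous sign is detected above a confidence threshold. The sketch below uses the published torch.hub interface for loading custom YOLOv5 weights; the threshold, file layout, and image-level decision rule are assumptions rather than the authors' method.

```python
from pathlib import Path

import torch

# Load custom-trained YOLOv5 weights via the published torch.hub interface.
model = torch.hub.load("ultralytics/yolov5", "custom",
                       path="runs/train/glaucoma_hitl/weights/best.pt")
model.conf = 0.25  # detection confidence threshold (assumption)

def predict_glaucoma(image_path: str) -> bool:
    """Call the image glaucomatous if any annotated sign is detected."""
    results = model(image_path)
    detections = results.pandas().xyxy[0]   # one row per detected bounding box
    return len(detections) > 0

def batch_accuracy(image_dir: str, ground_truth: dict) -> float:
    """ground_truth maps image filename -> True (glaucoma) / False (normal)."""
    images = sorted(Path(image_dir).glob("*.jpg"))
    correct = sum(predict_glaucoma(str(img)) == ground_truth[img.name] for img in images)
    return 100.0 * correct / len(images)

# Example: evaluate one of the three test batches (90, 100, and 90 images in the study).
# labels = {"img_001.jpg": True, "img_002.jpg": False, ...}
# print(f"Batch accuracy: {batch_accuracy('dataset/test_batch_1', labels):.2f}%")
```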
