Integrated Diagnostic Retinal Imaging in Electronic Medical Records

Retinal morphology data in EMRs could improve diagnostic and treatment efficiency.


The digital revolution in medicine is without doubt destined to transform how we deliver health care. Artificial intelligence (AI) and big data analytics are seen as the main drivers of digital health innovation, and current health data holds huge potential for improved diagnostics, personalized treatments, and early disease prevention.1 However, AI algorithms depend on input data that is structured and meaningful; the scarcity of such data has been the largest barrier to applying AI in medicine.

Retinal diseases such as age-related macular degeneration (AMD) and diabetic retinopathy (DR) are ideally placed to embrace digital analytics for a number of reasons. Primarily, many patients suffer from these conditions and require repeated, standardized investigations and analyses to guide treatment decisions. These conditions account for the majority of sight loss in both developed and developing countries, so improvements in diagnostic and treatment pathways would be of enormous benefit to the lives of many patients worldwide.

Ophthalmology has always been at the forefront of medical innovation, from the introduction of intraocular surgery to the development of various laser technologies and a plethora of pharmacotherapies for retinal diseases. Imaging technologies have evolved from fluorescein angiography (FA) to noninvasive imaging with spectral-domain optical coherence tomography (SD-OCT) and OCT angiography (OCTA). OCT acquisition systems are capable of acquiring 80,000 A-scans per second and presently achieve depth resolutions of between 5 µm and 8 µm.2 Novel algorithms have been introduced that facilitate automated segmentation and analysis of OCT and OCTA images.2 Electronic medical records (EMRs) in daily clinical practice help retina specialists securely record the ever-increasing volume of data generated from the burgeoning retinal disease population, which also requires long-term follow-up and frequent review.

Artificial intelligence and big data have much potential. They may accelerate the process of preclinical research moving from bench to bedside and provide a quicker route to personalized medicine through analysis of an individual’s phenotype and treatment response.3 However, they rely on high-volume, high-quality datasets and image repositories captured in a standard format. The need for enormous storage capacity, the lack of compatibility across multiple isolated databases and software packages, and the difficulty of protecting data when sharing across institutions remain challenges.


The most basic requirement for an efficient practice is to have the right information available to the right person at the right time. This is often easier said than done when patients have personal data sets in notes from multiple visits at various clinics over many years. Recently, the routine use of EMRs in retinal clinics has helped prevent loss of important medical information, which can affect the quality of care and lead to potential risk. However, EMRs are not standardized, often do not connect with other external systems or software, and can be costly. Standard terminology, particularly internationally agreed-upon nomenclature and definitions, would help make EMRs compatible across countries and health care systems.

Compared to “clean data” from standardized clinical trials, real-world heterogeneous data is also more challenging to analyze. Data stored in incompatible formats and in multiple secure sites does not lend itself to cross-correlation and analysis. The challenge for “big data” is to connect the many small islands of data, each speaking a different language, in a coherent way. Running algorithms on nonstandardized data may lead to errors that distort results, introduce bias into the analysis, and may be difficult to identify in large data sets.


The most useful, and possibly the simplest, processes to automate in retinal imaging are the detection of microaneurysms as an early sign of DR and the detection of drusen as the hallmark of AMD. Detection algorithms face two challenges: false positives, when other structures are mistaken for target lesions, and false negatives, when the large variability in lesion appearance causes a target lesion to be missed. A superior automated feature-based framework has been described that includes a set of reference images representing both atypical target lesions and eye structures that look similar to, but are not, target lesions.4
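The reference-image idea can be sketched as a nearest-exemplar comparison: a candidate region is matched against stored examples of true lesions and of lookalike non-lesion structures. The sketch below is illustrative only; the feature vectors are random stand-ins, and the framework in the cited work4 is considerably more sophisticated.

```python
import numpy as np

# Toy sketch of the reference-image idea: a candidate patch is compared
# against stored exemplars of true lesions and of lookalike non-lesions,
# then labeled by its nearest reference. Feature vectors are random
# stand-ins, not real image features.
rng = np.random.default_rng(0)
lesion_refs = [rng.normal(1.0, 0.1, 16) for _ in range(3)]
nonlesion_refs = [rng.normal(0.0, 0.1, 16) for _ in range(3)]

def classify(patch):
    """Label a candidate by its nearest stored reference."""
    d_lesion = min(np.linalg.norm(patch - r) for r in lesion_refs)
    d_non = min(np.linalg.norm(patch - r) for r in nonlesion_refs)
    return "lesion" if d_lesion < d_non else "non-lesion"

# A patch drawn near the lesion exemplars is labeled as a lesion.
print(classify(rng.normal(1.0, 0.1, 16)))
```

Including exemplars of lookalike non-lesions is what lets this style of classifier reject false positives explicitly rather than relying only on similarity to true lesions.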

There is now an evidence base for autonomous AI diagnostic systems in DR: in a pivotal trial, the AI system exceeded all prespecified superiority endpoints, with a sensitivity of 87.2%, a specificity of 90.7%, and an imageability rate of 96.1%. The IDx-DR autonomous AI system has 2 core algorithms: an image quality algorithm and a diagnostic algorithm. Nonetheless, in these early studies, the higher-level clinical decision was still made by human experts.5
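These headline metrics are simple ratios over the diagnostic confusion matrix. The sketch below uses hypothetical counts chosen only to make the arithmetic visible; they are not the trial’s actual data.

```python
# Hedged sketch: how sensitivity, specificity, and imageability are
# computed. All counts below are hypothetical, not trial data.
def sensitivity(tp, fn):
    """True-positive rate: share of diseased eyes the system flags."""
    return tp / (tp + fn)

def specificity(tn, fp):
    """True-negative rate: share of healthy eyes the system clears."""
    return tn / (tn + fp)

def imageability(gradable, total):
    """Share of participants whose images were of gradable quality."""
    return gradable / total

# Hypothetical example: 200 diseased eyes, 600 healthy eyes.
tp, fn = 174, 26      # 174 of 200 diseased eyes detected
tn, fp = 544, 56      # 544 of 600 healthy eyes correctly cleared
print(round(sensitivity(tp, fn), 3))   # 0.87
print(round(specificity(tn, fp), 3))   # 0.907
```

Note that superiority endpoints in such trials are prespecified thresholds on exactly these ratios, which is why a gradable-image rate (imageability) is reported alongside the diagnostic metrics.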

The validity of predictions from machine learning-based clinical decision tools could be compromised by unconsidered confounders. Potential biases include missing data, patients not identified by algorithms, inadequate sample size, underestimation, misclassification, and measurement error.6 Measurement errors with connected devices have been documented; for example, optical biometry performed on different devices has shown a high degree of inconsistency.7 Other biases in health care may arise from changes in medical coding practices or variations in clinical practice, both of which underlie clinically derived data sets.8

It has been noted that interoperability is key to the success of transforming these data sets into useful information to improve the health of patients worldwide.9 Different levels of interoperability have been defined: technical interoperability ensures basic data exchange capabilities between systems; syntactic interoperability specifies the format and structure of the data; semantic interoperability ensures common medical terminology that is understandable to humans and machines worldwide; and organizational interoperability facilitates data sharing between health IT systems to help health care professionals and organizations work more efficiently and improve patients’ health.9
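Semantic interoperability, in particular, can be illustrated with a toy harmonization step in which site-specific EMR labels are mapped to one shared term before data are pooled. The local labels and the shared terms below are invented for illustration and do not represent any real code system.

```python
# Toy sketch of semantic interoperability: two sites record the same
# diagnosis under different local labels; mapping both to one shared
# vocabulary makes their records comparable. All terms are invented.
SHARED_VOCAB = {
    "wet amd": "neovascular age-related macular degeneration",
    "namd": "neovascular age-related macular degeneration",
    "dr": "diabetic retinopathy",
}

def harmonize(local_label):
    """Map a site-specific label onto the shared vocabulary."""
    key = local_label.strip().lower()
    return SHARED_VOCAB.get(key, "UNMAPPED: " + local_label)

# Records from two hypothetical sites collapse onto one shared term.
print(harmonize("Wet AMD"))
print(harmonize("nAMD"))
```

The unmapped fallback matters in practice: labels that cannot be harmonized must be surfaced for review rather than silently pooled, or the very biases described above are reintroduced.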


Artifacts and segmentation errors have hindered efforts to incorporate OCT morphology data sets into clinical practice using EMRs. It remains unproven whether automated processes can avoid pitfalls in image interpretation. In this context, it is worth noting that correct identification of the key OCT characteristics seen in AMD, such as the type, size, and location of drusen; retinal pigment epithelium changes; and the presence and extent of subretinal and intraretinal fluid, will facilitate precise diagnosis and guide treatment decisions.10

There are several companies that offer OCT devices for clinical use. The challenge for automated morphology data sets is that the algorithms from the many acquisition systems available use different boundaries within the retinal tissue to segment the images, resulting in differences in the thickness of the tomographic slabs. Thus, qualitative and quantitative parameters that are generated are not easily comparable between devices. However, it has been demonstrated that when AI deep-learning tissue segmentations have been applied, accurate referrals are possible, even when using images that have been generated after tissue segmentations performed by different devices.11
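The effect of boundary placement can be shown with a toy calculation: two hypothetical devices segment the same A-scans but place the outer boundary at different depths, so the derived thicknesses disagree even though the underlying tissue is identical. All numbers below are synthetic.

```python
import numpy as np

# Toy illustration of why thickness values differ between OCT devices:
# each vendor's algorithm may place the outer retinal boundary at a
# different anatomical landmark, so identical A-scans yield different
# slab thicknesses. All depths (in micrometers) are synthetic.
inner = np.array([120.0, 118.0, 122.0])           # inner boundary per A-scan
outer_device_a = np.array([390.0, 392.0, 388.0])  # "device A" outer boundary
outer_device_b = np.array([410.0, 412.0, 408.0])  # same eye, boundary deeper

thickness_a = (outer_device_a - inner).mean()
thickness_b = (outer_device_b - inner).mean()
print(thickness_a, thickness_b)  # the two "devices" disagree by 20 um
```

A constant 20 µm offset is easy to see in a toy example; in practice the disagreement varies with pathology and location, which is why cross-device quantitative comparison is unreliable without a shared segmentation.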

A machine-learning-based approach has been studied to predict dark-adapted retinal sensitivity in eyes with neovascular AMD (nAMD). Multimodal retinal imaging data were acquired using spectral-domain OCT, confocal scanning laser ophthalmoscopy infrared reflectance, and fundus autofluorescence imaging. The most important predictive feature was outer nuclear layer thickness, and this AI-based analysis strategy enabled an estimate of differential effects of retinal structural abnormalities on cone and rod function in nAMD.12

Computer-aided diagnosis has been used for lesion detection and volumetric analysis in mammography, chest radiography, and chest computed tomography.13 Computer-aided diagnosis is possibly a more realistic short-term goal within ophthalmology. It has the potential for allowing more efficient identification of pathologic OCT images and directing the attention of the clinician to regions of interest on OCT images. The development of convolutional neural network layers has allowed for significant gains in the ability to classify images and detect objects in a picture.14 Within ophthalmology, deep learning has been recently applied in a limited capacity to automated detection of DR from fundus photographs,15 visual field perimetry in glaucoma patients,16 grading of nuclear cataracts,17 and segmentation of foveal microvasculature.18
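The core operation of a convolutional layer is a small filter slid across the image, producing a map of local feature responses. The minimal sketch below (not any specific published network) shows a single filter detecting a vertical edge in a toy image.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid-mode sliding-window filter: the core operation of a CNN layer."""
    kh, kw = kernel.shape
    h = image.shape[0] - kh + 1
    w = image.shape[1] - kw + 1
    out = np.empty((h, w))
    for i in range(h):
        for j in range(w):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

# A vertical-edge kernel responds only at the boundary of a toy "image".
image = np.zeros((5, 6))
image[:, 3:] = 1.0               # dark left half, bright right half
kernel = np.array([[-1.0, 1.0]])  # fires where brightness jumps rightward
response = conv2d(image, kernel)
print(response[0])  # [0. 0. 1. 0. 0.] — peak at the edge column
```

A trained network learns many such kernels per layer and stacks layers so that later filters respond to compositions of earlier features, which is what enables object detection and image classification.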

The analysis of OCT images becomes more challenging when multiple macular pathologies coexist. One attempt to subtype macular pathologies into 3 types (macular edema, macular hole, and AMD) used a 2-class support vector machine classifier to identify the presence of a normal macula and each of the 3 pathologies in a dataset of 326 OCT scans. Results showed that the proposed method is very effective (all AUC >0.93),19 but this model may be too simplistic for the real world, given the large variation in macular pathologies and in normal anatomy. Semiautomatic algorithms are being developed to improve segmentation of the retinal nerve fiber layer or other retinal layers in diseased eyes,20,21 which may better handle gross pathology and copathology that distort normal anatomical landmarks.
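The AUC figure reported for such a classifier can be read as the probability that a randomly chosen scan with pathology receives a higher classifier score than a randomly chosen normal scan. A minimal sketch with hypothetical scores:

```python
# Hedged sketch of the area under the ROC curve (AUC) as a pairwise
# ranking probability. The classifier scores below are hypothetical.
def auc(scores_pos, scores_neg):
    """Probability that a random positive outranks a random negative,
    with ties counted as half a win."""
    wins = ties = 0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(scores_pos) * len(scores_neg))

# Hypothetical scores for scans with vs without a given macular pathology.
pos = [0.9, 0.8, 0.75, 0.6]
neg = [0.7, 0.4, 0.3, 0.2, 0.1]
print(auc(pos, neg))  # 0.95 — one positive/negative pair is misranked
```

An AUC above 0.93, as reported, therefore means the classifier ranks pathological scans above normal ones in more than 93% of such pairs, regardless of any particular decision threshold.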


Optical coherence tomography angiography uses segmented en face, depth-encoded slab images of the vasculature that are cross-referenced with the corresponding structural OCT B-scan. As with OCT, there are potential limitations to OCTA image interpretation: motion artifacts from patient movement; projection artifacts, in which signal from superficial retinal vessels is erroneously projected onto deeper retinal layers;22 and segmentation errors, as previously discussed.

Many OCTA devices include software to adjust for segmentation errors. Studies have shown automated segmentation algorithms to be less accurate than manually adjusted segmentation, in which the thickness and axial position of each retinal segment are manually optimized in a laborious process.23 In a novel application of AI to medical imaging, OCTA data were used to train an algorithm to generate retinal flow maps from standard structural OCT images collected in clinical trials and clinical practice; subtle regularities between the two modalities imaging the same tissue allowed the AI to make detailed inferences of tissue function from structural imaging.24


Studies to date using elements of automated imaging analysis have several limitations. Images are often included only from patients who have met strict study criteria, and neural networks are trained only on these images. Including real-world images, even those of poor quality, would help make the analysis more robust. Models trained on images from a single academic center using a single OCT manufacturer are also likely to have limited accuracy and applicability in a wider population. Future studies should include a variety of case types, along with cases that pose diagnostic challenges, and should use all available images from the different scan types and from a variety of acquisition systems.

In the future, validated models could be applied to other retinal or choroidal pathologies that rely on OCT evaluation, including DR and retinal vein occlusion. Automated macular OCT classification features, compatible with all EMRs, could become standard in clinical practice. Deep-learning models could be developed, similar to radiology packages, that identify macular pathology on OCT images and efficiently highlight it to the clinician to aid in diagnosis and treatment of retinal conditions. Future studies should address all treatable retinal diseases and demonstrate external validity using images from other institutions.


Modern retinal diagnostic imaging modalities integrated with clinical EMR data will undoubtedly facilitate a more streamlined patient journey in clinical practice. The clinical accuracy of some new automated image analysis and AI interpretation tools is still being refined. However, integration of OCT morphology data sets into EMRs should enable earlier detection and treatment and, ultimately, improved treatment outcomes.

Development of new segmentation algorithms could minimize inaccuracies in automated segmentation, improving visualization of retinal pathology and clinical efficiency. But it is important to clarify the technology’s current limitations and identify approaches that enable the fulfillment of its potential going forward. RP


  1. Murdoch TB, Detsky AS. The inevitable application of big data to healthcare. JAMA. 2013;309(13):1351-1352.
  2. Fujimoto J, Swanson E. The development, commercialization, and impact of optical coherence tomography. Invest Ophthalmol Vis Sci. 2016;57(9):OCT1-OCT13.
  3. Cahan EM, Hernandez-Boussard T, Thadaney-Israni S, Rubin DL. Putting the data before the algorithm in big data addressing personalized healthcare. NPJ Digital Med. 2019;2(78):1-6.
  4. Quellec G, Russell SR, Abramoff MD. Optimal filter framework for automated, instantaneous detection of lesions in retinal images. IEEE Trans Med Imaging. 2011;30(2):523-533.
  5. Abramoff MD, Lavin PT, Birch M, Shah N, Folk JC. Pivotal trial of an autonomous AI-based diagnostic system for detection of diabetic retinopathy in primary care offices. NPJ Digital Med. 2018;1(39):1-8.
  6. Gianfrancesco MA, Tamang S, Yazdany J, Schmajuk G. Potential biases in machine learning algorithms using electronic health record data. JAMA Intern Med. 2018;178(11):1544-1547.
  7. Rozema JJ, Wouters K, Mathysen DG, Tassignon MJ. Overview of the repeatability, reproducibility, and agreement of the biometry values provided by various ophthalmic devices. Am J Ophthalmol. 2014;158(6):1111–1120.
  8. Ehrenstein V, Petersen I, Smeeth L, et al. Helping everyone do better: a call for validation studies of routinely recorded health data. Clin Epidemiol. 2016;8:49-51.
  9. Lehne M, Sass J, Essenwanger A, Schepers J, Thun S. Why digital medicine depends on interoperability. NPJ Digital Med. 2019;2(79):1-5.
  10. Keane PA, Liakopoulos S, Jivrajka RV, et al. Evaluation of optical coherence tomography retinal thickness parameters for use in clinical trials for neovascular age-related macular degeneration. Invest Ophthalmol Vis Sci. 2009;50:3378-3385.
  11. De Fauw J, Ledsam JR, Romera-Paredes B, et al. Clinically applicable deep learning for diagnosis and referral in retinal disease. Nat Med. 2018;24(9):1342-1350.
  12. Von der Emde L, Pfau M, Dysli C, et al. Artificial intelligence for morphology-based function prediction in neovascular age-related macular degeneration. Sci Rep. 2019;9:1-12.
  13. Van Ginneken B, Schaefer-Prokop CM, Prokop M. Computer-aided diagnosis: how to move from the laboratory to the clinic. Radiology. 2011;261:719-732.
  14. Krizhevsky A, Sutskever I, Hinton GE. Imagenet classification with deep convolutional neural networks. Adv Neural Inf Process Syst. 2012;25(2):1097-1105.
  15. Abramoff MD, Lou Y, Erginay A, et al. Improved automated detection of diabetic retinopathy on a publicly available dataset through integration of deep learning. Invest Ophthalmol Vis Sci. 2016;57(13):5200-5206.
  16. Asaoka R, Murata H, Iwase A, Araie M. Detecting preperimetric glaucoma with standard automated perimetry using a deep learning classifier. Ophthalmology. 2016;123(9):1974-1980.
  17. Gao X, Lin S, Wong TY. Automatic feature learning to grade nuclear cataracts based on deep learning. IEEE Trans Biomed Eng. 2015;62(11):2693-2701.
  18. Prentasic P, Heisler M, Mammo Z, et al. Segmentation of the foveal microvasculature using deep learning networks. J Biomed Opt. 2016;21(7):75008.
  19. Liu YY, Chen M, Ishikawa H, Wollstein G, Schuman JS, Rehg JM. Automated macular pathology diagnosis in retinal OCT images using multi-scale spatial pyramid and local binary patterns in texture and shape encoding. Med Image Anal. 2011;15(5):748-759.
  20. Kashani AH, Chen CL, Gahm JK, et al. Optical coherence tomography angiography: a comprehensive review of current methods and clinical applications. Prog Retin Eye Res. 2017;60:66-100.
  21. Chen CL, Zhang A, Bojikian KD, et al. Peripapillary retinal nerve fiber layer vascular microcirculation in glaucoma using optical coherence tomography-based microangiography. Invest Ophthalmol Vis Sci. 2016;57(9):OCT475-OCT485.
  22. Spaide RF, Fujimoto JG, Waheed NK. Image artifacts in optical coherence angiography. Retina. 2015;35(11):2163-2180.
  23. Arya M, Rebhun CB, Cole ED, et al. Visualization of choroidal neovascularization using two commercially available spectral domain optical coherence tomography angiography devices. Retina. 2019;39(9):1682-1692.
  24. Lee CS, Tyring AJ, Wu Y, et al. Generating retinal flow maps from structural optical coherence tomography with artificial intelligence. Sci Rep. 2019;9:5694.