Associate at the Center for Fetal Medicine and Women’s Ultrasound, Los Angeles, California, and clinical faculty in the Division of Maternal-Fetal Medicine, Department of Obstetrics and Gynecology, University of California, Los Angeles.
Ob/gyn, while late to the game, has the potential to climb the ranks as the specialty most instrumental to the use and development of AI.
Ever wonder how self-driving cars recognize a ball in the road? Or how Amazon magically knows what items you need before you do? This is all thanks to the pattern-recognition capabilities of artificial intelligence (AI). AI refers to the general process by which machines or computers replicate and replace human tasks and cognition. Machine learning is a branch of AI in which algorithms enable the computer to continue recognizing patterns automatically (Figure 1). A further subset is deep learning, in which massive neural networks, inspired by the human brain, interpret and use large amounts of data for deeper “cognitive” capabilities. One widely used deep learning architecture, the convolutional neural network (CNN), is named in part for its resemblance to the neurons and connections of our cerebral cortex. Deep learning has led to breakthroughs in healthcare, specifically in radiologic image recognition.1
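For readers curious what “convolution” actually does, the toy sketch below applies one hand-set edge-detecting kernel, a ReLU activation, and max pooling, the three building blocks a CNN stacks and learns by the thousands. The kernel values and the tiny synthetic image are illustrative inventions, not clinical code.

```python
import numpy as np

def conv2d(image, kernel):
    """Valid 2-D convolution: slide the kernel over the image and
    record how strongly each patch matches the kernel's pattern."""
    kh, kw = kernel.shape
    h, w = image.shape
    out = np.zeros((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(image[i:i + kh, j:j + kw] * kernel)
    return out

def relu(x):
    """Keep positive responses, zero out the rest."""
    return np.maximum(x, 0)

def max_pool(x, size=2):
    """Downsample by keeping the strongest response in each 2x2 tile."""
    h, w = (x.shape[0] // size) * size, (x.shape[1] // size) * size
    return x[:h, :w].reshape(h // size, size, w // size, size).max(axis=(1, 3))

# A vertical-edge detector: a hand-set stand-in for a learned kernel
edge_kernel = np.array([[-1., 0., 1.],
                        [-1., 0., 1.],
                        [-1., 0., 1.]])

image = np.zeros((8, 8))
image[:, 4:] = 1.0  # dark left half, bright right half
feature_map = max_pool(relu(conv2d(image, edge_kernel)))
print(feature_map)  # strong responses only where the edge sits
```

A trained CNN simply learns thousands of such kernels from data, stacking layer upon layer until the final responses correspond to whole structures rather than edges.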
Use of AI in medical imaging
One of the most common uses of AI in healthcare is computer-aided diagnosis (CAD), which has already been widely studied in many fields including prostate, breast, and cardiac imaging.2-4 Many AI applications are used to develop and implement protocols, thereby shortening imaging time, optimizing staffing, and reducing costs.5 They have also become instrumental in helping physicians make decisions about patient care.6 Ob/gyn, while late to the game, has the potential to climb the ranks as the specialty most instrumental to the use and development of AI, given the ubiquitous involvement of ultrasound in the care of nearly every reproductive-aged woman in the modern world.
Use of AI in ob/gyn imaging
“I can’t believe you have to calculate all those measurements yourselves. Doesn’t the machine recognize the baby’s head?” asks an inquisitive patient. A Tesla automobile costs roughly the same as a Voluson E10, yet the former can recognize multiple simultaneous moving objects in the road while the latter has yet to reliably recognize fetal organs, which are more or less identical in 97% of fetuses. While the initial task of learning to perform fetal ultrasound requires dexterity and rote hand skills, it seems plausible that a semi-trained sonographer could learn to place the probe in the correct location while computer assistance identifies the correct plane, prompts organ identification, and calculates many of the measurements automatically.
In many new machines, automatic image recognition is already being used for biometric measurement of the fetal biparietal diameter (BPD), head circumference (HC), femur length (FL), and abdominal circumference (AC). In these instances, the sonographer is responsible for identifying the proper landmarks in the plane of choice, and once prompted, the machine will label and measure the desired biometric value. If correct, this saves time. Conversely, if image quality is poor, due to fetal position or maternal obesity for example, it may erroneously over- or undermeasure the organ. One study compared 100 manual biometric measurements to 100 automated measurements and showed a saving of about 20 seconds and seven steps on each 20-minute anatomic survey.7 While the time saved in each individual patient encounter may seem insignificant, at the end of a busy day, every second counts toward improving sonographer efficiency and decreasing fatigue.
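Once the calipers are placed, whether by hand or by the machine, the circumference itself is simple arithmetic. The sketch below uses the simple ellipse approximation HC ≈ π/2 × (BPD + OFD); this is a simplification, as real systems fit and trace the full ellipse outline, and the occipitofrontal diameter (OFD) values shown are invented for illustration.

```python
import math

def head_circumference(bpd_mm, ofd_mm):
    """Approximate head circumference from two perpendicular diameters
    using the simple ellipse-perimeter estimate pi/2 * (d1 + d2)."""
    return math.pi / 2 * (bpd_mm + ofd_mm)

# Hypothetical third-trimester calipers: BPD 90 mm, OFD 110 mm
hc = head_circumference(90, 110)
print(f"HC ~ {hc:.1f} mm")  # about 314 mm
```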
Three- and four-dimensional (3D/4D) ultrasound have further revolutionized the ability to acquire and process images, especially in the arena of CAD and image recognition. Many major ultrasound manufacturers have developed their own software, such as VOCAL (Virtual Organ Computer-aided Analysis) from GE or S-Detect for breast imaging from Samsung. Image recognition has boomed in fetal ultrasonography, with groups studying virtually every plane and fetal organ.
One of the earliest interests was 3D computer-aided analysis of the fetal heart. One study showed that satisfactory views of the four-chamber heart, outflow tracts, and stomach were obtained in only 43% to 65% of cases, and fewer in settings of obesity or a spine-up fetal position.8 More recently, spatiotemporal image correlation (STIC) volume data sets have been used to identify nine standard fetal echocardiographic views with up to 98% sensitivity for screening congenital heart disease.9 Other examples include the fetal thymus, for which CAD has assisted with border identification and accurate volume measurement of this complex pyramidal structure in 77% of cases.10 CAD has even been studied for identification of key characteristics of placenta accreta spectrum11 and of cervical length, funneling, and sludge to predict preterm labor in patients with a short cervix.12 Gestational age is no longer a limiting factor, as these techniques have expanded to the first trimester with Volume NT by Samsung and SonoNT by GE.
FetalHQ is another software package that uses speckle tracking to analyze the motion of multiple points on the fetal heart, providing information on its size, shape, and function. Running this complex algorithm requires only a simple 2D video clip of the beating four-chamber view and 3 minutes of post-processing analysis, with the results potentially yielding a wealth of information on fetal cardiac function.13
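The idea behind speckle tracking can be sketched as a block-matching search: for each tracked point, find the small shift between consecutive frames that best preserves its surrounding speckle pattern. This is a bare-bones illustration on synthetic data; commercial implementations such as FetalHQ are far more sophisticated.

```python
import numpy as np

def track_speckle(frame_a, frame_b, point, block=5, search=3):
    """Follow one speckle point from frame_a to frame_b by finding the
    shift of its surrounding block that minimizes the sum of squared
    differences (SSD) within a small search window."""
    y, x = point
    r = block // 2
    template = frame_a[y - r:y + r + 1, x - r:x + r + 1]
    best, best_err = (0, 0), np.inf
    for dy in range(-search, search + 1):
        for dx in range(-search, search + 1):
            cand = frame_b[y + dy - r:y + dy + r + 1,
                           x + dx - r:x + dx + r + 1]
            err = np.sum((cand - template) ** 2)
            if err < best_err:
                best, best_err = (dy, dx), err
    return (y + best[0], x + best[1])

# Demo: a random "speckle" frame shifted down 1 pixel and right 2 pixels
rng = np.random.default_rng(0)
frame_a = rng.random((20, 20))
frame_b = np.roll(frame_a, shift=(1, 2), axis=(0, 1))
print(track_speckle(frame_a, frame_b, (10, 10)))  # (11, 12)
```

Repeating this search for many points over a full cardiac cycle is what lets the software describe how each wall segment moves and deforms.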
Newly emerging software includes SonoCNS Fetal Brain (Figure 2), developed by GE Healthcare, and 5D CNS+ by Samsung.14 Both use deep learning to take one 3D sweep of the fetal brain and automatically recognize and measure the essential structures, including the posterior fossa, ventricles, BPD, and HC. As it stands now, however, most providers use image recognition as a second pass or confirmation to increase their diagnostic accuracy and are not looking for a replacement for their clinical acumen and years of training.
The future of AI in ob/gyn ultrasound
Data, and a lot of it, is the fundamental requirement for creating a successful deep learning application. One of the practical challenges for software developers is ethically and efficiently obtaining de-identified patient data to build one. One of the biggest players on the AI block is the UK-based company Intelligent Ultrasound, which acquired over 1 million high-quality images from real obstetric scans to develop algorithms for its ScanNav software. The goals of ScanNav are to provide real-time guidance to sonographers by automatically capturing the six correct images recommended by the UK fetal anomaly-screening program and to provide an audit showing that all the images were obtained. In a sense, this adds a layer of quality improvement to ensure that optimal patient care is being delivered.
The software is still in development, and remaining challenges include providing real-time guidance for probe placement, especially with unique patient considerations such as obesity. A crucial aspect to consider in these situations is patient privacy. While individual data may be de-identified, further advances in machine learning may make it possible to re-identify individuals if appropriate safety measures aren’t taken with data security.15
At the end of 2018, SonoScape Medical (Shenzhen, China) announced development of its S-Fetus algorithm, designed for the S60 ultrasound system, which will scan the entire fetus with a single cine loop. Thousands of real images were used to develop algorithms that identify appropriate landmarks and make accurate measurements. In addition, in true deep learning fashion, the system continues to fine-tune its analysis with each additional exam it performs. The S-Fetus software will select the best images and automatically measure key growth components, consolidating the multistep process of obtaining fetal biometry into a single push of a button. In addition to saving an immense amount of time and keystrokes for each patient, it will alert the sonographer if manual adjustments or measurements do not meet image standards, thus providing feedback and resulting in better images.
Fetal ultrasonography is a mainstay of routine prenatal care. Significant advancements have been made over the years to improve image quality and diagnostic accuracy while maintaining ease, reproducibility, and efficiency for the sonographers performing and the physicians interpreting the images. One of AI’s greatest benefits is removing ultrasonography’s dependency on the operator and standardizing our approach to improve patient safety, especially in low-resource settings where expertise may otherwise be lacking. Keep your eyes and ears open, as the data and hype about this technology are only going to skyrocket in our field. The preliminary schedule for ISUOG in Berlin in October of this year includes courses on how big data and AI may impact our field, and the sessions are sure to be well attended. AI and deep learning certainly warrant all the buzz and energy surrounding them but, realistically, are not yet sophisticated enough to replace obstetricians, maternal-fetal medicine specialists, or radiologists. Rest assured, we don’t have to worry about our job security quite yet.
The authors report no potential conflicts of interest with regard to this article.