Artificial intelligence (AI) has begun to change the world. More and more movies depict its possible impacts, and more and more households have smart devices that enable the use of virtual assistants for almost everything. The medical field is no exception, with artificial intelligence now aiding physicians' decision making. Radiology departments, in particular, may use advanced applications that draw on both applied and artificial intelligence.
What is the difference between applied and artificial intelligence?
The difference between applied intelligence and artificial intelligence is a subtle one, and the two terms are often used interchangeably. Both can propel medical imaging toward more personalized care and, as such, should be understood by healthcare professionals.
- Applied intelligence refers to the software system that enables artificial intelligence.1 In the healthcare field, it helps to extract and interpret data across healthcare systems, devices and imaging equipment. This may help provide actionable insights, as well as enhance and augment decision making. In everyday life, applied intelligence may be the programs that virtual assistants run.
- Artificial intelligence, or AI, refers to a device's ability to mimic human intelligence or to perform tasks that have traditionally required human thought.1 The branch of computer science that deals with artificial intelligence attempts to design programs that can complete actions software was previously incapable of, such as those involving visual perception, speech recognition and decision making. This is possible because the program design allows the device to continue to learn. Virtual assistants may be considered artificial intelligence.
- Machine learning refers to a system's ability to automatically learn and improve from experience, or data, without being explicitly programmed.
- Deep learning refers to a network of systems capable of learning, unsupervised, from unstructured or unlabeled information.
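To make the machine learning definition above concrete, here is a minimal sketch, in Python with invented toy data, of a program whose decision rule is learned from examples rather than written by hand. Real medical-imaging systems use far more sophisticated models; this only illustrates the "learning from data" idea.

```python
# Minimal machine-learning sketch: a nearest-centroid classifier.
# No rule is hand-coded; the decision boundary comes from the data.

def train(examples):
    """Learn one centroid (mean point) per label from (point, label) pairs."""
    sums, counts = {}, {}
    for (x, y), label in examples:
        sx, sy = sums.get(label, (0.0, 0.0))
        sums[label] = (sx + x, sy + y)
        counts[label] = counts.get(label, 0) + 1
    return {label: (sx / counts[label], sy / counts[label])
            for label, (sx, sy) in sums.items()}

def predict(centroids, point):
    """Assign the label of the closest learned centroid."""
    px, py = point
    return min(centroids,
               key=lambda lbl: (centroids[lbl][0] - px) ** 2 +
                               (centroids[lbl][1] - py) ** 2)

# Toy "training data": two clusters of 2D points.
examples = [((0, 0), "A"), ((1, 0), "A"), ((0, 1), "A"),
            ((5, 5), "B"), ((6, 5), "B"), ((5, 6), "B")]
model = train(examples)
print(predict(model, (0.5, 0.5)))  # near the first cluster -> "A"
print(predict(model, (5.5, 5.5)))  # near the second cluster -> "B"
```

Nothing here tells the program what separates "A" from "B"; the boundary emerges from the training examples, which is the nuance the definition captures.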
What is the impact of advanced applications that use deep learning on magnetic resonance imaging?
There are three main areas in which AI is being developed for medical imaging: intelligent applications, intelligent scanners and intelligent services.1 Each area could potentially improve and streamline work for the radiology department, leaving more time to focus on and interact with the patient.
Intelligent applications may help staff make decisions faster through the use of actionable insights.1 These insights may be provided by software solutions and applications that utilize machine learning or deep learning algorithms.
In fact, Dr. Yaou Liu of the department of radiology at Beijing Tiantan Hospital attempted to ease radiologists' workload by employing one such deep-learning algorithm.2 The algorithm was designed to help identify abnormal MR images, select the best protocol for detected diseases and direct patients to the correct department for treatment. The team uploaded a total of 5,806 brain images, 4,639 of which were used to train the algorithm. The remaining images were used to test the algorithm's ability to detect potential tumors, ischemic cerebrovascular disease and multiple sclerosis. During the test, the algorithm achieved a sensitivity of 85%, specificity of 96% and accuracy of 94%.2 Radiologists still reviewed the results to confirm or correct them.
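The figures quoted follow the standard definitions of these diagnostic metrics. The sketch below shows how they are computed from a confusion matrix; the counts are invented to reproduce the reported percentages and are not the study's actual data (the article does not give the case mix).

```python
# Standard diagnostic-test metrics from a confusion matrix.
# tp/fn/tn/fp counts below are invented for illustration only.

def metrics(tp, fn, tn, fp):
    sensitivity = tp / (tp + fn)                # true positive rate
    specificity = tn / (tn + fp)                # true negative rate
    accuracy = (tp + tn) / (tp + fn + tn + fp)  # overall correct fraction
    return sensitivity, specificity, accuracy

sens, spec, acc = metrics(tp=170, fn=30, tn=864, fp=36)
print(f"sensitivity={sens:.0%} specificity={spec:.0%} accuracy={acc:.0%}")
# -> sensitivity=85% specificity=96% accuracy=94%
```

Note that accuracy depends on the mix of positive and negative cases, which is why a test can have 85% sensitivity yet 94% overall accuracy when negatives predominate.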
What is the impact of advanced applications that automate slices on magnetic resonance imaging (MRI)?
The advanced applications being developed for radiology departments are not just about deep or machine learning. Some companies are also attempting to automate the scan itself, with radiologists overseeing it.3,4 This is especially true for MRI, which requires the scanner to collect a large amount of data and process it with the help of the radiologist or radiographer. Each image is called a slice and shows a cross-section of an area of the patient's anatomy at a specific angle.3
One software application enables automatic slice selection and positioning.3,4 This is done by training a deep learning algorithm to recognize anatomical landmarks. Once the program has recognized a landmark, it can use previously acquired data to determine where the region of interest is in relation to that landmark. Automating this step has the potential to optimize and expedite scanning, possibly reducing scan times per patient. In the past, radiographers have had to manually select the slices and adjust positioning multiple times before scanning.
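The landmark-to-slice step can be sketched in the abstract. In this hypothetical example, all coordinates, offsets and the landmark name are invented, and real systems operate on full 3D image volumes; a detected landmark plus a stored anatomical offset yields a stack of slice positions without manual adjustment.

```python
# Hypothetical sketch of landmark-based slice positioning.
# A real system detects landmarks in a 3D localizer volume with a
# deep-learning model; here the "detected" landmark is just a stored point.

def position_slices(landmark, offset, spacing, count):
    """Place `count` parallel slices starting at landmark + offset,
    stepping along z by `spacing` (all values in mm, scanner axes)."""
    lx, ly, lz = landmark
    ox, oy, oz = offset
    start = (lx + ox, ly + oy, lz + oz)
    return [(start[0], start[1], start[2] + i * spacing)
            for i in range(count)]

# Invented numbers: a landmark at z=40 mm, slices offset 10 mm inferior,
# 5 mm apart, 4 slices.
slices = position_slices(landmark=(0.0, 0.0, 40.0),
                         offset=(0.0, 0.0, -10.0),
                         spacing=5.0, count=4)
print(slices)  # slice z-positions: 30.0, 35.0, 40.0, 45.0
```

The point of the sketch is that once the landmark is found, slice placement reduces to deterministic geometry, which is what makes it automatable.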
Tom Schrack, Manager of MR Education and Technical Development at Fairfax Radiological Consultants in Fairfax, Virginia, cites the software that automatically prescribes slices as one reason that scan times may have been reduced.3 He was also impressed by the accuracy of the landmark identification. He hopes that this and other applications will continue to fulfill the promise of AI in MR imaging.
Automated slice selection and deep learning have enabled radiology to become more patient friendly through the use of AI. Reduced scan times due to AI assistance may help more patients be able to have MRI studies done. Automated slice selection may allow radiographers to interact with patients more, which could help the patients feel more at ease. With these software programs, medical imaging may be easier for the patient and the radiologists, just as AI has made technology more accessible for those who have virtual assistants. In the future, it is possible that AI will continue to ease the often overwhelming workload for radiology departments.
For more information, please read SIGNA Pulse "Exploring MR powered by Applied Intelligence."
For more information about deep learning, please read SIGNA Pulse "Introducing intelligent MR powered by deep learning."
1. Victor Justo. "Exploring MR powered by Applied Intelligence." SIGNA Pulse. Spring 2018. Web. 22 April 2019. <http://www.gesignapulse.com/signapulse/spring_2018/MobilePagedArticle.action?articleId=1396203&app=false#articleId1396203>.
2. Wayne Forrest. "Deep-learning MRI algorithm aids in neurological diagnoses." AuntMinnie.com. 2 March 2019. Web. 22 April 2019. <https://www.auntminnie.com/index.aspx?sec=rca&sub=ecr_2019&pag=dis&ItemID=124750>.
3. "No matter how you slice it, this AI tech is changing MR neuro imaging: Fairfax hospital seeing results from deep learning-based application for MRI." The Pulse. 25 March 2019. Web. 30 April 2019. <http://newsroom.gehealthcare.com/this-ai-tech-is-changing-mr-neuro-imaging/>.
4. "GE Healthcare's FDA approved MR neuro deep-learning software, AIRx, increases consistency and productivity." dotmed.com. 14 March 2019. Web. 30 April 2019. <https://www.dotmed.com/news/story/46576>.
5. Mary Beth Mussat. "Introducing intelligent MR powered by deep learning." SIGNA Pulse. Autumn 2018. Web. 22 April 2019. <http://www.gesignapulse.com/signapulse/autumn_2018/MobilePagedArticle.action?articleId=1444512&app=false#articleId1444512>.