Purpose To demonstrate the association between coronary vessel wall thickness (VWT) measured at MRI and coronary artery disease (CAD) risk in asymptomatic groups at low and intermediate risk on the basis of Framingham score. Materials and Methods A total of 131 asymptomatic adults were prospectively enrolled. All participants underwent CT angiography for scoring of CAD, and coronary VWT was measured at 3.0-T MRI. Nonlinear single- and multivariable regression analyses, with consideration of interaction with sex, were performed to investigate the association of traditional atherosclerotic risk factors and VWT with CT angiography-based CAD scores. Results The analysis included 62 women and 62 men with low or intermediate Framingham scores of less than 20%. Age (mean age, 45.0 years ± 14.5 [standard deviation]) and body mass index did not differ between the groups. Age, sex, and VWT were each significantly associated with all CT angiography-based CAD scores (P < .05). Additionally, sex was a significant effect modifier of the associations with all CAD scores. In men, age was the only statistically significant independent risk factor for CAD; in women, VWT was the only statistically significant independent surrogate associated with increased CAD scores (P < .05). Conclusion In asymptomatic women, VWT at MRI was the primary independent surrogate of CAD, whereas age was the strongest risk factor in men. This study suggests that VWT may be used as a CAD surrogate in women at low or intermediate risk of CAD. Further longitudinal studies are required to determine the potential implications and use of this MRI technique for the preventative management of CAD in women. © RSNA, 2019.

Purpose To assess the performance of an automated myocardial T2 and extracellular volume (ECV) quantification method using transfer learning of a fully convolutional neural network (CNN) pretrained to segment the myocardium on T1 mapping images. Materials and Methods A single CNN previously trained and tested using 11 550 manually segmented native T1-weighted images was used to segment the myocardium for automated myocardial T2 and ECV quantification. Reference measurements from 1525 manually processed T2 maps and 1525 ECV maps (from 305 patients) were used to evaluate the performance of the pretrained network. The correlation coefficient (R) and Bland-Altman analysis were used to assess agreement between automated and reference values on per-patient, per-slice, and per-segment analyses. Furthermore, the effectiveness of transfer learning in the CNN was evaluated by comparing its performance with that of four CNNs trained using manually segmented T2-weighted and postcontrast T1-weighted images and initialized using random weights. © RSNA, 2020.
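The agreement analysis described in the abstract above (correlation coefficient and Bland-Altman comparison of automated against manually derived T2 and ECV values) can be illustrated with a minimal sketch. This is not the authors' code; the function name, array names, and the synthetic values in the usage example are assumptions for illustration only.

```python
import numpy as np
from scipy.stats import pearsonr

def agreement_stats(automated, reference):
    """Correlation and Bland-Altman statistics between paired measurements
    (e.g., per-patient mean T2 or ECV from automated vs manual segmentation)."""
    automated = np.asarray(automated, dtype=float)
    reference = np.asarray(reference, dtype=float)

    r, p_value = pearsonr(automated, reference)      # correlation coefficient R
    diff = automated - reference                      # paired differences
    bias = diff.mean()                                # Bland-Altman bias
    half_width = 1.96 * diff.std(ddof=1)              # half-width of 95% limits of agreement
    return {
        "R": r,
        "P": p_value,
        "bias": bias,
        "limits_of_agreement": (bias - half_width, bias + half_width),
    }

# Hypothetical usage with synthetic per-patient T2 values (msec), not study data:
reference_t2 = np.random.normal(50, 4, size=305)
automated_t2 = reference_t2 + np.random.normal(0, 1.5, size=305)
print(agreement_stats(automated_t2, reference_t2))
```

The same function would be applied separately at the per-patient, per-slice, and per-segment levels by passing the corresponding paired value arrays.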
Purpose To develop a multichannel deep neural network (mcDNN) classification model based on multiscale brain functional connectome data and to demonstrate the value of this model by using attention deficit hyperactivity disorder (ADHD) detection as an example. Materials and Methods In this retrospective case-control study, existing data from the Neuro Bureau ADHD-200 dataset consisting of 973 participants were used. Multiscale functional brain connectomes based on both anatomic and functional criteria were constructed. The mcDNN model used the multiscale brain connectome data and personal characteristic data (PCD) as joint features to detect ADHD and to identify the most predictive brain connectome features for ADHD diagnosis. The mcDNN model was compared with single-channel deep neural network (scDNN) models, and classification performance was evaluated through cross-validation and hold-out validation with the metrics of accuracy, sensitivity, specificity, and area under the receiver operating characteristic curve (AUC). Results In the cross-validation, the mcDNN model using combined features (fusion of the multiscale brain connectome data and PCD) achieved the best performance in ADHD detection, with an AUC of 0.82 (95% confidence interval [CI] 0.80, 0.83), compared with scDNN models using the features of the brain connectome at each individual scale and PCD independently. In the hold-out validation, the mcDNN model achieved an AUC of 0.74 (95% CI 0.73, 0.76). Conclusion An mcDNN model was developed for multiscale brain functional connectome data, and its utility for ADHD detection was demonstrated. By fusing the multiscale brain connectome data, the mcDNN model improved ADHD detection performance considerably over the use of a single scale. © RSNA, 2019.

A publicly available dataset containing k-space data as well as Digital Imaging and Communications in Medicine (DICOM) image data of knee images for accelerated MR image reconstruction using machine learning is presented. © RSNA, 2020.

Purpose To evaluate the use of artificial intelligence (AI) to shorten digital breast tomosynthesis (DBT) reading time while maintaining or improving accuracy. Materials and Methods A deep learning AI system was developed to identify suspicious soft-tissue and calcified lesions in DBT images. A reader study compared the performance of 24 radiologists (13 of whom were breast subspecialists) reading 260 DBT examinations (including 65 cancer cases) both with and without AI. Readings occurred in two sessions separated by at least 4 weeks. Area under the receiver operating characteristic curve (AUC), reading time, sensitivity, specificity, and recall rate were evaluated with statistical methods for multireader, multicase studies. Results Radiologist performance for the detection of malignant lesions, measured by mean AUC, increased by 0.057 with the use of AI (95% confidence interval [CI] 0.028, 0.087; P < .01), from 0.795 without AI to 0.852 with AI. Reading time decreased by 52.7% (95% CI 41.8%, 61.5%; P < .01), from 64.1 seconds without AI to 30.4 seconds with AI. Sensitivity increased from 77.0% without AI to 85.0% with AI (8.0%; 95% CI 2.6%, 13.4%; P < .01), specificity increased from 62.7% without AI to 69.6% with AI (6.9%; 95% CI 3.0%, 10.8%; noninferiority P < .01), and the recall rate for noncancers decreased from 38.0% without AI to 30.9% with AI (7.2%; 95% CI 3.1%, 11.2%; noninferiority P < .01). Conclusion The concurrent use of an accurate DBT AI system was found to improve cancer detection efficacy in a reader study that demonstrated increases in AUC, sensitivity, and specificity and a reduction in recall rate and reading time. © RSNA, 2019. See also the commentary by Hsu and Hoyt in this issue.
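The multireader, multicase analysis in the DBT study above relies on dedicated statistical methods (not shown here); the sketch below only illustrates how the per-reader case-level metrics reported in that abstract (AUC, sensitivity, specificity, and recall rate for noncancers) could be computed, assuming each reader provides a continuous suspicion score and a binary recall decision per case. Function and variable names are illustrative assumptions, not the study's software.

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def reader_metrics(truth, scores, recall_flags):
    """Per-reader summary metrics for one reading session.

    truth        : 1 for cancer cases, 0 for noncancer cases
    scores       : continuous suspicion scores per case (used for AUC)
    recall_flags : 1 if the reader would recall the case, else 0
    """
    truth = np.asarray(truth)
    scores = np.asarray(scores, dtype=float)
    recall_flags = np.asarray(recall_flags)

    auc = roc_auc_score(truth, scores)                        # case-level AUC
    sensitivity = recall_flags[truth == 1].mean()             # recalled cancers / all cancers
    specificity = 1 - recall_flags[truth == 0].mean()         # non-recalled noncancers / all noncancers
    recall_rate_noncancer = recall_flags[truth == 0].mean()   # recall rate among noncancers
    return auc, sensitivity, specificity, recall_rate_noncancer
```

In a reader study these per-reader values would then be pooled across readers and cases (with and without AI) using multireader, multicase variance estimation before confidence intervals and P values are reported.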
Purpose To describe an unsupervised three-dimensional cardiac motion estimation network (CarMEN) for deformable motion estimation from two-dimensional cine MR images. Materials and Methods A motion estimation function was implemented using CarMEN, a convolutional neural network that takes two three-dimensional input volumes and outputs a motion field. A smoothness constraint was imposed on the field by regularizing the Frobenius norm of its Jacobian matrix. CarMEN was trained and tested with data from 150 cardiac patients who underwent MRI examinations and was validated on synthetic (n = 100) and pediatric (n = 33) datasets. CarMEN was compared with five state-of-the-art nonrigid body registration methods by using several performance metrics, including the Dice similarity coefficient (DSC) and end-point error. Results On the synthetic dataset, CarMEN achieved a median DSC of 0.85, which was higher than that of all five methods (minimum-maximum median [or MMM], 0.67-0.84; P < .05 vs all other methods); all P values were derived from pairwise testing. For all other metrics, CarMEN achieved better accuracy on all datasets than all other techniques except for one, which had the worst motion estimation accuracy. Conclusion The proposed deep learning-based approach for three-dimensional cardiac motion estimation allowed the derivation of a motion model that balances motion characterization and image registration accuracy and achieved motion estimation accuracy comparable to or better than that of several state-of-the-art image registration algorithms. © RSNA, 2019. Supplemental material is available for this article.
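The smoothness constraint described above penalizes the Frobenius norm of the Jacobian of the displacement field, i.e., a term proportional to the integral of the sum of squared spatial partial derivatives of the field. The sketch below is a minimal finite-difference illustration of such a penalty, together with the Dice similarity coefficient used as an evaluation metric; it is not the authors' code (a training implementation would use a differentiable framework rather than NumPy), and the array shapes and function names are assumptions.

```python
import numpy as np

def jacobian_frobenius_penalty(disp, spacing=1.0):
    """Smoothness penalty: mean squared Frobenius norm of the Jacobian of a
    3D displacement field.

    disp : array of shape (3, D, H, W) holding the x, y, z displacement
           components on a regular voxel grid.
    """
    penalty = 0.0
    for component in disp:                               # each displacement component
        grads = np.gradient(component, spacing)          # partial derivatives along each axis
        penalty += sum(np.mean(g ** 2) for g in grads)   # accumulate squared partials
    return penalty                                       # mean ||J||_F^2 over the volume

def dice_coefficient(mask_a, mask_b):
    """Dice similarity coefficient between two binary segmentation masks."""
    mask_a = np.asarray(mask_a, dtype=bool)
    mask_b = np.asarray(mask_b, dtype=bool)
    intersection = np.logical_and(mask_a, mask_b).sum()
    return 2.0 * intersection / (mask_a.sum() + mask_b.sum())
```

During training, a penalty of this form would be added to the image-similarity loss so that the network trades registration accuracy against motion-field smoothness.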
Purpose To investigate the feasibility of using a deep learning-based approach to detect an anterior cruciate ligament (ACL) tear within the knee joint at MRI, with arthroscopy as the reference standard. Materials and Methods A fully automated deep learning-based diagnosis system was developed by using two deep convolutional neural networks (CNNs) to isolate the ACL on MR images, followed by a classification CNN to detect structural abnormalities within the isolated ligament. With institutional review board approval, sagittal proton density-weighted and fat-suppressed T2-weighted fast spin-echo MR images of the knee in 175 subjects with a full-thickness ACL tear (98 male and 77 female subjects; average age, 27.5 years) and 175 subjects with an intact ACL (100 male and 75 female subjects; average age, 39.4 years) were retrospectively analyzed by using the deep learning approach. Sensitivity and specificity of the ACL tear detection system and of five clinical radiologists for detecting an ACL tear were determined. © RSNA, 2019. Supplemental material is available for this article.

Purpose To identify the role of radiomic texture features both within and outside the nodule in predicting (a) time to progression (TTP) and overall survival (OS) as well as (b) response to chemotherapy in patients with non-small cell lung cancer (NSCLC). Materials and Methods Data from a total of 125 patients who had been treated with pemetrexed-based platinum doublet chemotherapy at the Cleveland Clinic were retrospectively analyzed. The patients were divided randomly into two sets, with the constraint that there were an equal number of responders and nonresponders in the training set. The training set comprised 53 patients with NSCLC, and the validation set comprised 72 patients. A machine learning classifier trained with radiomic texture features extracted from intra- and peritumoral regions of non-contrast-enhanced CT images was used to predict response to chemotherapy. The radiomic risk-score signature was generated by using the least absolute shrinkage and selection operator (LASSO) with the Cox regression model, and its association with TTP and OS for patients with NSCLC was assessed (see the illustrative sketch after the final abstract below). © RSNA, 2019. Supplemental material is available for this article.

Many over-the-counter drug products lack official compendial analytical methods. As a result, the United States Pharmacopeia and the United States Food and Drug Administration are seeking to develop and validate new methods to establish analysis standards for assessing the pharmaceutical quality of over-the-counter drug products. Diphenhydramine and phenylephrine hydrochloride oral solution, a combination drug product, was identified as needing a compendial standard. Therefore, an ultra-high-performance liquid chromatography method was developed to separate and quantify the two drug compounds and eleven related organic impurities. As part of a robustness study, the separation was demonstrated on different high-performance liquid chromatography systems and columns from different manufacturers and showed little dependence on changes in flow rate, column temperature, detection wavelength, injection volume, and mobile phase gradient. The method was then validated in conformance with International Council for Harmonisation guidelines.
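Referring back to the NSCLC radiomics abstract above: a radiomic risk score of the kind described (a LASSO-penalized Cox model) is typically the Cox model's linear predictor, a weighted sum of the selected texture features. The sketch below illustrates only that final scoring and dichotomization step under stated assumptions; the feature names, coefficient values, and median-split rule are hypothetical and do not reproduce the study's actual pipeline, in which the coefficients would come from fitting the penalized Cox model on the training set.

```python
import numpy as np

# Hypothetical coefficients that a LASSO-penalized Cox model might retain after
# feature selection; in practice these come from the fitted model (e.g., via
# lifelines or glmnet), not from hand-picked values.
selected_features = ["intratumoral_haralick_entropy",
                     "peritumoral_gabor_energy",
                     "intratumoral_laws_texture_e5l5"]
cox_coefficients = np.array([0.42, -0.31, 0.18])

def radiomic_risk_score(feature_matrix):
    """Linear radiomic risk score: the Cox model's linear predictor X @ beta.

    feature_matrix : array of shape (n_patients, n_selected_features), columns
    ordered as in `selected_features` and standardized as during training.
    """
    return np.asarray(feature_matrix, dtype=float) @ cox_coefficients

# Hypothetical usage: dichotomize patients at the training-set median score.
train_scores = radiomic_risk_score(np.random.normal(size=(53, 3)))
cutoff = np.median(train_scores)
validation_scores = radiomic_risk_score(np.random.normal(size=(72, 3)))
high_risk = validation_scores > cutoff   # groups whose TTP/OS would be compared (e.g., log-rank test)
print(high_risk.sum(), "of", len(high_risk), "validation patients classified as high risk")
```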