The performance of the proposed imaging system is validated with in vivo and ex vivo targets. Experimental results obtained from several tungsten filaments in the depth range of 1.2 mm show that the proposed system improves the -6 dB lateral resolution from 55-287 μm to 25-29 μm and the signal-to-noise ratio (SNR) from 16-22 dB to 27-33 dB.

The Purkinje system is a cardiac structure responsible for transmitting electrical impulses through the ventricles in a fast, coordinated way to trigger mechanical contraction. Estimating a compatible patient-specific Purkinje network from an electro-anatomical map is a challenging task that could help improve electrophysiology simulation models or aid therapy planning, such as radiofrequency ablation. In this study, we present a methodology to inversely estimate a Purkinje network from a patient's electro-anatomical map. First, we carry out a simulation study to assess the accuracy of the method for different synthetic Purkinje network morphologies and myocardial junction densities. Second, we estimate the Purkinje network from a set of 28 electro-anatomical maps from patients, obtaining an optimal conduction velocity in the Purkinje network of 1.95 ± 0.25 m/s, together with the locations of the Purkinje-myocardial junctions and the Purkinje network structure. Our results showed an average local activation time error of 6.8 ± 2.2 ms in the endocardium. Finally, using the personalized Purkinje network, we obtained correlations higher than 0.85 between simulated and clinical 12-lead ECGs.

Cine cardiac magnetic resonance imaging (MRI) is widely used for the diagnosis of cardiac diseases thanks to its ability to present cardiovascular features in excellent contrast. Compared to computed tomography (CT), however, MRI requires a long scan time, which inevitably induces motion artifacts and causes patient discomfort. There has thus been strong clinical motivation to develop techniques that reduce both the scan time and motion artifacts. Given its successful application to other medical imaging tasks, such as MRI super-resolution and CT metal artifact reduction, deep learning is a promising approach for cardiac MRI motion artifact reduction. In this paper, we propose a novel recurrent generative adversarial network model for cardiac MRI motion artifact reduction. The model combines bi-directional convolutional long short-term memory (ConvLSTM) and multi-scale convolutions: the bi-directional ConvLSTMs handle long-range temporal features, while the multi-scale convolutions gather both local and global spatial features. We demonstrate good generalizability of the proposed method, owing to a network architecture that captures the essential relationships of cardiovascular dynamics. Extensive experiments show that our method achieves better image quality on cine cardiac MRI than existing state-of-the-art methods. In addition, it can generate reliable missing intermediate frames from their adjacent frames, improving the temporal resolution of cine cardiac MRI sequences.
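To make the recurrent component concrete, the sketch below shows one common way to implement a ConvLSTM cell and run it bi-directionally over a cine sequence. It is a minimal PyTorch illustration under assumed shapes and hyper-parameters, not the authors' actual network.

```python
import torch
import torch.nn as nn

class ConvLSTMCell(nn.Module):
    """A single ConvLSTM cell: an LSTM whose gates are computed with
    convolutions, so the hidden state keeps its spatial layout."""
    def __init__(self, in_ch, hid_ch, k=3):
        super().__init__()
        self.hid_ch = hid_ch
        # One convolution produces all four gates at once.
        self.conv = nn.Conv2d(in_ch + hid_ch, 4 * hid_ch, k, padding=k // 2)

    def forward(self, x, state):
        h, c = state
        i, f, o, g = self.conv(torch.cat([x, h], dim=1)).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)  # cell update
        h = torch.sigmoid(o) * torch.tanh(c)                         # hidden state
        return h, c

def bidirectional_convlstm(frames, fwd, bwd):
    """Run two ConvLSTM cells over a cine sequence, one per temporal
    direction, and concatenate their hidden states frame by frame."""
    B, T, C, H, W = frames.shape
    def run(cell, order):
        h = frames.new_zeros(B, cell.hid_ch, H, W)
        c = torch.zeros_like(h)
        outs = {}
        for t in order:
            h, c = cell(frames[:, t], (h, c))
            outs[t] = h
        return outs
    f_out = run(fwd, range(T))
    b_out = run(bwd, reversed(range(T)))
    return torch.stack([torch.cat([f_out[t], b_out[t]], dim=1)
                        for t in range(T)], dim=1)

# Example: a 12-frame cine sequence of 64x64 single-channel images.
frames = torch.randn(2, 12, 1, 64, 64)
out = bidirectional_convlstm(frames, ConvLSTMCell(1, 32), ConvLSTMCell(1, 32))
print(out.shape)   # torch.Size([2, 12, 64, 64, 64])
```

Because each frame's output mixes hidden states from both temporal directions, every prediction can draw on past and future frames alike, which is what makes this construction suited to interpolating missing intermediate frames.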
Regression-based face alignment learns a series of mapping functions to predict the true landmark positions from an initial alignment estimate. Most existing approaches focus on learning effective mapping functions from feature representations to improve performance; the issues related to the initial alignment estimate and the final learning objective, however, receive less attention. This work proposes a deep regression architecture with progressive reinitialization and a new error-driven learning loss function to explicitly address these two issues. Given an image with a rough face detection result, the full face region is first mapped by a supervised spatial transformer network to a normalized form and trained to regress coarse landmark positions. Each face part is then reinitialized to its own normalized state, followed by another regression sub-network that refines the landmark positions. To deal with inconsistent annotations across existing training datasets, we further propose an adaptive landmark-weighted loss function. It dynamically adjusts the importance of each landmark according to its learning error during training, without depending on any hyper-parameters set manually by trial and error. The whole deep architecture permits end-to-end training, and extensive experimental comparisons demonstrate its effectiveness and efficiency.

Representations in the form of Symmetric Positive Definite (SPD) matrices have become popular in a variety of visual learning applications due to their demonstrated ability to capture rich second-order statistics of visual data. Several similarity measures exist for comparing SPD matrices, each with documented benefits; however, selecting an appropriate measure for a given problem remains a challenge and is, in most cases, the result of a trial-and-error process. In this paper, we propose to learn similarity measures in a data-driven manner. To this end, we capitalize on the alpha-beta log-det divergence, a meta-divergence parametrized by scalars alpha and beta that subsumes a wide family of popular information divergences on SPD matrices for distinct, discrete values of these parameters. Our key idea is to cast these parameters in a continuum and learn them from data. We systematically extend this idea to learn vector-valued parameters, thereby increasing the expressiveness of the underlying non-linear measure. We conjoin the divergence learning problem with several standard machine learning tasks, including supervised discriminative dictionary learning and unsupervised SPD matrix clustering. We present Riemannian descent schemes for optimizing our formulations efficiently and show the usefulness of our method on eight standard computer vision tasks.
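As a concrete reference point, the snippet below evaluates the scalar alpha-beta log-det divergence between two SPD matrices via generalized eigenvalues, using the standard closed form of this meta-divergence; treating alpha and beta as continuous, learnable quantities, as the paper proposes, amounts to differentiating this expression with respect to them. This is an illustrative sketch, not the authors' implementation.

```python
# Sketch: scalar alpha-beta log-det divergence between SPD matrices P, Q,
# assumed here in the standard form
#   D(P||Q) = (1/(a*b)) * sum_i log((a * l_i**b + b * l_i**(-a)) / (a + b)),
# where l_i are the eigenvalues of P Q^{-1}; valid for a, b, a + b != 0
# (the limiting cases recover familiar log-det divergences such as Burg/KL).
import numpy as np
from scipy.linalg import eigh

def ab_logdet_divergence(P, Q, a, b):
    # Generalized eigenvalues of P v = l Q v equal the spectrum of P Q^{-1}.
    lam = eigh(P, Q, eigvals_only=True)
    return np.sum(np.log((a * lam**b + b * lam**(-a)) / (a + b))) / (a * b)

# Quick check: the divergence vanishes when P == Q.
A = np.random.randn(5, 5)
P = A @ A.T + 5 * np.eye(5)                    # random SPD matrix
print(ab_logdet_divergence(P, P, 0.5, 0.5))    # ~0.0
```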
This paper proposes a novel distance metric learning algorithm, named adaptive neighborhood metric learning (ANML). In ANML, we design two thresholds that adaptively identify the inseparable similar and dissimilar samples during training, so that inseparable-sample removal and metric parameter learning are performed within the same procedure. Because the resulting objective is non-continuous, we develop a log-exp mean function to construct a continuous surrogate formulation. The proposed method has interesting properties. For example, when ANML is used to learn a linear embedding, well-known metric learning algorithms such as large margin nearest neighbor (LMNN) and neighbourhood components analysis (NCA) become special cases of ANML under particular parameter settings. Moreover, compared with LMNN and NCA, ANML has a broader search space that may contain better solutions. When ANML is used to learn deep features, state-of-the-art deep metric learning losses such as the triplet loss, lifted structure loss, and multi-similarity loss likewise become special cases of our method. Furthermore, the proposed log-exp mean function offers a new perspective from which to view deep metric learning methods such as Proxy-NCA and the N-pairs loss. Experiments are conducted to demonstrate the effectiveness of the proposed method.

We propose the first stochastic framework to exploit uncertainty for RGB-D saliency detection by learning from the data labeling process. Existing RGB-D saliency detection models treat the task as a point estimation problem, predicting a single saliency map through a deterministic learning pipeline. We argue, however, that this deterministic formulation is relatively ill-posed. Inspired by the saliency data labeling process, we propose a generative architecture for probabilistic RGB-D saliency detection that uses a latent variable to model labeling variations. Our framework includes two main models: 1) a generator model, which maps the input image and latent variable to a stochastic saliency prediction, and 2) an inference model, which gradually updates the latent variable by sampling it from the true or an approximate posterior distribution. The generator model is an encoder-decoder saliency network. To infer the latent variable, we introduce two different solutions: i) a conditional variational auto-encoder with an extra encoder that approximates the posterior distribution of the latent variable; and ii) an alternating back-propagation technique, which samples the latent variable directly from the true posterior distribution. Qualitative and quantitative results on six challenging RGB-D benchmark datasets show our approach's superior performance in learning the distribution of saliency maps.

This paper generalizes the Attention in Attention (AiA) mechanism proposed in [1] by employing explicit mappings into reproducing kernel Hilbert spaces to generate the attention values of the input feature map. The AiA mechanism models the capacity to build inter-dependencies among local and global features through the interaction of inner and outer attention modules. Besides a vanilla AiA module, termed linear attention with AiA, two non-linear counterparts, namely second-order polynomial attention and Gaussian attention, are proposed to exploit the non-linear properties of the input features explicitly, via the second-order polynomial kernel and a Gaussian kernel approximation, respectively. The deep convolutional neural network equipped with the proposed AiA blocks is referred to as the Attention in Attention Network (AiA-Net). AiA-Net learns to extract a discriminative pedestrian representation that combines complementary person appearance and corresponding part features. Extensive ablation studies verify the effectiveness of the AiA mechanism and of using the non-linear features hidden in the feature map for attention design. Furthermore, our approach outperforms the current state of the art by a considerable margin on a number of benchmarks, and state-of-the-art performance is also achieved on the video person retrieval task with the assistance of the proposed AiA blocks.
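To illustrate what kernelized attention values can look like, the sketch below computes channel attention scores through explicit kernel feature maps: an exact feature map for the second-order polynomial kernel and a random-Fourier-feature approximation of the Gaussian kernel (Rahimi and Recht, 2007). The pooling, score normalization, and module shapes are assumptions for illustration and do not reproduce the AiA-Net design.

```python
import torch
import torch.nn as nn

def poly2_features(x):
    """Exact feature map of the homogeneous second-order polynomial
    kernel k(u, v) = (u . v)^2: all pairwise products u_i * u_j."""
    B, C = x.shape
    return (x.unsqueeze(2) * x.unsqueeze(1)).reshape(B, C * C)

class RFFGaussianFeatures(nn.Module):
    """Random Fourier features approximating the Gaussian kernel
    (Rahimi & Recht, 2007): k(u, v) ~ z(u) . z(v)."""
    def __init__(self, in_dim, n_feats, sigma=1.0):
        super().__init__()
        self.register_buffer("W", torch.randn(in_dim, n_feats) / sigma)
        self.register_buffer("b", 2 * torch.pi * torch.rand(n_feats))

    def forward(self, x):
        return (2.0 / self.W.shape[1]) ** 0.5 * torch.cos(x @ self.W + self.b)

class KernelChannelAttention(nn.Module):
    """Channel attention whose scores are computed from an explicit
    kernel feature map of the globally pooled descriptor."""
    def __init__(self, channels, feature_map):
        super().__init__()
        self.feature_map = feature_map      # poly2_features or RFF module
        self.fc = nn.LazyLinear(channels)   # kernel features -> channel scores

    def forward(self, x):                   # x: (B, C, H, W)
        pooled = x.mean(dim=(2, 3))         # global average pooling
        scores = torch.sigmoid(self.fc(self.feature_map(pooled)))
        return x * scores[:, :, None, None] # re-weight the channels

# Hypothetical usage: Gaussian attention over a 64-channel feature map.
att = KernelChannelAttention(64, RFFGaussianFeatures(64, 256))
y = att(torch.randn(2, 64, 16, 16))        # output has the input's shape
```

Swapping `RFFGaussianFeatures(64, 256)` for `poly2_features` yields the polynomial variant; the linear case corresponds to using the pooled descriptor directly.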
The popularity of deep learning techniques has renewed interest in neural architectures able to process complex structures that can be represented as graphs, as exemplified by Graph Neural Networks (GNNs). We focus on the original GNN model of Scarselli et al. (2009), which encodes the state of the graph's nodes by means of an iterative diffusion procedure that propagates information among neighbouring nodes and, during learning, must be run at every epoch until the fixed point of a learnable state transition function is reached. We propose a novel approach to learning in GNNs based on constrained optimization in the Lagrangian framework. Learning the transition function and the node states is the outcome of a joint process in which state convergence is expressed implicitly by a constraint-satisfaction mechanism, avoiding iterative epoch-wise procedures and network unfolding. Our computational scheme searches for saddle points of the Lagrangian in the adjoint space composed of weights, node state variables, and Lagrange multipliers. The process is further enhanced by multiple layers of constraints that accelerate the diffusion process. An experimental analysis shows that the proposed approach compares favourably with popular models on several benchmarks.
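As a rough illustration of such a Lagrangian formulation, the sketch below treats the node states and multipliers as free variables and performs one saddle-point search step: gradient descent on states and weights, gradient ascent on the multipliers attached to the fixed-point constraint. The names, dense adjacency representation, and update rule are simplifying assumptions, not the paper's algorithm.

```python
import torch
import torch.nn as nn

def lagrangian_step(x, lam, f, head, y, adj, lr=1e-2):
    """One saddle-point search step for Lagrangian GNN learning:
    descend in node states and weights, ascend in multipliers."""
    x = x.detach().requires_grad_(True)
    residual = x - f(adj @ x)              # constraint g(x) = x - f(neighbour sum)
    loss = nn.functional.cross_entropy(head(x), y)
    L = loss + (lam * residual).sum()      # the Lagrangian
    params = list(f.parameters()) + list(head.parameters())
    grads = torch.autograd.grad(L, [x] + params)
    with torch.no_grad():
        x -= lr * grads[0]                 # descent on node states
        for p, g in zip(params, grads[1:]):
            p -= lr * g                    # descent on weights
        lam += lr * residual.detach()      # ascent on Lagrange multipliers
    return x.detach(), lam

# Example wiring (hypothetical sizes): 16 nodes, 8-dim states, 3 classes.
N, D, C = 16, 8, 3
f = nn.Sequential(nn.Linear(D, D), nn.Tanh())   # state transition function
head = nn.Linear(D, C)                          # output (readout) function
x = torch.zeros(N, D)                           # free node state variables
lam = torch.zeros(N, D)                         # Lagrange multipliers
adj = (torch.rand(N, N) < 0.2).float()          # random dense adjacency
y = torch.randint(0, C, (N,))                   # node labels
x, lam = lagrangian_step(x, lam, f, head, y, adj)
```

At a saddle point the residual vanishes, so the states satisfy the fixed-point equation without ever iterating the diffusion to convergence inside each epoch, which is the key computational saving the abstract describes.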