Multiview subspace clustering has attracted increasing attention in recent years. However, most existing multiview subspace clustering methods either assume linear relations between multiview data points when learning the affinity representation by means of self-expression, or fail to preserve the locality property of the original feature space in the learned affinity representation. To address these issues, in this article we propose a new multiview subspace clustering method termed smoothness regularized multiview subspace clustering with kernel learning (SMSCK). To capture the nonlinear relations between multiview data points, the proposed model maps the concatenated multiview observations into a high-dimensional kernel space, in which linear relations reflect the nonlinear relations between multiview data points in the original space. In addition, to explicitly preserve the locality property of the original feature space in the learned affinity representation, a smoothness regularization is deployed in the subspace learning in the kernel space. Theoretical analysis ensures that the optimal solution of the proposed model exhibits the grouping effect. The unique optimal solution can be obtained by the proposed optimization strategy, and a theoretical convergence analysis is also provided. Extensive experiments on both image and document data sets, together with comparisons against state-of-the-art methods, demonstrate the effectiveness of our method.

With the rapid development of sensor technologies, multisensor signals are now readily available for health condition monitoring and remaining useful life (RUL) prediction. To fully utilize these signals for better health condition assessment and RUL prediction, health indices are often constructed through various data fusion techniques.
Nevertheless, most existing methods fuse signals linearly, which may not be sufficient to characterize the health status for RUL prediction. To address this issue and improve predictability, this article proposes a novel nonlinear data fusion approach, namely, a shape-constrained neural data fusion network for health index construction. Specifically, a neural network-based structure is employed, and a novel loss function is formulated by simultaneously considering the monotonicity and curvature of the constructed health index and its variability at the failure time. A tailored adaptive moment estimation (Adam) algorithm is proposed for model parameter estimation. The effectiveness of the proposed method is demonstrated and compared through a case study using the Commercial Modular Aero-Propulsion System Simulation (C-MAPSS) data set.

In this article, a manifold learning algorithm based on straight-like geodesics and local coordinates, called SGLC-ML for short, is proposed. The contribution and innovation of SGLC-ML are twofold. First, SGLC-ML divides the manifold data into a number of straight-like geodesics, instead of a number of local areas as many manifold learning algorithms do. Figuratively speaking, SGLC-ML covers the manifold data set with a sparse net woven from threads (straight-like geodesics), while other manifold learning algorithms cover it with a tight roof made of tiles (local areas). Second, SGLC-ML maps all straight-like geodesics into straight lines of a low-dimensional Euclidean space. All these straight lines start from the same point and extend along the same coordinate axis. These straight lines are exactly the local coordinates of the straight-like geodesics, as described in the mathematical definition of a manifold. With the help of local coordinates, dimensionality reduction can be divided into two relatively simple processes: calculation and alignment of local coordinates.
However, many manifold learning algorithms seem to ignore the advantages of local coordinates. Experimental comparisons between SGLC-ML and other state-of-the-art algorithms are presented to verify the good performance of SGLC-ML.

In the context of supervised statistical learning, it is typically assumed that the training set comes from the same distribution from which the test samples are drawn. When this is not the case, the behavior of the learned model is unpredictable and becomes dependent on the degree of similarity between the distribution of the training set and the distribution of the test set. One of the research topics that investigates this scenario is referred to as domain adaptation (DA). Deep neural networks have brought dramatic advances in pattern recognition, which is why there have been many attempts to provide good DA algorithms for these models. Herein, we take a different avenue and approach the problem from an incremental point of view, in which the model is adapted to the new domain iteratively. We make use of an existing unsupervised domain-adaptation algorithm to identify the target samples for which there is greater confidence about their true label. The output of the model is analyzed in different ways to determine the candidate samples. The selected samples are then added to the source training set by self-labeling, and the process is repeated until all target samples are labeled. This approach implements a form of adversarial training in which, by moving the self-labeled samples from the target to the source set, the DA algorithm is forced to look for new features after each iteration. Our results report a clear improvement with respect to the non-incremental case on several data sets, also outperforming other state-of-the-art DA algorithms.

Multiagent reinforcement learning (MARL) has been extensively used in many applications for its tractable implementation and task distribution.
Learning automata, which fall under MARL in the category of independent learners, are used to obtain the optimal joint action or some type of equilibrium. Learning automata have the following advantages. First, learning automata do not require any agent to observe the actions of any other agent. Second, learning automata are simple in structure and easy to implement. Learning automata have been applied to function optimization, image processing, data clustering, recommender systems, and wireless sensor networks. However, only a few learning automata-based algorithms have been proposed for the optimization of cooperative repeated games and stochastic games. We propose an algorithm termed learning automata for optimization of cooperative agents (LA-OCA). To make learning automata applicable to cooperative tasks, we transform the environment into a P-model by introducing an indicator variable whose value is one when the maximal reward is obtained and is zero otherwise.
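The P-model transformation described above can be sketched in a few lines. The following minimal example is illustrative only: the function names are hypothetical, and the linear reward-inaction update shown alongside it is standard learning-automata machinery, not a detail taken from the LA-OCA abstract.

```python
def p_model_signal(reward, max_reward):
    """P-model transformation: binarize the environment feedback,
    returning 1 only when the maximal reward is attained, 0 otherwise."""
    return 1 if reward >= max_reward else 0


def reward_inaction_update(probs, chosen, signal, lam=0.1):
    """Standard linear reward-inaction (L_R-I) update for a learning
    automaton: on a favorable signal (1), shift probability mass toward
    the chosen action; on an unfavorable signal (0), leave probs unchanged."""
    if signal == 1:
        probs = [(1 - lam) * p for p in probs]  # shrink all actions
        probs[chosen] += lam                    # reward the chosen action
    return probs
```

For instance, with `lam=0.1` and an initial distribution `[0.5, 0.5]`, a favorable signal for action 0 shifts the distribution to approximately `[0.55, 0.45]`, while an unfavorable signal leaves it untouched.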