This article proposes an adaptive neural-network control scheme for a rigid manipulator with input saturation, full-order state constraints, and unmodeled dynamics. An adaptive law based on a multiplication-operation solution is presented to reduce the adverse effect of input saturation, and this law converges to the specified ratio of the desired input to the saturation boundary as the closed-loop system stabilizes. A neural network is employed to approximate the unmodeled dynamics. Moreover, the barrier Lyapunov function methodology is utilized to guarantee that the control system constrains the input and the full-order states. It is proved that all states of the closed-loop system are uniformly ultimately bounded under the presented constraints and input saturation. Simulation results verify the stability analyses of input saturation and full-order state constraint, which coincide with the preset boundaries.

In this article, a pinning control strategy is developed for the finite-horizon H∞ synchronization problem of a class of discrete time-varying nonlinear complex dynamical networks in a digital communication environment. To comply with digitized data exchange, a feedback-type dynamic quantizer is introduced to model the transformation of raw signals into discrete-valued ones. A quantized pinning control scheme is then applied to a small fraction of the network nodes, with the aim of reducing control costs while achieving the desired global synchronization objective. Subsequently, by resorting to the completing-the-square technique, a sufficient condition is established to guarantee the finite-horizon H∞ index of the synchronization error dynamics against both quantization errors and external noises.
Moreover, a controller design algorithm is put forward via an auxiliary H₂-type criterion, and the desired controller gains are obtained in terms of two coupled backward Riccati equations. Finally, the validity of the presented results is verified via a simulation example.

Expensive optimization problems arise in diverse fields, and the cost of function evaluations poses a serious challenge to global optimization algorithms. In this article, a simple yet effective algorithm for computationally expensive optimization problems, called the neighborhood regression optimization algorithm, is proposed. For a minimization problem, the algorithm applies a regression technique over a neighborhood structure to predict a descent direction, which is then used to generate new potential offspring around the best solution obtained so far. The proposed algorithm is compared with 12 popular algorithms on two benchmark suites with up to 30 decision variables. Empirical results demonstrate that the proposed algorithm shows clear advantages on unimodal and smooth problems and is better than or competitive with the other peer algorithms in overall performance. In addition, the algorithm is efficient and maintains a good tradeoff between solution quality and running time.

Recently, deep-learning-based feature extraction (FE) methods have shown great potential in hyperspectral image (HSI) processing. Unfortunately, they also bring a challenge: training deep learning networks requires large amounts of labeled samples, which are hardly available for HSI data. To address this issue, a novel unsupervised deep-learning-based FE method trained in an end-to-end style is proposed in this article. The proposed framework consists of an encoder subnetwork and a decoder subnetwork.
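The neighborhood-regression idea above can be sketched roughly as follows. This is a minimal illustration, not the article's exact formulation: the neighborhood size `k`, the step size, and the quadratic test objective are all assumptions made for the example.

```python
import numpy as np

def sphere(x):
    # Illustrative "expensive" objective (an assumption, not from the article).
    return float(np.sum(x ** 2))

def regression_descent_direction(points, values):
    # Fit a linear model f(x) ~ c + g.x over a neighborhood of already
    # evaluated points; the fitted slope g approximates the local gradient,
    # so -g (normalized) serves as a predicted descent direction.
    A = np.hstack([np.ones((len(points), 1)), points])
    coef, *_ = np.linalg.lstsq(A, values, rcond=None)
    g = coef[1:]
    norm = np.linalg.norm(g)
    return -g / norm if norm > 0 else g

rng = np.random.default_rng(0)
dim, k = 5, 12
# Neighborhood: the k best solutions evaluated so far (illustrative choice).
pop = rng.uniform(-5, 5, size=(40, dim))
vals = np.array([sphere(x) for x in pop])
order = np.argsort(vals)
best = pop[order[0]]
neigh, neigh_vals = pop[order[:k]], vals[order[:k]]

direction = regression_descent_direction(neigh, neigh_vals)
offspring = best + 0.5 * direction  # new candidate generated around the best solution
```

Because the direction comes from a regression over already evaluated points, no extra expensive evaluations are spent on estimating it, which is the appeal on expensive problems.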
The structure of the two subnetworks is symmetric to obtain better downsampling and upsampling representations. To account for both spectral and spatial information, 3-D all-convolution nets and deconvolution nets are used to build the encoder and decoder subnetworks, respectively. However, 3-D convolution and deconvolution kernels introduce more parameters, which can deteriorate the quality of the obtained features. To alleviate this problem, a novel cost function with a sparse regularization term is designed to obtain a more robust feature representation. Experimental results on publicly available datasets indicate that the proposed method obtains robust and effective features for subsequent classification tasks.

Feature selection is one of the most frequent tasks in data mining applications. Its ability to remove useless and redundant features, improve classification performance, and yield knowledge about a given problem makes feature selection a common first step in data mining. In many applications, we need to combine the results of different feature selection processes; the two most common scenarios are ensembles of feature selectors and the scaling up of feature selection methods through data division. The standard procedure is to record the number of times each feature has been selected as a vote for that feature, and then evaluate different selection thresholds with a certain criterion to obtain the final subset of selected features. However, this method is suboptimal because the relationships among features are not considered in the voting process: two redundant features may be selected a similar number of times due to the different sets of instances used in each run, so a voting scheme tends to select both of them.
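The standard voting aggregation described above can be sketched in a few lines. The feature-selection runs are simulated here with fixed index sets purely for illustration, and the majority threshold is one plausible choice among the thresholds the text says are evaluated.

```python
from collections import Counter

# Results of several feature-selection runs (ensemble members or data
# partitions); each entry lists the indices of the selected features.
# These index sets are illustrative, not from the article.
runs = [
    {0, 1, 3},
    {0, 1, 4},
    {0, 3, 4},
    {0, 1, 3},
]

votes = Counter()
for selected in runs:
    votes.update(selected)          # one vote per feature per run

threshold = len(runs) // 2 + 1      # keep features selected in a majority of runs
final = sorted(f for f, v in votes.items() if v >= threshold)
print(final)                        # → [0, 1, 3]
```

Note that this aggregation sees only per-feature counts, so two mutually redundant features that each pass the threshold are both kept, which is exactly the weakness the article targets.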
In this article, we present a new approach: instead of using only the number of times a feature has been selected, it considers how many times features have been selected together by a feature selection algorithm. The proposal is based on constructing an undirected graph in which the vertices are the features and each edge counts the number of times the corresponding pair of features has been selected together. This graph is used to select the best subset of features, avoiding the redundancy introduced by the voting scheme. The proposal improves the results of the standard voting scheme in both ensembles of feature selectors and data-division methods for scaling up feature selection.
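Building the co-selection graph itself is straightforward; a minimal sketch follows, with the edge weights stored in a dictionary keyed by feature pairs. The toy runs are illustrative, and how the final subset is extracted from the graph is the article's contribution and is not reproduced here.

```python
from collections import Counter
from itertools import combinations

# The same illustrative feature-selection runs as a voting scheme would see.
runs = [
    {0, 1, 3},
    {0, 1, 4},
    {0, 3, 4},
    {0, 1, 3},
]

# Undirected graph: vertices are features, and the weight of edge (u, v)
# counts how many runs selected features u and v together.
edges = Counter()
for selected in runs:
    for u, v in combinations(sorted(selected), 2):
        edges[(u, v)] += 1

print(dict(edges))
```

Unlike per-feature vote counts, these pairwise weights expose which features always travel together across runs, giving a subset-selection step the information it needs to drop one member of a redundant pair.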