
Table of Contents

    21 November 2017, Volume 0 Issue 11
    An Improved Particle Swarm Optimization Algorithm Containing Resident Particles
    ZHANG Hong-wei, ZHANG Xiang-feng, ZHU Chen-xuan
    2017, 0(11):  1-5+12.  doi:10.3969/j.issn.1006-2475.2017.11.001
    An improved particle swarm optimization algorithm (CRPSO) is proposed to alleviate the premature convergence of the basic particle swarm optimization (PSO) algorithm. In the improved algorithm, the particles of the basic PSO are called main particles. Whenever the algorithm finds a better globally optimal point, it generates several particles, named resident particles, around that point. The two kinds of particles cooperate: main particles are responsible for global search and resident particles for local search. Resident particles help main particles avoid falling into local extrema and improve the diversity of the whole swarm. Simulation results validate the feasibility and effectiveness of the algorithm.
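    Below is a minimal Python sketch of the main/resident cooperation described above; the swarm sizes, the number of resident particles spawned per improvement and the spawn radius are illustrative assumptions rather than the paper's settings.

        import numpy as np

        def crpso_sketch(f, dim=10, n_main=30, n_resident=5, radius=0.1,
                         iters=200, bounds=(-5.0, 5.0), w=0.7, c1=1.5, c2=1.5):
            lo, hi = bounds
            x = np.random.uniform(lo, hi, (n_main, dim))        # main particles
            v = np.zeros_like(x)
            pbest, pbest_val = x.copy(), np.apply_along_axis(f, 1, x)
            g = pbest[np.argmin(pbest_val)].copy()               # global best
            g_val = pbest_val.min()
            residents = np.empty((0, dim))
            for _ in range(iters):
                r1, r2 = np.random.rand(*x.shape), np.random.rand(*x.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = np.clip(x + v, lo, hi)                       # main particles: global search
                vals = np.apply_along_axis(f, 1, x)
                better = vals < pbest_val
                pbest[better], pbest_val[better] = x[better], vals[better]
                if pbest_val.min() < g_val:                      # a better global optimum found:
                    g_val = pbest_val.min()
                    g = pbest[np.argmin(pbest_val)].copy()
                    # spawn resident particles around the new global best (local search)
                    residents = g + radius * np.random.randn(n_resident, dim)
                if len(residents):
                    r_vals = np.apply_along_axis(f, 1, residents)
                    if r_vals.min() < g_val:                     # residents may refine the optimum
                        g_val, g = r_vals.min(), residents[np.argmin(r_vals)].copy()
                    residents += 0.1 * radius * np.random.randn(*residents.shape)
            return g, g_val

        # usage: minimize the sphere function
        best_x, best_val = crpso_sketch(lambda z: float(np.sum(z * z)))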
    Feature Set Optimization Algorithm of Fall Detection Based on Neighborhood Consistency and DBPSO
    WU Ke-yan, ZHANG Shu-ya, HUANG Yan-zi, LIU Shou-yin
    2017, 0(11):  6-12.  doi:10.3969/j.issn.1006-2475.2017.11.002
    At present there is no standard, authoritative fall-detection test dataset, and the sample size obtained from young people imitating falls is small, so finding the most representative feature set from a limited dataset is particularly important. Considering that the feature set has few samples and continuous-valued features, a feature set optimization algorithm based on neighborhood consistency and discrete binary particle swarm optimization (DBPSO) was proposed. The algorithm first constructs a primary feature set using an optimized neighborhood consistency function and a heuristic forward search, and then uses the primary feature set to initialize the DBPSO population. Finally, the validity of the algorithm is verified with a classification algorithm. The experimental results show that the algorithm improves classification ability with fewer selected features, and the computational efficiency is also improved.
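    A minimal DBPSO sketch in Python follows; the fitness function, the swarm size and the seeding of part of the population from a primary feature mask are illustrative assumptions about such a scheme, not the paper's exact procedure.

        import numpy as np

        def sigmoid(v):
            return 1.0 / (1.0 + np.exp(-v))

        def dbpso(fitness, n_features, primary_mask=None, n_particles=20,
                  iters=100, w=0.8, c1=2.0, c2=2.0):
            # binary positions; optionally seed part of the swarm from a primary feature set
            x = (np.random.rand(n_particles, n_features) < 0.5).astype(int)
            if primary_mask is not None:
                x[: n_particles // 2] = primary_mask
            v = np.zeros((n_particles, n_features))
            pbest = x.copy()
            pbest_fit = np.array([fitness(p) for p in x])
            g = pbest[np.argmax(pbest_fit)].copy()
            for _ in range(iters):
                r1, r2 = np.random.rand(*v.shape), np.random.rand(*v.shape)
                v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
                x = (np.random.rand(*v.shape) < sigmoid(v)).astype(int)   # DBPSO bit update
                fit = np.array([fitness(p) for p in x])
                improved = fit > pbest_fit
                pbest[improved], pbest_fit[improved] = x[improved], fit[improved]
                g = pbest[np.argmax(pbest_fit)].copy()
            return g

        # toy usage: prefer masks that select the first three of ten features
        target = np.array([1, 1, 1, 0, 0, 0, 0, 0, 0, 0])
        best_mask = dbpso(lambda m: -np.sum(np.abs(m - target)), n_features=10)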
    Text Categorization Based on Graph Kernel
    JIANG Qiang-rong, SONG Lie-jin
    2017, 0(11):  13-16+61.  doi:10.3969/j.issn.1006-2475.2017.11.003
    In text classification, the vector space model is simple to construct, but it only represents the frequency of feature words and ignores the structural and word-order semantic information between words, which may cause different documents to be represented by the same vector. To address this problem, this paper represents a text with a graph structure model: each text is represented as a directed graph (a text graph), which effectively compensates for the missing structural information. The graph kernel technique is applied to text classification, a graph kernel algorithm suitable for computing the similarity between text graphs is proposed, and a support vector machine is then used to classify the texts. Experimental results on the text set show that, compared with the vector space model, the interval walk kernel achieves better classification accuracy than other kernel functions, so it is a good graph-structure similarity algorithm that can be widely used in text classification.
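    The Python sketch below only illustrates the directed graph-of-word representation and a very simple walk-based similarity that counts shared edges; it is not the interval walk kernel used in the paper.

        from collections import Counter

        def text_graph(text, window=2):
            """Directed graph-of-word: an edge w1 -> w2 for words co-occurring
            within a sliding window, weighted by the co-occurrence count."""
            words = text.lower().split()
            edges = Counter()
            for i, w1 in enumerate(words):
                for w2 in words[i + 1 : i + window]:
                    edges[(w1, w2)] += 1
            return edges

        def walk_similarity(g1, g2):
            """Common walks of length one: sum of products of shared edge weights."""
            common = set(g1) & set(g2)
            return sum(g1[e] * g2[e] for e in common)

        g_a = text_graph("the cat sat on the mat")
        g_b = text_graph("the cat lay on the mat")
        print(walk_similarity(g_a, g_b))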
    A Fast Route Planning Method with 3D Reconstruct Triangular Grid
    MENG Zhao-xu, CAI Chao
    2017, 0(11):  17-22+34.  doi:10.3969/j.issn.1006-2475.2017.11.004
    3D environment reconstruction data are often represented as a triangular grid, and path planning based on 3D reconstruction data is basic work for a UAV's autonomous flight. However, the main difficulties in the planning process are the large amount of planning data, the long planning time and the excessive memory consumption. Based on the triangular grid environment data obtained by 3D reconstruction, this paper proposes a three-dimensional path planning algorithm based on visibility analysis. In experiments, the visibility-analysis method was compared with the A* algorithm and particle swarm optimization. The results show that the proposed algorithm can rapidly generate a feasible three-dimensional track while using less memory.
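    At its core, visibility analysis over a triangular grid reduces to segment-triangle intersection tests; a minimal Python sketch using the Moller-Trumbore test follows (the helper names and the visibility criterion are illustrative assumptions, not the paper's implementation).

        import numpy as np

        def segment_hits_triangle(orig, dest, v0, v1, v2, eps=1e-9):
            """Moller-Trumbore test: does the straight segment orig->dest
            cross the triangle (v0, v1, v2)?"""
            d = dest - orig
            e1, e2 = v1 - v0, v2 - v0
            p = np.cross(d, e2)
            det = np.dot(e1, p)
            if abs(det) < eps:
                return False                      # segment parallel to the triangle plane
            inv = 1.0 / det
            t_vec = orig - v0
            u = np.dot(t_vec, p) * inv
            if u < 0.0 or u > 1.0:
                return False
            q = np.cross(t_vec, e1)
            v = np.dot(d, q) * inv
            if v < 0.0 or u + v > 1.0:
                return False
            t = np.dot(e2, q) * inv
            return 0.0 < t < 1.0                  # hit lies strictly between the endpoints

        def visible(p, q, triangles):
            """Two waypoints are mutually visible if the segment between them
            hits no triangle of the reconstructed grid."""
            return not any(segment_hits_triangle(p, q, *tri) for tri in triangles)

        tri = (np.array([0., 0., 1.]), np.array([1., 0., 1.]), np.array([0., 1., 1.]))
        print(visible(np.zeros(3), np.array([0.2, 0.2, 2.0]), [tri]))   # False: blocked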
    Classification Method of Motor Imagery EEG Signal Based on Improved CSP Algorithm
    MA Man-zhen, GUO Li-bin, SU Kui-feng
    2017, 0(11):  23-28.  doi:10.3969/j.issn.1006-2475.2017.11.005
    To address the low classification accuracy and poor real-time performance of the traditional common spatial patterns (CSP) algorithm in motor imagery EEG signal processing, a new CSP-based EEG analysis method in the time-space-frequency domain is put forward. Firstly, wavelet packet decomposition is applied to the original EEG signal, the motor imagery EEG rhythms are extracted according to the frequency distribution of the EEG signal, and the spatial features are extracted by the improved CSP algorithm. Then, a time window is introduced to filter the EEG signals and eliminate the influence of EEG fluctuations at the beginning and end of the motor imagery. Lastly, according to the physiological distribution of EEG signals over the cerebral cortex, a spindle-channel-based method is used to process the EEG signal, and the computation time and classification results of the different algorithms are analyzed. The experimental results show that the running time of the algorithm is 1.562 s, 67% shorter than the traditional method, and the average classification accuracy reaches 97.5% when the number of spindle channels is 29 and the time window is 2 s. The results also show that the proposed method can effectively improve the classification accuracy and real-time performance of motor imagery EEG.
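    A minimal Python sketch of the classic CSP spatial-filtering step is shown below; it covers neither the paper's improved CSP nor the wavelet packet, time window and spindle-channel stages, and the trial shapes in the usage example are illustrative.

        import numpy as np
        from scipy.linalg import eigh

        def csp_filters(trials_a, trials_b, n_pairs=3):
            """Classic CSP: trials_* have shape (n_trials, n_channels, n_samples).
            Returns 2*n_pairs spatial filters (rows) that maximize the variance
            ratio between the two motor imagery classes."""
            def mean_cov(trials):
                covs = []
                for x in trials:
                    c = x @ x.T
                    covs.append(c / np.trace(c))          # normalized spatial covariance
                return np.mean(covs, axis=0)
            ca, cb = mean_cov(trials_a), mean_cov(trials_b)
            # generalized eigenproblem  ca w = lambda (ca + cb) w
            eigvals, eigvecs = eigh(ca, ca + cb)
            order = np.argsort(eigvals)                    # extreme eigenvalues matter
            picks = np.r_[order[:n_pairs], order[-n_pairs:]]
            return eigvecs[:, picks].T

        def csp_features(trial, filters):
            """Log-variance features of the spatially filtered trial."""
            z = filters @ trial
            var = np.var(z, axis=1)
            return np.log(var / var.sum())

        # usage with random data: 10 trials, 29 channels, 500 samples per class
        rng = np.random.default_rng(0)
        a, b = rng.standard_normal((10, 29, 500)), rng.standard_normal((10, 29, 500))
        w = csp_filters(a, b)
        print(csp_features(a[0], w))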
    A Resource Scheduling Mechanism of Hadoop YARN
    LI Cheng, CHAI Xiao-li, XIE Bin, TANG Peng
    2017, 0(11):  29-34.  doi:10.3969/j.issn.1006-2475.2017.11.006
    YARN is the resource management system widely used in Hadoop. It supports MapReduce, Spark, Storm and other computing frameworks, and has become a core component of the big data ecosystem. However, the resource guarantee mechanism in Hadoop YARN's existing resource schedulers, which is based on resource reservation, produces resource fragmentation and leads to wasted resources. In order to improve the resource utilization and throughput of the cluster, this paper proposes a resource allocation mechanism based on reservation and backfill. The mechanism decides, according to job priority, whether to reserve resources for a job, and introduces a backfill strategy that backfills resources without affecting the execution of reserved jobs. Experiments show that the reservation-and-backfill scheduling mechanism can effectively improve the resource utilization and throughput of a Hadoop YARN cluster.
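    A minimal Python sketch of one reservation-and-backfill scheduling pass follows; the job model, the release-time estimate and the backfill condition are illustrative assumptions, not YARN scheduler code.

        from dataclasses import dataclass

        @dataclass
        class Job:
            name: str
            priority: int
            demand: int        # containers required
            duration: int      # estimated run time

        def schedule_step(queue, free, running, now):
            """One scheduling pass: reserve capacity for the highest-priority job
            that does not fit, then backfill jobs that will not delay that
            reservation. `running` is a list of (end_time, demand) pairs."""
            queue.sort(key=lambda j: -j.priority)
            launched, reservation = [], None
            for job in list(queue):
                if reservation is None and job.demand <= free:
                    free -= job.demand
                    launched.append(job)
                    queue.remove(job)
                elif reservation is None:
                    # estimate when enough containers are released for this job
                    freed, start = free, now
                    for end, demand in sorted(running):
                        freed += demand
                        if freed >= job.demand:
                            start = end
                            break
                    reservation = (job, start)   # no lower-priority job may delay it
                elif job.demand <= free and now + job.duration <= reservation[1]:
                    free -= job.demand           # backfill: fits now, ends before the reserved start
                    launched.append(job)
                    queue.remove(job)
            return launched, reservation

        queue = [Job("A", 3, demand=8, duration=20), Job("B", 1, demand=2, duration=5)]
        print(schedule_step(queue, free=4, running=[(30, 6)], now=0))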
    Overview on Error Correction Code Technologies for NAND Flash Memory
    PENG Fu-lai, YU Zhi-lou, CHEN Nai-kuo, GENG Shi-hua, BI Yan-shan
    2017, 0(11):  35-40.  doi:10.3969/j.issn.1006-2475.2017.11.007
    This paper presents a review of error correction code (ECC) technologies for NAND flash memory. Firstly, the internal structure and error mechanisms of NAND flash are introduced. Then, four current ECC technologies, Hamming code, RS code, BCH code and LDPC code, are introduced together with their research status. Lastly, these four technologies are analyzed and compared.
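    As a concrete illustration of the simplest of the four code families, here is a minimal (7,4) Hamming encoder with single-bit error correction in Python, using systematic generator and parity-check matrices.

        import numpy as np

        # generator and parity-check matrices of the (7,4) Hamming code
        G = np.array([[1, 0, 0, 0, 1, 1, 0],
                      [0, 1, 0, 0, 1, 0, 1],
                      [0, 0, 1, 0, 0, 1, 1],
                      [0, 0, 0, 1, 1, 1, 1]])
        H = np.array([[1, 1, 0, 1, 1, 0, 0],
                      [1, 0, 1, 1, 0, 1, 0],
                      [0, 1, 1, 1, 0, 0, 1]])

        def encode(data4):
            return (np.array(data4) @ G) % 2

        def correct(word7):
            word7 = np.array(word7)
            syndrome = (H @ word7) % 2
            if syndrome.any():
                # the syndrome equals exactly one column of H: that bit is flipped
                err = next(i for i in range(7) if np.array_equal(H[:, i], syndrome))
                word7[err] ^= 1
            return word7[:4]                  # systematic code: the data bits come first

        cw = encode([1, 0, 1, 1])
        cw[2] ^= 1                             # inject a single-bit error
        print(correct(cw))                     # recovers [1, 0, 1, 1]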
    Application of Multivariable Linear Regression in NMF Face Recognition
    GAO Liang, PAN Ji-yuan, YU Jia-ping
    2017, 0(11):  41-45+54.  doi:10.3969/j.issn.1006-2475.2017.11.008
    A single sub-optimal nonnegative basis usually contains limited face category information, and the recognition rate depends on the corresponding low-dimensional representation. Considering the weak classification ability of NMF, more basis features were introduced, through a careful examination of the NMF face recognition process, to exploit more latent correlated category information. Multivariable linear regression was then used to build a label mapping from the ensemble of weak labels to the true label, which integrates the weakly correlated category structure information and brings the correct category structure to the surface. Results on several face databases show that the statistical label mapping enhances the face recognition capability of NMF.
    Improved Method of Point Cloud Registration Based on FPFH Feature
    MA Da-he, LIU Guo-zhu
    2017, 0(11):  46-50.  doi:10.3969/j.issn.1006-2475.2017.11.009
    The ICP algorithm is the most commonly used algorithm in point cloud registration, and FPFH features can provide initial matching information for registration. An improved point cloud registration method based on FPFH features is proposed. Firstly, the Bhattacharyya distance between the FPFH features of the two point clouds is calculated, and a k-d tree is used to retrieve the corresponding points with the smallest Bhattacharyya distance. Then the initial transformation matrix is computed, and the ICP algorithm yields the final transformation matrix. Experiments show that the method achieves higher accuracy under the same number of iterations.
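    A minimal Python sketch of the Bhattacharyya-distance matching step follows; it uses a brute-force nearest-descriptor search (the paper accelerates this lookup with a k-d tree) and random histograms in place of real FPFH descriptors.

        import numpy as np

        def bhattacharyya_distance(h1, h2, eps=1e-12):
            """Bhattacharyya distance between two normalized histograms
            (e.g. 33-bin FPFH descriptors)."""
            p = h1 / (h1.sum() + eps)
            q = h2 / (h2.sum() + eps)
            bc = np.sum(np.sqrt(p * q))                 # Bhattacharyya coefficient
            return -np.log(bc + eps)

        def match_fpfh(src_feats, dst_feats):
            """For every source descriptor, find the target descriptor with the
            smallest Bhattacharyya distance (brute force)."""
            matches = []
            for i, f in enumerate(src_feats):
                d = [bhattacharyya_distance(f, g) for g in dst_feats]
                matches.append((i, int(np.argmin(d))))
            return matches

        # toy usage with random 33-bin "FPFH" histograms
        rng = np.random.default_rng(1)
        src, dst = rng.random((5, 33)), rng.random((8, 33))
        print(match_fpfh(src, dst))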
    Recognition of Region of Interest from Facial Nerve Image Sequences
    LIN Qin-zhuang, ZHONG Ying-chun, LUO Wei-shi
    2017, 0(11):  51-54.  doi:10.3969/j.issn.1006-2475.2017.11.010
    Computer recognition of the region of interest in facial nerve MR image sequences can help doctors identify specific functional areas during surgical planning and perform the operation in a more targeted way, avoiding additional damage to the facial nerve area. In this paper, facial nerve MR images are segmented into different functional areas, a large number of case samples are learned by a convolutional neural network, and a recognition model with high accuracy is obtained. The experimental results show that the convolutional neural network model can effectively recognize the region of interest in facial nerve image recognition and has a certain medical assistance value.
    Road Extraction in High Resolution Remote Sensing Images Based on Improved K-means Algorithm
    LIU Huan, YAN Zhen
    2017, 0(11):  55-61.  doi:10.3969/j.issn.1006-2475.2017.11.011
    Aiming at the problem of feature extraction in road extraction from high resolution remote sensing images, a road extraction method based on an improved K-means algorithm was proposed. Firstly, preprocessing was performed according to the specific scene of the target image. Secondly, the improved K-means algorithm was used for spectral-textural classification, segmenting the image into road and non-road groups. Thirdly, the geometric features of roads were used to extract reliable road segments. Finally, mathematical morphology was used to complete the road information and obtain the final result. The experimental results show that the method can extract roads in complex scenes with satisfactory results.
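    A minimal Python sketch of the spectral-textural K-means step is shown below; it uses plain K-means rather than the paper's improved variant, and the per-pixel feature layout is an illustrative assumption.

        import numpy as np

        def kmeans(features, k=2, iters=50, seed=0):
            """Plain K-means on per-pixel feature vectors (e.g. spectral bands
            stacked with texture measures); returns a label per pixel."""
            rng = np.random.default_rng(seed)
            centers = features[rng.choice(len(features), k, replace=False)]
            for _ in range(iters):
                d = np.linalg.norm(features[:, None, :] - centers[None, :, :], axis=2)
                labels = d.argmin(axis=1)
                new_centers = np.array([features[labels == j].mean(axis=0)
                                        if np.any(labels == j) else centers[j]
                                        for j in range(k)])
                if np.allclose(new_centers, centers):
                    break
                centers = new_centers
            return labels

        # toy usage: a 100x100 image with 3 spectral bands and 1 texture channel
        h, w = 100, 100
        img = np.random.rand(h, w, 4)
        labels = kmeans(img.reshape(-1, 4)).reshape(h, w)   # 0/1: road vs non-road candidates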
    Image Set Compression with Content Adaptive Sparse Dictionary
    LI Qiang
    2017, 0(11):  62-66.  doi:10.3969/j.issn.1006-2475.2017.11.012
    In the big data era, the huge amount of image information brings considerable difficulties to storage and transmission. The main purpose of image set compression is to exploit the images' own content and remove redundant information. In this paper, an image set compression scheme based on content adaptive sparse dictionaries is proposed. A set of class-specific sparse dictionaries is learned using image content classification information, and these dictionaries are used to replace the traditional transform. In addition, the nonlocal similarity of image patches is used to optimize the decoding problem. Experimental results demonstrate that the proposed method outperforms JPEG and the compression scheme based on the recursive least squares dictionary learning algorithm (RLS-DLA) in terms of compression performance.
    Evaluation Method of Automobile Maintenance and Service Quality Based on Relevance Vector Machine
    SHI Yun, LOU Xin-yuan, ZENG Ming-hua, WU Yan-ru
    2017, 0(11):  67-71.  doi:10.3969/j.issn.1006-2475.2017.11.013
    Aiming at the real performance appraisal requirement of comprehensively evaluating the maintenance and service quality of service stations (including 4S shops) belonging to an automobile manufacturing enterprise, we use the relevance vector machine model to evaluate maintenance and service quality. Experiments show that the relevance vector machine outperforms the traditional artificial neural network, support vector machine and deep neural network in evaluating the service quality of automobile service providers. This evaluation method improves the validity, timeliness and satisfaction of fault treatment, provides the automobile manufacturer's after-sales service department with a basis for selecting good service providers, and gives customers a reference that agrees well with the cause of the failure. At the same time, the selection of evaluation indexes is more comprehensive and detailed, which makes the evaluation model more suitable for practical production applications.
    Optimization Design and Implementation of Environmental Emergency Command Platform Based on Big Data
    YIN Qi-ming
    2017, 0(11):  72-75+94.  doi:10.3969/j.issn.1006-2475.2017.11.014
    To address the conflicts that arise in the sequential scheduling of massive data in an emergency scheduling platform under a big data environment, a platform design method based on parallel scheduling is proposed. The environmental emergency command platform is divided into emergency signal reception and emergency disposal. An adaptive parallel scheduling method is designed for devices including the main controller, the individual soldier system, the emergency communication vehicle, the transceiver circuit and the analog-digital circuit. A fuzzy hierarchical scheduling method is introduced to resolve the conflicts in sequential scheduling. Experimental results show that the designed platform achieves high communication performance and fast emergency response.
    Spatial Keyword Index Method Based on Hadoop
    ZHANG Jin, FENG Jun, LU Jia-min
    2017, 0(11):  76-83.  doi:10.3969/j.issn.1006-2475.2017.11.015
    With the rise of the mobile Internet, huge amounts of data are generated by mobile terminals; these data contain not only traditional text information but also spatial location information. In order to process and utilize these data effectively, efficient spatial keyword index methods have become a hot research topic. However, when facing huge amounts of spatial data, existing spatial keyword index methods still suffer from a lack of scalability and extensibility and a tendency to produce query hotspots. To address these problems, this paper proposes two improvements: a spatial keyword index method based on Hadoop and an optimized parallelizable index-building algorithm. Finally, we compare the proposed spatial keyword index method with existing methods and verify its effectiveness.
    A Long-term Fair Queue Scheduling Algorithm with Utilization of Historical Information
    RUI Mao-hai
    2017, 0(11):  84-88.  doi:10.3969/j.issn.1006-2475.2017.11.016
    Classical delay-based queue scheduling only considers the queueing delay at the scheduling instant and does not memorize historical information, so the fairness of the queueing delay cannot be guaranteed when traffic flows change abruptly. A long-term fairness oriented scheduling algorithm was proposed, which considers not only the instantaneous parameters, i.e. the queue length and the traffic arrival rate, but also a historical parameter, i.e. the historical accumulation of delay. With this algorithm, scheduling is more rational and the queueing delay does not vary as drastically as the traffic flows. Another contribution of this paper is that the relationship among these three parameters is not given but deduced: the paper models long-term fairness and solves the resulting long-term optimization problem to obtain the relationship. Finally, the proposed algorithm was compared with the WRR, RPF and EDF algorithms, confirming that it achieves higher fairness and better stability.
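    A minimal Python sketch of a scheduler that combines the two instantaneous parameters with the historical delay accumulation is shown below; the linear weighting is an illustrative assumption, whereas the paper deduces the actual relationship from a long-term optimization model.

        from dataclasses import dataclass

        @dataclass
        class Flow:
            name: str
            queue_len: float         # instantaneous queue length
            arrival_rate: float      # instantaneous traffic arrival rate
            hist_delay: float = 0.0  # historical accumulation of queueing delay

        def priority(f, a=1.0, b=1.0, c=1.0):
            # illustrative weighting of the instantaneous and historical parameters
            return a * f.queue_len + b * f.arrival_rate + c * f.hist_delay

        def schedule(flows, rounds=3):
            order = []
            for _ in range(rounds):
                chosen = max(flows, key=priority)
                order.append(chosen.name)
                for f in flows:
                    if f is chosen:
                        f.hist_delay = 0.0               # served: accumulated delay resets
                        f.queue_len = max(0.0, f.queue_len - 1.0)
                    else:
                        f.hist_delay += f.queue_len      # waiting flows keep accumulating delay
            return order

        flows = [Flow("video", 5, 2.0), Flow("web", 2, 0.5), Flow("bulk", 8, 1.0)]
        print(schedule(flows))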
    Markov Network Query Expansion Model Based on Term Importance
    WANG Qian-qian, LUO Wen-bing
    2017, 0(11):  89-94.  doi:10.3969/j.issn.1006-2475.2017.11.017
    Term weights are widely used in information retrieval models. To relax the bag-of-words independence assumption of traditional models, term weights based on term importance are introduced into the Markov network query expansion model. To compute the weight of a term, we first build the graph-of-word of the documents; from the graph-of-word we obtain the term co-occurrence matrix and the probability transition matrix between terms, and finally we use a Markov chain to derive the weight of each term. With these term weights incorporated into the Markov network query expansion model, experimental results on 5 standard datasets show that retrieval with the term-importance-based Markov network query expansion model outperforms the traditional bag-of-words model.
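    A minimal Python sketch of deriving term weights from the graph-of-word via a Markov chain over the term transition matrix follows; the window size and the damping factor (added so the power iteration always converges) are illustrative assumptions.

        import numpy as np

        def cooccurrence_matrix(docs, window=2):
            """Term co-occurrence counts within a sliding window, built from
            the graph-of-word of each document."""
            vocab = sorted({w for d in docs for w in d.split()})
            idx = {w: i for i, w in enumerate(vocab)}
            c = np.zeros((len(vocab), len(vocab)))
            for d in docs:
                words = d.split()
                for i, w1 in enumerate(words):
                    for w2 in words[i + 1 : i + window]:
                        c[idx[w1], idx[w2]] += 1
                        c[idx[w2], idx[w1]] += 1
            return vocab, c

        def term_weights(c, iters=100, damping=0.85):
            """Term weight = stationary distribution of the Markov chain whose
            transition matrix is the row-normalized co-occurrence matrix."""
            n = len(c)
            row_sums = c.sum(axis=1, keepdims=True)
            p = np.divide(c, row_sums, out=np.full_like(c, 1.0 / n), where=row_sums > 0)
            w = np.full(n, 1.0 / n)
            for _ in range(iters):
                w = damping * (w @ p) + (1.0 - damping) / n
            return w / w.sum()

        docs = ["markov network query expansion", "query expansion with term importance"]
        vocab, c = cooccurrence_matrix(docs)
        print(dict(zip(vocab, np.round(term_weights(c), 3))))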
    Construction of Automatic Intrusion Detection Model Using K-means Algorithm Based on Novel Cuckoo Search Optimization
    WEI Wan-yun
    2017, 0(11):  95-95+104.  doi:10.3969/j.issn.1006-2475.2017.11.018
    Considering the shortcomings of the traditional K-means clustering algorithm, such as poor global search ability and a manually specified initial number of clusters, an intrusion detection system using an adaptive K-means algorithm optimized by a novel Cuckoo Search algorithm (NCS-AKM) was proposed. In order to increase the diversity of the CS algorithm, a differential-evolution-like strategy was introduced for individual variation. The KDD Cup99 dataset was used to rebuild the training data and four phases of testing data, with a new attack introduced in the third and fourth phases respectively. The experiments indicate that the NCS-AKM system is sensitive to new attacks and obtains satisfactory detection performance and convincing clustering results; the overall detection rate for the four attacks is as high as 83.4% (range: 70.8%~89.9%), while the false positive rate is 6.3% (range: 3.0%~11.5%).
    An Encryption and Cloud Storage PDP Verification Algorithm Based on Multi Chaos Sequences
    LIU Kai-le, CONG Zheng-hai, WANG Bo-wen
    2017, 0(11):  100-104.  doi:10.3969/j.issn.1006-2475.2017.11.019
    To protect the confidentiality and integrity of data stored in the cloud, we propose a new data encryption and PDP verification algorithm based on multiple chaos sequences, which encrypts cloud storage data and supports PDP verification at the same time. With this algorithm, users generate multiple chaos sequences from a key composed of a pure decimal and an integer; using these sequences, they complete both cloud storage data encryption and PDP verification code embedding in a single run of the algorithm. Because the positions and values of the embedded PDP verification codes are hidden by the chaos sequences, the algorithm supports an unlimited number of PDP verification requests. Experiments show that the algorithm provides strong data security protection and outstanding encryption and decryption efficiency.
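    A minimal Python sketch of keystream encryption driven by a chaos sequence generated from a decimal-plus-integer key is shown below; the logistic map and the byte derivation are illustrative assumptions, and the PDP verification code embedding step is omitted.

        def logistic_sequence(x0, skip, n, r=3.99):
            """Chaos keystream from the logistic map: x0 is a pure decimal in (0,1),
            skip is an integer number of warm-up iterations."""
            x, out = x0, []
            for i in range(skip + n):
                x = r * x * (1.0 - x)
                if i >= skip:
                    out.append(int(x * 256) % 256)
            return bytes(out)

        def encrypt(data, key_decimal, key_int):
            stream = logistic_sequence(key_decimal, key_int, len(data))
            return bytes(b ^ s for b, s in zip(data, stream))   # XOR is its own inverse

        plaintext = b"cloud storage block"
        ciphertext = encrypt(plaintext, 0.3141592653, 1000)
        assert encrypt(ciphertext, 0.3141592653, 1000) == plaintext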
    Likelihood-gating Sequential Monte Carlo Probability Hypothesis Density Filter
    GAO Yi-yue, JIANG De-fu, LIU Ming, FU Wei
    2017, 0(11):  105-110+115.  doi:10.3969/j.issn.1006-2475.2017.11.020
    To resolve the low computational efficiency of the sequential Monte Carlo (SMC) implementation of the probability hypothesis density (PHD) filter, which requires a large number of particles, we propose an improved SMC-PHD filter called the likelihood-gating SMC-PHD filter. Firstly, based on all the predicted particles, the maximum number of actually surviving observations can be selected, so that all multi-target posterior information is utilized. Secondly, because gating is based on the likelihood values of the predicted particles in the update step, the proposed filter avoids labeling particles and calculating distances and can therefore be easily implemented in various applications; only the effective observations are used to update the particle weights. Experiments show that this filter has excellent real-time performance and better filtering accuracy compared with the basic SMC-PHD filter.
    Application of WebSocket in Real-time Monitoring of Industrial Equipment Data
    ZHANG Wen, MU Yan, GAO Zhen-xing, LIU Zhi-feng
    2017, 0(11):  111-115.  doi:10.3969/j.issn.1006-2475.2017.11.021
    In order to ensure the safe operation of industrial production equipment, it is necessary to monitor changes in equipment parameters in real time. Because the traditional communication mode offers low performance and cannot achieve real-time communication, we propose using WebSocket for real-time communication in the real-time monitoring of industrial equipment data: the data collected by the server are pushed to the client in real time through the WebSocket. Practical application shows that this method achieves high real-time performance and meets the requirements of the system.
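    A minimal Python sketch of a server that pushes (simulated) equipment readings to a client over WebSocket, using the `websockets` library; the data fields and the one-second update rate are illustrative assumptions.

        import asyncio
        import json
        import random
        import websockets

        async def push_equipment_data(websocket):
            # note: older versions of the `websockets` library also pass a `path` argument
            while True:
                reading = {"temperature": round(random.uniform(20, 90), 1),
                           "pressure": round(random.uniform(0.8, 1.2), 3)}
                await websocket.send(json.dumps(reading))   # push to the client in real time
                await asyncio.sleep(1)

        async def main():
            # serve on ws://localhost:8765; a browser client connects with `new WebSocket(...)`
            async with websockets.serve(push_equipment_data, "localhost", 8765):
                await asyncio.Future()                      # run forever

        asyncio.run(main())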
    A Case Study to Predict Career Development by Comprehensive Quality Evaluation Data of College Students
    LIU Zhi-yong, YUAN Jia-xi
    2017, 0(11):  116-121.  doi:10.3969/j.issn.1006-2475.2017.11.022
    In research on the employment guidance of college students, using the large amount of data accumulated in teaching management together with data mining methods to provide an objective basis for guidance work has become a widely accepted approach. In this paper we study a case that addresses problems such as single data dimensions, simplistic data analysis methods and insufficient use of the data. Based on the comprehensive quality evaluation data of a normal university's students over the past five years, we predict students' career development direction using a grey prediction model, classification rules, association rules, etc., providing an objective basis for the work of college student management staff. The results show that the prediction accuracy reaches 81.82%.
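    A minimal Python sketch of the GM(1,1) grey prediction model mentioned above; the score series in the usage example is invented for illustration.

        import numpy as np

        def gm11(x0, steps=1):
            """GM(1,1) grey prediction: fit the whitened equation dx1/dt + a*x1 = b
            on the cumulative series x1, then forecast `steps` future values of x0."""
            x0 = np.asarray(x0, dtype=float)
            x1 = np.cumsum(x0)
            z1 = 0.5 * (x1[1:] + x1[:-1])                  # mean generating sequence
            B = np.column_stack((-z1, np.ones(len(z1))))
            y = x0[1:]
            a, b = np.linalg.lstsq(B, y, rcond=None)[0]    # least-squares estimate of a, b
            def x1_hat(k):                                 # k = 0, 1, 2, ...
                return (x0[0] - b / a) * np.exp(-a * k) + b / a
            n = len(x0)
            fitted = [x0[0]] + [x1_hat(k) - x1_hat(k - 1) for k in range(1, n + steps)]
            return np.array(fitted[n:])                    # the forecast values

        # toy usage: forecast the next value of a short score series
        print(gm11([82.0, 83.5, 85.1, 86.8, 88.4], steps=1))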
    Model of Stock Price Prediction Based on Learning Artificial Immune System
    JIANG Ji-ping, JI Fang
    2017, 0(11):  122-126.  doi:10.3969/j.issn.1006-2475.2017.11.023
    In the traditional artificial immune algorithm, the clone step and the mutation step do not differentiate among antibodies, and the BP neural network is prone to fall into local minima. This paper presents a hybrid model combining a learning artificial immune algorithm with the BP algorithm for stock price forecasting and investment strategy analysis. The model overcomes the lack of differentiation in antibody cloning and mutation in the artificial immune algorithm and adds an antibody learning function, accelerating the convergence speed and accuracy of antibody optimization. The simulation results show that the stock price prediction model with the learning artificial immune algorithm is superior to the BP stock price prediction model in prediction accuracy and investment strategy.