
Table of Contents

    22 November 2018, Volume 0 Issue 11
    Research on Application of PKI Based on Nation Secret Algorithm in ICS
    WEI Shan-shan, HAN Qing-min, GUO Xiao-wang, ZHANG Wan, GONG Chun-yan
    2018, 0(11):  1.  doi:10.3969/j.issn.1006-2475.2018.11.001
    Domestic production of Industrial Control Systems (ICS) is imperative, and a more secure and reliable identity authentication method is urgently needed. A PLC-centric system is a typical ICS, and Public Key Infrastructure (PKI) can guarantee the authenticity of the identities of both communicating parties. This paper studies PKI based on the Chinese national secret (SM) algorithms in a PLC-centric ICS, and gives the certificate authentication model of the ICS and the deployment design of the PKI. Then, taking the open-source framework OpenSSL as an example and using its engine technology, the paper analyzes how the national secret algorithms are combined with PKI, and presents the key data structures and algorithm design for extending SM2 and SM3 to OpenSSL. Finally, the paper designs, develops, and implements a PKI management system for ICS. This work provides a solid basis for applying PKI to ICS and offers a new approach to securing identity authentication in ICS.
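    As a minimal illustration of the certificate issuance step in such a PKI, the sketch below self-signs an X.509 certificate for a PLC node with Python's cryptography package. Since SM2/SM3 engine support varies by OpenSSL build, ECDSA P-256 and SHA-256 stand in for the national secret algorithms here, and the host name plc-01.ics.local is a hypothetical example.

```python
# Minimal PKI-style certificate issuance sketch (SM2/SM3 swapped for
# ECDSA P-256 / SHA-256 for portability; names are hypothetical).
import datetime
from cryptography import x509
from cryptography.x509.oid import NameOID
from cryptography.hazmat.primitives import hashes
from cryptography.hazmat.primitives.asymmetric import ec

key = ec.generate_private_key(ec.SECP256R1())           # device key pair
name = x509.Name([x509.NameAttribute(NameOID.COMMON_NAME, "plc-01.ics.local")])
cert = (
    x509.CertificateBuilder()
    .subject_name(name)
    .issuer_name(name)                                  # self-signed for brevity
    .public_key(key.public_key())
    .serial_number(x509.random_serial_number())
    .not_valid_before(datetime.datetime.utcnow())
    .not_valid_after(datetime.datetime.utcnow() + datetime.timedelta(days=365))
    .sign(key, hashes.SHA256())                         # a real SM CA would sign with SM2/SM3
)
print(cert.subject)
```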
    Research on Fault Recovery Mechanism of Power Information System Based on SDN
    YUAN Jie, ZHANG Min-lei
    2018, 0(11):  7.  doi:10.3969/j.issn.1006-2475.2018.11.002
    In order to improve the flexibility and reliability of the power information system and reduce the difficulty of troubleshooting caused by improper operation of the information system, this paper analyzes the advantages of SDN technology in the power information system, based on the current development of power information systems and on the architecture and technical features of SDN. In addition, active and passive fault recovery schemes for an SDN-based power information system are designed and implemented. The experimental results show that the active and passive fault recovery schemes can restore faults in the power information system automatically, conveniently, and cheaply; they meet the dynamic recovery requirements of the power information system and provide a theoretical basis for the wide application of SDN in power information systems.
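    The passive (reactive) half of such a recovery scheme can be pictured as: detect a failed link, then recompute and reinstall a path from the controller's global topology view. A toy sketch with networkx, assuming a shortest-path routing policy; the switch names are invented.

```python
# Toy reactive fault recovery: on a link-down event, recompute the path
# from the controller's global topology view (networkx assumed).
import networkx as nx

g = nx.Graph()
g.add_edges_from([("s1", "s2"), ("s2", "s4"), ("s1", "s3"), ("s3", "s4")])

def recover(graph, failed_link, src, dst):
    graph.remove_edge(*failed_link)           # link-down reported by data plane
    return nx.shortest_path(graph, src, dst)  # new flow rules follow this path

print(recover(g, ("s2", "s4"), "s1", "s4"))   # -> ['s1', 's3', 's4']
```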
    Research on Replica Data Adaptive Backup Strategy Based on Swift
    DU Hua1,2, GUO Jun1,2, LIU Hua-chun1,2
    2018, 0(11):  12.  doi:10.3969/j.issn.1006-2475.2018.11.003
    Redundant data backup is one of the important mechanisms for ensuring data reliability in cloud data centers. OpenStack is an open-source cloud computing platform that provides IaaS-layer private cloud services and has been widely adopted in industry. Its Swift module uses a consistent hashing algorithm, choosing replica backup nodes through a Ring to achieve load balancing and data backup. By analyzing the implementation mechanism and code of Swift, this paper points out the shortcomings of its replica placement node selection and proposes an optimized selection strategy, ABS (Adaptive Backup Strategy). Based on real-time monitoring of the current storage node load, the mechanism adaptively selects the nearest available nodes to complete the backup according to predetermined thresholds and lower limits, thereby optimizing overall backup efficiency. Comparison with the existing replica backup strategy and experimental verification show that ABS improves the system's read and write performance by 3.4%~9.1% while maintaining balanced replica allocation, achieving the goal of optimized access.
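    The idea behind ABS can be sketched as a consistent-hash ring whose candidate replica nodes are skipped when their monitored load exceeds a threshold. The 0.8 threshold and the load table below are illustrative assumptions, not values from the paper.

```python
# Illustrative adaptive replica placement on a consistent-hash ring:
# walk clockwise from the object's position, skipping overloaded nodes.
import bisect, hashlib

def h(key):
    return int(hashlib.md5(key.encode()).hexdigest(), 16)

nodes = ["node-a", "node-b", "node-c", "node-d"]
ring = sorted((h(n), n) for n in nodes)
load = {"node-a": 0.95, "node-b": 0.40, "node-c": 0.60, "node-d": 0.30}

def pick_replicas(obj, k=2, threshold=0.8):
    start = bisect.bisect(ring, (h(obj), "")) % len(ring)
    picked = []
    for i in range(len(ring)):
        node = ring[(start + i) % len(ring)][1]
        if load[node] <= threshold:           # adaptive part: respect node load
            picked.append(node)
        if len(picked) == k:
            break
    return picked

print(pick_replicas("object-42"))
```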
    A Defense Policy Learning Algorithm for Power Information Networks Based on Optimal Initial Value Q-learning
    JING Dong-sheng1, YANG Yu1, XUE Jing-song1, ZHU Fei2, WU Wen2
    2018, 0(11):  18.  doi:10.3969/j.issn.1006-2475.2018.11.004
    Maintaining the security and stability of the power information network is an important guarantee for social development. With the development of power information networks, researchers now focus on how to establish an efficient and stable power information protection network. The defense strategies used in automated power information network systems have suffered from problems such as slow update speed, long update cycles, inability to update automatically, and uneven resource allocation. This paper proposes a power information network defense algorithm based on Q-learning with optimistic initial values. The method uses this classical reinforcement learning algorithm to obtain a defense strategy through simulated confrontation: the defensive agent applies Q-learning to exploit historical experience, and the optimistic initial values greatly accelerate the training of the system's defensive performance. Experiments verify the effectiveness of the algorithm.
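    The core trick, optimistic initial Q-values driving early exploration under a greedy policy, fits in a few lines of numpy. The chain environment below is a stand-in for the paper's simulated attack/defense confrontation, not its actual model.

```python
# Q-learning with optimistic initial values on a toy chain environment.
import numpy as np

n_states, n_actions = 5, 2
Q = np.full((n_states, n_actions), 10.0)   # optimistic init: assume all actions great
alpha, gamma = 0.1, 0.9

def step(s, a):                            # toy dynamics: action 1 moves right
    s2 = min(s + 1, n_states - 1) if a == 1 else max(s - 1, 0)
    return s2, 1.0 if s2 == n_states - 1 else 0.0

s = 0
for _ in range(2000):
    a = int(np.argmax(Q[s]))               # greedy suffices: optimism explores
    s2, r = step(s, a)
    Q[s, a] += alpha * (r + gamma * Q[s2].max() - Q[s, a])
    s = 0 if s2 == n_states - 1 else s2

print(np.argmax(Q, axis=1))                # greedy action per state after training
```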
    Fault Detection of Multi-phase Batch Processes Based on PARAFAC2 Phase Partition
    CAO Xue, WANG Jian-lin, HAN Rui, QIU Ke-peng, LIU Wei-min
    2018, 0(11):  23.  doi:10.3969/j.issn.1006-2475.2018.11.005
    The multi-phase characteristics of batch processes directly affect the accuracy of multivariate statistical process modeling. A fault detection method for multi-phase batch processes based on parallel factor analysis 2 (PARAFAC2) phase partition is presented to address these characteristics. Firstly, a group of time-slice matrix models is built with PARAFAC2 to obtain the control limits of the time-slice matrices. Secondly, each time-slice matrix is added chronologically, from the initial moment, into time-block matrices, and PARAFAC2 models are established for the time-block matrices to obtain their control limits. Thirdly, the phase partition points are found by evaluating the difference between the control limits of the time-slice and time-block matrices, and the optimal phase partition is chosen according to the phase partition combination index (PPCI). Finally, an MPCA model is built for each phase and fault detection of the batch process is realized. The proposed method preserves the three-way structure and data integrity of the batch process and fully considers the chronological order of actual operation, thus improving the accuracy of phase partition. Simulation experiments on a penicillin fermentation process verify the effectiveness of the proposed method.
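    The phase-partition cue, a shift in the control limit when newly appended time slices no longer fit the current time block, can be illustrated with ordinary PCA and an SPE-style statistic standing in for PARAFAC2 (which needs a dedicated tensor library); the data and the phase change at t = 10 below are synthetic.

```python
# Illustrative phase-partition cue: watch an SPE-style control limit shift
# as time slices are appended to a growing time block (PCA stands in for
# PARAFAC2 here; the batch data are synthetic).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
batches, times, vars_ = 30, 20, 4
X = rng.normal(size=(batches, times, vars_))
X[:, 10:, :] *= 3.0                            # synthetic phase change at t = 10

def control_limit(block):                      # 99th-percentile SPE over the block
    flat = block.reshape(-1, vars_)
    pca = PCA(n_components=2).fit(flat)
    resid = flat - pca.inverse_transform(pca.transform(flat))
    return np.percentile((resid ** 2).sum(axis=1), 99)

for t in range(2, times + 1):
    print(t, round(control_limit(X[:, :t, :]), 2))   # limit jumps after t = 10
```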
    Verification and Research on Longitudinal Short-period Mode of Flight Simulator
    ZHAO Shan-lu, LI Guo-hui
    2018, 0(11):  30.  doi:10.3969/j.issn.1006-2475.2018.11.006
    For a certain type of flight simulator, this paper extracts the time-history curves of the six parameters related to the simulator's longitudinal short-period mode. In the time domain, the consistency of the flight simulation data and the real flight data is checked with the average normalized distance test and grey relational analysis. In the frequency domain, a quantitative assessment of the flight simulation data against the recorded flight parameters is completed with classical spectrum estimation. Evidence theory is then improved to fuse the consistency check results of the time and frequency domains. Finally, the damping ratio and natural frequency of the short-period mode are verified; the results show that the longitudinal short-period model of the flight simulator has good fidelity.
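    Of the time-domain checks, grey relational analysis is easy to reproduce: the relational coefficient is xi_i(k) = (Dmin + rho*Dmax) / (Delta_i(k) + rho*Dmax) with rho typically 0.5, and the grade is its mean over k. A numpy sketch on synthetic short-period-like curves:

```python
# Grey relational grade between a reference curve (flight data) and a
# comparison curve (simulator output); the series here are synthetic.
import numpy as np

def grey_relational_grade(ref, cmp_, rho=0.5):
    ref = ref / ref.mean()                 # dimensionless preprocessing
    cmp_ = cmp_ / cmp_.mean()
    delta = np.abs(ref - cmp_)
    xi = (delta.min() + rho * delta.max()) / (delta + rho * delta.max())
    return xi.mean()                       # closer to 1 => better consistency

t = np.linspace(0, 5, 200)
flight = np.exp(-0.7 * t) * np.cos(4 * t) + 2     # damped short-period-like response
sim = np.exp(-0.65 * t) * np.cos(4.1 * t) + 2
print(round(grey_relational_grade(flight, sim), 3))
```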
    Self-paced Context-aware Correlation Filter Tracking Algorithm
    ZHANG Chi, HAN Li-xin, XU Guo-xia
    2018, 0(11):  35.  doi:10.3969/j.issn.1006-2475.2018.11.007
    Aiming at the problems of target scaling, occlusion, and fast motion in visual tracking, this paper proposes a self-paced context-aware correlation filter tracking algorithm. First, the global context information of the target is introduced into the regularized least-squares classifier so that the filter learns this context, producing a high response to the target and a near-zero response to the context. Then, self-paced learning is introduced to assign weights to the target and context information of each frame, pick out reliable samples, and update the filter template. Finally, a robust and efficient appearance model is obtained by learning. Experiments show that the algorithm improves distance precision (DP) by 2.81% and success rate (SR) by 13.9%, achieving a good tracking effect.
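    The context-aware part has a closed-form ridge solution: stacking the target samples X0 and k context sample matrices Xi, the filter is w = (X0^T X0 + lambda1*I + lambda2 * sum_i Xi^T Xi)^(-1) X0^T y, which presses context responses toward zero. A small numpy sketch with random features standing in for image patches:

```python
# Closed-form context-aware filter: high response on the target sample,
# near-zero response on context samples (random data stand in for features).
import numpy as np

rng = np.random.default_rng(1)
d, n, k = 64, 32, 4
X0 = rng.normal(size=(n, d))               # target samples (simplified)
y = np.exp(-np.linspace(0, 3, n))          # desired peaked response
ctx = [rng.normal(size=(n, d)) for _ in range(k)]
lam1, lam2 = 1e-2, 0.5

A = X0.T @ X0 + lam1 * np.eye(d)
for Xi in ctx:
    A += lam2 * (Xi.T @ Xi)                # press context responses toward zero
w = np.linalg.solve(A, X0.T @ y)

print(round(np.abs(X0 @ w).mean(), 3), round(np.abs(ctx[0] @ w).mean(), 3))
```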
    2D Engineering CAD Drawings Vectorization Method Based on Recognition of Object Legends and Their Topology
    ZHANG Qi, YE Ying
    2018, 0(11):  40.  doi:10.3969/j.issn.1006-2475.2018.11.008
    To realize the vectorization of 2D engineering CAD drawings, a vectorization method based on the recognition of object legends and their topology is proposed. The method first classifies multi-class object legends with HOG features and an SVM, exploiting the geometric properties of the legends; then the annular segmentation features of objects are extracted to recognize subclass object legends; finally, the topology of the object legends is recognized by a method based on connected-component labeling. Experimental results demonstrate that the proposed algorithm can efficiently recognize the objects and their topological relations in drawings and is robust to fractured and fuzzy lines. The extracted geometric and topological information lays the foundation for subsequent vectorization of the drawings, and the exploratory research on related technologies is also instructive for follow-up studies of engineering drawing vectorization.
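    The first stage, HOG features fed to an SVM, can be sketched directly with skimage and scikit-learn; the two synthetic 32x32 legend patterns below are invented stand-ins for real legend images.

```python
# Legend classification sketch: HOG descriptors + linear SVM.
# Synthetic 32x32 patches stand in for real legend images.
import numpy as np
from skimage.feature import hog
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def patch(kind):
    img = np.zeros((32, 32))
    if kind == 0:
        img[8:24, 8:24] = 1.0              # square legend pattern
    else:
        np.fill_diagonal(img, 1.0)         # diagonal legend pattern
    return img + 0.05 * rng.normal(size=img.shape)

X = [hog(patch(i % 2), pixels_per_cell=(8, 8), cells_per_block=(2, 2))
     for i in range(100)]
y = [i % 2 for i in range(100)]
clf = LinearSVC().fit(X[:80], y[:80])
print("accuracy:", clf.score(X[80:], y[80:]))
```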
    A Face Recognition Algorithm Based on Gabor Wavelet and Deep Learning
    PAN Zheng-rong, WANG Zhen
    2018, 0(11):  46.  doi:10.3969/j.issn.1006-2475.2018.11.009
    In order to reduce the negative effects of factors such as illumination and posture, and to solve the problem that shallow learning methods cannot extract the abstract features of face images, a face recognition algorithm based on Gabor wavelets and deep learning is proposed. Firstly, facial Gabor features of different scales and orientations are obtained by the Gabor wavelet transform, and their dimensionality is effectively reduced by downsampling and a Restricted Boltzmann Machine (RBM). Secondly, the dimension-reduced features are taken as the input of a Deep Belief Network (DBN), which is trained by the contrastive divergence algorithm. Finally, the DBN is fine-tuned with labeled data, and a Softmax classifier at the top layer classifies the extracted features. The recognition rate reaches 98.72%, 96.51%, and 96.13% on the ORL, UMIST, and Yale-B face databases, respectively. The experimental results indicate that the proposed method is markedly better than existing algorithms in recognition performance and achieves good robustness to changes in illumination and posture.
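    The Gabor front end can be sketched with OpenCV's built-in kernel generator: filter the face image at several scales and orientations, then downsample before the RBM/DBN stage. The image below is random and the filter-bank parameters are illustrative.

```python
# Multi-scale, multi-orientation Gabor feature stack for a face image
# (random image as a stand-in; OpenCV assumed available).
import cv2
import numpy as np

img = np.random.rand(64, 64).astype(np.float32)    # stand-in for a face crop
features = []
for scale in (5, 9, 13):                           # 3 scales
    for k in range(8):                             # 8 orientations
        theta = k * np.pi / 8
        kern = cv2.getGaborKernel((scale, scale), sigma=2.0, theta=theta,
                                  lambd=8.0, gamma=0.5, psi=0)
        resp = cv2.filter2D(img, -1, kern)
        features.append(resp[::4, ::4].ravel())    # downsample before RBM/DBN
x = np.concatenate(features)
print(x.shape)                                     # (3*8*16*16,) = (6144,)
```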
    Deadline Constrained MapReduce Jobs Scheduling for Cloud Computing
    ZHOU Bo1, LI Ya-qiong1, LIU Yong-bo1, LI Shou-chao1, SONG Yun-kui2
    2018, 0(11):  51.  doi:10.3969/j.issn.1006-2475.2018.11.010
    This paper proposes a deadline-constrained MapReduce job scheduling approach for cloud computing. A weighted bipartite graph is used to model the scheduling problem: map tasks and reduce tasks are organized as two disjoint sets, and the weight of an edge connecting the two sets represents the execution time of a task on a server. An integer linear programming method is then used to solve the minimum-weight matching of the bipartite graph. The proposed approach takes into account the heterogeneous servers in cloud computing, where different servers have different task execution times, and it predicts and adjusts the deadlines of different tasks online, thereby significantly improving the performance of processing MapReduce jobs. The experimental results demonstrate that the proposed approach reduces both the data access time and the number of jobs that violate their deadlines.
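    The minimum-weight matching at the heart of the model can be demonstrated with scipy's Hungarian-method solver standing in for a full ILP; the execution-time matrix below is invented.

```python
# Minimum-weight assignment of tasks to server slots (Hungarian method
# standing in for the paper's ILP; the time matrix is illustrative).
import numpy as np
from scipy.optimize import linear_sum_assignment

# exec_time[i, j]: predicted time of task i on server slot j (heterogeneous)
exec_time = np.array([[4.0, 2.0, 8.0],
                      [3.0, 7.0, 5.0],
                      [6.0, 4.0, 2.0]])
rows, cols = linear_sum_assignment(exec_time)
for task, slot in zip(rows, cols):
    print(f"task {task} -> slot {slot} ({exec_time[task, slot]} s)")
print("total time weight:", exec_time[rows, cols].sum())
```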
    Design and Application of Joint Environmental Perception Model Based on Blockchain Technology
    ZHANG Jun-yuan, LIU Jing-wei
    2018, 0(11):  56.  doi:10.3969/j.issn.1006-2475.2018.11.011
    In order to explore new ideas and methods for joint environment perception and to improve the ability of multiple sensors to perceive targets in an integrated, comprehensive, and accurate way, a joint environment perception model based on blockchain technology is proposed and the areas involved are studied. An application scenario of the model is given. Through the design of the perception model, information sharing is achieved and targets are accurately perceived, while the perceived information is guaranteed to be trustworthy, tamper-proof, and traceable. In the future, applications built on the model can also combine sensing with control, ultimately achieving automatic control and autonomous collaboration of sensor nodes.
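    Why a blockchain makes the shared perception record trustworthy, tamper-evident, and traceable can be shown with a toy hash chain; the sensor names and readings below are invented.

```python
# Minimal hash-chained log of sensor readings: any edit to an earlier
# block breaks every later prev-hash link (toy, in-memory only).
import hashlib, json, time

def make_block(prev_hash, readings):
    body = {"prev": prev_hash, "ts": time.time(), "readings": readings}
    digest = hashlib.sha256(json.dumps(body, sort_keys=True).encode()).hexdigest()
    return {**body, "hash": digest}

chain = [make_block("0" * 64, {"radar-1": 0.83})]
chain.append(make_block(chain[-1]["hash"], {"radar-1": 0.85, "ir-2": 0.79}))

chain[0]["readings"]["radar-1"] = 0.10     # tamper with the first block
recomputed = hashlib.sha256(json.dumps(
    {k: chain[0][k] for k in ("prev", "ts", "readings")},
    sort_keys=True).encode()).hexdigest()
print(chain[1]["prev"] == recomputed)       # False: tampering is detectable
```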
    A CUDA-based Cascade Pumping Station Scheduling Algorithm
    XIANG Wu-ming, LI Xue-wei
    2018, 0(11):  60.  doi:10.3969/j.issn.1006-2475.2018.11.012
    The dynamic programming method for the cascade pumping station scheduling problem is classic, but it suffers from the "curse of dimensionality" in computation; GPU parallel computing can accelerate its repetitive calculations and improve the computational performance of the algorithm. This paper analyzes the dynamic programming formulation of the cascade pumping station scheduling problem, uses CUDA (Compute Unified Device Architecture) to improve the scheduling algorithm, presents the improved dynamic programming algorithm, and compares the running time of the scheduling computation at different problem scales. The experimental results show that the CUDA-based improved dynamic programming algorithm alleviates the dimensionality problem, and the acceleration effect is better when the computing scale is large.
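    The structure being parallelized is the per-stage state sweep of the DP recursion: every discretized state can be evaluated independently, which is what maps one state to one GPU thread. The numpy sketch below vectorizes that sweep with a made-up cost function; real CUDA code would replace the broadcast with a kernel launch.

```python
# One DP backward sweep for a cascade pumping problem, vectorized over
# all (state, decision) pairs -- the same sweep a CUDA kernel parallelizes.
# The quadratic cost function and the grids are illustrative stand-ins.
import numpy as np

levels = np.linspace(0.0, 10.0, 201)         # discretized water levels (states)
flows = np.linspace(0.0, 5.0, 101)           # pumping decisions

def stage_cost(level, flow):                  # made-up energy cost
    return (flow - 2.0) ** 2 + 0.1 * level

value = np.zeros_like(levels)                 # terminal value
for _t in range(24):                          # 24 stages, backward induction
    next_level = np.clip(levels[:, None] - 0.5 * flows[None, :], 0.0, 10.0)
    total = stage_cost(levels[:, None], flows[None, :]) + \
            np.interp(next_level, levels, value)
    value = total.min(axis=1)                 # best decision per state

print(round(value[100], 3))                   # optimal cost-to-go from mid level
```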
    Research on Military Logistics Distribution Routing Optimization Problem Based on Spark and PSO Algorithm
    ZHANG Li-juan1, QIU Jian-wei1, DU Deng-chong2, WANG Xin1
    2018, 0(11):  65.  doi:10.3969/j.issn.1006-2475.2018.11.013
    Research on military logistics distribution routing optimization studies how to guarantee the shortest vehicle routes under the premise of ensuring supply to the troops. When the Particle Swarm Optimization (PSO) algorithm is used to solve this problem, the program running time increases significantly as the number of troop units grows. Considering the iterative nature of the algorithm, a solution that runs the PSO algorithm in parallel on a Spark cluster is proposed. Experimental results show that running PSO in parallel on a Spark cluster can greatly reduce the running time and improve the efficiency of solving the military logistics distribution routing optimization problem.
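    The parallel part is fitness evaluation: each particle (a candidate route) is scored independently, so the swarm can be distributed with RDD map operations. A minimal PySpark-flavored sketch; route_length below is a toy stand-in for the real route-cost fitness.

```python
# Distributing PSO fitness evaluation over a Spark cluster (sketch).
# `route_length` is a toy stand-in for the real route-cost fitness.
import random
from pyspark import SparkContext

sc = SparkContext.getOrCreate()

def route_length(route):                                      # toy metric
    return sum(abs(a - b) for a, b in zip(route, route[1:]))

swarm = [random.sample(range(10), 10) for _ in range(1000)]   # candidate routes
best_len, best_route = (sc.parallelize(swarm, 8)              # ship to executors
                          .map(lambda r: (route_length(r), r))  # parallel fitness
                          .min())                             # global best
print(best_len, best_route)
# Velocity/position updates then run on the driver before the next iteration.
```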
    Research on Object Detection and Content Recommendation System in Short Video Based on Deep Learning
    SHI Yin-qiao1, LIU Shou-yin1, MA Chao2
    2018, 0(11):  69.  doi:10.3969/j.issn.1006-2475.2018.11.014
    Short video has developed rapidly in recent years, and short video advertising has a promising prospect. However, traditional advertisements are usually inserted stiffly into videos, which is inefficient and degrades the user experience. This paper proposes a systematic scheme for video object detection and content recommendation based on the deep learning model Faster R-CNN. The scheme matches video contents to displayed advertisements according to the principles of high correlation, precision, and low interruption, thus striking a balance between recommendation and user experience. Two system modes are available according to the video sources and network environments: Cloud Mode and Mobile Terminal Mode. Cloud Mode is composed of a server, a Content Delivery Network (CDN), and clients; the server detects and recognizes the contents of CDN videos in advance, matches them to corresponding advertisements with recommendation algorithms, and plays the contents on the mobile clients. Mobile Terminal Mode mainly processes non-CDN resources such as local videos and completes object detection, recognition, and content recommendation with limited computing power. We apply the MobileNet model to improve detection speed and accuracy and to reduce the memory footprint. To further increase efficiency and achieve real-time performance in Mobile Terminal Mode, we jointly compile Java and C++ code, adopt a self-developed player, and cut down the object categories based on the feedback system.
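    The Cloud Mode pipeline, detecting objects in sampled frames and looking up a matching advertisement, reduces to a loop like the hedged sketch below; detect_objects and the ad table are hypothetical placeholders, not the paper's Faster R-CNN or MobileNet models.

```python
# Skeleton of the per-frame detect-then-recommend loop (Cloud Mode).
# `detect_objects` is a hypothetical placeholder for a real detector.
import cv2

AD_TABLE = {"sneaker": "sports-ad-17", "guitar": "music-ad-03"}  # invented

def detect_objects(frame):
    return ["sneaker"]                      # stub; a real model infers labels

cap = cv2.VideoCapture("clip.mp4")
frame_idx, ads = 0, []
while True:
    ok, frame = cap.read()
    if not ok:
        break
    if frame_idx % 30 == 0:                 # sample ~1 frame per second
        for label in detect_objects(frame):
            if label in AD_TABLE:
                ads.append((frame_idx, AD_TABLE[label]))
    frame_idx += 1
cap.release()
print(ads)                                  # (frame, ad) pairs to splice in
```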
    Measuring Students Activities via Active Entropy Model with Smartcard Data
    REN Jin-hua, LIU Tao, YANG Lin-tao, LIU Shou-yin
    2018, 0(11):  77.  doi:10.3969/j.issn.1006-2475.2018.11.015
    With their rapid development and extensive application, information technologies bring new ideas and opportunities for education, and the combination of education management and data mining is drawing more and more attention in the education field. At present, there is little research on quantitative analysis of the orderliness of student activities. To grasp this opportunity, this paper introduces a campus active entropy based on Smartcard Consumption Records (SCR). A spatial-temporal weighted active entropy algorithm is proposed to quantify the orderliness of 13575 undergraduates from 7.9 million SCR collected over one year. Based on the active entropy, the students are clustered into groups and analyzed. Meanwhile, the Apriori algorithm is adopted to analyze the correlation between campus behavior and academic performance; the association rules found are consistent with existing psychology research. The evaluation results show the effectiveness of this approach in measuring students' orderliness, and active entropy is helpful for evaluating students and guiding smart campus management.
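    The backbone of the measure is Shannon entropy over a student's spatial-temporal consumption distribution: bin each record by (location, hour), and lower entropy means a more regular routine. A sketch with invented records:

```python
# Spatial-temporal active entropy of one student's smartcard records.
# Records are invented (location, hour) pairs; lower entropy = more orderly.
import math
from collections import Counter

records = [("canteen-2", 7), ("canteen-2", 7), ("library", 19),
           ("canteen-2", 12), ("canteen-2", 7), ("library", 19)]

counts = Counter(records)                   # bin by (location, hour)
total = sum(counts.values())
entropy = -sum((c / total) * math.log2(c / total) for c in counts.values())
print(round(entropy, 3))                    # ~1.459 bits for this toy student
```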
    A Movie Recommendation Model: Solving Cold Start Problem
    LIU Chun-xia, LU Jian-bo, WU Ling-mei
    2018, 0(11):  83.  doi:10.3969/j.issn.1006-2475.2018.11.016
    The rating data in recommender system databases are sparse, which limits the quality of movie recommendation. To solve this problem, a model that simultaneously incorporates user and movie metadata into an improved latent factor model is proposed. A user metadata-class matrix and a movie metadata-class matrix are constructed, and a mapping between the class domain and the latent factor space is learned to obtain the latent factors of new users and new movies, from which recommendations are made. The experimental results show that this model can effectively solve the cold start problem while improving prediction accuracy.
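    The mapping idea can be pictured simply: a new user's latent vector is assembled from the average latent vectors of existing users sharing each metadata class. A numpy sketch with invented factor matrices:

```python
# Cold-start latent factors from metadata: a new user's vector is the
# mean of class-level vectors learned from existing users (toy data).
import numpy as np

rng = np.random.default_rng(0)
n_users, k = 100, 8
U = rng.normal(size=(n_users, k))            # learned user latent factors
age_group = rng.integers(0, 4, n_users)      # one metadata class per user

# class -> latent mapping: average factor vector of users in the class
class_vec = np.stack([U[age_group == g].mean(axis=0) for g in range(4)])

new_user = class_vec[2]                      # new user only reports age group 2
movie_vec = rng.normal(size=k)               # some movie's latent factors
print(float(new_user @ movie_vec))           # predicted preference score
```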
    MRP Classification Algorithms Comparison and Semantic Paradigm Analysis
    LIU Bin, ZHANG Ji-cong
    2018, 0(11):  88.  doi:10.3969/j.issn.1006-2475.2018.11.017
    The mechanisms of movement-related potentials (MRPs) are complex and variable in form, which makes feature extraction and data mining of MRP-based brain electrophysiological signals very challenging. The purpose of this paper is to apply a variety of machine learning and semantic paradigm models to the data mining of brain electrophysiological signals to meet these challenges. We used a variety of machine learning algorithms and signal processing methods for analysis and experimental comparison, and presented the best models for different scenarios and goals. In order to seamlessly connect three large-span areas, fuzzy electrophysiological signals, deep learning techniques compatible with heterogeneous signals, and explicit semantic models, we implemented a semantic paradigm framework that takes brain electrophysiological signal data as its research object. We endowed complex signals with grammatical, syntactic, and semantic connotations, and constructed semantic interpretations for deep neural networks. Through this paradigm framework, we can identify specific semantic information blocks in brain electrophysiological signals and the semantic combinations between them, and automatically learn efficient filters to achieve high accuracy, high transmission rate, and high adaptability.
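    The model-comparison half of such a study follows a standard pattern: cross-validate several classifiers on the same feature matrix and keep the best per scenario. A generic scikit-learn sketch on synthetic stand-in features:

```python
# Comparing several classifiers on one (synthetic) MRP-like feature
# matrix with cross-validation, as a stand-in for the paper's survey.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=40, n_informative=10,
                           random_state=0)   # stand-in for extracted features
models = {"LDA": LinearDiscriminantAnalysis(),
          "SVM": SVC(kernel="rbf", C=1.0),
          "RF": RandomForestClassifier(n_estimators=100, random_state=0)}
for name, model in models.items():
    scores = cross_val_score(model, X, y, cv=5)
    print(f"{name}: {scores.mean():.3f} +/- {scores.std():.3f}")
```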
    GRU Neural Network’s Prediction of Stock Closing Price
    LI Lei, CHEN Ai-xiang, LI Wei-shu, LIANG Wei-qi, YANG Si-tong
    2018, 0(11):  103.  doi:10.3969/j.issn.1006-2475.2018.11.018
    The stock market is a changeable and complicated nonlinear dynamic system, and stock prices are data with the character of a time sequence. Given that, this paper selects the Gated Recurrent Unit (GRU) model, which has a temporal memory function, to deal with the problem of predicting time-series data. The paper uses the daily closing prices of 18 securities-industry stocks in Shanghai up to December 29, 2017, with 1000 days of data per stock, and conducts empirical research on predicting the closing prices for the next 10 days. The empirical results show that the testing and validation errors of the GRU recurrent neural network are smaller than those of the other two models of the same type, while the prediction accuracy of the closing price over the next 10 days reaches 98.3%, demonstrating the strong learning and generalization ability of the GRU. In addition, comparing the test error, the variance of the predicted closing prices, and the validation error of the GRU on sequence lengths of 240, 120, and 60 days shows that the prediction accuracy is high in all cases, while the result for the 240-day sequence length has an obviously lower variance and therefore better stability.
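    A minimal version of such a model, sliding windows of past closing prices into a single GRU layer, looks like this in Keras; the prices are synthetic and the layer sizes are illustrative, not the paper's.

```python
# Minimal GRU closing-price predictor on synthetic data (window in,
# next-day price out); hyperparameters are illustrative only.
import numpy as np
import tensorflow as tf

prices = np.cumsum(np.random.randn(1000)) + 100.0   # synthetic closing prices
win = 60
X = np.stack([prices[i:i + win] for i in range(len(prices) - win)])[..., None]
y = prices[win:]

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(win, 1)),
    tf.keras.layers.GRU(32),                         # temporal memory
    tf.keras.layers.Dense(1),
])
model.compile(optimizer="adam", loss="mse")
model.fit(X, y, epochs=2, batch_size=32, verbose=0)
print(float(model.predict(X[-1:], verbose=0)[0, 0])) # next-day estimate
```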
    A Deep Learning Recommendation System of Movie Based on Dual-attention Model
    XIAO Qing-xiu1,2, TANG Kun3
    2018, 0(11):  109.  doi:10.3969/j.issn.1006-2475.2018.11.019
    Traditional collaborative filtering only uses the user-item rating matrix to make recommendations. Because the rating matrix is too sparse and this approach does not take full advantage of the many other features of users and items, recommendation accuracy drops severely. In recent years, deep learning has made remarkable achievements in many fields of machine learning. To improve on traditional collaborative filtering, this paper proposes a deep learning movie recommendation system based on a dual-attention model. The system uses a deep learning framework to process multiple input features; it introduces a dual-attention mechanism, using the first attention layer to learn the user's preference for movie characteristics and the second attention layer to learn the user's preference for the complete movies in their watching list. The experimental results show that the recommendation performance is improved after learning the user's preferences.
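    The second attention layer, weighting the movies in a user's watch history by learned relevance before pooling them into a preference vector, reduces to a softmax-weighted average; the numpy sketch below uses random embeddings in place of learned ones.

```python
# Attention pooling over a watch history: score each watched movie
# against a query, softmax the scores, and average (random embeddings).
import numpy as np

rng = np.random.default_rng(0)
d, history_len = 16, 5
history = rng.normal(size=(history_len, d))   # embeddings of watched movies
query = rng.normal(size=d)                    # e.g. the candidate movie

scores = history @ query / np.sqrt(d)         # scaled dot-product relevance
weights = np.exp(scores) / np.exp(scores).sum()
user_pref = weights @ history                 # attention-weighted preference
print(weights.round(3), float(user_pref @ query))
```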
    Research on Chinese Word Segmentation Technology for Military Field
    LI Jian-long, WANG Pan-qing, HAN Qi-yu
    2018, 0(11):  115.  doi:10.3969/j.issn.1006-2475.2018.11.020
    When a word segmentation model is used across domains, its performance degrades significantly. Because annotating a corpus of the army's legacy system development documents is complex, this paper proposes a domain adaptation method for Chinese word segmentation that combines n-grams and a domain dictionary. By extracting the n-gram features of the target corpus, the method adapts the word segmentation model to the target domain; the domain dictionary is then used to correct the segmentation results by reverse maximum matching. Experimental results show that on the corpus of documents related to the army's legacy systems, the word segmentation model trained by this method improves the F-measure by 12.4%.
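    The dictionary-correction step, reverse maximum matching, scans from the end of the sentence and greedily takes the longest dictionary word; a compact implementation with a toy military-domain dictionary:

```python
# Reverse maximum matching: scan from the sentence end, greedily taking
# the longest dictionary word (toy military-domain dictionary).
DICT = {"装备", "保障", "系统", "信息", "装备保障"}
MAX_LEN = max(len(w) for w in DICT)

def reverse_max_match(sent):
    words, end = [], len(sent)
    while end > 0:
        for size in range(min(MAX_LEN, end), 0, -1):
            piece = sent[end - size:end]
            if size == 1 or piece in DICT:   # single char is the last resort
                words.append(piece)
                end -= size
                break
    return list(reversed(words))

print(reverse_max_match("装备保障信息系统"))   # ['装备保障', '信息', '系统']
```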
    Distributed Big Data Machine Learning Algorithms Based on Spark
    WANG Rui1, HAN Rui2, JIA Yu-xiang1
    2018, 0(11):  119.  doi:10.3969/j.issn.1006-2475.2018.11.021
    For big data, machine learning is an indispensable analysis tool. For machine learning, more data may improve model accuracy, but complex machine learning algorithms urgently require key technologies such as distributed in-memory computing in terms of time and performance. Spark's distributed in-memory computing can parallelize these algorithms, which benefits machine learning on large data sets. Therefore, this paper implements nonlinear machine learning algorithms in a Spark distributed in-memory environment, including a multi-layer variable neural network, BPPGD SVM, and K-means, and optimizes them with respect to data compression, data bias sampling, and data loading. At the same time, a SparkML scheduling framework is implemented to dispatch the optimized algorithms. The experimental results show that the average error of the three optimized algorithms is reduced by 40% and the average time is reduced by 90%.
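    For orientation, the off-the-shelf counterpart of one of these algorithms, K-means on Spark, fits in a few lines of PySpark; this is the stock pyspark.ml implementation, not the paper's optimized variant.

```python
# Stock K-means on Spark (pyspark.ml), shown for orientation; the
# paper's optimized variants replace this off-the-shelf pipeline.
from pyspark.ml.clustering import KMeans
from pyspark.ml.linalg import Vectors
from pyspark.sql import SparkSession

spark = SparkSession.builder.appName("kmeans-sketch").getOrCreate()
data = [(Vectors.dense([0.0, 0.0]),), (Vectors.dense([1.0, 1.0]),),
        (Vectors.dense([9.0, 8.0]),), (Vectors.dense([8.0, 9.0]),)]
df = spark.createDataFrame(data, ["features"])

model = KMeans(k=2, seed=1).fit(df)          # distributed in-memory training
for center in model.clusterCenters():
    print(center)
spark.stop()
```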