Supervised by: Jiangxi Provincial Department of Science and Technology
Sponsored by: Jiangxi Computer Society; Jiangxi Computing Center
Edited and published by: Editorial Office of Computer and Modernization (《计算机与现代化》)
Table of Contents
03 January 2019, Volume 0 Issue 12
Decentralized Parallel Kalman Filter for Multi-sensor System with State Constraints
LI Guo-ping, XING Jian-chun, WANG Shi-qiang
2018, 0(12): 1. doi: 10.3969/j.issn.1006-2475.2018.12.001
Based on the Insect Intelligent Building platform, a decentralized Kalman filter algorithm with state constraints is presented. The algorithm builds equality constraints from the physical relationships between neighboring nodes and uses them to compute a state-constrained Kalman filter estimate, enabling fault diagnosis and data checking. The algorithm is organized around the sensor network nodes themselves: each node has its own processing unit, and there is no central node or central communication facility. The proposed algorithm is therefore fully distributed, allowing independent calculations across multiple measurement nodes. The paper derives the algorithm in detail and verifies its parallelism, accuracy and stability through software simulation and hardware testing.
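As context for the state-constrained estimate described above, the following is a minimal sketch (not the authors' distributed implementation) of a standard Kalman measurement update followed by projection of the estimate onto an assumed linear equality constraint Dx = d; all matrices are placeholders.

```python
import numpy as np

def kalman_update(x_pred, P_pred, z, H, R):
    """Standard (unconstrained) Kalman measurement update."""
    S = H @ P_pred @ H.T + R
    K = P_pred @ H.T @ np.linalg.inv(S)
    x = x_pred + K @ (z - H @ x_pred)
    P = (np.eye(len(x_pred)) - K @ H) @ P_pred
    return x, P

def project_onto_constraint(x, P, D, d):
    """Project the estimate onto the equality constraint D x = d,
    weighting by the estimate covariance (minimum-variance projection)."""
    S = D @ P @ D.T
    return x - P @ D.T @ np.linalg.solve(S, D @ x - d)
```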
LEACH Protocol Based on Modified Cuckoo Search Algorithm
YANG Xiao-qin
2018, 0(12): 7. doi: 10.3969/j.issn.1006-2475.2018.12.002
LEACH is a low-energy adaptive clustering hierarchy algorithm for wireless sensor networks (WSN), but it has several drawbacks: cluster heads are selected at random, without considering the remaining energy or position of nodes. To solve these problems, a LEACH protocol based on an improved cuckoo search algorithm is proposed. Cuckoo search (CS) is a novel intelligent optimization algorithm; to improve its local search ability, the standard algorithm is modified in two ways. First, a weight coefficient is introduced to adjust the convergence speed. Second, all individuals are sorted by fitness and the population is split into two parts, each using a different flight strategy, which keeps the better individuals from falling into local optima. The improved protocol divides cluster head selection into two stages: temporary cluster heads are generated by the traditional LEACH protocol, then optimized with MCS, and formal cluster heads are selected according to the remaining energy of nodes. Experimental results show that, compared with LEACH, LEACH-MCS balances the network load efficiently, improves energy utilization, and prolongs the network lifetime.
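The abstract does not give the exact weight coefficient or the two flight strategies, so the sketch below is only one plausible reading of the modified CS step: individuals are ranked by fitness, the better half exploits locally with small random moves, and the worse half explores with Lévy flights toward the current best, with a weight w scaling both step sizes.

```python
import numpy as np
from math import gamma, sin, pi

def levy_step(dim, beta=1.5):
    """Draw a Levy-flight step using Mantegna's algorithm."""
    sigma = (gamma(1 + beta) * sin(pi * beta / 2) /
             (gamma((1 + beta) / 2) * beta * 2 ** ((beta - 1) / 2))) ** (1 / beta)
    u = np.random.normal(0, sigma, dim)
    v = np.random.normal(0, 1, dim)
    return u / np.abs(v) ** (1 / beta)

def mcs_iteration(pop, fitness, best, w, alpha=0.01):
    """One hypothetical MCS-style iteration (minimization)."""
    order = np.argsort(fitness)            # ascending: smaller fitness = better
    half = len(pop) // 2
    new_pop = pop.copy()
    for rank, i in enumerate(order):
        if rank < half:                    # better half: small local exploitation
            step = alpha * w * np.random.randn(pop.shape[1])
        else:                              # worse half: Levy flight toward the best
            step = alpha * w * levy_step(pop.shape[1]) * (best - pop[i])
        new_pop[i] = pop[i] + step
    return new_pop
```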
Prediction of Cellular Traffic Based on Space-time Compression Sensing
WU Jia1, ZHAO Yun2, ZHANG Li-juan3, SONG Wen2
2018, 0(12): 11. doi: 10.3969/j.issn.1006-2475.2018.12.003
A cellular network traffic prediction algorithm based on threshold regularized orthogonal matching pursuit (BT-ROMP) is proposed to address the energy waste that arises when base station transmit power cannot be adjusted to the peak traffic in each cell. A block-sparse model of cellular traffic is constructed by exploiting its periodic and stable variation. The algorithm uses a threshold to screen the regularized suboptimal atoms and expands the candidate set, which reduces the number of iterations and improves reconstruction accuracy. Simulation results show that, compared with the regularized orthogonal matching pursuit algorithm (ROMP), the prediction accuracy of the proposed algorithm improves by 0.01 on average.
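For reference, a plain orthogonal matching pursuit recovery loop is sketched below; BT-ROMP, as described above, replaces the single-atom selection with threshold-screened, regularized groups of atoms, but the surrounding structure is the same. The measurement matrix and sparsity level here are assumptions.

```python
import numpy as np

def omp(Phi, y, k):
    """Plain orthogonal matching pursuit: recover a k-sparse x from y = Phi x.
    BT-ROMP would replace the single-atom pick below with threshold-screened,
    regularized groups of atoms to cut the number of iterations."""
    residual, support = y.copy(), []
    for _ in range(k):
        corr = np.abs(Phi.T @ residual)          # correlation with each atom
        corr[support] = 0                        # ignore atoms already chosen
        support.append(int(np.argmax(corr)))
        x_s, *_ = np.linalg.lstsq(Phi[:, support], y, rcond=None)
        residual = y - Phi[:, support] @ x_s
    x = np.zeros(Phi.shape[1])
    x[support] = x_s
    return x
```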
Dynamic Clustering Algorithm Based on Graph Theory for Coordinated Multi-point Transmission System
CHEN Jiao, KANG Gui-hua, XU Kai-yue, CAO Di
2018, 0(12): 16. doi: 10.3969/j.issn.1006-2475.2018.12.004
In cellular mobile communication systems, the performance of cell-edge users is severely restricted by inter-cell interference. Coordinated multi-point transmission (CoMP) can significantly reduce interference among cells and improve the performance of edge users. To improve the data transmission rate of cell-edge users, a dynamic clustering algorithm based on graph theory is proposed for the CoMP system. The algorithm uses graph theory to build the topology of the cellular network and, by analyzing inter-cell interference, generates multiple clusters of unfixed size at the same time, overcoming the limitations caused by fixed cluster sizes and sequential clustering. Simulation results show that, compared with other dynamic clustering algorithms, this algorithm reduces computational complexity and improves the system sum rate while improving clustering performance.
Knowledge Graph Construction of Threat Intelligence Based on Deep Learning
WANG Tong, AI Zhong-liang, ZHANG Xian-guo
2018, 0(12): 21. doi: 10.3969/j.issn.1006-2475.2018.12.005
With the increasing number of cyber threats, knowledge graph construction for threat intelligence has become an important research direction in network security. Current construction techniques, however, fall short in the speed and accuracy of knowledge acquisition. To address these problems, this paper proposes a supervised deep learning model that automatically extracts threat-intelligence entities and entity relationships and visualizes the result as a knowledge graph. Experimental results show that the deep-learning-based extraction of threat-intelligence entities and relationships achieves a marked improvement in accuracy, providing a solid basis for the automated construction of threat intelligence knowledge graphs.
Design of Teaching Video Service Platform Based on Streaming Media Technology
XU Chang, LU Wei, LIU Kai-xiang, CHEN Peng
2018, 0(12): 27. doi: 10.3969/j.issn.1006-2475.2018.12.006
Aiming at the high cost and long production period of current online courses, a teaching video service platform based on streaming media technology is designed. Teachers can broadcast courses live and record course videos while giving face-to-face lectures or workshops. The platform uses the Java SSM architecture to implement course video production and to keep the broadcasting plan flexible, and the Wowza streaming media server to provide live and on-demand playback. Since the platform was put into practical use, several courses have been produced and published as MOOCs, offering students a new way to learn. The platform runs stably and well, and it also helps teachers improve their teaching skills.
Storm Flood Pattern Library in Middle and Small Rivers Based on Pattern Mining
FENG Jun, GUO Tao, CHEN Zhi-fei
2018, 0(12): 32. doi: 10.3969/j.issn.1006-2475.2018.12.007
Traditional neural network prediction methods have been applied successfully in hydrology. However, when flood forecasting is carried out in data-deficient areas, the lack of training samples makes it difficult to fit the model parameters, so prediction results are often unsatisfactory. This paper proposes a new idea: constructing a storm flood pattern library for the basin to be forecasted. Historical hydrological data of the basin are symbolized and mined, and by analyzing frequent patterns in the hydrological time series together with flood flow, a storm flood pattern library for small and medium-sized rivers is built. Experimental results show that the pattern library built with the proposed mining method can be used to quickly forecast the trend of a future flood flow process with accuracy and applicability for the basin.
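The abstract says the hydrographs are mined "in a symbolic mode" but does not name the discretization; one common choice for this kind of symbolic pattern mining is a SAX-style symbolization, sketched below as an assumption rather than the paper's method.

```python
import numpy as np
from scipy.stats import norm

def symbolize(series, n_segments=20, alphabet="abcde"):
    """SAX-style symbolization: z-normalize the hydrograph, average it over
    equal-length segments, then map each segment mean to a symbol via
    equal-probability breakpoints of the standard normal distribution."""
    x = np.asarray(series, dtype=float)
    x = (x - x.mean()) / (x.std() + 1e-12)
    paa = np.array([seg.mean() for seg in np.array_split(x, n_segments)])
    breaks = norm.ppf(np.linspace(0, 1, len(alphabet) + 1)[1:-1])
    return "".join(alphabet[np.searchsorted(breaks, v)] for v in paa)
```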
An Online Optimizing Approach for Test Suite
ZHANG Chen-guang, XU Luo, LI Ning
2018, 0(12): 40. doi: 10.3969/j.issn.1006-2475.2018.12.008
Aiming at the problem of test suite optimization, an online test-suite optimization method is proposed. Test-suite optimization is embedded in the test generation flow as an important step, providing screened test sequences and test constraints for test generation. In turn, the generation process provides the satisfaction relationships between test sequences and test targets for test-set reduction, avoiding the redundancy caused by complex satisfaction relationships among test requirements. Compared with existing test generation methods, the proposed method effectively improves the efficiency and effectiveness of test suite optimization.
Address Standardization Algorithm Based on Aho-Corasick Automaton and Address Probability Model
LIU Yu1,2, ZHANG Jing-hui1
2018, 0(12): 45. doi: 10.3969/j.issn.1006-2475.2018.12.009
Chinese addresses have a wide range of applications and value, and address coding is a key part of them; address coding in turn depends on address standardization. This paper first introduces a double-array trie, which is fast and supports multi-pattern matching, for initial segmentation and part-of-speech tagging. Longest-suffix matching quickly locates the administrative-division address elements. On this basis the address is segmented into elements and each element is labeled with a grade, establishing an address vector space model (AVSM). The AVSM data are then processed in three steps: the administrative-division elements are conditionally combined to obtain candidate administrative divisions; cosine similarity is used to select the best administrative-division path; and for the remaining non-administrative hierarchical elements, a probabilistic address model computes the probability of each element, with Bayes' rule used to find the most probable words and to process the other address levels. Finally, a finite state machine adjusts the hierarchy membership of the whole address and applies level-specific repairs. The method can rapidly segment large volumes of address data and fill in missing administrative-division elements. Combining keyword matching with the probability model effectively identifies out-of-vocabulary words while balancing segmentation performance and maintainability.
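To make the cosine-similarity step concrete, here is a minimal sketch of ranking candidate administrative-division paths against a query address vector; the vectors and candidate paths are placeholders, not the paper's actual AVSM encoding.

```python
import numpy as np

def cosine(a, b):
    """Cosine similarity between two AVSM vectors."""
    return float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12))

def best_division_path(query_vec, candidates):
    """Rank candidate administrative-division paths (each already encoded as a
    vector in the same AVSM space) by cosine similarity to the query address."""
    scored = [(cosine(query_vec, vec), path) for path, vec in candidates.items()]
    return max(scored)   # (similarity, path) of the best matching division path
```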
A Multi-modal and Multi-task Framework for Power Supply Service Evaluation
SHEN Ran, LIN Kai-feng, WU Hui
2018, 0(12): 51. doi: 10.3969/j.issn.1006-2475.2018.12.010
Applying neural networks to speech emotion analysis and text appeal classification in power supply service is a novel approach. Compared with traditional methods, it avoids manual feature engineering and feature selection and learns more robust features. Inspired by multi-task learning, this paper designs a multi-modal multi-task model that handles two modalities, voice and text: on the one hand it evaluates power supply service through emotion analysis, and on the other it classifies user appeals, using the related task to improve the performance of each single task. Experiments show that on each single task the proposed model performs close to the best single-task model.
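The abstract does not specify the network architecture, so the following PyTorch sketch only illustrates the multi-modal, multi-task shape described above: an acoustic branch and a text branch feed a fused representation shared by two heads. All layer sizes and class counts are placeholder assumptions.

```python
import torch
import torch.nn as nn

class MultiModalMultiTask(nn.Module):
    """Shared-representation multi-task sketch: one GRU encodes acoustic
    features, one encodes text token ids; the concatenated states feed a
    sentiment head and an appeal-category head."""
    def __init__(self, audio_dim=40, vocab_size=5000, emb_dim=128,
                 hidden=128, n_sentiments=3, n_appeals=10):
        super().__init__()
        self.audio_rnn = nn.GRU(audio_dim, hidden, batch_first=True)
        self.embed = nn.Embedding(vocab_size, emb_dim)
        self.text_rnn = nn.GRU(emb_dim, hidden, batch_first=True)
        self.sentiment_head = nn.Linear(2 * hidden, n_sentiments)
        self.appeal_head = nn.Linear(2 * hidden, n_appeals)

    def forward(self, audio, text_ids):
        _, ha = self.audio_rnn(audio)                 # audio: (B, T, audio_dim)
        _, ht = self.text_rnn(self.embed(text_ids))   # text_ids: (B, L)
        fused = torch.cat([ha[-1], ht[-1]], dim=1)
        return self.sentiment_head(fused), self.appeal_head(fused)

# Joint training would use a weighted sum of the two cross-entropy losses:
# loss = ce(sent_logits, sent_y) + lam * ce(appeal_logits, appeal_y)
```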
A Random Forest Algorithm for Imbalanced Classification
SHEN Zhi-yong1, SU Chong1, ZHOU Yang1, SHEN Zhi-wei2
2018, 0(12): 56. doi: 10.3969/j.issn.1006-2475.2018.12.011
Random Forest is a simple and effective ensemble learning method. By choosing feature subsets or rotating the feature space it increases classifier diversity and builds more accurate and diverse classifiers than Bagging and Boosting. However, the splitting criterion used to grow each tree in Random Forest is the Gini index, which is known to be skew-sensitive: when learning from highly imbalanced datasets, class imbalance impedes its ability to learn the minority class. This paper uses K-L divergence as the splitting criterion for each tree in Random Forest. Experiments across a wide range of imbalanced datasets compare the K-L divergence based Random Forest with Random Forest, Balanced Random Forest and Bagging with Hellinger decision trees in terms of area under the ROC curve (AUC). The results show that the K-L divergence based Random Forest not only performs better than the others on more than 70% of the imbalanced datasets used, but also achieves the best average AUC, 0.938 on the lowly imbalanced datasets and 0.937 on the highly imbalanced datasets. We conclude that using K-L divergence as the splitting criterion improves the performance of Random Forest for imbalanced classification.
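The paper's exact formulation of the criterion is not given here, so the sketch below shows one natural reading: score a candidate split by the symmetrized K-L divergence between the class distributions of the two children (larger means better separated). It is meant only as an illustration of a skew-insensitive alternative to the Gini index.

```python
import numpy as np

def kl(p, q, eps=1e-12):
    """K-L divergence D(p || q) with smoothing to avoid log(0)."""
    p, q = p + eps, q + eps
    return float(np.sum(p * np.log(p / q)))

def kl_split_score(y_left, y_right, classes):
    """Score a binary split by the symmetrized K-L divergence between the
    class distributions of the two children (one reading of the criterion)."""
    p = np.array([(y_left == c).mean() for c in classes])
    q = np.array([(y_right == c).mean() for c in classes])
    return kl(p, q) + kl(q, p)
```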
Spectral Clustering Algorithm Based on Transitive Distance
DAI Tian-chen1, GU Zheng-hong2
2018, 0(12): 61. doi: 10.3969/j.issn.1006-2475.2018.12.012
Spectral clustering algorithms are affected by the scale of the underlying metric, and similarity measured by Euclidean distance is not always accurate. In view of this, a spectral clustering algorithm based on transitive distance is proposed. The main idea has three steps. First, a minimum spanning tree is constructed and used to transform the pairwise similarities, yielding a transitive distance matrix. Second, a Laplacian matrix is built from this matrix and the data are projected into its eigenspace. Finally, clustering is performed in that eigenspace. Experimental results on artificial and UCI data sets show that the transitive-distance spectral clustering algorithm is robust and effective.
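"Transitive distance" is commonly defined as the largest edge weight on the path joining two points in the minimum spanning tree (a minimax path distance); assuming that reading, the sketch below computes it for all pairs with a Kruskal-style union-find pass, after which a Gaussian affinity built from it can be fed to an ordinary spectral clustering routine.

```python
import numpy as np
from itertools import combinations

def transitive_distances(D):
    """Minimax (transitive) distance for all pairs: process edges of the
    complete graph in increasing weight; when two components merge at weight w,
    every pair straddling them gets transitive distance w. D is a symmetric
    pairwise distance matrix."""
    n = D.shape[0]
    parent = list(range(n))
    members = {i: [i] for i in range(n)}

    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]
            i = parent[i]
        return i

    T = np.zeros((n, n))
    for w, i, j in sorted((D[i, j], i, j) for i, j in combinations(range(n), 2)):
        ri, rj = find(i), find(j)
        if ri != rj:
            for a in members[ri]:
                for b in members[rj]:
                    T[a, b] = T[b, a] = w
            parent[rj] = ri
            members[ri] += members.pop(rj)
    return T

# affinity = np.exp(-T**2 / (2 * sigma**2)) can then be passed to
# sklearn.cluster.SpectralClustering(affinity="precomputed").
```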
Algorithm for Discovering Key Nodes in Social Networks Based on SALSA
ZENG Jing
2018, 0(12): 67. doi: 10.3969/j.issn.1006-2475.2018.12.013
Finding key nodes in social networks is of great practical significance. Considering the behavior of user nodes, this paper divides users' social behavior into strong and weak relationships to supplement the relationship edges of the social network topology, and, drawing on the ideas of the SALSA algorithm, proposes a weighted algorithm, WSALSA, for discovering key nodes in social networks. Through extensive experiments on a Sina Weibo dataset, the spreading effects of the nodes found by PageRank, HITS and SALSA are compared in the SIR model. The results show that the rankings produced by the weighted WSALSA algorithm have a higher Spearman's correlation coefficient with the SIR ranking, so the algorithm evaluates node importance in social networks more accurately.
GRU and LDA Based Group Chat Topic Mining
TANG Kun1,2, CHEN Si-si1,3
2018, 0(12): 72. doi: 10.3969/j.issn.1006-2475.2018.12.014
With the rapid development of social networks, instant messaging systems have become an essential communication tool in daily life, and online group chats allow rapid exchange of information about life, technology and work. However, because group chat messages are updated quickly, it is difficult to extract their topics, and traditional topic models are not well suited to group chat text. By analyzing the characteristics of group chat messages, a GRU and LDA based group chat topic mining (GLB-GCTM) model is proposed, which captures word order, something traditional topic models cannot do. Each document is assumed to have a topic vector drawn from a Gaussian distribution; the latent state of each word is generated by a GRU, and a Bernoulli variable conditioned on that latent state decides whether the current word is a stop word, which in turn determines which language model generates it. The model is tested on the last three months of chat messages collected from ten QQ groups the authors belong to. Comparative experiments show that the model can effectively identify topics in group chat text.
Intelligent Text Classification Method Based on VAE-DBN Dual-Model
WANG Wei1,2
2018, 0(12): 77. doi: 10.3969/j.issn.1006-2475.2018.12.015
Text categorization underpins information filtering, search engines and other fields, and is one of the current research hot spots. After introducing the relevant concepts of text classification and deep learning models, and analyzing the shortcomings of traditional text classification methods, this paper presents a dual-model text classification method based on the variational autoencoder and the deep belief network (VAE-DBN). Comparative experiments on the corpus show that the dual-model method effectively improves the accuracy of text categorization.
An Unsupervised JND Color Image Segmentation Based on Firefly Algorithm
SUN Yuan, LIU Han-qiang
2018, 0(12): 85. doi: 10.3969/j.issn.1006-2475.2018.12.016
In traditional threshold segmentation, the choice of the number of thresholds has a strong effect on color image segmentation results. We present an unsupervised JND color image segmentation method based on the firefly algorithm. First, the input image is sampled to obtain neighborhood information around center pixels, from which supervised pixel information is derived automatically. Then the firefly algorithm uses this supervised information to select suitable thresholds for color image segmentation, and indices such as the peak signal-to-noise ratio and probabilistic edge information are used to evaluate segmentation performance. By introducing the theory of just noticeable difference into the firefly algorithm, segmentation quality and the robustness of the algorithm are greatly improved.
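The abstract does not state the optimization objective, so the sketch below only shows the generic firefly search over a vector of thresholds in [0, 255]; the objective function (for example, the between-class variance of the resulting segments) is left as a user-supplied assumption.

```python
import numpy as np

def firefly_thresholds(objective, n_thresholds, n_fireflies=20, iters=100,
                       beta0=1.0, gamma=1.0, alpha=0.2):
    """Generic firefly search over sorted threshold vectors in [0, 255].
    `objective` maps a threshold vector to a score to maximize."""
    pop = np.sort(np.random.uniform(0, 255, (n_fireflies, n_thresholds)), axis=1)
    fit = np.array([objective(p) for p in pop])
    for _ in range(iters):
        for i in range(n_fireflies):
            for j in range(n_fireflies):
                if fit[j] > fit[i]:                      # move i toward brighter j
                    r2 = np.sum((pop[i] - pop[j]) ** 2)
                    beta = beta0 * np.exp(-gamma * r2)
                    pop[i] += beta * (pop[j] - pop[i]) \
                              + alpha * (np.random.rand(n_thresholds) - 0.5)
                    pop[i] = np.sort(np.clip(pop[i], 0, 255))
                    fit[i] = objective(pop[i])
    best = int(np.argmax(fit))
    return pop[best], fit[best]
```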
Improved Face Recognition Method Based on Deep Learning
ZHENG Jian, WANG Zhi-ming, ZHANG Ning
2018, 0(12): 90. doi: 10.3969/j.issn.1006-2475.2018.12.017
Aiming at the weak discriminative power of features learned from unconstrained face images and the poor recognition performance of many current algorithms, an improved face recognition algorithm based on deep learning is proposed. A multi-task cascaded convolutional neural network is trained to perform face detection and face normalization on the unconstrained training images, which enriches the face information available to the model and reduces interference. The model is then trained under the joint supervision of Softmax loss and center loss, compacting intra-class features and dispersing inter-class features. Experimental results show that the algorithm improves the discriminative ability of the learned features and achieves a higher recognition rate on the LFW standard test set.
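A minimal PyTorch sketch of the center loss term referred to above: it maintains one learnable center per class and pulls each feature toward its class center. How it is weighted against the Softmax cross-entropy (the factor lambda) is an assumption.

```python
import torch
import torch.nn as nn

class CenterLoss(nn.Module):
    """Center loss: mean squared distance between each feature and the
    learnable center of its class."""
    def __init__(self, num_classes, feat_dim):
        super().__init__()
        self.centers = nn.Parameter(torch.randn(num_classes, feat_dim))

    def forward(self, features, labels):
        # features: (B, feat_dim), labels: (B,) long tensor of class indices
        diff = features - self.centers[labels]
        return 0.5 * (diff ** 2).sum(dim=1).mean()

# Joint supervision, as described in the abstract:
# loss = cross_entropy(logits, labels) + lam * center_loss(embeddings, labels)
```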
Rust Detection of Power Equipment Based on RPN and FCN
SHEN Mao-dong1, ZHOU Wei1, SONG Xiao-dong1, DENG Hao1, MA Chao1, XUE Bing2, ZHANG Wei-shan2
2018, 0(12): 96. doi: 10.3969/j.issn.1006-2475.2018.12.018
Because rust on power equipment is difficult to discover during long-term continuous operation under high temperature, high pressure and high flow rate, this paper develops an irregular rust detection method, RPN-FCN, based on a region proposal network and a fully convolutional network. The region proposal network first generates region proposals, and a fully convolutional network then classifies and localizes rust within the proposals at the pixel level. Experimental results show that the method improves the accuracy and effectiveness of irregular rust detection.
Double Channel CNN Based on High & Low Dimensions Feature Fusion
WEN Yuan-mei, LUO Zhi-peng, LING Yong-quan
2018, 0(12): 101. doi: 10.3969/j.issn.1006-2475.2018.12.019
To make full use of the feature information hidden in images, this paper proposes fusing low-level features at the fully connected layer and constructs a dual-channel convolutional neural network that combines high-level and low-level features. A conventional dual-channel convolutional neural network is built first, with a different convolution kernel size on each channel; the pooling layers of both channels are connected to the fully connected layer, and at the same time the features extracted by the first layer of each channel are passed directly to the fully connected layer, so that the primary and advanced feature maps are fused there. Finally, the fused features are fed to a Softmax classifier. Simulations on the Fashion-MNIST and CIFAR-10 datasets show that this model achieves higher classification accuracy than the compared algorithms.
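A PyTorch sketch of the fusion idea described above: two channels with different kernel sizes, whose first-layer (low-level) and pooled deeper (high-level) feature maps are all flattened and concatenated at the fully connected layer. The input size, channel widths and class count are illustrative assumptions.

```python
import torch
import torch.nn as nn

class DualChannelCNN(nn.Module):
    """Two conv channels (3x3 and 5x5 kernels); low-level and high-level
    feature maps from both channels are concatenated at the FC layer.
    Sizes assume 28x28 single-channel inputs (e.g., Fashion-MNIST)."""
    def __init__(self, n_classes=10):
        super().__init__()
        self.c1_low  = nn.Sequential(nn.Conv2d(1, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.c1_high = nn.Sequential(nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2))
        self.c2_low  = nn.Sequential(nn.Conv2d(1, 16, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2))
        self.c2_high = nn.Sequential(nn.Conv2d(16, 32, 5, padding=2), nn.ReLU(), nn.MaxPool2d(2))
        fused_dim = 2 * (16 * 14 * 14 + 32 * 7 * 7)      # low + high, both channels
        self.fc = nn.Sequential(nn.Linear(fused_dim, 256), nn.ReLU(),
                                nn.Linear(256, n_classes))

    def forward(self, x):
        l1, l2 = self.c1_low(x), self.c2_low(x)           # first-layer (low-level) maps
        h1, h2 = self.c1_high(l1), self.c2_high(l2)       # deeper (high-level) maps
        feats = [t.flatten(1) for t in (l1, h1, l2, h2)]
        return self.fc(torch.cat(feats, dim=1))           # softmax applied in the loss
```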
A Resistible Noise Recognition Method for Steel Printing Code
ZHOU Guo-hua, SHANG Jun-yan
2018, 0(12): 106. doi: 10.3969/j.issn.1006-2475.2018.12.020
Aiming at the noise sensitivity of the total margin support vector machine (TM-SVM), the pinball loss function is introduced and a total margin support vector machine with pinball loss (pin-TM-SVM) is proposed, together with a recognition method for steel printing codes in noisy environments. The steel printing image is first preprocessed, and the extracted image features are then classified with pin-TM-SVM. Experimental results show that pin-TM-SVM achieves superior classification accuracy and ROC performance.
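For reference, the pinball loss on classification margins m = y·f(x), in the form used by pinball-loss SVMs: setting tau = 0 recovers the ordinary hinge loss, while tau > 0 also penalizes points far inside the margin, which is what gives the classifier its noise insensitivity. This is a generic definition, not the paper's full pin-TM-SVM formulation.

```python
import numpy as np

def pinball_loss(margins, tau=0.5):
    """Pinball loss on classification margins m = y * f(x):
    1 - m on the wrong side of the margin (as in hinge loss), and
    tau * (m - 1) for points beyond the margin, so tau = 0 gives hinge loss."""
    m = np.asarray(margins, dtype=float)
    return np.where(m >= 1, tau * (m - 1), 1 - m)
```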
Aircraft Detection Method Based on MRNSSD Model for Remote Sensing Images
SONG Ping1,2,3, XU Guang-luan2,3, ZHOU Yan-hai4, GUO Zhi2,3, YAN Meng-long2,3, ZHANG Yi-fei2
2018, 0(12): 110. doi: 10.3969/j.issn.1006-2475.2018.12.021
Aircraft detection is one of the hottest issues in remote sensing image analysis. Current detection methods for remote sensing images suffer from problems such as complex detection procedures and low accuracy in complex backgrounds and densely packed aircraft areas. To solve these problems, an end-to-end aircraft detection method named MRNSSD (Multiscale Residual Network Single Shot Detector) is proposed. In this framework, a residual network is used as the feature extractor for its powerful feature extraction ability, and an extra sub-network consisting of several feature layers is appended to detect and locate aircraft. To locate aircraft of various scales more accurately, a series of default-box aspect ratios is set to better match aircraft shapes, and predictions from feature maps of different layers are combined. The method is simpler and more efficient than methods that require object proposals, because it eliminates proposal generation entirely and encapsulates all computation in a single network. Experiments demonstrate that this approach achieves better performance in many complex scenes.
Non-rigid 3D Point Cloud Registration Method
WANG Wei
2018, 0(12): 116. doi: 10.3969/j.issn.1006-2475.2018.12.022
The key step of non-rigid human reconstruction is the non-rigid registration of 3D point clouds, which is the focus of this article. 3D human registration is divided into three stages: processing of the original depth images, corresponding point estimation, and point cloud registration. This article uses bilateral filtering to remove noise from the depth image, extracts the human region with a threshold method, applies the vector field consensus algorithm to estimate corresponding points, and builds a registration model with normal-vector-consistency regularization terms on top of the embedded deformation model. Test results show that the method accelerates the iterations of point cloud registration and improves registration accuracy, demonstrating the advantages of the proposed registration method.
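A small sketch of the depth preprocessing stage mentioned above, using OpenCV's bilateral filter to denoise a depth frame while preserving edges, followed by a simple depth threshold to keep the foreground person; the file name and threshold values are illustrative assumptions.

```python
import cv2
import numpy as np

# Hypothetical input: a 16-bit depth frame in millimeters.
depth = cv2.imread("depth_frame.png", cv2.IMREAD_UNCHANGED).astype(np.float32)

# Edge-preserving smoothing of depth noise before segmentation.
smoothed = cv2.bilateralFilter(depth, d=9, sigmaColor=50, sigmaSpace=50)

# Crude foreground extraction: keep points between 0.5 m and 2.0 m.
person_mask = (smoothed > 500) & (smoothed < 2000)
```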