
Table of Contents

    30 November 2022, Volume 0 Issue 11
    Categorical Data Clustering Based on Extraction of Associations from Co-association Matrix
    GUAN Yun-peng, LIU Yu-long
    2022, 0(11):  1-8. 
    Categorical data clustering is widely used in many real-world fields, such as medicine and computer science. Categorical data clustering is usually studied on the basis of a dissimilarity measure; for data sets with different characteristics, the clustering results are affected by the characteristics of the data set itself and by noise. In addition, categorical data clustering based on representation learning is too complicated to implement, and its clustering results depend heavily on the quality of the learned representation. Based on the co-association matrix, this paper proposes a clustering method that directly considers the relationships among the original information of categorical data: categorical data clustering based on extraction of associations from a co-association matrix (CDCBCM). The co-association matrix can be regarded as a summary of the information associations in the original data space. It is constructed by computing the co-association frequency of each pair of objects in every attribute subspace; noise entries are then removed from the matrix, and the clustering result is obtained by normalized cut. The method is tested on 16 publicly available data sets, compared with 8 existing methods, and evaluated with the F1-score metric. The experimental results show that the method performs best on 7 data sets, achieves the best average rank, and better accomplishes the clustering of categorical data.
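The co-association construction this abstract describes can be sketched in a few lines. This is a minimal illustration only: the function name, the pairwise simple-matching frequency, and the threshold-based denoising are assumptions, and the normalized-cut step of the actual paper is omitted.

```python
from itertools import combinations

def co_association_matrix(data, threshold=0.0):
    """Co-association matrix for categorical objects.

    data: list of objects, each a list of categorical attribute values.
    M[i][j] is the fraction of attributes on which objects i and j take
    the same category; entries below `threshold` are zeroed as noise.
    """
    n, m = len(data), len(data[0])
    M = [[0.0] * n for _ in range(n)]
    for i in range(n):
        M[i][i] = 1.0                       # an object fully co-associates with itself
    for i, j in combinations(range(n), 2):
        freq = sum(1 for a in range(m) if data[i][a] == data[j][a]) / m
        if freq < threshold:                # drop weak (noisy) associations
            freq = 0.0
        M[i][j] = M[j][i] = freq
    return M

objs = [["red", "s"], ["red", "m"], ["blue", "m"]]
M = co_association_matrix(objs)
# objects 0 and 1 agree on 1 of 2 attributes -> entry 0.5
```

The resulting matrix would then be fed to a spectral method such as normalized cut to obtain the final clusters.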
    Semi-supervised Learning Method Based on Convolution and Sparse Coding
    LIU Ying-jie, LAN Hai, WEI Xian
    2022, 0(11):  9-16. 
    Convolutional neural networks (CNN) have achieved great success in semi-supervised learning, which uses both labelled and unlabelled samples in the training stage; the unlabelled samples help regularize the learning model. To further improve the feature-extraction ability of semi-supervised models, this paper proposes an end-to-end semi-supervised learning method that combines a deep semi-supervised convolutional neural network with sparse-coding dictionary learning, called Semi-supervised Learning based on Sparse Coding and Convolution (SSSConv); it aims to learn a more discriminative image feature representation and improve the performance of classification tasks. The method first uses a CNN to extract features and applies an orthogonal projection transformation to them, then learns the corresponding sparse codes to obtain the image representation, which the model's classifier finally classifies. The whole semi-supervised learning process can be regarded as an end-to-end optimization problem, with the CNN part and the sparse-coding part sharing a unified loss function. Conjugate gradient descent, the chain rule, and backpropagation are used to optimize the parameters of the objective function; the sparse-coding parameters are restricted to a manifold, while the CNN parameters can be defined not only in Euclidean space but also in orthogonal space. Experimental results on semi-supervised classification tasks verify the effectiveness of the proposed SSSConv framework, which is highly competitive with existing methods.
    Research and Application of Hesitant Fuzzy Canopy-K-means Clustering Algorithm
    ZHANG Zi-xuan, SHA Xiu-yan, XIAO Fei, SU Bao-chan, SUI Yu-lu, MENG Zi-chen
    2022, 0(11):  17-21. 
    Aiming at the problem that the traditional K-means clustering algorithm is sensitive to initial values and easily falls into local extrema, leading to unsatisfactory classification results, this paper proposes a hesitant fuzzy Canopy-K-means clustering algorithm. Firstly, the original data are coarsely partitioned by the Canopy algorithm into a set of possibly overlapping canopies, whose centers serve as the initial cluster centers of the K-means algorithm. Then K-means clustering is run to obtain the final result. Finally, an example analysis is carried out on evaluation data of enterprises that resumed work and production after the epidemic: 5 such enterprises are analyzed from 6 aspects to evaluate their business development. Comparison with the traditional K-means clustering algorithm shows that the proposed method greatly reduces the number of iterations and yields clustering results that are more reasonable, stable, and effective.
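The Canopy pre-clustering step the abstract describes can be sketched as follows. This is a minimal illustration under assumed loose/tight thresholds t1 > t2 (the hesitant fuzzy distance of the actual paper is replaced by plain Euclidean distance here):

```python
import math

def canopy_centers(points, t1, t2):
    """Canopy pre-clustering (t1 > t2): return the canopy centers and
    the (possibly overlapping) canopies. The number of centers fixes K
    and the centers initialise the subsequent K-means run."""
    remaining = list(points)
    centers, canopies = [], []
    while remaining:
        c = remaining.pop(0)                            # next point becomes a center
        centers.append(c)
        canopies.append([c] + [p for p in remaining
                               if math.dist(p, c) <= t1])  # loose membership may overlap
        remaining = [p for p in remaining
                     if math.dist(p, c) > t2]           # tight members are consumed
    return centers, canopies

pts = [(0.0, 0.0), (0.1, 0.0), (5.0, 5.0), (5.1, 5.0)]
centers, canopies = canopy_centers(pts, t1=2.0, t2=1.0)
# two well-separated groups -> 2 centers, so K = 2 for the K-means stage
```

K-means would then start from `centers` instead of random initial points, which is what removes the sensitivity to initialization.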
    A Data-driven Online Modeling Method for Evaporators
    DING Xu-dong, YANG Dong-run, LIU Hui, ZHAO Xing-kai, ZHANG Ying, SUN Mei,
    2022, 0(11):  22-31. 
    To address the problem that offline evaporator modeling requires data over a wide range of operating conditions, a data-driven online modeling method for evaporators is proposed that uses the K-means algorithm to cluster and screen the observation data of the identified model. Firstly, a method for determining the optimal number of clusters K* and the optimal initial cluster centers in K-means is proposed, using the Davies-Bouldin (DB) criterion and the PSO algorithm to improve convergence speed; the observation data are then replaced by the cluster centers obtained by the improved K-means algorithm, reducing the amount of data needed for model identification. The existing evaporator model structure and identification method are then employed to identify the model. Experimental results show that the models identified from the observation data before and after clustering and screening are approximately equally accurate, with errors within ±3% and ±3.5% respectively. Finally, an online modeling method for the evaporator is proposed that analyzes the Euclidean distance between online observation data and each cluster center: a small amount of offline observation data from a narrow range of working conditions is first used to identify the model, and online data are then used to modify the model parameters, expanding the model's scope of application.
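The DB criterion used above to pick the optimal K* can be illustrated with the standard Davies-Bouldin index; lower is better, so K* is the candidate K that minimises it. A minimal sketch over 1-D points (the function name and toy data are assumptions):

```python
def davies_bouldin(clusters):
    """Davies-Bouldin index for a clustering (lower is better).

    clusters: list of clusters, each a list of 1-D points for brevity.
    For each cluster, scatter = mean distance of members to the
    centroid; the index averages the worst (scatter_i + scatter_j) /
    centroid-distance ratio over all clusters.
    """
    cents = [sum(c) / len(c) for c in clusters]
    scatter = [sum(abs(p - cents[i]) for p in c) / len(c)
               for i, c in enumerate(clusters)]
    k = len(clusters)
    total = 0.0
    for i in range(k):
        total += max((scatter[i] + scatter[j]) / abs(cents[i] - cents[j])
                     for j in range(k) if j != i)
    return total / k

good = davies_bouldin([[0.0, 1.0], [10.0, 11.0]])   # compact, well separated
bad = davies_bouldin([[0.0, 10.0], [1.0, 11.0]])    # overlapping clusters
```

Scanning candidate values of K and keeping the partition with the smallest index is the selection rule the abstract refers to.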
    Mobile Edge Computing Task Allocation Method Based on Particle Swarm Optimization
    CHEN Gang, WANG Zhi-jian, XU Sheng-chao
    2022, 0(11):  32-36. 
    To improve the task-allocation efficiency of mobile terminals and reduce computational energy consumption, a mobile edge computing task allocation method based on the particle swarm algorithm is proposed. By building a heterogeneous network, the complete set of tasks to be allocated is obtained, and the specific conditions required for allocation, namely allocation consumption and delay, are clarified. The allocation problem is converted into finding the optimal allocation result: an optimal-solution model is built and solved with the particle swarm algorithm, and the optimal edge computing task allocation is generated through continuous iteration and updating. The experimental results show that the calculation time of the proposed method is between 1 and 3.3 s when the number of assigned tasks is between 20 and 100; when the number of tasks is 100, the energy consumption of the proposed method is only 4107 J; when it is 100%, the delay is only 12.5 ms. Its task-allocation calculation time is short, its energy consumption is small, and its data transmission delay is short, which satisfies the stated application requirements.
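The iterate-and-update search the abstract describes is standard particle swarm optimisation. Below is a minimal, self-contained PSO sketch minimising a toy stand-in for the weighted consumption-plus-delay objective; the function name, box constraints, and coefficient values are assumptions, not the paper's settings.

```python
import random

def pso_minimize(cost, dim, n_particles=20, iters=100, seed=0):
    """Minimal particle swarm optimiser over the box [0, 1]^dim."""
    rng = random.Random(seed)
    w, c1, c2 = 0.7, 1.5, 1.5            # inertia and acceleration weights
    pos = [[rng.random() for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_cost = [cost(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_cost[i])
    gbest, gbest_cost = pbest[g][:], pbest_cost[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                vel[i][d] = (w * vel[i][d]
                             + c1 * rng.random() * (pbest[i][d] - pos[i][d])
                             + c2 * rng.random() * (gbest[d] - pos[i][d]))
                pos[i][d] = min(1.0, max(0.0, pos[i][d] + vel[i][d]))
            c = cost(pos[i])
            if c < pbest_cost[i]:         # update personal and global bests
                pbest[i], pbest_cost[i] = pos[i][:], c
                if c < gbest_cost:
                    gbest, gbest_cost = pos[i][:], c
    return gbest, gbest_cost

# toy stand-in objective with known optimum at x = (0.3, 0.3, 0.3)
best, best_cost = pso_minimize(lambda x: sum((v - 0.3) ** 2 for v in x), dim=3)
```

In the paper's setting, each particle would encode an allocation decision vector and `cost` would combine allocation consumption and delay.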
    Communication Terminal Attack Behavior Identification Algorithm Based on Trusted Cloud Computing
    MAO Ming-yang, XU Sheng-chao
    2022, 0(11):  37-42. 
    Malicious attacks such as Trojan horse implantation pose a serious threat to communication terminals, so a communication terminal attack-behavior identification algorithm based on trusted cloud computing is proposed. A data acquisition module obtains the data flow of the communication terminal's image, and the chain of trust is extended to the virtual machine manager and the communication terminal in the cloud computing environment through a credibility verification mechanism. After the credibility of the terminal's running environment is verified, an attack-behavior identification module uses a Bayesian algorithm to judge whether the data flow contains attack behavior: the maximum a posteriori probability of the attack data is calculated to determine the category of the attack, and the detection result is reported to the management module. A rate-limiting module then throttles the data flow containing the attack behavior until the attack on the communication terminal ends. The experimental results show that the algorithm effectively improves the access security of the communication terminal and ensures smooth data transmission while the terminal is under attack; the mean absolute percentage error of attack-behavior identification under different levels of interference remains below 0.25%.
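The maximum-a-posteriori classification step can be sketched with a tiny categorical naive Bayes model. The feature names, toy flow records, and Laplace smoothing are illustrative assumptions, not the paper's actual feature set.

```python
from math import log

def train(X, y):
    """Per-class priors and per-feature categorical value counts."""
    classes = sorted(set(y))
    prior = {c: y.count(c) / len(y) for c in classes}
    counts = {c: [{} for _ in X[0]] for c in classes}
    for xs, c in zip(X, y):
        for f, v in enumerate(xs):
            counts[c][f][v] = counts[c][f].get(v, 0) + 1
    return classes, prior, counts

def map_class(x, classes, prior, counts, alpha=1.0):
    """Maximum a posteriori class for feature vector x, with Laplace
    smoothing so unseen values never zero out the posterior."""
    def log_posterior(c):
        n_c = sum(counts[c][0].values())          # samples in class c
        s = log(prior[c])
        for f, v in enumerate(x):
            s += log((counts[c][f].get(v, 0) + alpha)
                     / (n_c + alpha * len(counts[c][f])))
        return s
    return max(classes, key=log_posterior)

# toy flow records: (traffic volume, dominant TCP flag)
X = [("high", "syn"), ("high", "syn"), ("low", "ack"),
     ("low", "ack"), ("high", "ack")]
y = ["attack", "attack", "normal", "normal", "normal"]
model = train(X, y)
```

Flows classified as an attack class would then be handed to the rate-limiting module.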
    Vulnerability Assessment Model of SDN Mobile Network for Service Transmission
    BAO Chun-hui, ZHUANG Yi, GUO Li-ye
    2022, 0(11):  43-51. 
    Existing vulnerability assessment algorithms cannot be directly applied to software-defined networks (SDN), and existing assessment techniques are generally biased towards network connectivity and cannot analyze SDN vulnerability in terms of service and transmission performance. To address these problems, a service-oriented SDN mobile network vulnerability assessment model and algorithm are proposed, and an SDN-based mobile network vulnerability assessment framework is designed. A method for analyzing the security vulnerability of server nodes and network equipment in an SDN-based mobile network is proposed: the vulnerability of node equipment is evaluated from both static configuration information and dynamic operation information, making the evaluation more comprehensive and accurate. Then, according to the service and transmission characteristics of the SDN mobile network, the service-oriented, transmission-based importance of each node is calculated from 2 aspects: topological transmission performance and node activity. Finally, the security vulnerability and the importance of node devices are fused to evaluate the vulnerability of the SDN-based mobile network and obtain the assessment results. The effectiveness of the proposed algorithm is verified by examples and simulation experiments; compared with similar algorithms, it achieves higher assessment accuracy.
    Unrestricted Attack Based on Colorization
    LI Shi-bao, WANG Jie-wei, CUI Xue-rong, LIU Jian-hang, HUANG Ting-pei
    2022, 0(11):  52-59. 
    Deep learning is now widely used in areas such as computer vision, robotics, and natural language processing. However, deep neural networks have been shown to be vulnerable to adversarial examples: a single carefully crafted adversarial example can make a deep learning model misjudge. Most existing studies attack classifiers by generating a small perturbation bounded in an Lp norm, but the results achieved are not satisfactory. In this paper, we propose a new adversarial attack method, the colorization adversarial attack, which converts the input sample into a grayscale image, designs a coloring method to guide recolorization of the grayscale image, and finally uses the colorized image to deceive the classifier and achieve an unrestricted attack. Experiments show that the adversarial examples produced by this method perform well in deceiving several state-of-the-art deep neural network image classifiers and pass human perception tests.
    Collaborative Filtering Recommendation Algorithm Combined with Expert Trust
    LIU Guo-li, XU Hong-nan, TAN You-qian
    2022, 0(11):  60-68. 
    Aiming at problems of the collaborative filtering recommendation algorithm such as inaccurate similarity calculation for neighbor users and cold start, a collaborative filtering recommendation algorithm combined with expert trust is proposed. The algorithm is studied from 3 aspects: similarity, expert trust, and cold-start alleviation. For similarity, the implicit preference contained in commonly rated items and an unpopularity factor for items are added to the similarity formula. For expert trust, a time-span factor is proposed and the experience of experts is taken into account in the trust calculation. For cold start, the problem is effectively alleviated by combining expert users with users of similar attributes to generate recommendations for target users. Experiments on the MovieLens data set verify that the improved algorithm achieves better mean absolute error and accuracy than the traditional algorithm.
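The idea of weighting co-rated items by their unpopularity can be sketched as a cosine similarity with an IDF-style factor. This is an illustrative variant under assumed names and weighting, not the paper's exact formula.

```python
from math import log, sqrt

def similarity(u, v, ratings):
    """User-user cosine similarity where unpopular co-rated items
    weigh more: agreement on niche items says more about shared taste
    than agreement on items everyone rates."""
    n_users = len(ratings)
    common = set(ratings[u]) & set(ratings[v])
    if not common:
        return 0.0
    num = den_u = den_v = 0.0
    for i in common:
        pop = sum(1 for r in ratings.values() if i in r)   # item popularity
        w = log(n_users / pop) + 1.0   # IDF-style: niche items get larger weight
        num += w * ratings[u][i] * ratings[v][i]
        den_u += w * ratings[u][i] ** 2
        den_v += w * ratings[v][i] ** 2
    return num / sqrt(den_u * den_v)

ratings = {"a": {"i1": 5, "i2": 3}, "b": {"i1": 5, "i2": 3}, "c": {"i3": 4}}
sim_ab = similarity("a", "b", ratings)   # identical rating profiles
sim_ac = similarity("a", "c", ratings)   # no co-rated items
```

Neighbor selection would then rank users by this weighted similarity before blending in expert trust.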
    Prediction Model of Diabetic Complications Based on BP Neural Network Optimized by Improved Genetic Algorithm
    WANG Min, XU Ying-hao, ZHU Xi-jun
    2022, 0(11):  69-74. 
    The BP neural network is one of the most frequently used neural networks in deep learning research. In this paper, an improved genetic algorithm (IGABP) is proposed to optimize the initial structure of a BP neural network. Because the genetic algorithm easily falls into local optima, which limits its optimization ability, it is improved here: the selection operator is redesigned, and the adaptive crossover and mutation probability formulas are also improved. A prediction model of diabetic complications is then constructed to predict their occurrence. Using this prediction model, IGABP is compared with BP, GABP, and AGABP. The simulation results show that the prediction accuracy of IGABP is significantly better than that of BP, GABP, and AGABP, and that network convergence is accelerated.
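An adaptive crossover probability of the kind referred to above is commonly written in the Srinivas-Patnaik style; the sketch below is that classic scheme (parameter names k1, k2 are assumptions), not necessarily the paper's exact improved formula, and the adaptive mutation probability follows the same pattern with smaller constants.

```python
def adaptive_crossover_prob(f_prime, f_max, f_avg, k1=0.9, k2=0.9):
    """Adaptive crossover probability (Srinivas-Patnaik style).

    f_prime: the larger fitness of the two parents. Fitter-than-average
    pairs get a smaller p_c (protecting good genes), the best individual
    gets p_c = 0, and below-average pairs get a fixed high p_c to keep
    exploring and escape local optima.
    """
    if f_prime >= f_avg:
        if f_max == f_avg:            # degenerate population: everyone equal
            return k2
        return k1 * (f_max - f_prime) / (f_max - f_avg)
    return k2

# with population stats f_max = 10, f_avg = 5:
p_best = adaptive_crossover_prob(10, 10, 5)   # best parent pair
p_mid = adaptive_crossover_prob(7.5, 10, 5)   # above-average pair
p_weak = adaptive_crossover_prob(3, 10, 5)    # below-average pair
```

Plugging such adaptive probabilities into the GA that evolves the BP network's initial weights is the optimization loop the abstract describes.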
    Deep Q-learning Based Task Offloading in Power IoT
    DING Zhong-lin, LI Yang, CAO Wei, TAN Yu-hao, XU Bo
    2022, 0(11):  75-80. 
    With the increasing demand for electricity in modern cities and industrial production, the power internet of things (PIoT) has attracted extensive attention and is considered a solution that can significantly improve the efficiency of power systems. To establish effective access, power equipment is now often equipped with 5G modules with lightweight built-in AI. However, the limited computing and communication capabilities of these modules make real-time processing and analysis of the massive data generated by the equipment a great challenge. In this paper, we focus on task offloading in the PIoT system: by jointly optimizing task scheduling and the computing resource allocation of edge servers, the weighted sum of latency and energy consumption is reduced. We propose a task offloading algorithm based on deep reinforcement learning. Firstly, task execution on each edge server is modeled as a queuing system. Then, the local computing resource allocation is optimized based on convex optimization theory. Finally, a deep Q-learning algorithm is proposed to optimize the task offloading decisions. Simulation results show that the proposed algorithm significantly reduces latency and energy consumption.
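The Q-learning decision step can be illustrated with a tabular toy stand-in for the deep Q-network: states are task types, actions are "run locally" or "offload", and the reward is the negative weighted latency-plus-energy cost. The state/action names and cost table are assumptions for illustration.

```python
import random

def learn_offloading_policy(cost, states, actions, episodes=2000,
                            alpha=0.1, eps=0.2, seed=0):
    """Tabular epsilon-greedy Q-learning for one-shot offloading
    decisions (gamma = 0: each decision is treated as terminal)."""
    rng = random.Random(seed)
    Q = {(s, a): 0.0 for s in states for a in actions}
    for _ in range(episodes):
        s = rng.choice(states)
        a = (rng.choice(actions) if rng.random() < eps      # explore
             else max(actions, key=lambda x: Q[(s, x)]))    # exploit
        reward = -cost(s, a)                     # minimise weighted cost
        Q[(s, a)] += alpha * (reward - Q[(s, a)])
    return {s: max(actions, key=lambda x: Q[(s, x)]) for s in states}

# toy weighted latency+energy costs: offloading pays a transmission
# overhead but wins for heavy tasks
COST = {("small", "local"): 1.0, ("small", "offload"): 2.0,
        ("big", "local"): 5.0, ("big", "offload"): 2.5}
policy = learn_offloading_policy(lambda s, a: COST[(s, a)],
                                 ["small", "big"], ["local", "offload"])
```

In the paper's setting, a neural network replaces the Q table so that the continuous queue and channel state can be handled.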
    Multi-feature Fusion Ship Target Detection Algorithm in Complex Environment
    WANG Chang-jun, PENG Cheng, LI Yong
    2022, 0(11):  81-88. 
    As one of the research fields of machine vision, ship target detection is of practical significance for the marine transportation industry and for intelligent search and rescue. However, detection in complex weather environments suffers from low accuracy and imprecise localization, so this paper proposes a multi-feature fusion ship target detection algorithm for complex environments. A side-fusion path network is introduced to reduce the loss during forward propagation of features and strengthen information fusion. The localization loss function is improved with a Gaussian distribution, and variance voting is used to better filter duplicate boxes, making box localization more accurate and reducing missed and false detections. Experimental results show that, across different weather environments, the mean average precision (mAP) of the algorithm reaches 88.01%, which is 19.70 and 15.13 percentage points higher than the traditional YOLOv3 and Faster RCNN algorithms respectively, and the average intersection over union (IoU) increases by 6.49 percentage points; the algorithm therefore has good practicability in ship inspection applications in complex environments.
    Lightweight Super-resolution Networks Based on Improved Residual Feature Distillation
    WU Li-jun, CAI Zhou-wei, CHEN Zhi-cong
    2022, 0(11):  89-94. 
    Deep learning-based image super-resolution algorithms often use recursion or parameter sharing to reduce network parameters, which increases network depth and makes inference time-consuming, so the models are difficult to deploy in real-life settings. To solve this problem, this paper designs a lightweight super-resolution network that learns the correlation and importance of intermediate features and exploits feature information of the high-resolution image in the reconstruction stage. First, a layer attention module is introduced to adaptively assign weights to important hierarchical features by considering the correlations among layers. Next, finer feature information of the high-resolution image is extracted by an enhanced reconstruction block to obtain a clearer reconstructed image. Extensive comparative experiments show that the proposed network has fewer parameters than other lightweight models while improving reconstruction accuracy and visual quality.
    Crack Target Detection Algorithm Based on Adaptive Anchor Frame
    YIN Chu, ZHAO Qi-lin, RUI Ting, YUAN Hui, WANG Jian
    2022, 0(11):  95-101. 
    With the development of transportation, bridges play an increasingly important role and are ever more diversified. Faced with a large number of bridges under different working conditions, it is therefore particularly important to develop an intelligent crack detection technology that can conveniently learn new working conditions. To improve the accuracy and efficiency of the target detection algorithm, this paper divides the original crack image into slices at three resolutions and trains the network to recognize cracks of different sizes. To improve subsequent extensibility, we design a method that adaptively adjusts the anchor boxes according to the dimensions in the training set, so that when training data for new engineering conditions are added later, the algorithm can ingest them directly and automatically adjust the optimal anchor box sizes, making it practical for real applications. Compared with the original YOLOv3 network and several algorithms from the literature, the proposed algorithm reaches an average accuracy of 91% and offers better scalability.
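Adapting anchor boxes to the training-set dimensions is typically done YOLO-style, by k-means clustering of ground-truth box widths and heights with 1 - IoU as the distance; the sketch below shows that standard technique (the deterministic initialisation and toy boxes are assumptions), rerun whenever new data are added.

```python
def iou_wh(a, b):
    """IoU of two boxes (w, h) aligned at a shared corner."""
    inter = min(a[0], b[0]) * min(a[1], b[1])
    return inter / (a[0] * a[1] + b[0] * b[1] - inter)

def kmeans_anchors(boxes, k, iters=50):
    """Cluster training-set box dimensions into k anchors using
    1 - IoU as the distance, as in YOLO anchor selection."""
    anchors = boxes[:k]                           # simple deterministic init
    for _ in range(iters):
        groups = [[] for _ in range(k)]
        for b in boxes:                           # assign to most-overlapping anchor
            i = max(range(k), key=lambda i: iou_wh(b, anchors[i]))
            groups[i].append(b)
        anchors = [(sum(b[0] for b in g) / len(g),
                    sum(b[1] for b in g) / len(g)) if g else anchors[i]
                   for i, g in enumerate(groups)]  # recentre on mean (w, h)
    return anchors

# toy ground-truth crack boxes: a small cluster and a large cluster
boxes = [(10, 10), (12, 10), (100, 90), (95, 100)]
anchors = kmeans_anchors(boxes, k=2)
```

Adding boxes from a new working condition and rerunning the clustering is what lets the detector's anchors adapt automatically.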
    Indoor Positioning Technology for Complex Cabinet Room Environment
    HUANG Zhi-chang, SHANG Jing-wei
    2022, 0(11):  102-110. 
    The intelligent operation and maintenance management of cabinet room data centers urgently needs accurate indoor positioning services, yet in such complex environments, where metal shielding, electromagnetic radiation, and other equipment cause serious interference, non-line-of-sight propagation error significantly reduces traditional indoor positioning accuracy. An indoor three-dimensional trilateration algorithm is therefore proposed that uses optimization theory and K-nearest neighbors to resist non-line-of-sight propagation error. Based on this positioning algorithm, an indoor positioning system for cabinet rooms based on ultra-wideband technology is designed and developed, suitable for indoor positioning and navigation in complex cabinet rooms. Experiments show that the system achieves higher positioning accuracy than Wi-Fi and Bluetooth fingerprint indoor positioning. In addition, it performs well in a complex cabinet room environment when the nonlinear least-squares optimization of the objective function is combined with the KNN algorithm. It provides an effective method and means for location services in the intelligent operation and maintenance of cabinet room data centers.
    Minimization of Test Suite of RESTful API in Cyber-Physical System Based on NSGA-Ⅱ
    YAN Jia-cheng, YIN Kai-ou
    2022, 0(11):  111-118. 
    Test minimization techniques aim to identify and eliminate redundant test cases from test suites in order to reduce the number of test cases to execute, thereby improving testing efficiency. A cyber-physical system combines computing, communication, and process-control technologies, and interfaces written in the RESTful architectural style can easily be called across platforms and languages; the lack of research on RESTful API testing poses challenges for testing and optimization. A minimized test suite needs to cover all test requirements as far as possible while maintaining high fault-detection capability. In this paper, we use NSGA-Ⅱ and random search to minimize the test suite used to test the RESTful API in a cyber-physical system. Experimental results show that the test suite optimized by NSGA-Ⅱ performs significantly better than both random search and the original test suite.
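The multi-objective trade-off here (small suite, high coverage, high fault detection) is ranked in NSGA-Ⅱ by Pareto dominance. A minimal sketch of that core step, with all objectives cast as minimisation and a toy scoring of candidate sub-suites (the tuples are illustrative, not real experimental data):

```python
def dominates(a, b):
    """a dominates b if it is no worse on every objective and strictly
    better on at least one (all objectives minimised)."""
    return (all(x <= y for x, y in zip(a, b))
            and any(x < y for x, y in zip(a, b)))

def first_front(points):
    """Non-dominated (Pareto) front: the rank-1 set NSGA-II keeps when
    sorting candidate solutions."""
    return [p for p in points
            if not any(dominates(q, p) for q in points if q != p)]

# candidate sub-suites scored as (size, -requirements_covered, -faults_found)
suites = [(10, -9, -5),   # covers a lot but is large -> dominated by the next
          (6, -9, -5),    # same coverage, smaller -> non-dominated
          (6, -7, -5),    # dominated: same size, less coverage
          (4, -5, -2)]    # smallest -> non-dominated trade-off
front = first_front(suites)
```

Full NSGA-Ⅱ adds successive fronts, crowding-distance ties, and genetic operators on top of this ranking.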
    Research Review of Single-channel Speech Separation Technology Based on TasNet
    LU Wei, ZHU Ding-ju
    2022, 0(11):  119-126. 
    Speech separation is a fundamental task in acoustic signal processing with a wide range of applications. Thanks to the development of deep learning, the performance of single-channel speech separation systems has improved significantly in recent years. In particular, with the introduction of the time-domain audio separation network (TasNet), speech separation technology has been gradually transitioning from traditional time-frequency-domain methods to time-domain methods. This paper reviews the research status and prospects of single-channel speech separation technology based on TasNet. After reviewing traditional time-frequency-domain speech separation methods, it focuses on the TasNet-based Conv-TasNet and DPRNN models and compares the improvements proposed for each. Finally, it discusses the limitations of current TasNet-based single-channel speech separation models and future research directions in terms of the model, the data sets, the number of speakers, and speech separation in complex scenarios.