
Table of Contents

    30 June 2024, Volume 0 Issue 06
    Recommendation Algorithm Model Based on DNN and Attention Mechanism
    ZHOU Chao, CONG Xin, ZI Lingling, XIAO Guping
    2024, 0(06):  1-7.  doi:10.3969/j.issn.1006-2475.2024.06.001
    Abstract: To overcome the weakness of factorization machines in extracting high-order combination features and to learn more useful feature information, this paper uses a factorization machine to extract cross features and learns key feature information from low- and high-order combination features by combining an attention network, a deep neural network, and a multi-head self-attention mechanism. The outputs are fused with weights determined by the importance of the combination features of different orders, and the advertisement click-through rate is estimated from the fused result. Experiments were carried out mainly on the Criteo advertising dataset, with a parallel experiment on the MovieLens dataset, to verify the effectiveness of the proposed model. Compared with the benchmark model, the AUC increased by 2.32 percentage points and 0.4 percentage points on the two datasets, respectively.
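    A minimal sketch of the attention-over-pairwise-interactions idea the abstract describes (in the spirit of attentional factorization machines); the embedding size, attention dimension, and all names are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
num_fields, k, attn_dim = 4, 8, 16          # assumed sizes, for illustration

E = rng.normal(size=(num_fields, k))        # per-field feature embeddings
W = rng.normal(size=(k, attn_dim))          # attention projection
h = rng.normal(size=(attn_dim,))            # attention scoring vector
p = rng.normal(size=(k,))                   # output projection

# Element-wise products of all field pairs (the FM cross features)
pairs = np.array([E[i] * E[j]
                  for i in range(num_fields)
                  for j in range(i + 1, num_fields)])

# Attention network scores each pairwise interaction ...
scores = np.maximum(pairs @ W, 0.0) @ h      # small ReLU scorer
alpha = np.exp(scores - scores.max())
alpha /= alpha.sum()                         # softmax over pairs

# ... and the weighted sum of interactions feeds the CTR logit
logit = (alpha[:, None] * pairs).sum(axis=0) @ p
ctr = 1.0 / (1.0 + np.exp(-logit))
print(f"estimated CTR: {ctr:.3f}")
```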

    Automatic Scoring Method for Composition Based on Semantic Feature Fusion
    YUAN Hang, YANG Yong, REN Ge, Palidan Turson
    2024, 0(06):  8-13.  doi:10.3969/j.issn.1006-2475.2024.06.002

    Abstract: Automatic essay scoring is a natural language processing technology based on machine learning. End-to-end deep learning models are now widely used in this field, but they have difficulty capturing the correlations between different features, so an automatic composition scoring method based on semantic feature fusion (TSEF) is proposed. The method is divided into two stages: feature extraction and feature fusion. In the feature extraction stage, the BERT model is used to pre-train on the input text, and a multi-head attention mechanism is used for self-training on the input text to compensate for the shortcomings of pre-training. In the feature fusion stage, cross-fusion is used to combine the extracted features and obtain a better-performing model. In the experiments, TSEF was compared with many strong baselines, and the results demonstrated the effectiveness and robustness of the method.
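    A minimal sketch of the cross-fusion stage described above, with one tensor standing in for pre-trained (BERT) features and one for self-trained multi-head-attention features; the shapes, head count, and pooled scoring head are illustrative assumptions:

```python
import torch
import torch.nn as nn

# Assumed shapes: batch of 2 essays, 16 tokens, 128-dim features
bert_feats = torch.randn(2, 16, 128)      # stand-in for pre-trained features
self_feats = torch.randn(2, 16, 128)      # stand-in for self-trained features

cross_attn = nn.MultiheadAttention(embed_dim=128, num_heads=4, batch_first=True)

# Cross fusion: each feature stream attends to the other, then concatenate
b2s, _ = cross_attn(bert_feats, self_feats, self_feats)
s2b, _ = cross_attn(self_feats, bert_feats, bert_feats)
fused = torch.cat([b2s, s2b], dim=-1)     # (2, 16, 256) fused representation

score = nn.Linear(256, 1)(fused.mean(dim=1))   # pooled -> essay score head
print(score.shape)                              # torch.Size([2, 1])
```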

    Semi-supervised Image Generation Model Based on StyleGAN
    WANG Zhiqiang, ZHENG Shuang
    2024, 0(06):  14-18.  doi:10.3969/j.issn.1006-2475.2024.06.003
    Abstract: This paper introduces SG-GAN, a semi-supervised StyleGAN model that overcomes limitations of the traditional StyleGAN. The quality of images generated by StyleGAN depends heavily on the quality of the training dataset; when the training images are of low quality, StyleGAN often fails to generate high-quality images. To address this issue, SG-GAN generates support vector machine (SVM) training samples based on the one-to-one correspondence between latent vectors w and images in StyleGAN, and trains an SVM on them. The SVM and the StyleGAN mapping network are then used to screen the vectors w before each image is generated, improving the quality of the resulting images. For batch image generation, gene vectors are produced by a gene vector generator and combined randomly, while all permutations of style vectors are enumerated with a dynamic cycle backtracking algorithm; individuals are generated from the permutation results and screened for excellence with an evaluation function over multiple iterations. Experiments on open datasets, compared with other advanced methods, demonstrate that SG-GAN improves significantly on StyleGAN's accuracy: the model achieves an FID of 2.74, an accuracy of 74.2%, and a recall of 51.2% on the LSUN cat face dataset. The model also achieves an accuracy of over 70% on the Cat Dataset, CIFAR-100, and ImageNet, verifying its good generalization ability.
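    A minimal sketch of the screening step described above: an SVM trained on labeled latent vectors w keeps only vectors predicted to yield good images. The classifier choice (scikit-learn's SVC), the data shapes, and the toy quality labels are assumptions for illustration:

```python
import numpy as np
from sklearn.svm import SVC

rng = np.random.default_rng(0)
dim_w = 512                                   # StyleGAN's w dimensionality

# Assumed training set: latent vectors w labeled 1 (good image) / 0 (bad)
w_train = rng.normal(size=(200, dim_w))
labels = (w_train[:, 0] > 0).astype(int)      # toy labels for the sketch

clf = SVC(kernel="rbf").fit(w_train, labels)

# Screening: sample candidate w vectors and keep only predicted-good ones
candidates = rng.normal(size=(50, dim_w))
kept = candidates[clf.predict(candidates) == 1]
print(f"kept {len(kept)} of {len(candidates)} candidate w vectors")
```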
    Data Filtering Strategies for Tibetan-Chinese Neural Machine Translation
    Renqingzhuoma, Yongcuo, TANG Chaochao
    2024, 0(06):  19-24.  doi:10.3969/j.issn.1006-2475.2024.06.004
    Abstract: Traditional data augmentation methods introduce syntactic and semantic losses in Tibetan-Chinese machine translation. To address this issue, this paper proposes a pseudo-data filtering method that combines sentence perplexity with semantic similarity on top of traditional data augmentation. The strategy effectively tackles challenges such as the inadequate quality and scarcity of parallel data, particularly in low-resource settings. The results show that the pseudo-data filtering approach significantly improves both Tibetan-Chinese and English-Chinese bidirectional translation tasks. The proposed method mitigates the grammatical and semantic defects of the translation model, thereby enhancing the performance of the translation system and the generalization ability of the model, and verifying the effectiveness of the proposed approach.
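    A minimal sketch of the kind of two-criterion filter described above: pseudo-parallel pairs are kept only when the target side scores below a perplexity threshold and the pair exceeds a similarity threshold. The toy unigram language model, bag-of-words similarity, and both thresholds are stand-in assumptions; the paper's actual scoring models are not specified here:

```python
import math
from collections import Counter

corpus = ["the cat sat on the mat", "the dog sat on the rug"]
counts = Counter(w for s in corpus for w in s.split())
total = sum(counts.values())

def perplexity(sentence: str) -> float:
    # Toy unigram LM with add-one smoothing, standing in for a real LM
    words = sentence.split()
    logp = sum(math.log((counts[w] + 1) / (total + len(counts) + 1))
               for w in words)
    return math.exp(-logp / max(len(words), 1))

def similarity(src: str, tgt: str) -> float:
    # Bag-of-words cosine, standing in for a cross-lingual embedding model
    a, b = Counter(src.split()), Counter(tgt.split())
    dot = sum(a[w] * b[w] for w in a)
    na = math.sqrt(sum(v * v for v in a.values()))
    nb = math.sqrt(sum(v * v for v in b.values()))
    return dot / (na * nb) if na and nb else 0.0

pairs = [("the cat sat", "the cat sat on the mat"),
         ("zebra quantum", "flux capacitor hum")]
kept = [(s, t) for s, t in pairs
        if perplexity(t) < 50.0 and similarity(s, t) > 0.3]  # assumed thresholds
print(kept)
```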
    Multi-objective Cold Chain Transportation Optimization Based on LNS-NSGA2
    WANG Ning, LI Ying, LIU Feng
    2024, 0(06):  25-32.  doi:10.3969/j.issn.1006-2475.2024.06.005

    Abstract: Aiming at the high distribution cost and low effective vehicle utilization in cold chain logistics distribution systems, a multi-vehicle cold chain logistics route optimization model is constructed that minimizes transportation cost and maximizes user satisfaction. The impact of the delivery time window and the freshness of fresh goods on user satisfaction is considered, so that no extra cost is added for fresh goods delivered outside the time window. Based on the elitist non-dominated sorting genetic algorithm (NSGA2), a clustering-based population initialization method is designed, along with an ordered crossover method suited to the path encoding. A repair strategy is designed to modify infeasible solutions caused by the constraints and to guide the search along the constraint boundaries. The idea of large neighborhood search (LNS) is incorporated to guide individuals to search within their neighborhoods, strengthening local search and enriching population diversity. Simulation results show that the Pareto front obtained by the algorithm is clearly superior to that of the traditional NSGA2 algorithm on the multi-objective multi-vehicle routing optimization problem.
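    A minimal sketch of an ordered crossover (OX) of the kind the abstract describes for path-encoded individuals: a slice of one parent's route is kept in place and the remaining customers are filled in the order they appear in the other parent. The route length and cut points are illustrative assumptions:

```python
import random

def ordered_crossover(p1: list, p2: list, rng: random.Random) -> list:
    """OX: preserve a slice of p1, fill the rest in p2's visiting order."""
    n = len(p1)
    a, b = sorted(rng.sample(range(n), 2))
    child = [None] * n
    child[a:b + 1] = p1[a:b + 1]                  # inherited slice
    kept = set(child[a:b + 1])
    fill = [c for c in p2 if c not in kept]       # remaining customers
    pos = [i for i in range(n) if child[i] is None]
    for i, c in zip(pos, fill):
        child[i] = c
    return child

rng = random.Random(42)
route1 = [1, 2, 3, 4, 5, 6, 7, 8]                 # customer visiting orders
route2 = [8, 6, 4, 2, 7, 5, 3, 1]
print(ordered_crossover(route1, route2, rng))
```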
    Two-level Priority Scheduling Algorithm for μC/OS-II
    LI Qiming, YANG Xia, FANG Wenyu, SUN Haiyong
    2024, 0(06):  33-37.  doi:10.3969/j.issn.1006-2475.2024.06.006
    Abstract: μC/OS-II uses a preemptive priority scheduling algorithm that assigns different priorities to tasks according to their importance to ensure the real-time performance of the system. However, μC/OS-II does not allow multiple tasks to share the same priority, which not only limits the number and flexibility of concurrent tasks, but also increases system complexity in some cases and may even pose security risks to system operation. This paper extends μC/OS-II with a two-level priority scheduling mechanism by improving its priority bitmap structure and scheduling algorithm. The improved system allows users to assign the same priority to multiple tasks; tasks at the same priority are scheduled according to a second-level priority, and the second-level scheduling policy can be chosen flexibly according to actual needs. Experiments show that the algorithm effectively improves the concurrency and resource utilization of μC/OS-II while maintaining low system overhead and response time.
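    A minimal sketch of the two-level idea (in Python rather than the kernel's C, for readability): a μC/OS-II-style ready bitmap selects the highest first-level priority, and a per-priority queue ordered by a second-level priority selects among tasks that share it. The structures and names mirror the concept, not the paper's implementation:

```python
import heapq

class TwoLevelScheduler:
    def __init__(self, num_prios: int = 64):
        self.ready_bits = 0                           # bit i set => priority i ready
        self.queues = [[] for _ in range(num_prios)]  # heap per first-level prio

    def make_ready(self, task: str, prio: int, sub_prio: int) -> None:
        heapq.heappush(self.queues[prio], (sub_prio, task))
        self.ready_bits |= 1 << prio

    def pick_next(self) -> str:
        if not self.ready_bits:
            raise RuntimeError("no task ready")
        # Lowest set bit = numerically smallest = highest priority (as in μC/OS-II)
        prio = (self.ready_bits & -self.ready_bits).bit_length() - 1
        sub_prio, task = heapq.heappop(self.queues[prio])
        if not self.queues[prio]:
            self.ready_bits &= ~(1 << prio)           # clear bit when queue empties
        return task

sched = TwoLevelScheduler()
sched.make_ready("sensor", prio=3, sub_prio=1)
sched.make_ready("logger", prio=3, sub_prio=2)        # same priority, lower urgency
sched.make_ready("watchdog", prio=1, sub_prio=0)
print([sched.pick_next() for _ in range(3)])          # watchdog, sensor, logger
```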
    Archimedes Optimization Algorithm Based on LHS and Sine-cosine Search
    ZHAN Kaijie, CAI Maoguo, HONG Guangjie, OU Jifa
    2024, 0(06):  38-42.  doi:10.3969/j.issn.1006-2475.2024.06.007
    Abstract: Aiming at the weak global exploration and local exploitation ability, low optimization accuracy, and tendency to fall into local optima of the Archimedes optimization algorithm (AOA), an Archimedes optimization algorithm based on Latin hypercube sampling and a sine-cosine search operator (LSAOA) is proposed. Firstly, Latin hypercube sampling is used to initialize the population and improve its balance and diversity. Secondly, the switching mode between global and local search is changed to improve the convergence speed and accuracy of the algorithm. Finally, the sine-cosine search operator is introduced to improve the local search mode and strengthen the algorithm's local exploitation ability. Simulation experiments compare LSAOA with other improved AOA variants and other metaheuristic algorithms on international benchmark functions. The results show that LSAOA has better overall performance in both solution accuracy and convergence speed.
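    A minimal sketch of Latin hypercube initialization as described above: each dimension is divided into as many strata as there are individuals, one sample is drawn per stratum, and the strata are shuffled independently per dimension. The population size and bounds are illustrative assumptions:

```python
import numpy as np

def lhs_init(pop_size: int, dim: int, lo: float, hi: float,
             rng: np.random.Generator) -> np.ndarray:
    """Latin hypercube sample: one point per stratum in every dimension."""
    u = rng.uniform(size=(pop_size, dim))              # jitter inside strata
    strata = np.arange(pop_size)[:, None] + u          # stratified in [0, pop)
    for d in range(dim):                               # independent shuffle
        rng.shuffle(strata[:, d])                      # per dimension
    return lo + (hi - lo) * strata / pop_size

rng = np.random.default_rng(1)
pop = lhs_init(pop_size=10, dim=3, lo=-100.0, hi=100.0, rng=rng)
# Every stratum of width (hi-lo)/pop_size contains exactly one point per dim
print(np.sort(pop[:, 0]))
```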
    A Video Stabilization Method Based on Improved SIFT
    LI Xin, JIAO Linan, LIU Youquan, MA Caisha
    2024, 0(06):  43-50.  doi:10.3969/j.issn.1006-2475.2024.06.008

    Abstract: This paper proposes a video stabilization method based on an improved SIFT to improve computational efficiency while maintaining a good stabilization effect. Firstly, SIFT is improved into BO-SIFT (Binarized Octagonal SIFT): the algorithm introduces concentric octagonal ring feature descriptors, reduces the dimensionality of the feature vectors and binarizes them, and then uses the Hamming distance for feature point matching, which effectively reduces description and matching time. Secondly, the BO-SIFT algorithm is applied to video stabilization: the feature points of video frames are extracted and matched, and the motion offsets between frames are calculated to achieve motion estimation. The estimated motion offsets are then smoothed with a Kalman filter, and the video frames are inversely compensated through an affine transformation to obtain a stabilized image sequence. The experimental results show that BO-SIFT reduces stabilization time by 56.404% compared with the original SIFT, and the video it stabilizes has a higher average peak signal-to-noise ratio than existing strong algorithms. The method is also tested on different videos, demonstrating its reliability and superiority.
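    A minimal sketch of smoothing per-frame motion offsets with a one-dimensional Kalman filter, the role the filter plays in the pipeline above; the noise variances and the synthetic jittery trajectory are illustrative assumptions:

```python
import numpy as np

def kalman_smooth(z: np.ndarray, q: float = 1e-3, r: float = 1e-1) -> np.ndarray:
    """1-D Kalman filter: state = intended camera offset, z = measured offsets."""
    x, p = z[0], 1.0
    out = np.empty_like(z)
    for t, zt in enumerate(z):
        p = p + q                     # predict (random-walk motion model)
        k = p / (p + r)               # Kalman gain
        x = x + k * (zt - x)          # correct with the measured offset
        p = (1.0 - k) * p
        out[t] = x
    return out

rng = np.random.default_rng(0)
true_path = np.linspace(0.0, 5.0, 60)             # intended smooth pan
measured = true_path + rng.normal(scale=0.8, size=60)   # hand-shake jitter
smoothed = kalman_smooth(measured)
compensation = smoothed - measured                # shift to apply per frame
print(f"residual std: raw {np.std(measured - true_path):.2f}, "
      f"smoothed {np.std(smoothed - true_path):.2f}")
```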
    3-D Tomography Method Based on Ground-based Multi-channel Ice-penetrating Radar
    XU Yuanhong, ZHAO Bo, LIU Xiaojun
    2024, 0(06):  51-58.  doi:10.3969/j.issn.1006-2475.2024.06.009
    Abstract: Ice sheet models are crucial for future global climate prediction, and accurate topographic conditions are important for developing more realistic and accurate ice sheet models. Ground-based multi-channel ice-penetrating radar, with its advantage of high-precision detection and positioning, is suitable for obtaining the basic information required for developing ice sheet models. The traditional SAR processing algorithm based on frequency-wavenumber (f-k) migration can only measure two-dimensional along-track tomography; to obtain three-dimensional tomography, geographic interpolation is required, and the accuracy of this interpolation affects the accuracy of the final three-dimensional topographic map, especially in regions with high ice flow velocities. This paper thoroughly examines these methods and proposes a three-dimensional tomography method in conjunction with the processing of ground-based multi-channel ice-penetrating radar data from Greenland. The tomographic method allows direct measurement of a digital elevation map (DEM) and offers other benefits, such as detecting the ice sheet with high spatial resolution and small subglacial terrain error. Results from simulation and from processing of collected data demonstrate the efficacy of this three-dimensional tomographic method.
    Algorithm for Layered Bipartite Graph Maximum Matching Problem
    ZHU Lingheng, GU Danpeng, TANG Songqiang, CHEN Xiaoyong
    2024, 0(06):  59-63.  doi:10.3969/j.issn.1006-2475.2024.06.010
    Abstract: This paper proposes a new problem model for bipartite matching, in which the entities to be matched contain sub-entities that must also be matched; that is, the entities have parent-child relations. The model applies to many situations, such as database schema matching and team matching. This paper gives a polynomial-time algorithm whose idea is to decompose the problem into a combination of the bipartite graph maximum matching problem and the weighted maximum matching problem, both classic problems with mature algorithms (the Hungarian algorithm and the Kuhn-Munkres algorithm). The combination follows a greedy strategy: the Hungarian algorithm is applied among sub-entities, the resulting matching sizes are taken as weights, and the Kuhn-Munkres algorithm is then applied among the super-entities to obtain the final result. The paper also proves the correctness of this algorithm and analyzes its performance through experiments.
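    A minimal sketch of the two-level idea using scipy's assignment solver for both stages: the inner stage computes, for every super-entity pair, the maximum matching size between their sub-entities under a 0/1 compatibility relation, and the outer stage runs a maximum-weight assignment on those sizes (the Kuhn-Munkres role). The compatibility data is an illustrative assumption:

```python
import numpy as np
from scipy.optimize import linear_sum_assignment

# compat[a][b][i][j] == 1 if sub-entity i of left super-entity a is
# compatible with sub-entity j of right super-entity b (toy data)
rng = np.random.default_rng(3)
n_left, n_right, subs = 3, 3, 4
compat = rng.integers(0, 2, size=(n_left, n_right, subs, subs))

def max_matching_size(c01: np.ndarray) -> int:
    # Maximum bipartite matching via max-weight assignment on a 0/1 matrix
    r, c = linear_sum_assignment(c01, maximize=True)
    return int(c01[r, c].sum())

# Inner stage: matching size between every pair of super-entities
weights = np.array([[max_matching_size(compat[a, b])
                     for b in range(n_right)] for a in range(n_left)])

# Outer stage: maximum-weight matching of super-entities
rows, cols = linear_sum_assignment(weights, maximize=True)
for a, b in zip(rows, cols):
    print(f"left {a} <-> right {b} (sub-entity matches: {weights[a, b]})")
```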
    Joint Optimization of Picking Operation Based on Nested Ant Colony Algorithm
    LI Yufei, YAN Li, ZENG Yanping, LIU Yunheng
    2024, 0(06):  64-69.  doi:10.3969/j.issn.1006-2475.2024.06.011

    Abstract: Aiming at the low efficiency of the conventional step-by-step strategy, in which order batching and picking-path planning are optimized separately in the picking operations of a logistics warehousing center, a joint picking strategy based on nested ant colony batching and path optimization is proposed. Firstly, a joint optimization model of order batching and picking routes is established with the goal of minimizing the total path length. Then, considering the complexity of this double optimization, a nested ant colony algorithm is designed to solve the model: with the order batching model as the benchmark, the batch results are continuously optimized to obtain the optimal batch collection order, after which the nested ant colony algorithm is applied to optimize the picking path. To verify the effectiveness of the algorithm on random orders, 43 orders containing goods from both the shelf area and the ground-pile area, placed between 17:00 and 18:00 on a given day, were sampled for simulation experiments. Compared with the traditional step-by-step strategy, the picking path obtained from the joint optimization model based on the nested ant colony algorithm is shorter and the picking time is smaller; after joint optimization, the total picking distance is shortened by 170 m. The joint optimization model of the picking operation based on the nested ant colony algorithm, together with its solution algorithm, can effectively address the joint optimization of order batching and picking paths, providing a basis for optimizing the picking system of a distribution center.
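    A minimal sketch of the inner path-optimization role: one ant constructs a picking tour by roulette selection over pheromone^alpha * (1/distance)^beta, the core step an ant colony algorithm nests inside the batching loop. The coordinates, parameters, and single-ant simplification are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(7)
pts = rng.uniform(0, 100, size=(8, 2))            # pick locations in the layout
dist = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1) + np.eye(8)
tau = np.ones((8, 8))                             # pheromone matrix
alpha, beta = 1.0, 2.0

def construct_tour(start: int = 0) -> list:
    tour, unvisited = [start], set(range(8)) - {start}
    while unvisited:
        i = tour[-1]
        cand = np.array(sorted(unvisited))
        w = tau[i, cand] ** alpha * (1.0 / dist[i, cand]) ** beta
        nxt = int(rng.choice(cand, p=w / w.sum()))  # roulette-wheel selection
        tour.append(nxt)
        unvisited.remove(nxt)
    return tour

tour = construct_tour()
length = sum(dist[a, b] for a, b in zip(tour, tour[1:]))
print(tour, f"length = {length:.1f}")
```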

    Infrared Image Segmentation of Electrical Equipment Based on Improved Slime Mould Algorithm and Tsallis Entropy
    ZHAO Wenbo, XIANG Dong, WANG Jiubin, DENG Yuehui, ZHANG Wei, KANG Qian, LI Yujie
    2024, 0(06):  70-75.  doi:10.3969/j.issn.1006-2475.2024.06.012

    Abstract: Conventional methods for infrared image segmentation of electrical equipment suffer from poor segmentation accuracy and low computational efficiency when determining the optimal threshold. Therefore, a multi-threshold infrared image segmentation method based on an improved slime mould algorithm optimizing Tsallis entropy is proposed. The optimal segmentation threshold is determined using the heuristic search mechanism of the slime mould algorithm, effectively reducing the time complexity of the method. Henon chaotic mapping is introduced into the traditional slime mould algorithm to improve initial population diversity, and a dynamic lens-imaging opposition-based learning mechanism is designed to improve search accuracy. Tsallis entropy is used to evaluate the quality of slime mould individuals, and the improved algorithm iteratively searches for the optimal segmentation threshold. Experiments on a common infrared image dataset of electrical equipment show that, compared with the comparison models, the proposed model achieves a lower misclassification error and a higher peak signal-to-noise ratio and structural similarity, demonstrating performance advantages in segmenting infrared images with non-uniform backgrounds and high noise.
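    A minimal sketch of the Tsallis-entropy objective such a threshold search maximizes: for a candidate threshold t, the entropies of the background and foreground histograms are combined by the pseudo-additive rule. Here it is evaluated by brute force on a toy histogram; in the paper's setting the slime mould algorithm would search this objective instead. The entropic index q is an illustrative assumption:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy bimodal gray-level histogram standing in for an infrared image
pixels = np.concatenate([rng.normal(70, 10, 4000), rng.normal(170, 15, 6000)])
hist, _ = np.histogram(np.clip(pixels, 0, 255), bins=256, range=(0, 256))
p = hist / hist.sum()

def tsallis_objective(t: int, q: float = 0.8) -> float:
    pa, pb = p[:t].sum(), p[t:].sum()
    if pa == 0 or pb == 0:
        return -np.inf
    sa = (1.0 - ((p[:t] / pa) ** q).sum()) / (q - 1.0)   # background entropy
    sb = (1.0 - ((p[t:] / pb) ** q).sum()) / (q - 1.0)   # foreground entropy
    return sa + sb + (1.0 - q) * sa * sb                 # pseudo-additivity

best_t = max(range(1, 256), key=tsallis_objective)
print(f"optimal threshold: {best_t}")
```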
    Retinal Vessel Segmentation Based on Improved U-Net with Multi-feature Fusion
    FU Lingli, QIU Yu, ZHANG Xinchen
    2024, 0(06):  76-82.  doi:10.3969/j.issn.1006-2475.2024.06.013
    Abstract: Because of problems such as the uneven distribution of vessel structures, inconsistent vessel thickness, and the poor contrast of vessel boundaries, retinal image segmentation often performs poorly and cannot meet the needs of practical clinical assistance. To address the breakage of small vessels and the poor segmentation of low-contrast vessels, a CA module is integrated into the down-sampling process of U-Net. Additionally, to solve the insufficient feature fusion of the original model, the Res2NetBlock module is introduced. Finally, a cascaded dilated convolution module is added at the bottom of the model to enlarge the receptive field, thereby improving the network's spatial-scale information and contextual feature perception, so that the segmentation task achieves better performance. Experiments on the DRIVE, CHASEDB1, and self-made Dataset100 datasets show accuracy rates of 96.90%, 97.83%, and 94.24%, respectively, with AUCs of 98.84%, 98.98%, and 97.41%. Compared with U-Net and other mainstream methods, sensitivity and accuracy are improved, indicating that the proposed vessel segmentation method can capture complex features and has higher superiority.
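    A minimal sketch of a cascaded dilated-convolution bottleneck of the kind described: three 3x3 convolutions with dilation rates 1, 2, and 4 enlarge the receptive field without reducing resolution. The channel counts and rates are illustrative assumptions, not the paper's configuration:

```python
import torch
import torch.nn as nn

# Cascaded dilated convolutions: growing receptive field, constant resolution
cascade = nn.Sequential(
    nn.Conv2d(64, 64, kernel_size=3, padding=1, dilation=1), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, padding=2, dilation=2), nn.ReLU(),
    nn.Conv2d(64, 64, kernel_size=3, padding=4, dilation=4), nn.ReLU(),
)
bottom = torch.randn(1, 64, 16, 16)       # U-Net bottleneck feature map
print(cascade(bottom).shape)              # torch.Size([1, 64, 16, 16])
```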
    Pancreas Segmentation Model Based on Deformable Residual and Cascading Encoding
    ZHU Fen, HE Lifeng, SUN Shuang, ZHANG Mengying, YU Jiajia
    2024, 0(06):  83-88.  doi:10.3969/j.issn.1006-2475.2024.06.014
    Abstract: To address the large variations in pancreatic shape and position, noise interference, and small targets in pancreas segmentation with deep convolutional neural networks, a pancreas segmentation model, DC U-net, combining a deformable shrinkage residual block (DSRB) and a cascading encoding module (CEM) is proposed. The DSRB is built from two deformable convolutions, an attention mechanism, and a residual structure; it handles the large changes in pancreatic shape and position through deformable convolution and uses soft thresholding to reduce noise interference. The CEM fuses features and reuses the encoding features to reduce the feature gap between the encoding and decoding stages and to strengthen the learning of small-target features. Experimental results on the NIH public dataset show that the proposed DC U-net model achieves an average Dice similarity coefficient (DSC) of 87.26% and an average intersection over union (IoU) of 77.98%, with segmentation accuracy better than that of the comparison models.
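    A minimal sketch of the soft-thresholding step used for denoising inside shrinkage residual blocks: values whose magnitude falls below a threshold are zeroed and the rest shrink toward zero. In deep residual shrinkage designs the threshold is usually learned per channel by a small attention branch; the fixed tau here is an illustrative assumption:

```python
import numpy as np

def soft_threshold(x: np.ndarray, tau: float) -> np.ndarray:
    """Soft thresholding: shrink toward zero, zeroing |x| < tau (noise)."""
    return np.sign(x) * np.maximum(np.abs(x) - tau, 0.0)

feat = np.array([-2.3, -0.4, 0.1, 0.9, 3.0])    # toy feature-map activations
print(soft_threshold(feat, tau=0.5))            # [-1.8 -0.  0.  0.4  2.5]
```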
    Rail Surface State Identification Based on Improved Metric Learning under Small Samples
    YU Huijun, PENG Cibing, LIU Jianhua, ZHANG Jinsheng, LIU Lili
    2024, 0(06):  89-94.  doi:10.3969/j.issn.1006-2475.2024.06.015
    Abstract: To address the insufficient extraction of key feature information and the easy loss of discriminative information in rail surface state identification under small-sample conditions, a rail surface state identification method based on improved metric learning is proposed. The method incorporates a pyramid split attention mechanism in the feature extraction network to achieve multi-scale extraction of feature-map spatial information and the interaction of cross-dimensional channel attention and spatial attention features, addressing the insufficient extraction of key features caused by the small number of rail state samples. Additionally, a deep local splicing operator is employed to splice the local features of the query set and of each support set feature map in pairs, replacing the global feature splicing used in traditional metric learning; this helps filter out interference such as background noise and better retains significant discriminative feature information. Experimental results show that the proposed method can effectively identify the rail surface state, with recognition accuracy, precision, recall, and F1 value reaching 97.96%, 98.61%, 98.07%, and 98.34%, respectively. Compared with DN4, a well-performing few-shot learning method, these indicators increase by 5.75, 5.83, 5.95, and 5.89 percentage points, respectively.
    Edge Computing Offloading Method for Intelligent Elderly Care
    LI Shuang, YE Ning, XU Kang, WANG Su, WANG Ruchuan
    2024, 0(06):  95-102.  doi:10.3969/j.issn.1006-2475.2024.06.016
    Abstract: To solve the optimization of average delay and energy consumption caused by the uncertainty in the dynamic arrival of elderly health data tasks and in channel conditions during task offloading in an edge computing environment, an online task offloading optimization algorithm based on Lyapunov optimization and deep reinforcement learning is proposed. In a multi-user mobile edge computing network where user task data arrives randomly, the Lyapunov optimization method is applied to constrain and model the queue lengths during task offloading. The model information is then used by a deep reinforcement learning method to convert the input environment parameters into the learning of an optimal binary offloading action, and the offloading action is evaluated accurately. Simulation results show that the proposed algorithm is superior to several deep reinforcement learning algorithms, and the energy consumption of task offloading is reduced effectively while the queue length is reasonably constrained.
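    A minimal sketch of the Lyapunov queueing idea underlying such methods: the task queue evolves as Q(t+1) = max(Q(t) + A(t) - b(t), 0), and at each slot a drift-plus-penalty score, V * energy - Q * service, is minimized over the binary offload decision. The arrival process, energy and rate numbers, and the weight V are illustrative assumptions, not the paper's model:

```python
import numpy as np

rng = np.random.default_rng(0)
V = 5.0                      # energy-vs-queue trade-off weight (assumed)
Q = 0.0                      # task queue backlog
T = 200

for t in range(T):
    A = rng.poisson(3.0)                     # random task arrivals this slot
    channel_rate = rng.uniform(1.0, 6.0)     # random channel condition
    # action -> (energy cost, service rate): 0 = compute locally, 1 = offload
    options = {0: (1.0, 2.0), 1: (0.4, channel_rate)}
    # Drift-plus-penalty rule: minimize V*energy - Q*service
    a = min(options, key=lambda k: V * options[k][0] - Q * options[k][1])
    energy, b = options[a]
    Q = max(Q + A - b, 0.0)                  # Lyapunov queue update

print(f"final queue backlog after {T} slots: {Q:.1f}")
```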
    An Electronic License Access Control Method Based on Aggregation and Association of Smart Contracts
    ZHU Ke, XUAN Jiaxing, XUE Wenhao
    2024, 0(06):  103-108.  doi:10.3969/j.issn.1006-2475.2024.06.017
    Abstract: With the in-depth promotion of "Internet plus government services", the application of electronic licenses has entered a period of rapid construction. In electronic certificate management, different businesses must be completed by specialized units, the identity of participants must be confirmed during certificate transmission, and certificates cannot be disclosed to the public. This paper proposes a blockchain-based trusted-identity electronic certificate management system and access control method. The method is based on fuzzy measures and a multi-attribute aggregation operator: user permissions are controlled by clustering and evaluating user attributes, and user permission verification certificates and scenario-based certificate encryption are established, implementing electronic license management in a blockchain environment with trustworthy user identities, tamper-proof licenses, and privacy-leakage prevention.
    Network Intrusion Detection Based on Improved XGBoost Model
    SU Kaixuan
    2024, 0(06):  109-114.  doi:10.3969/j.issn.1006-2475.2024.06.018
    Abstract: To enhance the accuracy and practicability of traditional network intrusion detection models, this paper proposes a network intrusion detection method based on an improved gradient boosting tree (XGBoost) model. Firstly, in the data pre-processing stage, the random forest algorithm is used to predict key features, the features with the highest importance weights are selected, and the feature set is constructed. Secondly, the prediction method of the XGBoost model is improved using the chi-square formula. Finally, a cost-sensitive function is introduced into the XGBoost optimization algorithm to improve the detection rate of small-sample data, and a grid method is used to reduce model complexity. Experimental simulation results show that, compared with other artificial intelligence algorithms, the proposed model reduces waiting time by more than 50% while achieving higher detection accuracy, and exhibits strong scalability and adaptability in noisy environments. Combined experiments with other models show that tree depth has the greatest impact on model performance.
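    A minimal sketch of a cost-sensitive objective of the kind described: a weighted logistic loss whose gradient up-weights the rare attack class, passed to xgboost through its custom-objective interface. The weight value and the synthetic data are illustrative assumptions, not the paper's configuration:

```python
import numpy as np
import xgboost as xgb

rng = np.random.default_rng(0)
X = rng.normal(size=(2000, 10))
y = (X[:, 0] + 0.5 * rng.normal(size=2000) > 1.6).astype(float)  # rare attacks

POS_WEIGHT = 10.0   # assumed cost of missing an attack vs. a false alarm

def cost_sensitive_logistic(preds, dtrain):
    """Weighted logistic loss: positive (attack) samples get POS_WEIGHT."""
    labels = dtrain.get_label()
    p = 1.0 / (1.0 + np.exp(-preds))
    w = np.where(labels == 1.0, POS_WEIGHT, 1.0)
    grad = w * (p - labels)
    hess = w * p * (1.0 - p)
    return grad, hess

dtrain = xgb.DMatrix(X, label=y)
booster = xgb.train({"max_depth": 4, "eta": 0.2}, dtrain,
                    num_boost_round=50, obj=cost_sensitive_logistic)
p = 1.0 / (1.0 + np.exp(-booster.predict(dtrain)))  # margins -> probabilities
print(f"detection rate on positives: {(p[y == 1] > 0.5).mean():.2f}")
```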
    Judicial Argumentation Understanding Method Based on Multiplet Loss
    ZHANG Ke, AI Zhongliang, LIU Zhonglin, GU Pingli, LIU Xuelin
    2024, 0(06):  115-120.  doi:10.3969/j.issn.1006-2475.2024.06.019
    Abstract: Judicial argument understanding is a practical application of argument mining in the judicial domain, aiming to mine interactive argument pairs from the arguments of the prosecution and the defense. Argument mining in this domain faces small training samples, long sentences, and strong domain specialization, and existing models for judicial argument understanding, mostly based on text classification, represent text semantics poorly. To improve the recognition accuracy of interactive argument pairs, a judicial argument understanding model based on multiplet loss is proposed. Built on the idea of text matching, it matches each prosecution argument against the defense arguments for semantic similarity and mines interactive argument pairs by optimizing their matching degree. To improve this matching degree, a multiplet matching loss function is proposed that further improves text semantic representation by reducing the semantic distance of interactive argument pairs and increasing that of non-interactive pairs, so that the semantic distance between arguments better reflects their interactivity; a pre-trained model for the judicial domain is used as the text semantic representation model. Tests on the CAIL2022 judicial argument understanding track data show that the accuracy of the proposed model improves by more than 2.04 percentage points over classification-based models, reaching 85.19%.
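    A minimal sketch of the kind of loss described: for an anchor argument, one interactive (positive) argument and several non-interactive (negative) arguments are pulled together and pushed apart with a margin, generalizing the triplet loss to multiple negatives. The embeddings, margin, and distance choice are illustrative assumptions:

```python
import numpy as np

def multiplet_loss(anchor, positive, negatives, margin: float = 0.5) -> float:
    """Hinge loss over one positive and many negatives (cosine distance)."""
    def cos_dist(a, b):
        return 1.0 - a @ b / (np.linalg.norm(a) * np.linalg.norm(b))
    d_pos = cos_dist(anchor, positive)
    # One hinge term per negative: the positive must be closer by `margin`
    return float(sum(max(0.0, margin + d_pos - cos_dist(anchor, n))
                     for n in negatives))

rng = np.random.default_rng(0)
anchor = rng.normal(size=64)                   # prosecution argument embedding
positive = anchor + 0.1 * rng.normal(size=64)  # its interactive defense argument
negatives = [rng.normal(size=64) for _ in range(4)]
print(f"loss: {multiplet_loss(anchor, positive, negatives):.3f}")
```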
    Design and Verification of UAV Cluster Communication Architecture Based on Central Node
    CHEN Rongxin, WEI Tonghao, LIU Jie, HU Zhenzhen
    2024, 0(06):  121-126.  doi:10.3969/j.issn.1006-2475.2024.06.020
    Abstract: The traditional unmanned aerial vehicle (UAV) cluster communication architecture is limited by the ground base station, resulting in weak anti-interference ability, and its inflexible communication mode also weakens its autonomous coordination. To cope with task execution in dynamic, uncertain environments and free the UAV cluster from the restrictions of the ground base station, a UAV cluster communication architecture based on central-node control is proposed. The architecture is a multi-level hierarchical structure in which the central node is responsible for cluster member scheduling and task release, avoiding the weak distributed decision-making ability of clusters under the traditional GSC mode. The reliability of the communication architecture is verified using the graphical model checking tool iSpin. The verification results show that the architecture's dynamically adjusted layout improves communication performance, reduces dependence on the ground base station, and improves the anti-interference ability and collaborative execution efficiency of UAV cluster cooperative operations as a whole.