Supervised by: Jiangxi Provincial Department of Science and Technology
Sponsored by: Jiangxi Computer Society; Jiangxi Computing Center
Edited and published by: Editorial Office of Computer and Modernization (《计算机与现代化》)
Table of Contents
28 March 2024, Volume 0 Issue 03
Debris Flow Infrasound Signal Recognition Approach Based on Improved AlexNet
YUAN Li1, 2, LIU Dun-long1, 2, SANG Xue-jia1, 2, ZHANG Shao-jie3, CHEN Qiao4
2024, 0(03): 1-6. doi: 10.3969/j.issn.1006-2475.2024.03.001
Abstract: Environmental interference noise is the main challenge in on-site monitoring of debris flow infrasound and greatly limits the accuracy of debris flow infrasound signal identification. Given the performance of deep learning in acoustic signal recognition, this paper proposes a debris flow infrasound signal recognition method based on an improved AlexNet network, which effectively improves recognition accuracy and convergence speed. Firstly, the original infrasound dataset is preprocessed with data augmentation, filtering, and noise reduction, and the wavelet transform is used to generate time-frequency spectrogram images. Then, with the spectrogram images as input, an improved AlexNet model is built by reducing the convolution kernel size, introducing batch normalization layers, and adopting the Adam optimization algorithm. Experimental results show that the improved AlexNet model achieves a recognition accuracy of 91.48%, realizes intelligent identification of debris flow infrasound signals, and provides efficient and reliable technical support for debris flow infrasound monitoring and early warning.
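As a rough illustration of the architecture changes described above, the PyTorch sketch below builds an AlexNet-style classifier with smaller convolution kernels, batch normalization, and the Adam optimizer; the layer sizes, the two-class output, and the 224x224 spectrogram input are assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class ImprovedAlexNet(nn.Module):
    def __init__(self, num_classes: int = 2):
        super().__init__()
        self.features = nn.Sequential(
            # smaller first kernel than AlexNet's 11x11, plus batch normalization
            nn.Conv2d(3, 64, kernel_size=5, stride=2, padding=2),
            nn.BatchNorm2d(64), nn.ReLU(inplace=True), nn.MaxPool2d(3, 2),
            nn.Conv2d(64, 192, kernel_size=3, padding=1),
            nn.BatchNorm2d(192), nn.ReLU(inplace=True), nn.MaxPool2d(3, 2),
            nn.Conv2d(192, 256, kernel_size=3, padding=1),
            nn.BatchNorm2d(256), nn.ReLU(inplace=True), nn.MaxPool2d(3, 2),
        )
        self.classifier = nn.Sequential(nn.AdaptiveAvgPool2d(1), nn.Flatten(),
                                        nn.Linear(256, num_classes))

    def forward(self, x):
        return self.classifier(self.features(x))

model = ImprovedAlexNet()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)   # Adam, as in the abstract
scores = model(torch.randn(4, 3, 224, 224))                 # a batch of spectrogram images
print(scores.shape)                                         # torch.Size([4, 2])
```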
Path Planning of Parking Robot Based on Improved D3QN Algorithm
WANG Jian-ming1, WANG Xin1, LI Yang-hui2, WANG Dian-long1
2024, 0(03): 7-14. doi: 10.3969/j.issn.1006-2475.2024.03.002
Abstract: The parking robot has emerged as a solution to the urban parking problem, and its path planning is an important research direction. To overcome the limitations of the A* algorithm, this article introduces deep reinforcement learning and improves the D3QN algorithm. By replacing the convolutional network with a residual network and introducing an attention mechanism, the SE-RD3QN algorithm is proposed to alleviate network degradation, speed up convergence, and enhance model accuracy. During training, the reward and punishment mechanism is improved to achieve rapid convergence to the optimal solution. A comparison with the D3QN algorithm and the RD3QN algorithm (which only adds residual layers) shows that SE-RD3QN converges faster during model training. Compared with the currently used A*+TEB algorithm, SE-RD3QN obtains shorter path lengths and planning times. Finally, the effectiveness of the algorithm is further verified through physical experiments on a model car, providing a new solution for parking path planning.
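To make the network side of SE-RD3QN concrete, here is a minimal PyTorch sketch of a dueling Q-head fed by a residual block with squeeze-and-excitation channel attention; the grid-style state input, channel counts, and action space are illustrative assumptions rather than the paper's settings.

```python
import torch
import torch.nn as nn

class SEResBlock(nn.Module):
    def __init__(self, ch: int, reduction: int = 8):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch), nn.ReLU(inplace=True),
            nn.Conv2d(ch, ch, 3, padding=1), nn.BatchNorm2d(ch),
        )
        self.se = nn.Sequential(                       # squeeze-and-excitation channel gate
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(ch, ch // reduction), nn.ReLU(inplace=True),
            nn.Linear(ch // reduction, ch), nn.Sigmoid(),
        )

    def forward(self, x):
        y = self.body(x)
        w = self.se(y).unsqueeze(-1).unsqueeze(-1)     # per-channel weights
        return torch.relu(x + y * w)                   # residual connection

class DuelingQNet(nn.Module):
    def __init__(self, n_actions: int = 8):
        super().__init__()
        self.stem = nn.Sequential(nn.Conv2d(1, 32, 3, padding=1), nn.ReLU(inplace=True),
                                  SEResBlock(32), nn.AdaptiveAvgPool2d(4), nn.Flatten())
        self.value = nn.Linear(32 * 16, 1)             # state value V(s)
        self.adv = nn.Linear(32 * 16, n_actions)       # advantages A(s, a)

    def forward(self, x):
        h = self.stem(x)
        a = self.adv(h)
        return self.value(h) + a - a.mean(dim=1, keepdim=True)  # dueling aggregation

q_values = DuelingQNet()(torch.randn(2, 1, 20, 20))    # Q-values for a toy 20x20 occupancy grid
print(q_values.shape)                                  # torch.Size([2, 8])
```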
Temporal Knowledge Graph Question Answering Method Based on Semantic and Structural Enhancement
HUANG Zheng-lin, DONG Bao-liang
2024, 0(03): 15-23. doi: 10.3969/j.issn.1006-2475.2024.03.003
Abstract: Knowledge graphs, as one of the popular research topics in natural language processing, have consistently received widespread attention from the academic community. In practice, the question answering process often involves temporal information, so in recent years the application of temporal knowledge graphs to question answering has gained popularity among scholars. Traditional methods for temporal knowledge graph question answering mainly encode the question information to drive the inference process, but they cannot handle the more complex entities and temporal relationships contained in the questions. To address this, a semantic and structural enhancement method for temporal knowledge graph question answering is proposed, which considers both semantic and structural information during inference to improve the probability of producing correct answers. Firstly, implicit temporal expressions in the questions are parsed and rewritten as direct representations based on the information in the temporal knowledge graph. In addition, the temporal information in the knowledge graph is aggregated at different time granularities according to the question set. Secondly, the semantic information of the questions is represented and fused with entity and time information to enhance the learning of entity and time semantics. Subsequently, subgraphs are extracted around the extracted entities, and their structural information is captured using graph convolutional networks. Finally, the fused semantic and structural representations of the questions are concatenated, candidate answers are scored, and the entity with the highest score is selected as the answer. Comparative experiments on the MultiTQ dataset show that the proposed model outperforms other baseline models.
Chinese Named Entity Recognition with Fusion of Lexicon Information and Sentence Semantics
WANG Tan, CHEN Jin-guang, MA Li-li
2024, 0(03): 24-28. doi: 10.3969/j.issn.1006-2475.2024.03.004
Abstract: The performance of named entity recognition has improved significantly with the rapid advancement of deep learning. However, the strong results achieved by deep learning networks often rely on large amounts of labeled data, making it difficult to fully exploit deep information in small datasets. In this paper, we propose a Chinese named entity recognition model (LS-NER) that combines lexicon information and sentence semantics. Firstly, potential words matched against characters in the lexicon serve as prior lexical information for the model, addressing the Chinese word segmentation issue. Then, sentence embeddings containing semantic information, typically used for computing text similarity, are applied to the named entity recognition task, enabling the model to identify similar entities from analogous sentences. Finally, a feature fusion strategy is devised so that the model can effectively learn the semantic information provided by the sentence embeddings. Experimental results demonstrate that our approach achieves strong performance on the small datasets Resume and Weibo. The incorporation of sentence semantics helps the model learn deeper features without requiring additional external information, yielding F1 scores that are 0.15 and 2.26 percentage points higher, respectively, than those of the model without sentence information.
Keywords: named entity recognition; BERT; SoftLexicon; Sentence-BERT; CRF
Inshore Warship Detection Method Based on Multi-task Learning
LIU Xin-pin1, 2, 3, WANG Hong1, 3, ZHAO Liang-jin1, 3
2024, 0(03): 29-33. doi: 10.3969/j.issn.1006-2475.2024.03.005
Abstract: For the task of inshore warship detection in optical remote sensing images, this paper proposes a detection method based on multi-task learning to address the false alarms caused by similar features in complex scenes. By constructing a parallel dual-branch framework for the sea-land segmentation task and the warship detection task, the method converts the traditional serial processing pipeline into a parallel one. Secondly, a joint loss constraint is proposed for dual-branch optimization training, which improves the stability of model training. Finally, experiments are conducted on a dataset built from Google Earth remote sensing images. The dual-branch fusion model eliminates detections that fall within the land mask, realizing land false-alarm filtering. Compared with the single-task detection algorithm YOLOv5, the mAP of the proposed method increases by 4.4 percentage points and the false alarm rate decreases by 3.4 percentage points. The experimental results show that the proposed algorithm effectively suppresses false alarms on land.
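The joint loss and land-mask filtering described above can be sketched in PyTorch as follows; the weighting factor, the binary segmentation loss, and the box-centre test are assumptions made for illustration, not the paper's exact formulation.

```python
import torch
import torch.nn.functional as F

def joint_loss(det_loss: torch.Tensor, seg_logits: torch.Tensor,
               seg_target: torch.Tensor, alpha: float = 0.5) -> torch.Tensor:
    """Combine the detection-branch loss with the sea-land segmentation loss."""
    seg_loss = F.binary_cross_entropy_with_logits(seg_logits, seg_target)
    return det_loss + alpha * seg_loss

def filter_land_detections(boxes: torch.Tensor, land_mask: torch.Tensor) -> torch.Tensor:
    """Drop boxes whose centre falls on land (mask value 1 = land, 0 = sea)."""
    cx = ((boxes[:, 0] + boxes[:, 2]) / 2).long().clamp(0, land_mask.shape[1] - 1)
    cy = ((boxes[:, 1] + boxes[:, 3]) / 2).long().clamp(0, land_mask.shape[0] - 1)
    return boxes[land_mask[cy, cx] == 0]

# toy usage: one box at sea, one on land
mask = torch.zeros(100, 100)
mask[70:, 70:] = 1                                           # lower-right corner is land
boxes = torch.tensor([[10., 10., 30., 30.], [80., 80., 95., 95.]])
print(filter_land_detections(boxes, mask))                   # keeps only the sea detection
total = joint_loss(torch.tensor(1.3), torch.randn(1, 1, 100, 100),
                   (mask > 0).float().view(1, 1, 100, 100))  # detection + segmentation terms
```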
Multiple Objective Explainable Recommendation Based on Knowledge Graph
YANG Meng, YANG Jin, CHEN Bu-qian
2024, 0(03): 34-40. doi: 10.3969/j.issn.1006-2475.2024.03.006
Abstract: Most existing research on recommendation systems focuses on improving recommendation accuracy while neglecting explainability. In order to maximize user satisfaction with recommended items, a multi-objective explainable recommendation model based on a knowledge graph is proposed to jointly optimize the accuracy, novelty, diversity, and explainability of recommendations. Firstly, an explainable candidate list for each user is obtained from the knowledge graph, and its explainability is quantified by a unified method based on the paths between the target user's interacted items and the recommended items. Finally, the candidate list is optimized with a multi-objective optimization algorithm to obtain the final recommendation list. Experimental results on the Movielens and Epinions datasets show that the proposed model improves the explainability of recommendations without compromising accuracy, novelty, or diversity.
Intelligent Identification Method of Debris Flow Scene Based on Camera Video Surveillance
HU Mei-chen1, 2, LIU Dun-long1, 2, SANG Xue-jia1, 2, ZHANG Shao-jie3, CHEN Qiao4
2024, 0(03): 41-46. doi: 10.3969/j.issn.1006-2475.2024.03.007
Abstract: Camera video surveillance is widely used in debris flow disaster prevention and mitigation, but existing video detection technology has limited functionality and cannot automatically judge whether a debris flow event has occurred. To solve this problem, this paper improves a video classification method based on convolutional neural networks using a transfer learning strategy. Firstly, within the TSN model framework, the underlying network architecture is changed to ResNet-50, which is used for motion feature extraction and debris flow scene identification. Then, the model is pre-trained on the ImageNet and Kinetics-400 datasets to give it strong generalization ability. Finally, the model is trained and fine-tuned on a pre-processed geological disaster video dataset so that it can accurately identify debris flow events. The model is tested on a large number of motion scene videos, and the experimental results show that the identification accuracy for debris flow movement videos reaches 87.73%. The research results therefore enable video surveillance to play its full role in debris flow monitoring and early warning.
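A minimal sketch of the transfer-learning setup, assuming an ImageNet-pretrained ResNet-50 from torchvision and TSN-style averaging over sampled segments; the segment count, two-class head, and optimizer settings are placeholders rather than the paper's configuration.

```python
import torch
import torch.nn as nn
from torchvision import models

# ImageNet-pretrained backbone (torchvision >= 0.13 weights API), new 2-class head
backbone = models.resnet50(weights=models.ResNet50_Weights.IMAGENET1K_V1)
backbone.fc = nn.Linear(backbone.fc.in_features, 2)          # debris-flow event vs. other motion

def tsn_forward(frames: torch.Tensor) -> torch.Tensor:
    """frames: (num_segments, 3, 224, 224) sampled from one surveillance clip."""
    per_segment = backbone(frames)                           # one score vector per sampled frame
    return per_segment.mean(dim=0)                           # segment consensus by averaging

optimizer = torch.optim.SGD(backbone.parameters(), lr=1e-3, momentum=0.9)
clip = torch.randn(8, 3, 224, 224)                           # 8 segments from one preprocessed video
print(tsn_forward(clip).shape)                               # torch.Size([2])
```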
Grain Yield Prediction Model Based on Empirical Mode Decomposition and Extreme Learning Machine
YUAN Shi-yi
2024, 0(03): 47-53. doi: 10.3969/j.issn.1006-2475.2024.03.008
Abstract: Due to the strong non-stationarity and complexity of the time series in historical grain production data, traditional single Extreme Learning Machine (ELM) models suffer from low prediction accuracy and poor robustness. This paper uses the Whale Optimization Algorithm (WOA) to optimize the model's internal parameters and superimposes the predictions of the decomposed-component models to achieve more accurate grain yield forecasts. Firstly, the Empirical Mode Decomposition (EMD) method is introduced to extract intrinsic features from the raw data before the prediction model is established. Secondly, several stationary grain-mode components are obtained by decomposition, and a prediction model is established for each component. The experimental results show that the proposed EMD-ELM-WOA combined prediction model outperforms the single ELM neural network, the BP neural network, the SVM model, and the EMD-ELM model, with the smallest prediction error and the highest accuracy.
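For reference, a basic Extreme Learning Machine of the kind fitted to each decomposed component can be written in a few lines of NumPy: random hidden-layer weights with output weights solved by a pseudo-inverse. The EMD step (e.g. via the PyEMD package) and the WOA parameter search are omitted, and the hidden-layer size and toy series below are assumptions.

```python
import numpy as np

class ELM:
    def __init__(self, n_hidden: int = 20, seed: int = 0):
        self.n_hidden = n_hidden
        self.rng = np.random.default_rng(seed)

    def fit(self, X: np.ndarray, y: np.ndarray) -> "ELM":
        self.W = self.rng.normal(size=(X.shape[1], self.n_hidden))  # random input weights
        self.b = self.rng.normal(size=self.n_hidden)                # random biases
        H = np.tanh(X @ self.W + self.b)                            # hidden-layer outputs
        self.beta = np.linalg.pinv(H) @ y                           # closed-form output weights
        return self

    def predict(self, X: np.ndarray) -> np.ndarray:
        return np.tanh(X @ self.W + self.b) @ self.beta

# toy usage: one-step-ahead prediction on a stand-in stationary component
series = np.sin(np.linspace(0, 20, 200))
X = np.stack([series[i:i + 5] for i in range(190)])   # sliding windows of 5 past values
y = series[5:195]
model = ELM().fit(X[:150], y[:150])
print(np.mean((model.predict(X[150:]) - y[150:]) ** 2))  # test MSE
```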
Unpaired Cross-modal Hashing Method for Web News Data
WU Zhao-meng1, ZHANG Cheng-gang2
2024, 0(03): 54-60. doi: 10.3969/j.issn.1006-2475.2024.03.009
Abstract: Most existing cross-modal hashing methods can only be trained when fully paired instances are provided and are not suitable for the large amount of unpaired data in the real world. To solve this problem, an unpaired cross-modal hashing method for Web news data is proposed. Firstly, a feature fusion network is constructed to process the unpaired training data, supplementing and completing the modal information, and an adversarial loss is used to strengthen the learned common representation. Secondly, an affinity matrix is used to optimize the feature distribution of the samples and the generated binary codes, making the semantic relationships between samples more explicit. Finally, a class prediction loss is added to enhance the discriminative ability of the binary codes. Experiments are conducted on real Web news datasets under both paired and unpaired settings, and the results show that the proposed method can be extended to practical applications.
Information Extraction for Aircraft Fault Text
QIAO Lu, SUN You-chao, WU Hong-lan
2024, 0(03): 61-66. doi: 10.3969/j.issn.1006-2475.2024.03.010
Abstract: In view of the heavy workload, low efficiency, and high cost of manually extracting aircraft fault information, an information extraction method based on a domain dictionary, rules, and a BiGRU-CRF model is proposed. Combining the characteristics of aircraft domain knowledge, the domain dictionary and template rules are constructed from aircraft fault text, and the fault information is semantically labeled. The BiGRU-CRF deep learning model is used for named entity recognition: the BiGRU captures the contextual semantic relationships, and the CRF decodes and generates the entity label sequence. The experimental results show that the method achieves an accuracy of 95.2%, which verifies its effectiveness. It can accurately identify key information in aircraft fault text, such as the time, aircraft type, name of the faulty part, and the faulty part's manufacturer. At the same time, correcting the recognition results according to the domain dictionary and rules effectively improves the efficiency and accuracy of information extraction and alleviates the long-standing dependence of traditional entity extraction models on manual features.
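A compact sketch of the BiGRU-CRF tagger described above, assuming the third-party pytorch-crf package for the CRF layer; the vocabulary size, tag set, and dimensions are placeholders.

```python
import torch
import torch.nn as nn
from torchcrf import CRF

class BiGRUCRF(nn.Module):
    def __init__(self, vocab_size: int = 5000, num_tags: int = 9,
                 emb_dim: int = 128, hidden: int = 128):
        super().__init__()
        self.emb = nn.Embedding(vocab_size, emb_dim, padding_idx=0)
        self.gru = nn.GRU(emb_dim, hidden, batch_first=True, bidirectional=True)
        self.fc = nn.Linear(hidden * 2, num_tags)       # per-token emission scores
        self.crf = CRF(num_tags, batch_first=True)

    def loss(self, tokens, tags, mask):
        emissions = self.fc(self.gru(self.emb(tokens))[0])
        return -self.crf(emissions, tags, mask=mask)    # negative log-likelihood

    def decode(self, tokens, mask):
        emissions = self.fc(self.gru(self.emb(tokens))[0])
        return self.crf.decode(emissions, mask=mask)    # best tag sequence per sentence

model = BiGRUCRF()
tokens = torch.randint(1, 5000, (2, 12))                # two tokenised fault reports
mask = torch.ones(2, 12, dtype=torch.bool)
tags = torch.randint(0, 9, (2, 12))
print(model.loss(tokens, tags, mask).item(), model.decode(tokens, mask)[0][:5])
```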
Inter-slice Super-resolution Based on Deformation Field and Grey Field Interpolation Networks
LIU Xun, ZHANG Dong, YUNG Da-long
2024, 0(03): 67-71. doi: 10.3969/j.issn.1006-2475.2024.03.011
Abstract: Magnetic resonance imaging is a widely used medical imaging method. Constrained by hardware and other factors, the inter-slice resolution of MR images is much lower than the intra-slice resolution, resulting in low image quality and affecting the doctor's diagnosis. It is therefore necessary to improve the inter-slice resolution so that more image details can be displayed. To achieve inter-slice super-resolution, an algorithm based on non-parametric deformation-field and gray-field interpolation networks is proposed. Firstly, following the principle of image registration, a U-Net-like network is trained in an unsupervised manner on two adjacent slices of the low-resolution image to generate the bidirectional deformation fields between them. Then, a registered image is generated from the first slice and the deformation field, and the registered image and the second slice are trained to obtain the bidirectional gray field between them, from which the deformation field and gray field at any position between the two adjacent slices can be obtained. Finally, a slice at any intermediate position can be obtained by interpolation. Compared with other existing algorithms, this algorithm shows significant improvements in visual quality and objective evaluation metrics, with PSNR above 30 dB and SSIM above 0.99.
Ultrasound Image Segmentation of Thyroid Nodules by Fusing Multi-scale Spatial Features
CUI Shao-guo, ZHANG Yu-nan
2024, 0(03): 72-77. doi: 10.3969/j.issn.1006-2475.2024.03.012
Abstract: Ultrasound images of thyroid nodules suffer from severe noise and low contrast between different tissues, and existing segmentation algorithms for thyroid nodule ultrasound images have problems such as blurred edge information and inaccurate segmentation of small nodules. Therefore, this paper proposes an ultrasound image segmentation algorithm for thyroid nodules that fuses multi-scale spatial features. Based on the U-Net model, a coordinate attention mechanism is introduced in the encoder to embed position information into the channel attention, helping the model localize the thyroid nodule region. At the same time, a fused multi-scale feature module extracts spatial features. To retain more detailed features, a convolution operation is used for down-sampling, and the binary cross-entropy loss and Dice coefficient loss are combined as the overall loss. The experimental results show that, compared with the baseline U-Net model, the proposed algorithm improves the F1 score by 9.9 percentage points and increases the accuracy to 92.8%, verifying its feasibility and effectiveness.
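The composite loss mentioned above can be sketched as binary cross-entropy plus Dice loss; the equal weighting and the smoothing constant below are assumptions.

```python
import torch
import torch.nn.functional as F

def bce_dice_loss(logits: torch.Tensor, target: torch.Tensor,
                  smooth: float = 1.0) -> torch.Tensor:
    """logits, target: (N, 1, H, W); target holds 0/1 nodule masks."""
    bce = F.binary_cross_entropy_with_logits(logits, target)
    probs = torch.sigmoid(logits)
    inter = (probs * target).sum(dim=(1, 2, 3))
    union = probs.sum(dim=(1, 2, 3)) + target.sum(dim=(1, 2, 3))
    dice = (2 * inter + smooth) / (union + smooth)      # per-image Dice coefficient
    return bce + (1 - dice.mean())                      # combined objective

logits = torch.randn(2, 1, 64, 64, requires_grad=True)
target = (torch.rand(2, 1, 64, 64) > 0.7).float()
bce_dice_loss(logits, target).backward()                # differentiable end to end
```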
Bi-stream Transformer for Single Image Dehazing
LI An-ran1, FANG Yang-yang2, CHENG Hui-jie2, ZHANG Shen-shen2, YAN Jin-qiang3, YU Teng3, YANG Guo-wei3
2024, 0(03): 78-84. doi: 10.3969/j.issn.1006-2475.2024.03.013
Abstract: Deep learning methods, specifically encoder-decoder networks, have achieved exceptional performance in image dehazing. However, these approaches often rely solely on synthetic datasets for training, ignoring prior knowledge about hazy images. This makes it difficult for the trained models to generalize well, leading to degraded performance on real hazy images. To address this issue and leverage the physical characteristics associated with haze, this paper introduces a novel dual-encoder architecture that incorporates a prior-based encoder into the traditional encoder-decoder framework. A feature enhancement module effectively fuses the representations from the deep layers of the two encoders. Additionally, Transformer blocks are adopted in both the encoder and decoder to address the limitations of commonly used structures in capturing local feature associations. The experimental results show that the proposed method not only outperforms state-of-the-art techniques on synthetic data but also exhibits remarkable performance in real hazy scenarios.
Low-light Image Enhancement Based on Dual Attention Residual Blocks
DU Han-yu, WEI Yan, TANG Bao-xiang, LIAO Heng-feng, YE Si-jia
2024, 0(03): 85-91. doi: 10.3969/j.issn.1006-2475.2024.03.014
Abstract: Low-light image enhancement (LLIE) aims to restore images captured under insufficient lighting to normally exposed images. Existing deep-learning-based LLIE algorithms often use stacked convolutions or up/down-sampling, which lack the guidance of relevant semantic information and lead to problems such as increased noise, color distortion, and loss of detail in the enhanced image. To address this, a novel LLIE algorithm based on dual-attention residual blocks is proposed. The algorithm introduces a residual block that integrates dual attention units (Dual Attention Residual Block, DA-ResBlock), providing semantic guidance in both the channel and spatial domains. Multi-level cascaded DA-ResBlocks stably extract effective features, and skip connections and convolutional layers are used to restore image details. In addition, a composite loss function constrains the enhancement task. Finally, the algorithm is compared with mainstream algorithms of recent years on two public datasets of real images. The experimental results show that the proposed algorithm effectively improves image brightness while better suppressing noise and restoring color and detail texture in subjective evaluation. In the objective evaluation, the PSNR, SSIM, and LPIPS metrics are all superior to those of the compared mainstream algorithms.
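A minimal sketch of a dual-attention residual block in the spirit of DA-ResBlock: a residual body gated first by channel attention and then by spatial attention. The kernel sizes, reduction ratio, and pooling choices are assumptions.

```python
import torch
import torch.nn as nn

class DAResBlock(nn.Module):
    def __init__(self, ch: int = 64, reduction: int = 16):
        super().__init__()
        self.body = nn.Sequential(nn.Conv2d(ch, ch, 3, padding=1), nn.ReLU(inplace=True),
                                  nn.Conv2d(ch, ch, 3, padding=1))
        self.channel = nn.Sequential(                    # channel attention (squeeze + gate)
            nn.AdaptiveAvgPool2d(1),
            nn.Conv2d(ch, ch // reduction, 1), nn.ReLU(inplace=True),
            nn.Conv2d(ch // reduction, ch, 1), nn.Sigmoid())
        self.spatial = nn.Sequential(                    # spatial attention over pooled maps
            nn.Conv2d(2, 1, 7, padding=3), nn.Sigmoid())

    def forward(self, x):
        y = self.body(x)
        y = y * self.channel(y)                          # reweight channels
        pooled = torch.cat([y.mean(1, keepdim=True), y.amax(1, keepdim=True)], dim=1)
        y = y * self.spatial(pooled)                     # reweight spatial positions
        return x + y                                     # residual connection

out = DAResBlock()(torch.randn(1, 64, 32, 32))
print(out.shape)                                         # torch.Size([1, 64, 32, 32])
```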
Breast Cancer Immunohistochemical Image Generation Based on Generative Adversarial Network
LU Zi-han1, ZHANG Dong1, YANG Yan1, YANG Shuang2
2024, 0(03): 92-96. doi: 10.3969/j.issn.1006-2475.2024.03.015
Abstract: Breast cancer is a dangerous malignant tumor. In clinical practice, the human epidermal growth factor receptor 2 (HER2) level is needed to determine the aggressiveness of breast cancer and to formulate a treatment plan, which requires immunohistochemical (IHC) staining of tissue sections. To address the high cost and time consumption of IHC staining, a HER2 prediction network based on a mixed-attention residual module is first proposed: a CBAM module is added to the residual module so that the network focuses its learning at both the spatial and channel levels. The prediction network can predict the HER2 level directly from HE-stained sections, with a prediction accuracy above 97.5%, more than 2.5 percentage points higher than other networks. Subsequently, a multi-scale generative adversarial network is proposed, which uses a ResNet-9blocks generator with the mixed-attention residual module, a PatchGAN discriminator, and a custom multi-scale loss function. This network can generate simulated IHC sections directly from HE-stained sections. At low HER2 levels, the SSIM and PSNR between the generated and real images reach 0.498 and 24.49 dB, respectively.
Global Cross-layer Interaction Networks for Learning Fine-grained Image Feature Representations
ZHANG Gao-yi1, XU Yang1, 2, CAO Bin1, 2, SHI Jin1
2024, 0(03): 97-104. doi: 10.3969/j.issn.1006-2475.2024.03.016
Abstract: The key task in fine-grained visual categorization is to extract highly discriminative features. Previous models often rely on bilinear pooling techniques and their variants to solve this problem. However, most bilinear pooling variants ignore intra-layer or inter-layer feature interactions, and such insufficient interaction easily leads to the loss of discriminative information or to discriminative information that contains too much redundancy. To address these problems, a new method for learning fine-grained image feature representations, the Global Cross-layer Interaction (GCI) network, is designed. The proposed hierarchical bicubic pooling method balances the ability to extract discriminative information against the ability to filter redundant information, and can simultaneously model feature interactions within and between layers. The interactive computing structure is combined with an existing channel attention mechanism to form an interactive attention mechanism that improves the key feature extraction capability of the backbone network. Finally, the feature extraction network composed of the interactive attention mechanism is fused with the bicubic pooling method to obtain the GCI network, which extracts robust fine-grained image feature representations. Experiments on three fine-grained benchmark datasets show that hierarchical bicubic pooling achieves the best results within the hierarchical interactive pooling framework, with classification accuracies of 87.4%, 93.2%, and 92.1% on CUB-200-2011, Stanford-Cars, and FGVC-Aircraft, respectively, which are further improved to 88.5%, 95.1%, and 93.9% after the interactive attention mechanism is integrated.
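As a rough illustration of the cross-layer interaction idea, the sketch below implements a Hadamard-product style cross-layer bilinear pooling between two feature maps; the projection width is an assumption, and the paper's third-order (bicubic) extension is not shown.

```python
import torch
import torch.nn as nn

class CrossLayerBilinearPool(nn.Module):
    def __init__(self, ch_a: int = 512, ch_b: int = 1024, proj: int = 512):
        super().__init__()
        self.proj_a = nn.Conv2d(ch_a, proj, 1)           # 1x1 projections to a shared space
        self.proj_b = nn.Conv2d(ch_b, proj, 1)

    def forward(self, feat_a: torch.Tensor, feat_b: torch.Tensor) -> torch.Tensor:
        if feat_a.shape[-2:] != feat_b.shape[-2:]:       # align spatial sizes
            feat_b = nn.functional.adaptive_avg_pool2d(feat_b, feat_a.shape[-2:])
        inter = self.proj_a(feat_a) * self.proj_b(feat_b)            # pairwise interaction
        pooled = inter.flatten(2).sum(dim=2)                          # sum-pool over locations
        pooled = torch.sign(pooled) * torch.sqrt(pooled.abs() + 1e-8)  # signed square root
        return nn.functional.normalize(pooled, dim=1)                 # L2 normalization

pool = CrossLayerBilinearPool()
vec = pool(torch.randn(2, 512, 28, 28), torch.randn(2, 1024, 14, 14))
print(vec.shape)                                          # torch.Size([2, 512])
```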
Base Station Location Mechanism of Power Wireless Private Network Based on MVO Algorithm
XU Yu-jia1, ZHANG Hua-mei1, 2
2024, 0(03): 105-109. doi: 10.3969/j.issn.1006-2475.2024.03.017
Abstract: To address the base station siting problem in power wireless private networks, a mathematical siting model that accounts for construction cost, coverage, and overlapping coverage is put forward, and the optimal base station layout is then obtained with an improved multi-verse optimizer (MVO) algorithm. Considering the premature convergence of the multi-verse optimizer, the travelling distance rate in the algorithm is improved first, and a tabu search algorithm is then introduced to effectively address the local-optimum problem. Finally, the economy and feasibility of the proposed method are verified through a comprehensive analysis of the optimized base station deployment scheme. The experimental results show that the designed algorithm has better optimization performance and a higher convergence rate; it can save on budget and reduce co-channel interference while improving coverage, and it provides useful theoretical guidance for site planning of power wireless private networks.
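A toy version of the kind of siting objective such an optimizer evaluates is sketched below; the coverage radius, weights, demand grid, and cost normalization are all illustrative assumptions rather than the paper's model.

```python
import numpy as np

def siting_score(stations: np.ndarray, demand: np.ndarray,
                 radius: float = 3.0, w_cost: float = 0.3,
                 w_cov: float = 0.5, w_overlap: float = 0.2) -> float:
    """stations: (k, 2) candidate coordinates; demand: (m, 2) points to cover."""
    d = np.linalg.norm(demand[:, None, :] - stations[None, :, :], axis=2)
    covered_by = (d <= radius).sum(axis=1)               # stations covering each demand point
    coverage = np.mean(covered_by >= 1)                  # fraction of points covered
    overlap = np.mean(covered_by >= 2)                   # fraction covered more than once
    cost = len(stations) / 20.0                          # normalized construction cost
    return w_cov * coverage - w_overlap * overlap - w_cost * cost

rng = np.random.default_rng(0)
demand = rng.uniform(0, 10, size=(200, 2))               # sampled demand points in the service area
candidate = rng.uniform(0, 10, size=(8, 2))              # one candidate deployment
print(siting_score(candidate, demand))                   # higher is better for the optimizer
```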
Side Channel Analysis Based on Convolutional Auto-encoder
ZENG Zhong-jing-xin, GAN Gang
2024, 0(03): 110-114. doi: 10.3969/j.issn.1006-2475.2024.03.018
Abstract: Points of interest (POI) are an important element of side-channel analysis, so accurately selecting effective POI is of great significance. Aiming at the problem that poor POI selection in clustering-based analysis of public-key cryptographic algorithms leads to low recognition rates, this paper proposes a POI selection method based on a convolutional auto-encoder. After data preprocessing, the method uses a convolutional auto-encoder to learn data features and takes its encoded output as the selected POI, which are combined with a clustering algorithm to complete the side-channel attack and finally recover the key. The experiments take the point multiplication operation in the SM2 decryption algorithm as the research object, and the results show that the proposed method can be used for POI selection in side-channel analysis, greatly improving the flexibility and practicality of neural networks in this setting.
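A minimal sketch of a 1-D convolutional auto-encoder for side-channel traces, whose encoder output would be handed to a clustering algorithm as the selected POI; the trace length and layer widths are assumptions.

```python
import torch
import torch.nn as nn

class TraceAutoEncoder(nn.Module):
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(                    # 1000-sample trace -> 4 x 250 features
            nn.Conv1d(1, 8, 5, stride=2, padding=2), nn.ReLU(inplace=True),
            nn.Conv1d(8, 4, 5, stride=2, padding=2), nn.ReLU(inplace=True))
        self.decoder = nn.Sequential(
            nn.ConvTranspose1d(4, 8, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose1d(8, 1, 4, stride=2, padding=1))

    def forward(self, x):
        z = self.encoder(x)
        return self.decoder(z), z                        # reconstruction and compressed POI

traces = torch.randn(16, 1, 1000)                        # batch of preprocessed traces
recon, poi = TraceAutoEncoder()(traces)
loss = nn.functional.mse_loss(recon, traces)             # reconstruction objective
print(poi.flatten(1).shape)                              # features handed to clustering
```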
Identifying Influential Nodes in Large-scale Networks Based on Neighbor Classification
SHENG Jia-ye
2024, 0(03): 115-121. doi: 10.3969/j.issn.1006-2475.2024.03.019
Abstract: Identifying important nodes has always been one of the hot problems in complex networks, because the identified nodes can play an important role in information dissemination or in immunizing a population against disease. Most current methods are based on three perspectives: a node's neighbor information, shortest paths in the network, and node deletion. Existing approaches based on neighbor information do not describe the specific role of neighboring nodes and do not differentiate their contributions in different dimensions. This paper proposes the SCCN method, which divides the contribution of neighbor nodes into two parts: strengthening the propagation effect within the tightly connected local area where the node is located, and extending the information carried by the node to other areas of the network. The performance of SCCN is evaluated with the standard SIR model and compared with degree centrality, K-shell, meso-centrality, and PageRank on eight real networks. The experimental results show that SCCN has higher accuracy and stability as well as lower time complexity, and can be applied to large-scale networks.
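The SIR-based evaluation protocol mentioned above can be sketched with networkx as follows: seed a top-ranked node, simulate spreading, and record the final outbreak size. The infection and recovery probabilities, the synthetic graph, and the degree-based stand-in ranking are toy choices, not the SCCN method itself.

```python
import random
import networkx as nx

random.seed(0)

def sir_spread(G: nx.Graph, seed_node, beta: float = 0.1, gamma: float = 1.0) -> int:
    """Simulate one SIR run from seed_node and return the final number of recovered nodes."""
    infected, recovered = {seed_node}, set()
    while infected:
        new_infected = set()
        for u in infected:
            for v in G.neighbors(u):
                if v not in infected and v not in recovered and random.random() < beta:
                    new_infected.add(v)                  # susceptible neighbour gets infected
            if random.random() < gamma:
                recovered.add(u)                         # infected node recovers
        infected = (infected | new_infected) - recovered
    return len(recovered)                                # final spreading scale

G = nx.barabasi_albert_graph(500, 3, seed=1)
top_node = max(G.nodes, key=G.degree)                    # stand-in ranking (degree centrality)
print(sum(sir_spread(G, top_node) for _ in range(50)) / 50)  # average outbreak size
```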
Fractional Repetition Codes Based on Petersen Graphs
YU Chun-lei1, 2, LIU Du-jin1, ZHU Hua-wei1, YANG Jia-rong3
2024, 0(03): 122-126. doi: 10.3969/j.issn.1006-2475.2024.03.020
Abstract: In order to improve the repair efficiency of distributed storage systems, a fractional repetition code design based on the edge coloring of the Petersen graph is proposed. In this design, the edges of the Petersen graph are colored first, the edges of each color are labeled, and the links corresponding to the different edge colors are then constructed. Finally, each link is regarded as a storage node of the fractional repetition code, which is called the PECBFR code. Theoretical analysis shows that the PECBFR code can achieve the system storage capacity under random access. In addition, the simulation results show that, compared with Reed-Solomon codes and simple regenerating codes in distributed storage systems, the proposed construction can repair a failed node quickly. Compared with common coding algorithms in distributed storage systems, repair locality, repair complexity, and repair bandwidth overhead are all greatly improved.
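A small networkx sketch of the construction step: color the edges of the Petersen graph (here via a greedy coloring of its line graph, which may use more than the optimal four colors) and group the edges of each color into one storage node, following the PECBFR description above; the packet labeling is illustrative only.

```python
import networkx as nx
from collections import defaultdict

G = nx.petersen_graph()                                  # 10 vertices, 15 edges
edge_color = nx.greedy_color(nx.line_graph(G))           # proper edge coloring via the line graph

storage_nodes = defaultdict(list)
for edge, color in edge_color.items():
    storage_nodes[color].append(edge)                    # one storage node per edge color

for color, edges in sorted(storage_nodes.items()):
    print(f"storage node {color}: stores packets for edges {edges}")
```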