Top Read Articles

    Stock Movement Prediction Algorithm Based on Deep Learning
    ZHOU Run-jia
    Computer and Modernization    2023, 0 (01): 69-73.  
    To improve the accuracy of stock movement prediction, this paper proposes a stock movement prediction algorithm, AACL (Adversarial Attentive CNN-LSTM), which uses CNN and LSTM for feature extraction and combines an attention mechanism with adversarial training. The algorithm uses the CNN to extract the overall trend information of a stock and the LSTM to extract its short-term fluctuation information, and relates multiple stocks through the attention mechanism to capture the rise-and-fall relationships among them. Adversarial training is also introduced to improve robustness by perturbing the input data. To verify the effectiveness of AACL, experiments are carried out on three datasets, KDD17, ACL18, and China50, and the algorithm is compared with existing methods. Experimental results show that the proposed algorithm obtains the best results.
    Review of Relation Extraction Based on Pre-training Language Model
    WANG Hao-chang, LIU Ru-yi
    Computer and Modernization    2023, 0 (01): 49-57.  
    In recent years, with continuous innovation in deep learning technology, pre-trained models have been applied ever more widely in natural language processing, and relation extraction no longer depends purely on the traditional pipeline method. The development of pre-trained language models has greatly promoted research on relation extraction and has surpassed traditional methods in many fields. First, this paper briefly introduces the development of relation extraction and classic pre-trained models; second, it summarizes the commonly used datasets and evaluation methods and analyzes model performance on each dataset; finally, it discusses the challenges facing relation extraction and future research trends.
    Categorical Data Clustering Based on Extraction of Associations from Co-association Matrix
    GUAN Yun-peng, LIU Yu-long
    Computer and Modernization    2022, 0 (11): 1-8.  
    Categorical data clustering is widely used in many real-world fields, such as medical science and computer science. Categorical data clustering is usually studied on the basis of a dissimilarity measure, so for datasets with different characteristics the clustering results are affected by the characteristics of the dataset itself and by noise. In addition, categorical data clustering based on representation learning is too complicated to implement, and its results depend heavily on the learned representation. Based on the co-association matrix, this paper proposes a clustering method that directly considers the relationships among the original categorical values: categorical data clustering based on extraction of associations from a co-association matrix (CDCBCM). The co-association matrix can be regarded as a summary of the information associations in the original data space. It is constructed by computing the co-association frequency of different objects in each attribute subspace; some noise is then removed from the matrix, and the clustering result is obtained by normalized cut. The method is tested on 16 publicly available datasets and compared with 8 existing methods using the F1-score metric. The experimental results show that the method performs best on 7 datasets, achieves the best average ranking, and handles the categorical data clustering task well.
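The co-association construction described in the abstract can be sketched in a few lines: the entry for a pair of objects is the fraction of attributes on which they share a value. This is a minimal illustration only; the paper's CDCBCM additionally prunes noise from the matrix and clusters it with normalized cut.

```python
def co_association_matrix(data):
    """Pairwise co-association frequencies for categorical data.

    Entry (i, j) is the fraction of attributes on which objects i and j
    take the same categorical value.  Illustrative sketch of the matrix
    construction step only.
    """
    n = len(data)        # number of objects
    m = len(data[0])     # number of categorical attributes
    C = [[0.0] * n for _ in range(n)]
    for i in range(n):
        for j in range(n):
            shared = sum(1 for a in range(m) if data[i][a] == data[j][a])
            C[i][j] = shared / m
    return C
```

For example, for objects `[["red","round"], ["red","square"], ["blue","square"]]`, the first two objects share one of two attributes, giving a co-association frequency of 0.5.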
    High Illumination Visible Image Generation Based on Generative Adversarial Networks
    ZHUANG Wen-hua, TANG Xiao-gang, ZHANG Bin-quan, YUAN Guang-ming
    Computer and Modernization    2023, 0 (01): 1-6.  
    To address the low accuracy of target detection under low-illumination conditions at night, this paper proposes an algorithm based on generative adversarial networks for generating high-illumination visible-light images. To improve the generator's feature extraction, a CBAM attention module is introduced into the converter module; to avoid artifact noise in the generated images, the generator's decoder is changed from deconvolution to up-sampling by nearest-neighbour interpolation followed by a convolution layer; and to improve training stability, the adversarial loss is changed from the cross-entropy function to the least-squares function. Compared with infrared and night-time visible images, the generated visible images offer richer spectral and detail information and better visibility, so information about targets and scenes can be obtained effectively. We verify the method with both image generation metrics and target detection metrics: the mAP measured on the generated visible images improves by 11.7 and 30.2 percentage points over the infrared images and the real visible images respectively, effectively improving the detection accuracy and anti-interference capability for night-time targets.
    Enhanced Image Caption Based on Improved Transformer_decoder
    LIN Zhen-xian, QU Jia-xin, LUO Liang
    Computer and Modernization    2023, 0 (01): 7-12.  
    Transformer's decoder model (Transformer_decoder) has been widely used in image caption tasks; its self-attention captures fine-grained features to achieve deeper image understanding. This article makes two improvements to self-attention: Vision-Boosted Attention (VBA) and Relative-Position Attention (RPA). VBA adds a VBA layer to Transformer_decoder and introduces visual features into the attention model as auxiliary information, guiding the decoder to generate description semantics that better match the image content. RPA builds on self-attention by introducing trainable relative-position parameters that add the relative positional relationships between words to the input sequence. Experiments on COCO2014 show that both VBA and RPA improve image captioning to a certain extent, and that a decoder model combining the two attention mechanisms achieves better semantic expression.
    Flame Detection Algorithm Based on Improved YOLOV5
    WANG Hong-yi, KONG Mei-mei, XU Rong-qing
    Computer and Modernization    2023, 0 (01): 103-107.  
    Aiming at the low average detection accuracy and high missed-detection rate for small target flames in existing flame detection algorithms, an improved YOLOv5 flame detection algorithm is proposed. The algorithm uses a Transformer Encoder module to replace the CSP bottleneck module at the end of the YOLOv5 backbone, which enhances the network's ability to capture different local information and improves the average accuracy of flame detection. In addition, the CBAM attention module is added to the YOLOv5 network, which enhances feature extraction, particularly for small target flames, reducing their missed-detection rate. Experiments on the public BoWFire and Bilkent datasets show that the improved network reaches a higher average flame detection accuracy of 83.9%, a lower small-target missed-detection rate of only 1.6%, and a detection rate of 34 frames/s. Compared with the original YOLOv5 network, the average accuracy is improved by 2.4 percentage points and the small-target missed-detection rate is reduced by 4.1 percentage points, so the improved network can meet the real-time and precision requirements of flame detection.
    Improving Latency and Bandwidth Probe of BBR Congestion Control Algorithm
    HUANG Hong-ping, ZHU Xiao-yong, WANG Zhi-yuan,
    Computer and Modernization    2022, 0 (10): 113-120.  
    Traditional loss-based congestion control algorithms cannot meet the network performance requirements of many applications because of their high packet loss rates and buffer bloat. The BBR (Bottleneck Bandwidth and Round-trip propagation time) algorithm proposed by Google has attracted extensive attention and research for its loss resilience, high bandwidth utilization and low delay. However, BBR still has problems such as high queuing delay, poor performance in small-RTT (round-trip time) environments, and untimely bandwidth probing. This paper analyzes the queuing delay and convergence of BBR and proposes an improved method: inflight data is limited and the congestion window is reduced promptly according to network feedback, reducing delay; in small-RTT environments, the bandwidth estimate from before the ProbeRTT phase is retained after ProbeRTT ends; and a maximum steady-state holding time is set so that the algorithm exits the steady cycle promptly and enters the probing cycle. Simulation results in NS3 show that the improved BBR reduces RTT and its jitter and improves the convergence speed of the algorithm; bandwidth is used efficiently in small-RTT environments; and the improved BBR significantly raises the bandwidth probing frequency of long-RTT flows.
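The inflight-data limit discussed above is built on the bandwidth-delay product (BDP), the amount of data that fills the path without queuing. A minimal sketch of that calculation follows; the gain value and the exact cap policy here are illustrative assumptions, not the paper's tuned parameters.

```python
def bdp_bytes(bottleneck_bw_bps, min_rtt_s):
    """Bandwidth-delay product in bytes: bottleneck bandwidth (bits/s)
    converted to bytes/s, times the minimum observed RTT (seconds)."""
    return bottleneck_bw_bps / 8.0 * min_rtt_s

def allowed_inflight(bottleneck_bw_bps, min_rtt_s, gain=1.0):
    """Cap on unacknowledged bytes in flight.  Keeping the gain near
    1.0 (roughly 1 x BDP) rather than the 2 x BDP cap of standard BBR
    is the kind of inflight limit the improvement above describes."""
    return gain * bdp_bytes(bottleneck_bw_bps, min_rtt_s)
```

For an 80 Mbit/s bottleneck and a 50 ms minimum RTT, the BDP is 500 kB, so a gain of 1.0 caps inflight data at about 500 kB.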
    A Review of Deep Neural Networks Combined with Attention Mechanism
    HUANGFU Xiao-ying, QIAN Hui-min, HUANG Min
    Computer and Modernization    2023, 0 (02): 40-49.  
    The attention mechanism has become one of the research hotspots for improving the learning ability of deep neural networks. In view of the wide attention it has received, this paper gives a comprehensive analysis of the attention mechanism in deep neural networks from three aspects: its classification, the ways it is combined with deep neural networks, and its specific applications in natural language processing and computer vision. Specifically, attention mechanisms are divided into soft attention, hard attention and self-attention, and their advantages and disadvantages are compared. Then, the common ways of incorporating attention into recurrent and convolutional neural networks are discussed, and representative model structures for each are given. After that, applications in natural language processing and computer vision are illustrated. Finally, several future directions for the attention mechanism are outlined, in the hope of providing clues and directions for subsequent research.
    Dynamic Allocation Algorithm of Container Cloud Resources Based on Bi-level Programming
    ZHOU Yong-fu, XU Sheng-chao
    Computer and Modernization    2022, 0 (12): 1-5.  
    This paper analyzes the dynamic configuration decision problem for container cloud resources. By defining the container cloud resource scheduling task, the scheduling time is solved for, and the shortest-time matrix of the scheduling task yields the conditions needed for container cloud resource scheduling. Under the bi-level programming condition, the objective function and constraint functions of container cloud resource scheduling are solved, and a container cloud resource scheduling model is constructed. Considering user tasks and data-center cloud resources, a mapping matrix from virtual machines to physical hosts is constructed. An objective function for optimizing the dynamic configuration of container cloud resources is then built and, combined with the constraints, the dynamic configuration of container cloud resources is realized. Experimental results show that the proposed algorithm not only improves the utilization of container cloud resources but also reduces the configuration completion time, giving better dynamic configuration performance.
    Research Review of Single-channel Speech Separation Technology Based on TasNet
    LU Wei, ZHU Ding-ju
    Computer and Modernization    2022, 0 (11): 119-126.  
    Speech separation is a fundamental task in acoustic signal processing with a wide range of applications. Thanks to the development of deep learning, the performance of single-channel speech separation systems has improved significantly in recent years. In particular, with the introduction of the time-domain audio separation network (TasNet), speech separation technology is gradually transitioning from traditional time-frequency domain methods to time-domain methods. This paper reviews the research status and prospects of single-channel speech separation based on TasNet. After reviewing traditional time-frequency domain methods, it focuses on the TasNet-based Conv-TasNet and DPRNN models and compares the improvement research on each model. Finally, it discusses the limitations of current TasNet-based single-channel speech separation models and future research directions concerning the model, datasets, the number of speakers, and speech separation in complex scenarios.
    Lightweight Vision Transformer Based on Separable Structured Transformations
    HUANG Yan-hui, LAN Hai, WEI Xian
    Computer and Modernization    2022, 0 (10): 75-81.  
    Because the Vision Transformer model has a large number of parameters and a high floating-point computation cost, it is difficult to deploy on portable or terminal devices. Because the attention matrix has a low-rank bottleneck, model compression algorithms and attention-acceleration algorithms cannot well balance model parameter count, inference speed and performance. To solve these problems, a lightweight ViT-SST model is designed. First, by transforming the traditional fully connected layers into a separable structure, the number of model parameters is greatly reduced and inference speed is improved, while ensuring that the low rank of the attention matrix does not degrade the model's expressive ability. Second, this paper proposes a Kronecker-product approximate decomposition method based on SVD that can convert the pre-trained parameters of the public ViT-Base model to the ViT-Base-SST model; this slightly alleviates the overfitting of ViT-Base and improves accuracy. Experiments on five common public datasets show that the proposed method is more suitable for Transformer-structured models than traditional compression methods.
    Semi-supervised Learning Method Based on Convolution and Sparse Coding
    LIU Ying-jie, LAN Hai, WEI Xian
    Computer and Modernization    2022, 0 (11): 9-16.  
    Convolutional neural networks (CNNs) have achieved great success in semi-supervised learning, which uses both labelled and unlabelled samples in the training stage; unlabelled samples help regularize the learning model. To further improve the feature extraction ability of semi-supervised models, this paper proposes an end-to-end semi-supervised learning method combining a deep semi-supervised convolutional neural network with sparse-coding dictionary learning, called Semi-supervised Learning based on Sparse Coding and Convolution (SSSConv), which aims to learn more discriminative image feature representations and improve classification performance. The method first extracts features with a CNN and applies an orthogonal projection transformation to them; the corresponding sparse codes are then learned to obtain the image representation, which the model's classifier finally classifies. The whole semi-supervised learning process can be regarded as an end-to-end optimization problem, with a unified loss function for the CNN part and the sparse-coding part. Conjugate gradient descent, the chain rule, and backpropagation are used to optimize the parameters of the objective function; the sparse-coding parameters are restricted to a manifold, and the CNN parameters can be defined not only in Euclidean space but also in orthogonal space. Experimental results on semi-supervised classification tasks verify the effectiveness of the proposed SSSConv framework, which is highly competitive with existing methods.
    Optimization Method of Hadoop File Archiving Based on LZO
    ZHANG Jun, SU Wen-hao
    Computer and Modernization    2023, 0 (06): 1-6.   DOI: 10.3969/j.issn.1006-2475.2023.06.001
    The distributed framework Hadoop is widely used in many fields of big data processing. However, storing a large number of small files in Hadoop generates a large amount of metadata, which leads to excessive NameNode memory usage and limits its ability to provide high-performance, highly concurrent access. Archiving small files is an effective solution to this problem. Since data compression can effectively reduce storage space and network transmission load, this paper proposes a Hadoop file-archiving optimization method named LA (LZO-Archive) based on the real-time lossless compression algorithm LZO. To reduce the time needed to generate index files, LA incorporates LZO compression into the index-file generation stage on top of archiving and merging small files. Moreover, a file compression storage algorithm is designed to compress and store both data files and index files, which effectively reduces disk usage on DataNodes and memory usage on the NameNode. The paper also describes the design and implementation of the experimental method for LA. Experimental results show that, compared with the original HDFS storage method, the HAR archiving baseline, and the comparison method LHF, LA performs better in file archiving time, NameNode memory usage, DataNode disk usage, and file access time.
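The core idea of archiving with a compressed index can be sketched in a few lines: merge the small files into one blob, record each file's offset and length in an index, and compress both. The sketch below uses Python's standard-library zlib as a stand-in for LZO (LZO itself requires a third-party binding), and the layout is illustrative, not LA's actual on-disk format.

```python
import io
import json
import zlib

def archive_small_files(files):
    """Merge small files into one blob and build a compressed index
    mapping filename -> (offset, length).  zlib stands in for LZO;
    the principle (compressing both merged data and index to cut
    NameNode memory and DataNode disk usage) is the same."""
    blob = io.BytesIO()
    index = {}
    for name, payload in files.items():
        offset = blob.tell()
        blob.write(payload)
        index[name] = (offset, len(payload))
    data = zlib.compress(blob.getvalue())
    packed_index = zlib.compress(json.dumps(index).encode())
    return data, packed_index

def read_file(data, packed_index, name):
    """Random access to one archived file via the index."""
    index = json.loads(zlib.decompress(packed_index))
    offset, length = index[name]
    return zlib.decompress(data)[offset:offset + length]
```

Accessing a single file then requires only the small index plus one decompression, rather than one NameNode metadata entry per small file.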
    A Non-intrusive Load Monitoring Method Based on Improved kNN Algorithm and Transient Steady State Features
    TIAN Feng, DENG Xiao-ping, ZHANG Gui-qing, WANG Bao-yi
    Computer and Modernization    2022, 0 (10): 29-35.  
    Non-intrusive load monitoring (NILM) obtains the operating data of the electrical appliances in a circuit by analyzing the record from a single energy meter, and can serve as an important tool for energy-saving planning and optimal dispatch of the power grid. Existing NILM methods focus mainly on improving load identification accuracy, but their model complexity is too high for deployment on embedded devices. A NILM method based on an improved kNN algorithm and transient and steady-state features is proposed to solve these problems. First, the kNN algorithm is chosen as the load identification model because it requires no training; it is improved with a statistical distance-weighting method, and a cosine-similarity judgment mechanism is added to verify the accuracy of the kNN identification results. Second, transient and steady-state features are selected as load characteristics to improve the discriminability of load features. Finally, experimental data verify that the method performs well.
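The two-stage identification described above, a distance-weighted kNN vote followed by a cosine-similarity check, can be sketched as follows. The inverse-distance weighting and the choice of reference sample for the cosine check are illustrative assumptions; the paper's exact weighting statistics and thresholds may differ.

```python
import math
from collections import defaultdict

def improved_knn(query, samples, labels, k=3):
    """Distance-weighted kNN vote with a cosine-similarity check.

    Returns the winning label and the cosine similarity between the
    query and the nearest sample of that label, which a caller can
    threshold to accept or reject the identification."""
    # k nearest neighbours by Euclidean distance
    nearest = sorted(
        (math.dist(query, s), lab) for s, lab in zip(samples, labels)
    )[:k]
    # closer neighbours get larger vote weights
    votes = defaultdict(float)
    for d, lab in nearest:
        votes[lab] += 1.0 / (d + 1e-9)
    best = max(votes, key=votes.get)
    # verification step: cosine similarity against a sample of the
    # winning class
    ref = next(s for s, lab in zip(samples, labels) if lab == best)
    dot = sum(a * b for a, b in zip(query, ref))
    norm = math.hypot(*query) * math.hypot(*ref)
    return best, dot / norm
```

The cosine check costs one extra pass but lets an embedded deployment reject low-confidence matches without any trained model.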
    Fault Diagnosis of Pumping Unit Based on 1D-CNN-LSTM Attention Network
    WANG Lei, ZHANG Xiao-dong, DAI Huan
    Computer and Modernization    2023, 0 (04): 1-6.  
    Aiming at the complex feature extraction, large parameter counts and low diagnostic efficiency of traditional dynamometer-diagram-based fault diagnosis methods for pumping units, this paper proposes a fault diagnosis method based on a 1D-CNN-LSTM attention network. The dynamometer diagram is converted into a load-displacement sequence as the network input; a one-dimensional convolutional neural network (1D-CNN) extracts local features of the sequence while reducing its length. Considering the temporal characteristics of the sequence, a long short-term memory (LSTM) network further extracts temporal features. To highlight the impact of key features, an attention mechanism gives higher weights to temporal features related to the fault type. Finally, the weighted features are fed into a fully connected layer, and a Softmax classifier performs fault diagnosis. Experimental results show that the average accuracy, precision, recall and F1 value of the proposed method reach 99.13%, 99.35%, 99.17% and 99.25% respectively, with a model size of only 98 kB. Compared with feature-engineering-based methods it has higher diagnostic accuracy and better generalization; compared with two-dimensional convolutional neural network (2D-CNN) models it significantly reduces model parameters and training time, improving the efficiency of fault diagnosis.
    Improved Kmeans Segmentation Algorithm for Brain Tumor Based on HMRF
    MA Yu-juan, HAN Jian-ning, SHI Shao-jie, CAO Shang-bin, YANG Zhi-xiu
    Computer and Modernization    2023, 0 (03): 1-5.  
    To address the misidentification of brain tumor regions in MRI and the uncertainty in segmenting tumor sites in brain MRI images, an improved Kmeans algorithm combined with a hidden Markov random field (HMRF) model is proposed to achieve accurate segmentation of brain tumor images. First, the Euclidean distance in the Kmeans algorithm is replaced with a Manhattan-Chebyshev distance, and the improved Kmeans algorithm is used to estimate the initial parameters and initial segmentation of the image. Then the spatial information of the image is obtained through HMRF theory, and the cluster centers are updated with the EM algorithm to obtain more accurate centers and improve segmentation performance. Experimental results show that the proposed method segments brain tumors well, with average Dice and Jaccard coefficients of 0.9289 and 0.8725, respectively.
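For reference, the two metrics that replace the Euclidean distance can be written in a few lines. The equal-weight blend in `manhattan_chebyshev` is an illustrative assumption; the paper states only that a Manhattan-Chebyshev distance is used, not the exact form of the combination.

```python
def manhattan(p, q):
    """L1 distance: sum of absolute coordinate differences."""
    return sum(abs(a - b) for a, b in zip(p, q))

def chebyshev(p, q):
    """L-infinity distance: largest absolute coordinate difference."""
    return max(abs(a - b) for a, b in zip(p, q))

def manhattan_chebyshev(p, q, w=0.5):
    """Weighted blend of the two metrics; w = 0.5 (equal weight) is
    an assumption for illustration, not the paper's choice."""
    return w * manhattan(p, q) + (1 - w) * chebyshev(p, q)
```

Both component metrics avoid the square root of the Euclidean distance, which is one practical reason to prefer them in an iterated Kmeans assignment step.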
    Text Classification Based on ALBERT Combined with Bidirectional Network
    HUANG Zhong-xiang, LI Ming
    Computer and Modernization    2022, 0 (10): 8-12.  
    Aiming at the inability of current multi-label text classification algorithms to effectively utilize deep text information, we propose a model, ABAT. The ALBERT model extracts deep text features, a bidirectional LSTM network is used for feature training, and an attention mechanism enhances the classification effect. Experiments are carried out on the DuEE1.0 dataset released by Baidu. Compared with each baseline model, the model achieves the best performance: Micro-Precision reaches 0.9625, Micro-F1 reaches 0.9033, and the Hamming loss drops to 0.0023. The experimental results show that the improved ABAT model can better complete multi-label text classification tasks.
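The Hamming loss reported above (0.0023 for ABAT) is the fraction of individual label slots predicted incorrectly, averaged over all samples and labels. A minimal implementation:

```python
def hamming_loss(y_true, y_pred):
    """Multi-label Hamming loss: the fraction of label positions that
    differ between the true and predicted label vectors."""
    total = sum(len(t) for t in y_true)
    wrong = sum(
        1
        for t, p in zip(y_true, y_pred)
        for ti, pi in zip(t, p)
        if ti != pi
    )
    return wrong / total
```

Unlike Micro-F1, it penalizes every wrong slot equally, so lower is better and 0.0 means every label of every sample is correct.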
    Troposcatter Channel Estimation Based on Massive MIMO
    SHI Qing-lin, LIU Li-zhe, LI Xing-jian
    Computer and Modernization    2022, 0 (12): 18-25.  
    With users' increasing demand for communication speed, the capacity of tropospheric scatter communication needs to be improved, and massive multiple-input multiple-output (MIMO) technology is an important way to improve it. This paper studies channel estimation for a troposcatter communication system based on massive MIMO. First, a massive MIMO troposcatter channel model based on a two-dimensional uniform rectangular array is established. Second, a channel covariance matrix estimation algorithm is proposed to improve the traditional minimum mean square error (MMSE) channel estimation algorithm. Finally, the accuracy of the channel estimation algorithm is compared with least squares (LS), traditional MMSE and ideal MMSE. Simulation results show that at an SNR of 0~25 dB the traditional MMSE algorithm offers no significant accuracy improvement over LS, and there is a certain gap between its accuracy and that of the ideal MMSE algorithm; the improved MMSE channel estimation algorithm, however, is more accurate than the traditional MMSE algorithm. Under the same conditions and at the same NMSE, the improved MMSE algorithm gains 3~5 dB in SNR and gradually approaches the ideal MMSE algorithm as the SNR increases.
    FOCoR: A Course Recommendation Approach Based on Feature Selection Optimization
    WANG Yang, CHEN Mei, LI Hui
    Computer and Modernization    2022, 0 (10): 1-7.  
    To solve the cold-start problem of recommendation models based on behavioral logs from online education platforms, we design a course recommendation method named FOCoR that integrates course selection data. First, we propose a feature selection technique based on a genetic algorithm (FSBGA), and then use the selected features as input to build a recommendation model based on LightGBM, a gradient-boosted tree technique, for course recommendation. Specifically, in the proposed FSBGA we construct a fitness function that combines model loss with the number of features, and thereby search the feature subset space of university course selection data for the optimal subset that balances the two. Measured by log loss, F1-score and AUC, the course selection model trained on the feature subset selected by FSBGA outperforms models trained on subsets selected by mutual-information-based or F-test-based algorithms. To verify the effectiveness of this work, we evaluated FOCoR, LightGBM, XGBoost, decision trees, random forests, logistic regression and other algorithms on real datasets; the results show that FOCoR achieves the best F1-score.
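A fitness function of the kind FSBGA uses rewards both low model loss and small feature subsets. The weighted form and the `alpha` value below are illustrative assumptions; the paper states only that the fitness combines the two terms.

```python
def fitness(model_loss, n_selected, n_total, alpha=0.9):
    """GA fitness for feature selection: higher is better.

    Combines model quality (1 - loss, assuming loss is in [0, 1])
    with subset compactness (fraction of features left out).  The
    blend and alpha = 0.9 are hypothetical choices for illustration.
    """
    quality = 1.0 - model_loss
    compactness = 1.0 - n_selected / n_total
    return alpha * quality + (1 - alpha) * compactness
```

A genetic algorithm would evaluate this fitness for each candidate bit-mask of features, then select, cross over, and mutate the fittest masks.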
    Stock Volatility Prediction of LightGBM-GRU Model under Corrective Learning Strategy
    SHI Zhi-wei, WU Zhi-feng, ZHANG Zhe
    Computer and Modernization    2023, 0 (01): 95-102.  
    To improve the accuracy of traditional intelligent algorithms in time-series prediction and their adaptability to engineering data problems, a corrective learning strategy is proposed. Volatility is widely used in finance, so predicting stock volatility is of great value; since stock price series are non-linear and non-stationary, predicting stock market volatility is a difficult point in time-series forecasting. This paper carries out a simulation experiment with the corrective learning strategy and designs a LightGBM-GRU model. Using LightGBM as the base model and GRU as the corrector, we predict the volatility of 126 stocks from different industries over the next 10 minutes within a 3-year period. According to RMSPE, MAE, MSE, RMSE and other indicators, the corrective learning strategy can simultaneously improve the accuracy and generalization ability even of well-performing classical ensemble models. This paper points out that in the era of abundant algorithms and big data, the contradiction of intelligent algorithms has become one between their limited versatility and the diversity of engineering problems; corrective learning strategies can provide new ideas for data simulation.
    Climate Change Prediction in Canada Based on VAR Model
    KOU Lu-yan, LIAO Jing, LI Xue-jun, WU Chang-shu, XIONG Jian-hua
    Computer and Modernization    2022, 0 (10): 13-18.  
    The melting of Antarctic glaciers, increasing hurricanes and gradually rising sea levels have made people aware of the great challenges posed by global warming, so research on global climate change is necessary. Missing-data imputation is applied to the data of four representative Canadian provinces, and a vector autoregressive (VAR) model is established that considers solar radiation intensity, carbon dioxide content, soil water content, temperature, rainfall and other factors to study Canada's climate change. The model is fitted with stability tests, impulse response analysis and variance decomposition, and is used to predict temperature and precipitation in Canada. The experimental results show that the average temperature in Canada over the next 25 years will reach 15.0410 ℃ and the average precipitation will reach 2.0950 mm.
    Bearing Fault Diagnosis Based on CWGAN-GP and CNN
    JIANG Lei, TANG Jian, YANG Chao-yue, LYU Ting-ting
    Computer and Modernization    2023, 0 (07): 1-6.   DOI: 10.3969/j.issn.1006-2475.2023.07.001
    Aiming at the small and unbalanced number of bearing fault samples in actual operation, a bearing fault diagnosis method based on the Conditional Wasserstein Generative Adversarial Network with gradient penalty (CWGAN-GP) and a Convolutional Neural Network (CNN) is proposed. First, the CWGAN-GP network is constructed by combining the conditional generative adversarial network (CGAN) with the gradient-penalized Wasserstein-distance-based generative adversarial network (WGAN-GP). Then, a small number of bearing fault samples are fed into CWGAN-GP to obtain high-quality samples similar to the originals; when the network reaches Nash equilibrium, the generated samples are mixed with the original samples to form a new sample set. Finally, the new sample set is input into the convolutional neural network, which learns the sample features for fault diagnosis. Experimental results show that the diagnostic accuracy of the proposed method exceeds 99%, higher than other diagnostic methods, effectively improving diagnostic accuracy and generalization ability.
    Anomaly Detection of Student Consumption Data Based on Semi-supervised Learning
    SONG Xiao-li, ZHANG Yong-bo, ZHANG Pei-ying
    Computer and Modernization    2022, 0 (12): 13-17.  
    Abstract199)            Save
    As campus cards are used in more and more scenarios, the security of campus card funds has become an increasingly prominent problem. Campus card fraud not only brings economic losses to teachers, students and businesses on campus, but also endangers the normal order of the campus. Aiming at the problem that traditional anomaly detection methods cannot effectively extract the temporal features of student consumption data, this paper proposes an anomaly detection method for student consumption data based on semi-supervised learning. Firstly, the auto-encoder is enhanced with gated recurrent units so that the model can reconstruct the consumption data more accurately. Then, the reconstruction error is measured by the Mahalanobis distance, and the error threshold for detecting abnormal data is determined by the Fβ-Score. Finally, the proposed method is applied to detect anomalies in the consumption data of students at a university. Experimental results show that the proposed method has better detection performance.
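The scoring and thresholding steps can be sketched as follows: reconstruction-error vectors are scored by Mahalanobis distance and flagged when the score exceeds a threshold. The Fβ-Score search for that threshold is left out, and the error statistics are assumed given (in practice estimated from normal training data):

```python
import numpy as np

def mahalanobis_scores(errors, mean=None, cov=None):
    """Score reconstruction-error vectors by Mahalanobis distance.

    errors: (n, d) reconstruction errors (input minus autoencoder output).
    mean/cov default to estimates from the errors themselves.
    """
    mean = errors.mean(axis=0) if mean is None else mean
    cov = np.cov(errors, rowvar=False) if cov is None else cov
    inv = np.linalg.inv(cov)
    diff = errors - mean
    # d_i = sqrt(diff_i^T  cov^{-1}  diff_i), vectorised over rows
    return np.sqrt(np.einsum('ij,jk,ik->i', diff, inv, diff))

def flag_anomalies(scores, threshold):
    """An observation is anomalous when its score exceeds the threshold."""
    return scores > threshold
```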
    Reference | Related Articles | Metrics
    Parallax Image Stitching Algorithm Based on GMS and Improved Optimal Seam
    LI Si-jie, TANG Qing-shan, GAO Ying-hua
    Computer and Modernization    2022, 0 (12): 95-101.  
    Abstract194)            Save
    Aiming at the problems of ghosting and uneven brightness in parallax image stitching, this paper proposes a parallax image stitching algorithm based on grid motion statistics (GMS) and an improved optimal seam. Firstly, the ORB (Oriented FAST and Rotated BRIEF) algorithm is used to extract feature points and the GMS algorithm is used to filter out mismatched points. Then, the HSV color space and image gradient difference are introduced to improve the energy function so that the seam avoids passing through image edges; the optimal seam is obtained with the graph-cut method, and gradient-fusion stitching of the images is carried out. The simulation results show that, in the case of large parallax, compared with algorithms based on the scale-invariant feature transform (SIFT) and on speeded-up robust features (SURF), the feature point matching accuracy of this algorithm is increased by at least 2.01 times and at most 4.73 times, and image naturalness is increased by 20.6% on average. Moreover, the stitched image has uniform brightness and no perspective distortion.
    Reference | Related Articles | Metrics
    Extended Isolated Forest Anomaly Detection Algorithm Based on Simulated Annealing
    WANG Shi-yu, XIAO Li-dong, YAN Xin-chun, YING Wen-hao
    Computer and Modernization    2023, 0 (01): 88-94.  
    Abstract193)            Save
    Extended Isolation Forest (EIF) effectively solves the problem that Isolation Forest (iForest) is not sensitive to local abnormal points, but EIF replaces the axis-parallel isolation condition with hyperplanes of random slope, which causes the model to lose part of its generalization ability and increases time cost due to a large number of vector dot-product operations. In response, an Extended Isolation Forest based on Simulated Annealing (SA-EIF) is proposed. The algorithm calculates the accuracy and difference values of each iTree (Isolation Tree) according to its prediction results on the data set and builds a fitness function from them; the simulated annealing algorithm then selects the iTrees with better detection performance to construct an ensemble learning model. K-fold cross-validation results on the ODDS anomaly detection datasets indicate that the SA-EIF algorithm is sensitive to local anomalies, reduces time cost by 20%~40% compared with EIF, and improves recognition accuracy by about 5%~10% over EIF.
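The tree-selection step can be sketched as a simulated-annealing search over fixed-size subsets of iTrees. The `fitness` callable below is a placeholder argument, since the abstract does not give the paper's exact combination of accuracy and difference values:

```python
import math
import random

def sa_select(fitness, n_trees, k, iters=2000, t0=1.0, cooling=0.995, seed=0):
    """Select k of n_trees iTrees by simulated annealing.

    fitness: maps a frozenset of tree indices to a score to maximise
    (e.g. combining per-tree accuracy and diversity, as in SA-EIF).
    Returns the best subset found and its fitness.
    """
    rng = random.Random(seed)
    current = frozenset(rng.sample(range(n_trees), k))
    cur_f = fitness(current)
    best, best_f = current, cur_f
    t = t0
    for _ in range(iters):
        # neighbour: swap one selected tree for one unselected tree
        out = rng.choice(sorted(current))
        inn = rng.choice(sorted(set(range(n_trees)) - current))
        cand = (current - {out}) | {inn}
        f = fitness(cand)
        # accept improvements always, worse moves with Metropolis probability
        if f > cur_f or rng.random() < math.exp((f - cur_f) / max(t, 1e-9)):
            current, cur_f = cand, f
            if f > best_f:
                best, best_f = cand, f
        t *= cooling          # geometric cooling schedule
    return best, best_f
```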
    Reference | Related Articles | Metrics
    Prediction of Railway Freight Volume Based on GS-LSTM Model
    ZHOU Chang-ye, LI Cheng
    Computer and Modernization    2022, 0 (10): 24-28.  
    Abstract183)            Save
    Accurate railway freight volume forecasts are necessary for railway transportation companies to make marketing plans and decisions, and short-term freight volume is especially crucial. In order to improve prediction accuracy, this paper establishes a prediction model (GS-LSTM) that uses the grid search algorithm to optimize the most important parameters of a long short-term memory (LSTM) network: batch size, number of hidden-layer units and learning rate. Based on monthly railway freight volume data from January 2005 to July 2021, BP and LSTM baseline models are first established and compared: the MAPE of the LSTM model is 1.55 percentage points lower than that of the BP model. The network parameters of both models are then optimized and compared: both optimized models predict better than their baselines, and the optimized LSTM model reduces MAPE by a further 0.18 percentage points compared with the optimized BP model. The experimental results show that the optimized LSTM model has better prediction performance and generalization ability, and has good research and practical value.
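The grid search over the three hyperparameters can be sketched as an exhaustive loop. Here `evaluate` is a stand-in for a full train-and-validate run of the LSTM returning the validation error (e.g. MAPE):

```python
from itertools import product

def grid_search(evaluate, batch_sizes, hidden_units, learning_rates):
    """Exhaustively try every combination of the three hyperparameters
    tuned in GS-LSTM (batch size, hidden units, learning rate) and keep
    the combination with the lowest validation error."""
    best_params, best_err = None, float('inf')
    for bs, hu, lr in product(batch_sizes, hidden_units, learning_rates):
        err = evaluate(bs, hu, lr)          # train + validate one setting
        if err < best_err:
            best_params, best_err = (bs, hu, lr), err
    return best_params, best_err
```

Grid search is exhaustive, so its cost is the product of the grid sizes times one training run; that is why only the most influential parameters are included.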
    Reference | Related Articles | Metrics
    A Hybrid Brain Tumor Classification Study Based on CBAM and EfficientNet with Improved Channel Attention
    HUA Xin-yu, QI Yun-song
    Computer and Modernization    2023, 0 (05): 1-7.  
    Abstract183)            Save
    In order to further improve the accuracy and robustness of brain tumor image diagnosis, a novel hybrid brain tumor classification method based on CBAM (Convolutional Block Attention Module) and EfficientNet with an improved channel attention mechanism (IC+IEffxNet) is proposed. The method is divided into two stages. In the first stage, features are extracted by a CBAM model with an improved spatial attention mechanism. In the second stage, the squeeze-and-excitation (SE) block in the EfficientNet architecture is replaced by the efficient channel attention (ECA) block, and the combined feature output of the first stage is used as the input of the second stage. Experiments are conducted on the four-class task of glioma, meningioma, pituitary tumor and normal images from a mixed brain tumor MRI dataset. The results show that the average classification accuracy is about 0.5~2 percentage points higher than that of existing methods, demonstrating the effectiveness of the method and providing a new reference for medical experts in accurately judging brain tumors.
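For reference, the ECA block replaces SE's fully-connected bottleneck with a cheap 1-D convolution across channel descriptors. A numpy sketch of the forward pass, with the learned convolution weights replaced by a fixed average purely for illustration:

```python
import numpy as np

def eca(feature_map, k=3):
    """Efficient channel attention (ECA) forward pass, numpy sketch.

    feature_map: (C, H, W). Channels are squeezed by global average
    pooling, mixed by a 1-D convolution over k neighbouring channels
    (weights fixed to a uniform average here; ECA learns them), then
    used to rescale the input through a sigmoid gate.
    """
    c = feature_map.shape[0]
    squeeze = feature_map.mean(axis=(1, 2))          # (C,) channel descriptor
    pad = k // 2
    padded = np.pad(squeeze, pad, mode='edge')
    mixed = np.array([padded[i:i + k].mean() for i in range(c)])
    gate = 1.0 / (1.0 + np.exp(-mixed))              # sigmoid attention weights
    return feature_map * gate[:, None, None]         # rescale each channel
```

Because the 1-D convolution has only k weights (versus two C×C/r matrices in SE), ECA adds almost no parameters, which is the motivation for the swap.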
    Reference | Related Articles | Metrics
    Face Clustering Method Based on Nearest Neighborhood Aggregation
    WEN Zi-xin, LI Shao-ying, WANG Bin-cheng, LIU Bo,
    Computer and Modernization    2022, 0 (12): 81-87.  
    Abstract173)            Save
    Face clustering is a pre-processing step for tasks such as face annotation and face recognition. By grouping face images, it can reduce the labelling burden and provide high-quality annotation for face recognition models. The challenge of face clustering is to extract the global and local structural knowledge in large-scale face datasets and transfer it to unlabelled ones. To address this issue, a face clustering method based on nearest neighbor aggregation is proposed. The method formulates local structure learning as a link prediction problem and extracts multi-scale neighborhood characteristics through multiple improved residual fully-connected blocks. The experimental results show that the proposed method effectively improves clustering accuracy on the benchmark compared with mainstream face clustering methods.
    Reference | Related Articles | Metrics
    Automatic Sleep Staging Algorithm Based on Self-attention Mechanism and Single Lead ECG
    LI Wei-song, TANG Min-fang, HE Zheng-ling, WANG Peng, DU Li-dong, FANG Zhen, CHEN Xian-xiang
    Computer and Modernization    2022, 0 (12): 50-59.  
    Abstract173)            Save
    Sleep staging based on manual labeling or traditional machine learning methods is complex and inefficient. Deep neural networks improve sleep staging results owing to their powerful ability to extract complex features, but problems remain, such as ignoring the correlation of internal information. To solve this problem, this paper proposes an automatic sleep staging algorithm based on the self-attention mechanism and the single-lead ECG signal, realizing automatic feature extraction and classification with a convolution module, bidirectional gated recurrent units and the self-attention mechanism. From the open Sleep Heart Health Study databases (SHHS1, SHHS2), the Multi-Ethnic Study of Atherosclerosis database (MESA) and the MIT-BIH Polysomnographic database (MITBPD), single-lead ECG data of 1000, 1000, 1000 and 16 subjects respectively are randomly selected for training and testing. The experimental results show that the accuracy of the model's four-class sleep staging (wake, rapid eye movement, light sleep and deep sleep) is 75.77% (kappa=0.63), 81.01% (kappa=0.66), 82.79% (kappa=0.71) and 76.22% (kappa=0.58) respectively, better than the staging results of traditional machine learning algorithms, verifying the validity of the model.
    Reference | Related Articles | Metrics
    Micro-expression Recognition Based on AU-GCN and Attention Mechanism
    ZHAO Jing-hua, YANG Qiu-xiang
    Computer and Modernization    2023, 0 (03): 48-53.  
    Abstract171)            Save
    As expressions of very short duration, micro-expressions can implicitly reveal the true feelings that people try to suppress and hide, and have valuable applications in national security, the judicial system, medicine and political elections. However, because micro-expression datasets are small and micro-expressions have short duration and low amplitude, their recognition faces many difficulties, such as scarce data samples, heavy computation, lack of attention to key features, and susceptibility to over-fitting. Therefore, this paper uses facial action units (AU) to highlight local features through a weighted attention mechanism, applies a graph convolutional network to find the dependencies between AU nodes, and aggregates them into global features for micro-expression recognition. The experimental results show that, compared with existing methods, the proposed method improves accuracy to 79.3%.
    Reference | Related Articles | Metrics
    Classification Method of Small Sample Apple Leaves Based on SE-ResNeXt
    BAI Xu-guang, LIU Cheng-zhong, HAN Jun-ying, GAO Jia-meng, CHEN Jun-kang
    Computer and Modernization    2023, 0 (01): 18-23.  
    Abstract167)            Save
    Based on existing deep learning technology, this study adopts SE-ResNeXt, a variant of the residual neural network, to construct a convolutional neural network model which can automatically classify apple varieties, and trains the model with transfer learning. The data consist of images of 20 types of apple leaves taken at the Apple Industry Base in Jingning County, Gansu Province, with 50 pictures of each type, 1000 pictures in total. On this dataset, six models, namely ResNet50, ResNet101, SE-ResNet50, SE-ResNet101, SE-ResNeXt50 and SE-ResNeXt101, are compared. The results show that SE-ResNeXt101 outperforms the other models, with the highest accuracy rate of 97.5% and a single-image inference time of only 0.125 s. The proposed method provides a means for identifying apple varieties efficiently and accurately, and can greatly assist agricultural research and apple planting.
    Reference | Related Articles | Metrics
    Application of Multimodal Fusion TCN-SSDAEs-RF Method in SA Detection
    YANG Juan, TENG Fei, GUO Da-lin
    Computer and Modernization    2022, 0 (10): 121-126.  
    Abstract167)            Save
    In order to solve the problems that traditional machine learning methods for sleep apnea (SA) detection require extensive feature engineering, which leads to low efficiency, and that models extracting features from single-channel signals achieve poor recognition results, a multimodal feature fusion model based on the temporal convolutional network (TCN) and stacked sparse denoising auto-encoders (SSDAEs) is proposed to realize automatic feature extraction. The model takes two signals, ECG and respiration, as input. First, the TCN extracts the temporal features of the input signals; then the SSDAEs extract the shallow and deep high-dimensional features of the signals. The ECG and respiratory signal features from different feature spaces are fused by a small neural network, and the model is combined with the random forest algorithm to solve the SA fragment detection problem. The experimental results show that the accuracy, sensitivity and specificity of this method in SA fragment detection are 91.5%, 88.9% and 90.8%, respectively. Comparison with previous related studies verifies that the model achieves better SA detection performance with higher efficiency.
    Reference | Related Articles | Metrics
    Prediction Model of Diabetic Complications Based on BP Neural Network Optimized by Improved Genetic Algorithm
    WANG Min, XU Ying-hao, ZHU Xi-jun
    Computer and Modernization    2022, 0 (11): 69-74.  
    Abstract166)            Save
    The BP neural network is one of the most frequently used neural networks in deep learning research. In this paper, an improved genetic algorithm (IGABP) is proposed to optimize the initial structure of the BP neural network. Because the genetic algorithm easily falls into local optima, which limits its optimization ability, it is improved: the selection operator is modified, and the adaptive crossover and mutation probability formulas are also improved. A prediction model of diabetic complications is then constructed to predict their occurrence, and the improved IGABP is compared with BP, GABP and AGABP. The simulation results show that the prediction accuracy of IGABP is significantly better than that of BP, GABP and AGABP, and network convergence is accelerated.
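Adaptive crossover/mutation probabilities are commonly computed in the classic Srinivas-Patnaik form sketched below: individuals fitter than average get a lower rate (protecting good genes), others keep the high rate. The paper improves on formulas of this kind; its exact version is not given in the abstract:

```python
def adaptive_rate(p_high, p_low, f, f_avg, f_max):
    """Adaptive GA crossover/mutation probability (Srinivas-Patnaik form).

    p_high, p_low: rate bounds; f: the individual's fitness;
    f_avg, f_max: mean and maximum fitness of the current population.
    Rates fall linearly from p_high at average fitness to p_low at the
    population maximum; below-average individuals keep p_high.
    """
    if f < f_avg or f_max == f_avg:     # guard against division by zero
        return p_high
    return p_high - (p_high - p_low) * (f - f_avg) / (f_max - f_avg)
```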
    Reference | Related Articles | Metrics
    Medical Knowledge Extraction Based on BERT and Non-autoregressive
    YU Qing, MA Zhi-long, XU Chun
    Computer and Modernization    2023, 0 (01): 120-126.  
    Abstract163)            Save
    In order to avoid the error accumulation and entity overlap problems of pipeline entity relation extraction models, a joint extraction model based on BERT and non-autoregressive decoding is established for medical knowledge extraction. Firstly, sentence encodings are obtained with the BERT pre-trained language model. Secondly, a non-autoregressive method is used to achieve parallel decoding: relation types are extracted, entities are extracted according to the indices of the subject and object entities, and medical triplets are obtained. Finally, the extracted triplets are imported into the Neo4j graph database to realize knowledge visualization. The dataset is derived from manual labeling of data in electronic medical records. The experimental results show that the F1 value, precision and recall of the joint learning model based on BERT and non-autoregressive decoding are 0.92, 0.93 and 0.92, respectively. All three evaluation indicators are improved compared with existing models, indicating that the proposed method can effectively extract medical knowledge from electronic medical records.
    Reference | Related Articles | Metrics
    Composite Object Detection Based on Improved YOLOv3 from High-resolution Remote Sensing Image
    ZHANG Biao, WANG Hui-xian, HAN Bing,
    Computer and Modernization    2022, 0 (12): 74-80.  
    Abstract162)            Save
    Compared with a single object, a composite object in a remote sensing image has multiple structures with certain differences between them, giving it variability and complexity; moreover, remote sensing images are wide with complex backgrounds, containing many areas similar in appearance to the composite object to be detected. These two factors lead to low accuracy in composite object detection. In response, this paper studies composite object detection in high-resolution remote sensing images. It first analyzes object characteristics and labels sample data, then proposes an improved YOLOv3 detection algorithm based on the coordinate attention mechanism and the Focal Loss function, and finally conducts an experiment with the composite target of a basketball court as an example. The experimental results show that, compared with the original YOLOv3 algorithm, the recall rate and average detection accuracy of the improved algorithm are increased by 10.3 and 28.8 percentage points, respectively, verifying the feasibility and rationality of the proposed scheme.
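Focal Loss down-weights easy examples relative to cross-entropy, which is why it helps with cluttered remote-sensing backgrounds. A minimal binary form in numpy:

```python
import numpy as np

def focal_loss(p, y, alpha=0.25, gamma=2.0):
    """Binary focal loss FL = -alpha_t * (1 - p_t)^gamma * log(p_t).

    p: predicted probability of the positive class; y: label in {0, 1}.
    The (1 - p_t)^gamma factor shrinks the loss of well-classified
    examples, focusing training on hard ones.
    """
    p_t = np.where(y == 1, p, 1 - p)            # prob of the true class
    a_t = np.where(y == 1, alpha, 1 - alpha)    # class-balance weight
    return -a_t * (1 - p_t) ** gamma * np.log(np.clip(p_t, 1e-12, 1.0))
```

With gamma = 0 and alpha = 0.5 this reduces to (half the) standard cross-entropy; gamma = 2 is the value commonly used in detection.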
    Reference | Related Articles | Metrics
    Lightweight Object Detection Model for Underwater Sonar Images
    FAN Xin-nan, CHEN Xin-yang, SHI Peng-fei, SUN Huan-ru, LU Liang, ZHOU Zhong-kai
    Computer and Modernization    2023, 0 (03): 16-22.  
    Abstract159)            Save
    With the development of unmanned underwater detection technology, AUVs equipped with sonar have become the main means of underwater object detection. However, due to the complexity of the underwater environment and the limitations of the sonar imaging mode, sonar image resolution is low, so traditional morphology-based object detection methods suffer from low detection accuracy and poor real-time performance. When deep learning algorithms such as YOLO are applied directly to underwater sonar image object detection, they still face challenges such as few underwater samples and many model parameters. This paper proposes a lightweight object detection model for sonar image datasets. In view of the characteristics of low-resolution sonar image data and the real-time requirements of automatic detection on underwater AUVs, the YOLOv4 model is used as the main framework, with model pruning, a replaced and optimized feature fusion module, K-means clustering of target priors and an improved loss function, and the resulting detection model is applied to sonar target detection. According to the experimental data, the mAP of the proposed model is 0.0659, 0.0214, 0.0402 and 0.1701 higher than that of SSD, YOLOv3, YOLOv3-DFPIN and YOLOv4-tiny respectively; its mAP is only 0.0186 lower than that of YOLOv4, 0.0093 lower than CenterNet and 0.0074 lower than EfficientDet-D0, while its FPS is more than twice that of YOLOv4 and CenterNet and more than five times that of EfficientDet-D0. The proposed model thus combines high precision with real-time performance. The experimental results show that the proposed feature extraction network greatly reduces the redundancy of network parameters and improves model efficiency and detection speed. Combined with the adaptive spatial feature fusion module, the mutual fusion and reuse of features at different scales are enhanced, and the accuracy of low-resolution sonar image target detection is improved.
    Reference | Related Articles | Metrics
    Mobile Edge Computing Task Allocation Method Based on Particle Swarm Optimization
    CHEN Gang, WANG Zhi-jian, XU Sheng-chao
    Computer and Modernization    2022, 0 (11): 32-36.  
    Abstract156)            Save
    In order to improve the task allocation efficiency of mobile terminals and reduce computational energy consumption, a mobile edge computing task allocation method based on the particle swarm algorithm is proposed. By building a heterogeneous network, the complete set of tasks to be allocated is obtained, and the specific conditions required for task allocation, namely allocation consumption and delay, are clarified. The allocation task is converted into finding the optimal solution of the allocation result: an optimal-solution model is built and solved with the particle swarm algorithm, and the optimal edge computing task allocation is generated through continuous iteration and updating. The experimental results show that the calculation time of the proposed method is between 1 and 3.3 s when the number of assigned tasks is between 20 and 100; when the number of tasks is 100, the energy consumption of the proposed method is only 4107 J; when it is 100%, the delay is only 12.5 ms. Its task allocation calculation time is short, its energy consumption is small and its data transmission delay is low, which can satisfy practical application requirements.
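The particle swarm solution step can be sketched as follows. The `cost` callable abstracts the paper's weighted consumption-plus-delay objective, whose exact form the abstract does not give; the continuous-variable form below is a generic PSO, not the paper's exact encoding of allocations:

```python
import numpy as np

def pso_minimize(cost, dim, n_particles=30, iters=200, seed=0,
                 w=0.7, c1=1.5, c2=1.5, bounds=(-10.0, 10.0)):
    """Minimise an allocation-cost function with a basic particle swarm.

    Each particle keeps a position x, velocity v and personal best;
    velocities are pulled toward the personal best (c1) and the global
    best (c2), with inertia w.
    """
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))      # positions
    v = np.zeros_like(x)                             # velocities
    pbest = x.copy()
    pbest_f = np.apply_along_axis(cost, 1, x)
    g = pbest[np.argmin(pbest_f)].copy()             # global best position
    for _ in range(iters):
        r1 = rng.uniform(size=x.shape)
        r2 = rng.uniform(size=x.shape)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        f = np.apply_along_axis(cost, 1, x)
        improved = f < pbest_f                       # update personal bests
        pbest[improved], pbest_f[improved] = x[improved], f[improved]
        g = pbest[np.argmin(pbest_f)].copy()
    return g, pbest_f.min()
```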
    Reference | Related Articles | Metrics
    Small Object Detection Method Based on Improved YOLOv5
    WANG Yi-cheng, ZHANG Guo-liang, ZHANG Zi-jie,
    Computer and Modernization    2023, 0 (05): 100-105.  
    Abstract154)            Save
    In order to solve the problems of low detection accuracy and missed detections in the traditional YOLOv5 object detection algorithm, a small object detection method based on improved YOLOv5 is proposed. Firstly, to make anchor boxes better fit small targets, IOU (intersection over union) replaces the Euclidean distance originally used in the K-means clustering process to redefine the distance between anchor boxes and ground truth. Secondly, a max pooling layer with a 3×3 kernel is added to spatial pyramid pooling (SPP) to improve the detection accuracy of small targets. Finally, a dataset containing a variety of small objects is designed to verify the algorithm's performance. Experimental results show that the mean average precision (mAP) of the improved YOLOv5 algorithm reaches 76.92%, which is 3.56 percentage points higher than that of the classical YOLOv5 algorithm; detection performance is improved and previously missed objects can be detected.
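The modified anchor clustering can be sketched as K-means over box shapes with distance d = 1 − IoU, where boxes are compared as width-height pairs with shared centres (the usual convention in YOLO anchor clustering); everything beyond the distance swap is an assumption of this sketch:

```python
import numpy as np

def iou_wh(boxes, anchors):
    """IoU between (w, h) boxes and anchors assuming shared centres."""
    inter = (np.minimum(boxes[:, None, 0], anchors[None, :, 0]) *
             np.minimum(boxes[:, None, 1], anchors[None, :, 1]))
    union = ((boxes[:, 0] * boxes[:, 1])[:, None] +
             (anchors[:, 0] * anchors[:, 1])[None, :] - inter)
    return inter / union

def kmeans_anchors(boxes, k, iters=50, seed=0):
    """K-means over box shapes with distance 1 - IoU instead of the
    Euclidean distance, so clusters follow shape rather than size."""
    rng = np.random.default_rng(seed)
    anchors = boxes[rng.choice(len(boxes), k, replace=False)].astype(float)
    for _ in range(iters):
        # maximising IoU is minimising the distance 1 - IoU
        assign = np.argmax(iou_wh(boxes, anchors), axis=1)
        for j in range(k):
            if np.any(assign == j):          # skip empty clusters
                anchors[j] = boxes[assign == j].mean(axis=0)
    return anchors
```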
    Reference | Related Articles | Metrics
    Byzantine Fault-tolerant Distributed Consistency Algorithm for Edge Computing Applications
    ZHANG Hao, LU Hong-ying
    Computer and Modernization    2022, 0 (12): 33-41.  
    Abstract149)            Save
    In order to solve the problem that edge nodes are easily attacked or captured, producing Byzantine errors that destroy the availability of edge computing applications, a Byzantine fault-tolerant distributed consistency algorithm, Edge-Raft, is designed for edge computing applications. Building on the classical Raft algorithm and accounting for potential Byzantine errors in edge environments, the algorithm introduces mechanisms including digital signatures, synchronous log detection, polling elections, lazy voting and three-phase log synchronization. These give it Byzantine fault tolerance while limiting message-passing complexity to the linear level, ensuring that a cluster in which fewer than 1/3 of the nodes suffer Byzantine errors can still provide effective service to users. Experimental results on different node scales show that, compared with the existing Raft algorithm, the proposed algorithm retains Raft's comprehensibility while ensuring usability and liveness in the edge environment; compared with existing Practical Byzantine Fault Tolerance algorithms, it limits the time complexity of message passing to the linear level, ensuring scalability in multi-node edge environments.
    Reference | Related Articles | Metrics
    Reliability Evaluation Model of BP Neural Network Based on Particle Swarm Optimization
    WANG Ying-ying, ZHUANG Yi, SUN Yi-fan
    Computer and Modernization    2022, 0 (12): 42-49.  
    Abstract148)            Save
    The reliability of the CPU is critical to a computer system. To address the difficulty of parameter optimization and the inaccurate evaluation results of reliability analysis methods such as neural networks, this paper proposes a reliability evaluation model based on a BP neural network optimized by particle swarm optimization. The model improves the PSO algorithm with a sine map and uses it to optimize the weights and thresholds of the BP neural network. Based on the reliability of each functional module in the CPU, a CPU reliability evaluation model is established with the improved BP neural network, and CPU reliability evaluation is completed through model training and testing. Comparative experiments verify the validity and accuracy of the model for CPU reliability evaluation in a radiation environment.
    Reference | Related Articles | Metrics