GUO Xinyi, JIANG Kai, YANG Pengwei, CHEN Geng. Residential Microgrid Scheduling Algorithm Based on Deep Reinforcement Learning[J]. Computer and Modernization, 2025, 0(06): 106-113.
[1] HUA Shuo. Challenges and responses of grid integration of renewable energy[D]. Beijing: North China Electric Power University, 2019. (in Chinese)
[2] YANG Xinfa, SU Jian, LYU Zhipeng, et al. Overview on microgrid technology[J]. Proceedings of the CSEE, 2014,34(1):57-70. (in Chinese)
[3] HAN Mufeng. Multi-time-scale scheduling simulation of microgrid considering demand response[J]. Computer and Modernization, 2023(3):102-106. (in Chinese)
[4] SAEED M H, FANGZONG W, KALWAR B A, et al. A review on microgrids’ challenges & perspectives[J]. IEEE Access, 2021,9:166502-166517.
[5] YE Bin, DAI Lei, MA Jing, et al. Research on the form of future distribution networks oriented to new-type urbanization[J]. Electric Power Demand Side Management, 2019,21(2):56-61. (in Chinese)
[6] HUANG G, WU F, GUO C X. Smart grid dispatch powered by deep learning: A survey[J]. Frontiers of Information Technology & Electronic Engineering, 2022,23(5):763-777.
[7] MA Kaiyao, WANG Guoqing, YU Lei. A review of microgrid optimal scheduling under uncertainty[J]. Journal of Engineering Studies, 2023,15(2):93-103. (in Chinese)
[8] LECUN Y, BENGIO Y, HINTON G. Deep learning[J]. Nature, 2015,521(7553):436-444.
[9] KONG W C, DONG Z Y, JIA Y W, et al. Short-term residential load forecasting based on LSTM recurrent neural network[J]. IEEE Transactions on Smart Grid, 2019,10(1):841-851.
[10] ZHANG Q C, YANG L T, CHEN Z K, et al. A survey on deep learning for big data[J]. Information Fusion, 2018,42:146-157.
[11] YIN Chunjie, XIAO Fada, LI Pengfei, et al. Short-term load forecasting of regional microgrid based on LSTM neural network[J]. Computer and Modernization, 2022(4):7-11. (in Chinese)
[12] ZHU Z Y, PENG G L, CHEN Y H, et al. A convolutional neural network based on a capsule network with strong generalization for bearing fault diagnosis[J]. Neurocomputing, 2019,323:62-75.
[13] GUO Cheng, WANG Xiao, WANG Bo, et al. Short-term power load forecasting method based on multi-layer fusion neural network model[J]. Computer and Modernization, 2021(10):94-99. (in Chinese)
[14] YAO J K, XU J C, ZHANG N, et al. Model-based reinforcement learning method for microgrid optimization scheduling[J]. Sustainability, 2023,15(12):9235.
[15] GAO J K, LI Y, WANG B, et al. Multi-microgrid collaborative optimization scheduling using an improved multi-agent soft actor-critic algorithm[J]. Energies, 2023,16(7):3248.
[16] VAN HASSELT H, GUEZ A, SILVER D. Deep reinforcement learning with double Q-learning[C]// Proceedings of the Thirtieth AAAI Conference on Artificial Intelligence. AAAI Press, 2016:2094-2100.
[17] LILLICRAP T P, HUNT J J, PRITZEL A, et al. Continuous control with deep reinforcement learning[J]. arXiv preprint arXiv:1509.02971, 2015.
[18] THRUN S, LITTMAN M L. Reinforcement learning: An introduction[J]. AI Magazine, 2000,21(1):103-103.
[19] GOODFELLOW I, BENGIO Y, COURVILLE A. Deep learning[M]. MIT Press, 2016.
[20] SILVER D, HUANG A, MADDISON C J, et al. Mastering the game of Go with deep neural networks and tree search[J]. Nature, 2016,529(7587):484-489.
[21] MNIH V, KAVUKCUOGLU K, SILVER D, et al. Playing Atari with deep reinforcement learning[J]. arXiv preprint arXiv:1312.5602, 2013.
[22] WOEGINGER G J. Exact algorithms for NP-hard problems: A survey[C]// Combinatorial Optimization—Eureka, You Shrink!: Papers Dedicated to Jack Edmonds, 5th International Workshop, Aussois, France, 2001, Revised Papers. Springer, 2003:185-207.
[23] BERTSEKAS D. Dynamic programming and optimal control: Volume I[M]. Athena Scientific, 2012.
[24] ARULKUMARAN K, DEISENROTH M P, BRUNDAGE M, et al. A brief survey of deep reinforcement learning[J]. arXiv preprint arXiv:1708.05866, 2017.
[25] ZHANG Z X, LIU Q J, WANG Y H. Road extraction by deep residual U-Net[J]. IEEE Geoscience and Remote Sensing Letters, 2018,15(5):749-753.
[26] LIANG S Y, SRIKANT R. Why deep neural networks for function approximation?[J]. arXiv preprint arXiv:1610.04161, 2016.
[27] HOCHREITER S, SCHMIDHUBER J. Long short-term memory[J]. Neural Computation, 1997,9(8):1735-1780.
[28] GRAVES A. Generating sequences with recurrent neural networks[J]. arXiv preprint arXiv:1308.0850, 2013.