Computer and Modernization ›› 2022, Vol. 0 ›› Issue (11): 75-80.


Deep Q-learning Based Task Offloading in Power IoT


  1. State Grid Electric Power Research Institute, Nari Group Corporation, Nanjing 211000, China; 2. College of Telecommunications and Information Engineering, Nanjing University of Posts and Telecommunications, Nanjing 210003, China
  Online: 2022-11-30    Published: 2022-11-30

Abstract: With the increasing demand for electricity in modern cities and industrial production, the power internet of things (PIoT) has attracted extensive attention and is considered a solution that can significantly improve the efficiency of power systems. To establish effective access, power equipment is now often equipped with 5G modules with lightweight built-in AI. However, the limited computing and communication capabilities of these modules make real-time processing and analysis of the massive data generated by the equipment a great challenge. In this paper, we focus on task offloading in the PIoT system. By jointly optimizing task scheduling and the computing resource allocation of edge servers, the weighted sum of latency and energy consumption is reduced. We propose a task offloading algorithm based on deep reinforcement learning. First, task execution on each edge server is modeled as a queuing system. Then, the local computing resource allocation is optimized based on convex optimization theory. Finally, a deep Q-learning algorithm is proposed to optimize the task offloading decisions. Simulation results show that the proposed algorithm can significantly reduce latency and energy consumption.
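To illustrate the kind of deep Q-learning agent the abstract describes, the following is a minimal sketch, not the paper's implementation. The state, action, and reward definitions are assumptions chosen for illustration: the state collects edge-server queue backlogs plus the size of the arriving task, the action selects the execution site (local device or one of the edge servers), and the reward is the negative weighted sum of estimated latency and energy consumption. Network sizes and hyperparameters are likewise illustrative.

```python
# Hypothetical DQN-style offloading agent (sketch; not the paper's exact model).
#   state  - edge-server queue backlogs + size of the arriving task (assumed)
#   action - index of the chosen execution site, 0 = local device (assumed)
#   reward - negative weighted sum of latency and energy consumption (assumed)
import random
from collections import deque

import torch
import torch.nn as nn
import torch.optim as optim


class QNetwork(nn.Module):
    """Maps an offloading state to Q-values, one per candidate execution site."""
    def __init__(self, state_dim: int, num_actions: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(state_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, num_actions),
        )

    def forward(self, state: torch.Tensor) -> torch.Tensor:
        return self.net(state)


class DQNOffloader:
    def __init__(self, state_dim, num_actions, gamma=0.95, lr=1e-3, eps=0.1):
        self.q = QNetwork(state_dim, num_actions)
        self.target_q = QNetwork(state_dim, num_actions)
        self.target_q.load_state_dict(self.q.state_dict())
        self.opt = optim.Adam(self.q.parameters(), lr=lr)
        self.replay = deque(maxlen=10_000)   # experience replay buffer
        self.gamma, self.eps, self.num_actions = gamma, eps, num_actions

    def act(self, state):
        # epsilon-greedy offloading decision
        if random.random() < self.eps:
            return random.randrange(self.num_actions)
        with torch.no_grad():
            q_values = self.q(torch.as_tensor(state, dtype=torch.float32))
        return int(q_values.argmax())

    def store(self, s, a, r, s_next):
        self.replay.append((s, a, r, s_next))

    def learn(self, batch_size=64):
        if len(self.replay) < batch_size:
            return
        batch = random.sample(self.replay, batch_size)
        s, a, r, s_next = (torch.as_tensor(x, dtype=torch.float32)
                           for x in zip(*batch))
        # Q(s, a) for the actions actually taken
        q_sa = self.q(s).gather(1, a.long().unsqueeze(1)).squeeze(1)
        with torch.no_grad():
            # one-step bootstrapped target from the target network
            target = r + self.gamma * self.target_q(s_next).max(dim=1).values
        loss = nn.functional.mse_loss(q_sa, target)
        self.opt.zero_grad()
        loss.backward()
        self.opt.step()

    def sync_target(self):
        # periodically copy online weights into the target network
        self.target_q.load_state_dict(self.q.state_dict())
```

In use, an environment model of the PIoT system would supply the latency and energy estimates that form the reward; the queuing model and convex-optimization step mentioned in the abstract are not reproduced here.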

Key words: power internet of things, edge offloading, resource allocation, deep reinforcement learning, 5G modules