Multi-Agent Reinforcement Learning for Distributed Solar-Battery Energy Systems
https://doi.org/10.15102/1394.00002630
| Field | Value |
|---|---|
| Item type | 学位論文 / Thesis or Dissertation |
| Date of publication | 2023-04-20 |
| Title (ja) | マルチエージェント強化学習による分散太陽光発電蓄電システムの制御 |
| Title (en) | Multi-Agent Reinforcement Learning for Distributed Solar-Battery Energy Systems |
| Language | eng |
| Resource type | doctoral thesis (http://purl.org/coar/resource_type/c_db06) |
| ID registration | 10.15102/1394.00002630 (JaLC) |
| Access rights | open access (http://purl.org/coar/access_right/c_abf2) |
| Author (en) | Huang, Qiong |
| Abstract (en) | Efficient utilization of renewable energy sources, such as solar energy, is crucial for achieving sustainable development goals. As solar energy production varies in time and space depending on weather conditions, how to combine it with distributed energy storage and exchange systems under intelligent control is an important research issue. In this thesis, I explore the use of reinforcement learning (RL) for adaptive control of energy storage in local batteries and energy sharing through energy grids. I first test multiple RL algorithms for energy storage control of single houses. I then extend the Autonomous Power Interchange System (APIS) from Sony to combine it with reinforcement learning algorithms in each house. I consider different design decisions in applying RL: whether to use centralized or distributed control, at what level of detail actions should be learned, what information is used by each agent, and how much information is shared across agents. Based on these considerations, I implemented a deep Q-network (DQN) and a prioritized DQN to set the parameters of the real-time energy exchange protocol of APIS and tested them using actual data collected from the OIST DC-based Open Energy System (DCOES). The simulation results showed that DQN agents outperform rule-based control on energy sharing and that prioritized experience replay further improves the performance of DQN. Simulation results also suggest that sharing average energy production, storage, and usage within the community improves performance. The results contribute to future designs of distributed intelligent agents and effective operation of energy grid systems. |
| Date of oral defense | 2022-11-30 |
| Date of degree conferral | 2023-01-31 |
| Degree name | Doctor of Philosophy |
| Degree number | 甲第117号 |
| Degree-granting institution | Okinawa Institute of Science and Technology Graduate University (kakenhi ID: 38005) |
| Version type | VoR (http://purl.org/coar/version/c_970fb48d4fbd8a85) |
| Rights | © 2023 The Author. |
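
The abstract describes controlling per-house battery storage with deep Q-network (DQN) agents, with prioritized experience replay as a refinement. The sketch below is an illustration only, not the thesis implementation: it trains a single-house DQN on a toy environment using PyTorch, and the environment, network, reward, and all names (`ToyHouseEnv`, `QNet`, `train`) are hypothetical stand-ins, not taken from the thesis, from APIS, or from DCOES data. It uses uniform experience replay; the prioritized variant mentioned in the abstract would additionally weight sampling by temporal-difference error.

```python
# Illustrative sketch only (not from the thesis): a minimal DQN training loop
# for a single-house battery controller on a toy environment.
import random
from collections import deque

import numpy as np
import torch
import torch.nn as nn


class ToyHouseEnv:
    """Toy stand-in: observation = (solar, demand, battery level); actions = charge / hold / discharge."""

    def __init__(self):
        self.battery = 0.5

    def reset(self):
        self.battery = 0.5
        return self._obs()

    def _obs(self):
        # Random solar production and demand stand in for real weather and usage data.
        return np.array([random.random(), random.random(), self.battery], dtype=np.float32)

    def step(self, action):
        solar, demand, _ = self._obs()
        flow = (0.1, 0.0, -0.1)[action]        # charge / hold / discharge the battery
        self.battery = float(np.clip(self.battery + flow, 0.0, 1.0))
        supplied = solar + max(-flow, 0.0)     # solar plus any battery discharge
        reward = -max(demand - supplied, 0.0)  # penalize unmet demand (grid purchase)
        return self._obs(), reward


class QNet(nn.Module):
    """Small MLP mapping the 3-dimensional observation to Q-values for the 3 actions."""

    def __init__(self, n_obs=3, n_act=3):
        super().__init__()
        self.net = nn.Sequential(nn.Linear(n_obs, 64), nn.ReLU(), nn.Linear(64, n_act))

    def forward(self, x):
        return self.net(x)


def train(steps=2000, gamma=0.95, eps=0.1, batch=32):
    env, q = ToyHouseEnv(), QNet()
    opt = torch.optim.Adam(q.parameters(), lr=1e-3)
    buf = deque(maxlen=10_000)  # uniform replay; prioritized replay would sample by TD error instead
    s = env.reset()
    for _ in range(steps):
        if random.random() < eps:  # epsilon-greedy exploration
            a = random.randrange(3)
        else:
            with torch.no_grad():
                a = int(q(torch.from_numpy(s)).argmax())
        s2, r = env.step(a)
        buf.append((s, a, r, s2))
        s = s2
        if len(buf) >= batch:
            sb, ab, rb, s2b = map(np.array, zip(*random.sample(buf, batch)))
            qs = q(torch.from_numpy(sb)).gather(1, torch.from_numpy(ab).long().unsqueeze(1)).squeeze(1)
            with torch.no_grad():  # one-step TD target
                target = torch.from_numpy(rb).float() + gamma * q(torch.from_numpy(s2b)).max(1).values
            loss = nn.functional.mse_loss(qs, target)
            opt.zero_grad()
            loss.backward()
            opt.step()
    return q


if __name__ == "__main__":
    train()
```

In the multi-agent setting described in the abstract, one such agent per house could additionally receive community-level averages of production, storage, and usage as part of its observation; how much of that information to share is one of the design decisions the thesis examines.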