Item

  1. Journal Article
  2. Doya Unit

Sigmoid-weighted linear units for neural network function approximation in reinforcement learning

https://oist.repo.nii.ac.jp/records/659
Name / File: 1-s2.0-S0893608017302976-main.pdf (1.4 MB)
License: Creative Commons Attribution-NonCommercial-NoDerivatives License
(http://creativecommons.org/Licenses/by-nc-nd/4.0/)
Item type: Journal Article
Public release date: 2018-08-14
Title: Sigmoid-weighted linear units for neural network function approximation in reinforcement learning (en)
Language: eng
Resource type identifier: http://purl.org/coar/resource_type/c_6501
Resource type: journal article
Authors (en): Elfwing, Stefan; Uchibe, Eiji; Doya, Kenji
Bibliographic information: Neural Networks (en)
Issue date: 2018-01-11
Abstract
Description type: Other
Description: In recent years, neural networks have enjoyed a renaissance as function approximators in reinforcement learning. Two decades after Tesauro's TD-Gammon achieved near top-level human performance in backgammon, the deep reinforcement learning algorithm DQN achieved human-level performance in many Atari 2600 games. The purpose of this study is twofold. First, we propose two activation functions for neural network function approximation in reinforcement learning: the sigmoid-weighted linear unit (SiLU) and its derivative function (dSiLU). The activation of the SiLU is computed by the sigmoid function multiplied by its input. Second, we suggest that the more traditional approach of using on-policy learning with eligibility traces, instead of experience replay, and softmax action selection can be competitive with DQN, without the need for a separate target network. We validate our proposed approach by, first, achieving new state-of-the-art results in both stochastic SZ-Tetris and Tetris with a small 10 × 10 board, using TD(λ) learning and shallow dSiLU network agents, and, then, by outperforming DQN in the Atari 2600 domain by using a deep Sarsa(λ) agent with SiLU and dSiLU hidden units.
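
The two activations the abstract describes reduce to one line of math each: silu(x) = x·σ(x), and dsilu(x) is its first derivative, σ(x)(1 + x(1 − σ(x))). Below is a minimal NumPy sketch of these ingredients, including the softmax action selection the abstract mentions; the function names, the temperature parameter tau, and the demo values are illustrative assumptions, not the authors' code or settings.

```python
import numpy as np

def sigmoid(x):
    # Logistic function sigma(x) = 1 / (1 + exp(-x)).
    return 1.0 / (1.0 + np.exp(-x))

def silu(x):
    # SiLU: the sigmoid function multiplied by its input, x * sigma(x).
    return x * sigmoid(x)

def dsilu(x):
    # dSiLU: the derivative of the SiLU,
    # d/dx [x * sigma(x)] = sigma(x) * (1 + x * (1 - sigma(x))).
    s = sigmoid(x)
    return s * (1.0 + x * (1.0 - s))

def softmax_action(q_values, tau=1.0, rng=None):
    # Softmax (Boltzmann) action selection over action values; the
    # temperature tau here is an assumed free parameter, not a value
    # taken from the paper.
    rng = rng or np.random.default_rng()
    z = np.asarray(q_values, dtype=float) / tau
    z = z - z.max()                      # stabilize the exponentials
    p = np.exp(z) / np.exp(z).sum()
    return rng.choice(len(p), p=p)

if __name__ == "__main__":
    xs = np.linspace(-4.0, 4.0, 5)
    print("silu :", np.round(silu(xs), 4))    # silu(0) == 0
    print("dsilu:", np.round(dsilu(xs), 4))   # dsilu(0) == 0.5
    print("action:", softmax_action([0.1, 0.5, 0.2], tau=0.5))
```

Note that dsilu has a maximum above 1 for moderate positive inputs and saturates toward 0 and 1 at the extremes, which is why it can serve as a soft, sigmoid-like hidden unit in its own right.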
Publisher: Elsevier Ltd.
ISSN
Source identifier type: ISSN
Source identifier: 0893-6080
PubMed number
Relation type: isIdenticalTo
Identifier type: PMID
Related identifier: info:pmid/29395652
DOI
Relation type: isIdenticalTo
Identifier type: DOI
Related identifier: info:doi/10.1016/j.neunet.2017.12.012
Rights
Rights information: © 2017 The Author(s).
Related site
Identifier type: URI
Related identifier: https://www.sciencedirect.com/science/article/pii/S0893608017302976
Author version flag
Publication type: VoR (Version of Record)
Publication type resource: http://purl.org/coar/version/c_970fb48d4fbd8a85