
Item

  1. Journal Article
  2. Tani Unit

A Novel Predictive-Coding-Inspired Variational RNN Model for Online Prediction and Recognition

https://oist.repo.nii.ac.jp/records/1413
File: Ahmadi-2019-A Novel Predictive-Coding-Inspired (1.3 MB)
Item type: Journal Article(1)
Release date: 2020-04-27
Title: A Novel Predictive-Coding-Inspired Variational RNN Model for Online Prediction and Recognition (Language: en)
Language: eng
Resource type: journal article
Resource type identifier: http://purl.org/coar/resource_type/c_6501
Authors (English): Ahmadi, Ahmadreza; Tani, Jun
Bibliographic information: Neural Computation (en)
Volume 31, Issue 11, pp. 2025-2074, issue date 2019-10-17
Abstract
Description type: Other
Description: This study introduces PV-RNN, a novel variational RNN inspired by predictive-coding ideas. The model learns to extract the probabilistic structures hidden in fluctuating temporal patterns by dynamically changing the stochasticity of its latent states. Its architecture attempts to address two major concerns of variational Bayes RNNs: how latent variables can learn meaningful representations and how the inference model can transfer future observations to the latent variables. PV-RNN does both by introducing adaptive vectors mirroring the training data, whose values can then be adapted differently during evaluation. Moreover, prediction errors during backpropagation, rather than external inputs during the forward computation, are used to convey information about the external data to the network. For testing, we introduce error regression for predicting unseen sequences, a procedure inspired by predictive coding that leverages those mechanisms. As in other variational Bayes RNNs, our model learns by maximizing a lower bound on the marginal likelihood of the sequential data, which is composed of two terms: the negative of the expectation of prediction errors and the negative of the Kullback-Leibler divergence between the prior and the approximate posterior distributions. The model introduces a weighting parameter, the meta-prior, to balance the optimization pressure placed on these two terms. We test the model on two data sets with probabilistic structures and show that, with high values of the meta-prior, the network develops deterministic chaos through which the randomness of the data is imitated. For low values, the model behaves as a random process. The network performs best at intermediate values and is able to capture the latent probabilistic structure with good generalization. Analyzing the meta-prior's impact on the network allows us to study precisely the theoretical value and practical benefits of incorporating stochastic dynamics in our model. We demonstrate better prediction performance on a robot imitation task with our model using error regression, compared to a standard variational Bayes model lacking such a procedure.
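A minimal sketch of the weighted lower bound the abstract describes, where the expectation term corresponds to the negative expected prediction error, the Kullback-Leibler term measures the divergence between the approximate posterior q and the prior p, and w denotes the meta-prior; the notation and the placement of w on the KL term are assumptions drawn from the abstract's wording, not taken from the record:

\mathcal{L}(\theta, \phi) = \mathbb{E}_{q_{\phi}(z \mid x)}\!\left[\log p_{\theta}(x \mid z)\right] - w \, D_{\mathrm{KL}}\!\left(q_{\phi}(z \mid x) \,\|\, p_{\theta}(z)\right)

Maximizing this bound trades off fitting the observed sequences against keeping the approximate posterior close to the prior; per the abstract, high values of the meta-prior lead the network to develop deterministic chaos, low values make it behave as a random process, and intermediate values give the best capture of the latent probabilistic structure.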
Publisher: The MIT Press
ISSN: 0899-7667
ISSN: 1530-888X
PubMed ID: info:pmid/31525309 (relation type: isIdenticalTo)
DOI: info:doi/10.1162/neco_a_01228 (relation type: isIdenticalTo)
Rights: © 2019 Massachusetts Institute of Technology
Related site: https://www.mitpressjournals.org/doi/full/10.1162/neco_a_01228
Version (author version flag): VoR (http://purl.org/coar/version/c_970fb48d4fbd8a85)

