feature(tj): integrate PPO into UniZero framework #464
Open
tAnGjIa520 wants to merge 3 commits into opendilab:main
Conversation
- Replace manual GAE computation with `ding.rl_utils.gae_data` and `gae`
- Keep the original implementation as `_batch_compute_gae_for_pool_bak` for backup
- Add a test script to verify GAE computation correctness
- Fix `lunarlander_env.py` to handle both int and numpy array actions
- Add `lunarlander_disc_unizero_ppo_config.py` for PPO training
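The commit replaces a hand-rolled GAE loop with DI-engine's `gae_data`/`gae` utilities. As a reference for what that computation does, here is a minimal NumPy sketch of Generalized Advantage Estimation; the function name `compute_gae` and its signature are illustrative, not the PR's actual API:

```python
import numpy as np

def compute_gae(rewards, values, next_value, dones, gamma=0.99, lam=0.95):
    """Generalized Advantage Estimation over a single trajectory.

    delta_t = r_t + gamma * V(s_{t+1}) * (1 - done_t) - V(s_t)
    advantages[t] = delta_t + gamma * lam * (1 - done_t) * advantages[t+1]
    """
    T = len(rewards)
    advantages = np.zeros(T, dtype=np.float64)
    gae = 0.0
    # Walk the trajectory backwards, accumulating the discounted deltas.
    for t in reversed(range(T)):
        v_next = next_value if t == T - 1 else values[t + 1]
        delta = rewards[t] + gamma * v_next * (1.0 - dones[t]) - values[t]
        gae = delta + gamma * lam * (1.0 - dones[t]) * gae
        advantages[t] = gae
    # Returns (value targets) are advantages plus the value baseline.
    returns = advantages + np.asarray(values, dtype=np.float64)
    return advantages, returns
```

A library version such as DI-engine's `gae` packs `(value, next_value, reward, done, traj_flag)` into a data structure and performs the same backward recursion, which is why the PR can verify the swap with a numerical-equivalence test against the kept backup implementation.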
Integrate PPO into UniZero
Key Changes
- Add `compute_loss_ppo()` in `world_model.py` for PPO loss calculation
- Integrate PPO hyperparameters and training logic in `unizero.py`
- Add GAE computation and log-probability storage in `muzero_collector.py`
- Add PPO data fields (advantages, returns, old log probabilities) in `game_segment.py`
- Add a `collect_with_pure_policy` mode to bypass MCTS
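The stored old log probabilities and GAE advantages listed above are exactly the inputs PPO's clipped surrogate objective needs. As a sketch of what a `compute_loss_ppo()`-style policy loss computes (a NumPy illustration under assumed inputs, not the actual `world_model.py` code, which operates on torch tensors):

```python
import numpy as np

def ppo_clipped_policy_loss(log_probs, old_log_probs, advantages, clip_eps=0.2):
    """PPO clipped surrogate policy loss (a scalar to be minimized).

    ratio = pi_new(a|s) / pi_old(a|s), computed from stored log probs;
    the clip keeps the update within [1 - eps, 1 + eps] of the old policy.
    """
    ratio = np.exp(np.asarray(log_probs) - np.asarray(old_log_probs))
    surr_unclipped = ratio * advantages
    surr_clipped = np.clip(ratio, 1.0 - clip_eps, 1.0 + clip_eps) * advantages
    # Pessimistic (min) bound, negated because optimizers minimize.
    return -np.minimum(surr_unclipped, surr_clipped).mean()
```

In a full PPO objective this policy term is typically combined with a value-function loss against the GAE returns and an entropy bonus, weighted by the hyperparameters the PR integrates into `unizero.py`.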