Collecting ideas on how to speed up recovery #943

Open
toydarian opened this issue Jun 20, 2024 · 0 comments

@toydarian (Contributor)

I have a fairly large database and currently replaying WALs takes over 24 hours.
I'm trying to find ways to speed this up and would appreciate some input.
So far, I have found two possible improvements that could be made to the barman-wal-restore script.

  • The first is in try_deliver_from_spool, where the file is copied rather than moved. Assuming we are not on a copy-on-write file system and the spool is on the same file system as pg_wal, it would be faster to move or hard-link the file instead of copying it (see the first sketch after this list).
  • As far as I understand it, even when running with multiple parallel processes fetching files, the script fetches n files, PostgreSQL replays those n files and then asks for the next one, at which point barman-wal-restore fetches the next batch of n files, and the cycle repeats. I wonder if it would be possible to fetch files continuously, so the database never has to wait for a file to be delivered (see the second sketch below).
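
A minimal sketch of the first idea, assuming the spool and pg_wal are on the same file system; the function name and error handling are illustrative, not Barman's actual code:

```python
# Hypothetical sketch: deliver a WAL file from the local spool by renaming
# (no data copy on the same filesystem) or hard-linking, and only fall back
# to a full copy when neither is possible.
import os
import shutil


def deliver_from_spool(spool_path: str, dest_path: str) -> None:
    try:
        # os.rename avoids copying data when spool and pg_wal share a filesystem.
        os.rename(spool_path, dest_path)
    except OSError:
        try:
            # A hard link also avoids a data copy; the spool entry can be
            # removed later by normal cleanup.
            os.link(spool_path, dest_path)
        except OSError:
            # Different filesystems (or no hard-link support): keep the
            # current behaviour of copying the file.
            shutil.copy2(spool_path, dest_path)
```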
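And a rough sketch of the second idea: a background prefetcher that keeps the spool topped up so the next restore_command call can be served immediately. fetch_from_barman(), next_segments() and the paths are placeholders, not Barman's real API:

```python
# Hypothetical sketch: while the current segment is handed to PostgreSQL,
# fetch the upcoming segments into the spool in the background.
import os
import threading

SPOOL_DIR = "/var/tmp/walrestore-spool"  # illustrative path
PREFETCH_DEPTH = 8                        # keep this many segments ready


def next_segments(current: str, n: int) -> list[str]:
    """Placeholder: names of the n WAL segments following `current`."""
    raise NotImplementedError


def fetch_from_barman(segment: str, dest: str) -> None:
    """Placeholder: fetch one segment from the Barman server into dest."""
    raise NotImplementedError


def prefetch(requested_segment: str) -> None:
    def worker() -> None:
        for seg in next_segments(requested_segment, PREFETCH_DEPTH):
            target = os.path.join(SPOOL_DIR, seg)
            if not os.path.exists(target):
                fetch_from_barman(seg, target)

    # Run in the background so the current request returns immediately;
    # later requests are then served straight from the spool.
    threading.Thread(target=worker, daemon=True).start()
```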

Just to be clear, I don't expect anybody to implement any of this. I'm collecting ideas which I plan to implement myself.
