Table partitioning works well for me. I detach the partition and move it to an archive schema, which I can then back up and restore into a separate database; once that is complete, I remove the table from the archive schema on the live database.
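As a rough illustration of that sequence (table, partition, schema, and database names below are placeholders, and pg_dump/pg_restore is just one way to do the backup/restore step):

  -- Sketch: detach the partition holding the old rows from the live table.
  -- CONCURRENTLY (PostgreSQL 14+) avoids holding a long lock on the parent.
  ALTER TABLE events DETACH PARTITION events_2023 CONCURRENTLY;

  -- Move the detached table into an archive schema on the live database.
  CREATE SCHEMA IF NOT EXISTS archive;
  ALTER TABLE events_2023 SET SCHEMA archive;

  -- From the shell: dump just that table and restore it into the archive database.
  --   pg_dump -Fc -t archive.events_2023 live_db | pg_restore -d archive_db

  -- Once the restore is verified, drop the table from the live database.
  DROP TABLE archive.events_2023;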
Another thing I have done is set up foreign tables so I can move records directly between databases.
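A minimal sketch of that approach, assuming postgres_fdw, with the server, user mapping, table, and column names invented purely for illustration:

  -- Sketch: expose a table in the archive database as a foreign table here.
  CREATE EXTENSION IF NOT EXISTS postgres_fdw;

  CREATE SERVER archive_srv
    FOREIGN DATA WRAPPER postgres_fdw
    OPTIONS (host 'archive-host', dbname 'archive_db', port '5432');

  CREATE USER MAPPING FOR CURRENT_USER
    SERVER archive_srv
    OPTIONS (user 'archiver', password 'secret');

  CREATE FOREIGN TABLE archive_events (
    id         bigint,
    payload    jsonb,
    created_at timestamptz
  )
    SERVER archive_srv
    OPTIONS (schema_name 'public', table_name 'events_archive');

  -- Move rows older than one year across, then delete them from the source,
  -- in one transaction so the move-and-delete is atomic on the source side.
  BEGIN;
  INSERT INTO archive_events
    SELECT id, payload, created_at
    FROM events
    WHERE created_at < now() - interval '1 year';
  DELETE FROM events
    WHERE created_at < now() - interval '1 year';
  COMMIT;

Wrapping the INSERT ... SELECT and DELETE in the same transaction means rows only disappear from the source if the write to the foreign table succeeded; for very large moves you would batch it by key or date range rather than doing it all at once.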
Joe.
On Fri, May 30, 2025 at 1:51 AM Motog Plus <mplus7535@xxxxxxxxx> wrote:
Hi Team,

We are currently planning a data archival initiative for our production PostgreSQL databases and would appreciate suggestions or insights from the community regarding best practices and proven approaches.

**Scenario:**
- We have a few large tables (several hundred million rows) where we want to archive historical data (e.g., older than 1 year).
- The archived data should be moved to a separate PostgreSQL database (on the same or a different server).
- Our goals are: efficient data movement, minimal downtime, and safe deletion from the source after successful archival.
- PostgreSQL version: 15.12
- Both source and target databases are PostgreSQL.

We explored using `COPY TO` and `COPY FROM` with CSV files uploaded to SharePoint or a similar storage system. However, our infrastructure team raised concerns about the computational load of processing large CSV files and the potential security implications of the file transfers.

We’d like to understand:
- What approaches have worked well for you in practice?
- Are there specific tools or strategies you’d recommend for ongoing archival?
- Any performance or consistency issues we should watch out for?

Your insights or any relevant documentation/pointers would be immensely helpful.

Thanks in advance for your guidance!

Best regards,
Ramzy