Dump DB from one VM to another on Azure without an intermediate file

Keywords: PostgreSQL - Microsoft Azure - How to - Other
Description:
Hello,

I am trying to restore my database from one VM to another VM on Azure.
The data needs to stay in the Azure cloud, as the DB is too big to bring over to my local machine.
Using the backup tool from pgAdmin I can only select a local backup file.
When I run the command pg_dump -C -h localhost -U localuser db_name | psql remotehost -U remoteuser db_name using PuTTY, I keep getting a password prompt, but I do not know which password it is asking for.

I log in to the Bitnami VM using PuTTY.
I run the command above, updated with the server IP address, user and database name.
There is a prompt for a password, but I do not know which password it wants.

Should I use PuTTY or another tool?
Should I put both VMs in the same virtual network for this to work? So far I have used the external IP address.

Or should I run a similar command in psql?

Thank you for your support.

Hi @mekong,

Please note that the PostgreSQL port is restricted in the firewall by default, so the database won’t allow any external access. An option would be to temporarily relax this restriction following these instructions, and revert the changes once the migration is done.
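In case it helps, here is a hedged sketch of temporarily opening port 5432 with the Azure CLI; the resource group and NSG names below are placeholders, and your deployment may manage the firewall differently (for example through the Azure portal):

# Assumption: the VM's network security group is "my-nsg" in resource group "my-rg"
az network nsg rule create --resource-group my-rg --nsg-name my-nsg \
  --name AllowPostgresTemp --priority 1000 --protocol Tcp \
  --destination-port-ranges 5432 --access Allow

# Revert once the migration is done
az network nsg rule delete --resource-group my-rg --nsg-name my-nsg \
  --name AllowPostgresTemp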

Please note that you can do this in two steps by first writing the dump to a local file inside the source VM:

pg_dump -U postgres db_name > dump_file

Then import it into the new VM with:

psql -h remotehost -U postgres -p 5432 db_name < dump_file
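Since your database is large, a compressed custom-format dump may also be worth considering; this is standard pg_dump/pg_restore usage rather than anything specific to this setup:

# Custom-format dumps are compressed by default
pg_dump -U postgres -Fc db_name > dump_file.dump

# Restore into the target server (the target database must already exist)
pg_restore -h remotehost -U postgres -p 5432 -d db_name dump_file.dump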

Would that work for you?

Hello Alejandror,

Thank you for your support. Yes, I would prefer the two-step process.

Where would you store the local file on the Bitnami VM?
I was already experimenting with import and export files, but I could only write to the temp directory, and the size there is limited.
Or am I missing something?

Regards,
Kim

Hi @mekong,

Sorry for the misunderstanding. I thought you were concerned about the dump file being too big for your local computer, not for the VM where you would be generating it.

In this case, I think some of the options we have are:

  • Create a new disk with enough space, attach it to your running instance, and create the dump file on that disk. This could also make the import process safer, as you could detach the disk, reattach it to the new VM, and import the dump file locally (see the sketch after this list). You can find some information on how to do this in this link to the Azure docs.
  • Transfer the data through an SSH connection (this does not require opening the DB port to the public during the migration). For this, you should ensure your source VM has SSH access to the target VM (this may involve using SSH Agent Forwarding when connecting to the source VM). Then follow these steps in the source VM:
    • Create an SSH tunnel from port 55432 of your source VM to port 5432 (PostgreSQL) of the target one (using & to run it in the background):

      ssh -N -L 55432:localhost:5432 bitnami@remote_host &
      
    • (Recommended) Use the password file (~/.pgpass) to avoid the interactive password prompts:

      echo 'localhost:5432:DB_NAME:postgres:SOURCE_DB_PASSWORD' >> ~/.pgpass
      echo 'localhost:55432:DB_NAME:postgres:TARGET_DB_PASSWORD' >> ~/.pgpass
      # libpq ignores ~/.pgpass unless only the owner can read it
      chmod 600 ~/.pgpass
      
    • Test that the connection works (and doesn’t prompt for a password):

      psql -U postgres -h localhost -p 55432 DB_NAME
      
    • Use pg_dump with a pipe to the remote host:

      pg_dump -U postgres DB_NAME | psql -U postgres -h localhost -p 55432 DB_NAME
      
    • (Optional) Remove the credentials from the ~/.pgpass file.
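For the first option, here is a rough sketch of attaching a data disk and writing the dump to it; every name and size below is a placeholder assumption, and the Azure docs linked above cover the details:

# Assumption: VM "source-vm" in resource group "my-rg"; create and attach a new 256 GB data disk
az vm disk attach --resource-group my-rg --vm-name source-vm \
  --name dump-disk --new --size-gb 256

# Inside the VM: find the new device (e.g. /dev/sdc), format and mount it
lsblk
sudo mkfs.ext4 /dev/sdc
sudo mkdir -p /mnt/dump
sudo mount /dev/sdc /mnt/dump
sudo chown "$(whoami)" /mnt/dump

# Write the dump to the new disk instead of the size-limited temp directory
pg_dump -U postgres db_name > /mnt/dump/dump_file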

Would any of the above methods work for you?

Hi Alejandror,

Thank you for your reply.
I will give it a try.

Thanks,
Kim
