Modifying tar.gz / tar file to prevent extraction?

Hello,

I'm making backups of sensitive data to my VPS, but I can't fully trust the owner of the VPS node.
So I have an idea that I'd like your comments on.

I can zip the file with a password, but I just discovered zip may not pack files larger than 2 GB; I got an error like:
zip warning: name not matched: /backup/incremental/accounts/yzbuutpo/homedir/public_html/wp-content/themes/twentytwelve/bomba1/dovecot

So I had the idea of renaming my backups to something uninteresting, without the .tar.gz extension.
And as another level of protection: can I somehow easily edit the archive so that a non-professional doesn't recognize what file type it is, and also make it fail to extract, by easily editing the archive content? Can anyone point me to a guide? I'm a Linux amateur. Thank you

PS: I assume I can also use openssl to encrypt the archive, but I think it would eat a lot of resources to encrypt a ~15 GB file…
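If I tried that, I guess it would look something like this (the filenames are just examples, and I'm not sure about the exact options):

openssl enc -aes-256-cbc -salt -in backup.tar.gz -out backup.tar.gz.enc   # encrypt with a passphrase
openssl enc -d -aes-256-cbc -in backup.tar.gz.enc -out backup.tar.gz   # decrypt it later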

Have you considered using encfs to create an encrypted, password-protected directory for your backups?

No, I'm a Linux amateur.

Which distro/version/architecture?

Are we talking about a server edition (without a GUI desktop environment) … or do you have a desktop environment?

encfs isn’t that hard to use … see the man page:

man encfs
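Roughly, it's one command to set up and the same command to mount again later (the paths here are just examples):

encfs /root/.backup-crypt /backup   # first run walks you through creating the encrypted directory; later runs just ask the password and mount
fusermount -u /backup   # unmount when finished

Anything written to /backup is then stored encrypted in /root/.backup-crypt.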

Red Hat / CentOS 5.9, server, SSH only…

You can encrypt files yourself on the command line with gpg - make the backup tar, then encrypt it, then you can upload it safely.

Info is here - https://www.gnupg.org/
Howto here - https://help.ubuntu.com/community/GnuPrivacyGuardHowto

That is too much work for the computer: tar, then encrypt, then transfer. I need some lightweight, one-command way.

You're asking the impossible … if you need to copy/move files and/or use compression, you have no option other than to use I/O, CPU, and, if copying across a network, bandwidth resources.

chemicalfan explained how to limit the impact …

You can limit I/O, CPU, and/or bandwidth impact, but that will cause it to take longer … there is NO magic way to achieve both speed and low system resource usage.
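If it's the number of commands that bothers you, you can at least chain the steps into one pipeline and run it at a low priority … a rough sketch, with example paths:

nice -n 19 sh -c 'tar czf - /backup | gpg -c -o /tmp/backup.tar.gz.gpg'   # tar + compress + encrypt in one go, deprioritized

That still uses the same total CPU and I/O; it just lets other processes go first.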

Thanks, I don't need the task to be fast (I said that badly; I meant the most resource-efficient). I'm just asking: which is the most CPU- and I/O-efficient command to achieve the task?

But that depends on YOUR priorities … nobody else can know those.

chemicalfan told you to look into the use of “nice” to limit scheduling priority.

rsync can sync and compress at the same time … you can limit I/O bandwidth with rsync's --bwlimit= option … and run it with a high nice number so the system favours other processes.

and if you sync to an encrypted folder…

See:

man encfs

and

man nice

and

man rsync
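Putting that together, something like this (the paths, host, and limit are only examples):

nice -n 19 rsync -az --bwlimit=2000 /backup/ user@vps.example.com:/backups/   # -a archive mode, -z compress in transit, --bwlimit in KB/s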

Thanks, that looks like a good way: rsync with compression and nice.
But I don't understand the encfs part; the backup is made automatically to the /backup folder.
Is there an encfs command that needs to run on each boot, or does setup only require a one-time command? Or can the command be added just before the rsync command?
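I mean something like this, just guessing at the paths:

encfs /root/.backup-crypt /backup   # mount first (I assume this is needed again after every reboot)
nice -n 19 rsync -az --bwlimit=2000 /root/.backup-crypt/ user@vps.example.com:/backups/   # upload the encrypted files
fusermount -u /backup   # unmount

?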