remote copy of Veeam backup?

j.backus

Dear reader,

I want to synchronize Veeam backups to a remote QNAP NAS appliance. Veeam does a reverse incremental backup: it updates the main image (c. 200 GB) every day and keeps reverse incremental files to allow going back to a previous state.

The Veeam backup server is a Windows XP PC on which I run the following cmd file:

@ECHO OFF

REM Make environment variable changes local to this batch file
SETLOCAL

REM Where to find rsync and related files
SET CWRSYNCHOME=C:\Program Files\cwRsync

REM Set HOME to your appdata directory, so that the ssh command
REM creates known_hosts in a directory you have access to
SET HOME=C:\Documents and Settings\admin\Application Data

REM Put the cwRsync home directory on the system PATH so the required DLLs are found
SET CWOLDPATH=%PATH%
SET PATH=%CWRSYNCHOME%\BIN;%PATH%

"C:\Program Files\cwRsync\bin\rsync.exe" --recursive --compress --verbose --inplace --partial --password-file="/cygdrive/C/Program Files/cwRsync/password-file.txt" "/cygdrive/F/backup" "user@123.123.123.123::Backup/"

My question: is this the optimal configuration, or should I set it up differently?

With best regards,

 

Jac

itefix

Command syntax seems ok. Be aware that the --inplace option updates the existing file directly, meaning you have no fallback copy if something goes wrong mid-transfer. The default rsync behaviour is to build a temporary copy and rename it into place once the transfer completes. As your files are big ones, I can understand that --inplace is a requirement due to free disk space.
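
If free space on the receiver ever allows it, a variant without --inplace keeps that safer default (temporary copy, then rename). A minimal sketch based on your command; the --partial-dir name is only an illustration, it is where interrupted transfers are parked so they can be resumed:

REM Sketch: rsync's default temp-copy-then-rename behaviour, no --inplace
REM Needs roughly one extra file's worth of free space on the receiver
"C:\Program Files\cwRsync\bin\rsync.exe" --recursive --compress --verbose ^
  --partial --partial-dir=.rsync-partial ^
  --password-file="/cygdrive/C/Program Files/cwRsync/password-file.txt" ^
  "/cygdrive/F/backup" "user@123.123.123.123::Backup/"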

j.backus

Hello TK,

Thanks for the reply. I also use --inplace because it takes a long time to copy 220 GB remotely.

The changes in the image are about 4-5 GB each day, so I thought rsync would be a good option. Unfortunately, rsync has now been running for several days, and each night the image is changed again. I can run a batch job after the backup, and I had hoped the rsync run would finish within, say, 20 hours. So far this is somewhat disappointing...
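
(As a rough sanity check, assuming the line sustains about 1 MB/s: 5 GB of changed data is some 5,000 MB, i.e. roughly 5,000 seconds or under 1.5 hours of pure transfer. A run that takes days therefore suggests the bottleneck is the checksum pass over the full 220 GB image, or the boxes themselves, rather than the volume of changed data.)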

With best regards,

Jac

itefix

I recommend setting up a test cwRsync server locally and trying to rsync your backup files with the --stats option. That way you will be able to see the speed gain after an initial rsync. It is possible that those 4-5 GB of changes are randomly distributed within the image every time, reducing the effectiveness of rsync's delta algorithm.
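
For instance (127.0.0.1 stands in for your local test daemon, module name Backup as in your setup; adjust authentication to match the test daemon):

REM Local test run; --stats prints a transfer summary at the end
rsync --recursive --compress --inplace --stats "/cygdrive/F/backup" "user@127.0.0.1::Backup/"

In the summary, compare "Literal data" (bytes actually transmitted) with "Matched data" (bytes reused from the existing copy). If "Literal data" stays close to the full daily change volume run after run, the delta algorithm is gaining you little.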

j.backus

Hello TK,

After a few minutes I get an error:

C:\Program Files\cwRsync>debug.cmd
opening tcp connection to 123.123.123.123 port 873
Connected to 123.123.123.123 (123.123.123.123)
sending daemon args: --server -vvvrze.iLsf --inplace . Backup/
sending incremental file list
[sender] make_file(backup,*,0)
send_file_list done
[sender] make_file(backup/Default job FA,*,2)
[sender] make_file(backup/Default Job FA_1,*,2)
[sender] make_file(backup/Default Job FA_2,*,2)
send_files starting
[sender] make_file(backup/Default Job FA_2/Default Job FA.vbm,*,2)
[sender] make_file(backup/Default Job FA_2/Default Job FA2012-05-18T230015.vbk,*,2)
[sender] make_file(backup/Default Job FA_2/Default Job FA2012-08-30T230034.vrb,*,2)
[sender] make_file(backup/Default Job FA_2/Default Job FA2012-08-31T230018.vrb,*,2)
[sender] make_file(backup/Default Job FA_2/Default Job FA2012-09-03T230018.vrb,*,2)
[sender] make_file(backup/Default Job FA_2/Default Job FA2012-09-04T230012.vrb,*,2)
[sender] make_file(backup/Default Job FA_2/Default Job FA2012-09-05T230021.vrb,*,2)
[sender] make_file(backup/Default Job FA_2/Default Job FA2012-09-06T230015.vrb,*,2)
server_recv(2) starting pid=2983
received 1 names
recv_file_list done
received 3 names
recv_file_list done
get_local_name count=4 /
generator starting pid=2983
delta-transmission enabled
recv_files(1) starting
recv_generator(backup,1)
recv_generator(backup,2)
send_files(2, /cygdrive/F/backup)
recv_generator(backup/Default Job FA_1,3)
recv_generator(backup/Default Job FA_2,4)
recv_generator(backup/Default job FA,5)
received 0 names
recv_file_list done
recv_generator(backup/Default Job FA_1,6)
received 8 names
recv_file_list done
send_files(6, /cygdrive/F/backup/Default Job FA_1)
recv_generator(backup/Default Job FA_2,7)
send_files(7, /cygdrive/F/backup/Default Job FA_2)
recv_generator(backup/Default Job FA_2/Default Job FA.vbm,8)
generating and sending sums for 8
send_files(8, /cygdrive/F/backup/Default Job FA_2/Default Job FA.vbm)
count=105 rem=583 blength=700 s2length=2 flength=73383
send_files mapped /cygdrive/F/backup/Default Job FA_2/Default Job FA.vbm of size 73388
calling match_sums /cygdrive/F/backup/Default Job FA_2/Default Job FA.vbm
backup/Default Job FA_2/Default Job FA.vbm
built hash table
hash search b=700 len=73388
match at 6590 last_match=0 j=78 len=700 n=6590
match at 7290 last_match=7290 j=79 len=700 n=0
match at 7990 last_match=7990 j=80 len=700 n=0
match at 9390 last_match=8690 j=82 len=700 n=700
match at 10090 last_match=10090 j=83 len=700 n=0
match at 11396 last_match=10790 j=23 len=700 n=606
match at 12190 last_match=12096 j=86 len=700 n=94
match at 13496 last_match=12890 j=26 len=700 n=606
match at 14290 last_match=14196 j=89 len=700 n=94
match at 14990 last_match=14990 j=90 len=700 n=0
match at 16394 last_match=15690 j=92 len=700 n=704
match at 18498 last_match=17094 j=95 len=700 n=1404
match at 19198 last_match=19198 j=96 len=700 n=0
match at 21302 last_match=19898 j=99 len=700 n=1404
match at 22002 last_match=22002 j=100 len=700 n=0
match at 23406 last_match=22702 j=102 len=700 n=704
match at 24106 last_match=24106 j=103 len=700 n=0
done hash search
sending file_sum
false_alarms=0 hash_hits=124 matches=17
sender finished /cygdrive/F/backup/Default Job FA_2/Default Job FA.vbm
recv_generator(backup/Default Job FA_2/Default Job FA2012-05-18T230015.vbk,9)
generating and sending sums for 9
send_files(9, /cygdrive/F/backup/Default Job FA_2/Default Job FA2012-05-18T230015.vbk)
count=1757592 rem=41472 blength=131072 s2length=5 flength=230371009024
received 0 names
recv_file_list done
recv_files(backup)
recv_files(backup/Default Job FA_1)
recv_files(backup/Default Job FA_2)
recv_files(backup/Default Job FA_2/Default Job FA.vbm)
recv mapped backup/Default Job FA_2/Default Job FA.vbm of size 73383
got file_sum
finishing backup/Default Job FA_2/Default Job FA.vbm
rsync error: timeout in data send/receive (code 30) at io.c(137) [receiver=3.0.7]
rsync: connection unexpectedly closed (6228689 bytes received so far) [sender]
rsync error: error in rsync protocol data stream (code 12) at io.c(614) [sender=3.0.9]
[sender] _exit_cleanup(code=12, file=io.c, line=614): about to call exit(12)

C:\Program Files\cwRsync>

Could you tell me why the synchronization fails? This is repeatable; the byte count is more or less the same each time.

Thanks!

With best regards,

 

Jac

itefix

From your log:

rsync error: timeout in data send/receive (code 30) at io.c(137) [receiver=3.0.7]

Your NAS box runs rsync version 3.0.7, dated December 2009. It is also well known that the hardware performance of consumer-oriented NAS boxes varies widely.

Recommendations:

  • Upgrade rsync on your NAS box to the latest version (3.0.9 as of now), if possible.
  • Increase the timeout value if you use the --timeout option.
  • Introduce a bandwidth limit so that your NAS box can process requests in a timely fashion (option --bwlimit, can be specified on both sides); a combined sketch follows below.
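
A sketch combining the last two points, based on the command from the first post (600 and 2000 are placeholder values to tune for your link; in rsync 3.0.x --timeout is in seconds and --bwlimit in KBytes per second):

REM Placeholder values: --timeout in seconds, --bwlimit in KB/s
"C:\Program Files\cwRsync\bin\rsync.exe" --recursive --compress --verbose ^
  --inplace --partial --timeout=600 --bwlimit=2000 ^
  --password-file="/cygdrive/C/Program Files/cwRsync/password-file.txt" ^
  "/cygdrive/F/backup" "user@123.123.123.123::Backup/"

Note that the daemon side has its own timeout parameter in rsyncd.conf, so the client-side --timeout alone may not remove a receiver-side timeout.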
j.backus

Hello TK,

Thanks for the recommendations!

The bandwidth limit makes no difference. The timeout value is 0, i.e. indefinite.

I can't upgrade rsync on the QNAP box. You suggested running the cwRsync server on the Windows PC. I have installed it and write to a share on the QNAP box, but I can't get this working: whatever I configure on the QNAP, I always get a permission error: @ERROR: failed.

The QNAP box is a domain member, and the cwRsync server runs as a domain admin.

With best regards,

Jac

j.backus

Should be: @ERROR: chdir failed

j.backus

A little addition.

The share is mapped as follows:

OK           P:        \\fa-nas2\Backup          Microsoft Windows Network

This is in rsyncd.conf on the server:

[Backup]
path = /cygdrive/p/
read only = false
transfer logging = yes

This is the error on the server side:

2012/09/11 15:13:15 [4948] rsync: chdir /cygdrive/p failed: No such file or directory (2)

itefix

The service account running the Rsync service has no knowledge of the existence of drive P:, so that workaround will not work. What I wanted to know is whether you could run successful backups to a Windows-based rsync server instead of the QNAP.
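
If you still want the Windows server to front the QNAP share, one possible workaround (untested here, and assuming the service account can authenticate against \\fa-nas2) is to point the module at the UNC path instead of a mapped drive letter, since Cygwin resolves //server/share directly:

[Backup]
# //fa-nas2/Backup is the UNC form of the share mapped as P: above
path = //fa-nas2/Backup
read only = false
transfer logging = yes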