Manually Restore Bacula Without a Database

OK, another problem I had. I thought my data was gone for good, even though I do remember Bacula was doing all the backups! And I finally found a way to get those 1TB of files back! Even if you don't know much about Bacula, you do know where those files are stored, right? Those files are called 'Volumes', and we will be using these volumes to restore our backup, with the help of the Bacula volume utility tools to extract that precious data!

What's in the Bacula Volume?

Before you can do anything at all, the first thing you need to do is scan your volume to see whether your stuff is actually in there!

bls -j -V volume-0177 devicenamehere

and the above will show you something like the one below,


Begin Job Session Record: File:blk=0:8814 SessId=161 SessTime=1480534092 JobId=481
   Job=job.name.com.2017-01-20_01.00.00_33 Date=25-Jan-2017 21:26:12 Level=I Type=B
End Job Session Record: File:blk=0:8814 SessId=161 SessTime=1480534092 JobId=481
   Date=25-Jan-2017 22:53:20 Level=I Type=B Files=2 Bytes=942 Errors=0 Status=T

What's important in the output above are SessId and SessTime, because we need them to create a bootstrap file. Create a file called bootstrap.bsr as shown below,

Volume = volume-0177
VolSessionId = 161
VolSessionTime = 1480534092
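
Optionally, if more than one job ended up on that volume, you can pin down the exact job as well. This is just a sketch, assuming the JobId=481 from the bls output above (JobId is one of the standard bootstrap directives):

Volume = volume-0177
VolSessionId = 161
VolSessionTime = 1480534092
JobId = 481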

Now, with this information, we will be able to extract the data out of the Bacula volume!

Extracting the Bacula Volume

There are a few ways to extract data from a Bacula volume. You can use the bootstrap file created above and fire the command below,

bextract -p -b ./bootstrap.bsr devicename /home

or you can specify which volume you want to extract, without a bootstrap file, as shown below,

bextract -p -V volume-0177 devicename /home

and the files will start extracting to the /home directory, where volume-0177 is the name of the volume file you wish to restore and devicename is the actual device name you will find in your /etc/bacula/bacula-sd.conf file.

The following shows you some options you can add to your command,

Usage: bextract [-d debug_level] <device-name> <directory-to-store-files>
       -b <file>       specify a bootstrap file
       -dnn            set debug level to nn
       -e <file>       exclude list
       -i <file>       include list
       -p              proceed inspite of I/O errors
       -V              specify Volume names (separated by |)
       -?              print this message
  • -p is useful if your backup is something like 1TB and an I/O error shows up after 50 hours of extracting; -p tells bextract to keep going instead of giving up.
  • -i takes the path to an include list, so only those files or folders are restored (see the example after this list).
  • -e takes the path to an exclude list, so those files or folders are left out of your restoration plan.
  • -V specifies a volume, as shown in my example.
  • -b takes the path to a bootstrap file that tells bextract exactly what you want to restore.
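
As an example of -i, you could list the full paths you want in a plain text file and point bextract at it. This is only a sketch; include.list and the paths inside it are placeholders of my own, and if I remember the format right it is simply one full path per line:

# ./include.list (one full path per line)
/var/www/html
/etc/hosts

bextract -p -i ./include.list -V volume-0177 devicename /home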

Now, go save your own ass from getting whooped! Peace out!


Schedule Rsync Backup From Windows to Linux Server

Windows, WHY ARE YOU ALWAYS SO DIFFICULT! Gosh. This time, I wanted to schedule a backup from my Windows Server 2012 R2 to my Linux backup drive. It's as simple as that (or at least I thought it was). Google doesn't help, with so much rubbish online. Hence, here is a guide that will help us out (me included).

Environment

Enterprise server (Windows 2012 R2)

This is the Windows Server 2012 R2 environment where our data lives.

Backup server (Debian Linux)

This is the backup server I would like to rsync over to.

 

Installation

On Windows Server 2012 R2

  • Download cwRsync
  • Unzip cwRsync and copy it to "C:\cwRsync".
  • Add "C:\cwRsync\bin" to PATH.
  • Create the directories "C:\cwRsync\home" and "C:\cwRsync\home\USER" (USER should be the name of the user who will run rsync; in my case it's "admin").
  • Create public/private keys with the following command:
  • ssh-keygen -t rsa
    • Paths like "/home/USER/" correspond to the directories we created under "C:\cwRsync\".
    • Leave the passphrase blank (see the command-prompt sketch after this list).
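
For reference, the directory and PATH steps above can be done from an elevated Command Prompt as well. A rough sketch, assuming the install path C:\cwRsync and the user "admin":

rem create the cwRsync home directories
mkdir C:\cwRsync\home\admin
rem append cwRsync to the user PATH (or edit PATH via System Properties instead)
setx PATH "%PATH%;C:\cwRsync\bin"
rem generate the key pair; just press Enter for an empty passphrase
ssh-keygen -t rsa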

On Linux

  • Install openssh-server and rsync.
  • Prepare a partition or directory to receive the data (e.g. /backup/).
  • Place the public key in the user's ~/.ssh/ directory and rename the file to authorized_keys (that's /root/.ssh/ if, like me, you are using root). The commands after this list show the idea.
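
A minimal sketch of the Linux side, assuming Debian, a /backup partition, and that you really are logging in as root with a public key file called id_rsa.pub:

# install the SSH server and rsync
apt-get install openssh-server rsync
# the directory that will receive the backups
mkdir -p /backup
# install the Windows user's public key for password-less login
mkdir -p /root/.ssh && chmod 700 /root/.ssh
cat id_rsa.pub >> /root/.ssh/authorized_keys
chmod 600 /root/.ssh/authorized_keys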

On Windows

  • Test the connection without a password with the following command:
ssh USER@BackupServerIP
  • Test Rsync:
rsync -v -rlt -z --delete "/myfiles/" "USER@BackupServerIP:/backups/"
  • cwRsync uses Cygwin-style paths, so "/myfiles/" above corresponds to C:\cygdrive\myfiles on the Windows side.
  • To test a different SSH port:
rsync -e "ssh -p 14000" -arv "--exclude=.svn/" /myfiles USER@BackupServerIP:/backups/
  • Create a bat file with the rsync command and place it in C:\cwRsync\bin.
  • Schedule execution every day at 0:30 (half past midnight) with Task Scheduler; see the sketch after this list.
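
A minimal sketch of the batch file and the scheduled task; the task name, paths, USER and BackupServerIP are placeholders of my own:

rem --- C:\cwRsync\bin\backup.bat ---
@echo off
rsync -v -rlt -z --delete "/myfiles/" "USER@BackupServerIP:/backups/"

rem --- register it with Task Scheduler to run daily at 00:30 ---
schtasks /create /tn "NightlyRsync" /tr "C:\cwRsync\bin\backup.bat" /sc daily /st 00:30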

Helpful Resources

  • http://stackoverflow.com/questions/34147565/rsync-uid-gid-impossible-to-set-cases-cause-future-hard-link-failure-how-to
  • http://www.smellems.com/tiki-read_article.php?articleId=14

PID:4 using Port 80 In Windows Server 2012 R2

I will cut to the chase: if you suspect something is using port 80 and are trying to find out what it is, here are some suggestions. Try stopping the following services:

  • IIS
  • World Wide Web Publishing service
  • IIS Admin Service
  • SQL Server Reporting services
  • Web Deployment Agent Service

And if the NT kernel (PID 4) is still listening on port 80, you just hit the same jackpot I did: it's BranchCache. Try removing it under "Remove Roles and Features" in "Server Manager".

Once you remove that, restart your server and port 80 should be all yours. Verify using the following command,


netstat -nao | find ":80"

and nothing should be listening on :80 any more.
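
If you prefer the command line over the wizard, something along these lines should also do it; PeerDistSvc is, as far as I know, the internal service name for BranchCache:

rem stop BranchCache and keep it from coming back on reboot
sc stop PeerDistSvc
sc config PeerDistSvc start= disabled
rem then check port 80 again
netstat -nao | find ":80"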

Good LUCK!


How to move all cPanel accounts to new server via command line

This is a short how-to tutorial on migrating all cPanel accounts from my old 1.5TB server to a new SSD server through the command line.

Backup all cPanel accounts

I am assuming you know what you want, so first we have to back up all the cPanel accounts on the old system using the following command:

ls /var/cpanel/users | while read a; do
  /scripts/pkgacct $a
done

Remember to run this inside a screen session, as it might take a while if there are a lot of accounts on your machine.
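
If you have not used screen before, the idea is simply this (the session name pkgacct is just an example):

screen -S pkgacct      # start a named session and run the loop inside it
# press Ctrl-a d to detach; the job keeps running
screen -r pkgacct      # reattach later to check on progress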

Transfer all cPanel accounts

Now we need to transfer all the cPanel accounts from the old server to the new one. Notice that the pkgacct script generates the cpmove archives in the /home directory; use the command below,

rsync -av --progress /home/*.tar.gz root@192.168.0.2:/home

where 192.168.0.2 is your new server. Now all the files are on their way to the new server!

Restore all cPanel accounts

Finally, on the new server, fire the following commands:

ls /home/ | awk -F'[-.]' '{print $2}' | while read a; do
  /scripts/killacct --user=$a
  /scripts/restorepkg $a
done

Similarly, remember to run this inside a screen session before doing the above.
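
If /home on the new server contains anything besides the cpmove archives, a slightly more defensive variant of the loop above (same idea, just globbing the archives explicitly; this assumes pkgacct produced files named cpmove-USER.tar.gz) would be:

for f in /home/cpmove-*.tar.gz; do
  a=$(basename "$f" .tar.gz)   # cpmove-USER
  a=${a#cpmove-}               # USER
  /scripts/killacct --user=$a
  /scripts/restorepkg $a
done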

Change new server ip address

You might want to change the IP address of your new server to the old one. Do the following:

To change the server's main IP address, perform the following steps:

  • Open the /etc/sysconfig/network-scripts/ifcfg-eth0 file with a text editor.
  • Edit the IPADDR and GATEWAY lines to use the IP address and gateway of your old server (see the snippet after this list).
  • Open the /etc/ips file with a text editor.
  • Add your old server's primary IP address, net mask, and gateway to the file. Note: remove the new server's primary IP address from this file if it is present.
  • Restart the network service:
    • For CentOS, CloudLinux™, and Red Hat® Enterprise Linux (RHEL) 6 and earlier, and Amazon Linux, run the service network restart command. (Note: Amazon Linux always runs in a NAT configuration.)
    • For CentOS, CloudLinux, and RHEL 7 and later, run the systemctl restart network command.
  • Run the /scripts/mainipcheck command to add the IP address to the /var/cpanel/mainip file.
  • Run the /scripts/fixetchosts command to add the IP address and hostname of your server to the /etc/hosts file.
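
For reference, the relevant part of ifcfg-eth0 ends up looking something like this; the addresses below are placeholders for your old server's details:

# /etc/sysconfig/network-scripts/ifcfg-eth0
DEVICE=eth0
ONBOOT=yes
BOOTPROTO=static
# old server's main IP and gateway go here
IPADDR=203.0.113.10
NETMASK=255.255.255.0
GATEWAY=203.0.113.1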

and you should be good to go. Test it out and enjoy your new environment!


corosync died: Could not read cluster configuration Check cluster logs for details

Well, if you did nothing to your cluster other than freeing up some disk space, you may still find yourself staring at the error below,

Stopping cluster:
   Stopping dlm_controld... [  OK  ]
   Stopping fenced... [  OK  ]
   Stopping cman... [  OK  ]
   Unloading kernel modules... [  OK  ]
   Unmounting configfs... [  OK  ]
Starting cluster:
   Checking if cluster has been disabled at boot... [  OK  ]
   Checking Network Manager... [  OK  ]
   Global setup... [  OK  ]
   Loading kernel modules... [  OK  ]
   Mounting configfs... [  OK  ]
   Starting cman... corosync died: Could not read cluster configuration Check cluster logs for details

Google around and you get scary advice like reinstalling the whole cluster. What actually happened in my case, without me having to jump off a building, was that the cluster log directory /var/log/cluster had been deleted, which caused corosync to die on startup. So check that the log directory is still there before doing any fanciful work of redoing everything.
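
In my case the fix was as simple as putting the log directory back before starting the cluster again:

# recreate the log directory that got cleaned away
mkdir -p /var/log/cluster
# then start the cluster service again
service cman start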
