corosync died: Could not read cluster configuration Check cluster logs for details

Well, if you see this and you did nothing to your cluster other than freeing up some disk space, you may find yourself facing the error below:

Stopping cluster:
   Stopping dlm_controld... [  OK  ]
   Stopping fenced... [  OK  ]
   Stopping cman... [  OK  ]
   Unloading kernel modules... [  OK  ]
   Unmounting configfs... [  OK  ]
Starting cluster:
   Checking if cluster has been disabled at boot... [  OK  ]
   Checking Network Manager... [  OK  ]
   Global setup... [  OK  ]
   Loading kernel modules... [  OK  ]
   Mounting configfs... [  OK  ]
   Starting cman... corosync died: Could not read cluster configuration Check cluster logs for details

Google around and you get scary advice like reinstalling the whole cluster. But what really worked for me, without jumping off a building, was realising that the cluster logs in /var/log/cluster had been deleted, and that is what causes corosync to die. Hence, check that your log directory is still there before doing any fanciful work of redoing everything.
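
In other words, before rebuilding anything, put the log directory back and try starting the cluster again. A minimal sketch, assuming the directory itself was what got removed during the cleanup:

# recreate the log directory corosync/cman expect to write into
mkdir -p /var/log/cluster
# then try bringing the cluster stack back up
service cman start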


Setup NFS Server on LXC in Proxmox

Quick tutorial on how I set up an NFS server on Proxmox using LXC rather than the old OpenVZ. Before I begin, you can take a look at my older OpenVZ NFS instructions, which are pretty much the same. I will write out a quick version here.

Install NFS Kernel Server on the Host

Before you proceed further, remember to install the NFS kernel server package as shown below:


apt-get install nfs-kernel-server

The above needs to be installed on your host machine; in this case, the Proxmox Debian machine (the host).
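
The reason this goes on the host is that the container cannot load kernel modules itself, so the nfsd module has to be available on the Proxmox host. A quick check, assuming a standard Proxmox/Debian kernel:

# on the Proxmox host: load the NFS server module and confirm it is present
modprobe nfsd
lsmod | grep nfsd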

NFS Server on LXC

Run the following to install the NFS server (I'm using CentOS, by the way):

yum install nfs* -y
service rpcbind start
chkconfig rpcbind on
service nfs start
chkconfig nfs on

Inside the container, the service nfs start step will most likely fail with the following:

Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Starting NFS daemon: rpc.nfsd: Unable to access /proc/fs/nfsd errno 2 (No such file or directory).
Please try, as root, 'mount -t nfsd nfsd /proc/fs/nfsd' and then restart rpc.nfsd to correct the problem
                                                           [FAILED]

Now, here we can either allow the NFS-related mounts in the AppArmor profile /etc/apparmor.d/lxc/lxc-default-cgns, as shown below,

# Do not load this file.  Rather, load /etc/apparmor.d/lxc-containers, which
# will source all profiles under /etc/apparmor.d/lxc

profile lxc-container-default-cgns flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>

  # the container may never be allowed to mount devpts.  If it does, it
  # will remount the host's devpts.  We could allow it to do it with
  # the newinstance option (but, right now, we don't).
  deny mount fstype=devpts,
  mount fstype=cgroup -> /sys/fs/cgroup/**,
  mount fstype=nfs*,
  mount fstype=rpc_pipefs,
}
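
If you go the profile route, the edited profile needs to be reloaded before containers pick up the new mount rules. Something along these lines should do it, assuming the stock Debian layout where /etc/apparmor.d/lxc-containers sources the profiles under /etc/apparmor.d/lxc:

# reload the LXC AppArmor profiles on the host
apparmor_parser -r /etc/apparmor.d/lxc-containers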

or you could edit the container configuration file and disable AppArmor for the container. Assuming your LXC container ID is 101, go to /etc/pve/lxc/101.conf and add the line shown in the full config below,

arch: amd64
cpulimit: 4
cpuunits: 1024
hostname: nfs.localhost.com
memory: 4000
nameserver: 8.8.8.8 8.8.4.4
net0: bridge=vmbr2,gw=192.168.100.1,hwaddr=32:36:30:61:61:34,ip=192.168.100.3/24,name=eth0,type=veth
onboot: 1
ostype: centos
rootfs: local:101/vm-101-disk-1.raw,size=1000G
searchdomain: localhost
swap: 512
lxc.aa_profile: unconfined

What I added is this line:

lxc.aa_profile: unconfined

Also remember to add the equivalent line to /var/lib/lxc/101/config:

lxc.aa_profile=unconfined

Remember to reboot your LXC container, or else the change won't take effect.

Now, in your LXC, open the file /etc/exports

/mnt/nfs     *(rw,no_root_squash,no_subtree_check,fsid=0)

and add the above line. Remember to create the folder /mnt/nfs, as in the sketch below.
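
Putting that together inside the container, something like this should be enough to create the directory, re-read /etc/exports and confirm the export is visible (a sketch, using the /mnt/nfs path from above):

mkdir -p /mnt/nfs
# re-export everything in /etc/exports and list what is being served
exportfs -ra
showmount -e localhost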

Enable Port on Firewall

Add the following rules to iptables on the host:

-A PREROUTING -d 10.6.25.101/32 -i vmbr0 -p tcp -m tcp --dport 2925 -j DNAT --to-destination 192.168.0.111:22
-A PREROUTING -d 10.6.25.101/32 -i vmbr1 -p tcp -m tcp --dport 32803 -j DNAT --to-destination 192.168.0.111:32803
-A PREROUTING -d 10.6.25.101/32 -i vmbr1 -p tcp -m tcp --dport 892 -j DNAT --to-destination 192.168.0.111:892
-A PREROUTING -d 10.6.25.101/32 -i vmbr1 -p tcp -m tcp --dport 662 -j DNAT --to-destination 192.168.0.111:662
-A PREROUTING -d 10.6.25.101/32 -i vmbr1 -p tcp -m tcp --dport 2049 -j DNAT --to-destination 192.168.0.111:2049
-A PREROUTING -d 10.6.25.101/32 -i vmbr1 -p tcp -m tcp --dport 111 -j DNAT --to-destination 192.168.0.111:111
-A PREROUTING -d 10.6.25.101/32 -i vmbr1 -p udp -m udp --dport 892 -j DNAT --to-destination 192.168.0.111:892
-A PREROUTING -d 10.6.25.101/32 -i vmbr1 -p udp -m udp --dport 662 -j DNAT --to-destination 192.168.0.111:662
-A PREROUTING -d 10.6.25.101/32 -i vmbr1 -p udp -m udp --dport 111 -j DNAT --to-destination 192.168.0.111:111
-A PREROUTING -d 10.6.25.101/32 -i vmbr1 -p udp -m udp --dport 2049 -j DNAT --to-destination 192.168.0.111:2049
-A PREROUTING -d 10.6.25.101/32 -i vmbr1 -p udp -m udp --dport 32769 -j DNAT --to-destination 192.168.0.111:32769
-A PREROUTING -d 10.6.25.101/32 -i vmbr1 -p tcp -m tcp --dport 8000 -j DNAT --to-destination 192.168.0.111:8000

where 10.6.25.101 is the public IP and 192.168.0.111 is the LXC container's IP address.
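
Note that the rules above are in iptables-save format and live in the nat table. If you prefer to add them by hand instead of through a saved rules file, the equivalent live command for, say, the NFS port 2049 would look like this (same example IPs as above):

# DNAT incoming NFS traffic on the public IP to the container
iptables -t nat -A PREROUTING -d 10.6.25.101/32 -i vmbr1 -p tcp -m tcp --dport 2049 -j DNAT --to-destination 192.168.0.111:2049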

Configure NFS

Head over to /etc/sysconfig/nfs and update the following; the uncommented lines are the ones I changed, pinning the NFS-related ports so they match the firewall rules above:

#
# Define which protocol versions mountd
# will advertise. The values are "no" or "yes"
# with yes being the default
#MOUNTD_NFS_V2="no"
MOUNTD_NFS_V3="yes"
#
#
# Path to remote quota server. See rquotad(8)
#RQUOTAD="/usr/sbin/rpc.rquotad"
# Port rquotad should listen on.
#RQUOTAD_PORT=875
# Optinal options passed to rquotad
#RPCRQUOTADOPTS=""
#
#
# Optional arguments passed to in-kernel lockd
#LOCKDARG=
# TCP port rpc.lockd should listen on.
LOCKD_TCPPORT=32803
# UDP port rpc.lockd should listen on.
LOCKD_UDPPORT=32769
#
#
# Optional arguments passed to rpc.nfsd. See rpc.nfsd(8)
# Turn off v2 and v3 protocol support
#RPCNFSDARGS="-N 2 -N 3"
# Turn off v4 protocol support
RPCNFSDARGS="-N 4"
# Number of nfs server processes to be started.
# The default is 8.
#RPCNFSDCOUNT=8
# Stop the nfsd module from being pre-loaded
NFSD_MODULE="noload"
# Set V4 and NLM grace periods in seconds
#
# Warning, NFSD_V4_GRACE should not be less than
# NFSD_V4_LEASE was on the previous boot.
#
# To make NFSD_V4_GRACE shorter, with active v4 clients,
# first make NFSD_V4_LEASE shorter, then restart server.
# This will make the clients aware of the new value.
# Then NFSD_V4_GRACE can be decreased with another restart.
#
# When there are no active clients, changing these values
# can be done in a single server restart.
#
#NFSD_V4_GRACE=90
#NFSD_V4_LEASE=90
#NLM_GRACE_PERIOD=90
#
#
#
# Optional arguments passed to rpc.mountd. See rpc.mountd(8)
#RPCMOUNTDOPTS=""
# Port rpc.mountd should listen on.
MOUNTD_PORT=892
#
#
# Optional arguments passed to rpc.statd. See rpc.statd(8)
#STATDARG=""
# Port rpc.statd should listen on.
STATD_PORT=662
# Outgoing port statd should used. The default is port
# is random
#STATD_OUTGOING_PORT=2020
# Specify callout program
#STATD_HA_CALLOUT="/usr/local/bin/foo"
#
#
# Optional arguments passed to rpc.idmapd. See rpc.idmapd(8)
#RPCIDMAPDARGS=""
#
# Set to turn on Secure NFS mounts.
#SECURE_NFS="yes"
# Optional arguments passed to rpc.gssd. See rpc.gssd(8)
#RPCGSSDARGS=""
# Optional arguments passed to rpc.svcgssd. See rpc.svcgssd(8)
#RPCSVCGSSDARGS=""
#
# To enable RDMA support on the server by setting this to
# the port the server should listen on
#RDMA_PORT=20049

Once you have done all the above, restart your host machine so everything comes up clean and you are good to go.
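
After the reboot, you can sanity-check the export from another machine. A rough example from a hypothetical NFS client, using NFSv3 since v4 was switched off above and /mnt/test as a throwaway mount point:

# on an NFS client
showmount -e 10.6.25.101
mkdir -p /mnt/test
mount -t nfs -o vers=3 10.6.25.101:/mnt/nfs /mnt/test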


Setup OpenVPN on Proxmox LXC

Following the previous tutorial on setting up LXC, I would now like to set up OpenVPN inside a Proxmox LXC container!

Adding Dev/Tun into LXC

On the host machine, we need to enable the TUN device for OpenVPN in our LXC container. Go to /var/lib/lxc/xxx/config or /etc/pve/lxc/xxx.conf and add the following lines at the end:

lxc.cgroup.devices.allow = c 10:200 rwm
lxc.hook.autodev = sh -c "modprobe tun; cd ${LXC_ROOTFS_MOUNT}/dev; mkdir net; mknod net/tun c 10 200; chmod 0666 net/tun"

and that's all we need to do on the host. Restart the LXC container.

Adding tun file into LXC container

Now log in to your LXC container and run the following commands:

cd /dev
mkdir net
mknod net/tun c 10 200
chmod 0666 net/tun

This will create the net directory and the tun device node. Restart the container and we are good to go!
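
Before installing anything, it is worth confirming the device node looks right inside the container; the 10, 200 major/minor pair is what OpenVPN expects:

# inside the LXC container
ls -l /dev/net/tun
# should print something like: crw-rw-rw- 1 root root 10, 200 ... /dev/net/tun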

Install OpenVPN on Proxmox LXC

Installing OpenVPN could not be easier; it takes just 5 minutes, as I wrote previously. To summarise, all you need to do is run the following inside your LXC container using NoVNC or SSH:

wget git.io/vpn --no-check-certificate -O ~/openvpn-install.sh; bash openvpn-install.sh

Follow all the instructions and we are good to go! And remember to port forward ports 1194 and 53 on the host:

-A PREROUTING -i vmbr1 -p tcp -m tcp --dport 53 -j DNAT --to-destination 192.168.100.2:53
-A PREROUTING -i vmbr1 -p udp -m udp --dport 1194 -j DNAT --to-destination 192.168.100.2:1194
-A PREROUTING -i vmbr1 -p tcp -m tcp --dport 1194 -j DNAT --to-destination 192.168.100.2:1194
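
Once the install script has finished, a quick way to confirm OpenVPN is actually listening inside the container (assuming you kept the default UDP port 1194 and have iproute's ss available):

# inside the LXC container
ss -lnup | grep 1194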

List of important Proxmox configuration file paths

OK, this is it. There are many times when I need to find the path to a certain configuration file, be it a Proxmox, LXC, KVM or OpenVZ one, and I always have to 'remember' where it is. If you do this day in and day out, you might know them by heart; if not, it's just another dig-through-the-web task. So how about recording all of these down instead? Hence, here are all the important paths for anyone who needs them when dealing with Proxmox!

=== OpenVZ Section ===

config: /etc/vz/conf/xxx.conf
data: /var/lib/vz/root/xxx
template: /var/lib/vz/template/cache
snapshot: /var/lib/vz/dump
OpenVZ config: /etc/vz/vz.conf

=== KVM Section ===

config: /etc/pve/qemu-server/xxx.conf
data: /var/lib/vz/images/xxx
template: /var/lib/vz/template/iso
snapshot: /var/lib/vz/dump


=== LXC Section ===

config: /var/lib/lxc/xxx/config
data: /var/lib/vz/images/xxx
template: /var/lib/vz/template/cache
snapshot: /var/lib/vz/dump

=== Cluster Section ===
config: /etc/pve/cluster.conf
nodes vm config: /etc/pve/nodes/xxx/xxx/qemu-server/xxx.conf
=== Files ===

 corosync.conf  => corosync/cman cluster configuration file (prior to PVE 4.x this file was called cluster.conf)
 storage.cfg   => PVE storage configuration
 user.cfg      => PVE access control configuration (users/groups/...)
 domains.cfg   => PVE Authentication domains 
 authkey.pub   => public key used by ticket system

 priv/shadow.cfg  => shadow password file
 priv/authkey.key => private key used by ticket system

 nodes/${NAME}/pve-ssl.pem                 => public ssl key for web server
 nodes/${NAME}/priv/pve-ssl.key            => private ssl key
 nodes/${NAME}/qemu-server/${VMID}.conf    => VM configuration data for KVM VMs
 nodes/${NAME}/openvz/${VMID}.conf         => VM configuration data for OpenVZ containers

=== Symbolic links ===

 local => nodes/${LOCALNAME}
 qemu-server => nodes/${LOCALNAME}/qemu-server/
 openvz => nodes/${LOCALNAME}/openvz/

=== Special status files for debugging (JSON) ===

 .version    => file versions (to detect file modifications)
 .members    => Info about cluster members
 .vmlist     => List of all VMs
 .clusterlog => Cluster log (last 50 entries)
 .rrd        => RRD data (most recent entries)
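
These status files live directly under /etc/pve and are plain JSON, so they are handy for quick checks or scripts. For example, to see the cluster membership as pmxcfs sees it:

cat /etc/pve/.members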


=== Enable/Disable debugging ===

 # enable verbose syslog messages
 echo "1" >/etc/pve/.debug 

 # disable verbose syslog messages
 echo "0" >/etc/pve/.debug 

More info can be found in the Proxmox wiki. You are welcome.


Setup LXC and NAT on Proxmox

The latest Proxmox 4.0 no longer supports OpenVZ, and we are met with LXC, Linux Containers, which is kinda the next big thing. But how do we set up NAT on an LXC container? Is it different from the old OpenVZ way? Well, it's kinda the same. I will cut the bullshit here and go straight to the objective: we will create an LXC container in Proxmox and allow the same public IP to reach the container, both in and out.

Installing LXC Container on Proxmox

First, let's set up a container. We will create an Ubuntu container by selecting the template.


Once we have selected the template, let's set up the network. Take note that I have chosen the bridge vmbr1 here (which will need to be changed later). Also note that /24 means your subnet mask is 255.255.255.0, and the gateway should be whatever you have set on your NAT bridge (the bridge shared by all your NAT containers). In this case, mine is 192.168.100.1.

I am giving my LXC container the local IP of 192.168.100.6; just ignore the /24 for now. Then set up the DNS.


And we are all done. Now start the machine and we are ready to go!

Setup NAT on Proxmox

Now this is the important part. We have two things to do: first, set up a new bridge in /etc/network/interfaces as shown below,

auto vmbr2
#private sub network
iface vmbr2 inet static
        address  192.168.100.1
        netmask  255.255.255.0
        bridge_ports none
        bridge_stp off
        bridge_fd 0

        post-up echo 1 > /proc/sys/net/ipv4/ip_forward
        post-up   iptables -t nat -A POSTROUTING -s '192.168.100.0/24' -o vmbr1 -j MASQUERADE
        post-down iptables -t nat -D POSTROUTING -s '192.168.100.0/24' -o vmbr1 -j MASQUERADE

Do take note that I have added the post-up/post-down lines so that any container with an IP in the 192.168.100.0/24 range will have internet access. Now restart the network:

/etc/init.d/networking restart
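
Before touching the container, you can confirm the new bridge came up and the masquerade rule was installed; for example:

# the bridge should be holding 192.168.100.1
ip addr show vmbr2
# the nat table's POSTROUTING chain should list the MASQUERADE rule
iptables -t nat -L POSTROUTING -n -v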

After restarting, update the LXC container to use vmbr2.
Now access your LXC container via NoVNC (Chrome or Firefox) and you should be able to connect to the internet!

Allow outside connections to the LXC container

Although you have internet access, you will notice that you cannot connect to your LXC machine from outside. This is because nothing forwards outside traffic to your LXC container yet. To fix that, you will need to add a few iptables rules on your host machine:

#port forward port 2222 on the host to port 22 of our LXC machine so we can ssh in
iptables -t nat -A PREROUTING -i vmbr1 -p tcp -m tcp --dport 2222 -j DNAT --to-destination 192.168.100.6:22
#we already added this masquerade rule in the network interface config
iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o vmbr1 -j MASQUERADE
#this makes traffic leaving your LXC machines use the public source IP
iptables -t nat -A POSTROUTING -s 192.168.100.0/24 -o vmbr1 -j SNAT --to-source 45.125.192.250

The MASQUERADE rule is the same one we put on vmbr2 in the interfaces file earlier; if you do not want to add it in the interface section, you can just do it here. Once you've done that, you should be able to SSH into your LXC container as well! All good!
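
With the DNAT rule in place, a connection to port 2222 on the public IP should land on port 22 of the container; for example, assuming root logins are allowed by the container's sshd:

ssh -p 2222 root@45.125.192.250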
