List of Useful Proxmox Commands

It's been some time since I wrote something on this blog. I've been pretty busy with product building, to the point where I lose track of time. Time sure passes quickly. Recently I had a chance to work on Proxmox again. It is still as powerful as ever, many more versions have come out, and so have the problems. Hence, I figured I'd list down the commands I come across that might be useful one day. I'll split them into a few sections for easy navigation in the future.

Proxmox Commands

  1. Get a quick overview on how fast your system is: pveperf
  2. Verify the subscription status of your hardware node: pvesubscription get
  3. Start a backup of machine 101: vzdump 101 -compress lzo
  4. PVE cluster manager: pvecm - see "man pvecm" for details.
  5. Restart every single Proxmox services: service pve-cluster restart && service pvedaemon restart && service pvestatd restart && service pveproxy restart
  6. Print version information for Proxmox VE packages: pveversion
  7. Find next free VM ID: pvesh get /cluster/nextid
  8. View sum of memory allocated to VMs and CTs: grep -R memory /etc/pve/local | awk '{sum += $NF } END {print sum;}'
  9. View sorted list of VMs like vmid proxmox_host type: cat /etc/pve/.vmlist | grep node | tr -d '":,'| awk '{print $1" "$4" "$6 }' | sort -n | column -t
  10. View sorted list of vmid: cat /etc/pve/.vmlist | grep node | cut -d '"' -f2 | sort -n
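
As a quick illustration, here is a small sketch combining items 7 and 8 above: it prints the next free VMID and the total memory (in MB) currently allocated to guests on this node.

# A small sketch: print the next free VMID and the total memory (MB) allocated to guests
echo "Next free VMID: $(pvesh get /cluster/nextid)"
echo -n "Total allocated memory (MB): "
grep -R memory /etc/pve/local | awk '{sum += $NF} END {print sum}'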

KVM Commands

  1. List all your KVM machines: qm list
  2. See how much memory your machine 101 has: qm config 101 | grep ^memory
  3. Restore KVM vzdump backups - see "man qmrestore"
  4. Backup utility for virtual machines - see "man vzdump"
  5. Unlock a KVM guest: qm unlock 101
  6. Restore a QemuServer VM to VM 601: qmrestore /mnt/backup/vzdump-qemu-888.vma 601
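
To tie a few of these together, a typical backup-and-restore run might look like this (a sketch; the dump filename below is hypothetical, as yours will carry its own timestamp):

# Back up VM 101 with LZO compression, then restore the resulting archive to a new VMID 601
# (check /var/lib/vz/dump for the real archive name).
vzdump 101 -compress lzo
qmrestore /var/lib/vz/dump/vzdump-qemu-101-2016_01_01-00_00_00.vma.lzo 601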

LXC Commands

  1. Start an LXC container in the foreground (handy for debugging): lxc-start -n 101 -F
  2. Mount an LXC container's virtual disk: pct mount 101
  3. Unmount an LXC container's virtual disk: pct unmount 101
  4. Repair an LXC container's virtual disk: pct fsck 101
  5. Check the configuration of an LXC container: pct config 101
  6. Remove a container: pct destroy 101
  7. Restore a container to a new CT 600: pct restore 600 /mnt/backup/vzdump-lxc-777.tar
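
As an example, a typical inspect-and-repair workflow chaining a few of the commands above might look like this (a sketch; the container should be stopped before checking its disk):

# Stop the container, check/repair its root filesystem, mount it to inspect files,
# then unmount and start it again (pct prints the mount path when mounting).
pct stop 101
pct fsck 101
pct mount 101
pct unmount 101
pct start 101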

OpenVZ Commands

  1. Utility to control an OpenVZ container - see "man vzctl"
  2. vzctl wrapper to manage OpenVZ containers - see "man pvectl"
  3. Display container processes and CPU usage (top-like): vztop
  4. Check resource limits and failure counters: cat /proc/user_beancounters
  5. List OpenVZ containers: vzlist
  6. Backup utility for virtual machines - see "man vzdump"
  7. Restore OpenVZ vzdump backups - see "man vzrestore"
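
For example, a quick way to spot OpenVZ resource limits that have been hit is to look for non-zero fail counters in /proc/user_beancounters (a sketch; failcnt is the last column):

# Print beancounter lines whose fail counter (the last column) is non-zero,
# skipping the two header lines.
awk 'NR > 2 && $NF > 0' /proc/user_beancounters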

If you have anything to share, or any awesome command useful for your day-to-day Proxmox management, do let me know!

corosync died: Could not read cluster configuration Check cluster logs for details

Well, if you see this and you did nothing to your cluster other than free up some space, you may just find yourself looking at the error below:

Stopping cluster:
   Stopping dlm_controld... [  OK  ]
   Stopping fenced... [  OK  ]
   Stopping cman... [  OK  ]
   Unloading kernel modules... [  OK  ]
   Unmounting configfs... [  OK  ]
Starting cluster:
   Checking if cluster has been disabled at boot... [  OK  ]
   Checking Network Manager... [  OK  ]
   Global setup... [  OK  ]
   Loading kernel modules... [  OK  ]
   Mounting configfs... [  OK  ]
   Starting cman... corosync died: Could not read cluster configuration Check cluster logs for details

Google around and you will find scary advice like reinstalling the cluster and other drastic measures. What it actually was for me, without jumping off the building, was that the cluster logs in /var/log/cluster had been deleted, which caused corosync to die. Hence, you might want to check that your log directory is still there before doing the fanciful work of redoing everything.
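
In my case, putting the missing log directory back and starting the cluster stack again was enough (a sketch of the fix implied above, on a cman-based cluster; adjust to your setup):

# Recreate the cluster log directory that was removed during the cleanup,
# then try starting the cluster service again.
mkdir -p /var/log/cluster
service cman start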

Setup NFS Server on LXC in Proxmox

A quick tutorial on how I set up an NFS server on Proxmox using LXC rather than the old OpenVZ. Before I begin, you can take a look at my earlier instructions for NFS on OpenVZ, which are exactly the same. I will write out a quick version here.

Install the NFS Kernel Server on the Host

Before you proceed further, remember to install the NFS kernel server as shown below:


apt-get install nfs-kernel-server

The above needs to be installed on your host machine, in this case the Proxmox Debian machine (the host).
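
It also does not hurt to make sure the nfsd kernel module is loaded on the host, since the container relies on the host kernel for /proc/fs/nfsd (an assumption worth checking on your own setup):

# Load the NFS server kernel module on the Proxmox host and keep it loaded across reboots
modprobe nfsd
echo nfsd >> /etc/modules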

NFS Server on LXC

Run the following commands to install the NFS server (I'm using CentOS, by the way):

yum install nfs* -y
service rpcbind start
chkconfig rpcbind on
service nfs start
chkconfig nfs on
Starting NFS services:                                     [  OK  ]
Starting NFS quotas:                                       [  OK  ]
Starting NFS mountd:                                       [  OK  ]
Starting NFS daemon: rpc.nfsd: Unable to access /proc/fs/nfsd errno 2 (No such file or directory).
Please try, as root, 'mount -t nfsd nfsd /proc/fs/nfsd' and then restart rpc.nfsd to correct the problem
                                                           [FAILED]

Now we can do one of two things. The first option is to edit the file /etc/apparmor.d/lxc/lxc-default-cgns on the host so that NFS mounts are allowed inside containers (the mount fstype=nfs* and mount fstype=rpc_pipefs lines below are the additions):

# Do not load this file.  Rather, load /etc/apparmor.d/lxc-containers, which
# will source all profiles under /etc/apparmor.d/lxc

profile lxc-container-default-cgns flags=(attach_disconnected,mediate_deleted) {
  #include <abstractions/lxc/container-base>

  # the container may never be allowed to mount devpts.  If it does, it
  # will remount the host's devpts.  We could allow it to do it with
  # the newinstance option (but, right now, we don't).
  deny mount fstype=devpts,
  mount fstype=cgroup -> /sys/fs/cgroup/**,
  mount fstype=nfs*,
  mount fstype=rpc_pipefs,
}
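
After editing the profile, reload the LXC AppArmor profiles on the host so the change takes effect (a sketch; restarting the apparmor service should also do it):

# Reload the LXC container profiles after editing lxc-default-cgns
apparmor_parser -r /etc/apparmor.d/lxc-containers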

Alternatively, you could edit the container's configuration file and disable AppArmor for it. Assuming your LXC container is 101, go to /etc/pve/lxc/101.conf and add the following line:

arch: amd64
cpulimit: 4
cpuunits: 1024
hostname: nfs.localhost.com
memory: 4000
nameserver: 8.8.8.8 8.8.4.4
net0: bridge=vmbr2,gw=192.168.100.1,hwaddr=32:36:30:61:61:34,ip=192.168.100.3/24,name=eth0,type=veth
onboot: 1
ostype: centos
rootfs: local:101/vm-101-disk-1.raw,size=1000G
searchdomain: localhost
swap: 512
lxc.aa_profile: unconfined

To be clear, the line I added is:

lxc.aa_profile: unconfined

Also remember to add the equivalent line to /var/lib/lxc/101/config:

lxc.aa_profile=unconfined

Remember to reboot your LXC container afterwards, or the change won't take effect. For example, from the host:
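
# Restart container 101 from the Proxmox host so the new AppArmor setting applies
pct stop 101 && pct start 101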

Now, in your LXC, open the file /etc/exports

/mnt/nfs     *(rw,no_root_squash,no_subtree_check,fsid=0)

and add the above line. Remember to create the folder /mnt/nfs first, then re-export and verify that the share is visible, for example:
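
# Inside the container: create the export directory, apply /etc/exports and verify
mkdir -p /mnt/nfs
exportfs -ra
showmount -e localhost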

Enable Ports on the Firewall

Enable the following rules in iptables:

-A PREROUTING -d 10.6.25.101/32 -i vmbr0 -p tcp -m tcp --dport 2925 -j DNAT --to-destination 192.168.0.111:22
-A PREROUTING -d 10.6.25.101/32 -i vmbr1 -p tcp -m tcp --dport 32803 -j DNAT --to-destination 192.168.0.111:32803
-A PREROUTING -d 10.6.25.101/32 -i vmbr1 -p tcp -m tcp --dport 892 -j DNAT --to-destination 192.168.0.111:892
-A PREROUTING -d 10.6.25.101/32 -i vmbr1 -p tcp -m tcp --dport 662 -j DNAT --to-destination 192.168.0.111:662
-A PREROUTING -d 10.6.25.101/32 -i vmbr1 -p tcp -m tcp --dport 2049 -j DNAT --to-destination 192.168.0.111:2049
-A PREROUTING -d 10.6.25.101/32 -i vmbr1 -p tcp -m tcp --dport 111 -j DNAT --to-destination 192.168.0.111:111
-A PREROUTING -d 10.6.25.101/32 -i vmbr1 -p udp -m udp --dport 892 -j DNAT --to-destination 192.168.0.111:892
-A PREROUTING -d 10.6.25.101/32 -i vmbr1 -p udp -m udp --dport 662 -j DNAT --to-destination 192.168.0.111:662
-A PREROUTING -d 10.6.25.101/32 -i vmbr1 -p udp -m udp --dport 111 -j DNAT --to-destination 192.168.0.111:111
-A PREROUTING -d 10.6.25.101/32 -i vmbr1 -p udp -m udp --dport 2049 -j DNAT --to-destination 192.168.0.111:2049
-A PREROUTING -d 10.6.25.101/32 -i vmbr1 -p udp -m udp --dport 32769 -j DNAT --to-destination 192.168.0.111:32769
-A PREROUTING -d 10.6.25.101/32 -i vmbr1 -p tcp -m tcp --dport 8000 -j DNAT --to-destination 192.168.0.111:8000

where 10.6.25.101 is the public IP and 192.168.0.111 is the LXC container's IP address.
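
Note that these rules are in iptables-save format and belong in the nat table; if you prefer to add one directly from the shell, it looks something like this (a sketch using the main NFS port as an example):

# Forward NFS traffic (TCP 2049) arriving on the public IP to the LXC container
iptables -t nat -A PREROUTING -d 10.6.25.101/32 -i vmbr1 -p tcp -m tcp \
  --dport 2049 -j DNAT --to-destination 192.168.0.111:2049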

Configure NFS

Head over to /etc/sysconfig/nfs inside the container and update the following:

#
# Define which protocol versions mountd
# will advertise. The values are "no" or "yes"
# with yes being the default
#MOUNTD_NFS_V2="no"
MOUNTD_NFS_V3="yes"
#
#
# Path to remote quota server. See rquotad(8)
#RQUOTAD="/usr/sbin/rpc.rquotad"
# Port rquotad should listen on.
#RQUOTAD_PORT=875
# Optinal options passed to rquotad
#RPCRQUOTADOPTS=""
#
#
# Optional arguments passed to in-kernel lockd
#LOCKDARG=
# TCP port rpc.lockd should listen on.
LOCKD_TCPPORT=32803
# UDP port rpc.lockd should listen on.
LOCKD_UDPPORT=32769
#
#
# Optional arguments passed to rpc.nfsd. See rpc.nfsd(8)
# Turn off v2 and v3 protocol support
#RPCNFSDARGS="-N 2 -N 3"
# Turn off v4 protocol support
RPCNFSDARGS="-N 4"
# Number of nfs server processes to be started.
# The default is 8.
#RPCNFSDCOUNT=8
# Stop the nfsd module from being pre-loaded
NFSD_MODULE="noload"
# Set V4 and NLM grace periods in seconds
#
# Warning, NFSD_V4_GRACE should not be less than
# NFSD_V4_LEASE was on the previous boot.
#
# To make NFSD_V4_GRACE shorter, with active v4 clients,
# first make NFSD_V4_LEASE shorter, then restart server.
# This will make the clients aware of the new value.
# Then NFSD_V4_GRACE can be decreased with another restart.
#
# When there are no active clients, changing these values
# can be done in a single server restart.
#
#NFSD_V4_GRACE=90
#NFSD_V4_LEASE=90
#NLM_GRACE_PERIOD=90
#
#
#
# Optional arguments passed to rpc.mountd. See rpc.mountd(8)
#RPCMOUNTDOPTS=""
# Port rpc.mountd should listen on.
MOUNTD_PORT=892
#
#
# Optional arguments passed to rpc.statd. See rpc.statd(8)
#STATDARG=""
# Port rpc.statd should listen on.
STATD_PORT=662
# Outgoing port statd should used. The default is port
# is random
#STATD_OUTGOING_PORT=2020
# Specify callout program
#STATD_HA_CALLOUT="/usr/local/bin/foo"
#
#
# Optional arguments passed to rpc.idmapd. See rpc.idmapd(8)
#RPCIDMAPDARGS=""
#
# Set to turn on Secure NFS mounts.
#SECURE_NFS="yes"
# Optional arguments passed to rpc.gssd. See rpc.gssd(8)
#RPCGSSDARGS=""
# Optional arguments passed to rpc.svcgssd. See rpc.svcgssd(8)
#RPCSVCGSSDARGS=""
#
# To enable RDMA support on the server by setting this to
# the port the server should listen on
#RDMA_PORT=20049

Once you have done all of the above, restart your host machine so everything comes up clean and you are good to go.
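
Once the host is back up, you can verify the export from any NFS client (a sketch; the client-side mount point is arbitrary, and NFSv3 is used since v4 was disabled above):

# On an NFS client: mount the export over NFSv3 via the public IP and check it
mkdir -p /mnt/test
mount -t nfs -o vers=3 10.6.25.101:/mnt/nfs /mnt/test
df -h /mnt/test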

Setup OpenVPN on Proxmox LXC

Following the previous tutorial on setting up LXC, I would now like to set up OpenVPN in a Proxmox LXC container!

Adding /dev/net/tun to LXC

On the host machine, we need to enable tun for OpenVPN in our LXC container. Go to /var/lib/lxc/xxx/config or /etc/pve/lxc/xxx.conf and add the following at the end:

lxc.cgroup.devices.allow = c 10:200 rwm
lxc.hook.autodev = sh -c "modprobe tun; cd ${LXC_ROOTFS_MOUNT}/dev; mkdir net; mknod net/tun c 10 200; chmod 0666 net/tun"

And that's all we need to do on the host. Restart the LXC container.

Adding the tun Device File Inside the LXC Container

Now log in to your LXC container and fire off the following commands:

cd /dev
mkdir net
mknod net/tun c 10 200
chmod 0666 net/tun

This will create the net directory and the tun device node. Restart the machine and we are good to go!
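
A quick way to verify the tun device from inside the container:

# If the device node is working, this prints "File descriptor in bad state",
# which is the expected (good) result here; "No such file or directory" means it is missing.
cat /dev/net/tun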

Install OpenVPN on Proxmox LXC

Installing OpenVPN could not be easier; it takes just 5 minutes, as I wrote previously. To summarise here, all you need to do is fire the following into your LXC container using noVNC or SSH:

wget git.io/vpn --no-check-certificate -O ~/openvpn-install.sh; bash openvpn-install.sh

Follow all the instructions and we are good to go! And remember to port forward ports 1194 and 53 on the host:

-A PREROUTING -i vmbr1 -p tcp -m tcp --dport 53 -j DNAT --to-destination 192.168.100.2:53
-A PREROUTING -i vmbr1 -p udp -m udp --dport 1194 -j DNAT --to-destination 192.168.100.2:1194
-A PREROUTING -i vmbr1 -p tcp -m tcp --dport 1194 -j DNAT --to-destination 192.168.100.2:1194
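
Once the installer finishes, a couple of quick checks inside the container will confirm OpenVPN is running (a sketch; the interface and port assume the default install):

# The VPN interface should exist and OpenVPN should be listening on UDP 1194
ip addr show tun0
ss -lunp | grep 1194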

List of Important Proxmox Configuration File Paths

OK, this is it. There are many times when I need to find the path to a certain configuration file, whether for Proxmox itself, LXC, KVM or OpenVZ, and I always need to 'remember' where it is. If you do this day in and day out, you might know them by heart; if not, it is just another dig-through-the-web task. How about recording all of these down for myself instead? Hence, here are all the important paths for anyone who needs them when dealing with Proxmox!

=== OpenVZ Section ===

config: /etc/vz/conf/xxx.conf
data: /var/lib/vz/root/xxx
template: /var/lib/vz/template/cache
snapshot: /var/lib/vz/dump
OpenVZ config: /etc/vz/vz.conf

=== KVM Section ===

config: /etc/pve/qemu-server/xxx.conf
data: /var/lib/vz/images/xxx
template: /var/lib/vz/template/iso
snapshot: /var/lib/vz/dump


=== LXC Section ===

config: /var/lib/lxc/xxx/config
data: /var/lib/vz/images/xxx
template: /var/lib/vz/template/cache
snapshot: /var/lib/vz/dump

=== Cluster Section ===

config: /etc/pve/cluster.conf
nodes vm config: /etc/pve/nodes/xxx/qemu-server/xxx.conf

=== Files ===

 corosync.conf  => corosync/cman cluster configuration file (prior to PVE 4.x this file was called cluster.conf)
 storage.cfg   => PVE storage configuration
 user.cfg      => PVE access control configuration (users/groups/...)
 domains.cfg   => PVE Authentication domains 
 authkey.pub   => public key used by ticket system

 priv/shadow.cfg  => shadow password file
 priv/authkey.key => private key used by ticket system

 nodes/${NAME}/pve-ssl.pem                 => public ssl key for web server
 nodes/${NAME}/priv/pve-ssl.key            => private ssl key
 nodes/${NAME}/qemu-server/${VMID}.conf    => VM configuration data for KVM VMs
 nodes/${NAME}/openvz/${VMID}.conf         => VM configuration data for OpenVZ containers

=== Symbolic links ===

 local => nodes/${LOCALNAME}
 qemu-server => nodes/${LOCALNAME}/qemu-server/
 openvz => nodes/${LOCALNAME}/openvz/

=== Special status files for debugging (JSON) ===

 .version    => file versions (to detect file modifications)
 .members    => Info about cluster members
 .vmlist     => List of all VMs
 .clusterlog => Cluster log (last 50 entries)
 .rrd        => RRD data (most recent entries)
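
Since these status files are JSON, they are easy to inspect from the shell; for example (assuming Python is available on the node, which it normally is):

 # pretty-print the cluster membership status file
 cat /etc/pve/.members | python -m json.tool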


=== Enable/Disable debugging ===

 # enable verbose syslog messages
 echo "1" >/etc/pve/.debug 

 # disable verbose syslog messages
 echo "0" >/etc/pve/.debug 

More info can be found on the Proxmox wiki. You are welcome.