First, upload the image via the Proxmox management console: under the node, select the local storage, then "ISO Images". Upload the desired ISO image, with an optional checksum for verification.
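If shell access to the node is available, the ISO can also be pulled straight into the storage directory; a minimal sketch, assuming the default "local" directory storage path and an illustrative Debian URL:
# Download an ISO directly into the node's ISO storage
# (path is the default for "local" directory storage; URL and filename are illustrative)
cd /var/lib/vz/template/iso
wget https://cdimage.debian.org/debian-cd/current/amd64/iso-cd/debian-12.5.0-amd64-netinst.iso
sha256sum debian-12.5.0-amd64-netinst.iso   # compare against the published checksum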
Then use the "Create VM" button to create a new VM from the image. The usual VM configuration and provisioning applies (RAM, CPU, NIC, etc.); note to enable the QEMU Guest Agent. Follow this nice documentation.
sudo apt install qemu-guest-agent
sudo systemctl enable --now qemu-guest-agent
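For repeatability, the same VM creation can also be done from the host CLI instead of the GUI; a minimal sketch, where the VM ID, ISO filename, storage, and sizes are all assumptions:
# Create a VM from the CLI with the guest agent enabled (values are illustrative)
qm create 9000 --name debian-base --memory 2048 --cores 2 \
  --net0 virtio,bridge=vmbr0 \
  --scsihw virtio-scsi-pci --scsi0 local-lvm:16 \
  --cdrom local:iso/debian-12.5.0-amd64-netinst.iso \
  --agent enabled=1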
Some disambiguation may be needed. I'm pretty sure I don't have it exactly down, but these should be sufficient to get up and running:
Virtualization: Emulation of CPU and other hardware that allows a "guest OS" to run within another OS. Typically involves (ref):
QEMU (Quick Emulator): An open-source machine emulator that provides software-based CPU emulation, together with emulation of many other hardware peripherals.
KVM (Kernel-based Virtual Machine): A Linux kernel module that uses hardware-assisted virtualization to allow the kernel to function as a hypervisor.
Proxmox uses QEMU and loads KVM whenever available, so in this context QEMU and KVM are generally used synonymously.
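To confirm that KVM acceleration is actually available on the host, a quick sketch:
# Check for hardware-assisted virtualization on the Proxmox host
grep -Ec '(vmx|svm)' /proc/cpuinfo   # >0 means VT-x/AMD-V is exposed
lsmod | grep kvm                     # kvm_intel or kvm_amd should be loaded
ls -l /dev/kvm                       # the device node QEMU uses for acceleration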
Installing the QEMU agent (ref):
vm:~# apt install qemu-guest-agent # debian
vm:~# systemctl enable --now qemu-guest-agent
host:~# qm set ${VM_ID} --agent 1
host:~# qm reboot ${VM_ID}
host:~# qm agent ${VM_ID} ping
Saw from Reddit or somewhere that virt-sysprep doesn't really cooperate with PVE...?
A long-ass QEMU and Proxmox cheatsheet, but nice nonetheless.
Template guide here, together with a more automated version. Here's another one. And another one. Note the concerns with machine ID. This is what branched me off: Reddit.
Scripts
Branched off from this Reddit post. The script is used as follows:
sudo bash ./vm-reset.sh
history -c && history -w && sudo shutdown now
- vm-reset.sh
#!/bin/bash
this_file="$(realpath -s "$0")"
this_dir="$(dirname "$this_file")"
this_user="$SUDO_USER"
# Ensure running as root (via sudo), and work from the home directory
if [ "$(id -u)" -ne 0 ]; then
    echo "Need sudo"
    exit 1
fi
cd ~
# Update packages
apt update
apt install -y qemu-guest-agent chrony
# Flush logs
logrotate -f /etc/logrotate.conf
systemctl stop rsyslog
# Cleanup machine ID for regeneration during reboot
cat /etc/machine-id   # show the old ID for reference
>/etc/machine-id      # truncate; regenerated on next boot
test -f /var/lib/dbus/machine-id && {
    rm /var/lib/dbus/machine-id
    ln -s /etc/machine-id /var/lib/dbus/machine-id
}
# Cleanup tmp and ssh directories
rm -rf /tmp/*
rm -rf /var/tmp/*
rm -f /etc/ssh/ssh_host_*
# Cleanup apt
apt clean
apt autoremove -y
# Trim filesystem
fstrim -av
# First boot initialization script,
# assuming 'vm-firstboot.sh' in same directory
test -f /etc/rc.local && cp -a /etc/rc.local /etc/rc.local.bak
cp -a "$this_dir/vm-firstboot.sh" /etc/rc.local
chown root:root /etc/rc.local
chmod +x /etc/rc.local
This should ideally run immediately after startup, especially since the SSH host keys need to be regenerated before any SSH connections can be accepted.
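Note that on recent Debian/Ubuntu releases, /etc/rc.local is only honoured through systemd's rc-local compatibility unit (the file must exist and be executable). An alternative is a dedicated oneshot unit; a sketch, where the unit name and script path are assumptions:
# Hypothetical alternative to rc.local: a oneshot first-boot unit
cat >/etc/systemd/system/vm-firstboot.service <<'EOF'
[Unit]
Description=One-time first boot initialization
Before=ssh.service

[Service]
Type=oneshot
ExecStart=/usr/local/sbin/vm-firstboot.sh

[Install]
WantedBy=multi-user.target
EOF
systemctl enable vm-firstboot.service
# vm-firstboot.sh should then disable the unit at the end of its run:
#   systemctl disable vm-firstboot.service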
- vm-firstboot.sh
#!/bin/bash
this_file="$(realpath -s "$0")"
# Ensure running as root (via sudo)
id   # (debug) print the current user and groups
if [ "$(id -u)" -ne 0 ]; then
    echo "Need sudo"
    exit 1
fi
# Set time zone
timedatectl set-timezone Asia/Singapore
# Update packages
apt update
apt -y full-upgrade
# Regenerate SSH host keys
test -f /etc/ssh/ssh_host_rsa_key || dpkg-reconfigure openssh-server
# Clear file, and replace rc.local <- rc.local.bak
if [ "$this_file" -ef /etc/rc.local ]; then
rm -f /etc/rc.local;
test -f /etc/rc.local.bak && mv /etc/rc.local.bak /etc/rc.local;
else
rm -f "$this_file"
fi
# Reboot
shutdown -r now
After reboot, run as user:
- vm-rename.sh
#!/bin/bash
# Set virtual machine name
echo -n "Set the name of this VM: "
read vm_name_input
new_vm_name=$(echo "$vm_name_input" | tr '[:upper:]' '[:lower:]' | tr -d '[:space:]')
current_vm_name=$(cat /etc/hostname)
sudo hostnamectl set-hostname "$new_vm_name"
sudo hostname "$new_vm_name"
sudo sed -i "s/$current_vm_name/$new_vm_name/g" /etc/hosts
sudo sed -i "s/$current_vm_name/$new_vm_name/g" /etc/hostname
# Change user password and shutdown upon successful change
passwd && sudo shutdown -r now
Other stuff that can be done during reset:
# Setup init script
#wget https://script_server/scripts/init.sh
#chmod +rwx init.sh
#chown $this_user:$this_user init.sh
# Pull global SSH config
#wget https://script_server/scripts/sshd_config
#cat sshd_config > /etc/ssh/sshd_config
#rm sshd_config
# Clear audit logs (optional)
#>/var/log/audit/audit.log
#>/var/log/wtmp
#>/var/log/lastlog
# Cleanup persistent udev rules (optional)
rm -f /etc/udev/rules.d/70-persistent-net.rules
# Delete preparation file
rm "$this_file"
For qm-related instructions on the host, referenced from this script:
# Remove old templates and clone existing VMs
qm destroy 6000 --destroy-unreferenced-disks 1
qm clone 9000 6000 --name mytemplate --full 1
# Prepare VM templates
qm set 6000 --ipconfig0 ip=dhcp
qm start 6000
# run stuff here...
qm shutdown 6000
# Convert to template
qm set 6000 --template 1
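Once converted, new VMs are cloned from the template; a sketch with placeholder IDs and names:
# Clone a new VM from the template (linked clone by default; add --full 1 for a full copy)
qm clone 6000 101 --name myvm
qm set 101 --ipconfig0 ip=dhcp   # only takes effect if the template has a cloud-init drive
qm start 101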
Set up apt-cacher-ng for faster upgrades.
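Pointing guests at the cache is a one-line apt proxy snippet; a sketch, where the cache host IP is an assumption (3142 is apt-cacher-ng's default port):
# In each VM: route apt traffic through the apt-cacher-ng instance
echo 'Acquire::http::Proxy "http://192.168.1.10:3142";' | sudo tee /etc/apt/apt.conf.d/01proxy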
Monitor LVM thin pool storage usage: pvesm status | grep lvmthin | awk '{print $7}'
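To flag when the pool crosses a threshold, the one-liner extends naturally; a sketch, where the threshold is an assumption (column 7 of pvesm status is percent used):
# Warn when the lvmthin pool exceeds 80% usage
usage=$(pvesm status | awk '$2 == "lvmthin" {print int($7)}')
[ "${usage:-0}" -ge 80 ] && echo "WARNING: thin pool at ${usage}%"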
"Linked clone feature is not supported for drive 'scsi0'" occurs when cloning from a template. Note to change the drive type for future updates.
Templates are available by default, and can be listed with pveam available. Updates are propagated via pveam update.
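For example (the exact template filename is illustrative and changes between releases):
pveam update                       # refresh the appliance index
pveam available --section system   # list downloadable system templates
pveam download local debian-12-standard_12.2-1_amd64.tar.zst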
Consider reading this for cloud-init based setup: https://github.com/UntouchedWagons/Ubuntu-CloudInit-Docs
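A minimal cloud-init wiring on a template, sketched with assumed IDs, storage, and user:
# Attach a cloud-init drive and seed credentials (all values are illustrative)
qm set 9000 --ide2 local-lvm:cloudinit
qm set 9000 --ciuser admin --sshkeys ~/.ssh/id_rsa.pub
qm set 9000 --ipconfig0 ip=dhcp
qm set 9000 --serial0 socket --vga serial0   # serial console, useful for cloud images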
The QEMU guest agent comes in handy here. We use the agent command network-get-interfaces to query the IP addresses.
host:~# qm agent 1000 network-get-interfaces | grep "\"ip-address\""
"ip-address" : "127.0.0.1",
"ip-address" : "::1",
"ip-address" : "192.168.121.85",
"ip-address" : "fe80::be24:11ff:fe0b:8d17",