The official guide covers the pertinent installation steps, step by step. The process is relatively straightforward:
The only unusual configuration is on the Network Management screen, which asks for an FQDN, IP address, gateway, and DNS server. The IP address is a static, self-assigned address that should fall within the current subnet. Make sure to reserve the same address in the router's DHCP settings for consistency.
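For concreteness, a hypothetical set of values for a typical 192.168.1.0/24 home network (the FQDN and addresses below are placeholders, not defaults):

```
FQDN:       pve.home.lan
IP Address: 192.168.1.10/24
Gateway:    192.168.1.1
DNS Server: 192.168.1.1
```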
Static IP addresses only!
Proxmox is designed to work only with static IP addresses, so migrating subnets takes some work. To change the IP address, log onto the console and edit `/etc/network/interfaces` (for network assignment) and `/etc/hosts` (for the hostname/loopback entry), then reload the configuration with `ifreload -a`. See this for more up-to-date configuration details.
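A minimal sketch of the two files, assuming the default `vmbr0` management bridge from a fresh install, a physical NIC named `eno1`, and the hypothetical 192.168.1.0/24 addresses above:

```
# /etc/network/interfaces -- static address on the management bridge
auto vmbr0
iface vmbr0 inet static
    address 192.168.1.10/24
    gateway 192.168.1.1
    bridge-ports eno1
    bridge-stp off
    bridge-fd 0

# /etc/hosts -- the hostname must resolve to the management address
127.0.0.1 localhost
192.168.1.10 pve.home.lan pve
```

After editing both files, `ifreload -a` applies the changes without a reboot.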
The management interface is exposed as a web service, by default on HTTPS port 8006. The default username is `root`, with the password set during installation.
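For example, with the hypothetical address above, browse to `https://192.168.1.10:8006` and accept the self-signed certificate warning.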
A shell on the server can be opened by clicking the Shell button near the top right corner of the interface. The usual server management applies, like updating with `apt update` and installing packages with `apt install vim`.
Importantly, if not using the Enterprise version of Proxmox VE, change the apt sources to the community (no-subscription) repository instead:
```
#deb https://enterprise.proxmox.com/debian/pve bookworm pve-enterprise

# Proxmox VE pve-no-subscription repository provided by proxmox.com,
# NOT recommended for production use
deb http://download.proxmox.com/debian/pve bookworm pve-no-subscription
```
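After switching repositories, refresh the package index and upgrade (standard apt workflow, nothing Proxmox-specific):

```
apt update        # no longer errors out on the subscription-only repo
apt full-upgrade  # pull Proxmox and Debian updates from the new source
```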
It may be worth running `pveperf` to benchmark the host.
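`pveperf` accepts an optional path and benchmarks the filesystem behind it alongside the CPU; a quick sketch (the second path is the default local storage directory, used here as an example):

```
pveperf              # benchmark CPU and the root filesystem
pveperf /var/lib/vz  # benchmark the storage backing local guest images
```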
The problem: unprivileged LXCs cannot mount CIFS shares themselves, so the share has to be mounted on the host and bind-mounted into the container.
This is much the same technique as sharing over SMB, i.e. performing writes as a common group even when different access credentials are used. According to the Proxmox wiki, UIDs and GIDs in an unprivileged container are mapped to the value + 100000 on the host machine (i.e. Proxmox); a container group with GID 10000 therefore appears as GID 110000 on the host, which is why the mount below uses `gid=110000`. Follows this guide, and reproduced below:
```
#########################
#  MOUNT SHARE ON HOST  #
#########################
# Mount CIFS share on host, with some hints:
# - _netdev: Force mount point to be recognised as network mount
# - x-systemd.automount: Try to remount if share went offline
# - noatime: Do not update access timestamps
host:~$ mkdir -p /mnt/lxc_shares/<TARGET>
host:~$ vim /root/.smb_credentials
    username=<USERNAME>
    password=<PASSWORD>
host:~$ vim /etc/fstab
    //<IP_ADDR>/<SHARE> /mnt/lxc_shares/<TARGET> cifs _netdev,x-systemd.automount,noatime,\
    uid=100000,gid=110000,dir_mode=0770,file_mode=0770,credentials=/root/.smb_credentials 0 0
host:~$ systemctl daemon-reload
host:~$ mount /mnt/lxc_shares/<TARGET>

###################################
#  MOUNT HOST SHARE ON CONTAINER  #
###################################
# Create and assign group, then stop the container
# Do it either within LXC console, or via CLI (per below)
host:~$ pct exec <LXC_ID> -- groupadd -g 10000 lxc_shares
host:~$ pct exec <LXC_ID> -- usermod -aG lxc_shares <USER>

# Add bind mount to LXC, with optional read-only flag, then start LXC
host:~$ pct stop <LXC_ID>    # monitor with 'pct status <LXC_ID>'
host:~$ vim /etc/pve/lxc/<LXC_ID>.conf
    mp0: /mnt/lxc_shares/<TARGET>/,mp=/mnt/<DEST>,ro=1,shared=1
host:~$ pct start <LXC_ID>
```
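A quick sanity check from the host, using the same placeholders as above (the write test should fail when `ro=1` is set):

```
host:~$ pct exec <LXC_ID> -- ls -la /mnt/<DEST>
host:~$ pct exec <LXC_ID> -- touch /mnt/<DEST>/.write_test
```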
For Alpine Linux containers (where `groupadd` and `usermod` are unavailable), use the BusyBox equivalents:
```
USER=root
GID=10000
LXC_ID=100
TARGET=/mnt/lxc_shares
DEST=/mnt/nas

pct exec ${LXC_ID} -- addgroup -g ${GID} lxc_shares
pct exec ${LXC_ID} -- adduser ${USER} lxc_shares
pct stop ${LXC_ID}
echo "mp0: ${TARGET}/,mp=${DEST},ro=1,shared=1" >> /etc/pve/lxc/${LXC_ID}.conf
pct start ${LXC_ID}
```
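To confirm the group assignment took effect inside the container:

```
pct exec ${LXC_ID} -- id ${USER}   # should list lxc_shares among the groups
```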
If containers fail to start, check the syslog on the host; it is accessible from the web GUI as well, under "System > System Log".
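From a host shell, the same log can be followed live, and a stubborn container can be run in the foreground with debug logging (container ID 100 and the log path are hypothetical examples):

```
journalctl -f                                     # follow the syslog while starting the container
lxc-start -n 100 -F -l DEBUG -o /tmp/lxc-100.log  # foreground start with debug output
```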
The website is here: https://www.proxmox.com/
The idea is to have a hypervisor running on the computer, then install Windows on top for gaming, plus other OSes that can boot on and off simultaneously.
Installation is straightforward: download and burn the Proxmox ISO installer onto a USB flash drive, then boot from the USB and run the installer as usual.
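On Linux, writing the image with `dd` works; `/dev/sdX` is a placeholder for the USB device (the whole device, not a partition), so double-check with `lsblk` first:

```
dd if=proxmox-ve_*.iso of=/dev/sdX bs=1M conv=fsync status=progress
```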
Several things:
If gaming / use of the GPU is a primary factor, a good alternative is to run Windows natively and host a Type-2 (application-level) hypervisor instead. This works well if the CPU has an integrated GPU, since graphics can then be shared across separate monitors; see: https://communities.vmware.com/t5/VMware-Workstation-Pro/Make-VMWare-Workstation-use-my-Nvidia-card/td-p/2751434