If you start the PXE server after the clustered IP is online, does it work? And when does this actually fail: during failover? How does failover happen, does the IP get bound on the secondary node's NIC first, and then the PXE service starts up?
And is PXE configured correctly to co-exist with a DHCP server (option 60 plus the setting at PXE server startup)?
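(For reference, option 60 co-existence usually means the regular DHCP server advertises the PXEClient vendor class so clients know a separate PXE/proxyDHCP service supplies the boot information. A hypothetical ISC dhcpd.conf fragment, with example subnet/range values that are not taken from this setup:)

```
# dhcpd.conf -- hypothetical fragment; subnet and range values are examples
subnet 10.210.254.0 netmask 255.255.255.0 {
    range 10.210.254.10 10.210.254.100;
    # Option 60: tells PXE clients that a separate PXE service
    # (proxyDHCP, conventionally on UDP 4011) supplies the boot info
    option vendor-class-identifier "PXEClient";
}
```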
The setup is that each machine in the cluster has a NIC connected to our build LAN. Whichever node in the cluster is active and running PXE brings up a virtual IP on that interface.
So for example, our primary node's eth0 interface is 10.210.254.192, and the virtual IP, which is clustered and moves between machines depending on which one is active, is 10.210.254.254; it gets brought up as eth0:0 on the active node.
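(To make the alias concrete: the cluster software typically just brings eth0:0 up directly, but the equivalent static definition in the RHEL style would be something like the following, with ONBOOT left off since the cluster, not init, manages the address:)

```
# /etc/sysconfig/network-scripts/ifcfg-eth0:0 -- illustrative RHEL-style
# alias definition for the clustered address; netmask is an assumption
DEVICE=eth0:0
IPADDR=10.210.254.254
NETMASK=255.255.255.0
ONBOOT=no    # the cluster software, not init, brings this up
```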
If PXE is brought online after the eth0:0 interface is up, it doesn't work and we get the PXE-E55 error. However, I found that if I manually bring up PXE first and then the clustered interface, it does work, but of course PXE is then bound only to the IP of eth0 and not eth0:0.
I tried changing pxe.conf so that interface_to_bind is set to eth0:0, but it doesn't seem to make a difference.
However, since posting this message I did find that setting the interface in pxe.conf to "all" does work. The problem then is that PXE also answers requests on the public LAN, which we don't want.
I'm looking at implementing some iptables rules to prevent PXE packets being sent out on anything but the build LAN interface.
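For what it's worth, a sketch of the kind of rules I mean, assuming the public LAN is on eth1 and the PXE service listens on the conventional proxyDHCP port, UDP 4011 (plus TFTP on UDP 69); the interface name and ports are assumptions, so check them against your setup:

```
# Drop PXE-related traffic arriving on the public interface (assumed eth1),
# so the daemon never sees, and never answers, requests from that LAN.
iptables -A INPUT -i eth1 -p udp --dport 4011 -j DROP
iptables -A INPUT -i eth1 -p udp --dport 69 -j DROP
```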
File a defect with support on the eth0:0 binding issue if you can.
Does the 'all' setting also work in the PM GUI config?
Yes, I believe "all" is valid when entered in the PM Config options GUI menu.
I tried this on a virtual machine running RHEL4. What I get is that it always binds to any active interface I have, regardless of what's set in the DB or in pxe.conf, so I think limiting access to the port via iptables is the best bet.
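That behavior matches ordinary socket semantics, which is also why "all" picks up the eth0:0 alias: a UDP socket bound to the wildcard address receives datagrams addressed to any local IP, including addresses added after the bind, while a socket bound to one specific address only sees traffic for that address. A minimal generic illustration (not the PXE daemon's actual code):

```python
import socket

# A wildcard bind ("0.0.0.0", like interface = all in pxe.conf) receives
# datagrams sent to any local address; port 0 lets the OS pick a free port.
wildcard = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
wildcard.bind(("0.0.0.0", 0))
wildcard.settimeout(5)
port = wildcard.getsockname()[1]

# Send a datagram to the loopback address; the wildcard socket still sees it.
sender = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sender.sendto(b"discover", ("127.0.0.1", port))

data, addr = wildcard.recvfrom(1024)
print(data.decode())  # discover
wildcard.close()
sender.close()
```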