You should run these commands via nexec, and between the reboots add some sleeps and a while loop that breaks when the box is accessible again (check with agentinfo or something similar). Search this community for 'reboot script' for examples of how to do that wait.
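A minimal sketch of that wait loop (the check command, timeout, and interval are all placeholders; substitute your own accessibility test, e.g. agentinfo against the target host):

```shell
# wait_for_host: poll until a check command reports the host reachable,
# or give up after a timeout.
# $1 = check command (e.g. "agentinfo myhost"), $2 = timeout seconds,
# $3 = poll interval seconds
wait_for_host() {
    _check="$1"; _timeout="${2:-600}"; _interval="${3:-15}"
    _elapsed=0
    while [ "$_elapsed" -lt "$_timeout" ]; do
        # probe the host; any zero exit status counts as "back up"
        if $_check >/dev/null 2>&1; then
            echo "host is accessible again"
            return 0
        fi
        sleep "$_interval"
        _elapsed=$((_elapsed + _interval))
    done
    echo "timed out waiting for host" >&2
    return 1
}

# Example invocation (placeholder host - substitute your own):
# wait_for_host "agentinfo mysolarisbox" 600 15
```

Call it between the reboot and the next eeprom step so the second command doesn't fire while the box is still down.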
Thanks for your reply, I got your point. I also got a reply from BMC support saying to:
1. Make a BLPackage of the script.
2. Add an external command, put your command in the command field, and choose the reboot item as the after-item-deployment option.
I followed those steps. The explanation and screenshot below show the order I'm expecting:
1. Execute the command eeprom boot-device=disk1, then reboot.
2. After the server reboots, execute the command eeprom boot-device=disk0, then reboot again.
But it's not working as expected: the server is rebooting on disk1 twice, when it should boot on disk1 only once. Could you validate whether I'm doing this correctly? If not, please suggest a fix.
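To see the intended sequence end to end, the two external commands reduce to something like the sketch below. The `eeprom` command is parameterized so the sequence can be dry-run on a non-Solaris box (pass `echo` instead); in the real package these are two separate items with an item-defined reboot between them, not one script:

```shell
# set_boot_sequence: the two BLPackage external commands, in deploy order.
# $1 = eeprom command (pass "echo" to dry-run off a Solaris box).
set_boot_sequence() {
    _eeprom="${1:-eeprom}"

    # Item 1: point OBP at disk1; the item-defined reboot fires once here.
    $_eeprom boot-device=disk1
    echo "boot-device set to disk1; reboot #1 should follow"

    # Item 2 (after the first reboot): point OBP back at disk0, reboot again.
    $_eeprom boot-device=disk0
    echo "boot-device set to disk0; reboot #2 should follow"
}
```

If the deploy produces two boots from disk1, the suspicion below (the second eeprom set not landing before the reboot) is the first thing to rule out.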
That package should do what you describe. In the deploy job, what is the reboot option set to - is it 'use item defined reboot options'?
Also, do you need a normal reboot or a reconfiguration reboot after changing this setting? Or is there possibly some commit command you need to run after the eeprom command?
Thanks for your response.
1. Yes, I'm using item-defined reboot options in the BlDeploy job.
2. A plain reboot is fine for me; there is no need to run any commit command after the eeprom command. The eeprom command changes the boot device and commits it automatically.
The server is rebooting, but not in the order described above: it takes two reboots on disk1.
Do you have the device aliases set up correctly?
Everything is configured correctly.
Can you run something after each eeprom command to verify that the setting is actually applied before the reboot?
For example, take the reboots out of the package and add some echo commands that show the eeprom settings are taking effect.
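Something along these lines might do it (a sketch only; the `eeprom` command is parameterized so it can be tried off-box, and the log path is just illustrative). Logging to a file means the evidence survives the reboot:

```shell
# log_boot_device: record the current OBP boot-device setting so you can
# confirm each eeprom command took effect before the reboot fires.
# $1 = eeprom command ("echo" for a dry run), $2 = log file.
log_boot_device() {
    _eeprom="${1:-eeprom}"
    _log="${2:-/var/tmp/eeprom-check.log}"
    {
        date
        # with no value, eeprom prints the current setting
        $_eeprom boot-device
    } >> "$_log" 2>&1
    # show the latest entry on stdout as well
    tail -n 2 "$_log"
}
```

Run it as an extra external command right after each `eeprom boot-device=...` item; comparing the timestamps in the log against the reboot times should show whether the second set is landing before or after the second boot.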