I've noticed this issue a few times over the last month, randomly affecting various people in different roles, working with different jobs and target servers.
Occasionally a deploy job will just fail for no apparent reason, citing an inability to open configuration files in the staging area for writing. There are no permission problems, and there's space in /tmp etc. When the job finishes as a failure, the icon, instead of going to a red cross, goes to an activity-style icon (blue with a magnifying glass).
It seems that simply removing the targets and re-adding them solves the problem, but I was hoping to understand what might cause this behaviour.
Also, why does a copied job retain its old name? For example, if I copy a job and then rename the copy, when it runs the logs still report its old name... this can be very disconcerting for our users.
when the jobs fail, is there anything in the deploy log ( ) on the target?
that may provide some additional info...
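also worth ruling out the basics on the target itself. a minimal sketch, assuming the staging area is an ordinary filesystem path that the job's user must write to (the path and variable name here are placeholders, not the product's actual settings):

```shell
#!/bin/sh
# Hypothetical staging path -- substitute whatever your deploy jobs actually use.
STAGING_DIR="${STAGING_DIR:-/tmp/staging}"
mkdir -p "$STAGING_DIR"

# Can the job's user actually create a file there right now?
if touch "$STAGING_DIR/.write_test" 2>/dev/null; then
    rm -f "$STAGING_DIR/.write_test"
    echo "writable: yes"
else
    echo "writable: no"
fi

# Check free space AND free inodes on the filesystem holding the staging
# area -- an exhausted inode table also produces "cannot open for writing"
# errors even when 'df -h' shows plenty of space.
df -h "$STAGING_DIR"
df -i "$STAGING_DIR"
```

if the write test passes when you run it by hand but the job still fails, that points at something transient (the agent's effective user, a stale handle, or the staging dir being recreated mid-job) rather than a plain permissions or disk-space problem.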
for the copy thing, submit a ticket about this. i agree that it's confusing; otherwise you end up with a bunch of jobs or packages that have names like "Copy of ...." or something like that.