1 - The property is set when the job is created, or when you manually update it on an existing job. If you set it to 90, created a bunch of jobs, and then changed the default to 365, your new jobs would be retained for 365 days, but the existing jobs would still be retained for 90 unless you manually updated the property value on them.
2,3 - AUTO_GENERATED must be 'true' for that to work, for either jobs or depot objects. I forget when that gets set.
cleanupHistoricalData removes old data in the specified domain - audit trail, job run events, etc. You can set the retention at run time, e.g. 'cleanupHistoricalData <domain> <days> null null'. The default varies by domain - Result, I believe, is based on dependencies, while AuditTrail, JobRunEvent, and JobSchedule have a default setting; I'm not sure offhand where that is set.
btw, you should be running the cleanup commands in this sequence:
blcli Delete executeRetentionPolicy
blcli Delete cleanupHistoricalData AuditTrail
blcli Delete cleanupHistoricalData JobRunEvent > /dev/null
blcli Delete cleanupHistoricalData JobSchedule
blcli Delete cleanupHistoricalData AuditResult
blcli Delete cleanupHistoricalData ComplianceResult
blcli Delete cleanupHistoricalData SnapshotResult
blcli Delete cleanupDatabase
blcli Delete cleanupAppServerCache <days>  (the minimum retention time is 3 days)
blcli Delete cleanupFileServer
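If you run that sequence regularly, a small wrapper script can keep the ordering and add basic error handling. This is just a sketch of my own: the `run` helper and the DRY_RUN switch are not part of the product, and by default it only prints what it would do (set DRY_RUN=0 on a real appserver to actually execute).

```shell
# Sketch only: wraps the cleanup sequence above. DRY_RUN=1 (the default)
# just prints each command instead of executing it via blcli.
DRY_RUN=${DRY_RUN:-1}

run() {
    if [ "$DRY_RUN" -eq 1 ]; then
        echo "would run: blcli $*"
    else
        # Stop on the first failure so later steps don't run against a
        # half-cleaned database.
        blcli "$@" || { echo "failed: blcli $*" >&2; exit 1; }
    fi
}

run Delete executeRetentionPolicy
run Delete cleanupHistoricalData AuditTrail
run Delete cleanupHistoricalData JobRunEvent
run Delete cleanupHistoricalData JobSchedule
run Delete cleanupHistoricalData AuditResult
run Delete cleanupHistoricalData ComplianceResult
run Delete cleanupHistoricalData SnapshotResult
run Delete cleanupDatabase
run Delete cleanupAppServerCache 3   # minimum retention is 3 days
run Delete cleanupFileServer
```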
So for jobs and depot objects it's something like this? (pseudo-SQL):
delete from job where job.auto_generated = true and SYSDATE > job.last_modified + job.results_retention_time
I guess the rest of my questions are all related. In this case, what happens to the job_runs in the database associated with the deleted jobs? Are those deleted at the same time? And when the job_runs are deleted, are the associated job_run_events also deleted? Probably not all that important as long as items with auto_generated are not affected as we would rarely run a job or use a depot object from autoremediation more than once.
And the last thing: while I've filed a ticket about this, I wonder if anyone else has had the issue. We encountered a bug where aborting a Windows Patch Analysis job would sometimes completely flood the appserver logs and the job_run_event table with records saying "Task has timed out, canceling command execution", starting 10 hours after the job was started. We had this happen a few times without noticing, and it ended up adding right around 100 million rows to the job_run_event table, with one job run alone accounting for about 45 million of those. Has anyone ever seen this, and did you have any problems cleaning it up?
I don't know if it uses last_modified or date_created for the auto_generated cleanup.
If the job is deleted, the job runs should be removed, but the job_run_event data will still be in the DB AFAIK, until you run cleanupHistoricalData on the JobRunEvent domain.
Your table spam issue is fixed in SP8.
Can you post the issue #?
Is there a way to preview what's going to be deleted? If I made a smart group for jobs/depot objects where AUTO_GENERATED = true, would that be a complete list of possible targets?
The ticket is ISS03768656. Glad to see we weren't the only ones to hit the issue, at least.
You could make the smart group to see what depot objects and jobs would get deleted.
That's not going to show the job runs, though (nothing will). You could see which jobs have which retention policy with a smart group.
What is the easiest way to set the retention policy on extant jobs? I see
Job recursivelySetProperty "GroupName" "Property" "Value"
If I were to run this as:
blcli_execute Job recursivelySetProperty "/" "RESULTS_RETENTION_TIME" 365
set Cleanup AutoGeneratedRetentionTime 365
blcli_execute Delete cleanupHistoricalData [audittrail,jobrunevent,jobschedule,auditresult,complianceresult,snapshotresult] 365 15000 BLAdmins
That should reduce it to only cleaning up auto-remediation jobs, job runs, and historical data more than a year old?
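The bracketed domain list above isn't literal blcli syntax, so it has to be expanded into one cleanupHistoricalData call per domain. A minimal sketch, shown as a dry run that just prints each command (drop the leading `echo` to execute, and double-check the exact domain spellings for your release):

```shell
# Domains as named earlier in this thread; verify spellings for your release.
DOMAINS="AuditTrail JobRunEvent JobSchedule AuditResult ComplianceResult SnapshotResult"

for domain in $DOMAINS; do
    # Dry run: remove the leading echo to actually run the cleanup.
    echo blcli_execute Delete cleanupHistoricalData "$domain" 365 15000 BLAdmins
done
```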
Or not; apparently Job recursivelySetProperty doesn't allow setting integer values.
If RESULTS_RETENTION_TIME has no existing value and you configure a default value in the property dictionary, then all your jobs will take this value.
You can also set the value based on job sub class, just have a look in the property dictionary.
Right now we have about 10000 jobs with RESULTS_RETENTION_TIME all set to 90, and my understanding is that altering the default value for the Job class or its subclasses would have no effect on these existing jobs.
Try running the blcli against an actual group, not '/'. There should be an 'All Jobs' smart group.
After you set the property, you need to set the auto-gen flag, then run blcli Delete executeRetentionPolicy, then the various cleanup commands.
The Job recursivelySetProperty function wasn't working at all (it errored before it even started), presumably because it was failing to convert the string "365" to the integer 365.
I just tried as you suggested with blcli_execute Job findAllByGroupId $GroupID false
where GroupID was the ID of the 'All Jobs' smart group, and got:
Command execution failed. java.lang.OutOfMemoryError: Java heap space
which seems to happen frequently when attempting to use the CLI (and makes it tempting to use SQL against the database directly).
You should increase the heap of the JVM. Are you using a 64-bit or 32-bit JVM? And are you using the 'blcli' or the performance commands (blcli_execute)?
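If the 64-bit side still runs out of heap, one generic lever is the standard JVM environment variable below. This is an assumption on my part: it works for any HotSpot-based launcher that honors _JAVA_OPTIONS, but your BSA release may read its heap size from its own config instead, so check the appserver docs.

```shell
# Hypothetical: ask the JVM for a larger max heap before invoking the CLI.
# _JAVA_OPTIONS is read by HotSpot JVMs at startup; an official BSA config
# knob, if one exists for your version, should be preferred over this.
export _JAVA_OPTIONS="-Xmx2g"
```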
I tried from my laptop (32-bit) and from the appserver itself (64-bit). If I restrict it to only Batch Jobs (just under 1,200 jobs), it comes up.
I'm using blcli_execute, as I don't think Job findAllByGroupId can be used from blcli.