
    BSA 8.9 - Cleaning up orphan patch files on file server

    Yanick Girouard

      When removing irrelevant patches from a Windows patch catalog, the database objects are marked for deletion and later cleaned up (hard deleted) when the database cleanup job runs. However, the physical files on the file server (or wherever you keep the patch catalog repository) are not cleaned up. In other words, orphan files are left behind in the repository location, adding up to gigabytes over time. At least that used to be the case in previous releases.
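
      For reference, a quick read-only query along these lines (a rough sketch; it assumes SQL Server syntax, and that object_type_id 114 is the depot object type used for these patch payloads) shows how many payload locations are still live versus already marked for deletion:

      -- Rough sketch: count patch payload locations still live (is_deleted = 0)
      -- versus marked for deletion (is_deleted = 1) but not yet hard deleted.
      select do.is_deleted, count(*) as payload_locations
      from depot_object_location dol
      join depot_object do
        on dol.depot_object_id = do.depot_object_id
       and dol.depot_object_version_id = do.depot_object_version_id
      where do.object_type_id = 114
        and dol.depot_object_location_type_id = 1
      group by do.is_deleted;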

       

      There was a script (written by Bill Robinson) that could be used to clean up such orphan files. It was provided "as is" and was not part of the OOTB product. Is there still a need to use such a script to clean up orphan files, or is there now a way to clean them using the blcli cleanup commands?

       

      Also, are there any other kinds of files besides catalog patches that would become orphaned on the file server after a database cleanup job runs?

        • 1. Re: BSA 8.9 - Cleaning up orphan patch files on file server
          Jim Wilson

          To the best of my knowledge, there is still a case for running the script to remove orphan files from the File Server with BSA 8.9.

           

          That may change with the next release, which is currently being referred to as "8.9.1".

           

          If your health dashboard shows your File Server Free Space below the recommended threshold, and still decreasing, that is a sign that action is required to ensure File Server storage is not exhausted.
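
          If you want to keep an eye on it outside the dashboard, a quick check against the File Server host also works (a rough sketch; the host name and storage path are placeholders, and it assumes a Linux-based File Server):

          # Rough sketch: check free space on the File Server storage from NSH.
          # "fileserver01" and the path are placeholders - substitute your own.
          nexec fileserver01 df -h /opt/bmc/bladelogic/fileserver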

          • 2. Re: BSA 8.9 - Cleaning up orphan patch files on file server
            Yanick Girouard

            Thanks Jim,

             

            I managed to extract just the orphan catalog patch portion of the script Bill wrote, because the whole thing seemed a bit overkill for my needs.

             

            My version is a bit crude and could possibly include more error checking, but for what I need it to do, it's perfect:

             

            #!/bin/nsh
            # Suppress verbose BLCLI output so only the script's own messages are printed.
            blcli_setjvmoption -Dcom.bladelogic.cli.execute.quietmode.enabled=true
            
            
            TMPDIR=/D/Temp
            REMOTEFILEROOT=$1    # Root of the patch repository on the file server
            BACKUPLOCATION=$2    # Where orphan files get moved (instead of being deleted)
            DRYRUN=$3            # "true" = only report what would be moved
            BATCHSIZE=1000
            MINROW=0
            MAXROW=0             # Start at 0 so the first batch covers rows 1 to BATCHSIZE
            RESULTROWS=1
            QUERYRESULTFILE="${TMPDIR}/$$.queryresult.txt"
            DEPOTFILELIST="${TMPDIR}/$$.depotfilelist.txt"
            LOCALFILELIST="${TMPDIR}/$$.localfilelist.txt"
            DELTAFILELIST="${TMPDIR}/$$.deltafilelist.txt"
            
            
            echo "QUERYRESULTFILE=$QUERYRESULTFILE"
            echo "DEPOTFILELIST=$DEPOTFILELIST"
            
            
            # Page through the database in batches and build the list of payload paths
            # that are still referenced by live (non-deleted) depot objects.
            function getDepotFileList() {
              echo "Getting depot file paths (unique values)..."
              while [[ ${RESULTROWS} -ne 0 ]]; do
                MINROW=$((${MAXROW}+1))
                MAXROW=$((${MAXROW}+${BATCHSIZE}))
                #echo "MINROW=$MINROW, MAXROW=$MAXROW"
                SQLQUERY="select distinct fsPath from (select row_number() over (order by (select null as noorder)) as rn,dol.remote_path as fsPath from depot_object_location dol join depot_object do on dol.depot_object_id = do.depot_object_id and dol.depot_object_version_id = do.depot_object_version_id and do.object_type_id = 114 where do.is_latest_version = 1 and do.is_saved_explicitly = 1 and do.is_deleted = 0 and dol.is_deleted = 0 and dol.depot_object_location_type_id = 1 and dol.remote_path like '${REMOTEFILEROOT}/%' and dol.remote_path <> '${REMOTEFILEROOT}' ) as fsResultPath where rn between ${MINROW} and ${MAXROW}"
                blcli_execute Sql executeSqlCommand "${SQLQUERY}" &> /dev/null
                blcli_storeenv QUERYOUT
                # Strip the <Value> XML tags, fix a doubled slash and trim leading whitespace.
                QUERYRESULT="$(printf "${QUERYOUT}" | grep "Value" | sed "s/<Value>//;s/<\/Value>//;s/\/\//\//2;s/^[[:space:]]*//")"
                RESULTROWS=$(printf "${QUERYOUT}" | grep "numberOfRows" | cut -f2 -d= | sed -e 's/>//g' | tr -d '"')
                #echo "RESULTROWS=$RESULTROWS"
                echo "$QUERYRESULT" | awk 'NF' >> "${QUERYRESULTFILE}"
              done
              sort "${QUERYRESULTFILE}" | uniq > "${DEPOTFILELIST}"
              NUMFILES=$(wc -l "${DEPOTFILELIST}" | awk '{print $1}')
              echo "NUMBER OF DEPOT FILES=$NUMFILES"
            }
            
            
            # List the files that actually exist at the top level of the repository
            # (the catalog metadata XML files are deliberately excluded).
            function getLocalFiles() {
              echo "Getting list of local files..."
              find "${REMOTEFILEROOT}" -maxdepth 1 -type f ! -name "*.xml" -print | sort > "${LOCALFILELIST}"
              NUMFILES=$(wc -l "${LOCALFILELIST}" | awk '{print $1}')
              echo "NUMBER OF LOCAL FILES=$NUMFILES"
            }
            
            
            # Anything present locally but not referenced in the database is an orphan.
            function getDelta() {
              echo "Getting delta ..."
              comm -13 "${DEPOTFILELIST}" "${LOCALFILELIST}" > "${DELTAFILELIST}"
              NUMFILES=$(wc -l "${DELTAFILELIST}" | awk '{print $1}')
              echo "DELTA FOUND=$NUMFILES"
            }
            
            
            # Move (or, in dry-run mode, just report) each orphan file to the backup
            # location so nothing gets destroyed outright.
            function moveFiles() {
              if [[ ! -d "$BACKUPLOCATION" ]]; then
                echo "Creating backup location $BACKUPLOCATION"
                mkdir -p "$BACKUPLOCATION"
              fi

              echo "Moving files to backup location ..."
              OLD_IFS=$IFS
              IFS=$'\n'
              cat "${DELTAFILELIST}" | while read -r f; do
                if [[ $DRYRUN = "true" ]]; then
                  echo "[DRYRUN] Moving ${f} to ${BACKUPLOCATION}"
                else
                  echo "Moving ${f} to ${BACKUPLOCATION}"
                  mv "${f}" "${BACKUPLOCATION}"
                fi
              done
              IFS=$OLD_IFS
            }
            
            
            function main() {
              getDepotFileList
              getLocalFiles
              getDelta
              moveFiles
            }
            
            
            main
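
            In case anyone wants to reuse it: the script takes the repository root, the backup location and a dry-run flag as positional arguments, so the call looks something like this (the script name and paths below are just placeholders, adjust them to your environment):

            # Dry run first: only reports what would be moved
            nsh cleanup_orphan_patches.nsh /D/Patches/Repository /D/Patches/OrphanBackup true

            # Then run it for real
            nsh cleanup_orphan_patches.nsh /D/Patches/Repository /D/Patches/OrphanBackup false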
            
            
            
            
            