6 Replies Latest reply on Dec 11, 2019 2:27 PM by Carl Wilson

    AO Attachment issue on CDP and HACDP

    Mohammed Akram Shaikh

Hi Folks,

      I have a CDP and an HACDP in an AO grid. For a use case, I am attaching a file using base64 decoding and storing it on the server, but the next process, which reads the attachment from disk, is executed on the other server, so the attachment is not available and the transaction fails.


What are the possible options to solve this issue? Please suggest the best one, or another way to resolve it:

      1) Copy the same attachment to both the CDP and the HACDP (transaction time will increase).

      2) Create a shared mount point and read from it (will AO support this?)



Please advise.

        • 1. Re: AO Attachment issue on CDP and HACDP
          Mohammed Akram Shaikh

          Hi Team,


Any advice related to this issue, or another approach? How can I execute all the sub-processes of a process on only one peer while the same adapter is active on the other peer as well?

          • 2. Re: AO Attachment issue on CDP and HACDP
            Stefan Hall

We use a shared folder; it is easy to set up and works great. No changes are required in your AO processes.

            • 3. Re: AO Attachment issue on CDP and HACDP
              Harsh Mehta

              Hi Akram,


We have implemented the shared drive concept in our client environment to handle this scenario. The shared drive should be accessible from all the peers.



              Harsh Mehta

              • 4. Re: AO Attachment issue on CDP and HACDP
                Carl Wilson


If your workflow is running in one process/transaction, then you should not have an issue with storing on one CDP.

                Sounds like there is a separate process spawning between the saving of the attachment and the reading of the attachment (which you have mentioned).


                One way to solve this, as others have mentioned, is to use a "shared" folder; however, this is not always an option in some organisations.

                Alternatively, you need to look at placing all the relevant workflow in one process to stop the separation of the transactions across the CDPs.




                • 5. Re: AO Attachment issue on CDP and HACDP
                  Mohammed Akram Shaikh

                  Hi Carl,


In my parent process there are three sub-processes in total:

                  1) Write the base64 to a file, using the File adapter.

                  2) Execute a script to convert the base64.txt file to the actual file (e.g. doc or pdf), using the SSH adapter.

                  3) Write the file into the Staging form.
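For reference, the second step can be sketched in shell. This is a hypothetical illustration only; the file names and paths below are assumptions, not taken from the actual environment:

```shell
# Hypothetical sketch of the base64-to-file conversion step that the
# SSH-adapter script performs. All paths and file names are assumptions.
printf 'hello world' | base64 > /tmp/base64.txt   # stand-in for the stored base64 text file
base64 -d /tmp/base64.txt > /tmp/attachment.pdf   # decode back to the actual attachment
```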


The parent process is executed on the CDP (as the LB points only to the CDP), and after that, sub-processes one and two also execute on the CDP,

                  but the third sub-process is handed to the HACDP (by the CDP, not by the LB, since the LB points only to the CDP). As the file is stored and converted on the CDP, it is not available on the HACDP; hence the third sub-process fails, because the AO mapping holds the path to the attachment but the file is not available for the insert entry.


We are thinking of using a shared directory to solve this problem, so that in future the LB can point to both the CDP and the HACDP and the shared directory will be available to both hosts.

                  As a workaround, we have kept the adapters active only on the CDP (Remedy Actor); the SSH adapter also points to the CDP, so it is working fine.


Please suggest the best practice for the same.




                  • 6. Re: AO Attachment issue on CDP and HACDP
                    Carl Wilson


An external LB has no influence on the processing here.

                    An external LB is only used to balance incoming transactions across the CDP/HACDP.

                    Internal processing algorithms determine which peer the processing will be performed on, and as you have "sub" processing, this is what causes the algorithms to kick in and perform internal process balancing.


You would need to run all activities in one process, and not use sub-processes (which spawn separate transactions), to stop this behaviour.


If you want to keep your current workflow processing, with sub-processes, then you would need to use a "shared" directory or a symbolically linked "shared" directory.

                    TSO/AO has no issues with this, as you will supply a static path which the system can interpret as a single directory.
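As a rough illustration of that static-path idea (a sketch only; in practice the share would be an NFS/CIFS mount set up by an administrator, and all paths here are assumptions):

```shell
# Hypothetical setup: both peers reference the same static attachment path.
# In production, the shared folder would be a network mount, e.g.:
#   mount -t nfs fileserver:/export/ao_attachments /mnt/ao_attachments
# Below, a local directory stands in for the mounted share so the symlink
# technique can be demonstrated without root access.
mkdir -p /tmp/mounted_share                       # stand-in for the mounted shared folder
ln -sfn /tmp/mounted_share /tmp/ao_attachments    # static path the AO processes reference
echo "attachment bytes" > /tmp/ao_attachments/ticket123.pdf
cat /tmp/mounted_share/ticket123.pdf              # same file, visible through the share
```

Because every peer resolves the same static path, a file written by a sub-process on one peer is readable by a sub-process on the other.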



