
Having worked in Pentaho Spoon for some time, you tend to run into issues caused by overlooking minor configurations or by over-analyzing things. Below are a few of the issues I have encountered while working in the tool, along with the solutions that worked for me (which might not be the right approach in every case).

 

1. Weird data being uploaded from a transformation

Issue: When you import data from a CSV file, garbled values (strings of characters like @#! mixed with alphanumerics) sometimes get written into Remedy.

Solution: This is caused by the "Lazy conversion" checkbox being selected in the CSV file input step. Uncheck it and the data is loaded properly, without the values being mangled.

Note: Lazy conversion is genuinely useful when you only need to read data from a file and pass it straight through to a file output, without any modification.
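
If you manage many transformations, a quick way to spot the ones that still have lazy conversion switched on is to search the .ktr files directly. This is only a sketch: the directory is made up, and it assumes the CSV file input step stores the flag as a <lazy_conversion> element in the XML, which may vary between PDI versions.

     # List transformations that still have lazy conversion enabled
     # in their CSV file input step (tag name assumed, path is an example).
     grep -rl "<lazy_conversion>Y</lazy_conversion>" /opt/pentaho/transformations/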

 

2. Loading data into the CMDB relationship table

Issue: When running a transformation to load data into a relationship table based on the source instance ID and destination instance ID, the records get rejected.

Solution: In the CMDB output step there is a "Use CheckSum" checkbox; it needs to be unchecked when loading the CMDB relationship tables.

 

3. Executing the AI jobs in a Linux environment

Issue: For some reason we were not able to execute the AI jobs from the Atrium console in the mid tier. While that was being worked on, we still needed a way to run them.

Solution: We ran the jobs directly from the command line. The details are explained in the blog post below, and a minimal example follows it.

Executing Spoon jobs in Linux Environment
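
For reference, here is a minimal sketch of running a job with Kitchen, the PDI command-line job runner, on Linux. The install directory and job file name are assumptions; substitute your own paths.

     # Run a job file directly with Kitchen and basic logging
     # (install path and job name are examples only).
     cd /opt/pentaho/data-integration
     ./kitchen.sh -file=/opt/pentaho/jobs/ai_load_job.kjb -level=Basic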

 

4. Unable to run a job from the command prompt

Issue: When we tried to run jobs from the command prompt, a few of them failed to execute.

Solution: This was mainly due to spaces in the job names. There are two ways to rectify this (see the example after the list):

 

     - Either name the jobs without spaces, or separate the words with an underscore (_)

     - If there is a space in the name, enclose the job name in double quotes (" ") in the script used to execute it
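
A small sketch of both options, using made-up paths and job names:

     # Option 1: job file named without spaces
     ./kitchen.sh -file=/opt/pentaho/jobs/load_cmdb_assets.kjb
     # Option 2: job name contains spaces, so the whole path is quoted
     ./kitchen.sh -file="/opt/pentaho/jobs/Load CMDB Assets.kjb"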

 

5. Subsequent transformations in a job fail to run when a previous transformation has failed

Issue: In a job with multiple transformations, when the first transformation (or any earlier one) fails, all the successor entries are skipped and the job ends.

Solution: To avoid situations like this, set the hop between the job entries to "Unconditional" for the flow evaluation. By default it is set to "Follow when result is true".

Note: Change this setting only if there is no dependency between the transformations. Leave it unchanged if you need the previous transformation to have executed successfully!
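
If you want to double-check how the hops of a job are set without opening it in Spoon, you can look at the .kjb file itself. This is only a sketch: the file path is made up, and it assumes the hop setting is stored in an <unconditional> element, which may differ between PDI versions.

     # Show how each hop in the job is evaluated
     # ("Y" means the hop is unconditional; path is an example).
     grep "<unconditional>" /opt/pentaho/jobs/my_job.kjb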

6. Connections being reset when importing jobs

Issue: The connection information gets modified or overwritten when jobs are imported.

Solution: This is controlled by a setting in the Spoon client which, when checked, updates the connection information on import; it needs to be enabled with care, based on the requirement. To locate it, open the Spoon client and navigate to "Tools > Options"; the corresponding option needs to be unchecked.

 

Note: These are some of the things that worked for me while working with the Spoon client. They may not work the same for everyone, so decide what is best for your own setup.