In TrueSight Capacity Optimization (TSCO), is it possible to configure the 'Amazon Web Services - AWS API Extractor' to import data at a higher granularity than 1 hour data points?



    PRODUCT:

    TrueSight Capacity Optimization


    COMPONENT:

    Capacity Optimization


    APPLIES TO:

    TSCO 10.5 and 10.7



    QUESTION:

    In TrueSight Capacity Optimization (TSCO), is there an option available in the ETL configuration to select a finer data resolution?

    Per the AWS documentation (https://docs.aws.amazon.com/AmazonCloudWatch/latest/APIReference/API_GetMetricStatistics.html), Amazon CloudWatch retains metric data as follows:
      Data points with a period of 60 seconds (1 minute) are available for 15 days
      Data points with a period of 300 seconds (5 minutes) are available for 63 days
      Data points with a period of 3600 seconds (1 hour) are available for 455 days (15 months)

    Is it possible to run the ETL to extract CloudWatch performance data from AWS at a finer resolution (for example, 5 minutes)?


    ANSWER:

    By default, the TrueSight Capacity Optimization (TSCO) 'Amazon Web Services - AWS API Extractor' ETL imports data from AWS at a 1-hour resolution. However, the aws-metric-conf.json file that configures the CloudWatch data extraction supports a 'period' property that sets the summarization period for each metric being extracted.

    For example, to extract the CPU_UTIL metric from CloudWatch at a 5-minute granularity, the CPU_UTIL section of the aws-metric-conf.json file would look like this:

                "mapping"    : [
                    {
                        "targetMetric"    : "CPU_UTIL",
                        "period": "300",
                        "formula"    : "(CPUUtilization)/100",
                        "sourceMeters":[{
                            "awsMetric"    : "CPUUtilization",
                            "statistic"    : "Average"
                        }]
                    },


    A sample aws-metric-conf.json file configured to extract metrics at a 5-minute granularity is attached to this article.
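
    For reference, the 'period' and 'statistic' properties in the mapping correspond to the Period and Statistics parameters of the CloudWatch GetMetricStatistics API. The following Python sketch is illustrative only (it is not the ETL's internal code, and the instance ID is a placeholder); it shows what a 5-minute extraction of CPUUtilization looks like at the API level using boto3:

        # Illustrative only: a minimal boto3 call equivalent to a 5-minute
        # extraction of CPUUtilization. The instance ID is a placeholder;
        # credentials and region come from the standard AWS environment.
        from datetime import datetime, timedelta, timezone

        import boto3

        cloudwatch = boto3.client("cloudwatch")

        end = datetime.now(timezone.utc)
        start = end - timedelta(hours=1)

        response = cloudwatch.get_metric_statistics(
            Namespace="AWS/EC2",
            MetricName="CPUUtilization",   # 'awsMetric' in aws-metric-conf.json
            Dimensions=[{"Name": "InstanceId", "Value": "i-0123456789abcdef0"}],
            StartTime=start,
            EndTime=end,
            Period=300,                    # the 'period' property: 5-minute samples
            Statistics=["Average"],        # the 'statistic' property
        )

        for point in sorted(response["Datapoints"], key=lambda p: p["Timestamp"]):
            # The CPU_UTIL formula "(CPUUtilization)/100" converts percent to a ratio
            print(point["Timestamp"], point["Average"] / 100)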

    Note that lowering the defined 'period' below the default value of 3600 seconds will increase the ETL execution duration and the volume of data imported into TSCO.

    Key ETL Performance Considerations

    AWS ETL data extraction is done in sequence for VMs, EBS volumes, and Auto Scaling groups. This means that the increase in data being extracted for each group is likely to increase the elapsed extraction time for each data type and, therefore, the total run time of the ETL.

    The volume of data loaded into TSCO also increases linearly as the period is reduced. For example, at the default period of 3600 seconds, 24 samples per day are imported into TSCO for the CPU_UTIL metric, but at 300 seconds that increases to 288 samples per day (a factor of 12 increase). This increased data volume needs to be considered in relation to the sizing of the TSCO environment.
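
    As a quick sanity check on the sizing impact, the daily sample count per metric per resource is simply 86400 seconds divided by the configured period:

        # Daily samples per metric per resource for a given CloudWatch period
        for period in (3600, 300, 60):
            samples_per_day = 86400 // period
            print(f"period={period}s -> {samples_per_day} samples/day")

        # period=3600s -> 24 samples/day
        # period=300s  -> 288 samples/day  (12x the default)
        # period=60s   -> 1440 samples/day (60x the default)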
      

     


    Article Number:

    000223526


    Article Type:

    FAQ/Procedural


