There are a couple of ways to approach this within TSCO.
The longer-term approach is to create an Idea so that it can be evaluated by product management for the roadmap.
In the short term, there are a couple of options you could consider:
1. Assuming you mainly want to report the workloads in raw cores used, you can create a Formula that uses the processor count of the server running the workload to derive the cores used by that workload. This approach lets you build reports and present an analysis, but it does not allow you to build a forecast for the workload based on cores used or to derive any statistics from it.
2. If you also want to forecast or derive statistics from the workload's cores used, you can create a self-ETL to produce custom metrics containing the cores used by the workload. You may already be somewhat familiar with the self-ETL approach, as it was used to create the effective-CPU metric for the hyper-threaded systems in your environment.
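To illustrate the Formula approach, here is a minimal sketch of the underlying arithmetic: converting a workload's normalized CPU utilization into raw cores used via the host server's processor count. This is plain Python, not TSCO Formula syntax, and the function and parameter names are assumptions for illustration only.

```python
def cores_used(cpu_util_pct: float, processor_count: int) -> float:
    """Derive raw cores consumed by a workload from its normalized CPU
    utilization (percent of the server's total capacity) and the number
    of processors on the server running it."""
    return (cpu_util_pct / 100.0) * processor_count

# Example: a workload at 35% utilization on a 16-core server
# consumes roughly 5.6 raw cores.
print(cores_used(35.0, 16))
```

The actual Formula in TSCO would reference the product's own metric names for CPU utilization and processor count; the calculation itself is the same.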
Those are the options I see. However, if the real goal is to get this on the roadmap, then you should submit this request as an Idea for the TSCO product management team to review.
Thanks, Mike, for the information. I will submit an Idea for future consideration. As for the short-term options, we will look at trying one of them. I think it should be doable to recreate the data in raw cores, unlike the Effective Rate ETL, which we recently discovered does not match the effective rate calculated in the BPA/Visualizer data. We need to discuss that.
Hey Doug, out of curiosity, have you tried working around it using a Formula? That may help de-normalize the CPU utilization in a fairly automated fashion.