
Memory Modeling changes in 7.2 - some good / some bad

Perry Stupp

I finally had an opportunity to do a study with the new 7.2 console and was somewhat surprised by the results, particularly the changes to memory modeling. There hasn't been much, if anything, on Devcon about this, so I thought I would write something up. I hope you find it interesting and helpful.

With previous versions of Analyze I often found myself working to split up workloads that contain a mix of persistent (i.e. long-running) and intermittent (many short-lived) processes. A typical example is something like Oracle, where principal processes such as the archiver, logwriter and checkpoint are running all the time, even when the database is inactive. Provided you're not using MTS or one of the other connection pooling techniques, you'll also see one Oracle process for each connection to the database (with nomenclature along the lines of oracleINSTANCE), many of which last only seconds or perhaps minutes. Combining these processes into a single workload sometimes led to inconsistent results, depending on whether the algorithm determined the workload was more persistent or intermittent in nature. What's more, changes from period to period often led to inconsistent memory numbers when reported over time and occasionally produced wild, unreliable spikes.

I spent considerable time talking with Debbie Sheetz about the problem and eventually convinced her to take up the cause and resolve it. The result was a change in Analyze (which actually first appeared as a patch in 7.1.20) that classifies all non-system workloads as persistent by default.

While I was pleased to see this change in full effect in the 7.2 console, I was caught off-guard by the changes that accompanied it. Those of you who are familiar with the inner workings of memory modeling know that intermittent memory grows, to varying degrees, regardless of how you grow your workloads, whereas persistent memory grows only when you grow by user count. With all workloads now classified as persistent, one would expect that growing by call count would produce no increase in memory demand, right? As it turns out, that's not quite the case.
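To make the distinction concrete, here is a rough Python sketch of the old behavior as I understand it. The function name and the numbers are made up purely for illustration; this is not the actual Analyze algorithm, just the shape of it: intermittent memory scales with any growth, while persistent memory scales only with user-count growth.

# Hypothetical sketch of the pre-7.2 behavior - not the real implementation.
def project_memory_old(baseline_mb, classification, growth_factor, grow_by):
    """Projected memory demand in MB.
    classification: "persistent" or "intermittent"
    grow_by: "user_count" or "call_count"
    """
    if classification == "intermittent":
        # Intermittent memory grows (to varying degrees) no matter
        # which growth technique you choose.
        return baseline_mb * growth_factor
    if grow_by == "user_count":
        # Persistent memory grows only when the user count grows.
        return baseline_mb * growth_factor
    # Growing a persistent workload by call count leaves memory flat.
    return baseline_mb

print(project_memory_old(500, "persistent", 2.0, "call_count"))  # -> 500
print(project_memory_old(500, "persistent", 2.0, "user_count"))  # -> 1000.0

In other words, under the old model a persistent Oracle workload grown 2x by call count would show no additional memory demand, while the same growth applied by user count would double it.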

Memory growth is now determined solely by the "Grow Workload" control, and it behaves exactly the same whether you choose to grow by user count or by call count. While this newfound flexibility is nice (you can now choose to grow individual workloads rather than all or nothing), I was very concerned to see that "Grow Workloads" is enabled for all non-system workloads BY DEFAULT. This means that no matter which growth technique you choose (user count vs. call count), you're going to see an increase in memory. That isn't necessarily a bad thing in itself, but I believe it is a problem if only because it is an undocumented change to established behavior.
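Again as a rough illustration only (made-up function, not the real implementation), the new rule effectively reduces to this: the per-workload "Grow Workloads" setting is the only thing that matters to memory, and the growth technique doesn't enter into the result at all.

# Hypothetical sketch of the 7.2-style behavior - illustration only.
def project_memory_72(baseline_mb, grow_workload_enabled, growth_factor):
    """Projected memory demand in MB under the new rule."""
    if grow_workload_enabled:
        # Enabled by default for all non-system workloads, so memory
        # grows regardless of user-count vs. call-count growth.
        return baseline_mb * growth_factor
    return baseline_mb

print(project_memory_72(500, True, 2.0))   # -> 1000.0 (the default path)
print(project_memory_72(500, False, 2.0))  # -> 500

So the same 2x call-count growth that used to leave persistent memory flat will now double it, unless you go in and turn "Grow Workloads" off for the workloads in question.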

Given the difficult task of accurately collecting, analyzing and reporting memory (particularly given the limitations of, say, Solaris in general or AIX with global shared memory), I tend to be more conservative with my memory projections unless I have the time to analyze things more thoroughly. Personally, I'm concerned that this new default behavior will lead many less experienced users to make some less than ideal memory projections; perhaps, with the help of this article, a few people will manage to steer clear of that.

If you have any questions about this post or memory modeling in general, don't hesitate to contact me.
Regards,

Perry.