
HP SPT/XL User's Manual: Analysis Software

Transaction Consistency 

An important part of analyzing a specific type of transaction is
understanding whether or not the transaction behaves consistently from
one instance to the next.  Without this information you cannot tell
whether the average transaction times are typical.

Consistency of processing times for iterations of the same transaction is
also an important measure of performance from a user's standpoint, since
this represents consistency of response times for interactive
applications.  In either case, variations in system responsiveness are
not acceptable to users.

Here are some tips for checking consistency:

 *  Check transaction processing for consistency in the early stages of
    program profiling.

 *  Be sure you have enough transactions to draw meaningful conclusions.

 *  You can disregard the 10 percent of transaction instances with the
    highest processing time and the 10 percent with the lowest processing
    time because these are more likely to be non-representative extremes.
    Then you can concentrate on the remaining 80 percent of the
    transaction instances that are more representative.  For example,
    very low transaction processing times may result from a user
    canceling a particular transaction.
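
To make the trimming idea concrete, here is a small sketch in Python
(used here only for illustration; the transaction times and statistics
are hypothetical, and HP SPT/XL performs this kind of screening for
you):

     # Drop the fastest and slowest 10 percent of transaction instances
     # and summarize the representative middle 80 percent.
     def trimmed_summary(times, trim_fraction=0.10):
         ordered = sorted(times)
         cut = int(len(ordered) * trim_fraction)
         middle = ordered[cut:len(ordered) - cut]
         return {
             "instances kept": len(middle),
             "min seconds": middle[0],
             "max seconds": middle[-1],
             "avg seconds": sum(middle) / len(middle),
         }

     # Hypothetical per-instance process times (seconds) for one
     # transaction type.
     sample = [0.05, 0.31, 0.42, 0.55, 0.61, 0.78, 0.94, 1.20, 2.47, 9.80]
     print(trimmed_summary(sample))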

Next we'll look at the consistency of the PROCESS_ORDER_LINE transaction
within the order entry application.

Checking for Transaction Consistency 

The Transaction Process Time Histogram for PROCESS_ORDER_LINE is shown in
figure 4-1.

[]
Figure 4-1. Transaction Process Time Histogram

This histogram shows tabular information on processing time per
transaction for all occurrences of the transaction observed during data
collection.  Considering the exceptionally low and high values as
extremes and ignoring them, the histogram shows a variation in
processing time per instance of the PROCESS_ORDER_LINE transaction
ranging from 0.310 seconds to 2.465 seconds.  This variation suggests
further investigation is warranted since, in this case, all iterations
of this transaction perform a common user function.

Inconsistencies in processing similar transactions can be caused by the
transaction's database processing paths being dependent on the input
data values, which incurs a different number of intrinsic calls from one
instance of the transaction to the next.  Inconsistency can also be due
to the loading characteristics or the structural complexity of files or
databases.  An example of loading characteristics might be where the
times for DBFIND, DBGET (calculated), DBPUT, or DBDELETE calls to a
TurboIMAGE master data set vary depending on whether the object of the
intrinsic was a primary or a secondary entry.  An example of structural
complexity might be the random addition or deletion of sorted detail
data set entries using DBPUT and DBDELETE.

Focus on a Key Intrinsic 

Let's examine the PROCESS_ORDER_LINE transaction to identify the cause
of inconsistency between different instances of the transaction.  With
HP SPT/XL you can compare instances of a transaction type.  HP SPT/XL
divides the transaction instances into 10 groups by processing time.
These are called percentile bands.  You can then compare one percentile
band with low process times against another percentile band with high
process times and analyze the reasons for the variation.
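
As a rough model of how such percentile bands can be formed, the sketch
below (Python, illustrative only; HP SPT/XL's exact banding rules may
differ, and the input times are hypothetical) sorts the per-instance
process times and splits them into 10 equal-sized groups:

     # Divide transaction instances into 10 bands by process time
     # (an illustrative model, not HP SPT/XL's actual algorithm).
     def percentile_bands(times, bands=10):
         ordered = sorted(times)
         size = max(1, len(ordered) // bands)
         result = []
         for band in range(bands):
             start = band * size
             end = (band + 1) * size if band < bands - 1 else len(ordered)
             chunk = ordered[start:end]
             if chunk:
                 result.append((band + 1, chunk[0], chunk[-1], len(chunk)))
         return result

     # Hypothetical process times for 100 instances of one transaction.
     times = [0.31 + 0.02 * i for i in range(100)]
     for band, low, high, count in percentile_bands(times):
         print(f"band {band:2d}: {low:.3f}s - {high:.3f}s  ({count} instances)")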
[]
Figure 4-2. Individual Transaction Percentiles

The Individual Transaction Percentiles screen in figure 4-2 shows the 10
percentile bands and their process time ranges.  There appears to be
significant variation in the processing time per transaction for all
occurrences of the PROCESS_ORDER_LINE transaction.

If you compare the transaction percentiles for bands 2 and 9, you will
see the screen in figure 4-3.  This Individual Transaction Comparison
identifies those intrinsics that cause the variation.  The total
processing time per percentile range for DBGET requests varies from .322
seconds in percentile range #2 (shown as Curr on the display) to 8.678
seconds in percentile range #9 (shown as Comp on the display).  It is
clear that the variation in DBGET has the biggest impact on the
inconsistency in PROCESS_ORDER_LINE processing.  Also, the number of
DBGET calls per transaction varies from 16.8 in the lower percentile
range to 39.4 in the higher range.
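
Conceptually, the comparison screen lines up the per-intrinsic figures
from the two bands so the differences stand out.  The sketch below
models that idea in Python; only the DBGET values are taken from the
text above, and the DBFIND line is a made-up placeholder:

     # Side-by-side comparison of two percentile bands (illustrative).
     # Each entry: intrinsic name -> (DB time in band, calls per txn).
     curr = {"DBGET": (0.322, 16.8), "DBFIND": (0.050, 1.0)}  # band 2 (Curr)
     comp = {"DBGET": (8.678, 39.4), "DBFIND": (0.150, 1.0)}  # band 9 (Comp)

     for name in sorted(set(curr) | set(comp)):
         c_time, c_calls = curr.get(name, (0.0, 0.0))
         p_time, p_calls = comp.get(name, (0.0, 0.0))
         print(f"{name:8s} time {c_time:7.3f} -> {p_time:7.3f}   "
               f"calls/txn {c_calls:5.1f} -> {p_calls:5.1f}")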
[]
Figure 4-3. Individual Transaction Comparison

Let's proceed by determining where the DBGET requests originated in the
code and which data sets they were made against.

A More Detailed Look at the Intrinsic 

The HP SPT/XL Analyzer identifies where DBGET requests were made within
specific transaction types using the Detailed Intrinsic Information
screen.  Reviewing the procedures, you'll find that the dominant DBGET
request within PROCESS_ORDER_LINE is made from the vintage_available
procedure, shown in figure 4-4; the call occurs at statement number 25
of vintage_available.
[]
Figure 4-4. Detailed Intrinsic Information

Look at Database Activity 

The Detailed TurboIMAGE Data Set Information screen shown in figure 4-5
suggests that the probable object of the DBGET requests is the STOCKS
data set of HPWINE, accessed using Mode-5 DBGETs (Forward Chain Reads)
an average of 24.9 times per transaction.  Reading variable-length
chains in the STOCKS detail data set is a cause of the inconsistent
processing times for the PROCESS_ORDER_LINE transaction.
[]
Figure 4-5. Detailed TurboIMAGE Data Set Information

Fixing the Inconsistency 

In this order entry example, reading down variable-length chains
resulted from a database design decision that failed to address the need
to provide consistent response times for the PROCESS_ORDER_LINE
transaction.  Given PRODUCT_NO and VINTAGE values, the current
implementation of PROCESS_ORDER_LINE performs a chained search on the
STOCKS data set, searching on PRODUCT_NO and matching on VINTAGE.

To correct this problem, we used a composite PRODUCT_NO and VINTAGE
search item in the STOCKS data set, so a particular stock item could be
referenced with a single DBGET call.  We modified the database structure
and the program code accordingly, then monitored Order Entry again with
HP SPT/XL, storing the collected data in a new logfile, SPTLOG2.
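
A small model can make the before-and-after access patterns concrete.
In the sketch below (Python, with hypothetical data; it models only the
access pattern and is not TurboIMAGE intrinsic code), the chained search
walks every STOCKS entry for a PRODUCT_NO until the requested VINTAGE is
found, so the work depends on the chain length, while the composite-key
version retrieves the entry in a single keyed access:

     # Hypothetical in-memory model of the STOCKS detail data set.

     # Before: entries chained by PRODUCT_NO; each loop step models one
     # mode-5 (forward chained read) DBGET.
     stocks_by_product = {
         "P1001": [{"VINTAGE": v, "QTY": 12} for v in range(1980, 2005)],
     }

     def chained_lookup(product_no, vintage):
         reads = 0
         for entry in stocks_by_product[product_no]:  # one read per entry
             reads += 1
             if entry["VINTAGE"] == vintage:
                 return entry, reads
         return None, reads

     # After: entries located by a composite (PRODUCT_NO, VINTAGE) search
     # item, so one keyed access retrieves the record regardless of how
     # many vintages a product has.
     stocks_by_composite = {
         ("P1001", v): {"VINTAGE": v, "QTY": 12} for v in range(1980, 2005)
     }

     def keyed_lookup(product_no, vintage):
         return stocks_by_composite.get((product_no, vintage)), 1

     print(chained_lookup("P1001", 2003)[1], "reads with the chained search")
     print(keyed_lookup("P1001", 2003)[1], "read with the composite key")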
NOTE   In examining a type of transaction for consistent processing
       time, a common error is the assumption that there is a single
       cause for an inconsistency.  To ensure that you have pinpointed
       all possible causes for an inconsistency, restart the profiling
       process from the beginning after each code or data modification.
Verify the Improvement 

Let's compare the before and after measurements for consistency of
PROCESS_ORDER_LINE.  As mentioned previously, the data in SPTLOG1
represents the behavior of the application while processing
variable-length detail data set chains within the STOCKS data set.  The
data in SPTLOG2 reflects the measurement following the modification of
this data set to use a composite key.

From the Transaction List screen we can use the Compare function to
compare the timings for logfiles SPTLOG1 and SPTLOG2.  For example, in
PROCESS_ORDER_LINE, you can see that average processing time dropped
from .814 to .496 seconds, a reduction of about 39 percent.
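
That percentage can be verified directly from the two averages quoted
above (a one-line Python check, shown only for illustration):

     before, after = 0.814, 0.496      # average process time in seconds
     print(f"{(before - after) / before * 100:.0f} percent reduction")  # 39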
[]
Figure 4-6. Transaction List Compare

We can also reexamine the Transaction Process Time Histogram in figure
4-7 to see the improved consistency in comparison to the previous
display.  Average process time was reduced from .813 seconds, as seen in
figure 4-2, to .496 seconds.
[]
Figure 4-7. Transaction Process Time Histogram

Overall, this modification produced a significant increase in both the
consistency and the throughput of the most resource-consuming
transaction.  Next, we need to examine how much time PROCESS_ORDER_LINE
spends waiting for disk I/O, memory, and locking resources.

