Benchmark and Load Testing

Why Load Test

To ensure the configured system is up to the task, it makes sense to design and run specific jobs and to monitor the system under load. Those jobs should reflect the profile anticipated during day-to-day operation, especially during peak demand.

The usual questions are:

  1. How much can the system take, i.e., how many jobs can be run concurrently without major impact on system response?
  2. What is the overall system utilization, and are there any bottlenecks?
  3. Where can system performance be improved?

We distinguish between batch and dialog jobs.

Batch and Dialog

Batch jobs are those that are started, run in the background, and need no user interaction. A batch job (e.g., a report generation) is expected to complete within a certain time (several minutes at least) and, when rather complex, may be run during off-peak hours.

Dialog jobs are usually in continuous interaction with the user. Dialog jobs (e.g., order entry, status inquiry, etc.) are also run during peak hours. System response is key and should be very short (a second or two).

Typically there is a certain mix, a profile of dialog jobs and batch jobs; these mixes usually differ between peak and off-peak times.

Using appropriate tools, both batch and dialog jobs can be designed: a set of batch jobs that generate specific reports, and recorded dialog sessions that are then replayed to simulate dialog jobs. Said tools also include schedulers to start and monitor the jobs, concurrently up to a maximum count, in sequence to ensure continuous load, etc. In particular, they ensure that a dialog job with simulated user interaction closely resembles a real dialog session.
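Such a scheduler can be sketched in a few lines. This is a minimal illustration, not a real load-test tool: the job commands are placeholders, and each "job" is simply a shell command whose elapsed time is recorded while at most a fixed number run concurrently.

```python
import subprocess
import time
from concurrent.futures import ThreadPoolExecutor

def run_job(cmd):
    """Run one batch job as a shell command; return its elapsed wall-clock time."""
    start = time.monotonic()
    subprocess.run(cmd, shell=True, check=True)
    return time.monotonic() - start

def run_profile(commands, max_concurrent):
    """Run a list of job commands with at most max_concurrent running at once."""
    with ThreadPoolExecutor(max_workers=max_concurrent) as pool:
        return list(pool.map(run_job, commands))

# Example: four trivial placeholder jobs, at most two running concurrently.
times = run_profile(["sleep 0.1"] * 4, max_concurrent=2)
```

A real tool would add sequencing for continuous load and per-job logging, but the concurrency cap above is the essential knob for the load experiments described later.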

What Is Needed

Once the prepared jobs are designed and in place, each job is run individually on the idle system to determine its resource requirements:

  • elapsed time, measured from the start to the end of the job
  • system time used to start and end the job
  • processor time recorded for the job
  • main memory and pagefile usage
  • disk I/O and network I/O utilization
  • plot of all system resources before, during, and after the job.
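Several of the measurements above can be taken from the operating system itself. The following is a sketch for a Unix-like system, assuming the job is launched as a child process; the measured command is an arbitrary stand-in, and a real setup would add disk and network I/O counters from the platform's monitoring facilities.

```python
import resource
import subprocess
import time

def measure(cmd):
    """Run a job and report elapsed time, CPU time, and peak memory of children."""
    start = time.monotonic()
    before = resource.getrusage(resource.RUSAGE_CHILDREN)
    subprocess.run(cmd, shell=True, check=True)
    after = resource.getrusage(resource.RUSAGE_CHILDREN)
    return {
        "elapsed_s": time.monotonic() - start,
        # user + system CPU time consumed by the child process
        "cpu_s": (after.ru_utime - before.ru_utime)
                 + (after.ru_stime - before.ru_stime),
        # peak resident set size (kilobytes on Linux)
        "max_rss_kb": after.ru_maxrss,
    }

stats = measure("python3 -c 'sum(range(100000))'")
```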

When we run several batch jobs concurrently, we can certainly expect the elapsed time of a given job to increase eventually, while its system resource consumption (processor and I/O time, memory usage) should stay the same, at least until we reach the threshold of high system load.

In addition to the overall elapsed time, a dialog job requires more measuring points in order to observe (from the user's perspective) the responsiveness of the system. Each individual pair of (simulated) user input and resulting system reply is to be looked at. To be as realistic as possible, appropriate think times are incorporated.

Think time is the time the user usually needs to prepare the input to the system (e.g., listening to the request, moving the mouse to a specific field, typing the data, etc.); i.e., after each system response, there is a certain delay or pause before the next user input is transmitted to the system.
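In a simulation, think time is typically not a constant but drawn from a distribution, so that simulated users do not all act in lockstep. A minimal sketch, with illustrative mean and spread values:

```python
import random
import time

def think(mean_s=2.0, sd_s=0.5):
    """Pause like a user preparing the next input; return the actual pause."""
    # Draw the think time from a normal distribution, clamped at zero.
    delay = max(0.0, random.gauss(mean_s, sd_s))
    time.sleep(delay)
    return delay

# Short values here just to keep the example fast.
pause = think(mean_s=0.05, sd_s=0.01)
```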

Consequently, the following additional data is recorded:

  • elapsed time between user input and system response
  • deviation from the programmed think time.
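Both measurements can be taken around each simulated input/reply pair. In this sketch, the client call is a stand-in (`send_fn` is whatever actually submits the input and waits for the reply); timings are kept short only for the example.

```python
import time

def run_dialog_step(send_fn, planned_think_s):
    """Simulate one dialog step: think, send input, time the system reply."""
    t0 = time.monotonic()
    time.sleep(planned_think_s)           # programmed think time
    think_deviation = (time.monotonic() - t0) - planned_think_s
    t1 = time.monotonic()
    send_fn()                             # user input -> system reply
    response_s = time.monotonic() - t1
    return response_s, think_deviation

# Stand-in for a real client call; a real reply would take longer.
resp_s, dev_s = run_dialog_step(lambda: None, planned_think_s=0.05)
```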

Then we set a few goals: What would be an acceptable increase in elapsed time (batch) under load? What are acceptable response times (dialog)?
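Once measurements and goals exist, checking a load run against them is simple bookkeeping. The thresholds below (a 1.5x slowdown for batch, a two-second response limit for dialog) are placeholders; any project would pick its own numbers.

```python
def within_goals(batch_elapsed_s, baseline_s, dialog_resp_s,
                 max_batch_factor=1.5, max_resp_s=2.0):
    """Check measured times from a load run against the chosen goals."""
    # Batch: no job may exceed the idle-system baseline by more than the factor.
    batch_ok = all(t <= max_batch_factor * baseline_s for t in batch_elapsed_s)
    # Dialog: every input/reply pair must stay under the response limit.
    dialog_ok = all(t <= max_resp_s for t in dialog_resp_s)
    return batch_ok and dialog_ok

# Example run: baseline 8 s, measured 10.2 s and 11.0 s under load.
ok = within_goals([10.2, 11.0], baseline_s=8.0, dialog_resp_s=[0.4, 1.1])
```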

Then let’s run the prepared profile of batch and dialog jobs, increase the load, monitor the system utilization, and determine where and when we would reach the point of, well, “unbearability”…

(to be continued)


© March 2010 Jürgen Menge, San José
