At this stage, the DBA, system designers, and developers test and fine-tune the database system for performance, integrity, concurrent access, and security constraints. During the database system test phase, the database and its end-user applications are subjected to a large number of concurrent user connections; users enter samples of actual or live data, run queries, and produce reports to simulate actual operating conditions.
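
To make the load-test idea concrete, the sketch below simulates several users querying the database concurrently and records each one's response time. It is only a minimal illustration: sqlite3 stands in for the production DBMS, and the database file, table, sample data, and query are hypothetical placeholders.

```python
# Minimal concurrent-access test sketch: each simulated user opens its own
# connection, runs a sample query, and records its response time.
import sqlite3
import threading
import time

DB_FILE = "test_load.db"      # hypothetical test database
NUM_USERS = 20                # simulated concurrent connections
results = []                  # (user_id, elapsed seconds) tuples
lock = threading.Lock()

def setup():
    # Load a small amount of sample data to query against.
    con = sqlite3.connect(DB_FILE)
    con.execute("CREATE TABLE IF NOT EXISTS orders (id INTEGER PRIMARY KEY, amount REAL)")
    con.executemany("INSERT INTO orders (amount) VALUES (?)",
                    [(i * 1.5,) for i in range(1000)])
    con.commit()
    con.close()

def simulated_user(user_id):
    con = sqlite3.connect(DB_FILE)            # one connection per user
    start = time.perf_counter()
    con.execute("SELECT COUNT(*), AVG(amount) FROM orders").fetchone()
    elapsed = time.perf_counter() - start
    con.close()
    with lock:
        results.append((user_id, elapsed))

setup()
threads = [threading.Thread(target=simulated_user, args=(i,)) for i in range(NUM_USERS)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(f"worst response time under load: {max(r[1] for r in results):.4f} s")
```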

How much testing is necessary?
The answer depends on the situation and requires good judgment and input from the DBA, systems analysts, designers, developers, and administrators, as well as users and business management. Some end users prefer a completely finished product, while others accept that minor changes can be treated as maintenance items once the system is operational. No database system goes into production 100 percent error-free; however, thorough testing should be considered a cost-effective means of delivering a quality product, because it ensures that errors affecting the integrity or accuracy of data and system functionality are corrected immediately.

Database-Testing Phase Objectives:
  • Demonstrate that the users can interact with the system successfully.
  • Verify that all system components are integrated properly and that actual processing situations are working correctly.
  • Ensure that the backup and restart capabilities of the system are adequate.
  • Confirm that the database information system can handle predicted data volumes in a timely and efficient manner.
  • Ensure that the IT staff has the needed documentation and troubleshooting instructions to operate the system properly.
  • Test database connections.
  • Identify problematic queries (see the sketch following this list).
  • Test all database procedures and scripts.
  • Identify database configuration problems.
  • Document database performance maintenance procedures and administrative tasks.
  • Gather database statistics and analyze the results to determine whether performance goals were met.
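
As an illustration of the query-related objectives above, the following sketch times a set of candidate statements and flags those that exceed an assumed performance goal. The schema, query list, and threshold are hypothetical placeholders, with sqlite3 again standing in for the production DBMS.

```python
# Sketch for identifying problematic queries: run each candidate statement,
# record its elapsed time, and flag those exceeding an assumed threshold.
import sqlite3
import time

THRESHOLD_SECONDS = 0.05      # assumed performance goal per query

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")
con.executemany("INSERT INTO orders (amount) VALUES (?)", [(i,) for i in range(5000)])

candidate_queries = [
    "SELECT COUNT(*) FROM orders",
    "SELECT amount FROM orders ORDER BY amount DESC LIMIT 10",
]

for sql in candidate_queries:
    start = time.perf_counter()
    con.execute(sql).fetchall()
    elapsed = time.perf_counter() - start
    flag = "SLOW" if elapsed > THRESHOLD_SECONDS else "ok"
    print(f"{flag:4}  {elapsed:.4f} s  {sql}")
con.close()
```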

What To Test
The IT staff often measures response time, bandwidth, throughput, and turnaround time to evaluate system performance both before and after changes to the database system requirements. Workload measurements may include the number of lines printed, the number of records accessed, and the number of transactions processed in a given time period. These measures are affected by the system's design, capabilities, and processing methods. The performance data collected is often used for cost-benefit analyses of proposed new systems or maintenance of existing systems, and to evaluate system components that are nearing the end of their economically useful lives.

Response time: the overall time interval for the completion of a single transaction. If a user’s request involves an online transaction (Internet or network access), the user usually expects a near-instant response. However, some factors that influence response time, such as system load or communications time, are outside the system designers’ control.
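
A minimal way to capture response time is to timestamp a single transaction from submission to commit, as in the sketch below (an in-memory sqlite3 database with a hypothetical table and values).

```python
# Sketch: response time for one online transaction, measured from the moment
# the request is issued until the commit completes.
import sqlite3
import time

con = sqlite3.connect(":memory:")
con.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, amount REAL)")

start = time.perf_counter()                      # user submits the request
con.execute("INSERT INTO orders (amount) VALUES (?)", (42.50,))
con.commit()                                     # transaction completes
print(f"response time: {(time.perf_counter() - start) * 1000:.2f} ms")
con.close()
```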

Bandwidth and throughput: used interchangeably by many industry experts because both are expressed as a data transfer rate over a given time interval. However, bandwidth describes the capacity a system can carry in theory, whereas throughput measures actual system performance under specific circumstances and is affected by network load and hardware efficiency. Both have practical limits: graphics-intensive systems and Web-based systems that cover a large geographical area can slow system performance and response time.
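
The distinction can be shown with a small calculation that compares a measured transfer against the rated capacity of the link; all figures below are hypothetical.

```python
# Sketch: rated bandwidth is what the link can carry in theory; throughput is
# what was actually achieved during a measured test window.
RATED_BANDWIDTH_MBPS = 100.0          # nominal link capacity

bytes_transferred = 2_500_000_000     # measured during the test window
elapsed_seconds = 300                 # length of the test window

throughput_mbps = (bytes_transferred * 8) / (elapsed_seconds * 1_000_000)
utilization = throughput_mbps / RATED_BANDWIDTH_MBPS

print(f"throughput: {throughput_mbps:.1f} Mbps ({utilization:.0%} of rated bandwidth)")
```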

Turnaround time: applies to centralized batch processing operations, such as customer billing or credit card statement processing. It measures the time between the submission of a request for information and the fulfillment of that request. Turnaround time may also be used to measure the quality of support or service for user requests; for example, when a user contacts an automated help desk system, the response should be prompt and effective.
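
A simple way to measure it is to record the time a batch request is submitted and the time it is fulfilled; the queue wait and processing step below are simulated placeholders.

```python
# Sketch: turnaround time for a batch request, from submission to fulfillment.
import time

def process_statement_batch():
    time.sleep(0.2)                   # stand-in for the actual batch run

submitted_at = time.perf_counter()    # request enters the batch queue
time.sleep(0.1)                       # simulated wait in the queue
process_statement_batch()             # request is fulfilled
print(f"turnaround time: {time.perf_counter() - submitted_at:.2f} s")
```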

Finally, the IT staff uses current performance and workload data as input to capacity planning: a process that monitors current activity and performance levels, anticipates future activity, and forecasts the resources needed to provide the desired levels of service, often through a technique called what-if analysis.
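
As a rough illustration of what-if analysis, the sketch below projects transaction volume from a measured baseline under an assumed growth rate and estimates when the current configuration's capacity would be exceeded; the baseline, growth rate, and capacity figure are hypothetical inputs.

```python
# Sketch of a what-if projection: grow the measured workload month by month
# and report when it would exceed the capacity of the current configuration.
current_tps = 120.0          # measured transactions per second today
capacity_tps = 400.0         # assumed sustainable load for current hardware
monthly_growth = 0.08        # assumed 8% workload growth per month

month = 0
projected = current_tps
while projected <= capacity_tps and month < 120:
    month += 1
    projected = current_tps * (1 + monthly_growth) ** month

print(f"projected load reaches {projected:.0f} tps in month {month}, "
      f"exceeding the assumed capacity of {capacity_tps:.0f} tps")
```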


