Table of Contents for III. Deployment
Optimizing Oracle Performance
by Cary Millsap with Jeff Holt
Dedication
A Note Regarding Supplemental Files
Foreword
Preface
Why I Wrote This Book
Audience for This Book
Structure of This Book
Which Platform and Version?
What This Book Is and Is Not
About the Tools, Examples, and Exercises
Citations
Conventions Used in This Book
Comments and Questions
Acknowledgments
I. Method
1. A Better Way to Optimize
1.1. “You’re Doing It Wrong”
1.2. Requirements of a Good Method
1.3. Three Important Advances
1.3.1. User Action Focus
1.3.2. Response Time Focus
1.3.3. Amdahl’s Law
1.3.4. All Together Now
1.4. Tools for Analyzing Response Time
1.4.1. Sequence Diagram
1.4.2. Resource Profile
1.5. Method R
1.5.1. Who Uses the Method
1.5.1.1. The abominable smokestack
1.5.1.2. The optimal performance analyst
1.5.1.3. Your role
1.5.2. Overcoming Common Objections
1.5.2.1. “But my whole system is slow”
1.5.2.2. “The method only works if the problem is the database”
1.5.2.3. “The method is unconventional”
1.5.3. Evaluation of Effectiveness
2. Targeting the Right User Actions
2.1. Specification Reliability
2.1.1. The System
2.1.2. Economic Constraints
2.2. Making a Good Specification
2.2.1. User Action
2.2.2. Identifying the Right User Actions and Contexts
2.2.3. Prioritizing the User Actions
2.2.4. Determining Who Will Execute Each Action and When
2.3. Specification Over-Constraint
3. Targeting the Right Diagnostic Data
3.1. Expectations About Data Collection
3.2. Data Scope
3.2.1. Scoping Errors
3.2.2. Long-Running User Actions
3.2.3. “Too Much Data” Is Really Not Enough Data
3.3. Oracle Diagnostic Data Sources
3.4. For More Information
4. Targeting the Right Improvement Activity
4.1. A New Standard of Customer Care
4.2. How to Find the Economically Optimal Performance Improvement Activity
4.3. Making Sense of Your Diagnostic Data
4.4. Forecasting Project Net Payoff
4.4.1. Forecasting Project Benefits
4.4.1.1. Monetizing the benefits
4.4.1.2. If you can’t monetize the benefits
4.4.2. Forecasting Project Cost
4.4.3. Forecasting Project Risk
II. Reference
5. Interpreting Extended SQL Trace Data
5.1. Trace File Walk-Through
5.2. Extended SQL Trace Data Reference
5.2.1. Trace File Element Definitions
5.2.1.1. Cursor numbers
5.2.1.2. Session identification and timestamps
5.2.1.3. Application identification
5.2.1.4. Cursor identification
5.2.1.5. Database calls
5.2.1.6. Wait events
5.2.1.7. Bind variables
5.2.1.8. Row source operations
5.2.1.9. Transaction end markers
5.2.1.10. Reference summary
5.2.2. Oracle Time Units
5.3. Response Time Accounting
5.3.1. Time Within a Database Call
5.3.2. Time Between Database Calls
5.3.3. Recursive SQL Double-Counting
5.3.3.1. Parent-child relationships
5.3.3.2. Recursive statistics
5.4. Evolution of the Response Time Model
5.5. Walking the Clock
5.5.1. Oracle Release 8 and Prior
5.5.2. Oracle Release 9
5.5.3. Clock Walk Formulas
5.6. Forward Attribution
5.6.1. Forward Attribution for Within-Call Events
5.6.2. Forward Attribution for Between-Call Events
5.7. Detailed Trace File Walk-Through
5.8. Exercises
6. Collecting Extended SQL Trace Data
6.1. Understanding Your Application
6.2. Activating Extended SQL Trace
6.2.1. Tracing Your Own Source Code
6.2.2. Tracing Someone Else’s Source Code
6.2.2.1. Triggering a session to activate its own trace
6.2.2.2. Activating trace from a third-party session
6.3. Finding Your Trace File(s)
6.3.1. Trace File Names
6.3.2. Simple Client-Server Applications
6.3.3. Oracle Parallel Execution
6.3.4. Oracle Multi-Threaded Server
6.3.5. Connection-Pooling Applications
6.3.6. Some Good News
6.4. Eliminating Collection Error
6.4.1. Time Scope Errors at Trace Activation
6.4.1.1. Missing wait event data at trace activation
6.4.1.2. Missing database call data at trace activation
6.4.1.3. Excess database call data at trace activation
6.4.2. Missing Time at Trace Deactivation
6.4.3. Incomplete Recursive SQL Data
6.5. Exercises
7. Oracle Kernel Timings
7.1. Operating System Process Management
7.1.1. The syscall Transition
7.1.2. The interrupt Transition
7.1.3. Other States and Transitions
7.2. Oracle Kernel Timings
7.3. How Software Measures Itself
7.3.1. Elapsed Time
7.3.2. CPU Consumption
7.4. Unaccounted-for Time
7.5. Measurement Intrusion Effect
7.6. CPU Consumption Double-Counting
7.7. Quantization Error
7.7.1. Measurement Resolution
7.7.2. Definition of Quantization Error
7.7.3. Complications in Measuring CPU Consumption
7.7.3.1. How gettimeofday works
7.7.3.2. How getrusage works
7.7.4. Detection of Quantization Error
7.7.5. Bounds of Quantization Error
7.8. Time Spent Not Executing
7.8.1. Instrumenting the Experiment
7.8.2. Process States and Transitions Revisited
7.9. Un-Instrumented Oracle Kernel Code
7.9.1. Effect
7.9.2. Trace Writing
7.10. Exercises
8. Oracle Fixed View Data
8.1. Deficiencies of Fixed View Data
8.1.1. Too Many Data Sources
8.1.2. Lack of Detail
8.1.3. Measurement Intrusion Effect of Polling
8.1.4. Difficulty of Proper Action-Scoping
8.1.5. Difficulty of Proper Time-Scoping
8.1.6. Susceptibility to Overflow and Other Errors
8.1.7. Lack of Database Call Duration Data
8.1.8. Lack of Read Consistency
8.2. Fixed View Reference
8.2.1. V$SQL
8.2.2. V$SESS_IO
8.2.3. V$SYSSTAT
8.2.4. V$SESSTAT
8.2.5. V$SYSTEM_EVENT
8.2.6. V$SESSION_EVENT
8.2.7. V$SESSION_WAIT
8.3. Useful Fixed View Queries
8.3.1. Tom Kyte’s Test Harness
8.3.2. Finding a Fixed View Definition
8.3.3. Finding Inefficient SQL
8.3.4. Finding Where a Session Is Stuck
8.3.5. Finding Where a System Is Stuck
8.3.6. Approximating a Session’s Resource Profile
8.3.7. Viewing Waits System-Wide
8.3.7.1. The “idle events” problem
8.3.7.2. The denominator problem
8.3.7.3. Infinite capacity for waiting
8.3.7.4. Idle events in background sessions
8.3.7.5. Targeting revisited
8.4. The Oracle “Wait Interface”
8.5. Exercises
9. Queueing Theory for the Oracle Practitioner
9.1. Performance Models
9.2. Queueing
9.2.1. Queueing Economics
9.2.2. Queueing Visualized
9.3. Queueing Theory
9.3.1. Model Input and Output Values
9.3.1.1. Arrivals and completions
9.3.1.2. Service channels, utilization, and stability
9.3.1.3. Service time and service rate
9.3.1.4. Queueing delay and response time
9.3.1.5. Maximum effective throughput
9.3.1.6. Cumulative distribution function (CDF) of response time
9.3.2. Random Variables
9.3.2.1. Expected value
9.3.2.2. Probability density function (pdf)
9.3.2.3. Using the pdf
9.3.2.4. Why understanding distribution is important
9.3.3. Queueing Theory Versus the “Wait Interface”
9.3.3.1. Oracle wait times
9.3.3.2. Differences in queueing theory notation
9.4. The M/M/m Queueing Model
9.4.1. M/M/m Systems
9.4.2. Non-M/M/m Systems
9.4.3. Exponential Distribution
9.4.3.1. Poisson-exponential relationship
9.4.3.2. Testing for fit to exponential distribution
9.4.3.3. A program to test for exponential distribution
9.4.4. Behavior of M/M/m Systems
9.4.4.1. Multi-channel scalability
9.4.4.2. The knee
9.4.4.3. Response time fluctuations
9.4.4.4. Parameter sensitivity
9.4.5. Using M/M/m: Worked Example
9.4.5.1. Suitability for modeling with M/M/m
9.4.5.2. Computing the required number of CPUs
9.4.5.3. What we can learn from an optimistic model
9.4.5.4. Negotiating the negotiable parameters
9.4.5.5. Using Goal Seek in Microsoft Excel
9.4.5.6. Sensitivity analysis
9.5. Perspective
9.6. Exercises
III. Deployment
10. Working the Resource Profile
10.1. How to Work a Resource Profile
10.1.1. Work in Descending Response Time Order
10.1.1.1. Why targeting is vital
10.1.1.2. Possible benefits of low-return improvements
10.1.2. Eliminate Unnecessary Calls
10.1.2.1. Why workload elimination works so well
10.1.2.2. Supply and demand in the technology stack
10.1.2.3. How to eliminate calls
10.1.2.4. Thinking in a bigger box
10.1.3. Eliminate Inter-Process Competition
10.1.3.1. How to attack a latency problem
10.1.3.2. How to find competing workload
10.1.4. Upgrade Capacity
10.2. How to Forecast Improvement
10.3. How to Tell When Your Work Is Done
11. Responding to the Diagnosis
11.1. Beyond the Resource Profile
11.2. Response Time Components
11.2.1. Oracle Pseudoevents
11.2.1.1. CPU service
11.2.1.2. unaccounted-for
11.2.2. No Event Is Inherently “Unimportant”
11.2.2.1. Responding to large SQL*Net response time contributions
11.2.2.2. Responding to large response time contributions from other events
11.3. Eliminating Wasteful Work
11.3.1. Logical I/O Optimization
11.3.1.1. Why LIO problems are so common
11.3.1.2. How to optimize SQL
11.3.2. Parse Optimization
11.3.3. Write Optimization
11.4. Attributes of a Scalable Application
12. Case Studies
12.1. Case 1: Misled by System-Wide Data
12.1.1. Targeting
12.1.2. Diagnosis and Response
12.1.3. Results
12.1.4. Lessons Learned
12.2. Case 2: Large CPU Service Duration
12.2.1. Targeting
12.2.2. Diagnosis and Response
12.2.3. Results
12.2.4. Lessons Learned
12.3. Case 3: Large SQL*Net Event Duration
12.3.1. Targeting
12.3.2. Diagnosis and Response
12.3.3. Results
12.3.4. Lessons Learned
12.4. Case 4: Large Read Event Duration
12.4.1. Targeting
12.4.2. Diagnosis and Repair
12.4.3. Results
12.4.4. Lessons Learned
12.5. Conclusion
IV. Appendixes
Glossary
A. Greek Alphabet
B. Optimizing Your Database Buffer Cache Hit Ratio
C. M/M/m Queueing Theory Formulas
D. References
Index
About the Authors
Colophon
Copyright
Part III. Deployment
Chapter 10. Working the Resource Profile
Chapter 11. Responding to the Diagnosis
Chapter 12. Case Studies