I recently came across a deployment that involved a big data change. Several thousand rows were deleted and inserted in the table by the application (MDM, Informatica's Master Data Management product).
And finally there was a SQL statement to update the tables. But this update was taking hours, and even after 20 hours there was no sign of it finishing.
So our DBAs killed the script, gathered optimizer statistics for the tables/indexes involved, and profiled the SQL queries involved. Result: the update ran within an hour.
- Before running the update SQL, gather schema statistics: exec dbms_stats.gather_schema_stats('SCHEMA_NAME');
- If the SQL hangs on a particular update statement, a SQL profile can be created.
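If only a few tables were touched by the data change, gathering statistics for just those tables (and their indexes) is usually faster than a full schema gather. A minimal sketch, assuming hypothetical SCHEMA_NAME and TABLE_NAME placeholders:

exec dbms_stats.gather_table_stats(ownname => 'SCHEMA_NAME', tabname => 'TABLE_NAME', cascade => TRUE); -- cascade => TRUE also gathers stats on the table's indexes

With cascade => TRUE there is no need for a separate gather_index_stats call per index.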
So this shows how important gathering optimizer statistics and SQL profiling are.
Checking when last stats were done,
SELECT owner, table_name, last_analyzed FROM all_tables ORDER BY last_analyzed DESC NULLS LAST; -- Tables
SELECT owner, index_name, last_analyzed FROM all_indexes ORDER BY last_analyzed DESC NULLS LAST; -- Indexes
With 10g and higher versions of Oracle, the optimizer needs up-to-date statistics on tables and indexes to make "good" execution plan decisions.
How often you collect statistics is a tricky call to make but an important one.
Also generally it’s not recommended to gather statistics very frequently on the entire database unless you have a strong justification for doing that, such as if a bulk insert or big data change happens frequently on the database.
It will also depend on your application, schema and rate of data change within the schema.
Some applications require that tables have no stats so that the database falls back to rule-based execution plans.
But in general, Oracle recommends that stats be collected on tables with stale statistics.
More on ‘last analyzed’ & ‘stale stats’ here,
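From 10g on, the dictionary itself flags which tables Oracle considers stale (by default, roughly 10% of rows changed since the last gather). A quick check, assuming a hypothetical SCHEMA_NAME:

SELECT owner, table_name, stale_stats FROM dba_tab_statistics WHERE owner = 'SCHEMA_NAME' AND stale_stats = 'YES'; -- tables whose statistics Oracle has flagged as stale

The stale_stats column is populated from the table monitoring data, so it may lag a little behind the most recent DML until the monitoring info is flushed.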
You can set tables to be monitored, check their staleness, and have them analyzed if/when stale. Often that is enough; sometimes it's not.
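That monitor-and-gather-when-stale approach can be sketched as follows (SCHEMA_NAME and TABLE_NAME are placeholders; note that from 10g on, table monitoring is enabled by default when STATISTICS_LEVEL is TYPICAL, so the ALTER TABLE is usually redundant):

ALTER TABLE schema_name.table_name MONITORING; -- explicit, usually unnecessary from 10g on
exec dbms_stats.gather_schema_stats(ownname => 'SCHEMA_NAME', options => 'GATHER STALE'); -- re-gather only for tables flagged stale

Using options => 'GATHER STALE' keeps the gather cheap, since tables with fresh statistics are skipped entirely.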
It really depends on your database. For OLTP tables you may need nightly stats collection to maintain performance, whereas for a DW (data warehouse) database you don't, and would gather stats as needed, since the tables would be too large for regular analysis without affecting overall DB load and performance.
You can backup original statistics and gather new ones and even restore back the original Statistics in case you need to. You can find the script here,
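One common way to do this backup/restore (distinct from whatever the linked script does) is DBMS_STATS export/import via a user stats table. A minimal sketch, assuming a hypothetical SCHEMA_NAME and backup table name:

exec dbms_stats.create_stat_table(ownname => 'SCHEMA_NAME', stattab => 'STATS_BACKUP'); -- holding table for exported stats
exec dbms_stats.export_schema_stats(ownname => 'SCHEMA_NAME', stattab => 'STATS_BACKUP'); -- back up current stats
exec dbms_stats.gather_schema_stats('SCHEMA_NAME'); -- gather fresh stats
exec dbms_stats.import_schema_stats(ownname => 'SCHEMA_NAME', stattab => 'STATS_BACKUP'); -- roll back to the saved stats if plans regress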
Conceptually, a SQL profile is to a SQL statement what statistics are to a table or index. The database can use this auxiliary information to improve execution plans. A SQL profile contains corrections for poor optimizer estimates discovered by the SQL Tuning Advisor.
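The usual workflow for getting such a profile is to run the SQL Tuning Advisor against the problem statement and accept its recommendation. A sketch using DBMS_SQLTUNE (the sql_id and task name below are hypothetical placeholders):

DECLARE
  l_task VARCHAR2(64);
BEGIN
  l_task := dbms_sqltune.create_tuning_task(sql_id => 'abcd1234efgh5', task_name => 'tune_update_1');
  dbms_sqltune.execute_tuning_task(task_name => 'tune_update_1');
END;
/
SELECT dbms_sqltune.report_tuning_task('tune_update_1') FROM dual; -- review the advisor's findings
exec dbms_sqltune.accept_sql_profile(task_name => 'tune_update_1', replace => TRUE); -- only if the report recommends a profile

Accepting the profile does not change the SQL text; the optimizer simply applies the stored estimate corrections the next time the statement is parsed.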
Another important topic is,