Discussion:
Snapshot too old from READ-ONLY table (data pump export)
Brad Peek
2011-07-15 17:24:59 UTC
Permalink
A table-level export (via Data Pump) is failing with an ORA-01555. The curious thing (to me, anyway) about this is that the tablespace containing the table is set to READ-ONLY, so I don't see why anything would be needed from the UNDO segments.

Before I open an SR (SR = serious run-around), can someone help me understand why the export might need a "before" image of a block that hasn't changed? The export is also taking much longer than I would have expected (> 10 hours before failing), so there may be multiple issues at play here.

Could it be the Data Pump master table that is getting the ORA-01555 (seems unlikely)? If so, how would I confirm that, and how would I get around it even if that is the case?

This error is repeatable. I first got it trying to export the whole schema, so I decided to try a single-table export (details below).
 
From the export log:
--------------------
Starting "SYSTEM"."SYS_EXPORT_TABLE_01":  system/******** parfile=expdp_mic_air_sp_send.parfile
Estimate in progress using STATISTICS method...
Processing object type TABLE_EXPORT/TABLE/TABLE_DATA
.  estimated "MIC_AIR"."SP_SEND"                         119.5 GB
Total estimation using STATISTICS method: 119.5 GB
Processing object type TABLE_EXPORT/TABLE/TABLE
Processing object type TABLE_EXPORT/TABLE/GRANT/OWNER_GRANT/OBJECT_GRANT
Processing object type TABLE_EXPORT/TABLE/INDEX/INDEX
Processing object type TABLE_EXPORT/TABLE/CONSTRAINT/CONSTRAINT
Processing object type TABLE_EXPORT/TABLE/INDEX/STATISTICS/INDEX_STATISTICS
Processing object type TABLE_EXPORT/TABLE/STATISTICS/TABLE_STATISTICS
ORA-31693: Table data object "MIC_AIR"."SP_SEND" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-01555: snapshot too old: rollback segment number 46 with name "_SYSSMU46_1630464369$" too small
Master table "SYSTEM"."SYS_EXPORT_TABLE_01" successfully loaded/unloaded
******************************************************************************
Dump file set for SYSTEM.SYS_EXPORT_TABLE_01 is:
  /u01/app/datapump_dir/expdp_mic_air_sp_send_01.dmp
  /u01/app/datapump_dir/expdp_mic_air_sp_send_02.dmp
Job "SYSTEM"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 06:09:00

The parameter file is:
---------------------
$ cat expdp_mic_air_sp_send.parfile
tables=mic_air.sp_send
directory=data_pump_dir
dumpfile=expdp_mic_air_sp_send_%U.dmp  
filesize=10737418240                   
logfile=expdp_mic_air_sp_send.log
estimate=statistics
compression=all

From the instance ALERT LOG:
---------------------------------
Fri Jul 15 06:08:47 2011
ORA-01555 caused by SQL statement below (SQL ID: 1sqssb6vhvr6r, SCN: 0x000c.05716ebf):
SELECT * FROM RELATIONAL("MIC_AIR"."SP_SEND")

Note that in the above output from the instance alert log, the statement is referencing a function named RELATIONAL.    That could be a clue, but I don't think I have seen that before (must be a data pump thing).
 
The table doesn't contain any LOB columns:
--------------------------------------------
***@shareprd1> desc mic_air.sp_send
 Name                                      Null?    Type
 ----------------------------------------- -------- ----------------------------
 OID_SP_SEND                               NOT NULL NUMBER(10)
 MAILING_ID                                         VARCHAR2(20)
 RECIPIENT_TYPE                                     VARCHAR2(20)
 CONTACT_SOURCE                                     NUMBER(10)
 REFERENCE_NUM                                      VARCHAR2(11)
 EMAIL_ADDRESS                                      VARCHAR2(80)
 EVENT_TYPE                                         VARCHAR2(20)
 WHEN_SENT                                          DATE
 CAMPAIGNID                                         NUMBER(15)
 OFFERID                                            NUMBER(15)
 MAILING_NAME                                       VARCHAR2(120)
 SUBJECT_LINE                                       VARCHAR2(120)
 OID_CUSTOMER                                       NUMBER(10)
 ADDUID                                    NOT NULL VARCHAR2(8)
 WHEN_ADDED                                NOT NULL DATE
 LASTUID                                   NOT NULL VARCHAR2(8)
 WHENLASTUPDATE                            NOT NULL DATE
Bobak, Mark
2011-07-15 17:59:18 UTC
Permalink
Hi Brad,

Here's Tom Kyte's discussion on ORA-1555 on objects in a read only tablespace:
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:895410916429

Hope it helps,

-Mark


Brad Peek
2011-07-15 18:14:31 UTC
Permalink
I should have mentioned that the tablespace was placed in read-only mode over two weeks ago. The explanation in the Tom Kyte article that you referenced leads me to think that would be enough time to avoid still being affected by the delayed block writes.

Sent from my iPad
David Roberts
2011-07-18 13:29:47 UTC
Permalink
The problem with delayed block cleanout only occurs after the tablespace was placed in read-only mode, and only once the undo has been aged out.

The fact that the tablespace is read only is what stops Oracle from performing the cleanout.

The amount of elapsed time since the tablespace was made read only isn't a factor.

Dave
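
[Editor's note: the sequence Dave describes can be sketched as follows. The staging table and tablespace names are hypothetical; this is an illustration of the failure mode, not a reproduction script.]

```sql
-- 1. A large load commits, but the touched blocks are never revisited,
--    so their ITL entries still point at the undo segment (no cleanout).
INSERT /*+ APPEND */ INTO mic_air.sp_send SELECT * FROM staging_sp_send;
COMMIT;

-- 2. The tablespace goes read only before anything rescans those blocks.
--    From here on, Oracle can no longer write the cleanout back.
ALTER TABLESPACE mic_air_data READ ONLY;

-- 3. Weeks later the undo segment has been reused, so a long consistent
--    read (such as expdp's) can no longer establish the commit SCN for
--    those ITL entries and raises ORA-01555.
```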
Brad Peek
2011-07-18 18:14:16 UTC
Permalink
Thanks to you and others, I have had some informative posts that have helped me understand why this may be happening. One suggested workaround was to speed up the export (e.g. run it in parallel), which might work as long as the export completes before the needed undo has been overwritten in the UNDO segments.

I do intend to try that. Does anyone have a different workaround to suggest?

Another workaround would be helpful because I can envision cases where the size of the segment being exported and a high level of update activity in the system could make that approach very much a hit-or-miss proposition.

The DB version is 11.2.

Sent from my iPad
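
[Editor's note: the parallel variant Brad mentions would only change a couple of lines in the original parfile. A sketch, with PARALLEL=4 as an arbitrary example value — PARALLEL should not exceed the number of dump files the %U template can produce:]

```
tables=mic_air.sp_send
directory=data_pump_dir
dumpfile=expdp_mic_air_sp_send_%U.dmp
filesize=10737418240
logfile=expdp_mic_air_sp_send.log
estimate=statistics
compression=all
parallel=4
```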
Michael McMullen
2011-07-18 18:53:24 UTC
Permalink
Force a full table scan (FTS) on the table and let it hit the error instead of your export. This can be done either via stats gathering or a query.
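
[Editor's note: a sketch of both routes Michael mentions. The FULL hint matters because an unhinted COUNT(*) may be satisfied from an index without ever touching the table blocks:]

```sql
-- Option 1: a query hinted to full-scan the table
SELECT /*+ FULL(t) */ COUNT(*) FROM mic_air.sp_send t;

-- Option 2: stats gathering, which also reads every block at 100%
BEGIN
  DBMS_STATS.GATHER_TABLE_STATS(ownname          => 'MIC_AIR',
                                tabname          => 'SP_SEND',
                                estimate_percent => 100);
END;
/
```

Note that on an already read-only tablespace neither can persist the cleanout; the point is to take the ORA-01555 hit (or confirm its absence) cheaply rather than ten hours into an export.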



Mark W. Farnham
2011-07-18 19:45:35 UTC
Permalink
There have been a variety of threads about this topic. IF you can stop people from updating the segments in the tablespace you are going to make read only, BEFORE you actually make it read only, for long enough to force full table scans visiting all the blocks, then delayed cleanout is performed. Hint: count(*) on a table with an index on a NOT NULL column is not good enough, because it will bypass the table and just use the smallest index guaranteed to give the right answer. Then, once you make the tablespace read only, you should be able to read its blocks into cache without needing undo to process them.
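
[Editor's note: the pre-read-only dance described above, sketched with hypothetical names:]

```sql
-- 1. Stop all DML against segments in the tablespace (application quiesce).

-- 2. Full-scan every table so delayed block cleanout actually happens;
--    the FULL hint keeps the optimizer from answering COUNT(*) from an
--    index on a NOT NULL column.
SELECT /*+ FULL(t) */ COUNT(*) FROM mic_air.sp_send t;

-- 3. Only after every table in the tablespace has been scanned:
ALTER TABLESPACE mic_air_data READ ONLY;  -- tablespace name is hypothetical
```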



Oracle is brilliantly lazy about avoiding work until it is logically required. That is usually a good thing for performance.



My attempts to get Oracle to add a “CLEAN” option when setting tablespaces read only in the early 1990s did not get enough votes at Oracle VLDB. Why you can get snapshots too old reading from read-only tablespaces has been explained adequately in this thread, but it is still ironic and counterintuitive given the terminology. With a “CLEAN” option, Oracle could certainly do this more efficiently than we can by figuring out all the required queries; it could simply undertake the task at hand and make all the blocks in the tablespace clean.



They could also presumably get it exactly right for the general case, whereas from the user's viewpoint you have to do the dance each time to make sure you are doing it correctly. (Think about blocks lying around in the buffer cache that are okay for a new query, so you still don't get the cleanout, even with a correctly driven full table scan.) They might even consider a new state, CLEANING, in place of READ ONLY, that prevented new changes to the objects in the tablespace via SQL while the cleanout task took place (presumably at a controllable degree of parallelism). If they did it as a special task they wouldn't even need to produce the blocks into the shared buffer cache; PGA in the special task would work just fine.



Please note this is not about how Oracle works, but rather about how I wish it worked.



Regards,



Mark



Chris Dunscombe
2011-07-18 08:56:17 UTC
Permalink
Hi,

I experienced the same issue a few years ago with a Data Pump ORA-1555, and the cause was delayed block cleanout. I did a little write-up at the time, which I've included as an attachment.

Chris




________________________________
From: Brad Peek <***@yahoo.com>
To: "Bobak, Mark" <Mark.Bobak-tdDEoEIwgbJWk0Htik3J/***@public.gmane.org>
Cc: Oracle-L List <oracle-l-***@public.gmane.org>
Sent: Friday, July 15, 2011 19:14:31
Subject: Re: Snapshot too old from READ-ONLY table (data pump export)


I should have mentioned that the tablespace was placed in read-only mode over
two weeks ago. The explanation in the Tom Kyte article that you referenced
leads me to think that would be enough time to avoid still being affected by the
delayed block writes.

Sent from my iPad


On Jul 15, 2011, at 12:59, "Bobak, Mark" <Mark.Bobak-tdDEoEIwgbJWk0Htik3J/***@public.gmane.org> wrote:


Hi Brad,
Post by Bobak, Mark
http://asktom.oracle.com/pls/asktom/f?p=100:11:0::::P11_QUESTION_ID:895410916429
Hope it helps,
-Mark
Behalf Of Brad Peek
Sent: Friday, July 15, 2011 1:25 PM
To: Oracle-L List
Subject: Snapshot too old from READ-ONLY table (data pump export)
******************************************************************************
/u01/app/datapump_dir/expdp_mic_air_sp_send_01.dmp
/u01/app/datapump_dir/expdp_mic_air_sp_send_02.dmp
Job "SYSTEM"."SYS_EXPORT_TABLE_01" completed with 1 error(s) at 06:09:00
---------------------
$ cat expdp_mic_air_sp_send.parfile
tables=mic_air.sp_send
directory=data_pump_dir
dumpfile=expdp_mic_air_sp_send_%U.dmp
filesize=10737418240
logfile=expdp_mic_air_sp_send.log
estimate=statistics
compression=all
---------------------------------
Fri Jul 15 06:08:47 2011
SELECT * FROM RELATIONAL("MIC_AIR"."SP_SEND")
Note that in the above output from the instance alert log, the statement is
referencing a function named RELATIONAL. That could be a clue, but I don't
think I have seen that before (must be a data pump thing).
Tanel Poder
2011-07-18 21:33:35 UTC
Permalink
Very nice explanation!

--
Tanel Poder
http://blog.tanelpoder.com
Jan-Hendrik Boll
2011-07-19 07:47:43 UTC
Permalink
Hi,

I have one question about what happens after the ORA-01555:
If I am understanding it correctly: the information stored in the block header
says "uncommitted changes"; Oracle tries to read the
information from undo, which leads us to ORA-01555.

And what happens next to our changed block?
Is Oracle performing a rollback on the uncommitted blocks? (Wouldn't that lead
to ORA-01555 too?)

Regards
Jan-Hendrik Boll
L***@public.gmane.org
2011-07-19 08:36:24 UTC
Permalink
Since the question was raised again, does anyone know the latest,
up-to-date explanation of how Oracle multi-versioning works?

ITLs, commit SCNs, the row lock byte all put together and explained :)

The problem is that many very clever people reverse engineered one or
another aspect of the process, but one can hardly find an all-out
explanation.
I dare to claim that some statements one finds on the internet, made by even
experienced Oracle gurus, are doubtful to say the least.
Actually, Oracle's own Metalink is known to provide hmmm.. strange
statements. That happens.
The whole multi-versioning story is easy to explain at a high level, but the
implementation is quite tricky.

Why is the implementation that important? Because knowing how it works, one can
answer right away some questions about how fast Oracle can perform in particular
situations.
It's less of a black-box approach and more of algorithm analysis then.


Thank you in advance,
Laimis N

--
http://www.freelists.org/webpage/oracle-l
Ron Chennells
2011-07-19 08:58:28 UTC
Permalink
Some details here

http://arup.blogspot.com/2011/01/more-on-interested-transaction-lists.html

Ron
http://www.linkedin.com/in/ronchennells



L***@public.gmane.org
2011-07-19 11:05:20 UTC
Permalink
Post by Ron Chennells
http://arup.blogspot.com/2011/01/more-on-interested-transaction-lists.html
Exactly.
But I had to add this to

http://avdeo.com/2008/06/16/interested-transaction-list-itl/

to

https://richardfoote.wordpress.com/2010/07/20/index-block-dump-block-header-part-i-editions-of-you/

and

https://richardfoote.wordpress.com/2010/07/28/index-block-dump-block-header-part-ii-and-read-consistency-i-cant-read/

to Jonathan Lewis's comments at:

http://forums.oracle.com/forums/thread.jspa?messageID=3725660

Jonathan is probably the first in the above-mentioned links to talk about
the COMMIT SCN, not just the CHANGE SCN - who cares about the change SCN anyway?
But he makes one interesting statement:

"When all the ITL entries show SCNs no later than your query SCN the clone
is read-consistent and the data can be used by the query."

I'd like to know what was meant by that. My interpretation is:

one can reconstruct a consistent read of a particular row at the query start
SCN, or one can reconstruct a consistent read of the whole block at the query
start SCN.

As Oracle operates "in blocks", I assume this is how Oracle
reconstructs the consistent read of a block: ALL ITLs have to be dealt with
to have every row in the block rolled back to the last commit SCN before the
query start SCN.

and finally

http://forums.oracle.com/forums/thread.jspa?threadID=521913

Mark J Bobak tells what is really needed to complete the picture:

"Then, at the row level, in the row directory, there's a lock byte. The
value of the lock byte will be either 0, if there is no lock on the row, or
it will be the number of the slot in the ITL that corresponds to the
transaction that's making a change, or just holding a lock, on that
particular row. So, the locked row in the block points to the ITL slot, and
the ITL slot points back at the undo segment/slot/sequence. "

Here, IMHO, the structure is complete:

Row Lock byte points to ITL, ITL points to transaction commit SCN and undo
record.

Then everything adds up: row level locking, transaction isolation, all
ITL's traversal.

The block oriented nature of oracle is explained by Tom Kyte:

http://asktom.oracle.com/pls/apex/f?p=100:11:0::::P11_QUESTION_ID:27330770500351
"o we cache blocks
o we read blocks
o blocks have transaction tables on them, this is what drives the
multi-versioning

relational databases tend to do things "to blocks", it is basically the way
they work.

We do not multi-version ROWS.
We multi-version BLOCKS.
"


Quite a list to get a somewhat complete picture :), isn't it?


Brgds, Laimis N
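As a toy model (not Oracle's actual on-disk structures), the pieces above — ITL slots carrying commit SCNs, row lock bytes pointing at ITL slots, and Jonathan's rule that a block clone is read-consistent "when all the ITL entries show SCNs no later than your query SCN" — can be sketched as:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class ItlEntry:
    """One Interested Transaction List slot in a block header (toy model)."""
    commit_scn: Optional[int]  # None => the transaction is still active

@dataclass
class Block:
    itl: list   # list[ItlEntry]
    rows: dict  # row_id -> (value, lock_byte); lock_byte 0 = unlocked,
                # otherwise the 1-based ITL slot holding the lock on that row

def is_read_consistent(block: Block, query_scn: int) -> bool:
    """A block clone can serve a query as-is only when every ITL entry is
    committed with an SCN no later than the query's snapshot SCN; otherwise
    undo must be applied (and, if the undo is gone, ORA-01555 results)."""
    return all(e.commit_scn is not None and e.commit_scn <= query_scn
               for e in block.itl)

# Both transactions committed before our query started (query SCN = 100):
clean = Block(itl=[ItlEntry(90), ItlEntry(95)], rows={1: ("a", 0)})
# One transaction committed *after* the query SCN; its row still points
# at ITL slot 2, so a rollback via undo would be required:
stale = Block(itl=[ItlEntry(90), ItlEntry(120)], rows={1: ("b", 2)})

print(is_read_consistent(clean, 100))  # True
print(is_read_consistent(stale, 100))  # False
```

This is only the bookkeeping, of course — the real work (finding the undo record each ITL slot points to and rolling the clone back) is what the rest of the thread discusses.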
r***@public.gmane.org
2011-07-19 13:12:16 UTC
Permalink
I would also be interested to know if rowdependencies changes this in
anyway

Tanel Poder
2011-07-20 08:58:33 UTC
Permalink
Post by L***@public.gmane.org
Since the question was raised again, does anyone know the latest,
up-do-date explanation on how oracle multi versioning works.
If you can wait until November, then Jonathan Lewis's new book will be the
best book ever to explain this :-)

--
Tanel Poder
http://blog.tanelpoder.com
Stephane Faroult
2011-07-19 10:59:51 UTC
Permalink
Post by Jan-Hendrik Boll
Hi,
If I am understanding it correctly: the information stored in the
block header says "uncommitted changes"; Oracle tries to read the
information from undo, which leads us to ORA-01555.
And what happens next to our changed block?
Is Oracle performing a rollback on the uncommitted blocks? (Wouldn't
that lead to ORA-01555 too?)
Regards
Jan-Hendrik Boll
ORA-01555 isn't a problem due to uncommitted data. It's linked to
committed data. What happens is that as long as a change is uncommitted,
the previous value is kept in the rollback segments (undo) - as long
as you don't run out of undo space, everything is fine.
Trouble starts when a change is committed, because then Oracle no longer
feels compelled to keep previous values. It will try to keep them as long
as possible, but not forever. Once a change is committed, you can no
longer roll back; from then on, previous values can only be needed for
giving a consistent view to a concurrent query that started
before the transaction was committed (and therefore at a time when it
couldn't see the change) and that must be kept in blissful ignorance
of the change until it has returned its very last row (this is the basic
principle of read consistency in Oracle - whatever the duration of a
query, it must "see", for as long as it runs, the database in the very same
state it could see when it started running). I am ignoring
flashback queries here, but you could extend the reasoning.
It's a long query running while many, many changes occur AND ARE
COMMITTED that has a problem and gets ORA-01555: because a large
number of changes requires massive amounts of undo, Oracle will happily
reuse the undo space of transactions that have been committed, even if a
SELECT still needs that data for consistency. The SELECT will abort; if it
merely had the mishap of running during a burst of changes to the database,
running it again will hopefully complete successfully. If not,
review your code.

HTH,
--
Stephane Faroult
RoughSea Ltd <http://www.roughsea.com>
Konagora <http://www.konagora.com>
RoughSea Channel on Youtube <http://www.youtube.com/user/roughsealtd>
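Stephane's scenario — a long query outliving the undo of *committed* changes — can be sketched with a toy bounded undo buffer. This is purely illustrative (Oracle's undo management is far more elaborate); the class and exception names are invented for the sketch:

```python
from collections import OrderedDict

class SnapshotTooOld(Exception):
    """Toy analogue of ORA-01555."""

class ToyUndo:
    """Undo modeled as a bounded buffer: committed before-images are kept
    as long as possible, but evicted oldest-first when space runs out."""
    def __init__(self, capacity: int):
        self.capacity = capacity
        self.records = OrderedDict()  # scn -> before-image

    def save(self, scn: int, before_image: str) -> None:
        # Committed undo is reusable: evict the oldest record when full.
        if len(self.records) >= self.capacity:
            self.records.popitem(last=False)
        self.records[scn] = before_image

    def before_image_at(self, scn: int) -> str:
        if scn not in self.records:
            raise SnapshotTooOld(f"undo for SCN {scn} was overwritten")
        return self.records[scn]

undo = ToyUndo(capacity=3)
undo.save(10, "v1")            # a long query starts; it will need SCN 10
for scn in (20, 30, 40):       # a burst of committed changes...
    undo.save(scn, f"v{scn}")  # ...reuses undo space, evicting SCN 10

try:
    undo.before_image_at(10)   # the long query asks for its snapshot
except SnapshotTooOld as e:
    print("ORA-01555 analogue:", e)
```

The fix in the toy model is the same as in real life: more capacity (undo retention/size) or a shorter-running reader.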
Saibabu Devabhaktuni
2011-07-16 19:03:45 UTC
Permalink
Which version of Oracle are you using?

When the tablespace is in read-only mode, delayed block cleanout is performed, but the dirty buffers will not be written to disk. The same is not true for read-only tables.
More on this at http://sai-oracle.blogspot.com/2010/05/read-only-tables-in-11g-is-not-fully.html

If you were to select the records from the block that got ORA-1555 using a rowid predicate instead, you wouldn't have gotten ORA-1555. Oracle is trying to update the ITL in the block header with the minimum SCN of all the transactions available in the undo segments; since your Data Pump export is running longer than the undo retention, you are getting this error. Try running the export with parallel threads.

There is a concept of "min active scn" introduced in 11g which is supposed to enhance delayed block cleanout operations.

But in reality, Oracle shouldn't even need to do delayed block cleanout on *sufficiently* old blocks (as determined by the min active SCN, which is supposed to be the minimum SCN of all active transactions). I opened an enhancement request on this last year; Oracle Support opened bug# 9941362 for this issue (hopefully they fix it in 12g).

Thanks,
 Sai
http://sai-oracle.blogspot.com
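Sai's suggestion of parallel threads maps to the PARALLEL Data Pump parameter. A sketch of the thread's original parfile with it added — the degree of 4 is illustrative, and the existing %U dumpfile template already supports multiple workers:

```
tables=mic_air.sp_send
directory=data_pump_dir
dumpfile=expdp_mic_air_sp_send_%U.dmp
filesize=10737418240
logfile=expdp_mic_air_sp_send.log
estimate=statistics
parallel=4
compression=all
```

Parallel workers shorten the elapsed time of the unload, which shrinks the window during which the job's reads can outlive the undo retention.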
L***@public.gmane.org
2011-07-19 11:19:07 UTC
Permalink
p.s. I've put everything into a blog post

http://laimisnd.wordpress.com/2011/07/19/reverse-engineering-consistent-reads-multi-versioning-itl-ora-1555-row-level-locks-concurency/

Currently, the data structure seems to be quite complete.

What is lacking is a better understanding of how Oracle reconstructs a
consistent block image.
I believe Jonathan Lewis nailed the point:

"When all the ITL entries show SCNs no later than your query SCN the clone
is read-consistent and the data can be used by the query."

Again, one may ask - why is all this important? Well, it makes quite a difference
in CPU, disk I/O and storage (undo included, hence ORA-1555) whether one ITL
or ALL ITLs have to be processed, right?

More ITLs decrease ITL contention but potentially (with many concurrent
transactions) increase CPU/IO (?)

Then one may wonder whether the extreme case of one row per block may be good
under certain circumstances?

Brgds, Laimis N
Dunbar, Norman (Capgemini)
2011-07-21 12:00:18 UTC
Permalink
Hi JP,
Can you name the book, Tanel? I don't find any new book
mentioned in JL's website/blog.
Does this link help?

http://jonathanlewis.wordpress.com/2011/06/17/author-author/


Cheers,
Norm.

Norman Dunbar
Contract Senior Oracle DBA
Capgemini Database Team (EA)
Internal : 7 28 2051
External : 0113 231 2051


Prem
2011-07-21 11:39:49 UTC
Permalink
If you can wait until November, then Jonathan Lewis's new book will be the best book ever to explain this >> Tanel Poder
Can you name the book, Tanel? I don't find any new book mentioned in JL's website/blog.

~ JP ~
Prem
2011-07-21 12:21:27 UTC
Permalink
Post by Dunbar, Norman (Capgemini)
Does this link help?
http://jonathanlewis.wordpress.com/2011/06/17/author-author/
Thanks Norman :-)

~ JP ~
