ORA-01466: Unable to Read Data - Table Definition Has Changed
The nightly backup job you have scheduled for your database, the Data Pump export job, starts failing with the following error:
ORA-31693: Table data object "HRAPP"."REPORT_TABLE" failed to load/unload and is being skipped due to error:
ORA-02354: error in exporting/importing data
ORA-01466: unable to read data - table definition has changed
The first time this happens you don't think much of this error; you just re-run the job during the day. As expected, the job completes successfully, and you file the incident as a one-time-only occurrence.
Then a week passes, and the export job fails again: same error message, but a different table. This export job failure always seems to happen on the same day, and it alternates between a few tables. Now you start getting curious to figure out what is causing the error and how to fix it.
There are two things to figure out in order to find the solution. We need to know what the cause of the error is, and under what circumstances the error occurs.
Let's tackle the first question. When does this error occur?
There are two conditions for this error to occur:
1) The export Data Pump job is running with the FLASHBACK_SCN parameter in the parameter file. Using this parameter means that the export Data Pump operation is performed with data that is consistent up to the specified SCN (see the parameter file sketch after this list).
2) The table specified in the first line of the error (in our case HRAPP.REPORT_TABLE) is changed during the export Data Pump job. Usually the table definition is changed by a TRUNCATE command while the export is running.
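For illustration, a minimal expdp parameter file that sets up the first condition might look like the sketch below. The directory, dump file, and schema values are assumptions for this example, not values taken from the failing job; the SCN itself is typically captured just before the export starts, from V$DATABASE:

# expdp parameter file (illustrative values only)
directory=DATA_PUMP_DIR
dumpfile=hrapp_export_%U.dmp
logfile=hrapp_export.log
schemas=HRAPP
flashback_scn=1234567890

-- run beforehand to capture the SCN used above
select current_scn from v$database;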
Let's answer the second question. Why does this error occur?
The error occurs because the LAST_DDL_TIME on the table is newer than the FLASHBACK_SCN translated into a time. In other words, the LAST_DDL_TIME falls after the FLASHBACK_SCN but before the export Data Pump job completed.
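You can verify this yourself by translating the SCN into a time with the built-in SCN_TO_TIMESTAMP function and comparing the result to the table's LAST_DDL_TIME. The SCN below is a placeholder for your job's FLASHBACK_SCN; note that SCN_TO_TIMESTAMP only maps SCNs that are still within the database's retention window, so run it soon after the failure:

-- translate the export's FLASHBACK_SCN into a timestamp
select scn_to_timestamp(1234567890) as flashback_time from dual;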
Here is a quick diagram to better understand what is going on:
Now that we know when and why the error occurs, there are two options to fix it:
1) Remove the FLASHBACK_SCN parameter from the expdp parameter file, and run the export without it. I do not recommend this method, as your export might not be consistent: data could change during the export, making the dump file useless.
2) Figure out what is causing the table definition change. Is there another job that interferes with the export job? Check what jobs you have scheduled in the database, in the crontab, or in any other scheduler that you are using.
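A good starting point is the database scheduler itself. A query like the sketch below, against the standard DBA_SCHEDULER_JOBS view, lists the jobs and their schedules so you can spot anything running in the export window; crontab -l on the database server shows OS-level jobs:

-- jobs known to the database scheduler, with their schedules
select owner, job_name, repeat_interval, last_start_date
from dba_scheduler_jobs
order by owner, job_name;

$ crontab -l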
Verify the exact time the DDL operation occurred on the table(s):
select object_name, to_char(last_ddl_time,'DD-MON-YY HH24:MI:SS') as time
from dba_objects
where owner='HRAPP' and object_name='REPORT_TABLE';

OBJECT_NAME               TIME
------------------------- ----------------------
REPORT_TABLE              01-AUG-19 20:25:00
In the case I was troubleshooting, the export job was running between 20:05 and 20:40. The above statement confirmed that the table was modified while the export was running, at 20:25.
If you identified the job that truncates the table, and are able to modify its run time, you can reschedule that job to run at a different time. If you do not have access to the job, or were not able to identify it, you can reschedule the export job instead. The latter might be easier, simply because as the DBA you have more control over it.
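If the conflicting job lives in the database scheduler, rescheduling it is a single DBMS_SCHEDULER call. The job name and new schedule below are made up for the example; substitute your own:

begin
  -- move the (hypothetical) truncate job out of the 20:05-20:40 export window
  dbms_scheduler.set_attribute(
    name      => 'HRAPP.TRUNCATE_REPORT_JOB',
    attribute => 'repeat_interval',
    value     => 'FREQ=DAILY; BYHOUR=22; BYMINUTE=0');
end;
/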
If you enjoyed this article, and would like to learn more about databases, please sign up below, and you will receive The Ultimate 3 Step Guide To Find The Root Cause Of The Slow Running SQL!
Source: http://dbaparadise.com/2019/08/what-to-do-when-the-export-fails-with-ora-01466/