PL/SQL: deleting many rows (and how many rows were deleted)

Here is the list, from slowest to fastest (method and comments were in italics in the original):

1. Delete rows from a bulk collect, issuing multiple commits until the deletion is exhausted.
2. Delete the rows, then issue a single commit.
3. Copy the rows to preserve into a new table; drop the old table; rename the new table to the old name; build the indexes. Only redo logs are generated; this needs half the space for logs.
4. Copy rows, same as 3.
5. Partition the table to issue parallel deletes, one per partition.
6. Partition the table; drop the old partition and create a new partition for new rows in every purge exercise. This is the most efficient approach: once implemented, any purge process takes only a few seconds.

Thanks for your attention.

December 10, - pm UTC

Even if this was a couple thousand rows - that is teeny, tiny, trivial.

Reader, December 15, - am UTC
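As a minimal sketch of approach 6, assuming a hypothetical audit_log table purged by month (the table, columns, and date ranges are illustrative, not from the thread):

    CREATE TABLE audit_log (
      log_id     NUMBER,
      logged_at  DATE,
      payload    VARCHAR2(4000)
    )
    PARTITION BY RANGE (logged_at) (
      PARTITION p_2024_01 VALUES LESS THAN (DATE '2024-02-01'),
      PARTITION p_2024_02 VALUES LESS THAN (DATE '2024-03-01')
    );

    -- purge: drop the oldest partition (no row-by-row undo, runs in seconds)
    ALTER TABLE audit_log DROP PARTITION p_2024_01;

    -- add a partition to receive new rows
    ALTER TABLE audit_log ADD PARTITION p_2024_03
      VALUES LESS THAN (DATE '2024-04-01');

The drop is DDL, so it never touches the rows individually; that is why it is so much faster than any DELETE.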

Both tables are daily partition tables. They can have data from ...; I am only concerned with data after ... This runs fine if there are fewer records, but if I have k records, for example, it takes a lot of time to complete. Can you advise what I should do to accomplish this task?

December 15, - pm UTC

Explain your thinking on that one please?

Tom, questions here: 1) in the open cursor loop fetch... I thought both are necessary. Can you please comment? Not sure how to explain that - can you please help?

This route takes about 2 hrs and change, but doing bulk deletes it averages about less. So it seems recreating is bad compared to bulk deletes - is this correct? Merry Christmas!!!

December 31, - pm UTC

Just one delete - and NO commit; the commit is something only the client knows when it is appropriate.

The commit 1 you have commits before the transaction is complete. What happens if the instance fails after the first commit and before the last?

Your database is in an unknown state. In the acceptable cases, I would say: use partitioning and don't delete - just truncate an old partition. But if you have commit 1, there is no need for commit 2 - think about it. In fact, it could be that commit 1 is done once too many times (not too few - too many!).

What happens if your cursor returns rows and then no more? Unless you are using a unique key, primary key, or rowid in your where clause, the delete could delete 0, 1, or 1,, or more! I would not expect rows deleted; I would expect somewhere between 0 and infinity. Now, in your case, you are removing a small amount of data from a big amount of data. In this case, a single delete would probably outperform all of them; procedural code would come in second.
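For illustration, the "single delete" is just one statement - and since this thread's title asks how many rows a PL/SQL delete removed, SQL%ROWCOUNT answers that. The table name and predicate below are assumptions, not from the thread:

    BEGIN
      DELETE FROM big_table
       WHERE created_dt < ADD_MONTHS(SYSDATE, -36);  -- hypothetical purge predicate
      DBMS_OUTPUT.PUT_LINE(SQL%ROWCOUNT || ' rows deleted');
      COMMIT;  -- one commit, once the transaction is complete
    END;
    /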

Which is the better option? For the same or larger volume of data (same or larger than WHAT exactly????), that would be best. Purging via a where clause is so - well...

Tom, thanks very much for your time and for enlightening us. We have: Oracle 9. The method being tried (in a test env first): a) mark the rows to be deleted. The default value of the "dev" column is NULL. We use the following function to set the "dev" column to 1... Kind regards.

I would suggest erasing all of the code you wasted time on and just doing the delete. That is my suggestion. It'll be much much much faster, generate less undo and redo in total, and be quite easy. I'll never get why people do this - make the simple so very, very complex.

Further: a) Do we need to reorganise the table with every 'phase' of deletion? How would the total number of rows deleted in multiple passes using code compare with the number of rows deleted in a single pass?

February 01, - am UTC

Let's say that table is 16,, rows, and it has an average row width of bytes. That table is about 3. Done - the math is simple. Now, assuming that every other row is to be deleted (a perfectly even distribution of the data), you'll read 1,, rows the first time to delete , - or about mb.

You'll read ... If you are going to use this bad approach - please use rowid ranges or something; search for "do it yourself parallelism" (DIY parallelism) on this site to at least avoid that heinous overhead.

A reader, February 01, - am UTC

As regards the bitmap index on dev1, I think you will be hitting ORA-00060 "deadlock detected while waiting for resource" if you create this type of index while your table is subject to multiple and concurrent DML operations. Regards, Mohamed Houri
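On 11g and later, the rowid-range technique is packaged in DBMS_PARALLEL_EXECUTE. A sketch, assuming the same hypothetical big_table and predicate as above:

    DECLARE
      l_task VARCHAR2(30) := 'purge_big_table';
      l_sql  VARCHAR2(200) :=
        'DELETE FROM big_table
          WHERE rowid BETWEEN :start_id AND :end_id
            AND created_dt < ADD_MONTHS(SYSDATE, -36)';
    BEGIN
      DBMS_PARALLEL_EXECUTE.CREATE_TASK(l_task);
      -- carve the table into rowid ranges of roughly 10,000 rows each
      DBMS_PARALLEL_EXECUTE.CREATE_CHUNKS_BY_ROWID(
        task_name   => l_task,
        table_owner => USER,
        table_name  => 'BIG_TABLE',
        by_row      => TRUE,
        chunk_size  => 10000);
      -- run the delete against each chunk, four jobs at a time
      DBMS_PARALLEL_EXECUTE.RUN_TASK(
        task_name      => l_task,
        sql_stmt       => l_sql,
        language_flag  => DBMS_SQL.NATIVE,
        parallel_level => 4);
      DBMS_PARALLEL_EXECUTE.DROP_TASK(l_task);
    END;
    /

Each chunk commits independently, so a failure leaves resumable, well-defined work rather than one giant in-flight transaction.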

Sorry, my question wasn't clear previously. The table was designed a long time ago and data has been stored in it since then; we can't change it now. So each partition has around 4. When we do the deletion by partition, with a where clause, will the undo tablespace need to store the before image of the targeted partition, or of the whole huge table?

February 15, - pm UTC

If you delete k rows, you will need undo to hold k rows - a delete puts the before image into undo; the entire row image will be recorded there.
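The undo consumption of a running delete can be watched live in the standard v$transaction view, joined to v$session in the usual way:

    SELECT t.used_ublk AS undo_blocks,
           t.used_urec AS undo_records
      FROM v$transaction t
      JOIN v$session s ON s.taddr = t.addr
     WHERE s.sid = SYS_CONTEXT('USERENV', 'SID');

used_urec counts the undo records held by the transaction - for a delete, roughly one per deleted row (plus index entries).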

A reader, July 19, - am UTC

The best method to clean up hundreds of GBs of data is through CTAS (create table as select), to ensure only the needed data is retained, followed by dropping the old table and renaming the new table to the original name. It also reclaims space. I don't want to lose constraints and indexes during this operation. Am I right?

July 19, - am UTC

However, it is just about the hardest method going when compared to ..., and it would be very very very slow when compared to ... Unfortunately, it is not partitioned.
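A minimal sketch of the CTAS-and-rename method, with hypothetical names and dates; note that constraints, indexes, grants, and triggers do not come along and must be recreated by hand:

    -- keep only the rows you want, without logging, in parallel
    CREATE TABLE orders_keep NOLOGGING PARALLEL 4
    AS
    SELECT * FROM orders WHERE order_date >= DATE '2010-01-01';

    DROP TABLE orders;
    ALTER TABLE orders_keep RENAME TO orders;

    -- recreate indexes, constraints, grants, triggers
    CREATE INDEX orders_ix1 ON orders (order_date) NOLOGGING PARALLEL 4;

    -- NOLOGGING skipped redo, so back up the affected datafiles afterwards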

Probably we could gain by implementing partitioning to make such operations easier in the future. Please advise whether this is fine for a non-partitioned table.

Tom, the explanation of the approach for doing the deletes in batches using rownum makes sense. It will have to scan the table again, causing each iteration of the loop to take longer to run. However, we've seen that if we kill the stored procedure after a while and then restart it, the iterations run faster than they did before we killed it. That seems contradictory, since rerunning the stored procedure should still need to read all of the data that's already been processed, as it did before.

Or could it potentially read in a batch faster because the order the data is accessed in is not guaranteed, so by blind luck the process grabs the first batch quickly?

January 19, - pm UTC

I'd need an example to explain what might be happening - at least a pretty comprehensive description of the entire setup, query plans and such.

Rick, January 20, - am UTC

Ok, let me get some information together and see if I can quantify what they're seeing.
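The batched-rownum delete under discussion presumably looks something like this (table, predicate, and batch size are assumptions, not from the thread). Every pass re-scans from the start of the table to find the next batch, which is why iterations slow down as the surviving rows thin out:

    BEGIN
      LOOP
        DELETE FROM orders           -- hypothetical table
         WHERE status = 'PROCESSED'  -- hypothetical purge predicate
           AND ROWNUM <= 10000;      -- batch size
        EXIT WHEN SQL%ROWCOUNT = 0;  -- nothing left to purge
        COMMIT;                      -- the per-batch commit in question
      END LOOP;
      COMMIT;
    END;
    /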

Deleting from a partitioned table based on a non-partition-key column
Aditi, April 25, - am UTC

So in short we want something like: DELETE FROM table t WHERE t. ... Do you think enabling row movement would be a good idea here? Do you think it's a good approach, or do you have a better idea? Also, correct me if I am wrong, but parallel DML and disabling the indexes would prove beneficial in this case.

Currently there are 7 local indexes on t.

April 25, - am UTC

Delete works great for a row; for a mass purge, delete consumes the most resources. Use DDL for this mass purge. You'll end up with a clean data structure - nicely packed indices - and can do it using parallel operations that will not generate any redo or undo (redo generation is optional for you; if you do not generate redo, please do schedule a backup of the affected datafiles right after you are done).

I read the entire discussion.
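One way to read "use DDL for this mass purge" on a partitioned table is a per-partition CTAS plus partition exchange; this is a sketch under assumed names (table t, partition p_2011_01, a hypothetical purge_flag column), not code from the thread:

    -- per partition: CTAS the survivors, then swap them in
    CREATE TABLE t_keep NOLOGGING PARALLEL 4
    AS
    SELECT * FROM t PARTITION (p_2011_01)
     WHERE purge_flag IS NULL;  -- rows to retain (hypothetical predicate)

    ALTER TABLE t EXCHANGE PARTITION p_2011_01 WITH TABLE t_keep
      WITHOUT VALIDATION;       -- instant swap: t_keep now holds the purged rows

    DROP TABLE t_keep;          -- or keep it as the archive
    -- local indexes on the exchanged partition are marked unusable unless the
    -- staging table carried matching indexes; rebuild them as needed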

It cleared most of my doubts, but I still have a few questions. Our client is using an Oracle 11gR2 database. The client has a million-record table containing 11 years of data. That table is partitioned by month. There are 4 global indexes on that table. The client wants to archive and purge 6 years of data based on some business validation. These 6 years of data will be read-only data. The minimum row count among these 72 partitions is 1 million and the maximum is 5 million.

We are planning an archiving and purging strategy in steps (we take care of append, unrecoverable, nologging, parallelism, and backup wherever needed): we identify records that can be deleted based on id and the date before which they can be deleted per the business validations, and we put the ids, based on dates, into each partition. The above 4 steps we can do in online mode while transactions are going on.

Now comes the actual deletion part. Then we have to validate the global indexes. For step 5, we are thinking that it requires downtime. My two questions are: 1. Is a 12 hr downtime sufficient? 2. What happens to the DR site during step 5 - will the main database site be affected performance-wise because of this?

DR site replication is via Data Guard.

May 06, - pm UTC

But I don't see how you can do the steps in an online fashion?

I said "These 6 years of data will be read only data" because ours is a banking scenario - millions of end users will be doing transactions per day. I read the link you provided. It says: "When you perform DDL on a table partition, if an index is defined on the table, then Oracle Database invalidates the entire index, not just the partitions undergoing DDL.

This clause lets you update the index partition you are changing during the DDL operation, eliminating the need to rebuild the index after the DDL." However, I need the downtime because the global index will be invalidated, and before validating it the users might fire queries on this table, which would then go for FTS; CPU consumption might drastically increase and bring the system to a halt.

Going for the UPDATE INDEXES clause, I am thinking I can take a maximum downtime of only 4 hours. What is your take on that?

You wrote: "The client wants to archive and purge 6 years of data based on some business validation." You had better make sure the 72 partitions you are doing this to are in fact read-only. That is the advantage.

Who cares if it takes longer if the end users have continuous access? That is the point: the index is maintained during the partition operation.
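The clause in question is standard Oracle syntax; the table and partition names here are hypothetical:

    ALTER TABLE txn_history
      DROP PARTITION p_2005_01
      UPDATE GLOBAL INDEXES;  -- global indexes stay valid; no rebuild, no downtime

The trade-off is exactly as described above: the drop itself takes longer, because the global index entries are maintained inline, but queries never see an unusable index.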

If you use update indexes - you will have 0.

I got exactly the solution I was looking for: using "

A reader, May 08, - pm UTC

Hi Tom, per your first reply, by cases 2 and 3 you mean the design from the very beginning should consider partitioning, right? If the table is already in production without partitioning, then partitioning it and doing either 'parallel delete' or 'partition drop' will not solve the issue, as 'partitioning the existing table might take some time', right?

May 10, - am UTC

Hi Tom, you are the best, all the time! I followed your second recommendation of partitioning tables to facilitate mass deletions, as this choice provides some level of transparency to our applications.

However, my first attempt did not seem to give a result as impressive as I expected. Did I do something wrong here?

Or is 20 minutes about the best I can get? Thanks for your insights!

December 17, - pm UTC

You would partition so you can drop or truncate a partition.

Hi Tom, thanks for making this very clear. I've found it helpful to split my table into more than 2 partitions - actually 8, one for each possible value of the ACTION column. Now I have to truncate 2 partitions instead of just one, and rebuild the indexes afterwards.
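The two-partition truncate might look like this (table and partition names are assumptions); the UPDATE INDEXES clause keeps global indexes usable at the cost of a slower truncate:

    -- truncate the two purgeable partitions, maintaining global indexes inline
    ALTER TABLE orders_hist TRUNCATE PARTITION p_action_a UPDATE INDEXES;
    ALTER TABLE orders_hist TRUNCATE PARTITION p_action_b UPDATE INDEXES;

    -- alternative: truncate without the clause, then rebuild affected indexes
    -- ALTER INDEX orders_hist_ix REBUILD;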

Best, Brian.

December 20, - pm UTC

You could probably remove the index - if the index was only on ACTION.

Each ID has about rows. What is the right way of doing this?

July 16, - pm UTC

You really want to do 12,, single block reads??? I'd rather full scan - or, hopefully, the table is partitioned and you can scan just a single partition, maybe?

Yes, full table scans are awesome. But here is the issue: we have a product which works on both SQL Server and Oracle.

I am an Oracle guy just trying to learn SQL Server. Whether it is deletes, selects, or creating CSV files from the database, SQL Server beats Oracle hands down on performance. Our QA uses the same hardware for testing both systems. I am talking about a minutes-to-hours comparison: SQL Server handles millions of row deletes in a matter of 10 minutes, whereas on the same table, with the same number of rows and the same hardware, Oracle takes more than an hour.

I run the SQL from SQL Developer and the query returns in a few secs. For the delete, both SQL Server and Oracle do a full table scan. I have no answer for our QA people.

July 18, - pm UTC

You haven't run the entire query. I think it is more than enough for the HASH joins etc. For the delete, yes, it is doing a full table scan but takes hours to finish.

Which level of tracing do you recommend for this issue? Thanks, Ravi.

I cannot guess what you mean by things.

Jess, September 05, - pm UTC

Hi Tom, hoping you'll be able to help with fixing a performance problem in deleting rows from a table, in some code that I've just inherited. There is a non-partitioned table of unfulfilled orders holding about 15M wide rows. The data grows about 3M rows a year overall but fluctuates from a daily perspective, e.g. ...

Also, the sort order of the index columns is ascending, which doesn't make sense given that the queries all want descending order. It should matter, no?

Now that we're capping the growth by offloading processed records, does it make sense to stick with an FTS, or would an index now be an option? Now, with a change in the data model, we're removing the filled orders into another table.

Given that each run is expected to delete about K records, that adds up to a long time. What can we do to speed this up? Any tips would be much appreciated.

September 09, - am UTC

And you've just clobbered your buffer cache.
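For the offload-then-delete step described above, a minimal sketch, assuming hypothetical unfulfilled_orders / fulfilled_orders tables and a status column (none of these names are from the thread):

    -- move filled orders out in one transaction
    INSERT /*+ APPEND */ INTO fulfilled_orders  -- direct-path load
    SELECT * FROM unfulfilled_orders
     WHERE status = 'FILLED';                   -- hypothetical predicate

    DELETE FROM unfulfilled_orders
     WHERE status = 'FILLED';

    COMMIT;  -- after a direct-path insert, fulfilled_orders cannot be queried
             -- in the same transaction until this commit

Keeping the insert and delete in one transaction means a failure between them leaves nothing half-moved.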
