Deadlocked!

I believe what we are seeing here is the result of using ORM magic: the resulting pain of trying to debug the SQL the ORM generates.

I’ve never encountered a deadlock situation, but I use my own handwritten sprocs, so I know exactly what the DB is doing at all times.

Jeff, I get that, but I don’t understand why simple read SELECTs need to lock at all. I mean, maybe if that database were storing my bank account balance, in which case transactional ordering is hugely important.

Because the selected data could be useful later during the transaction. That’s why you don’t use a transaction if you don’t care about your selected data.

I’m sure you could set all of your SELECTs to read uncommitted by default, but – really – who cares?

If you have something mission-critical where you can’t afford a dirty read, isolation is good. For the rest of us, NOLOCK is wonderful as long as you know when to apply it.

Playing it safe by default is fine. Forces you to actually think about what you’re doing.

–Kevin

The reason most people recommend accepting dirty reads (with NOLOCK) is that you get better performance; I have not seen whether this has changed with MS SQL 2008. However, don’t use it when the results really matter. For instance, if you are transferring money between two rows and you do a dirty read in the middle, you could count the money twice: once in the original row and again in the row it is being transferred into.
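
To make the double-count concrete, here is a sketch with a made-up Accounts table and two sessions:

-- Session 1: move 100 from account 1 to account 2 (Accounts is hypothetical).
BEGIN TRANSACTION;
UPDATE Accounts SET Balance = Balance + 100 WHERE AccountId = 2;
-- a dirty read landing here is the dangerous window
UPDATE Accounts SET Balance = Balance - 100 WHERE AccountId = 1;
COMMIT TRANSACTION;

-- Session 2: a NOLOCK sum that runs inside that window sees the
-- uncommitted credit on account 2 but not the debit on account 1,
-- so the 100 is counted twice.
SELECT SUM(Balance) FROM Accounts WITH (NOLOCK);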

As for Oracle, by default it only reads information that was committed at the time the statement started running.

Re: row-level locks on UPDATE statements. I can demonstrate a scenario where this consistently and reliably doesn’t work - it is a very small table, and the WHOLE table ALWAYS gets locked. Microsoft’s semi-official response was (paraphrased) that row locking on update is a hint to the server, not an instruction.
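
For reference, the hint in question looks like this (the table name is made up); even with it, the engine is free to take a page or table lock on a tiny table, exactly as described above:

UPDATE TinyLookup WITH (ROWLOCK) -- ROWLOCK is advisory, not binding
SET Label = 'updated'
WHERE Id = 42;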

I switched to using (nolock) years ago – I would say that it is the right choice at least 90% of the time.

One should build one’s application correctly first, then optimize. The database’s default behavior is paranoid locking for banking-grade correctness; this is the sane default. In a web messaging app, slightly stale data is perfectly acceptable, but for banking, stale data is a fatal error.

Just thinking of DB deadlocks (Oracle background): the only time I’ve seen them is when there are multiple statements in a transaction.

Like…
BEGIN TRANSACTION;
INSERT INTO Orders (CustomerId, Total) VALUES (42, 19.95); -- hypothetical tables
UPDATE Customers SET OrderCount = OrderCount + 1 WHERE CustomerId = 42;
COMMIT TRANSACTION;

I’ve not worked sufficiently with SQL Server or AutoCommit to know if this could happen.

Just something to think about.

I’ve got a couple quick thoughts flying around in my head…

OK, but why doesn’t SQL Server re-issue the transaction on my behalf, then?
I can think of many ways to argue this point, but really, it comes down to practice… If you know the problem and solution, why not just implement the code to test for that condition and re-submit the batch when it occurs? It’s not rocket science… in fact, it’s good practice for any data access layer.
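
A sketch of that retry pattern in T-SQL (the table and statement are placeholders); a deadlock victim gets error 1205, which is safe to retry:

DECLARE @retries INT;
SET @retries = 3;
WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRANSACTION;
        -- the real work goes here; this UPDATE is hypothetical
        UPDATE Posts SET ViewCount = ViewCount + 1 WHERE Id = 12345;
        COMMIT TRANSACTION;
        BREAK; -- success, stop retrying
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRANSACTION;
        IF ERROR_NUMBER() = 1205 -- this session was chosen as the deadlock victim
        BEGIN
            SET @retries = @retries - 1;
            WAITFOR DELAY '00:00:00.100'; -- brief back-off before retrying
        END
        ELSE
        BEGIN
            DECLARE @msg NVARCHAR(2048);
            SET @msg = ERROR_MESSAGE();
            RAISERROR(@msg, 16, 1); -- not a deadlock: surface the original error
            BREAK;
        END
    END CATCH
END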

Instead of relying on a whole database change or dealing with (nolock) on lots of tables, have you considered using SET TRANSACTION ISOLATION LEVEL on the problematic queries? For example, SET TRANSACTION ISOLATION LEVEL SNAPSHOT: Except when a database is being recovered, SNAPSHOT transactions do not request locks when reading data. SNAPSHOT transactions reading data do not block other transactions from writing data. Transactions writing data do not block SNAPSHOT transactions from reading data.
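
One caveat: snapshot isolation has to be enabled at the database level before that SET will work. A minimal sketch (database and table names are placeholders):

-- One-time setup:
ALTER DATABASE MyForumDb SET ALLOW_SNAPSHOT_ISOLATION ON;

-- Then, per problematic session:
SET TRANSACTION ISOLATION LEVEL SNAPSHOT;
BEGIN TRANSACTION;
SELECT Body FROM Posts WHERE Id = 12345; -- reads row versions, takes no shared locks
COMMIT TRANSACTION;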

Maybe not applicable in your specific case, but just for completeness, I want to mention using SET DEADLOCK_PRIORITY in some instances… maybe where the read is killing the write.
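
Something like this, marking the long-running read as the preferred victim so the write survives (the query is a placeholder):

SET DEADLOCK_PRIORITY LOW; -- if this session deadlocks, it volunteers to lose
SELECT COUNT(*) FROM Posts; -- hypothetical long-running read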

There are a lot of things you can do to troubleshoot the actual cause of the deadlock, and I really hate to see someone change a whole-database setting just because they don’t understand the problem and only want to assuage the symptoms.

What is surprising is that SQL Server thinks there is a deadlock. Why doesn’t it wait for the writer to finish and then just do the read? You’re not even referencing other tables in your UPDATE or doing an INSERT … SELECT.

Anyhow, the ANSI-standard way to do your nolock thing is to set the transaction isolation level, either globally or at the time the transaction starts. (It sounds like SQL Server defaults to SERIALIZABLE for its transactions? That’s a very strange default.)

-Max

Jeff,

I would recommend not using LINQ if you really want to make your site work.
LINQ is a raw and not very reliable technology.

Unfortunately not everything that ScottGu promotes is good enough for production use.

ASP.NET 2.0 was really good. LINQ, Silverlight, and Atlas – not so much.

Jeff, tell us about any foreign keys you have connected to the page table. A lot of the time a deadlock can be caused by SQL Server attempting to reconcile an FK.

Nolock is the ON ERROR RESUME NEXT of SQL. It’s not a good practice to get into unless you understand fully why you’re doing it.

Are your primary keys autonumbers? If so, take a look at how SQL Server treats locking at the page level instead of the row level. If you have a row with id #10 and id #11 is being heavily updated, it’s possible that the updates to id #11 are locking the entire data page. Maybe you should change field names or something and post the entire SQL here. I get the feeling something is being abstracted out. You should only be seeing a deadlock situation in the real world maybe one out of a million transactions. Otherwise, there’s a code problem in there somewhere.

http://msdn.microsoft.com/en-us/library/ms173763.aspx
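
One way to check whether you are actually getting page locks rather than row locks (SQL Server 2005 and up; run this while the workload is active):

SELECT resource_type,   -- KEY = row lock, PAGE = page lock, OBJECT = table lock
       request_mode,    -- S, U, X, ...
       request_status   -- GRANT or WAIT
FROM sys.dm_tran_locks
WHERE resource_database_id = DB_ID();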

It ain’t rocket surgery

Nice pericombobulation there :slight_smile:

I totally agree with Justin on the parallelism business. After turning it off, the performance more than doubled in my system.

In addition, parallelism can also lead to a phenomenon known as CPU starvation. It works like this: let’s say you have 20 different queries in the queue and an 8-CPU box. Typically many of them would execute simultaneously, since they all hit different tables. However, with parallelism, SQL Server breaks up the first query into 7 pieces and sends them off to 7 different CPUs. Now most CPUs are busy, and the ones that have completed processing are waiting for the results from the other CPUs to trickle in.

Meanwhile, the queries in the queue are sitting around waiting. Then, if the queue wait timeout is hit, SQL Server dumps the queries into tempdb, to be retrieved milliseconds later when CPUs are available. So all of a sudden your system is going to hell because of all the writes to tempdb.
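
If you want to experiment with this, parallelism can be limited server-wide or per query; a sketch (the server-wide setting affects everything, so test it first):

-- Server-wide: limit every query to a single scheduler.
EXEC sp_configure 'show advanced options', 1;
RECONFIGURE;
EXEC sp_configure 'max degree of parallelism', 1;
RECONFIGURE;

-- Or per query, without touching the server setting (query is hypothetical):
SELECT COUNT(*) FROM Posts OPTION (MAXDOP 1);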

As far as the deadlocks are concerned, I’d throw several trace flags on the system - they will tell you exactly why your deadlocks are happening (and what keys, indexes, etc. are involved).
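
For example, the SQL Server 2005 deadlock trace flag (1204 is the older SQL 2000 equivalent); it writes the full deadlock graph, including the resources and statements involved, to the error log:

DBCC TRACEON (1222, -1); -- -1 enables it globally, not just for this session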

Try this for starters,

http://support.microsoft.com/kb/832524

then if you need some more help, contact me via the blog.

Regards
Robert

Weird problem; I have never experienced it, especially not a few times a day as you are seeing. I might have missed it if it were something that happened once a month, but not a few times a day.

Are you, as I have noticed in your previous posts, using lots of dynamic SQL as opposed to stored procedures?

Even for a quick SELECT or UPDATE statement, query plan compilation can take time on a busy server; perhaps this is causing record locks to last longer, or even to lock more than necessary.
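
If dynamic SQL is unavoidable, parameterizing it at least lets the server cache and reuse the compiled plan; a sketch (table and column are placeholders):

EXEC sp_executesql
    N'UPDATE Posts SET ViewCount = ViewCount + 1 WHERE Id = @id',
    N'@id INT',
    @id = 12345; -- the plan is compiled once and reused across id values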

Oh, you’re using SQL Server 2005… they fixed that in 2008…

The only reason you should be getting a deadlock is if you are waiting on someone who is in turn waiting on you. You have to ask yourself two questions: why, exactly, are they waiting on you? And why, exactly, are you waiting on them?

If your read transaction is waiting on some other transaction’s write and you deadlock, that means they are also waiting on you, usually because of your own write lock. Do you really need to hang on to that lock (i.e., why not just commit what you have)?
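
The classic shape of that cycle, sketched with two sessions and two hypothetical tables:

-- Session A:
BEGIN TRANSACTION;
UPDATE Posts SET Title = 'x' WHERE Id = 1; -- A holds an exclusive lock on Posts row 1

-- Session B:
BEGIN TRANSACTION;
UPDATE Users SET Name = 'y' WHERE Id = 7;  -- B holds an exclusive lock on Users row 7

-- Session A:
UPDATE Users SET Name = 'z' WHERE Id = 7;  -- A blocks, waiting on B

-- Session B:
UPDATE Posts SET Title = 'w' WHERE Id = 1; -- B blocks, waiting on A: a cycle,
-- and SQL Server kills one of the two as the deadlock victim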

OMG! I was shocked when reading this: “How can a read be blocked by a write? What possible contention could there be from merely reading the data?”

Like in one comment: “It’s always a little disturbing to see a well-known coder ask a dumb question.”
And another: “I think you misunderstand what a deadlock really is. A deadlock is not a lock that is taking a long time. It is a lock that will never be released until one of the two processes is killed.”
I totally agree…

I’ve scratched my head enough to know that threads / synchronization = pain in the ass…
…but that is basic stuff; you disappoint me, Jeff :frowning:

If you need to read data coming from a probe, maybe you don’t care whether it has finished being written into your buffer when you read it for direct display… [but it’s bad to do it that way].
Usually, you don’t want to read invalid data.
The reader needs to wait until the writer has done its job, or else you are going to read old/invalid data!


Never heard of the NOLOCK option before; seems like more ??!!? stupid stuff from Mr. $ql $erver.

You really need to make sure you understand what read committed snapshot does and whether you should apply it to your application. I have put together this quick informational post on read committed snapshot to help non-DBA types understand the process.

http://www.coderjournal.com/2008/08/deadlocked-read-committed-snapshot-explained/

It can cause breaking changes in your application if you apply it without knowing the effects first. Your application might rely on row-level locking for specific business rules. So be careful.
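
For reference, turning it on is a one-liner, but note that the ALTER needs exclusive access to the database, so it usually means a short maintenance window (the database name is a placeholder):

ALTER DATABASE MyForumDb SET READ_COMMITTED_SNAPSHOT ON;
-- After this, ordinary READ COMMITTED SELECTs read the last committed
-- version of each row instead of blocking on in-flight writers.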