Deadlocked!

These types of problems are a big part of why SQL Server introduced MVCC features in the 2005 release. Most of the DB world has moved to MVCC as its default behaviour (Oracle, Postgres, MySQL's InnoDB, Firebird/Interbase, etc.), with DB2 being the main holdout that still uses the pure locking model for database transactions.

I suspect they didn't change the default behaviour (thus treating you as a banking site) in order to avoid breaking backwards compatibility. But seriously, for web apps MVCC is the better approach.

But instead of storing it in the Post table, store the number of replies in a PostCommentCount table.

The whole point of storing the count instead of calculating it is to make reads cheap. So now you're asking for an extra join? Why not just tell him to calculate it each time, like the other people who don't get it?
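For reference, the suggestion above amounts to something like this (the table and column names are my guesses, not from the post):

CREATE TABLE PostCommentCount (
    PostId int NOT NULL PRIMARY KEY,
    ReplyCount int NOT NULL DEFAULT 0
)

-- each new reply bumps the narrow counter row instead of the wide Posts row
UPDATE PostCommentCount SET ReplyCount = ReplyCount + 1 WHERE PostId = @PostId

-- but every read now pays for the extra join being complained about
SELECT p.Title, c.ReplyCount
FROM Posts p JOIN PostCommentCount c ON c.PostId = p.Id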

Someone has to explain to me sometime - why is it that every time a problem is posed, people's first response is to tell you that you should use a different platform/technology/brand?

It’s because they don’t know how to solve the actual problem. This is not a case of a bad product - it is a case of a mistake in programming, and it can be fixed. I’m sure that’s why it was posted in the first place. I don’t like the solution, but the point is, it was solved without re-working the whole site, changing databases, or anything else drastic - he just added two little words to the query. Simple problem, simple solution.

I agree with some of the comments (though I might phrase it differently) that you would probably be happier if you learned more about databases.

The other thing is, you know the old axiom "select(2) is not broken"? That if you think you found a bug in the OS, 99% of the time it's in your code, and the other 1% of the time you need to check your code even harder?

Not true with Microsoft products.

I suspect that the reason there aren't many Google results for MySQL deadlocks is that the default storage engine in MySQL (MyISAM) doesn't have deadlocks.

From the MySQL docs at http://dev.mysql.com/doc/refman/5.0/en/internal-locking.html:

Table locking in MySQL is deadlock-free for storage engines that use table-level locking. Deadlock avoidance is managed by always requesting all needed locks at once at the beginning of a query and always locking the tables in the same order.

However, this comes at the expense of transactions.

The deadlock part seems pretty well covered, so I'm going to echo the "AnswerCount field is probably a bad idea and should be calculated" sentiment.
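"Calculated" meaning, roughly, a sketch like this on every read (assuming a Posts table with a ParentId column; @QuestionId is a placeholder):

DECLARE @QuestionId int
SET @QuestionId = 1

-- count the answers on demand instead of maintaining a stored counter
SELECT COUNT(*) AS AnswerCount
FROM Posts
WHERE ParentId = @QuestionId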

If you have long-running transactions, lock all the required data at the beginning to prevent deadlocks:

Thread 1:

begin tran
declare @x int

-- updlock takes update locks up front; holdlock keeps them until commit,
-- so the later update can't deadlock against the other thread's select
select @x = A.x
from A with (holdlock, updlock)
join B with (holdlock, updlock)
on A.id = B.id
where A.id = 1

update B
set y = @x
where id = 1
commit

Thread 2:

begin tran
declare @y int

-- note: same table order (A then B) as thread 1 - taking locks in a
-- consistent order is the other half of what prevents the deadlock
select @y = B.y
from A with (holdlock, updlock)
join B with (holdlock, updlock)
on A.id = B.id
where A.id = 1

update A
set x = @y
where id = 1
commit

If you don't like the join you can write (keeping the lock hints - without them the statement no longer reserves anything):

select A.x
from A with (holdlock, updlock), B with (holdlock, updlock)
where A.id = 1 and B.id = 1

SQL Server - that is, MySQL, PostgreSQL, Oracle, MS SQL or any other SQL server.

MySQL Server is MySQL.

MS SQL Server/Microsoft SQL Server, however, is Microsoft's SQL Server.

The generic name has been appropriated by Microsoft for marketing reasons. Various SQL servers (including Sybase, the engine Microsoft originally licensed) predate Microsoft's product ...

Deadlocking on reads, on a web interface that does not really care if the information is a few seconds out of date, sounds like the perfect case for using nolock ...

The deadlock issue has nearly been beaten to death, but I want to add one more thing. I think there has been a lot of confusion in the discussion about the isolation levels Jeff is talking about using. NOLOCK is equivalent to READ UNCOMMITTED, but if you read the entire post, that's not what Jeff is talking about using. He's planning to use READ COMMITTED SNAPSHOT, which does no dirty reads.

With snapshot isolation enabled, SQL Server keeps a version of each row (in tempdb) before a transaction modifies it and purges it after the transaction is committed; while that version exists, readers read from it instead of blocking. That eliminates the locking issue, and therefore should eliminate the deadlocks. It does require more disk resources, but in my experience, snapshots have been very fast.

I wouldn't use a dirty read unless it was absolutely necessary - all sorts of nasty side effects can come of that if you're not very, very careful - but this isn't a dirty read. You're still reading data that has already been committed to the database.
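For completeness, turning that on is a one-liner against the database (the database name here is a placeholder; WITH ROLLBACK IMMEDIATE kicks out open connections so the setting can take effect):

ALTER DATABASE StackOverflow
SET READ_COMMITTED_SNAPSHOT ON
WITH ROLLBACK IMMEDIATE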

Jeff, if you haven’t done it already, it would be worthwhile to use SQL Profiler to run a trace on your database while the app is running. This has helped me sort out a number of issues since it tells you everything the database is doing. You may find a transaction that LINQ is creating implicitly that you weren’t aware of, and even if you do choose to use snapshot isolation, correcting a transaction like that may help the performance of the application.

Good luck!

Ed

You need to read this one Jeff. You happened to mention you are using LINQ to SQL for your design. Please tell me you didn't use the LINQ to SQL designer in Visual Studio to define your database tables. I'm sure LINQ to SQL works just fine if you write the code yourself, but using the designer is going to make your site SLOOOOOW!

I recently developed a virtual tour posting system for real estate agents at my job, and I used the LINQ to SQL designer to define my tables to start with (programming in C#). I never could find out why a single page would take seconds to build (1.93s, it turns out). Finally I used the trace feature and found the DataContext.ExecuteMethodCall procedure run from the designer code was taking 0.73 seconds to retrieve ONE row from the database, from a table with ONLY 17 rows, through a stored procedure that was ONLY a simple select statement, even with DataContext.ObjectTrackingEnabled set to FALSE.

That method uses a lot of reflection and testing of relationships to ensure the data is valid. I rewrote the data access layer of my system, still using attributes and reflection for parameter types, implemented their IExecuteResult interface to add a little more flexibility, and now that same call takes 0.000434 seconds, and the page builds in 0.034 seconds. I know it sounds like an exaggeration, but I am dead serious. MS' code for that method must be incredibly slow. And so you don't think it is my system: our server is a twin dual-core (4 cores) Xeon 2.8GHz, 8GB RAM, Windows Server 2008 Enterprise, SQL Server 2005, 1.6TB RAID 5 array. I could watch the page peg the processor at 26% (their code only using one thread) for over a second in Task Manager.

Thought you'd want to know that, but only if you used the designer in Visual Studio.

A deadlock does not occur simply because an operation takes too long on a resource while another is waiting for the same resource.

It means, instead, that a query, of any nature, is waiting for a resource that is held by another query, which is in turn waiting for a resource held by the first one. Obviously there is no way out of this unless one of the queries explicitly dies a horrible death so that the other one can continue.

In SQL Server, there are two basic ways of accessing the data: Read Committed and Read Uncommitted, the first being the default (and yours).
In Read Committed, each query, even a select, must access only data that has been completely written and committed and is not currently being modified by another query. That ensures your data is always the REAL data, not some old data that has already been overwritten during the select itself.
Read Uncommitted, on the other hand, tells the select to grab whatever it can give you right now, even if it's being updated by some other query at that very moment. I leave you to the documentation on how to switch between the two modes.
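For the record, the switch is a per-session statement; a minimal sketch (the Posts table is just an example):

SET TRANSACTION ISOLATION LEVEL READ UNCOMMITTED

-- queries here may see in-flight, uncommitted changes
SELECT COUNT(*) FROM Posts

SET TRANSACTION ISOLATION LEVEL READ COMMITTED
-- back to the default behaviour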

Anyway there's another way: you can use a

SELECT *
FROM BLAHTABLE WITH (NOLOCK)

The nolock hint tells the select to ignore any locks, as if it were a read uncommitted environment. (Careful with the syntax: "from BLAHTABLE nolock" without the parentheses doesn't apply the hint at all - it just creates a table alias named nolock.) Note that you can't use nolock on the table you're updating, but you can still reduce the risk of a deadlock by using a rowlock instead. Beware that rowlock will have an impact on performance, though.
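On the write side, the rowlock idea looks something like this (a sketch; the table, column and variables are placeholders):

DECLARE @Id int, @Value int
SET @Id = 1
SET @Value = 42

-- ROWLOCK asks SQL Server to take row-level locks instead of escalating
-- to page or table locks, shrinking the overlap between transactions
UPDATE BLAHTABLE WITH (ROWLOCK)
SET SomeColumn = @Value
WHERE Id = @Id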

  1. The root cause is not using Oracle. If you are using Oracle and still get a deadlock, you have an unindexed foreign key and failed to heed the warnings in TOAD or OEM. I only need one hand to count the number of deadlocks I’ve encountered in a decade of using Oracle. All had obvious causes and were discovered early in development.

I guess you haven't tried running Oracle in full SERIALIZABLE isolation mode then. Try that and say hello to the "Cannot serialize the transaction" error unless you're doing just really, really small transactions.

But overall, deadlocks and serialization errors are a fact of life. You just try again and hope you succeed. Happily, in our framework, we just roll back the transaction and call the continuation that was saved when the transaction was started.
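In plain T-SQL, that retry-on-deadlock pattern looks roughly like this (a sketch assuming SQL Server 2005's TRY/CATCH; 1205 is the deadlock-victim error number):

DECLARE @retries int
SET @retries = 3
WHILE @retries > 0
BEGIN
    BEGIN TRY
        BEGIN TRAN
        -- ... the actual work goes here ...
        COMMIT TRAN
        BREAK  -- success, stop retrying
    END TRY
    BEGIN CATCH
        IF XACT_STATE() <> 0 ROLLBACK TRAN
        IF ERROR_NUMBER() = 1205
            SET @retries = @retries - 1  -- deadlock victim: try again
        ELSE
            SET @retries = 0             -- real error: give up
    END CATCH
END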

I've never used MS SQL Server; I've used Oracle for over a decade.

As has been mentioned, in Oracle readers don't block writers, and writers don't block readers.

A thread above suggested that MS SQL Server was better because it leaves the 'snapshot' feature that enables this behaviour turned off by default: personally, I'd rather have correct than fast in the first instance any day...

I have seen a couple of Oracle deadlocks in my day, always caused by U1 updating A and then trying to update B, while U2 updates B and then tries to update A.

SQL Server is a work in progress for Microsoft compared to Oracle - from plenty of experience deploying to both platforms, we've had MUCH better performance, far fewer deadlocks and fewer headaches in general with Oracle.

Has anyone mentioned triggers?

I usually avoid triggers because 'tis easy to forget they exist and that they get included in any transactions you might generate. So that simple SELECT or UPDATE might be introducing a second table into the deadlock through a trigger.

I amend my previous statement - no SELECT triggers in MS SQL.
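To make the trigger point concrete, here's a hypothetical sketch (the trigger and table names are invented): an innocent-looking UPDATE on Posts silently writes to AuditLog too, pulling a second table into every transaction's lock footprint.

CREATE TRIGGER trg_Posts_Audit ON Posts
AFTER UPDATE
AS
BEGIN
    -- fires inside the same transaction as the triggering UPDATE,
    -- so AuditLog's locks are held until that transaction ends
    INSERT INTO AuditLog (PostId, ChangedAt)
    SELECT Id, GETDATE() FROM inserted
END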

How about this case?

CONNECTION 1: UPDATE BankAccount SET Balance = Balance - 10 WHERE ID = 1
CONNECTION 2: SELECT SUM(Balance) FROM BankAccount
CONNECTION 1: UPDATE BankAccount SET Balance = Balance + 10 WHERE ID = 2

Connection 1 is moving $10 from account 1 to account 2. Connection 2, in the meantime, just read that there is $10 less in total than actually exists. Nothing rolled back, but you've just completely lost atomicity.
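For what it's worth, the remedy is wrapping the transfer in a transaction (same table as above):

BEGIN TRAN
UPDATE BankAccount SET Balance = Balance - 10 WHERE ID = 1
UPDATE BankAccount SET Balance = Balance + 10 WHERE ID = 2
COMMIT

-- under the default READ COMMITTED, connection 2's SUM now either waits
-- for the commit or sees the consistent total - though a NOLOCK read
-- could still observe the missing $10 in between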

Hey Jeff,

You're not storing the Id of the original post (the primordial one) in its ParentId col, are you? That would obviously introduce immediate overlap and a race condition on the offending (parent) post. I can't believe it would be that obvious, but if it is, you can SQL-around that one with an extra clause to ignore that particular record. I'm reading waaay too much into a DM that I didn't design!
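That extra clause would be something like this (hypothetical, assuming the root post stores its own Id in ParentId; @PostId is a placeholder):

SELECT Id, Body
FROM Posts
WHERE ParentId = @PostId
  AND Id <> @PostId  -- skip the self-referencing root record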

I think in the case of this race condition, where the UPDATE and SELECT are contending to acquire a lock on a matching row, you should perform the retry as directed by the db error. Have you tried this as a default option before applying hints that result in dirty reads for some of your users? It smells like your data isn't that concurrent anyway - the select seems to pull those records related to a parent record, while the UPDATE is isolated to changing the parent. I suspect your comment about denormalisation, i.e. storing responses to an initial post in the SAME table, has A LOT to do with the deadlock, as normalising parent-child fashion would result in separate tables being used in each query.

Anyway, fundamentally it's perhaps a usability choice - do you allow your users to see dirty data? Would they know it was dirty? Or do you trap the error when you know data to be dirty, because SQL Server is telling you it is, and requery on behalf of the user? For me, the system should do this, but the cost should be measured in terms of trapping the error and requerying, and its effect on your desire to scale up.

Deadlocks are caused by the Database Designer and the Developers. It is always possible to write a system that is incapable of causing a deadlock. Of course, this is the same as saying it is possible to write code that has no bugs: easy in theory, incredibly hard in practice.

Turning COMMITTED SNAPSHOT on can result in two identical SELECT statements within the same transaction returning different data. Both SELECT statements are returning fully correct and valid data. However, that is the reason it is not turned on by default.
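A sketch of what that looks like; under READ_COMMITTED_SNAPSHOT, each statement sees the latest committed data as of when that statement starts (table reused from the banking example above):

BEGIN TRAN
SELECT Balance FROM BankAccount WHERE ID = 1  -- say this returns 100
-- ... another session commits an update to 90 here ...
SELECT Balance FROM BankAccount WHERE ID = 1  -- now returns 90
COMMIT
-- (full SNAPSHOT isolation, by contrast, would return 100 both times)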

I've seen two brilliant comments here:

Ian Nelson stated: "Imagine how pissed you'd be if SQL Server treated your banking application like a silly little web app."

S stated: "It's not 'silly' that SQL Server treats your website as a banking operation; personally I would expect nothing less. Correctness is vastly more important than performance. Only the developer can trade off correctness for performance; and it is rarely needed."

I couldn't agree more with these statements; you as the developer have to be the one to choose what level of locking you require. There is a time and a place for all of them, from the pessimistic default to the much feared (nolock). However, as the majority of us developers do suck, I'm rather glad Microsoft has taken the default approach they have, even if it is unnecessary in a large number of cases.

If they could justify reducing the default level to READ COMMITTED SNAPSHOT, then they could in fairness reduce it all the way to READ UNCOMMITTED; how many people would that affect?

Robin Day
http://www.advorto.com

The devil is in the details, and in this case there are a lot of details. I'd like to echo the sentiment expressed by some people that I do hope the answers on stackoverflow will contain a little less vitriol. Lock issues are hard; even seasoned DBAs can run into them on occasion. Having done a database course or two does not make you any more of an expert on the exact cause of these issues, and it doesn't make you smarter than Jeff.

Other than that, I hope the type of person who likes to insult other people's intelligence will end up with appropriately low karma so I can just ignore their posts on the eventual stackoverflow. (Which I'm not beta testing 'cause I'm lazy :wink: )