In my most recent adventure, I was tasked with creating a real-time push subscription to our company's distributed publication server. This effort kicked off a lot of ideas, foremost being the asynchronous trigger promoted by Service Broker. I used the example found in the back of the book Pro SQL Server 2008 Service Broker. While this trigger is asynchronous with respect to writing to an internal SQL Server object (i.e. a table), it still takes a bit of time to execute (and thus to return the initial transaction). I was disappointed to learn that asynchronous does not mean timely. Service Broker has brought (and will bring) many advances in the near term; however, I hope the speed of submission and true asynchrony will be revisited sometime soon.
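For context, the asynchronous trigger pattern looks roughly like this: the trigger does nothing but SEND a message on a Service Broker conversation, and the real work happens later in the queue's activation procedure. This is a minimal sketch, assuming a contract, message type, queue, and service (AuditContract, AuditMessage, AuditService) have already been created; all names here are illustrative, not the book's sample code.

```sql
CREATE TRIGGER trg_Orders_AsyncAudit ON dbo.Orders
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;

    DECLARE @h UNIQUEIDENTIFIER,
            @payload XML = (SELECT * FROM inserted FOR XML AUTO, ELEMENTS);

    -- Open a conversation and enqueue the message. The expensive audit work
    -- runs later in the queue's activation procedure, not in this transaction,
    -- but the SEND itself still costs extra writes inside this transaction.
    BEGIN DIALOG CONVERSATION @h
        FROM SERVICE [AuditService]
        TO SERVICE 'AuditService'
        ON CONTRACT [AuditContract]
        WITH ENCRYPTION = OFF;

    SEND ON CONVERSATION @h
        MESSAGE TYPE [AuditMessage] (@payload);
END;
```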
The second option for performing the real-time push is a synchronous trigger: an AFTER UPDATE trigger that pushes the record to a common table, which is then polled by a job. Initially I thought I could insert into this common table and then have another trigger on that table publish the record. However, I soon learned that triggers that fire other triggers all participate in the same transaction.
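A minimal sketch of that synchronous alternative, assuming hypothetical dbo.Orders and staging-table names:

```sql
-- Staging table the polling job reads from.
CREATE TABLE dbo.ChangedOrders
(
    ChangeId  BIGINT    IDENTITY(1,1) PRIMARY KEY,
    OrderId   INT       NOT NULL,
    ChangedAt DATETIME2 NOT NULL DEFAULT SYSUTCDATETIME(),
    Processed BIT       NOT NULL DEFAULT 0
);
GO

CREATE TRIGGER trg_Orders_Capture ON dbo.Orders
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- This INSERT runs inside the original transaction, so it must stay cheap.
    INSERT INTO dbo.ChangedOrders (OrderId)
    SELECT OrderId FROM inserted;
END;
```

The job then selects rows where Processed = 0, pushes them, and flips the flag.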
When one trigger inserts into a table that itself has a trigger, this is a nested trigger. Nested triggers are a concern because the call that performs the initial insert does not return until the last trigger in the sequence completes. In trying to circumvent this behaviour, as mentioned before, I tried to implement asynchronous triggers, which use Service Broker to send an asynchronous response. The problem is that these asynchronous triggers are slower than synchronous triggers. This was illustrated by a former SQL Server Service Broker developer, Remus Rusanu, on an MSDN forum. I'll quote:
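As an aside, whether triggers may fire other triggers at all is an instance-level setting, but note that turning it off only stops the chaining; it does not make anything asynchronous:

```sql
-- 'nested triggers' is a documented sp_configure option (1 = enabled, the default).
EXEC sp_configure 'nested triggers';

-- Disabling it prevents triggers from firing other triggers entirely;
-- whatever does run still runs inside the one originating transaction.
EXEC sp_configure 'nested triggers', 0;
RECONFIGURE;
```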
Synchronous audit has to do one database write (one insert). Asynchronous audit has to do at least an insert and an update (the SEND) plus a delete (the RECEIVE) and an insert (the audit itself), so that is 4 database writes. If the destination audit service is remote, then the sys.transmission_queue operations have to be added (one insert and one delete). So clearly there is no way asynchronous audit can be on par with synchronous audit, there are at least 3 more writes to complete. And that is neglecting all the reads (like looking up the conversation handle etc) and all the marshaling/unmarshaling of the message (usually some fairly expensive XML processing).
Within one database the asynchronous pattern is appealing when the trigger processing is expensive (so that the extra cost of going async is negligible) and reducing the original call response time is important. It could also help if the audit operations create high contention and deferring the audit reduces this. Some more esoteric reasons are when asynchronous processing is desired for architecture reasons, like the possibility to add a workflow triggered by the original operation and the desire to change this workflow on-the-fly without impact/down time (e.g. more consumers of the async message are added, the message is shredded/dispatched to more processing apps and triggers more messages downstream, etc.).
If the audit is between different databases, even within the same instance, then the problem of availability arises (the audit table/database may be down for intervals, blocking the original operations/application).
If the audit is remote (different SQL Server instances) then using Service Broker solves the most difficult problem (communication) in addition to asynchronicity and availability; in that case the synchronous pattern (e.g. using a linked server) is really a bad choice.
You can also find the source code for the asynchronous trigger by downloading the following file and looking in chapter 10.
So what options does that leave? If you want to do asynchronous auditing, you are left with Change Tracking or Change Data Capture (CDC). Both need a job and are not the most failsafe way to track changes in case of a job failure. I can say that we have had Change Data Capture running in our production environment for over a year with only one failure (due to the transaction log filling up). However, on a CDC failure we cannot shut down our production system, so we would miss some changes. The way to compensate for this would be a nightly reconciliation module.
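Enabling CDC uses documented system procedures; database, schema, and table names below are illustrative:

```sql
USE MyDatabase;
EXEC sys.sp_cdc_enable_db;   -- enable CDC at the database level

EXEC sys.sp_cdc_enable_table
    @source_schema = N'dbo',
    @source_name   = N'Orders',
    @role_name     = NULL;   -- no gating role

-- Changes are then read from the generated function, e.g.:
-- SELECT * FROM cdc.fn_cdc_get_all_changes_dbo_Orders(@from_lsn, @to_lsn, 'all');
```

Note that CDC reads the transaction log via capture jobs, which is exactly why a stopped job or a full log means changes pile up or are lost to retention.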
Change Tracking uses the same framework as CDC but is more lightweight; however, it is suggested that you enable snapshot isolation, which can cause extensive utilization of TempDB.
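Setting up Change Tracking with that recommendation looks like this (database and table names are illustrative):

```sql
-- Snapshot isolation is the recommended companion (row versions live in TempDB).
ALTER DATABASE MyDatabase SET ALLOW_SNAPSHOT_ISOLATION ON;

ALTER DATABASE MyDatabase SET CHANGE_TRACKING = ON
    (CHANGE_RETENTION = 2 DAYS, AUTO_CLEANUP = ON);

ALTER TABLE dbo.Orders ENABLE CHANGE_TRACKING
    WITH (TRACK_COLUMNS_UPDATED = ON);

-- A consumer polls with CHANGETABLE, passing the version it last synced to:
-- SELECT * FROM CHANGETABLE(CHANGES dbo.Orders, @last_sync_version) AS CT;
```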
So, for now, in order to trap our data, we are going to use a synchronous trigger that writes the changed row to an intermediary table along with the column bitmask of what changed. A job will then poll that table for changed records.
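A minimal sketch of this approach, using the documented COLUMNS_UPDATED() bitmask function; table and column names are hypothetical:

```sql
CREATE TRIGGER trg_Orders_Bitmask ON dbo.Orders
AFTER UPDATE
AS
BEGIN
    SET NOCOUNT ON;
    -- COLUMNS_UPDATED() returns a varbinary bitmask with one bit per column,
    -- set when that column appeared in the UPDATE's SET list.
    INSERT INTO dbo.OrderChanges (OrderId, ColumnBitmask, ChangedAt, Pushed)
    SELECT i.OrderId, COLUMNS_UPDATED(), SYSUTCDATETIME(), 0
    FROM inserted AS i;
END;
GO

-- The polling job claims a batch and returns the claimed rows in one statement:
UPDATE TOP (500) dbo.OrderChanges
SET    Pushed = 1
OUTPUT deleted.OrderId, deleted.ColumnBitmask
WHERE  Pushed = 0;
```

Using UPDATE ... OUTPUT to claim rows keeps the poll to a single atomic statement, so two overlapping job runs cannot push the same row twice.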
Overall, I wish I could find a more efficient solution. If anyone has any ideas, please comment below.