Wednesday, September 25, 2013

SQL File System Slides

Here are the slides from my Training on the Ts presentation: SQL File System Slides

SQL File System Questions

I did a presentation on the SQL file system Tuesday for Training on the Ts.
The answers to questions asked during the presentation follow:

Questions Asked by Attendees
Q: Are there any changes in the file internals for SQL 2014? Does SQL Azure have the same
file architecture? Does it expose the same data and log files as on-premises SQL Server versions?
A: I'm not aware of any changes to the file system in 2014.  There are always performance improvements
in each release, but the basic file structure that we talked about hasn't changed for several releases.
SQL Azure uses a very similar physical structure, but an Azure database is a logical database and the
physical structure of the database isn't exposed to the user.
Q: Back in the day, we always tried to allocate our DBs in allocation and growth sizes that
fit evenly onto 8K pages (the size of a physical disk page). What is the physical size these days, with 64-bit etc.?
A: Good point.  SQL Server space is allocated in 64K chunks, so if the file size isn't evenly divisible by
64K there will be some unused space at the end of the file.
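As a back-of-the-envelope illustration (plain Python arithmetic, not anything SQL Server exposes), the unusable tail of a data file is just the file size modulo the 64 KB extent size:

```python
# SQL Server allocates space in 64 KB extents (8 contiguous 8 KB pages).
EXTENT = 64 * 1024

def unused_tail(file_size_bytes: int) -> int:
    """Bytes at the end of the file too small to form a full extent."""
    return file_size_bytes % EXTENT

# A 100 MB file divides evenly into 64 KB extents, so nothing is wasted:
print(unused_tail(100 * 1024 * 1024))              # 0
# A 100 MB + 10 KB file leaves a 10 KB unusable tail:
print(unused_tail(100 * 1024 * 1024 + 10 * 1024))  # 10240
```

So sizing files (and growth increments) in multiples of 64 KB avoids the wasted tail entirely.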
Q: Can you please share the SQL Server capacity planning templates or URLs?
A: Unfortunately, I haven't ever run into a good SQL Server capacity planning system.  I generally
come up with a solution tailored to the particular application.  Here are some of the links I use for
information:
http://msdn.microsoft.com/en-us/library/ms978394.aspx
http://technet.microsoft.com/en-us/library/cc298801.aspx
http://technet.microsoft.com/en-us/library/ms345368(v=SQL.105).aspx
 
Q: How is allocation different when the allocation unit is set to 64K?
A: Not sure what you're asking.  Allocation is always 64K.  Are you talking about NTFS file
system block sizes?  If so, an NTFS block size of 64K is recommended for SQL Server.
Q: I heard from another DBA that he creates 8 files automatically for TempDB, no matter the
server's setup. What are your comments on this?
A: Not a bad thing to do.  I don't see any reason why that would cause a problem on a 4-core
box.  The only downside is a little extra complexity.
Q: If temp space is requested by an application, does SQL allow it to use only one of the
8 files allocated to tempdb or can it use multiple?
A: SQL Server allocates space from the available files in a round-robin fashion if the
files are the same size.  If the files are different sizes, SQL uses something called proportional
fill to allocate more space from the larger files so they stay approximately the same percent full.
This is why all files in a filegroup should be the same size; if not, the big file becomes a hot spot.
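To make the idea concrete, here's a toy Python simulation (an illustration of the concept, not SQL Server's actual algorithm): each 64 KB extent goes to the file that is currently the least percent full, so larger files absorb proportionally more allocations and all files stay at roughly the same fullness.

```python
# Toy model of proportional fill: each extent is allocated from the file
# with the largest free *fraction*, keeping all files equally percent full.
def allocate(sizes_mb, extents):
    used = [0] * len(sizes_mb)           # extents allocated per file
    cap = [s * 16 for s in sizes_mb]     # 16 x 64 KB extents per MB
    for _ in range(extents):
        # pick the file that is currently the least percent full
        i = max(range(len(cap)), key=lambda j: (cap[j] - used[j]) / cap[j])
        used[i] += 1
    return [u / c for u, c in zip(used, cap)]  # fraction full per file

# A 100 MB file and a 200 MB file: the larger file receives twice as
# many extents, so both end up equally full.
print([round(p, 2) for p in allocate([100, 200], 2400)])  # [0.5, 0.5]
```

With equal-sized files the same loop degenerates into simple round robin, which is why equal sizing is the recommended configuration.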
Q: is anyone else having audio issues?
A: Sorry about that - maybe there was a network issue.
Q: Is it recommended to run disk defrag where SQL mdfs and ndfs are located?
A: Yes, but it's only necessary if the disk is fragmented.  If your files don't autogrow, the disk
volume shouldn't get fragmented.
Q: Is it worthwhile to update an existing database's minimum file size after the database is
created and in use?
A: Yes; if the data size is increasing, you can prevent further fragmentation.
Q: To defrag the Windows file, don't we have to take the DB files offline, defrag the
Windows file, then bring the SQL file online and defrag it?
A: Yes
Q: Best practice we employ - use CREATE TABLE for tempDB tables; otherwise it locks all the
allocation maps until the entire INSERT INTO is completed.
A: True.  Good point.
Q: Very good presentation.  Thank you so very much.
A: --unanswered--
Q: How can I defrag the SQL file system after doing a Windows file system defrag?
A: The SQL file system doesn't get fragmented, but SQL indexes do.  Rebuilding
the indexes will remove that fragmentation.
Q: When does truncating a table reallocate space to be reused?
A: First, I want to be sure you're not talking about the Windows file size decreasing
when tables are truncated.  The Windows file doesn't ever shrink unless you shrink
the database, which I definitely don't recommend.  If you are asking when the pages
allocated to the table are freed up after a TRUNCATE TABLE, they are freed after the next log
file backup.  There's a misconception that truncate is not logged.  It is actually minimally
logged: the list of pages allocated to the table is written to the log, and in the next log backup
those pages are written to the backup.
Q: With today's SANs, how can you control fragmentation?
A: SAN volumes can be defragmented with the defrag tool of your choice.
Q: Would multiple files on user databases help reduce deadlocks?
A: Unfortunately, probably not.  Allocation contention doesn't generally cause deadlocks.  It's
worth doing anyway, but I haven't ever run into a deadlock caused by allocation contention.

Monday, September 9, 2013

TSQL Tuesday SSB Rube Goldberg Solutions

For those of you who don’t know me, I was one of the program managers for the SQL Server 2005 Service Broker.  One of the harder parts of being the Service Broker (SSB) PM was positioning it.  The first reaction I got when I did SSB talks was usually “Is this really different than MSMQ?”  To demonstrate that Service Broker was really much more than yet another reliable messaging product, I worked with several of our early adopters to come up with creative uses for the reliable, transactional, asynchronous queuing aspects of SSB.  I also wrote up a few uses that may not have ever been implemented.  Here are the more significant SSB Rube Goldberg ideas:

Data Warehouse Load Balancing


The first ever SSB customer was an internal Microsoft reporting application.  They had 4 or 5 copies of a database used for doing large ad-hoc queries.  When a query came in it was routed to one of the copies for processing.  The first SSB application was the obvious one – queries came in to a front-end server and were put on a queue on one of the query servers for processing.   The query engine would receive a message from the queue with the query, execute the query, and return the results in a response message.  This worked well and the transactional messaging ensured that queries weren’t lost but the customer wanted to be more intelligent about where queries should be routed.  To do this we set up a dialog timer on each of the query servers.  When the timer expired (every 10 seconds or so) an activation procedure would gather some statistics about running queries, processor, memory, and disk usage, etc. and send the info to the front end server.  The front end server could then use this information to decide which server was the least loaded and route the queries accordingly.
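The routing decision itself is simple once the stats messages arrive; here's a conceptual Python sketch (the real system used Service Broker dialog timers and activation procedures, and the server names and the single CPU metric here are illustrative assumptions):

```python
# Each query server periodically reports its load; the front end routes
# each incoming query to whichever server last reported the lowest load.
stats = {}  # server name -> most recently reported load (0.0 - 1.0)

def report(server: str, load: float) -> None:
    """Record a stats message received from a query server."""
    stats[server] = load

def route(query: str) -> str:
    """Pick the least-loaded server for this query."""
    return min(stats, key=stats.get)

report("query1", 0.80)
report("query2", 0.35)
report("query3", 0.60)
print(route("SELECT ..."))  # query2
```

In the real application the stats were richer (memory, disk, running queries), but the front end's job reduces to the same minimum-over-reported-load selection.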

X-ray transmission


This one started as a SOAP toolkit question.  A research project at a college had a web service that people could use to search for and download x-rays.  The problem they were running into was that the x-ray files were very large and some of the SOAP clients couldn’t handle the large messages.  We wrote a service that broke an x-ray down into a bunch of Service Broker messages that were sent to the SOAP server.  The client would request one message at a time and assemble the x-ray.  The client could request a message as many times as necessary; SSB would resend the same message until the client acknowledged it.  This was a pretty minor change to the web service that made it usable to significantly more clients.
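The chunking scheme is easy to sketch in Python (a conceptual analogue, not the original SOAP/SSB code; the 64 KB chunk size is an assumption for illustration). The key property is that reassembly is keyed by offset, so a chunk being redelivered before the client acknowledges it is harmless:

```python
CHUNK = 64 * 1024  # per-message payload size (illustrative assumption)

def split(data: bytes):
    """Break a large payload into offset-numbered chunks."""
    return [(i, data[i:i + CHUNK]) for i in range(0, len(data), CHUNK)]

def assemble(chunks):
    """Reassemble by offset; duplicate deliveries simply overwrite."""
    parts = {}
    for offset, payload in chunks:
        parts[offset] = payload          # idempotent on redelivery
    return b"".join(parts[o] for o in sorted(parts))

xray = bytes(range(256)) * 1000          # stand-in for a large x-ray
chunks = split(xray)
# Redelivered chunks don't corrupt the result:
assert assemble(chunks + chunks[:3]) == xray
```

Because each chunk is identified by its offset, the client can request any chunk as many times as needed, which is exactly what made the small SOAP clients workable.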

Express backup


One of the biggest pieces of negative feedback we got for SQL Express was that since it didn’t include SQL Agent there wasn’t an easy way to set up periodic database backups.  The Service Broker dialog timer looked like a good way to do this so I came up with a way to backup SQL Express at any chosen interval:  http://blogs.msdn.com/b/rogerwolterblog/archive/2006/04/13/575974.aspx.  One of the cool things about this is that the backup routine is implemented in the database being backed up so if you move the database to another server, backups continue to work.  I like to think of this as the first step in the contained database effort.

Task Management


It was a pretty logical step from using SSB for scheduled database backups to using SSB for scheduling any database command.  You just need an activated stored procedure that reads SSB messages and executes them.  Service Broker is distributed so distributed tasks just work.  I had a lot of interest when I published this article but I didn’t hear any implementation stories:  http://technet.microsoft.com/en-us/magazine/2005.05.servicebroker.aspx
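The pattern reduces to a worker draining a queue of command messages and executing each one; here's a minimal Python sketch of that shape (the article linked above implements it properly with an activated T-SQL procedure reading a Service Broker queue; the command strings and the "executed:" bookkeeping here are illustrative stand-ins):

```python
import queue

# Task messages are just commands; the activated worker reads and runs them.
tasks = queue.Queue()
results = []

def worker():
    """Drain the queue, executing each command message in arrival order."""
    while not tasks.empty():
        command = tasks.get()
        results.append(f"executed: {command}")  # stand-in for EXEC(@command)
        tasks.task_done()

tasks.put("BACKUP DATABASE orders")
tasks.put("DBCC CHECKDB (orders)")
worker()
print(results)
```

What the sketch can't show is the part Service Broker gives you for free: the queue is transactional and distributed, so a task enqueued on one server can be reliably executed on another.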

Windows Workflow


Service Broker was designed to support reliable, transactional, asynchronous applications.  The typical Service Broker application queues a message to request an action and then goes away until the action is complete and a response is received.  I always thought that pattern looked a lot like a workflow application so one day I built a Windows Workflow framework that used Service Broker as the eventing layer.  This resulted in a reliable, transactional infrastructure for building workflow applications.  This was pretty cool because as long as the actions the workflow invoked were transactional, the workflow was totally reliable.  The workflow application could die at any point and Service Broker would recover and pick up where it left off as soon as the database started again.  http://blogs.msdn.com/b/rogerwolterblog/archive/2007/05/21/windows-wf-on-sql-service-broker.aspx

Well, those are my Service Broker Rube Goldberg stories.  I would be very interested in hearing about other unique SSB applications.