Building a pay-per-use billing framework

By tom on August 28th, 2012

Hi all,

This is the first part in a series of blog posts I’m planning to do. It will outline the steps needed to build a pay-per-use billing platform.

One of my clients has a fairly large environment. At the moment they bill their internal customers/divisions based only on the space their databases use. I find this pretty unfair: we have several large databases that are hardly used, while other small databases are very heavily used and consume most of the CPU, memory and disk I/O.

In this first post, you can download some scripts (from my SkyDrive).
The first script contains the code to create the database, tables and stored procedures.
The second script contains the code to create 2 jobs.

A word of caution here: I’m not responsible for any errors the scripts contain.
Run them on a test or development server first to verify them for yourself.

Now to how it works:

Two jobs are created, each with a different schedule.

The first job, called CapacityPlanning_5minute, will run every 5 minutes and will trigger 4 stored procedures. Each of these stored procedures fills the tables with ‘raw’ data.

  • The first stored procedure stores the total worker time (this is the CPU time, in microseconds) per database. The source is the sys.dm_exec_query_stats DMV, cross-applied with the sys.dm_exec_plan_attributes function.
  • The second stored procedure stores the memory cache usage per database, coming from the buffer pool (sys.dm_os_buffer_descriptors).
  • The third stored procedure stores the total disk I/O per database. The source for this is the sys.dm_io_virtual_file_stats DMV, and I use a temporary table to aggregate both data file and log file I/Os.
  • Finally, the fourth stored procedure contains the logic to store the current size of the database (data + log) in the ‘raw’ tables. The source data is sys.master_files.
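To give you an idea, the CPU collection looks roughly like this (a sketch only — the table and column names here are illustrative, the actual names in the downloadable scripts may differ):

```sql
-- Sketch of the per-database CPU collection (illustrative names).
-- sys.dm_exec_plan_attributes exposes the 'dbid' attribute of each
-- cached plan, which lets us group total_worker_time per database.
INSERT INTO dbo.RawCpuUsage (CollectionTime, DatabaseName, TotalWorkerTime)
SELECT GETDATE(),
       DB_NAME(CONVERT(int, pa.value)),
       SUM(qs.total_worker_time)
FROM sys.dm_exec_query_stats qs
CROSS APPLY sys.dm_exec_plan_attributes(qs.plan_handle) pa
WHERE pa.attribute = 'dbid'
GROUP BY DB_NAME(CONVERT(int, pa.value));
```

Note that sys.dm_exec_query_stats only covers plans that are still in the cache, which is why sampling every 5 minutes matters.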

The second job, called CapacityPlanning_NightlyBatch, will run every night and will trigger the stored procedures that calculate aggregates per database and per day. The results are stored in the ‘Polished’ tables. Upon successful execution of the calculation, the ‘raw’ tables are cleaned, so the database doesn’t keep growing with data we no longer need.
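The nightly batch boils down to an aggregate-then-clean pattern, something like this (again a sketch with illustrative names, not the exact code from the scripts):

```sql
-- Sketch of the nightly aggregation (illustrative names).
-- Roll the 5-minute samples up into one row per database per day,
-- then clear the raw table once the aggregate is safely stored.
INSERT INTO dbo.PolishedCpuUsage (CollectionDate, DatabaseName, TotalWorkerTime)
SELECT CONVERT(date, CollectionTime),
       DatabaseName,
       SUM(TotalWorkerTime)
FROM dbo.RawCpuUsage
GROUP BY CONVERT(date, CollectionTime), DatabaseName;

-- Only runs if the INSERT above succeeded; wrap both in a
-- transaction if you want the cleanup to be all-or-nothing.
TRUNCATE TABLE dbo.RawCpuUsage;
```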

That’s it for now. I plan to add at least 2 more parts: the first will copy all data over to a central database, and the second will create the reports for monthly billing of all databases, based on what they have consumed.

Have a look, download the scripts, and as always, your comments are welcome.


Login failed – Error 18456 locally on SQL Server – no problems remotely

By tom on August 24th, 2012

This week I had to troubleshoot an issue where a user couldn’t connect with Management Studio on the SQL Server itself, but had no issues connecting remotely.

In a remote desktop session on the SQL Server, the following error came up:

Logging into the SQL Server from another server/workstation worked without problems.

The errorlog showed the following entries:

After some research, we found that the user was in the Windows Administrators group, and within SQL Server the BUILTIN\Administrators group had been granted sysadmin access (I know, certainly not a best practice, but this is not a server we manage/installed/configured, so don’t blame the messenger).

UAC was enabled on this server, and when you log on as an administrator of the machine, UAC ‘protects’ against (mis)usage of the administrator accounts/groups: your process runs with a filtered token that no longer carries the Administrators group membership, so SQL Server can’t match you to the BUILTIN\Administrators login.

Two workarounds were possible:

  • either give the account explicit rights within SQL Server
  • or start Management Studio as an administrator (right-click, Run as administrator)
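The first workaround amounts to creating an explicit login for the Windows account, so the filtered (non-elevated) token can still be matched. A minimal sketch, with DOMAIN\User as a placeholder for the real account:

```sql
-- Create an explicit login for the Windows account (placeholder name),
-- so login no longer depends on the BUILTIN\Administrators group.
CREATE LOGIN [DOMAIN\User] FROM WINDOWS;

-- Then grant whatever rights the user actually needs; if the intent
-- really is full sa-level access, on SQL Server 2008/2008 R2:
EXEC sp_addsrvrolemember 'DOMAIN\User', 'sysadmin';
```

The second workaround needs no changes in SQL Server at all, but the user has to remember to elevate Management Studio every time.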