Returning succinct exception messages from CLR code in SQL Server

December 17, 2010 14:15 by Matt Whitfield

I recently received a question from Dave Ballantyne (blog|twitter) about returning succinct error messages from CLR code.

Let's have a look at a basic example of some CLR code where you might want to throw an exception:

private const string _passwordSalt = "--^p@$$w0rD-sAlT-";

[Microsoft.SqlServer.Server.SqlFunction]
public static SqlString fn_getPasswordHashCode(string password)
{
    using (SHA256Managed hashProvider = new SHA256Managed())
    {
        byte[] hash = hashProvider.ComputeHash(
            Encoding.UTF8.GetBytes(_passwordSalt + password));
        StringBuilder output = new StringBuilder();
        foreach (byte b in hash)
        {
            output.Append(b.ToString("X2"));
        }
        return output.ToString();
    }
}

So, this returns us a hex string that contains the hash of a user-supplied password. However, this code will return the hash of the salt if an empty string is passed, and will return a hash even if the input is NULL. We may well want to avoid those situations, so we could modify the code in order to throw an exception that notifies the user of this fact:

[Microsoft.SqlServer.Server.SqlFunction]
public static SqlString fn_getPasswordHashCode(string password)
{
    if (string.IsNullOrEmpty(password))
    {
        throw new ArgumentException(
            "Supplied password cannot be null or empty.", "password");
    }
    using (SHA256Managed hashProvider = new SHA256Managed())
    {
        byte[] hash = hashProvider.ComputeHash(
            Encoding.UTF8.GetBytes(_passwordSalt + password));
        StringBuilder output = new StringBuilder();
        foreach (byte b in hash)
        {
            output.Append(b.ToString("X2"));
        }
        return output.ToString();
    }
}
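Calling the function with an empty password now trips that check - for example (assuming the function has been deployed as [dbo].[fn_getPasswordHashCode]):

SELECT [dbo].[fn_getPasswordHashCode](N'');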

This returns us a particularly verbose and ugly error message:

Line 1: Msg 6522, Level 16, State 2:
A .NET Framework error occurred during execution of user-defined routine or aggregate "fn_getPasswordHashCode":
System.ArgumentException: Supplied password cannot be null or empty.
Parameter name: password
System.ArgumentException:
   at UserDefinedFunctions.fn_getPasswordHashCode(String password)

What SQL Server is doing here is wrapping our exception in its standard 6522 message - a fixed preamble that names the routine being executed - followed by the exception's Message and StackTrace members.

My first port of call was to see if deriving from ArgumentException would allow us to control that output. So I created a class 'MyArgumentException':

private class MyArgumentException : ArgumentException
{
    public MyArgumentException(string message, string paramName)
        : base(message, paramName)
    {
    }

    public override string Message
    {
        get
        {
            return null;
        }
    }

    public override string StackTrace
    {
        get
        {
            return null;
        }
    }
}

Once the code was modified to throw MyArgumentException instead of ArgumentException, the error message was as follows:

Line 1: Msg 6522, Level 16, State 2:
A .NET Framework error occurred during execution of user-defined routine or aggregate "fn_getPasswordHashCode": MyArgumentException: MyArgumentException: .

So, still pretty ugly - obviously this wasn't the way to go. At this point I was due elsewhere so I emailed Dave back to say I didn't think it was possible, and went on my merry way.

The next day, however, I did some more digging, and found that some people had been using SqlContext.Pipe.ExecuteAndSend(SqlCommand) in order to be able to send better error messages.

So I tried that, in a stored procedure, and it did what I wanted - almost. It returned the error to the client, but also returned the 6522 error afterwards. So I posted a Connect item - Allow SQL CLR Exception messages to be passed more succinctly to the client - and Niels Berglund posted that you can wrap the ExecuteAndSend call in a try {} catch {} block to eat the .NET exception - and then the job's done. I let Dave know, and thought 'I'll write a blog post about that'.
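For reference, here is a minimal sketch of that pattern. The procedure name, the RAISERROR text and the severity are my own illustration rather than anything from Dave's code - the point is simply the ExecuteAndSend call wrapped in a try/catch:

using System.Data.SqlClient;
using System.Data.SqlTypes;
using Microsoft.SqlServer.Server;

public partial class StoredProcedures
{
    [Microsoft.SqlServer.Server.SqlProcedure]
    public static void usp_getPasswordHashCode(SqlString password)
    {
        if (password.IsNull || password.Value.Length == 0)
        {
            using (SqlConnection connection = new SqlConnection("context connection=true"))
            {
                connection.Open();
                using (SqlCommand command = new SqlCommand(
                    "RAISERROR(N'Supplied password cannot be null or empty.', 16, 1);",
                    connection))
                {
                    try
                    {
                        // Sends the error to the client as a normal SQL error...
                        SqlContext.Pipe.ExecuteAndSend(command);
                    }
                    catch (SqlException)
                    {
                        // ...and eating the resulting .NET exception here stops
                        // the wrapping 6522 error from being sent as well.
                    }
                }
            }
            return;
        }

        // Normal work would go here - e.g. computing and sending the hash.
    }
}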

It was only while I was thinking about how to write this very post that I realised that method of returning errors is great for stored procedures - but for user-defined functions the Pipe is not available, meaning you can't ExecuteAndSend, meaning no nice error message goodness.

So, if you want to be able to return nice error messages from the CLR, go vote for that connect item. Then hope.


clr | workarounds

Inserting into a temporary table from a stored procedure with implicit creation

September 7, 2010 10:28 by Matt Whitfield

One thing that I see a lot is people trying to export data from a stored procedure into a temporary table, and have the temporary table created based on the schema of the result set. There is a common solution using OPENROWSET which isn't ideal, but here it is:

sp_configure 'Show Advanced Options', 1
GO
RECONFIGURE
GO
sp_configure 'Ad Hoc Distributed Queries', 1
GO
RECONFIGURE
GO
SELECT * INTO #TempTable
  FROM OPENROWSET('SQLNCLI', 'Server=(local);Trusted_connection=yes;',
                  'EXEC [TestDatabase].[dbo].[stproc_test]')
SELECT * FROM #TempTable

Ok, so that works. Or does it? Does Server=(local) actually get us to the server we want? What if we have multiple instances? Ok, so we could use @@SERVERNAME to build the provider string and... Oh. OPENROWSET won't accept anything but a string literal there - not an expression that evaluates to a string, nor a variable of a string type.

Well, at least we can build up the SQL string to remove the hard-coded reference to the... Oh. OPENROWSET places similar restrictions on the SQL statement to be executed.

All in all, I would say this is a practice to be avoided. You will end up with hard-coded references to both your server and instance name and your database name in code - and, worst of all, they are in a string. Why is that a problem? Because it means that there is no tool that will validate this for you before the SQL gets run for the first time. You can see it now, the operations team move the database onto a new system, and keep the old system live as a GTH plan. The queries continue to run, merrily accessing the old data, making it look (initially at least) like the system is working. Then, a couple of weeks later, somebody who is writing 'an urgent report for the directors' suddenly realises that the data isn't updating, and hasn't been updating since the system move. Cue panic investigation from the operations team, who find the above, then have to try and find all instances of that style of code in the whole database.

Not exactly how to win friends and influence people.
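If you genuinely need the results of a procedure in a temporary table, the boring-but-robust route is to give up the implicit creation: define the table explicitly and use INSERT ... EXEC. A minimal sketch (the column list is hypothetical - it has to match whatever [stproc_test] actually returns):

CREATE TABLE #Results
(
    [id]   INT           NOT NULL,
    [name] NVARCHAR(128) NULL
);

INSERT INTO #Results ([id], [name])
EXEC [dbo].[stproc_test];

SELECT * FROM #Results;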


t-sql

Procedure and Trigger Metadata

May 19, 2010 20:11 by Matt Whitfield

There are a couple of extra views which give us some more information about SQL Modules. They relate to specific types of SQL Modules, namely procedures and triggers.

Let's look at [sys].[procedures] first, which gives us some more information about procedures, over and above the information in [sys].[objects]:

is_auto_executed - This is 1 if the procedure is marked for auto-execution at server startup. Note that only procedures in the master database can receive this flag. To mark a procedure for startup execution, we use sp_procoption. For example, to set a procedure called sp_my_startup_procedure to run at startup, we would use the following:

exec sp_procoption
     N'sp_my_startup_procedure',
     'startup',
     'on'

is_execution_replicated - This is 1 if the procedure's execution is replicated. This is achieved by using the proc exec value for the @type parameter of sp_addarticle.
is_repl_serializable_only - This is 1 if the procedure's execution is replicated, but only when the transaction is serializable. This is achieved by using the serializable proc exec value for the @type parameter of sp_addarticle.
skips_repl_constraints - This is 1 if the procedure skips constraints marked NOT FOR REPLICATION, usually true only on subscribers.
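A quick illustrative query (my own, nothing special) to see those flags for every procedure in the current database:

SELECT  [p].[name],
        [p].[is_auto_executed],
        [p].[is_execution_replicated],
        [p].[is_repl_serializable_only],
        [p].[skips_repl_constraints]
FROM    [sys].[procedures] AS [p]
ORDER BY [p].[name];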

So, not really much extra information about procedures. Let's look at [sys].[triggers] which gives us information about triggers, and does not inherit columns from [sys].[objects]:

name - This is the name of the trigger.
object_id - This is the object ID of the trigger. Both DML and DDL triggers are assigned object IDs, even though DDL triggers are not visible in [sys].[objects].
parent_class - This is the parent class of the trigger - specifying whether the trigger is a database parented trigger (for DDL triggers) or an object parented trigger (for DML triggers). It will be 0 for DDL triggers and 1 for DML triggers.
parent_class_desc - This is the description that relates to the value in the parent_class column. The current values are 'DATABASE' or 'OBJECT_OR_COLUMN'.
parent_id - This is the ID of the parent object (table or view) for normal DML triggers. For DDL triggers, it will always be 0.
type - This is the type code of the trigger, the same as it would be in [sys].[objects]. The trigger type codes are 'TA' for an assembly trigger, or 'TR' for a SQL trigger.
type_desc - This is the description that relates to the value in the type column. Currently the values are either 'CLR_TRIGGER' or 'SQL_TRIGGER'.
create_date - This is the date on which the trigger was created.
modify_date - This is the date on which the trigger was last modified by using an ALTER statement.
is_ms_shipped - This is 1 if the trigger was created by an internal process, or if the trigger was created as part of the SQL Server installation.
is_disabled - This is 1 if the trigger has been disabled by using a DISABLE TRIGGER statement. It can be re-enabled by using an ENABLE TRIGGER statement.
is_not_for_replication - This is 1 if the trigger was created with the NOT FOR REPLICATION option, meaning that it will not fire on subscribers.
is_instead_of_trigger - This is 1 if the trigger is an INSTEAD OF trigger. These can be useful for creating complex views that can be updated, inserted into or deleted from using normal DML.
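Again, an illustrative query to pull the more interesting of those columns for the DML triggers in the current database:

SELECT  [t].[name],
        OBJECT_NAME([t].[parent_id]) AS [parent_object],
        [t].[type_desc],
        [t].[is_disabled],
        [t].[is_not_for_replication],
        [t].[is_instead_of_trigger]
FROM    [sys].[triggers] AS [t]
WHERE   [t].[parent_class] = 1
ORDER BY [t].[name];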

There seems to be quite a lot of information about triggers, but, in effect we're not told that much. We simply find out if the trigger is marked NOT FOR REPLICATION, whether it is enabled and whether it is an INSTEAD OF or AFTER trigger.

One thing that we still don't know about triggers is what events they fire on. For example - does a particular DML trigger fire on an INSERT, an UPDATE, a DELETE or a combination of the three? For the answer, we have to look to another catalog view, [sys].[trigger_events]:

object_id - This is the object_id of the trigger for which the event is defined. You may find multiple rows for a single object_id for triggers that fire on more than one event.
type - This is the type number of the event that causes the trigger to fire. Triggers that fire on multiple events have multiple rows, with each row specifying a different type value.
type_desc - This is the description that relates to the value in the type column.
event_group_type - This is the event group identifier that tells us which event group the trigger was created on. If the trigger was not created on an event group, then this is null.
event_group_type_desc - This is the description that relates to the value in the event_group_type column.
is_first - This is 1 if the trigger is marked as the first trigger to fire for the specified event, using the sp_settriggerorder procedure.
is_last - This is 1 if the trigger is marked as the last trigger to fire for the specified event, using the sp_settriggerorder procedure.
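So, joining the two views gives us the full picture of which events each DML trigger fires on - an illustrative query:

SELECT  [t].[name]                   AS [trigger_name],
        OBJECT_NAME([t].[parent_id]) AS [parent_object],
        [te].[type_desc]             AS [event],
        [te].[is_first],
        [te].[is_last]
FROM    [sys].[triggers] AS [t]
JOIN    [sys].[trigger_events] AS [te]
          ON [te].[object_id] = [t].[object_id]
WHERE   [t].[parent_class] = 1
ORDER BY [trigger_name], [event];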

We now have all the information we need about triggers. A note, though, about the is_first and is_last columns - a lot of people mistakenly believe that you can actually specify the order in which multiple triggers fire. You can't. You can just specify that a trigger must execute first for a particular event, or that a trigger must execute last for a particular event, and not the explicit order of any other triggers in between. However, if you run across this problem, then you probably have bigger issues, as having more than 3 triggers on the same event type for the same parent object would not be a brilliant idea, performance-wise.
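For completeness, marking a trigger to fire first for a given event looks like this (the trigger name is made up):

EXEC sp_settriggerorder
     @triggername = N'[dbo].[trg_orders_insert]',
     @order       = 'First',
     @stmttype    = 'INSERT';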

Next time we'll look at some of the metadata that surrounds the CLR objects, as well as the assemblies that contain them.


metadata

SQL Module metadata

May 11, 2010 21:38 by Matt Whitfield

So what are SQL modules? Simply put, SQL modules are objects in the database that are programmed in SQL. This isn't entirely accurate, as there are a few SQL programmable object types that don't fall under the 'sql module' banner. These are:

  • Computed columns
  • Check constraints
  • Default constraints

So, effectively, that leaves us with stored procedures, replication filter procedures, the three types of functions (scalar, in-line table-valued and multi-statement table-valued) as well as both DML and DDL triggers. Also, the legacy 'standalone rule' and 'standalone default' object definitions count as SQL modules. However, the less said about them the better, frankly! The meta-data for these objects is stored in the system view [sys].[sql_modules] - so let's look at what we're offered:

object_id - This is the ID of the object to which this definition applies. Unlike previous versions of SQL Server, there will be only one entry per object ID, since the entire definition can be stored in an nvarchar(MAX) field.
definition - This is the actual text that represents the module at the time that it was created. Note that this may not be the same as the text needed to re-create an object, because name changes effected with sp_rename are not reflected in this definition. Client-side code, whether in SSMS, our very own sql editor or any other editor, is responsible for replacing the old object name with the new. Some (SSMS included) fail to re-create the object correctly under various circumstances (for example, if comments are placed between the schema and object names), so tread with care if using this definition to recreate objects. This column will show as NULL if the module was created using the WITH ENCRYPTION option.
uses_ansi_nulls - This is 1 if SET ANSI_NULLS was ON when the module was created.
uses_quoted_identifier - This is 1 if SET QUOTED_IDENTIFIER was ON when the module was created.
is_schema_bound - This is 1 if the module is schema bound - i.e. was defined with the WITH SCHEMABINDING option. Note that this is only relevant to views and functions - procedures and triggers cannot be schema bound.
uses_database_collation - This is 1 if there is a dependency between the collation of the database and the sql module, where the sql module is also schema-bound. This type of dependency means that the database default collation cannot be changed.
is_recompiled - This is 1 if the module was created with the WITH RECOMPILE option.
null_on_null_input - This is 1 if the module will produce a null result on null input. This applies only to functions.
execute_as_principal_id - This is the ID of the database principal specified in the WITH EXECUTE AS clause. If there is no WITH EXECUTE AS clause, then this will be NULL. Explicitly specifying WITH EXECUTE AS CALLER will also result in this column being NULL. Specifying EXECUTE AS OWNER will result in the value being -2. For EXECUTE AS [database_principal_name] or EXECUTE AS SELF, this will be the ID of the relevant database principal in [sys].[database_principals].
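Putting that together, an illustrative query that pulls the definition and the more useful flags for every module in the current database might look like this:

SELECT  OBJECT_SCHEMA_NAME([sm].[object_id]) AS [schema_name],
        OBJECT_NAME([sm].[object_id])        AS [module_name],
        [o].[type_desc],
        [sm].[is_schema_bound],
        [sm].[uses_ansi_nulls],
        [sm].[uses_quoted_identifier],
        [sm].[definition]
FROM    [sys].[sql_modules] AS [sm]
JOIN    [sys].[objects]     AS [o]
          ON [o].[object_id] = [sm].[object_id]
ORDER BY [schema_name], [module_name];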

So we get nearly enough to create the script for all SQL modules from this one view - the major exception being the correction of object names for sql modules that have had their name changed using sp_rename. Interestingly, if a procedure is altered using ALTER PROCEDURE, then the definition in [sys].[sql_modules] will still contain the CREATE keyword.
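The sp_rename behaviour is easy to demonstrate with a throwaway procedure (the names here are made up):

CREATE PROCEDURE [dbo].[usp_old_name]
AS
    SELECT 1;
GO

EXEC sp_rename N'[dbo].[usp_old_name]', N'usp_new_name';
GO

-- The stored definition still refers to [dbo].[usp_old_name]
SELECT  [definition]
FROM    [sys].[sql_modules]
WHERE   [object_id] = OBJECT_ID(N'[dbo].[usp_new_name]');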

About system stored procedures

System stored procedures are not magic, they are just SQL modules. Their definition can be found in [sys].[all_sql_modules], in exactly the same format as for [sys].[sql_modules]. Ok, so they are slightly magic - they often reference internal objects that you simply don't have access to as a normal SQL user. But looking at the source of system stored procedures can be a great way to learn more about the internals of SQL Server, as well as giving you the basis for creating your own customised versions.
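For example, to read the source of a system procedure (picking sp_spaceused purely as an example):

SELECT  [definition]
FROM    [sys].[all_sql_modules]
WHERE   [object_id] = OBJECT_ID(N'sys.sp_spaceused');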

Next time we'll look at some of the extra meta-data that is available around SQL modules, specifically triggers, stored procedures and replication filter procedures.


metadata
