Another step in the road

October 4, 2012 08:49 by Matt Whitfield

So, I haven't really updated the blog a lot. At all. But I have been busy.

When Atlantis initially started, one of the long-term goals was to wrap all the Atlantis tools into a single IDE which brought them together seamlessly. That would have really hit the 'be integrated' part of the strap-line.

Well, now they are. I'm pleased to announce that Schema Inspector, Data Inspector, Data Surf, Schema Surf and Data Space Analyzer (now with a z :-) ) are available as DBA xPress from Pragmatic Works. Currently DBA xPress is available as part of BI xPress Pro.

Most of you in the community will already know that I started working with Pragmatic Works at the beginning of the year. They were interested in acquiring the functionality provided by the Atlantis tools - but also in the look and feel of the Atlantis applications, as they wanted to improve the look and feel of their own products. So, for the last 10 months I have been working on creating an IDE for Pragmatic Works with goals very similar to my own long-term goals for the Atlantis toolset.

I'm really happy that this step has finally come to the announcement stage - so please check out DBA xPress here.

Long time, no blog

November 9, 2011 22:17 by Matt Whitfield

Ok, so it's been a while.

We've just updated the Atlantis site to remove the necessity to log in before downloading the applications. I'm a bit sad about it, if I'm honest, because of two things:

  • I liked seeing people go past, downloading the tools and getting value from something I'd made. Obviously quite a few people would put in 'asdf afds' or similar as their name, but others would put in fake names that were funny - which actually made me smile anyway.
  • Every registration had a referral code generated for it from two words in the English dictionary - and those were also a fairly constant source of amusement. Examples include 'HopelessWeightlifter', 'TwerpGesticulating', 'InfidelityWorshippers', 'IntangibleHemorrhoid' - and those are just since the start of November. The all-time great would have to be 'RandyGrannies'.

However, with a full-time job that I'm very committed to, and the birth of our third child, the idea of paying the £400 to renew the SSL certificate didn't seem like a good use of money. The hosting is up for renewal in February, so things may well change again around that time.

Another change is that I will no longer be offering support. Previously I have been committed to answering people's emails and forum posts at various times in the day, no matter how trivial. Most of the time, people don't even take the time to simply reply and say 'Thank you'. Also, a considerable number of people simply can't be bothered to RTFM or to use the software for the five minutes it would take to find their answer. It came down to a simple choice - do I want to spend time with my children or reply to support emails? No prizes for guessing which won.

That's not to say, of course, that I won't answer emails at all. I will. But only if you're polite.

In terms of the applications, I will still be updating them as and when necessary. I won't be supporting Denali before its release, but you can expect updated versions of all the applications around the time it ships. I remain open to receiving feature requests etc.

What's new in Denali CTP3, DMV-wise?

August 7, 2011 22:33 by Matt Whitfield

Ok, so to research what's new in Denali CTP3 - and therefore what Schema Inspector has to cope with - I've had to look at the DMV differences between CTP3 and 2008. So, here's a summary of those differences. I'm not going to replicate BOL here, just provide a simple lookup for 'what's new?'.
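
If you want to pull the raw lists yourself, running something along these lines on each instance and then diffing the two result sets externally will get you there - just a sketch of the approach, using the sys.all_objects and sys.all_columns catalog views:

SELECT QUOTENAME([s].[name]) + '.' + QUOTENAME([o].[name]) AS [object],
       QUOTENAME([c].[name]) AS [column]
FROM   [sys].[all_objects] AS [o]
       INNER JOIN [sys].[schemas] AS [s]
       ON [s].[schema_id] = [o].[schema_id]
       LEFT JOIN [sys].[all_columns] AS [c]
       ON [c].[object_id] = [o].[object_id]
WHERE  [s].[name] = 'sys'
       AND [o].[type] = 'V'
ORDER  BY [object], [column];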

So, DMVs that are in Denali CTP3 but not 2008:

  • [sys].[availability_group_listener_ip_addresses]
  • [sys].[availability_group_listeners]
  • [sys].[availability_groups]
  • [sys].[availability_read_only_routing_lists]
  • [sys].[availability_replicas]
  • [sys].[column_store_dictionaries]
  • [sys].[column_store_index_stats]
  • [sys].[column_store_segments]
  • [sys].[database_filestream_options]
  • [sys].[dm_db_fts_index_physical_stats]
  • [sys].[dm_db_log_space_usage]
  • [sys].[dm_db_uncontained_entities]
  • [sys].[dm_filestream_non_transacted_handles]
  • [sys].[dm_fts_semantic_similarity_population]
  • [sys].[dm_hadr_auto_page_repair]
  • [sys].[dm_hadr_availability_group_states]
  • [sys].[dm_hadr_availability_replica_states]
  • [sys].[dm_hadr_cluster]
  • [sys].[dm_hadr_cluster_members]
  • [sys].[dm_hadr_cluster_networks]
  • [sys].[dm_hadr_database_replica_cluster_states]
  • [sys].[dm_hadr_database_replica_states]
  • [sys].[dm_hadr_instance_node_map]
  • [sys].[dm_hadr_name_id_map]
  • [sys].[dm_logpool_hashentries]
  • [sys].[dm_logpool_stats]
  • [sys].[dm_os_cluster_properties]
  • [sys].[dm_os_memory_broker_clerks]
  • [sys].[dm_os_server_diagnostics_log_configurations]
  • [sys].[dm_os_windows_info]
  • [sys].[dm_server_memory_dumps]
  • [sys].[dm_server_registry]
  • [sys].[dm_server_services]
  • [sys].[dm_tcp_listener_states]
  • [sys].[filetable_system_defined_objects]
  • [sys].[filetables]
  • [sys].[fulltext_semantic_language_statistics_database]
  • [sys].[fulltext_semantic_languages]
  • [sys].[registered_search_properties]
  • [sys].[registered_search_property_lists]
  • [sys].[sequences]

DMVs that are in 2008 but not Denali CTP3:

  • [sys].[database_principal_aliases]

DMV columns that are in Denali CTP3 but not in 2008:

  • [sys].[change_tracking_databases].[max_cleanup_version]
  • [sys].[data_spaces].[is_system]
  • [sys].[database_principals].[authentication_type]
  • [sys].[database_principals].[authentication_type_desc]
  • [sys].[database_principals].[default_language_lcid]
  • [sys].[database_principals].[default_language_name]
  • [sys].[databases].[containment]
  • [sys].[databases].[containment_desc]
  • [sys].[databases].[default_fulltext_language_lcid]
  • [sys].[databases].[default_fulltext_language_name]
  • [sys].[databases].[default_language_lcid]
  • [sys].[databases].[default_language_name]
  • [sys].[databases].[group_database_id]
  • [sys].[databases].[is_nested_triggers_on]
  • [sys].[databases].[is_transform_noise_words_on]
  • [sys].[databases].[replica_id]
  • [sys].[databases].[target_recovery_time_in_seconds]
  • [sys].[databases].[two_digit_year_cutoff]
  • [sys].[dm_clr_appdomains].[compatibility_level]
  • [sys].[dm_clr_appdomains].[survived_memory_kb]
  • [sys].[dm_clr_appdomains].[total_allocated_memory_kb]
  • [sys].[dm_clr_appdomains].[total_processor_time_ms]
  • [sys].[dm_database_encryption_keys].[encryptor_type]
  • [sys].[dm_db_file_space_usage].[allocated_extent_page_count]
  • [sys].[dm_db_file_space_usage].[filegroup_id]
  • [sys].[dm_db_file_space_usage].[total_page_count]
  • [sys].[dm_db_task_space_usage].[is_remote_task]
  • [sys].[dm_db_task_space_usage].[task_address]
  • [sys].[dm_exec_cached_plans].[parent_plan_handle]
  • [sys].[dm_exec_query_stats].[last_rows]
  • [sys].[dm_exec_query_stats].[max_rows]
  • [sys].[dm_exec_query_stats].[min_rows]
  • [sys].[dm_exec_query_stats].[total_rows]
  • [sys].[dm_exec_sessions].[authenticating_database_id]
  • [sys].[dm_exec_sessions].[database_id]
  • [sys].[dm_exec_sessions].[open_transaction_count]
  • [sys].[dm_os_buffer_descriptors].[read_microsec]
  • [sys].[dm_os_cluster_nodes].[is_current_owner]
  • [sys].[dm_os_cluster_nodes].[status]
  • [sys].[dm_os_cluster_nodes].[status_description]
  • [sys].[dm_os_memory_cache_counters].[pages_in_use_kb]
  • [sys].[dm_os_memory_cache_counters].[pages_kb]
  • [sys].[dm_os_memory_cache_entries].[pages_kb]
  • [sys].[dm_os_memory_clerks].[page_size_in_bytes]
  • [sys].[dm_os_memory_clerks].[pages_kb]
  • [sys].[dm_os_memory_nodes].[cpu_affinity_mask]
  • [sys].[dm_os_memory_nodes].[foreign_committed_kb]
  • [sys].[dm_os_memory_nodes].[online_scheduler_mask]
  • [sys].[dm_os_memory_nodes].[pages_kb]
  • [sys].[dm_os_memory_nodes].[processor_group]
  • [sys].[dm_os_memory_objects].[max_pages_in_bytes]
  • [sys].[dm_os_memory_objects].[pages_in_bytes]
  • [sys].[dm_os_nodes].[online_scheduler_mask]
  • [sys].[dm_os_nodes].[processor_group]
  • [sys].[dm_os_sys_info].[affinity_type]
  • [sys].[dm_os_sys_info].[affinity_type_desc]
  • [sys].[dm_os_sys_info].[committed_kb]
  • [sys].[dm_os_sys_info].[committed_target_kb]
  • [sys].[dm_os_sys_info].[physical_memory_kb]
  • [sys].[dm_os_sys_info].[process_kernel_time_ms]
  • [sys].[dm_os_sys_info].[process_user_time_ms]
  • [sys].[dm_os_sys_info].[time_source]
  • [sys].[dm_os_sys_info].[time_source_desc]
  • [sys].[dm_os_sys_info].[virtual_machine_type]
  • [sys].[dm_os_sys_info].[virtual_machine_type_desc]
  • [sys].[dm_os_sys_info].[virtual_memory_kb]
  • [sys].[dm_os_sys_info].[visible_target_kb]
  • [sys].[dm_os_threads].[processor_group]
  • [sys].[dm_os_worker_local_storage].[federatedxact_address]
  • [sys].[dm_os_worker_local_storage].[query_driver_address]
  • [sys].[dm_os_workers].[processor_group]
  • [sys].[dm_tran_session_transactions].[open_transaction_count]
  • [sys].[filegroups].[is_system]
  • [sys].[fulltext_index_columns].[statistical_semantics]
  • [sys].[fulltext_indexes].[property_list_id]
  • [sys].[partition_functions].[is_system]
  • [sys].[partition_schemes].[is_system]
  • [sys].[server_audits].[predicate]
  • [sys].[server_event_session_events].[predicate_xml]
  • [sys].[server_file_audits].[max_files]
  • [sys].[server_file_audits].[predicate]
  • [sys].[server_principals].[is_fixed_role]
  • [sys].[server_principals].[owning_principal_id]
  • [sys].[server_trigger_events].[is_trigger_event]
  • [sys].[service_queues].[is_poison_message_handling_enabled]
  • [sys].[stats].[is_temporary]
  • [sys].[system_internals_partition_columns].[default_value]
  • [sys].[system_internals_partition_columns].[has_default]
  • [sys].[system_internals_partitions].[is_sereplicated]
  • [sys].[tables].[is_filetable]
  • [sys].[trigger_events].[is_trigger_event]

DMV columns that are in 2008 but not in Denali CTP3:

  • [sys].[dm_os_memory_cache_counters].[multi_pages_in_use_kb]
  • [sys].[dm_os_memory_cache_counters].[multi_pages_kb]
  • [sys].[dm_os_memory_cache_counters].[single_pages_in_use_kb]
  • [sys].[dm_os_memory_cache_counters].[single_pages_kb]
  • [sys].[dm_os_memory_cache_entries].[pages_allocated_count]
  • [sys].[dm_os_memory_clerks].[multi_pages_kb]
  • [sys].[dm_os_memory_clerks].[page_size_bytes]
  • [sys].[dm_os_memory_clerks].[single_pages_kb]
  • [sys].[dm_os_memory_nodes].[multi_pages_kb]
  • [sys].[dm_os_memory_nodes].[single_pages_kb]
  • [sys].[dm_os_memory_objects].[max_pages_allocated_count]
  • [sys].[dm_os_memory_objects].[pages_allocated_count]
  • [sys].[dm_os_sys_info].[bpool_commit_target]
  • [sys].[dm_os_sys_info].[bpool_committed]
  • [sys].[dm_os_sys_info].[bpool_visible]
  • [sys].[dm_os_sys_info].[physical_memory_in_bytes]
  • [sys].[dm_os_sys_info].[virtual_memory_in_bytes]
  • [sys].[dm_os_workers].[locale]

So, some neat stuff added. I'm not surprised to see database_principal_aliases disappear, but I was a little surprised to see some of the dm_os* stuff disappear... Interesting, Mr. Bond, very interesting.

A generic list for passing table-valued parameters to SQL Server

May 21, 2011 21:11 by Matt Whitfield

In my current day job, one of the things I need to do is load log files from devices into SQL Server. Nothing out of the ordinary, but the data volume made for an interesting time - because each log file has maybe 100 or 200 rows. So not the sort of volume that you'd like to bulk load, and definitely not the sort of volume that you'd want to insert with individual insert statements. Because we're using SQL Server 2008 I thought that table-valued parameters would be the obvious choice.

So I read up on it, and became concerned after reading Bob Beauchemin's blog at SQLskills about TVP use and plan compilation. That had me worried, so I was pleased to find, when I checked again, that he had blogged about it again - this time with more positive news.

So I decided to implement a solution for data loading based on the use of TVPs. Researching that some more, I found a blog post by Leonard Lobel about passing the data in. His method of explicitly implementing IEnumerable<SqlDataRecord> fitted almost exactly with the way I wanted to pass the data to SQL Server. So I sat down at home and came up with the following utility class (yes, I am sad and write day-job code in the evenings!) which I thought would be good to share with you...

The idea is that it's a simple wrapper around List<T> that provides the explicit implementation of IEnumerable<SqlDataRecord> while remaining generic in and of itself. This requires that the classes used with the list are able to represent themselves as a SqlDataRecord object - which the list enforces through a generic type constraint.

So here's the interface:

using System;
using Microsoft.SqlServer.Server;

namespace TVPTest
{
    /// <summary>
    /// IDataRecordTransformable is the interface to an object that can
    /// transform itself into a SqlDataRecord, for use with the
    /// <see cref="SqlDataRecordList&lt;T&gt;"/> class in order to load
    /// a data stream into a table-valued parameter
    /// </summary>
    public interface IDataRecordTransformable
    {
        /// <summary>
        /// Creates a SqlDataRecord object that represents the meta-data
        /// of each table row
        /// </summary>
        /// <returns>A SqlDataRecord object</returns>
        SqlDataRecord CreateDataRecord();

        /// <summary>
        /// Populates an existing SqlDataRecord object with the values
        /// from the current row
        /// </summary>
        /// <param name="dataRecord">The SqlDataRecord to populate</param>
        void PopulateDataRecord(SqlDataRecord dataRecord);
    }
}

So we have two methods: CreateDataRecord, which creates the SqlDataRecord object and populates it with column meta-data, and PopulateDataRecord, which is passed the existing SqlDataRecord and is expected to call the various Set methods to set the row's values.
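
By way of illustration, here's what an implementing class might look like - a made-up LogEntry type, with column names and types that are illustrative only:

using System;
using System.Data;
using Microsoft.SqlServer.Server;

namespace TVPTest
{
    /// <summary>
    /// A hypothetical log row that knows how to represent itself
    /// as a SqlDataRecord
    /// </summary>
    public class LogEntry : IDataRecordTransformable
    {
        public DateTime LoggedAt { get; set; }
        public string Message { get; set; }

        /// <summary>
        /// Creates the SqlDataRecord that defines the row meta-data
        /// </summary>
        public SqlDataRecord CreateDataRecord()
        {
            return new SqlDataRecord(
                new SqlMetaData("LoggedAt", SqlDbType.DateTime),
                new SqlMetaData("Message", SqlDbType.NVarChar, 4000));
        }

        /// <summary>
        /// Copies this row's values into the SqlDataRecord
        /// </summary>
        public void PopulateDataRecord(SqlDataRecord dataRecord)
        {
            dataRecord.SetDateTime(0, LoggedAt);
            dataRecord.SetString(1, Message);
        }
    }
}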

Let's have a look at the list class itself then...

using System;
using System.Collections.Generic;
using Microsoft.SqlServer.Server;

namespace TVPTest
{
    /// <summary>
    /// SqlDataRecordList represents a list of objects that are
    /// <see cref="IDataRecordTransformable"/>, and therefore can be used
    /// to represent the data passed to a table-valued parameter
    /// </summary>
    /// <typeparam name="T">The type of object in the list</typeparam>
    public class SqlDataRecordList<T> : List<T>, IEnumerable<SqlDataRecord>
        where T : IDataRecordTransformable
    {
        /// <summary>
        /// The enumerator for the SqlDataRecord list, used with
        /// SqlDbType.Structured
        /// </summary>
        /// <returns>An IEnumerator object to enumerate the
        /// SqlDataRecord list</returns>
        IEnumerator<SqlDataRecord> IEnumerable<SqlDataRecord>.GetEnumerator()
        {
            // declare the data record that we will populate
            SqlDataRecord dataRecord = null;

            // if we have entries
            if (this.Count > 0)
            {
                // create the meta-data based on the first entry
                dataRecord = this[0].CreateDataRecord();

                // and yield return each record
                foreach (T entry in this)
                {
                    // populate it
                    entry.PopulateDataRecord(dataRecord);

                    // and yield it for IEnumerable
                    yield return dataRecord;
                }
            }
            else
            {
                // no elements present
                yield break;
            }
        }
    }
}

No particular rocket science here. Just a generic class with a type constraint, implementing the enumerator using the yield return and yield break keywords. Job done.
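
And here's a sketch of how you might actually pass the list to SQL Server, using the LogEntry class from above - note that the table type [dbo].[LogEntryType] and the procedure [dbo].[LoadLogEntries] are hypothetical names, so substitute your own:

using System.Data;
using System.Data.SqlClient;

namespace TVPTest
{
    public static class LogLoader
    {
        /// <summary>
        /// Loads a batch of log entries via a table-valued parameter
        /// </summary>
        public static void Load(string connectionString, SqlDataRecordList<LogEntry> entries)
        {
            // ADO.NET rejects an empty enumeration of SqlDataRecords,
            // so bail out early if there is nothing to load
            if (entries.Count == 0)
            {
                return;
            }

            using (SqlConnection connection = new SqlConnection(connectionString))
            using (SqlCommand command = new SqlCommand("[dbo].[LoadLogEntries]", connection))
            {
                command.CommandType = CommandType.StoredProcedure;

                // pass the list as a structured (table-valued) parameter
                SqlParameter parameter = command.Parameters.AddWithValue("@entries", entries);
                parameter.SqlDbType = SqlDbType.Structured;
                parameter.TypeName = "[dbo].[LogEntryType]";

                connection.Open();
                command.ExecuteNonQuery();
            }
        }
    }
}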

Any questions?

T-SQL Tuesday #18 - CTEs - The permission hierarchy problem

May 10, 2011 00:01 by Matt Whitfield

Recently, I was looking at a problem which involved an arbitrary tree structure for permissions. The idea was similar to NTFS security, in that the permission that was defined furthest from the root of the tree would apply to any items below it.

So, consider the following structure:

Users

  • Bob
  • James
  • Tim

Roles

  • Administrator
  • Reporter
  • Engineer

Containers

Company A
    North
    South
    East
    West
        RecursiveTown
        Tableton

So, Bob is the manager at Company A. He gets the admin role at all levels. James is the manager for Company A/North. He gets the admin role for Company A/North, but he is allowed to run reports on Company A's other regions. Tim is an engineer for RecursiveTown - and he just has the engineer role for that town.

Let's have some (very) basic table structures that meet our needs, data-wise:

CREATE TABLE [dbo].[Users] (
    ID     INT IDENTITY (1, 1) NOT NULL PRIMARY KEY CLUSTERED,
    [Name] NVARCHAR (50)
);

CREATE TABLE [dbo].[Roles] (
    ID     INT IDENTITY (1, 1) NOT NULL PRIMARY KEY CLUSTERED,
    [Name] NVARCHAR (50)
);

CREATE TABLE [dbo].[Containers] (
    ID                INT IDENTITY (1, 1) NOT NULL PRIMARY KEY CLUSTERED,
    ParentContainerID INT FOREIGN KEY REFERENCES [dbo].[Containers] (ID)
                          ON DELETE NO ACTION ON UPDATE NO ACTION,
    [Name]            NVARCHAR (50)
);

CREATE TABLE [dbo].[UserContainerRoles] (
    ID          INT IDENTITY (1, 1) PRIMARY KEY CLUSTERED,
    ContainerID INT NOT NULL FOREIGN KEY REFERENCES [dbo].[Containers] ([ID])
                    ON DELETE NO ACTION ON UPDATE NO ACTION,
    RoleID      INT NOT NULL FOREIGN KEY REFERENCES [dbo].[Roles] ([ID])
                    ON DELETE NO ACTION ON UPDATE NO ACTION,
    UserID      INT NOT NULL FOREIGN KEY REFERENCES [dbo].[Users] ([ID])
                    ON DELETE NO ACTION ON UPDATE NO ACTION
);

Simple... Let's look at the data that might be within them:

Users

ID   Name
1    Bob
2    James
3    Tim

Roles

ID   Name
1    Administrator
2    Reporter
3    Engineer

Containers

ID   ParentContainerID   Name
1    <NULL>              Company A
2    1                   North
3    1                   South
4    1                   East
5    1                   West
6    5                   RecursiveTown
7    5                   Tableton

UserContainerRoles

ID   ContainerID   RoleID   UserID
1    1             1        1
2    1             2        2
3    2             1        2
4    6             3        3
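
If you want to play along at home, here's a quick script to load that data - it assumes freshly created, empty tables, so the IDENTITY values come out exactly as shown above:

INSERT INTO [dbo].[Users] ([Name])
VALUES ('Bob'), ('James'), ('Tim');

INSERT INTO [dbo].[Roles] ([Name])
VALUES ('Administrator'), ('Reporter'), ('Engineer');

INSERT INTO [dbo].[Containers] (ParentContainerID, [Name])
VALUES (NULL, 'Company A'), (1, 'North'), (1, 'South'), (1, 'East'),
       (1, 'West'), (5, 'RecursiveTown'), (5, 'Tableton');

INSERT INTO [dbo].[UserContainerRoles] (ContainerID, RoleID, UserID)
VALUES (1, 1, 1), (1, 2, 2), (2, 1, 2), (6, 3, 3);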

So - how can we, succinctly, get a list of the containers and roles that apply to a particular user?

Enter the recursive CTE. Now, recursive CTEs can be a bit interesting. They can also suck, performance-wise. But, for smaller data-access patterns, such as this one, they can provide some very flexible solutions to some problems that previously were not easy to solve at all.

Recursive CTEs follow a simple pattern, but it can seem a bit daunting at first. They all follow this pattern:

Anchor
UNION ALL
Recursor

The Anchor is the statement that returns the basic result set - the first level of recursion. This is the result set that we start with. In the example above, it is the containers to which the user has been given a direct role assignment through the UserContainerRoles table.

The Recursor is the interesting part - it's a query that references the CTE itself, and provides the next level of the result set. Very nice, but what happens next? Well, the result set that was last returned by the recursor is fed back into the recursor - and this loop continues until either the recursor returns no more rows, or the MAXRECURSION limit is reached.
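
To make that concrete before we get to the real thing, here's about the most trivial recursive CTE possible - it just counts to five, with the first SELECT as the anchor and the second as the recursor:

WITH   Numbers (N)
AS     (SELECT 1          -- anchor: the starting row
        UNION ALL
        SELECT [N] + 1    -- recursor: references the CTE itself
        FROM   [Numbers]
        WHERE  [N] < 5)
SELECT [N] FROM [Numbers];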

Now I'm assuming that the fact that you're still reading means you probably just want to see the code. Who am I to stand in your way?

CREATE FUNCTION [dbo].[fn_GetContainerRolesForUserID] (@UserID INT)
RETURNS TABLE
AS
RETURN
(
    WITH   Hierarchy (ContainerID, RoleID, HierarchyLevel)
    AS     (SELECT [c].[ID],
                   [ucr].[RoleID],
                   1 AS HierarchyLevel
            FROM   [dbo].[Containers] AS [c]
                   INNER JOIN [dbo].[UserContainerRoles] AS [ucr]
                   ON [ucr].[ContainerID] = [c].[ID]
            WHERE  [ucr].[UserID] = @UserID
            UNION ALL
            SELECT [c].[ID],
                   [h].[RoleID],
                   [h].[HierarchyLevel] + 1 AS HierarchyLevel
            FROM   [dbo].[Containers] AS [c]
                   INNER JOIN [Hierarchy] AS [h]
                   ON [c].[ParentContainerID] = [h].[ContainerID])
    SELECT [ContainerID],
           [Name],
           [RoleID]
    FROM   (SELECT [ContainerID],
                   [Name],
                   [RoleID],
                   [HierarchyLevel],
                   ROW_NUMBER() OVER (PARTITION BY [h].[ContainerID]
                                      ORDER BY [HierarchyLevel]) AS __RN
            FROM   [Hierarchy] AS [h]
                   INNER JOIN [dbo].[Containers] AS [c]
                   ON [h].[ContainerID] = [c].[ID]) AS iDat
    WHERE  [__RN] = 1
)

The only non-obvious thing about this query is the use of the ROW_NUMBER function. This is what causes a permission defined at a lower level of the tree to take precedence over one defined at a higher level: HierarchyLevel increases each time a row passes through the recursor section of the query, so we choose the container role with the lowest hierarchy level for each container.

So, let's look at each of our results:

SELECT [ContainerID], [Name], [RoleID]
FROM   [dbo].[fn_GetContainerRolesForUserID](1)

ContainerID   Name            RoleID
1             Company A       1
2             North           1
3             South           1
4             East            1
5             West            1
6             RecursiveTown   1
7             Tableton        1

Ok, so Bob has the admin role at all levels - just what we wanted to see...

SELECT [ContainerID], [Name], [RoleID]
FROM   [dbo].[fn_GetContainerRolesForUserID](2)

ContainerID   Name            RoleID
1             Company A       2
2             North           1
3             South           2
4             East            2
5             West            2
6             RecursiveTown   2
7             Tableton        2

James has the Admin role for North, but the Reporter role everywhere else - again, just what we want...

SELECT [ContainerID], [Name], [RoleID]
FROM   [dbo].[fn_GetContainerRolesForUserID](3)

ContainerID   Name            RoleID
6             RecursiveTown   3

This just shows that Tim has the Engineer role at RecursiveTown only - excellent...

Simple, right? Happy T-SQL Tuesday!

Feedback on the Schema Engine OSS release

March 27, 2011 20:46 by Matt Whitfield

A few weeks back, I decided to open source Schema Engine, which is the core of the Atlantis product set. I was really hoping that I would get some good feedback on where to take it, see it used in some new and interesting ways, and pick up some good ideas for features.

One person in particular, Justin Dearing (blog | twitter) has come up with some excellent ideas. He sent me an email saying he had some ideas for features, and I thought he was talking about the 'change the formatting of this DDL' level of feature. I was surprised (pleasantly, I might add) to see that he had thought about it at a much higher level - and some of his thoughts mirror where I had it going anyway (for example, Postgres support was something I had always wanted to do). But some of his other ideas are really cool - so check out his blog post on them, and leave feedback.

I have to say, I thought more people would have been interested in this release, and I am sure there are a fair few SQL people out there who could make use of this sort of functionality. Hopefully, Justin's input will start to give the project the impetus it needs to achieve the future goals he has outlined. So do, please, hop on over to his blog, check out his ideas, leave your feedback. Or, if you're looking at using the Schema Engine yourself - leave some feedback here, let me know what you think it's good at, let me know what you think it's bad at.

Schema Engine will be appearing on a more standard on-line forge - most likely CodePlex - in the not too distant future. I need to re-organise the code a bit first, so that it doesn't totally bork all the Atlantis products while also not dragging in huge amounts of unrelated code - the structure of the libraries in the Atlantis source tree is a little different, and includes all sorts of non-Schema-Engine-related stuff. But, right now, I'm busy preparing my session for SQL Bits 8, and hopefully I will meet some of you there...

The Atlantis applications are now licensing free

March 19, 2011 22:39 by Matt Whitfield

Ok, so you no longer need to purchase the zero-rated licenses for the Atlantis applications... This is just a short announcement to that effect - the previously paid-for applications (SQL Everywhere, Schema Inspector and Data Inspector) now have no licensing code present, meaning they start faster and are more convenient. The other changes in the applications are as follows:

SQL Everywhere - v2.1.173

  • Removal of licensing system
  • Added script as exec for procs (assembly & normal)
  • Added script as select for tables, views and table functions (all four kinds)
  • Auto query analysis now shows undeclared variables
  • Added function differentiation for some context-based functions (e.g. OBJECT_ID)
  • Updated code completion
  • Changed F5 to only execute when code window focused
  • Modified the way some context-only keywords are identified (e.g. SOURCE, TARGET)
  • Added support for WITH termination on ALTER DATABASE
  • Fixed issue with scalar function parameter info showing the return parameter
  • Fixed issue where insert list could show up in the wrong scope
  • Fixed some memory issues
  • Fixed issue whereby read-only files passed on the command line would not be read
  • Fixed issue with double clicking error messages after a syntax check
  • Fixed issue with renaming sub queries

Schema Inspector - v2.1.15

  • Removal of licensing system
  • Added tooltips to important function buttons to clarify their use
  • Fixed issue with timestamp columns and data restoration during table alteration

Data Inspector - v2.1.23

  • Removal of licensing system
  • Added tooltips to important function buttons to clarify their use
  • Fixed issue with selecting columns with incompatible types when comparing

Enjoy!

Free SQL Server Schema Synchronisation Engine - announcing the release of the Atlantis.SchemaEngine source code

February 24, 2011 19:51 by Matt Whitfield

So, it's been a couple of weeks since the Atlantis tools went 'fully free'. And no, I haven't updated the website yet!

But, here's a follow up that may prove useful to you if you work with schemas a lot: I am releasing the source code to SchemaEngine - the backbone of our SQL Server products (except for Data Space Analyser, which doesn't really use it). You can do quite a lot with SchemaEngine - you can make schema synchronisation apps, implement code completion of your own, or build dependency tree viewers. All that sort of stuff.

The API documentation is pretty good - most things in there are fully XML-commented, so the API reference is included in the package. It also covers all of the internal and private members of the library.

Even so, it's not something you'd be able to use straight away without some help - so over the next few weeks I will be doing some posts about how to use SchemaEngine, both in terms of comparing and navigating schemas, how to use the filters (which are fully awesome) and how to generate differencing scripts etc.

I have packaged up the code so that you only have to include one library, and there isn't any extraneous stuff (e.g. common classes that SchemaEngine doesn't make use of).

Before you ask:

  • Yes this is the same code that the Atlantis apps run
  • Yes it's totally free
  • Yes I will keep it up to date (as far as I can)
  • Yes, you do have to use it at your own risk
  • No it's not GPL
  • No I won't be offering any guarantee of support - I will help you if/when I can
  • No it doesn't support Azure
  • No it's not going to support Azure
  • Yes I would love to hear about what you do with it

So, without any further ado - here's the download: Atlantis.SchemaEngine.1.0.4116.zip.

Enjoy - and please feel free to ask questions / leave feedback / spread the word!

Why Atlantis now provide totally free SQL Server tools

February 3, 2011 20:33 by Matt Whitfield

The other night I had a dream.

In this dream, Bill Gates was a friend of mine, and I had met up with him at a bar. He asked me how Atlantis was going, and I asked him in return if he'd seen the episode of South Park that featured the underpants gnomes. Unsurprisingly, he hadn't seen it. Anyway, I explained to him how the underpants gnomes had a master plan of converting their efforts in collecting underpants into profit, and that, with hindsight, the Atlantis business plan had not been too dissimilar.

Atlantis Business Plan

Also somewhat unsurprisingly, this plan hasn't worked out too well for Atlantis. So, rather than just letting my effort over the last two years go to waste, I thought I might as well give away the applications to the community at large.

Currently, the applications still support licensing, so you have to 'purchase' the professional editions of SQL Everywhere, Schema Inspector and Data Inspector if you want to use them. It's a zero cost, so you just go straight from the basket to purchase complete. In the near future I will be modifying the applications so that they don't include the licensing code, and therefore that won't be required. I will probably also modify the web site so that people don't have to register in order to download the applications. That won't be for a while yet, as I have other things to focus on.

Anyway, I'd just like to thank everyone that has supported Atlantis over the past two years, and hopefully the tools will still provide value to people. I will be running Google Ads on the web site in a vague attempt to cover the cost of hosting the site - however I somehow doubt that the income will cover it! Here's hoping...

CSV File Format - 43 years on and it's still hard?

January 18, 2011 21:46 by Matt Whitfield

This is one issue that really bugs me: CSV files, and applications' support for them. It's a fairly simple concept - you have a file which contains tabular data in a grid, and each field is separated by a comma. Sounds simple, right? Well, it is. And it isn't.

CSV files are great as long as none of the data in them contains commas or new-line characters - at which point universal support tends to go a bit pear-shaped.

Let's look at a few of the extended rules around CSV that so many applications get wrong:

Fields with embedded commas must be delimited with double-quote characters

This one is a fairly obvious one. If you have three values - for example 'Hello', 'sir, how', 'are you?' - then you should have three columns in CSV. However, just dumping out the contents leaves you with four:

Hello,sir,how,are you?

So, fields that contain embedded commas have to be delimited:

Hello,"sir,how",are you?

That's nice, but now assigns special meaning to the double-quote character. So, if fields contain a double quote character - we need to follow another rule:

Fields with embedded double-quotes must escape the contained double-quotes if they are delimited

What does that actually mean?

Imagine the value 'test,"testing",test'. We would need to delimit that, because it contains commas. However, the double quote characters would then take on special meaning, and therefore we need to escape them. Escaping them just involves replacing a single double-quote with two:

"test,""testing"",test"

So we're all good - right? Again, almost. CSV records are generally one record per line - but what about fields that contain multiple lines of text? Again, they need to be delimited in the same way, and this can mean that a single CSV data record can span multiple lines. This leads us to our third really important rule:

Fields that span multiple lines must be delimited

Imagine the phrase 'testy\ntest' (i.e. a new line between testy and test). That would be:

"testy
test"

Now we are all good. Properly.
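
For what it's worth, encoding a single field correctly only takes a handful of lines - here's a minimal sketch in C# (the class and method names are mine):

using System;

namespace CsvTest
{
    public static class Csv
    {
        /// <summary>
        /// Encodes a single field according to the three rules above:
        /// delimit when the value contains a comma, a double-quote or
        /// a new line, and double up any embedded double-quotes
        /// </summary>
        public static string EncodeField(string value)
        {
            if (value.IndexOfAny(new[] { ',', '"', '\r', '\n' }) >= 0)
            {
                return "\"" + value.Replace("\"", "\"\"") + "\"";
            }

            return value;
        }
    }
}

Feed it 'test,"testing",test' and you get back the delimited, escaped form shown earlier.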

Was that really so hard?

Next time you use a standard piece of software (*cough* SSMS *cough*) try exporting CSV and see what happens...
