
Thursday, May 26, 2011

RE: T-SQL Parameters and Variables

Michael Campbell at SQL Mag has an interesting blog post about parameters and variables. One of his suggestions is to declare variables late in T-SQL code (sprocs, functions, scripts, etc.). This is something I've been wondering about, and I'm glad he posted it. From a performance standpoint, late declaration has no effect in SQL Server since everything is scoped at the batch level, but from a readability standpoint late declaration is better. If you use a table variable (@table), finding the declaration all the way at the top can be cumbersome. Having it close to its first usage makes it easier to locate (and change) the definition if needed.
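For example, here is a minimal sketch of the idea (the @RecentOrders variable and the dbo.Orders table are made up for illustration): instead of declaring the table variable at the top of a long script, declare it right before its first use.

    -- ... a few hundred lines of unrelated logic above ...

    -- Declare the table variable right where it is first needed,
    -- so the definition is easy to find and change.
    DECLARE @RecentOrders TABLE
    (
        OrderId   INT           NOT NULL,
        OrderDate DATETIME      NOT NULL,
        Amount    DECIMAL(18,2) NOT NULL
    );

    INSERT INTO @RecentOrders (OrderId, OrderDate, Amount)
    SELECT OrderId, OrderDate, Amount
    FROM   dbo.Orders
    WHERE  OrderDate >= DATEADD(DAY, -7, GETDATE());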

Check out the rest of the post for some good suggestions including variable naming and default values.

Friday, May 13, 2011

Migration to SQL Azure/Windows Azure

Recently our company (RQSI) switched a small website we use for client interaction over to Windows Azure and SQL Azure. I have to give MS congrats on how good both of these products are, especially SQL Azure. The Windows Azure migration required some code updates, but the SQL Azure migration was almost seamless. I think we had to change 2 stored procedures (out of probably 100). Both changes were minor since the code used #temp tables, which were incompatible with SQL Azure. I simply updated the sprocs to use @table variables instead, which helped performance anyway.
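As a rough sketch of the kind of change involved (the table, column, and procedure names here are hypothetical, not our actual code), replacing a #temp table with a table variable looked something like this:

    -- Before: local temp table (the part that did not migrate cleanly for us)
    -- CREATE TABLE #ActiveClients (ClientId INT NOT NULL, Name NVARCHAR(100) NOT NULL);

    -- After: table variable
    DECLARE @ActiveClients TABLE
    (
        ClientId INT           NOT NULL,
        Name     NVARCHAR(100) NOT NULL
    );

    INSERT INTO @ActiveClients (ClientId, Name)
    SELECT ClientId, Name
    FROM   dbo.Clients
    WHERE  IsActive = 1;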

SQL Azure gives you two ways to interact with the server: 1) SSMS (as usual) and 2) a Silverlight management console built into the Azure website. I like the SSMS integration best, even though the Silverlight tool is almost a replica of SSMS. The SSMS capabilities have been stripped down for Azure, though; for instance, when you right-click a database table you only get a limited context menu that lets you script creates/drops, etc., instead of the full menu that allows modifies, edits, selects, etc. Even though the context menus are stripped down, you can still write queries the same way you would against any other SQL Server.

One more quick item with SQL Azure: MS gives you the ability to view your usage through new DMVs, a couple of which are sys.bandwidth_usage and sys.database_usage. These let you monitor usage with SQL commands instead of having to go into the management console's billing section to find out whether usage is increasing or decreasing.
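A quick sketch of what that looks like from a query window (run against the master database on the SQL Azure server; I am still exploring exactly which columns each view exposes):

    -- Connect to the master database of the SQL Azure server.

    -- Database counts over time, by edition/SKU.
    SELECT * FROM sys.database_usage;

    -- Bandwidth in and out, per database.
    SELECT * FROM sys.bandwidth_usage;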

I am testing more tools with SQL Azure (currently working on BI Studio) to see if we can make more use of the platform; I will update as I discover more.

Friday, February 1, 2008

LINQ to SQL

Over the past couple of months I have had the opportunity to build some small "utility" type applications using Microsoft Visual Studio 2008 and the .NET Framework 3.5. In these utilities I had a lot of freedom since they were "one-off" projects that were not going to be put into production use. I took advantage of LINQ to SQL for the database portions, and I must say that I am extremely impressed. Although support is pretty much limited to SQL Server, it made my database access easier than any O/RM I have ever used. I wanted to make a quick list of things that impressed me about LINQ to SQL:




  1. The ability to use drag-and-drop functionality from the Server Explorer right onto the designer canvas, with relationships between tables drawn automatically. The relationships carry right over into the code, and foreign keys can be exploited so that LINQ queries can easily join the data. For instance, suppose you have a CUSTOMERS table and a related CUSTOMER_STATUS table linked by a foreign key.

    You can then easily query all the customers with a "Good" status by issuing the following query:


    var GoodCustomersQuery = (from c in m_DbContext.CUSTOMER
                              where c.CUSTOMER_STATUS.Status == "Good"
                              select c);


    This should encourage developers to properly use foreign keys (finger pointed right back at myself too), because having the FKs generated automatically means less coding and less complicated LINQ queries.
  2. The ability to add stored procedures as methods to your database context. This is really cool. You can add all of your table objects to the database context and then you can also drag and drop stored procedures. These stored procedures are then generated as methods on the database context class with strongly typed parameters. This can lead to a cleaner implementation of your data layer classes since you can create database contexts for related groups of tables and then add the stored procedures that deal with those table objects inside the database context. This leads me to one final point.

Although I am impressed by LINQ to SQL, it does raise some interesting data layer architecture questions and makes me wonder about the potential for abuse. I am experimenting with some ideas for data layer designs to see what will work best, but only time will tell as I use LINQ on larger projects. Also, I can see where many developers (myself included) will be tempted to access the database in layers other than the DAL, since it is so easy with LINQ to SQL.



Friday, January 25, 2008

Article Review: Top 5 Web Service Mistakes

Recently, while searching the web for data caching strategies inside web services, I ran across an article on The Server Side regarding the Top 5 Web Service Mistakes. While this article is nearly two years old now, I thought it was very timely in the wake of reading the book on Service-Oriented Architecture by Thomas Erl (see my previous Book Review post). The author reviews 5 mistakes that are common when web services are employed in an application architecture. The five are as follows:
  1. Using .NET specific types
  2. Not taking advantage of ASP.NET
  3. Not enough bang for the buck
  4. Using web services for data access
  5. Trusting the client application

I especially like numbers 1, 4, and 5. One of the main benefits of web services is cross-platform availability. However, if you use .NET-specific types (such as DataSets), then consumers on other platforms will have difficulty consuming your services. This destroys one of the principles that SOAs are built upon: interoperability.

The fourth common mistake addressed in the article (using web services for data access) is something that I have seen on many projects. By exposing CRUD through web services you needlessly couple the consumers of your services to the back-end architecture. In that case even small changes to the back end will ripple through the applications that depend on your service. Instead, it is better to have rich XML messages that pass the data needed by your service and let the service perform the needed business logic. In this way you can take advantage of another principle of SOA, which is composability.

The fifth common mistake concerns security. Failure to check input validity can expose your services to attacks. WS-Security can be used to help with identification and authorization. The new WCF framework is really powerful in this regard and gives options through the configuration file that can make security much more transparent for the system administrator.

Overall I give this article a thumbs up. Although it is two years old, I think it speaks volumes about the misunderstandings surrounding web services and SOA.

Friday, January 18, 2008

Example of Data at Rest Compromise

I just read the following article about a stolen British laptop containing sensitive data on military recruits. Notice that the article does not seem to mention whether the data was encrypted while at rest (stored on the laptop). An effective information security policy, when carried out, would insist that such a laptop have at the very least an encrypted file system and strong passwords that expire in a timely manner. This would ensure that the thieves had only a small window in which to try to break the password before it expired.

Mobile data seems to be a fertile area for compromises. Even the best information security policies can be violated with virtually zero awareness by the information security staff. Risk can be mitigated by using encryption wherever possible and, as brought out previously in my blog, by using an EFS. Even if your database uses column-level encryption, I would still recommend using EFS on mobile devices if performance permits; in this case the performance penalty would probably be paid by only one person (the user of the laptop). By combining column-level encryption, EFS, and strong passwords that expire in a timely manner, you ensure defense in depth (layers).

Protecting Data Part 2 (PCI DSS Standard)

In yesterday's blog I talked about protecting data at rest (e.g., when it is being stored in a database). In reviewing the PCI Data Security Standard used by credit card companies, I thought it was interesting that they require sensitive data to be encrypted at the column level when the data is being stored. They actually distinguish in the specification between column-level encryption and an EFS (Encrypted File System) solution. The standard states in requirement 3.4.1 that "[i]f disk encryption is used (rather than file- or column-level database encryption), logical access must be managed independently of native operating system access control mechanisms (for example, by not using local system or Active Directory accounts). Decryption keys must not be tied to user accounts."

The PCI standard calls for column-level encryption at the database level; however, it makes an exception if this is not possible, and as noted above, if an encrypted file system is used instead of column-level encryption, the key must not be tied to a user account. This requires the company to take definite steps to ensure the keys are protected. If the key were simply tied to a user account (generally a service account used by SQL Server), it would not be possible to enforce non-repudiation, since all the system admins would have access to that account.

Another feature of the PCI standard that I like is that it calls for a continuous security plan. Section 1.1.8 establishes that a review of the firewall and router rule sets must take place at least quarterly. Section 10 calls for regular monitoring of security logs and network tests. Section 11 calls for regular tests of security systems (not just network tests) and processes, which would include penetration tests by a third party. All of these requirements stress that engineering for security is a continuous process and not a one-time event.

Thursday, January 17, 2008

Protecting Data at Rest

A good article providing an overview of information security, especially from a database professional's perspective, can be found at DBTA. The author presents a general overview of information security, including the concepts of data at rest and data in motion. "Data at rest" describes data while it is being stored in a database or other mechanism. This type of data can be vulnerable to theft from external sources through exploits, but more often it is vulnerable to theft by internal sources, knowingly or unknowingly. An example would be a rogue employee who knowingly sells a copy of the company database to a competitor or to someone who performs identity theft. Another example might be an employee who unknowingly leaves a laptop unsecured and has it stolen. Both examples show how data at rest can be vulnerable.

The question is how to protect such data. Encrypting the data at rest is a necessary part of the solution, but exactly how and when to encrypt it is the question. For instance, since Windows 2000 the Encrypted File System (EFS) could be used to encrypt the underlying database file (HOW-TO). This would certainly protect your data from the employee who unknowingly has his or her laptop stolen while housing the database. It would also give a measure of protection from a rogue employee who has network access to the database file system but who does not have the password for the user account with which the data was encrypted. However, it does not protect against an intruder who uses a SQL injection attack to compromise security, and it does not protect against a system administrator who knows the SQL Server service account password.

A solution that protects against these types of attack is to put encryption in place inside the database at the column level. Implementing this in SQL Server 2000 would have meant writing custom data layer code, but with SQL Server 2005 we have built-in functions to perform column-level encryption/decryption (HOW-TO). These functions support asymmetric (public/private key) and symmetric (shared key) algorithms. This means that even if an unauthorized person were able to gain physical access to the database file, the sensitive data within that file could still be protected. This would protect against even a database administrator who knew all the passwords but did not have access to the encryption key.
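A minimal sketch of the SQL Server 2005 approach with a symmetric key (the key, certificate, table, and column names here are just placeholders for illustration):

    -- One-time setup: a database master key protects the certificate's private key.
    CREATE MASTER KEY ENCRYPTION BY PASSWORD = 'Str0ng!Passphrase';
    CREATE CERTIFICATE SsnCert WITH SUBJECT = 'Protects customer SSNs';
    CREATE SYMMETRIC KEY SsnKey
        WITH ALGORITHM = AES_256
        ENCRYPTION BY CERTIFICATE SsnCert;

    -- Encrypt on the way in (SsnEncrypted is a VARBINARY column).
    OPEN SYMMETRIC KEY SsnKey DECRYPTION BY CERTIFICATE SsnCert;
    UPDATE dbo.Customers
    SET    SsnEncrypted = EncryptByKey(Key_GUID('SsnKey'), Ssn);
    CLOSE SYMMETRIC KEY SsnKey;

    -- Decrypt on the way out (only principals with access to the key get plaintext).
    OPEN SYMMETRIC KEY SsnKey DECRYPTION BY CERTIFICATE SsnCert;
    SELECT CONVERT(VARCHAR(11), DecryptByKey(SsnEncrypted)) AS Ssn
    FROM   dbo.Customers;
    CLOSE SYMMETRIC KEY SsnKey;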

Of course there is always a downside. First and foremost is performance. If you use column-level encryption, understand that searching the data in even simple queries will take far longer than against unencrypted versions, since indexes cannot be used on the encrypted values and each row must be decrypted to evaluate the predicate. Balancing performance with information security is a tradeoff, but when a compromise occurs, erring on the side of security never hurts. The same cannot be said of erring on the side of performance.

The second downside is not really a downside but a matter of risk, and that is protecting the key. Any encryption algorithm is only as strong as its key and how well that key is protected; however, to make use of the data, the key has to be available to the database and/or applications. SQL Server 2005 can store the key for you, but again, if the key is stored unprotected with the data on a laptop and the laptop is stolen, the data should be considered compromised. SQL Server 2005 can protect the key and uses Triple DES to do so (HOW-TO). It is worth understanding how SQL Server 2005 protects this key, since a compromised key would unlock the data.