Thursday, May 26, 2011
RE: T-SQL Parameters and Variables
Check out the rest of the post for some good suggestions, including variable naming and default values.
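The point about default values is worth a quick illustration. This is my own sketch, not from the post; the procedure, table, and parameter names are made up:

```sql
-- A parameter with a default can be omitted by the caller.
CREATE PROCEDURE dbo.GetOrders
    @CustomerId INT,
    @Status VARCHAR(20) = 'Open'  -- default used when @Status is omitted
AS
BEGIN
    SELECT OrderId, OrderDate, Status
    FROM dbo.Orders
    WHERE CustomerId = @CustomerId
      AND Status = @Status;
END;
GO

-- Both calls are valid; the first falls back to the 'Open' default.
EXEC dbo.GetOrders @CustomerId = 42;
EXEC dbo.GetOrders @CustomerId = 42, @Status = 'Closed';
```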
Friday, May 13, 2011
Migration to SQL Azure/Windows Azure
SQL Azure gives you two ways to interact with the server: 1) SSMS (as usual) and 2) a Silverlight management console built into the Azure website. I like the SSMS integration best, even though the Silverlight tool is almost a replica of SSMS. The SSMS capabilities have been stripped down for Azure, though; for instance, when you right-click on a database table you only get a limited context menu allowing you to script creates/drops, etc., instead of the full menu allowing you to do modifies, edits, selects, etc. Even though the context menus are stripped down, you can still code queries the same way you would with any other SQL Server.
One more quick item with SQL Azure: MS gives you the ability to view your usage through new DMVs. A couple of the new DMVs are sys.bandwidth_usage and sys.database_usage. These allow you to monitor your usage through SQL commands instead of having to go into the management console -> billing to find out whether usage is increasing or decreasing.
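A sketch of what such usage queries might look like. The column names follow the SQL Azure documentation of the time, but treat the exact shape as an assumption; these views live in the master database of the server:

```sql
-- Connect to master on the SQL Azure server.

-- Bandwidth transferred per database, per direction, per day:
SELECT database_name, direction, class, time_period,
       SUM(quantity) AS kb_transferred
FROM sys.bandwidth_usage
GROUP BY database_name, direction, class, time_period
ORDER BY database_name;

-- Daily database counts by SKU (Web/Business edition):
SELECT time, sku, quantity
FROM sys.database_usage
ORDER BY time DESC;
```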
I am testing more tools with SQL Azure (currently working on BI Studio) to see if we can make more use of this service; I will update as I discover more.
Friday, February 1, 2008
LINQ to SQL
- The ability to use drag-and-drop functionality from the Server Explorer right onto the canvas, and the automatic drawing of relationships between tables. The relationships carry right over into the code, and foreign keys can be exploited so that LINQ queries can easily join the data. For instance, if you have a CUSTOMERS table and a related CUSTOMER_STATUS table with the following layout:
You can then easily query all the customers with a "Good" status by issuing the following query:
var GoodCustomersQuery = (from c in m_DbContext.CUSTOMER
                          where c.CUSTOMER_STATUS.Status == "Good"
                          select c);
This should encourage developers to properly use foreign keys (finger pointed right back at myself too), because having the FKs generated automatically means less coding and less complicated LINQ queries.
- The ability to add stored procedures as methods on your database context. This is really cool. You can add all of your table objects to the database context, and then you can also drag and drop stored procedures. These stored procedures are then generated as methods on the database context class with strongly typed parameters. This can lead to a cleaner implementation of your data layer classes, since you can create database contexts for related groups of tables and then add the stored procedures that deal with those table objects inside the database context. This leads me to one final point.
Although I am impressed by LINQ to SQL, it does raise some interesting data layer architecture questions and makes me wonder about the potential for abuse. I am experimenting with some ideas for data layer designs to see what will work best, but really only time will tell as I use LINQ on larger projects. Also, I can see where many developers (myself included) will be tempted to access the db in layers other than the DAL, since it will be so easy with LINQ to SQL.
Friday, January 25, 2008
Article Review: Top 5 Web Service Mistakes
- Using .NET specific types
- Not taking advantage of ASP.NET
- Not enough bang for the buck
- Using web services for data access
- Trusting the client application
I especially like numbers 1, 4, and 5. One of the main benefits of web services is their cross-platform availability. However, if you use .NET-specific types (such as DataSets), then users of other platforms will have difficulty consuming your services. This destroys one of the principles that SOAs are built upon, and that is interoperability.
The fourth common mistake addressed in the article (using web services for data access) is something that I have seen on many projects. By exposing CRUD through web services you are needlessly coupling the consumers of your services to the back-end architecture. In this case even small changes to the back end will ripple through the applications that depend on your service. Instead it is better to have rich XML messages that pass the data needed by your service, and let your service perform the needed business logic. In this way you can take advantage of another principle of SOA, which is composability.
The fifth common mistake concerns security. Failure to check input validity can expose your services to attacks. WS-Security can be used to help with identification and authorization. The new WCF framework is really powerful in this regard and gives options through the configuration file that can make security much more transparent for the system administrator.
Overall I give this article a thumbs up. Although it is two years old, I think it speaks volumes about the misunderstandings of web services and SOA.
Friday, January 18, 2008
Example of Data at Rest Compromise
Mobile data seems to be a fertile area for compromises. Even the best information security policies can be violated with virtually zero awareness by the information security staff. Risk can be mitigated by using encryption wherever possible and, as brought out previously in my blog, by using an EFS. Even if your database uses column-level encryption, I would still recommend using the EFS on mobile devices if performance permits. In this case the performance penalties would probably be paid by only one person (the user of the laptop). By combining column-level encryption, EFS, and strong passwords which expire in a timely manner, you would ensure a defense in depth, or layered defense.
Protecting Data Part 2 (PCI DSS Standard)
The PCI standard calls for column-level encryption at the database level; however, it does make an exception if this is not possible, and as noted above, if an encrypted file system is used instead of column-level encryption then the key must not be tied to a user account. This would require the company to take definite steps to ensure that the keys were protected. If the key were simply tied to a user account (generally a service account used by SQL Server), then it would not be possible to enforce non-repudiation, since all the system admins would have access to that user account.
Another feature of the PCI standard that I like is the fact that it calls for a continuous security plan. Section 1.1.8 establishes that (at least) a quarterly review of the firewall and router rule sets must take place. Section 10 calls for regular monitoring of security logs and network tests. Section 11 calls for regular tests of security systems (not just network tests) and processes, which would include penetration tests by a third party. All of these requirements stress the fact that engineering for security is a continuous process and not a one-time event.
Thursday, January 17, 2008
Protecting Data at Rest
The question is how to protect such data. Encrypting the data at rest would be a necessary solution, but exactly how and when to encrypt the data would be the question. For instance, since Windows 2000 the Encrypted File System (EFS) could be used to encrypt the underlying database file (HOW-TO). This would certainly protect your data from the employee who unknowingly has his or her laptop stolen while housing the database. It would also give a measure of protection from a rogue employee who has network access to the database file system, but who does not have the user account password with which the data was encrypted. However, it does not protect against an intruder who uses a SQL injection attack to compromise security, and it does not protect against a system administrator who knows the SQL Server service account password.
A solution to protecting against these types of attacks would be to put encryption in place inside the database at the column level. Implementing a solution in SQL Server 2000 would have meant writing custom data layer code in order to implement the encryption, but with SQL Server 2005 we have functions to perform column-level encryption/decryption (HOW-TO). These functions allow for asymmetric (public/private key) and symmetric (shared key) algorithms. This would mean that even if an unauthorized person were able to gain physical access to the database file, the sensitive data within that file could still be protected. This would protect against even a database administrator who knew all the passwords, but did not have access to the encryption key.
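As a minimal sketch of the SQL Server 2005 approach using the built-in functions. The table, column, and key names are my own illustration, and the setup assumes a database master key already exists:

```sql
-- One-time setup: a certificate, plus a symmetric key protected by it.
CREATE CERTIFICATE CustomerCert WITH SUBJECT = 'Customer data protection';
CREATE SYMMETRIC KEY CustomerKey
    WITH ALGORITHM = AES_256
    ENCRYPTION BY CERTIFICATE CustomerCert;

-- Encrypt on write:
OPEN SYMMETRIC KEY CustomerKey DECRYPTION BY CERTIFICATE CustomerCert;
UPDATE dbo.Customers
SET SSN_Encrypted = EncryptByKey(Key_GUID('CustomerKey'), SSN_Plain);
CLOSE SYMMETRIC KEY CustomerKey;

-- Decrypt on read (DecryptByKey returns NULL if the key is not open):
OPEN SYMMETRIC KEY CustomerKey DECRYPTION BY CERTIFICATE CustomerCert;
SELECT CONVERT(VARCHAR(11), DecryptByKey(SSN_Encrypted)) AS SSN
FROM dbo.Customers;
CLOSE SYMMETRIC KEY CustomerKey;
```

Note that without the certificate (and the master key protecting it), the ciphertext in SSN_Encrypted is useless, which is the whole point of the DBA scenario above.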
Of course there is always a downside. First and foremost is performance. If you use column-level encryption, then understand that searching for data in even simple queries will take significantly longer than against unencrypted versions, since the encrypted columns cannot be usefully indexed for searching. Balancing performance with information security is a tradeoff, but when a compromise occurs, erring on the side of security never hurts. The same cannot be said of erring on the side of performance.
The second downside is not really a downside but a matter of risk, and that is hiding the key. Any encryption algorithm is only as strong as the key and how well it is hidden; however, to make use of the data, the key has to be available to the database and/or applications. SQL Server 2005 can store the key for you, but again, if the key is stored unprotected with the data on a laptop, then if the laptop is stolen the data should be considered compromised. SQL Server 2005 can protect the key and uses 3DES to do so (HOW-TO). It is important to understand how SQL Server 2005 protects this key, since having the key compromised would unlock the data.
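By default the database master key is also protected by the service master key, which is exactly the exposure described above: anyone who can run as the service account can open it. A hedged sketch of tightening that, with a placeholder password:

```sql
-- Create the database master key, protected by a password.
CREATE MASTER KEY ENCRYPTION BY PASSWORD = '<strong password here>';

-- Optionally remove the automatic service-master-key protection,
-- so the key can only be opened by supplying the password:
ALTER MASTER KEY DROP ENCRYPTION BY SERVICE MASTER KEY;

-- Applications must then open the key explicitly before decrypting:
OPEN MASTER KEY DECRYPTION BY PASSWORD = '<strong password here>';
```

The tradeoff is that the password now has to live somewhere the application can reach it, so this moves the key management problem rather than eliminating it.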