By default, Entity Framework doesn’t handle concurrency conflicts. If you want it to perform concurrency checks, each of your tables needs some column that lets you verify whether the row you’re about to overwrite has changed since you last read it. The usual solution is a column that receives a new value (e.g. a version number) every time the row is written, which is then compared just before updating.
Fortunately, Entity Framework does support this technique:
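A minimal sketch of what this looks like in code, assuming an entity whose row-version column has been marked as a concurrency token (for example, ConcurrencyMode = Fixed on the property in the designer). The `BlogContext`, `Post` and `postId` names are illustrative, not from any real model:

```csharp
// Sketch only: requires a database and an entity model where the
// RowVersion column is configured as a concurrency token.
using (var context = new BlogContext())
{
    var post = context.Posts.First(p => p.Id == postId);
    post.Title = "Updated title";
    try
    {
        // EF includes the original version value in the UPDATE's WHERE
        // clause; if another writer changed the row, no rows match.
        context.SaveChanges();
    }
    catch (OptimisticConcurrencyException)
    {
        // The row changed since we read it: reload the current values,
        // merge or discard our changes, or report the conflict to the user.
    }
}
```

The key point is that the version check costs nothing until a real conflict occurs, which is why this is called *optimistic* concurrency.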
Entity Framework is a great ORM tool. It greatly simplifies database access and gets you going really fast. It has a great set of features, and it’s becoming one of the most widely used ORM tools in the .NET world. However, in some situations not everything in EF is so great, and there are certain things you have to be careful with. For the simplest scenarios, using it right out of the box without any further consideration will work. But if you’re working on a medium or large-sized project, there are some things to consider:
Entity Framework is the new kid on the block in the ORM realm. It’s Microsoft’s new technology for data access, and although its first version didn’t receive very good reviews, things have greatly improved in the latest release. It now looks really interesting, with some features that surpass even the most seasoned ORM frameworks that have been in the game for years.
Here’s what Entity Framework (EF) has to offer:
Nowadays it’s very common to combine object-oriented programming with relational databases. Relational databases are all about tables, rows, and relations, whereas the object-oriented paradigm consists of objects, attributes, and inter-object relationships. When objects need to be stored in a relational database, it’s obvious that these two models of representing information are very different from each other, so we need some kind of translation to transfer data from one to the other.
If we’re using objects in our application and want to save them, we’ll probably use some database system: establish a connection, build a SQL statement with the object’s values (or call some kind of stored procedure), and write them to a table. Easy enough, right? Well, this may seem trivial for a small object with 4 or 5 properties. Now consider an object that’s not so small, with 40 or 50 properties. And what happens with associations? What if the object itself contains other objects? Do we store those in the database as well? All of them, or just some? What do we do with foreign keys? As the application gets more complex, this translation is no longer trivial; in fact, storing objects in the database can become a frequent source of headaches. Some studies state that up to 35% of application code is dedicated to translating data between the application and the data store.
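The hand-written translation layer described above tends to look something like the following sketch. The `Customer` class and the table and column names are illustrative assumptions, not from the original post:

```csharp
// Sketch of manual object-to-table mapping with plain ADO.NET;
// requires an open SqlConnection to a real database.
public void SaveCustomer(Customer customer, SqlConnection connection)
{
    var command = new SqlCommand(
        "INSERT INTO Customers (Name, Email) VALUES (@name, @email)",
        connection);
    command.Parameters.AddWithValue("@name", customer.Name);
    command.Parameters.AddWithValue("@email", customer.Email);
    command.ExecuteNonQuery();
    // ...and this covers one small object: repeat for 40-50 properties,
    // child collections, and foreign keys, plus the matching
    // SELECT, UPDATE, and DELETE code.
}
```

Multiplying this pattern across every entity in an application is exactly the repetitive translation work that an ORM is meant to take off our hands.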
When I talk to people about the new things Visual Studio 2010 and .NET 4.0 bring, and how easy it now is to write code that executes in parallel, many people just don’t care; they see it as something unrelated to them. It’s true that some kinds of applications don’t really need this type of optimization, but that’s no excuse not to take advantage of our hardware as much as possible. For several years now, almost any computer has had at least two processor cores; in fact, we’re starting to see desktop computers with 8-core processors, and the trend is toward an ever-increasing number of cores. This means any application will have the chance to run in an environment where more than one thread can execute simultaneously. Besides, if you want your software to run faster nowadays, you can no longer just move it to a machine with a faster processor and forget about it. We’re approaching the maximum speed a single processor can reach, so improvements in upcoming CPUs will come from more cores, not higher clock rates. I think parallel programming is a key concept all programmers should care about. Visual Studio 2010 and .NET 4.0 bring new libraries, types and tools to ease multi-core development, including the new “Parallel Extensions”, formed by the Task Parallel Library (TPL), PLINQ, and a set of coordination data structures.
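To give a flavor of what the Parallel Extensions look like, here is a minimal sketch using `Parallel.For` and PLINQ, the two most visible pieces. The `ParallelDemo` class and its method are illustrative names of my own:

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

// Minimal sketch of the .NET 4 Parallel Extensions: the same kind of
// CPU-bound work expressed with Parallel.For and with PLINQ.
public static class ParallelDemo
{
    public static long SumOfSquares(int n)
    {
        // PLINQ: AsParallel() spreads the query across available cores.
        return Enumerable.Range(0, n).AsParallel().Sum(i => (long)i * i);
    }

    public static void Main()
    {
        // Parallel.For runs loop iterations concurrently,
        // so the lines below may print in any order.
        Parallel.For(0, 4, i => Console.WriteLine("Iteration " + i));

        Console.WriteLine(SumOfSquares(1000));
    }
}
```

Note that the result of the parallel sum is deterministic even though the iteration order is not, which is what makes these primitives safe to drop into existing code.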
Surprisingly, there are still many programmers who don’t know what Test-driven development (TDD) is. I think it’s a very important practice that all of us should follow, since it produces software with far fewer bugs.
TDD is a programming methodology that combines two other practices: test-first development and refactoring. First, a set of unit tests is written and verified to fail. Then code is written to make them pass, and finally that code is refactored. The aim of TDD is to achieve clean code that works. The main idea behind this methodology is that requirements are translated into automated tests: when those tests pass, all requirements are guaranteed to have been met. The application must be flexible enough to allow automated tests to run, and each test should be small enough to pinpoint exactly which piece of code is failing.
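One cycle of this red-green-refactor loop might look like the following sketch, written with NUnit (which appears later in this compilation). The `Calculator` class and the test are illustrative examples of my own, not from the original post:

```csharp
using NUnit.Framework;

// Step 1 (red): write this test first and watch it fail,
// because Calculator.Add doesn't exist yet.
[TestFixture]
public class CalculatorTests
{
    [Test]
    public void Add_ReturnsSumOfTwoNumbers()
    {
        var calculator = new Calculator();
        Assert.AreEqual(5, calculator.Add(2, 3));
    }
}

// Step 2 (green): write the simplest code that makes the test pass.
// Step 3 (refactor): clean up while the test keeps passing.
public class Calculator
{
    public int Add(int a, int b)
    {
        return a + b;
    }
}
```

Because the test exists before the implementation, it doubles as an executable statement of the requirement.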
We all agree that ASP.NET and Visual Studio are amazing technologies with state-of-the-art tools; you need almost nothing more to write any kind of software – except for a database. Still, there are some tools that greatly simplify the task. It depends on what you’re trying to achieve, but I’m sure some of these projects will help you get things done faster and more easily:
- NHibernate: NHibernate is a mature, open source object-relational mapper for the .NET framework. It’s actively developed, fully featured and used in thousands of successful projects.
- NUnit: NUnit is a unit-testing framework for all .NET languages, initially ported from JUnit. It is written entirely in C# and has been completely redesigned to take advantage of many .NET language features, such as custom attributes and other reflection-related capabilities. NUnit brings xUnit to all .NET languages.
- Rhino.Mocks: A dynamic mock object framework for the .NET platform. Its purpose is to ease testing by allowing the developer to create mock implementations of custom objects and verify the interactions using unit tests.
- MVC Contrib: This is the contrib project for the ASP.NET MVC framework. This project adds additional functionality on top of the MVC Framework. These enhancements can increase your productivity using the MVC Framework. It is written in C#. Founded by Eric Hexter and Jeffrey Palermo.
- CruiseControl.NET: CruiseControl.NET is an Automated Continuous Integration server, implemented using the Microsoft .NET Framework.
- S#arp Architecture: Pronounced “Sharp Architecture,” this is a solid architectural foundation for rapidly building maintainable web applications leveraging the ASP.NET MVC framework with NHibernate. The primary advantage to be sought in using any architectural framework is to decrease the code one has to write while increasing the quality of the end product. A framework should enable developers to spend little time on infrastructure details while allowing them to focus their attentions on the domain and user experience.
- Spark View Engine: Spark is a view engine for the ASP.NET MVC and Castle Project MonoRail frameworks. The idea is to let the HTML dominate the flow and the code fit in seamlessly.
- TortoiseSVN: A Subversion client, implemented as a Windows shell extension. TortoiseSVN is really easy-to-use revision/version/source control software for Windows. Since it isn’t an integration for a specific IDE, you can use it with whatever development tools you like.
- Castle Windsor: Castle Project offers two Inversion of Control containers: the MicroKernel and the Windsor Container. Castle Windsor aggregates the MicroKernel and exposes powerful configuration support, making it suitable for common enterprise application needs. It can register facilities and components based on configuration, and adds support for interceptors.
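As a taste of the last item on the list, registering and resolving a component with Windsor’s fluent API looks roughly like this sketch (the `ILogger`/`ConsoleLogger` types are illustrative names of my own, not part of Windsor):

```csharp
using Castle.MicroKernel.Registration;
using Castle.Windsor;

// Sketch only: requires the Castle Windsor assemblies.
public interface ILogger { void Log(string message); }

public class ConsoleLogger : ILogger
{
    public void Log(string message) { System.Console.WriteLine(message); }
}

public class Program
{
    public static void Main()
    {
        var container = new WindsorContainer();

        // Tell the container: whenever someone asks for an ILogger,
        // hand them a ConsoleLogger.
        container.Register(Component.For<ILogger>().ImplementedBy<ConsoleLogger>());

        var logger = container.Resolve<ILogger>();
        logger.Log("Resolved through the container");
    }
}
```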
Take a look at these projects. I’m sure you’ll be able to find some good use for them in your projects!
Via .NET Zone.
When starting a new project, the choice of database usually boils down to whichever one the company already uses, and the systems considered are usually just Oracle and Microsoft SQL Server (MS-SQL). There’s no doubt that deep-pocketed companies have the resources to build best-of-breed products, and almost no other database system comes close to what Oracle and MS-SQL offer in terms of quality, features, and highly skilled professionals around their products. Yes, all this is true, but there’s a price, and usually that price tag can only be afforded by a company, not by a sole individual.
So, what if I want to develop my own personal project and I need an RDBMS (Relational DataBase Management System)? Oracle and MS-SQL are out of the question because of their high cost. MS-SQL Express, although free, has significant limitations, so we won’t consider it for our purpose either. Which, then, is the best open source RDBMS?