So far, our approach to working with databases has been to start with a domain model, and then use NHibernate to generate an appropriate database schema matching that domain model. A database schema generated this way is fit for purpose, and leads to a frictionless NHibernate experience. Even if you do not use NHibernate to generate the database schema, you could build one that closely matches your domain model. This approach works great for greenfield applications and is the recommended approach. But if you are working with a legacy database, then what was simple so far may start showing its complex side. It is also worth noting that a legacy database, in the context of this chapter, is not just a database that is old; rather, it is any situation that leads to the domain model differing from the database schema.
You could also be working with a database which is not legacy at all, but where the application you are working on is not the only (or the primary) application using that database. For instance, you could be working on a small portal for a banking application. This portal of yours may be using the main banking application's database, which, from the primary banking application's perspective, is not legacy. You would not want the design of that database to drive the design of the domain model for your portal. So you may end up designing a domain model which does not fit the database available. Another legacy database situation could be that you are working in a team with dedicated DBAs, who are also the gatekeepers of database schema designs. They may have their own rules about how tables are named and structured, and what level of normalization is used. The database schema that you generate from your domain model may not be accepted without changes, and you will almost always end up in a legacy-database kind of situation.
The main reason legacy databases become complex to work with is that business requirements, and hence the domain model, change at a higher rate than database schemas do.
This leads to a situation where the domain model is vastly different from the underlying database. There is no magic switch in NHibernate that would make all the pains of working with legacy databases go away. But there are some features developed specifically to deal with one or more legacy situations. Besides that, there are some tips and tricks that you can employ depending on the situation. In this chapter, we will take a look at those features, and learn how to use them in different situations. Unlike previous chapters, this chapter does not follow an incremental approach where we start with something simple and keep enhancing it. Most features that we discuss here are quite disconnected from each other. The same goes for the examples and code samples. Wherever possible, I will try to use the employee benefits domain that we have been using throughout this book, but there may be situations where I am forced to use a new example. Let's get started without much ado then.
Using surrogate keys is becoming the norm these days. The database tables we have seen so far had simple surrogate primary keys: a single column named Id, of integer type, acted as the primary key. In old databases, it is possible to have tables which use natural keys. Primary keys of such tables, at times, can be composed of more than one column. Such primary keys are called composite keys. Let's assume for a moment that the Employee table in our employee benefits application is such a legacy table and it does not have an Id column. In order to maintain the uniqueness of records in this table, we designate the FirstName and LastName columns to form a composite key.

A barebones Employee class for this example could look as follows:
public class Employee
{
    public virtual string FirstName { get; set; }
    public virtual string LastName { get; set; }
    public virtual DateTime DateOfJoining { get; set; }
}
An additional property, DateOfJoining, is added for the sake of having some state beyond the primary key of the table. The following code listing shows the mapping for this class, where the FirstName and LastName properties form a composite ID for this entity:
public class EmployeeMapping : ClassMapping<Employee>
{
    public EmployeeMapping()
    {
        ComposedId(idMapper =>
        {
            idMapper.Property(e => e.FirstName);
            idMapper.Property(e => e.LastName);
        });
        Property(e => e.DateOfJoining);
    }
}
Instead of using the Id method, we have used the ComposedId method to specify which properties of the entity constitute the identifier. The signature of ComposedId is straightforward: it takes in a delegate accepting an IIdMapper. This instance of IIdMapper is used to specify which properties on the Employee class participate in the composite ID. In the previous chapters, when we were using a surrogate key, we also specified an identifier generation strategy. Identifier generation strategies cannot be used with composite IDs. This is obvious; NHibernate does not know how to generate values for the FirstName and LastName properties of an entity instance. Moreover, these properties hold information owned by the business, and we do not want NHibernate to generate that information for us.
Saving entities with composite IDs is no different from saving any other entities. But retrieving such entities using ISession.Get&lt;T&gt; or ISession.Load&lt;T&gt; is different. Both of these methods take an identifier value as input, but with composite IDs, there is more than one identifier value. How do we pass multiple identifier values into these methods? The solution is to create a default instance of the entity, set the values of the properties forming the composite ID, and pass that instance to the ISession.Get&lt;T&gt; or ISession.Load&lt;T&gt; method. The following unit test depicts this behavior:
[Test]
public void EmployeeIsSavedCorrectly()
{
    using (var tx = Session.BeginTransaction())
    {
        Session.Save(new Employee
        {
            FirstName = "firstName",
            LastName = "lastName",
            DateOfJoining = new DateTime(1999, 2, 26)
        });
        tx.Commit();
    }
    Session.Clear();
    using (var tx = Session.BeginTransaction())
    {
        var id = new Employee
        {
            FirstName = "firstName",
            LastName = "lastName"
        };
        var employee = Session.Get<Employee>(id);
        Assert.That(employee.DateOfJoining.Year, Is.EqualTo(1999));
        tx.Commit();
    }
}
We first save an instance of the Employee entity. There is nothing new in that part. We then create a new instance of Employee and set the FirstName and LastName properties to the same values that we had saved earlier. This instance is then passed into ISession.Get&lt;Employee&gt; to retrieve a matching instance of Employee. We then assert that the instance returned is the one we expected.
The preceding test would fail, with NHibernate insisting that we must implement the Equals and GetHashCode methods on the Employee class. You may recall from Chapter 5, Let's Store Some Data in Database, where we discussed that implementing these methods is a good practice. With composite IDs, implementing these methods becomes mandatory. When you have a single integer-type column as an identifier, NHibernate can compare the value in that column in order to determine the equality of two instances of the same type. But in the case of composite IDs, NHibernate cannot do the comparison for us, and instead asks us to implement the logic by overriding the Equals and GetHashCode methods. Let's add a very simple implementation of these methods to get going. The following code snippet shows the Employee class after these methods have been implemented:
public class Employee
{
    public virtual string FirstName { get; set; }
    public virtual string LastName { get; set; }
    public virtual DateTime DateOfJoining { get; set; }

    public override bool Equals(object obj)
    {
        var otherEmployee = obj as Employee;
        if (otherEmployee == null) return false;
        return string.Equals(FirstName, otherEmployee.FirstName) &&
               string.Equals(LastName, otherEmployee.LastName);
    }

    public override int GetHashCode()
    {
        var hash = 17;
        hash = hash * 37 + FirstName.GetHashCode();
        hash = hash * 37 + LastName.GetHashCode();
        return hash;
    }
}
In the Equals method, we check whether the FirstName and LastName properties on the object being compared have the same values as those on the current instance. GetHashCode looks a bit verbose, but is actually very simple. The algorithm starts with a prime number, to which we add the hash code of every property that should be considered in the equality check. The resulting hash is multiplied by the same prime number every time the hash code of a property is added. An algorithm like this generates hash codes that are less likely to collide with other hash codes in the same application domain, which is an important characteristic of a good hash code. If you run the test now, it will pass.
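To see the effect of these overrides in isolation, here is a hypothetical standalone snippet (my own illustration, not part of the mapping code), assuming the Employee class defined above; the name values are made up:

```csharp
// Two Employee instances carrying the same name values now compare as
// equal and produce identical hash codes.
var one = new Employee { FirstName = "John", LastName = "Smith" };
var two = new Employee { FirstName = "John", LastName = "Smith" };

Console.WriteLine(one.Equals(two));                        // True
Console.WriteLine(one.GetHashCode() == two.GetHashCode()); // True
```

This is essentially the comparison NHibernate relies on when it needs to decide whether two entity instances represent the same database record.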
A foreign key association based on a composite ID is where things get a bit tricky. There is one thing you need to keep in mind while mapping associations that use composite foreign keys: composite foreign keys work only if the composite ID is mapped as a component. This means that the properties that constitute a composite ID must be moved into their own class, as we do with components.
We will extend our previous example to see how associations based on composite IDs work. Let's begin by moving the Firstname and Lastname properties into their own class. Here is how the new EmployeeId class should look:
public class EmployeeId
{
    public virtual string Firstname { get; set; }
    public virtual string Lastname { get; set; }

    public override bool Equals(object obj)
    {
        var otherEmployeeId = obj as EmployeeId;
        if (otherEmployeeId == null) return false;
        return string.Equals(Firstname, otherEmployeeId.Firstname) &&
               string.Equals(Lastname, otherEmployeeId.Lastname);
    }

    public override int GetHashCode()
    {
        var hash = 17;
        hash = hash * 37 + Firstname.GetHashCode();
        hash = hash * 37 + Lastname.GetHashCode();
        return hash;
    }
}
Notice that we have not only moved the Firstname and Lastname properties, but also moved the implementation of the Equals and GetHashCode methods from the Employee class into this class. All identifier equality checks will now be based on the EmployeeId class, and hence it is important to have those methods implemented here.
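If you still want entity-level equality on Employee itself, one option (my own sketch, not something NHibernate requires once the component implements equality) is to delegate to the identifier component, assuming Employee exposes its composite key through an Id property of type EmployeeId:

```csharp
// Hypothetical delegation of entity equality to the EmployeeId component.
public override bool Equals(object obj)
{
    var otherEmployee = obj as Employee;
    if (otherEmployee == null) return false;
    return Equals(Id, otherEmployee.Id);
}

public override int GetHashCode()
{
    return Id == null ? 0 : Id.GetHashCode();
}
```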
We do not yet have any association on the Employee entity that we can use. Let's add the usual benefits collection on Employee. The next code listing shows a minimal Employee and Benefit entity implementation:
public class Employee
{
    public virtual EmployeeId Id { get; set; }
    public virtual DateTime DateOfJoining { get; set; }
    public virtual ICollection<Benefit> Benefits { get; set; }

    public virtual void AddBenefit(Benefit benefit)
    {
        benefit.Employee = this;
        if (Benefits == null)
            Benefits = new List<Benefit>();
        Benefits.Add(benefit);
    }
}

public class Benefit
{
    public virtual int Id { get; set; }
    public virtual Employee Employee { get; set; }
}
This code should be familiar to you. It is almost exactly the same code we have been using in all the previous chapters. The only difference is that the identifier on the Employee entity is not a simple integer type but a complex type, EmployeeId. Let's take a look at the mapping of these entities. The following code snippet shows how the Employee entity, with its new composite ID, is mapped:
public class EmployeeMapping : ClassMapping<Employee>
{
    public EmployeeMapping()
    {
        ComponentAsId(e => e.Id, idMapper =>
        {
            idMapper.Property(e => e.Firstname);
            idMapper.Property(e => e.Lastname);
        });
        Property(e => e.DateOfJoining);
        Set(e => e.Benefits, mapper =>
        {
            mapper.Key(k =>
            {
                k.Columns(colMapper => colMapper.Name("Firstname"),
                          colMapper => colMapper.Name("Lastname"));
            });
            mapper.Cascade(Cascade.All.Include(Cascade.DeleteOrphans));
            mapper.Inverse(true);
            mapper.Lazy(CollectionLazy.Extra);
        },
        relation => relation.OneToMany(mapping => mapping.Class(typeof(Benefit))));
    }
}
We are familiar with most parts of the preceding mapping. Two parts are relevant to the discussion of composite IDs. First, instead of using the ComposedId method, we have used the ComponentAsId method to map the properties on the EmployeeId class as the identifier. The second part is the Set mapping for the Benefits collection. Even in that, the only different part is the mapping of the key columns. Normally we would have only one key column to map here, but due to the composite identifier, we have to map two key columns, namely Firstname and Lastname. A slight downside here is that the column names are specified as strings, so the mapping is not refactor-friendly. Let's take a look at the mapping of the other side of the association. The following code snippet shows the mapping of the Benefit entity:
public class BenefitMapping : ClassMapping<Benefit>
{
    public BenefitMapping()
    {
        Id(b => b.Id);
        ManyToOne(b => b.Employee, mapper =>
        {
            mapper.Columns(colMapper => colMapper.Name("Firstname"),
                           colMapper => colMapper.Name("Lastname"));
        });
    }
}
Again, nothing new here except for the part where multiple columns are declared as part of the mapping of the many-to-one association.
At this point, you should be able to use the Employee to Benefit association as usual. The following unit test can be used to verify that the previous mapping works to our satisfaction:
[Test]
public void BenefitsAssociationIsSavedCorrectly()
{
    using (var tx = Session.BeginTransaction())
    {
        var employee = new Employee
        {
            Id = new EmployeeId
            {
                Firstname = "firstName",
                Lastname = "lastName"
            },
            DateOfJoining = new DateTime(1999, 2, 26)
        };
        employee.AddBenefit(new Benefit());
        Session.Save(employee);
        tx.Commit();
    }
    Session.Clear();
    using (var tx = Session.BeginTransaction())
    {
        var id = new EmployeeId
        {
            Firstname = "firstName",
            Lastname = "lastName"
        };
        var employee = Session.Get<Employee>(id);
        Assert.That(employee.DateOfJoining.Year, Is.EqualTo(1999));
        Assert.That(employee.Benefits.Count, Is.EqualTo(1));
        tx.Commit();
    }
}
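Beyond ISession.Get&lt;T&gt;, mapping the composite ID as a component also lets you filter on individual key parts in queries. The following is a sketch using NHibernate's standard LINQ provider; the filter value is illustrative, and Session and the mappings are assumed to be configured as above:

```csharp
using NHibernate.Linq;

// Filter employees on one part of the composite ID via the Id component.
var employees = Session.Query<Employee>()
                       .Where(e => e.Id.Lastname == "lastName")
                       .ToList();
```

Because the composite ID is mapped as a component, e.Id.Lastname translates to a straightforward comparison against the Lastname column in the generated SQL.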