Graphinder – Encapsulating persistence layer


Recently we’ve been talking a little about the difference between domain and persistence models.
The main reason I mentioned for using both is making our domain logic completely persistence agnostic.
But how do we actually tell another developer using our persistence layer API to follow the exact behavior we want?
Since the database context, persistence models and ways of mapping are generally visible outside of the data access layer, I began to search for a way to get rid of that visibility.
This article will elaborate in short on encapsulating the persistence (or data access) layer, so that only the things we want to be used from other layers are exposed.

Encapsulating database context

Let’s think of a simple DbContext that EntityFramework offers:
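The original snippet is not reproduced in this copy, so here is a minimal sketch of such a context; the AlgorithmEntity set is an assumed name for illustration:

```csharp
using System.Data.Entity;

// A public context: visible to every project that references this assembly.
public class AlgorithmContext : DbContext
{
    public DbSet<AlgorithmEntity> Algorithms { get; set; }
}
```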

Wherever our data access project is referenced, AlgorithmContext is accessible. That’s a huge NO-NO!

So how about we make our AlgorithmContext internal?
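The change itself is a one-word diff (entity names assumed as before):

```csharp
// internal: visible only within the data access assembly
internal class AlgorithmContext : DbContext
{
    public DbSet<AlgorithmEntity> Algorithms { get; set; }
}
```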

But what does the internal keyword actually mean? Let’s see:

The internal keyword is an access modifier for types and type members. Internal types or members are accessible only within files in the same assembly.

As we’ve already made our data access layer a separate project, it’s a separate assembly. Therefore, our context won’t be visible to other projects referencing it.
So far, so good. First goal achieved: encapsulating the persistence layer.
But still, we need to expose it somehow, so layers referencing our data access layer can actually, you know… access data. :)

Exposing data access

As stated in the previous article, I’ve moved from the repository pattern to a queries and commands approach.
Let’s quickly recall what an example query would look like. I’ll use a simple example based on actual Graphinder code, as opposed to the examples used in the previous article, to get closer to a real-world case:
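The original listing is omitted here, but a query along these lines (class and method names are assumptions based on the surrounding description) illustrates the point:

```csharp
public class GetAlgorithmByIdQuery
{
    // internal field: usable inside the assembly, invisible outside it
    internal AlgorithmContext Context;

    public AlgorithmEntity Execute(Guid algorithmId)
    {
        return Context.Algorithms.Find(algorithmId);
    }
}
```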

As AlgorithmContext is already internal, I can’t expose it outside the assembly with the public keyword, because that would cause an accessibility mismatch. That’s actually great, because whenever I attempt to use my context outside the data access project, there is no way I can accidentally raise its accessibility above internal.

Outcome

Query accessibility:

Context accessibility:

“Cannot access internal field ‘Context’ here”

Exactly what we wanted here! Cool!

But we want our API to be idiot-proof, don’t we?
No playing around with persistence models here. No knowledge of how anything is mapped or how anything is persisted. Right?
Just make it all internal then. If we’re encapsulating the persistence layer properly, only knowledge of a very few things is exposed:

  • How to Query data from the database
  • How to send Commands so that data is either created/modified or processed

No other functionality should be exposed here.

Wanna play around with my approach? Go visit GitHub then:


Last word of reminder

Such a design is not reflection-proof – no code is. It’s also not idiot-proof if your teammates change a signature from internal to public because it’s “easier”. Messier is always easier.

Graphinder – Queries and Commands vs Repository pattern


While planning a data access layer for the Graphinder project, I found myself really frustrated with the Repository pattern, which I had followed for quite some time like a mantra, spread by many businesses and by Microsoft itself. But since Entity Framework 6 already provides a mix of the repository and unit of work patterns, why would I build yet another abstraction over it?
That’s when I stumbled upon the concept of Queries and Commands, known especially from CQRS (Command Query Responsibility Segregation). I would like to emphasize here that I’m not 100% into CQRS. I like to think of myself as a picker of what’s best for me from any pattern or approach, newer or older. I’m nobody’s blind follower.

What’s wrong with Repository pattern

Repository is a good old pattern that has been around for quite some time. Its purpose is quite simple:

The repository mediates between the data source layer and the business layers of the application. It queries the data source for the data, maps the data from the data source to a business entity, and persists changes in the business entity to the data source. A repository separates the business logic from the interactions with the underlying data source or Web service.

So we have mediation, we have mapping (post about persistence models: Domain models vs Persistence models) and we have separation. And we do agree with those points, right? Right.
But things are not so simple out there. Repositories often end up as big, bloated, almost-god-objects with lots of one-time-usage methods, often spanning hundreds and hundreds of lines. When we see such a class in the domain/business layer, what do we think first? Decomposition!

Why has such a thought not been born here? Why encourage things like this (example exaggerated on purpose):
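The exaggerated listing itself is missing from this copy; an interface of the kind being criticized (all names invented for illustration) might look like:

```csharp
public interface IUserRepository
{
    User GetById(Guid id);
    User GetByName(string name);
    IEnumerable<User> GetByNameAndMinimalDebt(string name, decimal minimalDebt);
    IEnumerable<User> GetByNameAndMinimalDebtOrderedByName(string name, decimal minimalDebt);
    IEnumerable<User> GetByNameAndMinimalDebtOrderedByNameDescending(string name, decimal minimalDebt);
    // ...and one more method for every screen and use case that comes along
    void Insert(User user);
    void Update(User user);
    void Delete(User user);
}
```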

Of course, we can go with a query that takes one query-params object with a million properties inside it. And we can keep going like this. But why?

Queries and Commands – decomposition

Why not go the other way round? When I see such a big object with big responsibilities, I think decomposition.
I don’t care if it is the most adored pattern in the world. From the moment I look at it, I can see how a great idea goes wrong much too often on the implementation side. Why wouldn’t I go for something like this instead:
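The original snippet is omitted; a single-purpose query class in this spirit (names are assumptions) could be:

```csharp
public class GetIndebtedUsersQuery
{
    internal IUserContext Context;

    // One query, one responsibility: no god-object in sight
    public IReadOnlyCollection<User> Execute(decimal minimalDebt)
    {
        return Context.Users
                      .Where(u => u.Debt >= minimalDebt)
                      .ToList();
    }
}
```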

Can’t a simple insert operation be like this instead:
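The listing is missing here as well; a matching command sketch (assumed names, with the guard clauses that the testing discussion refers to) could be:

```csharp
public class AddUserCommand
{
    internal IUserContext Context;

    public void Execute(string name, decimal debt)
    {
        // These guards are the testable part of the command
        if (string.IsNullOrEmpty(name))
            throw new ArgumentException("Name cannot be null or empty.", nameof(name));
        if (debt < 0)
            throw new ArgumentOutOfRangeException(nameof(debt), "Debt cannot be negative.");

        Context.Users.Add(new User { Name = name, Debt = debt });
        Context.SaveChanges();
    }
}
```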

I know what you would ask now: how do I test, then?
The question is, what exactly would you like to test here, except maybe a null or empty string or an invalid debt value (which you can still test)? Or maybe it’s the ORM you want to test? But let’s assume you really want to. What now?
You could simply create an abstraction, e.g. IUserContext instead of DbContext, and expose it either through the constructor or through a public property. You can mock it now; you can stub it now.
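A minimal sketch of such an abstraction, assuming an EF6 context with a Users set:

```csharp
public interface IUserContext
{
    IDbSet<User> Users { get; }
    int SaveChanges();
}

// The concrete context stays internal to the data access assembly;
// DbContext.SaveChanges already satisfies the interface member.
internal class UserContext : DbContext, IUserContext
{
    public IDbSet<User> Users { get; set; }
}
```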

So, how about replaceability here?
It has been said that databases being swapped in business applications is largely a myth.
I would agree: it is quite rare. But that doesn’t stop you from being prepared!
With the rise of NoSQL solutions, it will also be much easier to play around with and test new technologies.
But hey, we’ve already got it hidden from both the domain and services layers, haven’t we? The caller doesn’t care what is called up there. And that’s our goal.
As for replacing… Well, provided we don’t want to go through each query and command class to make the actual swap, we might want to expose the Context and just inject it as a property through any mainstream IoC container.
As an example, Autofac offers it out of the box: http://docs.autofac.org/en/latest/register/prop-method-injection.html#property-injection
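A sketch of how that could look with Autofac’s PropertiesAutowired, assuming the queries expose their context as a public settable property:

```csharp
var builder = new ContainerBuilder();

builder.RegisterType<UserContext>()
       .As<IUserContext>()
       .InstancePerLifetimeScope();

// Any public settable properties (e.g. an IUserContext Context property)
// will be resolved and injected by the container:
builder.RegisterType<GetIndebtedUsersQuery>()
       .PropertiesAutowired();

var container = builder.Build();
```

Swapping the database then comes down to registering a different IUserContext implementation in one place.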


Let’s sum it up then:
I guess I’ve just unsubscribed myself from Repository fan club.

Graphinder – Domain models vs Persistence models


As we’ve moved on from the domain logic of optimization problems and the algorithms for solving them, I’d like to start a series of a few posts about approaches to working with the persistence layer. Today I’d like to elaborate a little on the differences and usage of domain models vs persistence models (also called ‘entities’ in the ORM world). But first of all, let’s start with a few words about each of the model types: what they are and what we use them for. I will assume that we’re using Entity Framework 6 as the ORM of choice for each example.

Domain models vs Persistence models – what and what for?


Domain models

Generally speaking, these are models that represent the real world and its objects as closely as possible. In both pure DDD and in OOP in general, we would see such models taking responsibility for some of the business logic that concerns them (provided that SRP is not broken, of course). While this approach is quite popular and is the closest to the OOP paradigm, with the rise of ORMs the so-called Anemic Model concept has emerged. In that approach, models are mostly property bags and their business logic is often placed higher, in the service layer. I strongly oppose that approach, but there are possibly cases (like RAD) where it makes sense, whenever high re-usability is encouraged.

Persistence models

While in many cases the domain tends to be simple and easy to persist as-is, it also tends to grow in complexity during the application life cycle.
Persistence models attempt to solve that problem by abstracting away the model representation required for persistence, which would often be considered at least inappropriate, if not wrong, in the business-logic world. When the domain is simple, we can often just map properties one-to-one with some ORM-specific changes to meet persistence requirements. That is also the moment when we think of merging the two mentioned models into one. But should we?

Domain models vs Persistence models – to separate or not?

When models in both the domain and persistence layers seem to be identical, an idea is born in the developer’s head: let’s merge them!
That might be a good idea whenever there is a strong tendency to model data one-to-one, or whenever development is database-first driven rather than domain-first driven. Whenever an organization has at least a few DB admins/developers, or analysts that model the domain with the client and produce ready-to-convert UML diagrams, this can be a common approach.

Simple domain

Let’s take a look at two sample models that look almost the same and seem to be ideal candidates for merging.
They might come from a Database-First approach, or the domain is so simple that it’s a one-to-one mirror of the persistence layer.
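The sample models aren’t included in this copy; a pair along these lines (names assumed) matches the description:

```csharp
// Domain model: encapsulated, meant to always stay in a valid state
public class Employee
{
    public Guid Id { get; private set; }
    public string Name { get; private set; }
    public ICollection<Employee> Subordinates { get; private set; }
}

// Persistence model: a plain property bag; Subordinates is virtual
// so that EF6 can create its relationship proxies
public class EmployeeEntity
{
    public Guid Id { get; set; }
    public string Name { get; set; }
    public virtual ICollection<EmployeeEntity> Subordinates { get; set; }
}
```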

As persistence models can enter an invalid state, as opposed to domain models, we might not even bother with encapsulation, as they won’t ever leave the persistence layer at all. For Entity Framework purposes, we also needed to refactor the Subordinates signature to virtual ICollection so that EF6 can create proxies for N-to-many relationships.
Now if we think of merging those two models into one, it would be a no-brainer:
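The merged listing is omitted; a sketch of the single model (same assumed names) could look like:

```csharp
public class Employee
{
    // Parameterless constructor kept for Entity Framework,
    // which populates properties through reflection
    protected Employee() { }

    public Employee(string name)
    {
        Name = name;
    }

    public Guid Id { get; private set; }
    public string Name { get; private set; }
    public virtual ICollection<Employee> Subordinates { get; private set; }
}
```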

A must-have applied here is a parameterless constructor, which Entity Framework will use to populate properties through reflection.

A not so simple domain

Now let’s think of a problem straight from the Graphinder project. Let’s take our Simulated Annealing algorithm and all the possible cooling strategies it can use. I actually need to map an interface to the DB here:
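The listing is missing in this copy; the situation can be sketched as follows (property names are assumptions, not the actual Graphinder code):

```csharp
public class SimulatedAnnealing
{
    public double CurrentTemperature { get; private set; }

    // An interface-typed property: Entity Framework cannot map this directly
    public ICoolingStrategy CoolingStrategy { get; private set; }
}
```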

But how do I map it to the database? I cannot treat each implementation as a separate entity (I mean, I can, but it won’t make sense), as there won’t be any difference between them. ICoolingStrategy looks like this:
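The interface listing is not reproduced here; for a family of stateless cooling strategies, a plausible shape would be:

```csharp
public interface ICoolingStrategy
{
    // Returns the temperature for the next iteration
    double Cool(double currentTemperature);
}
```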

So… what can I do?
I can introduce some sort of factory based on a persisted enum defining the type of strategy, to be recreated on deserialization, e.g.:
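A sketch of such an enum-based factory (strategy names are assumptions):

```csharp
public enum CoolingStrategyType
{
    Linear,
    Exponential
}

public static class CoolingStrategyFactory
{
    public static ICoolingStrategy Create(CoolingStrategyType type)
    {
        switch (type)
        {
            case CoolingStrategyType.Linear:
                return new LinearCoolingStrategy();
            case CoolingStrategyType.Exponential:
                return new ExponentialCoolingStrategy();
            default:
                throw new ArgumentOutOfRangeException(nameof(type));
        }
    }
}
```

Only the enum value needs to be persisted; the strategy object itself is rebuilt on load.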

Now I can recreate appropriate strategy whenever needed.
But the strategies I use in algorithms are stateless. What if I had several implementations of the interface that do have state?
Well, if they only differ in logic, I could go with the Table-per-Hierarchy (TPH) approach in Entity Framework and let it create a so-called ‘Discriminator’, or even flatten the hierarchy to one entity myself.
But what if the implementations might differ throughout the hierarchy?
Well, then the only valid way would be to go with the Table-per-Type (TPT) approach, to preserve both flexibility and reusability in case anything changes.
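For the TPH case, a minimal sketch (assumed names; EF6 adds the Discriminator column automatically for a mapped hierarchy):

```csharp
public abstract class CoolingStrategyEntity
{
    public Guid Id { get; set; }
}

public class LinearCoolingStrategyEntity : CoolingStrategyEntity { }

public class ExponentialCoolingStrategyEntity : CoolingStrategyEntity
{
    // State that differs per implementation still fits in the single table
    public double CoolingRate { get; set; }
}
```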

Domain models vs Persistence models – when to use what?

After doing a lot of research, I came to the conclusion that, whenever possible, you should go with both, and even introduce ViewModels as an additional representation used only by views. If you find yourself flexible enough without the persistence model approach, make sure that you don’t throw domain models around everywhere; make a clear separation.

Finally, things to keep in mind when making the decision:

  • Even if you decide to skip one of the model types, make sure the domain has no idea how to persist itself – that’s the persistence layer’s responsibility!
  • Make sure you don’t throw the same model around throughout the whole application. An entity might be good for the persistence layer, but throwing it into your views can end up as mumbo-jumbo code!