Visual Studio meets NDepend – use cases




As a long-time user of Visual Studio and a slightly shorter-time (but still long-time) user of JetBrains ReSharper, I've been looking for tools that would fit into my developer toolkit and cover most of the cases I encounter at work. I've decided to split my needs into general groups and then look for tools that would help me work things out.

One of them is decompiling the .dlls that I work with and would love to understand internally. I started with ReSharper's built-in decompiler, but since I now own the Red Gate Developer Bundle, including the famous Reflector, I choose it over R# in advanced scenarios.
Another one is definitely hunting performance bottlenecks and general memory profiling. That's where I rely on Red Gate's products too.

But today's post is not about those things. There is actually one tool I have looked for a few times already and failed to find: a way to analyze my programs for code quality, for compliance with the conventions I enforce, and for general static code/program analysis.
I've tried a few tools already, mostly free ones, and the one that came closest to meeting my requirements was StyleCop. It did its job, well… OK. But it was counterintuitive and lacked a decent analysis summary. Generally, the UX sucked.
Recently, however, I was contacted by Patrick Smacchia with an offer to try out his tool for general code analysis.
Let’s see then what happens when Visual Studio meets NDepend!


Review spoiler

What you probably expect here is a classic functionality review, like in any other product review: we have this, and that, and that.
I chose to go the other way round and think of use cases where this tool actually fits.
My hope is that this makes it a little easier to decide whether it fits your workflow or not! 😉


Case I – Annual Code Quality Review


Use case

A team leader, or the developer team as a whole, wants to review the code quality of a currently worked-on project to ensure it meets their criteria of 'clean code'.

Proposed way of analysis

Depending on what we want to check, let's scan our solution assemblies against the chosen criteria and visualize everything with a heat map (called a 'Metric Treemap' in NDepend). As the general Size rule I will choose # lines of code (LOC).

My proposed criteria are (a CQLinq sketch follows the list):

  • lines of code, per method (Level: Method | Color: # lines of code (LOC))
  • number of methods, per type (Level: Type | Color: # Methods)
  • Cyclomatic Complexity, i.e. the number of independent paths through the source code, per type (Level: Type | Color: Cyclomatic Complexity (CC))
  • percentage comment, per type, if the code is an API exposed externally (Level: Type | Color: Percentage Comment)
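
For anyone preferring queries over clicking through the UI, criteria like these can also be expressed in NDepend's code query language, CQLinq. A minimal sketch for the first criterion (the 30-line threshold is my own pick, not a verified NDepend default):

    warnif count > 0
    from m in JustMyCode.Methods
    where m.NbLinesOfCode > 30
    orderby m.NbLinesOfCode descending
    select new { m, m.NbLinesOfCode }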

What do we get as a result?
As output for every criterion we've chosen, we get a nice, colorful heat map representing the places we need to consider for refactoring. Below is an example map of lines of code per method in a solution (grouped per assembly):

MetricTreemap


Proposed way of acting

With the information we have and the tools mentioned at the beginning of the article, we can now do quite a lot to clean up our code.
Let's take a 'too long method' code smell that NDepend pointed out to us. Issues are easy to spot, as the map works in a natural way: the closer to red, the more serious the problem.
Now, using the power of ReSharper to refactor the code, let's extract part of the logic from the too-long method into another (possibly private) one; a short before/after sketch follows the steps:

  • Select part of logic you think is suited for extracting
  • Right-click -> Refactor -> Extract -> Extract Method…
  • Choose a suitable name that clearly conveys the extracted method's responsibility within the method we are extracting from
  • Voila!
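
A minimal illustration of the before/after (the class and member names here are invented for the example, not taken from any real project):

    using System;

    public class User { public string Email { get; set; } }

    public interface IUserRepository { void Save(User user); }

    public class UserService
    {
        private readonly IUserRepository _repository;

        public UserService(IUserRepository repository) { _repository = repository; }

        // After 'Extract Method': RegisterUser now reads as a summary of its steps.
        public void RegisterUser(User user)
        {
            ValidateUser(user);
            _repository.Save(user);
        }

        // Extracted, private validation logic that previously lived inline.
        private void ValidateUser(User user)
        {
            if (user == null) throw new ArgumentNullException(nameof(user));
            if (string.IsNullOrWhiteSpace(user.Email))
                throw new ArgumentException("Email is required.", nameof(user));
        }
    }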

If things get really messy, you may have to think about dividing the job up to avoid technical debt. 😇
Please also note that basic extraction mechanics are available in Visual Studio as well (though not as good as in ReSharper!).


Case II – Defining crucial, yet external dependencies


Use case

Throughout the lifetime of a project, the team (or management, or both) can come up with the idea of switching out a few parts of the current solution to meet new business needs (or pending deals).
A possible case could be migrating from Entity Framework to NHibernate (or the other way round, or moving back to plain old SQL), switching the current JSON parsing library, or perhaps moving to .NET Core.

Proposed way of analysis

Let's consider a case where we've decided to migrate from Entity Framework to another ORM. Before we even think about how complicated replacing the dependency would be, we need to analyze how deep that dependency goes. Generally, the first thought would be to check our data access layer and look at its dependencies.
Let's give it a shot then and look at the Dependency Graph that NDepend provides when we right-click our data layer on the graph and choose Keep only nodes that I use:

data-access-dependencies

Alright, we can clearly see that there is a dependency on Entity Framework here. But what if our implementation is leaky?
We should check whether Entity Framework is referenced by other layers of our solution.

Let's go back to all assemblies on the dependency graph by choosing the first icon on the left in the Dependency Graph menu: View Assemblies.
Now let's find Entity Framework again and this time choose Keep only nodes that use me.
We should end up with a dependency graph of what is really using our beloved (but soon to be migrated from) ORM framework:

using-entity-framework

We now clearly see that there is one more direct dependency on the ORM we want to swap out (GatewayApi in this particular case) that we need to consider when deciding to change technology.


Proposed way of acting

Since we have already identified the possible problem points, including the ORM leaking out of the data access layer, we need to analyze the complexity of the code we would have to rewrite. NDepend can help us a little here too, if we analyze the actual complexity of the dependency usage with the heat maps used earlier, or change the size of nodes in the dependency graph to visualize how many calls we make to a particular dependency.
This is, of course, where NDepend's usefulness ends, as beyond the analysis stage we actually need to get into the code and make decisions.
The great thing is that we already have insight into the problem and can decide with broader knowledge of how many places can actually go down when anything (or everything 😇) goes wrong.


Other use cases and verdict

While the two mentioned above were the ones that immediately came to my mind, I can think of a few other (less likely) cases where the marriage of the Visual Studio toolkit and NDepend would bloom.
We could, for example, troubleshoot performance issues (but not prematurely!) using Red Gate's performance toolkit for .NET developers, alongside the Cyclomatic Complexity, memory allocated per type, and number of IL instructions that NDepend can visualize for us with the Metric Treemap.

Verdict?
Would I recommend using NDepend for analyzing the projects we're working on and keeping them within bounds? Definitely!
Can I see any points of improvement for NDepend? Yes, perhaps. The dependency graph UX needs a little more work, and I couldn't find a 'Search…' option in it, which made finding a smaller node a nightmare.
But in the end, I guess it all comes down to sharing our feedback with Patrick (the creator) to keep the improvements coming.

Setup .NET Core on Ubuntu

MS <3 Linux


.NET Core 1.0 is here, and it's a great, great opportunity to start playing with it not only on the Windows platform but also on Linux. Today I will show you not only how to run .NET Core but how to set up a whole developer environment for developing .NET.
Since one of Microsoft's main goals was multiplatform support, let's take a quick look at how to set up .NET Core on Ubuntu.

Why Ubuntu? Well… it's popular and easy. Also, Microsoft used it as the platform of choice for their Bash and Docker support for Windows, so the decision seemed quite straightforward to me. We'll be working here on Ubuntu 14.04 LTS, as it's the most widespread version right now.

You can also try other platforms, as I did, especially Red Hat Enterprise Linux, which is totally free for developers at http://developers.redhat.com/. They also fully support .NET on RHEL and are part of the .NET Foundation. Quite rock-solid backing, isn't it?


Getting .NET Core on Ubuntu

Well, that's probably the easiest part, as it's well documented (for now) on the Microsoft .NET Core site.
As stated in the docs, execute the following one after another in the Terminal to install the framework:
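
At the time of writing, those commands for Ubuntu 14.04 looked like the ones below; double-check the docs, as the apt feed and the SDK package version change between previews:

    sudo sh -c 'echo "deb [arch=amd64] https://apt-mo.trafficmanager.net/repos/dotnet-release/ trusty main" > /etc/apt/sources.list.d/dotnetdev.list'
    sudo apt-key adv --keyserver apt-mo.trafficmanager.net --recv-keys 417A0893
    sudo apt-get update
    sudo apt-get install dotnet-dev-1.0.0-preview2-003121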

Let's check if that worked. Type dotnet in the Terminal and see if the command is recognized:

Setup .NET Core on Ubuntu

Alrighty. Working as intended!


Enter Yeoman – our beloved code scaffolder

Since what the plain .NET Core framework offers us in terms of scaffolding is quite minimalist (we can only dotnet new to create an empty project), we must turn to other tools. One of the most popular (if not the most popular) scaffolders is Yeoman. If you're a Linux fan, you already know it very well from other projects.
In order to install Yeoman, we need two dependencies: Node.js and npm.
Since Ubuntu 14.04 ships a version of Node.js too old to run Yeoman, we need to update it. That's the part where I struggled a little, as most of the generic Ubuntu approaches I knew did not work out. I'm no Ubuntu pro, so I turned to the internet and browsed for a working solution.

This one always seems to work, and I've read that it's generally THE recommended way of updating/installing Node.js:
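
In my case that was the NodeSource setup script for the 5.x line (matching the version Yeoman was happy with at the time):

    curl -sL https://deb.nodesource.com/setup_5.x | sudo -E bash -
    sudo apt-get install -y nodejs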

If the update was successful, typing node --version should return version 5.X, where X is the latest version of the 5th release.
Now that we have a proper Node.js version, let's proceed to the Yeoman installation.

Let's install Yeoman itself with the command:
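
    sudo npm install -g yo    # installs the Yeoman CLI globally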

Then we need the generator for .NET Core templates:
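
    sudo npm install -g generator-aspnet    # the generator behind 'yo aspnet'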

If everything went smoothly, we should see Yeoman welcoming us in the generator when we run yo aspnet in the Terminal:

yo-aspnet


Optional step – Visual Studio Code

That might be a no-no for the wide range of vim or Sublime Text fans out there, but if you're willing to check what Microsoft has to offer in lightweight, modular code editors, give VS Code a try.
Since VS Code is generally distributed through the http://code.visualstudio.com/ site, there is no easy way to get it through the terminal.

I managed to get it by using wget to download the .deb package from the Microsoft download site to the Downloads folder and then installing it via dpkg:
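
Something along these lines did the trick (assuming the standard go.microsoft.com/fwlink host in front of the LinkID mentioned below):

    wget -O ~/Downloads/vscode.deb "https://go.microsoft.com/fwlink/?LinkID=760868"
    sudo dpkg -i ~/Downloads/vscode.deb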

I have no idea whether the ?LinkID=760868 part will change; I will keep in touch with Microsoft to clarify that.


But, but… I want to automate!

Since we're now in the raging era of DevOps, we all want to automate. And that's great! We should, whenever we can.
We can easily pack all the lines above into a bash .sh file and run it with a preceding sudo to get elevated privileges.

Not enough? That's fine, as I've already prepared such a script in my ubuntu-helpers repository on GitHub.
Just clone the repo to your Linux machine:
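
The command follows the usual GitHub pattern (the account segment below is a placeholder; use the actual URL from the repository page):

    git clone https://github.com/<your-github-account>/ubuntu-helpers.git
    cd ubuntu-helpers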

Then just run the bash script with (the script name below is illustrative; check the repository for the actual one):
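
    sudo bash setup-dotnet-env.sh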

If you're willing to contribute to ubuntu-helpers, feel free to do so on GitHub.

Graphinder – Application architecture


In a previous post I mentioned that I had come to an agreement with myself on Graphinder's application architecture.
As I'm now slowly moving on to the other services working around the whole infrastructure, it would be wise to wrap things up and pinpoint possible points of failure or misuse. I'll also share a little insight into my plans for putting such an architecture in place on Azure, but wider coverage of that topic will come in the next article.

Application architecture overview

To get you acquainted with my concept right from the start, I guess the wisest idea would be to start with a graphical visualization.
As it's still evolving (though not as dramatically as before), please keep in mind that it's far from ideal in representing every aspect of the architecture that's out there. It would be impossible to organize it all in the small space of a diagram.



Application architecture diagram

Alrighty. We have three web applications I'm currently developing for Graphinder (WorkerApi, GatewayApi and Web) and one that is currently on hold (Users).
For anyone who's at least a little into the microservices idea, one thing will seem odd here: there is no service directory (or registry; I've seen many names around the web), and my services are not designed to use service discovery.
Why not? Well, I'm really, really new to microservices. I've said before that I'm quite new to Reactive Extensions, but compared with my experience with microservices, I'd have to call myself an Rx.Net pro. But I'm not.
Step by step, over the next iterations of the project, I'm going to improve the whole architecture, but hey, first things first. Let's at least deliver a minimal project by the end of May!


Services communication flow

Since I have more than one communication flow from the frontend down to the database, I've separated responsibilities in the algorithms domain:

  • Algorithms.GatewayApi – handles classical requests like 'get me a data set' or 'accept a new data set and persist it'; GatewayApi also stands as the point for requesting problem solutions, manages the queue of requests, and acts as the registration point for new WorkerApi instances; it also knows about the SignalR hubs that will accept live progress reports from workers
  • Algorithms.WorkerApi – works on a problem received from the gateway; persists the current state of the worked problem; posts notifications to the address given by the gateway; knows nothing of its surroundings except algorithms_db and its parent GatewayApi

Example workflows

  • User posts a new solution finding request → Web calls Algorithms.GatewayApi with the request data → GatewayApi enqueues the request and calls back with what has been done → Web informs the user what has been done
  • User opens the view for a currently worked algorithm → Web connects the user to the SignalR hub they requested ↔ Worker keeps posting progress to the hub so that the user has live feedback on what's going on
  • User requests the historical data of a once-completed solution finding → Web calls GatewayApi for the archival data → GatewayApi takes the data from algorithms_db and returns it to Web → Web displays the archival data to the user

The list of possible scenarios for this design is long, but I hope you get the idea of why things have been split up here.
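
To make the first flow a bit more concrete, here is a minimal sketch of how GatewayApi's enqueue endpoint could look. All type and member names here are invented for illustration and are not Graphinder's actual code:

    using System;
    using Microsoft.AspNetCore.Mvc;

    // Illustrative request payload posted by the Web application.
    public class SolutionRequest
    {
        public Guid DataSetId { get; set; }
        public string Algorithm { get; set; }
    }

    // Illustrative queue abstraction; a WorkerApi instance picks jobs up later.
    public interface IRequestQueue
    {
        Guid Enqueue(SolutionRequest request);
    }

    [Route("api/solutions")]
    public class SolutionsController : Controller
    {
        private readonly IRequestQueue _queue;

        public SolutionsController(IRequestQueue queue)
        {
            _queue = queue;
        }

        // Web posts a new solution finding request; GatewayApi only enqueues it
        // and reports back a ticket the Web app can use to track progress.
        [HttpPost]
        public IActionResult Post([FromBody] SolutionRequest request)
        {
            if (request == null || request.DataSetId == Guid.Empty)
                return BadRequest();

            var ticket = _queue.Enqueue(request);
            return Ok(new { ticket });
        }
    }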

Infrastructure concepts

Since I'm going to communicate over the unsecured HTTP protocol throughout the architecture, I need to put some sort of environment isolation in place.
I've decided to put all services into a separate virtual network, provided out of the box by Azure.
The only valid public endpoint for accessing the whole application will be the HTTPS (443) port on the Web application.
Since I will cover the whole Azure configuration in the next post, I will leave the rest for then.

Points of interest

  • Since the whole architecture is strongly encapsulated, there will be a need for at least one more public endpoint for other applications reaching the services, e.g. mobile applications and other web apps
  • Should I want to add integration with other vendors' services, I would need to decide whether the Web application is the point of connection to them or whether I should provide a small service inside the infrastructure for this, depending on my requirements
  • Should I want to connect to any on-premise (or different Azure subscription) applications I have, I would need to think about a Site-to-Site VPN configuration or an additional endpoint for that (VPN gateway cost vs. performance)

And that's it for today. Let me know what you think of my current design.