Hello world!

Welcome to WordPress.com. This is your first post. Edit or delete it and start blogging!

Posted in Uncategorized | 1 Comment

Moving to CodeBetter

Well the cat is out of the bag. I am really excited to have been invited to move this blog over to the CodeBetter site. I met up with some of the CodeBetter team last year, and discovered that we had a shared set of values, focused on spreading the word to the .NET community about the best practices, tools, and techniques to deliver great software.

I’m looking forward to reaching a new audience, but I also hope those of you who have found this blog useful will follow me over to the CodeBetter site. Frankly, you should be reading what the guys there have to say even if I were not moving.

I have some big posts lined up to help folks learn how to architect applications that use LINQ, based on prior experience and proven practice with other ORM frameworks. You will need to subscribe to the CodeBetter feed or bookmark CodeBetter or this page.

Looking forward to welcoming you all to my new home.

Posted in Computers and Internet | 100 Comments

What is the MVC noise all about, anyway? Pt 3

Forms and Leaky Abstractions


The model for rich-client programming on Windows is event-driven. Your window waits for a message, which the OS dispatches to a handler method the window has registered (the Window Procedure). Frameworks hide this complexity: in Windows Forms you register a delegate to listen for an event, and the underlying framework calls that delegate for you when it receives the message you were listening for.
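As a minimal sketch (the form and button names are invented for illustration), this is all the Windows Forms programmer sees of the message loop:

```csharp
using System;
using System.Windows.Forms;

// The framework hides the Window Procedure: we subscribe a delegate to
// an event, and Windows Forms invokes it when the underlying OS message
// (here, a button click) is dispatched to the window.
public class MainForm : Form
{
    public MainForm()
    {
        var save = new Button { Text = "Save" };
        save.Click += OnSaveClicked;   // register the handler delegate
        Controls.Add(save);
    }

    private void OnSaveClicked(object sender, EventArgs e)
    {
        MessageBox.Show("Saved");      // runs when the Click message arrives
    }
}
```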


Web applications work by a request-response paradigm. The browser sends a message to the server, which spins up your application in response to the request. Your application processes the request and responds with a message that tells the browser what content to display now.


WebForms’ server-side controls brought the familiar world of event-driven programming to the web, by the trick of parsing the request to determine what action had caused it and then raising an appropriate event on the target page.


Windows applications are stateful. While we react to events, the state of the application is preserved across those events, for the lifetime of the application.


By contrast web applications are stateless. The server spins up a new instance of our application to service each request.


As well as giving the illusion of event-driven programming, WebForms also created an illusion of statefulness. The state of our page was preserved between requests using hidden fields embedded within the HTML, and restored when the request was parsed, so that our page felt as though it was stateful.


This illusion of event-driven, stateful behavior provided a familiar model for forms developers to use when tackling web projects and enabled programmers with little web experience to transition to web development.


The question is whether this illusion hinders as much as it helps:

·        ViewState, the mechanism used to provide the stateful illusion, has resulted in heavy pages, as developers fail to grasp how it works ‘under the hood’ and make mistakes in its usage.

·        Developers no longer understand what HTML elements like input controls and forms are for.

·        Mixing dynamic client-side behavior, i.e. Ajax, with server-side WebForms controls rapidly becomes messy.

·        The postback model for raising events results in pages that flicker and are unresponsive.

·        The loading of control state on each request encourages developers to think they are interacting with controls, rather than rendering HTML for display by the browser.

·        The page lifecycle is sufficiently tortuous that it has become a favorite interview question.


WebForms has become a leaky abstraction: it hides so much that the underlying protocols are obscured, and many developers cannot work out how to resolve problems when they hit them.


So the divorce of frameworks like MonoRail from the server-side control model is deliberate. While WebForms developers may look on hesitantly, the truth is that a model that reflects how the web works is in many ways easier to understand.


People who encounter MVC frameworks for the first time are often struck by how useful they seem to be for RESTian services. The reality, of course, is that REST, MonoRail et al. have an affinity because they expose how the web works, instead of obscuring it under a borrowed and ill-fitting paradigm.



One facet of good design is software that is amenable to change. When we think about the cost of software we need to consider the total cost of ownership, not just the cost to build. Sadly that often gets forgotten: people do not add the two costs together when thinking about software, leading to short-term approaches during the build in an effort to reduce cost, even if the overall cost then increases.

When we talk about cost of ownership, the big cost is how amenable the software is to change. As the business the software serves changes, so must the software. While I have been told many times, ‘we do not need to worry about good design, we will never need to change the software’, I have never seen that to be true. In fact, change usually happens long before the first release.

It turns out that Test Driven Development encourages the adoption of MVC frameworks, because UIs tend to require hosting in a context (browser, Windows message pump, or console) that vanilla testing frameworks can’t simulate. It is much easier with TDD to use a Test Double and replace the view with something that you can test.
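To make that concrete, here is a minimal hand-rolled Test Double; the IProductView and ProductController names are invented for illustration, not taken from any framework:

```csharp
// The view is hidden behind an abstraction, so a test can substitute a
// fake that records what it was asked to display.
public interface IProductView
{
    void ShowProducts(string[] products);
}

public class FakeProductView : IProductView
{
    public string[] Shown;   // captured for assertions
    public void ShowProducts(string[] products) { Shown = products; }
}

public class ProductController
{
    private readonly IProductView view;
    public ProductController(IProductView view) { this.view = view; }

    public void List()
    {
        // In a real controller this data would come from the model.
        view.ShowProducts(new[] { "Widget", "Sprocket" });
    }
}

// In a test we assert against the fake instead of spinning up a UI host:
//     var fake = new FakeProductView();
//     new ProductController(fake).List();
//     Assert.AreEqual("Widget", fake.Shown[0]);
```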


Of course you might want to replace the model too if it talks to the DB, but that’s a separate story.


It’s worth repeating an earlier blog here. There is a danger that some xUnit testing frameworks try too hard to solve the problem of hard to test code. They introduce the risk that you are no longer encouraged by those ‘hard to test’ areas to adopt best practices. Testing frameworks need to be smart enough, but no smarter.


So, as other posters have mentioned, the addition of an MVC framework enables people who want to work in a Test-First way. However, it is important to recognize that one facet of putting your code under test is the emergence of good design, so designing for testability is also good design.

Just-enough architecture?


I do have some sympathy with people who resist ideas like ubiquitous language, and do not want to think about entities and value types, repositories and aggregates, when the project is a simple CRUD website with limited behavior. Those people often want to know what ‘just-enough’ design is: the minimum they could do to improve the maintainability of their software. I would say that for those simple web sites MVC, coupled with a pattern like Active Record to shift the data around, is just that: just-enough. I do not want to discourage you from an understanding of Domain-Driven Design or Responsibility-Driven Design, but if you are building a simple CRUD website with limited behavior in the domain, then MVC plus Active Record may well be the way to go. Certainly the popularity of Ruby on Rails would seem to attest that people can ‘get stuff done’ with that approach.

Posted in Computers and Internet | 100 Comments

What is the MVC noise all about, anyway? Pt 2

Monorail and ASP.NET MVC

Now, if you are sold on the principle of clean separation of your model, view, and controller, and you want to start using application controllers and separated views, then you could roll your own implementation. A better solution, however, is to use an off-the-shelf alternative to WebForms, such as MonoRail or, in the future perhaps, the ASP.NET MVC framework.



The Castle Project was founded by Hamilton Verissimo de Oliveira, originally to develop an Inversion of Control container, but it later expanded its mission to common enterprise and web application needs. Key projects include the MicroKernel and Windsor IoC containers, the MonoRail web framework, ActiveRecord for NHibernate, the Aspect# AOP framework, and a dynamic proxy generator.


Monorail is an MVC web framework built on the ASP.NET platform.


MonoRail has distinct controllers and views, which makes the controllers easy to test. A MonoRail controller, for example, looks something like this:


    [Layout("default"), Rescue("generalerror")]
    public class ProductController : SmartDispatcherController
    {
        public void List()
        {
            PropertyBag["products"] = Product.FindAll();
            RenderView("list");
        }
    }
Here ProductController is our controller, identified by its base class (ultimately Controller). The controller exposes a number of public methods, which MonoRail terms actions: requests to the controller to provide a response. Within the action we ask the model (Product) for data, which we pass to the view (via PropertyBag) for display. We then pick an appropriate view and tell it to render (MonoRail will actually render the view with the same name as the action by default; we show the call here for clarity).


MonoRail, acting as the front controller, decides which (application) controller to dispatch to using the URL of the request, which has the form <site>/<area>/<controller>/<action>.rails


The view is a template written in NVelocity (though you can use other view engines, like Brail) that takes the data from the model (via the property bag) and generates HTML for the response:


<h3>Product list</h3>

<a href="new.rails">Create new Product</a>

<table width="100%" border="1" cellpadding="2" cellspacing="0">
#foreach($product in $products)
<tr>
    <td align="center">$product.Id</td>
    <td align="center">$product.Name</td>
    <td align="center">$product.Supplier.Name</td>
    <td align="center">
        <a href="edit.rails?id=${product.Id}">Edit</a> |
        <a href="delete.rails?id=${product.Id}">Delete</a>
    </td>
</tr>
#end
</table>
MonoRail uses a composite view pattern equivalent to master pages. Note the Layout attribute on the controller class: the layout is the master for the controller, and the view we render in response to the request is the child of this master.


In this case our layout looks like:


<!DOCTYPE html PUBLIC "-//W3C//DTD XHTML 1.1//EN" "http://www.w3.org/TR/xhtml11/DTD/xhtml11.dtd">
<html>
<head>
    <meta http-equiv="Content-Type" content="text/html; charset=iso-8859-1" />
    <link rel="stylesheet" href="$siteRoot/Content/css/base.css" />
</head>
<body>

$childContent

</body>
</html>

Where $childContent is the placeholder for our view.


The model is just a Plain Old C# Object, likely representing an object within our domain. In this case we have a simple Product object. We use the Castle Project’s ActiveRecord framework, built on NHibernate, to provide persistence for this object:



[ActiveRecord]
public class Product : ActiveRecordBase
{
    [PrimaryKey]
    public int Id { get; set; }

    [Property]
    public string Name { get; set; }

    [Property]
    public decimal Price { get; set; }

    [BelongsTo]
    public Supplier Supplier { get; set; }

    public static Product[] FindAll()
    {
        return (Product[]) FindAll(typeof(Product));
    }

    public static Product FindById(int id)
    {
        return (Product) FindByPrimaryKey(typeof(Product), id);
    }
}




Hopefully you can see the clean separation of concerns that MonoRail enables.


You can work through the example this code was taken from at: http://www.castleproject.org/monorail/gettingstarted/index.html



Posted in Computers and Internet | 17 Comments

What is the MVC noise all about, anyway? Pt 1

Scott Gu has a post describing the ASP.NET MVC framework that he presented at altnetconf, and you can see the video of that presentation on Scott Hanselman’s site. A lot of the noise has assumed that you understand what that means, and why some folks are jazzed about it, particularly the ALT.NET gang. But on the off chance that MVC frameworks have eluded you so far, let’s talk about what they are and why they are important.

Separation of Concerns

The MVC in the acronym stands for Model-View-Controller. MVC is a pattern for separating what you are representing from how you are presenting it:


·        The View: which knows how to render the UI.

·        The Controller: which knows how to initialize the UI and react to actions.

·        The Model: which encapsulates the information we are interested in displaying.


By separating these different concerns we make it easy to maintain them. Elements adhere to the single-responsibility principle: an object has one and only one reason to change. For example, if we want to change the way we render our elements, but not how we persist or calculate those elements, we don’t have to touch the model. This reduces risk: we do not have to modify code that is unrelated to our changes, and thus risk introducing errors in parts of the system that are orthogonal to the alteration we are making.


Classic forms based programming tends to mix everything together in the event handler: in response to a new price being entered, the code within the event handler calculates a new yield and sets its value on a label on the form. The problem here is that if we want that algorithm elsewhere, we have to first separate it from its interaction with the UI widgets. Often what happens is that instead, the algorithm is repeated elsewhere in a different form. Now when we want to change the algorithm we have to do it in two places. MVC tries to break the high-coupling between these separate concerns.
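As a sketch of that refactoring (the class names, the designer-generated fields, and the simplistic yield formula are all invented for illustration), the calculation moves into a model class that knows nothing about widgets, and the event handler shrinks to mere mediation:

```csharp
// Model: owns the algorithm, knows nothing about UI widgets, and can be
// tested or reused in another form without duplication.
public class YieldCalculator
{
    public decimal CurrentYield(decimal annualCoupon, decimal price)
    {
        return annualCoupon / price;
    }
}

// The event handler now only mediates between the widgets and the model.
// (priceTextBox and yieldLabel are assumed to come from the designer file.)
public partial class BondForm : System.Windows.Forms.Form
{
    private readonly YieldCalculator calculator = new YieldCalculator();
    private decimal coupon = 5.0m;   // illustrative fixed coupon

    private void priceTextBox_TextChanged(object sender, System.EventArgs e)
    {
        decimal price = decimal.Parse(priceTextBox.Text);
        yieldLabel.Text = calculator.CurrentYield(coupon, price).ToString("P2");
    }
}
```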


We use an abstract type (interface or virtual methods) for the controller, view, and model, so that we can provide new implementations of any one element without modifying the others. This is the open-closed principle: software entities should be open for extension, but closed for modification.


You might use this facet of MVC to provide alternative views of the same data, say a graph and a spreadsheet. MVC allows you to swap in additional implementations because it encapsulates display in a view.

Because MVC encapsulates the response mechanism in a controller object, it also allows you to change the way a view responds without changing its visual representation, for example to make a view read-only for low privilege users, or to provide a different workflow for different user roles.

This ability to provide many concrete implementations of an abstract type, polymorphism, is an example of the Strategy pattern. A Strategy is a type that represents an algorithm, and by providing new implementations of that type you can vary details of that algorithm or its internal data structures.
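The graph-versus-spreadsheet example above can be sketched this way (IView, GraphView, and SpreadsheetView are illustrative names):

```csharp
// The Strategy: an abstract type representing the rendering algorithm.
public interface IView
{
    void Render(decimal[] prices);
}

// Concrete strategies: each varies the details of how the data is shown.
public class GraphView : IView
{
    public void Render(decimal[] prices) { /* plot the series as a chart */ }
}

public class SpreadsheetView : IView
{
    public void Render(decimal[] prices) { /* fill a tabular grid */ }
}
```

Because the rest of the application depends only on IView, new views can be added without modifying existing code: the open-closed principle in action.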

Variations on a theme

There is a smorgasbord of terminology used for MVC variants, such as Model-View-Presenter or Application Controller. Fowler has a pretty good summary of the various flavors, with phrases like Supervising Controller, Passive View, and Presentation Model. The variation is mainly around view-model interaction. For example, Supervising Controller recognizes the presence of data-binding frameworks and so admits view-model interaction to refresh data. Passive View has the controller mediate all interaction, and as a result is easier to test. Presentation Model takes state and behavior out of the view into one class, a controller-model combination.


The key is to recognize the common thread: these patterns strive for a clean separation of concerns between what we are representing and how we represent it.


The Observer pattern is a common adjunct to the separation when you have multiple views displayed simultaneously, such as in an MDI application. The model is an observable, and the view or controller can subscribe by registering an observer interface. If the model is updated outside of the current view, for example by a rate feed, then the views are notified to render themselves afresh. However, this is not relevant in the presentation tier of a web application, where you cannot push updates to the browser. Passive View is a variation of MVC in which the controller is responsible for synching state between the model and the view and does not rely on the Observer pattern, so it is a common choice for web applications.
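A minimal sketch of such an observable model, using a C# event as the subscription point (RateModel is an invented name for illustration):

```csharp
using System;

// The model is an observable: views subscribe, and any update (whether
// from a view or from an external rate feed) notifies every subscriber
// so it can render itself afresh.
public class RateModel
{
    public event Action<decimal> RateChanged;

    private decimal rate;
    public decimal Rate
    {
        get { return rate; }
        set
        {
            rate = value;
            var handlers = RateChanged;
            if (handlers != null) handlers(rate);   // notify all observers
        }
    }
}
```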


MVC also tends to support the notion of composite views: a view which itself contains other views, each of which may have its own controller and model. So your page is a view, but all of the controls on the page are also views. A composite view may be used wherever a view can, but also delegates to its child views, which may in turn be composite. Control trees in UIs are an expression of this idea of composition.


The Model-View architecture has been a particularly recurrent variation. In this form the controller and the view are merged into one. It has been prevalent in frameworks that support event-driven programming on windowing systems; MFC’s document/view architecture is a classic example, where the form both knows how to render the controls and how to respond to events. The loser here is testability: to test the controller we need to instantiate the view.


A related danger here is the temptation to merge the model in as well, which is the anti-pattern of business logic in the UI. This is especially true when there is no tradition of separating out the model, such as in forms based development.


Some people don’t like the view talking to the model and try to improve the separation by using a Data Transfer Object between the view and the model. For my part, as long as the view does not embed the business logic but leaves it within the model, and is passed the model instead of holding a reference to it, I’m prepared to let the view access the model’s state. I don’t think the protection of a DTO is always worth the cost. YMMV.


The real smells to avoid are a view that embeds behaviors which ought to be methods on the model, and a view, rather than the controller, that calls those methods.


View-Controller in WebForms


Within WebForms we handle any HTTP request in the code-behind, so we can identify it as having the responsibilities of the controller. The template code gives us the layout of the HTML we return in the response and represents the view. However, the code-behind file and the template are coupled, because controls declared on the aspx become fields in the code-behind, so there is no true separation between them. So within WebForms the built-in model is Model-View, not Model-View-Controller. No shock here, perhaps, as the design goal was to move the forms rich-client programming model server side.


Unfortunately we get very poor testability from a merged view/controller, because we cannot test the controller logic in isolation from the view. This is a problem because, in order to instantiate the view, we need to spin up whatever hosts it: in this case the ASP.NET pipeline. This is difficult to do within most xUnit test runners, and even where it is supported, for example by MSTest, the tests are slow and fragile.


Types of Controllers


In a web application, when the web server (IIS et al.) forwards our application an HTTP request, we need to provide a response. Under ASP.NET an implementation of IHttpHandler forms the end point for the request and sends the response.


Within WebForms, Page-derived objects implement IHttpHandler and are the end points for requests.


The key takeaway is that there is an affinity between controller and page: each page has one controller, each controller has one page. Fowler calls this a Page Controller, and it is the WebForms architecture. Note that although we have a controller, we are still implementing Model-View with WebForms, because the controller and view are coupled.


A Page Controller makes re-use hard. If several actions lead to the same view being displayed, but with differing data, we end up with a lot of conditional logic in the page controller, or duplication between cut-and-paste variants of the same page. A page controller also makes navigation hard, for example when coding a wizard, because the logic is embedded within a page, not between the pages, again harming re-use.


There are other models, particularly Front Controller and Application Controller where the view-controller relationship is broken and controllers dispatch requests to the desired views, after first deciding which view and which data.


There is only ever one Front Controller, which decides what action to call in response to the request.


There are potentially many application controllers, to which the front controller forwards a request, which decide what action to take and redirect to the required view.


Front and application controllers make it much easier to orchestrate navigation within your application, because the navigation code no longer lives within the forms. Instead navigation is handled by the controllers.
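As a rough sketch of the idea (the routing scheme and the ControllerFactory helper here are invented for illustration, not MonoRail's actual implementation), a front controller is a single HTTP end point that parses the URL and forwards to an application controller:

```csharp
using System.Web;

// One handler receives every request, decides which application
// controller and action the URL names, and forwards the request on.
public class FrontController : IHttpHandler
{
    public bool IsReusable { get { return true; } }

    public void ProcessRequest(HttpContext context)
    {
        // e.g. /product/list.rails -> controller "product", action "list"
        string[] parts = context.Request.Path.Trim('/').Split('/');
        var controller = ControllerFactory.Create(parts[0]);   // hypothetical factory
        controller.Dispatch(parts[1], context);                // run the chosen action
    }
}
```

Navigation logic then lives in the controllers, not in any one page, which is what makes wizard-style flows far easier to orchestrate.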




Posted in Computers and Internet | 100 Comments

Agile and Just-Enough Design

I’m always amazed at the number of people who believe that if you are doing agile you do not do analysis or design. What changes with agile is not whether you do them, but when you do them, or to quote Kent Beck: "[the] question is not whether or not to design, the question is when to design".
Agile collapses the waterfall model of analysis-design-code-test into a single phase, build, that includes all the activities all the time. So we write our code test-first, folding our testing activity into the act of implementing; we design as we implement, revising our theory of how to deliver a software solution as we deliver it. The reason agile takes this approach is feedback. By merging these activities, each one gives rapid feedback on the others and makes change easy (and cheap). The activities provide immediate feedback to each other when conducted in proximity: feedback that helps us refine our efforts, and feedback that allows us to make low-cost change in-phase instead of high-cost change out-of-phase. So the first thing to recognize is that agile does all the activities, just all the time; the second is that agile does its analysis, design, and testing at the same time as development, to avoid wasted effort.
But in addition to this, agile methods do have an analysis period up front, where we gather the users’ requirements and begin to uncover how we will model them, before we start building. Some agile methodologies are better at talking about this phase than others, but it remains implicit in the nature of planning activities, such as blitz planning or the planning game, that we must have done the work up front to gather the user stories or use cases that we will then estimate against. Projects tend to have to leap over some sort of governance gateway to begin, and while the amount of ceremony varies by organization, we often need to know just enough up front to understand whether we can solve the problem and how expensive it is likely to be. The number of stories we need to gather for approval (the whole ‘project’, the first few iterations, or just the next iteration) tends to depend on the maturity of both the organization and the system under development.
Of course, when we gather those stories or use cases and estimate them, we are doing some design, even if only tacitly in our heads, because we are beginning to think about how we will implement them in order to estimate and cost them. For me, the point at which you cross the line between analysis and design is not an easy one. I have heard it stated that analysis is what can be discussed with the user and design is what happens after that, but the boundary is murky in a lot of the writing.
Methodologies do differ on the depth to which this upfront analysis goes: how much do we sketch out in advance, and how much do we defer to the iteration in which the story first appears? The methodologies I know (XP, Scrum, Crystal) all capture user stories or use cases to create a requirements list for the system. Classically, XP defers further investigation until the iteration in which we build. Crystal recognizes the need for a little more upfront activity, called an Exploratory 360.
With Crystal’s Exploratory 360 we want to do just enough to ensure we have captured the knowledge we need to decide if, and what, to build, but no more. We don’t yet want to know how we will build; we want to be sure about what the user wants us to build. In order to defer cost until it is needed, up front we only want to capture the elements of the domain that are essential, because we otherwise risk doing detailed analysis for requirements that never get implemented. In an Exploratory 360 we typically capture key use cases and actors, sketch the core of the domain model, spike technologies that may be involved, and then, using the planning game or blitz planning, sketch out a project plan. Cockburn says of this analysis that it is done "in a coarse-grained fashion, to detect whether the intended team and projected methodology, using the intended technology around the projected domain model, can deliver the intended business value with the projected set of requirements according to the draft project plan. The exploratory 360° results in a set of adjustments to the project setup, or in the most drastic case, a decision by the executive sponsor to cancel the project (better now than later!)."
Remember here that the domain model is the layer of your solution that should be comprehensible to an expert user, not all the other objects that go toward making software run, such as UI widgets or persistence technology. It is the place of Domain-Driven Design’s Ubiquitous Language. Focus up front on the domain model only. The other objects will emerge as we build, from our discussions on how to automate this domain; it is the builders, not the expert users, who care about them.
This is not big design up front; it is just-enough design to get us started. It is a few days of activity, or perhaps one or two weeks for a large project.
Beck in the 2nd Ed. of XP recognizes the value of this activity: "McConnell writes, "In ten years the pendulum has swung from ‘design everything’ to ‘design nothing.’ But the alternative to BDUF [Big Design Up Front] isn’t no design up front, it’s a Little Design Up Front (LDUF) or Enough Design Up Front (ENUF)." This is a strawman argument. The alternative to designing before implementing is designing after implementing. Some design up-front is necessary, but just enough to get the initial implementation. Further design takes place once the implementation is in place and the real constraints on the design are obvious. Far from "design nothing," the XP strategy is "design always.""
The trouble for most people is figuring out what ‘just enough’ is. The push from agile methodologies early on was against the BDUF style of methodologies such as SSADM, and against an environment of vendors pushing CASE tools, so the emphasis was on doing design during the build phase, not before; the activity was about weaning people off their BDUF addiction. But as more people approach agile from a zero-methodology background, where there was little or no design, we have to be careful to reinforce the message that agile methodologies do design, but do just enough to get to build.

Posted in Computers and Internet | 100 Comments

All aboard the Monorail

Thanks to everyone at NextGenUG for letting me come up to talk about Monorail last night.
Posted in Computers and Internet | 3 Comments