Even though I have been to two Microsoft strategic design reviews on Oslo, attended all the PDC sessions, and presented “A Lap around Oslo” at the MDC, I have been learning something new about Oslo every day. As I stated before on this blog, Telerik is building some cool Oslo tools for Microsoft and I am designing them. I have to deliver the spec for the first tool to the programmers next week, so I have been hard at work. I thought it would be cool to post my progress here as a transparent design process, so you can see how I learned Oslo while building this tool and learn from my many mistakes. (You can get Oslo, along with M and the repository, here.)
Just to review, Oslo is a modeling platform for building data-driven applications. Oslo consists of three major pieces:
- A modeling language: M
- A modeling and data visualization tool: Quadrant
- A repository: SQL Server
The M language is very exciting. M is actually a little like XML and XSD: you never do anything in raw XML by itself; you create an XML grammar (XSD) to give your XML some meaning. Likewise, M ships with two tools to give your M some meaning: MGraph and MGrammar.
With Oslo you will create an M grammar using MGrammar, a contextual DSL creation language. MGrammar will convert your users’ or applications’ input (the users of your DSL) into MGraph. MGraph is a JSON-style M syntax that lets you put data into an instance of your M type.
I’ll go more into MGrammar later on, but for now, let’s use one DSL that ships out of the box with Oslo: MSchema. MSchema is a DSL for TSQL DDL (data definition language). If you learn MSchema, you never have to deal with TSQL “CREATE TABLE” again. (For some of us this is a good thing.) MSchema is just one of many M grammars that will ship with Oslo; others will include MService, a DSL for working with WCF in C# and VB.NET.
I will model parts of the application with MSchema and then map some data to that MSchema using MGraph. When it is all said and done, I will create database tables, views, and sample data from my M code. (M will transform all of my MSchema/MGraph code into SQL Server databases and data.) This database and metadata will be put into the Oslo repository. (More on that later too; arguably, the repository is one of the most important features of Oslo.)
I will not give all the details of the application here, not because it is super-secret, but because they are still evolving. Also, I want to focus more on the process I took and the M code itself. In a nutshell, we are building an Oslo repository comparison tool with an M visualization engine as well as a data migration piece. Sorry to be vague, but only the first sprint or two are clear in my head; future sprints and versions will include a Visual Studio 2008/2010 plug-in, a repository migration wizard, and a contextual DSL using MGrammar. We are building the repository comparison piece in the first few sprints, and I will discuss it here.
The repository comparison piece addresses this scenario: a developer has modeled an application, transformed the MSchema and MGraph code into the repository, and has a runtime (such as .NET) interacting with that metadata and the repository. Now the developer wants to make changes to the repository (version II of the app) by writing some more M code. The first feature of this tool will compare the old M to the new M and point out the inconsistencies. (I am starting with some basic stuff to get my feet wet.)
Modeling the Application
As I pointed out before, I was approaching this design process the wrong way. At first I wrote up some initial user stories and then started to model the domain around those stories using various tools (mostly on paper) so they could be translated into requirements for the developers on their first sprint. I was building a tool for Oslo, but I was not using Oslo. So I started over and did this the Oslo way.
I still started with a user story, but this time I modeled the domain in the M language to accompany it. To be completely honest, I am not sure if this is the right way, but it felt like the right thing to do since, imho, it will be easier for the developers to understand these user stories, estimate them, and move them to the product backlog. It feels like a modified version of Scrum and DDD, but I am far from a purist.
While you are supposed to do the design as part of the sprint, I don’t think that modeling a few domain entities is a true design; I expect the team on the first sprint to completely refactor this (via M) as more issues and requirements come to light. Of course, I am not the typical user to write a user story. I don’t expect users to know M, so maybe in the real world, where the user doesn’t know M, a developer will write the M code to accompany the user story, or do it as part of the sprint. As I play more with Oslo, this process will become clearer to me. I suspect that there is not going to be one right answer; it will be a matter of preference.
Anyway, the first thing that we need is an entity to deal with the users of the application. So I fired up Intellipad (or iPad) and I used MSchema to define a type called “ApplicationUser.” The type is defined below.
//MSchema to define a user type
type ApplicationUser
{
    UserID : Integer64 = AutoNumber();
    FirstName : Text#15;
    LastName : Text#25;
    Password : Text#10;
} where identity UserID;
Here is what it looks like in iPad:
This is the most basic of types, but I figured I would get my feet wet with an easy one. (And besides, I am lazy.) I am defining the fields that will make up the user type, using just the most basic ones for now. (I am sure that the developers will add more later on.) I defined UserID as an AutoNumber (identity), FirstName as a 15-character text field, LastName as a 25-character text field, and Password as a 10-character text field. (Yea, yea, I know I should use a hash with a random salt, but this app does not need rock-solid security.)
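A side note that helped me understand the field declarations: the `Text#25` syntax is shorthand for a length constraint on the type. As a hedged sketch of the CTP-era MSchema syntax, the same field can be written with an explicit where clause:

```
//these two declarations should be equivalent in MSchema:
LastName : Text#25;
LastName : Text where value.Count <= 25;
```

The `#` form is just more compact; the where-clause form shows what is really going on, and it generalizes to constraints that `#` cannot express.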
What I like about Oslo is that by defining the type here, I am giving the developers my intent. While they will most definitely rename, refactor, and reorganize this before it goes into production, they know the intent: the application will have a user, and that user can log in with a password. I think this is more natural for a developer to work with (since it is code!) than boxes and lines or a formal written spec, or at least complements those traditional artifacts nicely.
Now I need an instance of this type. I can only truly get a grip on my type once I put some data into it. This is where other modeling platforms fall down for me. Once I play a little bit with the data, I realize my model is wrong, and I go back and add to it.
To add some data, I need to use MGraph. To me, this seems like a collection of ApplicationUser types, so I named it the most logical thing that came to my mind: ApplicationUserCollection. Not sure if this is the best name for this collection or not, but hey, I am learning and I know this will be refactored by me a few times before it is refactored by the developers many times. So I will leave it this way and see how it evolves.
To create an instance of my ApplicationUser type, I need to tell M which type I am binding to with this syntax: ApplicationUserCollection : ApplicationUser*;
Think of ApplicationUser as the class and ApplicationUserCollection as the instantiated collection of instances (in M terms, an extent). Not an exact analogy, but it should give you a feel. I could also bind another instance like so: SuperUsers : ApplicationUser*; however, we only need one instance.
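To make the type-versus-extent idea concrete, here is a minimal sketch (the SuperUsers extent is purely illustrative; this app only declares ApplicationUserCollection). One type can back any number of extents, and each extent becomes its own table when the M is deployed to SQL Server:

```
//the type is the shape; an extent is a table of instances of that shape
ApplicationUserCollection : ApplicationUser*;  //the one extent this app needs
SuperUsers : ApplicationUser*;                 //a second, illustrative extent of the same type
```

This is why renaming the collection later is cheap: the shape lives on the type, not on the extent.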
I won’t go too deep into how MGraph works, since Shawn Wildermuth has a great three-part series on MSchema and MGraph here. Just notice that MGraph takes on this basic format, a label followed by braces containing the values (or nested nodes):

Label
{
    Value1,
    Value2
}

Here is the implementation of my type (the field values are just sample data):

//MGraph to get some test data in
ApplicationUserCollection : ApplicationUser*
{
    //using a named instance (Steve, etc)
    Steve { FirstName = "Steve", LastName = "Smith", Password = "pass123" }
}
When this is all compiled by the M parser, it will be transformed into TSQL INSERT statements against my table, the table that was defined in my ApplicationUser type. We don’t have to worry at all about TSQL and SQL Server now, since all we are doing is modeling the application in MSchema and MGraph. We won’t bother converting this to TSQL now, since I guarantee the M code will change soon.
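To make that transformation concrete, here is a rough, hedged sketch of the kind of TSQL the M toolchain would produce for the ApplicationUser type and its data. The exact names, schema qualifiers, nullability, and data types the compiler emits will differ by CTP version, and the inserted values are illustrative:

```sql
-- approximate DDL generated from the MSchema type
CREATE TABLE [ApplicationUserCollection]
(
    [UserID]    BIGINT IDENTITY NOT NULL,   -- Integer64 = AutoNumber()
    [FirstName] NVARCHAR(15) NOT NULL,      -- Text#15
    [LastName]  NVARCHAR(25) NOT NULL,      -- Text#25
    [Password]  NVARCHAR(10) NOT NULL,      -- Text#10
    CONSTRAINT [PK_ApplicationUserCollection] PRIMARY KEY ([UserID])  -- where identity UserID
);

-- approximate INSERT generated from the MGraph instance data (sample values)
INSERT INTO [ApplicationUserCollection] ([FirstName], [LastName], [Password])
VALUES (N'Steve', N'Smith', N'pass123');
```

The point is not the exact TSQL, but that every line of it is derived mechanically from the M, which is why we can keep iterating in M and defer SQL Server entirely.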
Since I am at such an early stage of the design phase and I only modeled one simple entity (and not even a domain entity for that matter, all applications have users), I am just going to save this file to disk via iPad. In theory I should push this M into the repository and then when I make changes I will have the opportunity to save my version history, etc. That seems like too much work at this early stage of the design process, so for now I am just saving different versions of the M file on disk and will push the M to the repository later when I am more confident in my design. Is this a best practice? I don’t know, but it feels right at this time to keep everything local on disk. Time will tell and my thinking on this may change. (It will be fun a year from now to re-read this blog post and compare my thinking now to then.)
Stay tuned, the next part will show a domain entity and some refactoring. As I make progress each day I will continue to post.