# Wednesday, February 11, 2009

Even though I have been to two Microsoft strategic design reviews on Oslo, attended all the PDC sessions, and presented “A Lap around Oslo” at the MDC, I have been learning something new about Oslo every day. As I stated before on this blog, Telerik is building some cool Oslo tools for Microsoft and I am designing them. I have to deliver the spec for the first tool to the programmers next week, so I have been hard at work. I thought it would be cool to post my progress here as a transparent design process, so you can see how I learned Oslo while building this tool and learn from my many mistakes. (You can get Oslo, along with M and the repository, here.)

Just to review, Oslo is a modeling platform for building data-driven applications. Oslo consists of three major pieces:

  • A modeling language: M
  • A modeling and data visualization tool: Quadrant
  • A repository: SQL Server

The M language is very exciting. M is actually a little like XML and XSD: you never really do anything in raw XML; you create an XML grammar (XSD) to give your XML some meaning. Likewise, M ships with two tools to give your M some meaning: MGraph and MGrammar.

With Oslo you will create an M grammar using MGrammar, a contextual DSL-creation language. MGrammar will convert your users’ or applications’ input (the users of your DSL) into MGraph. MGraph is a JSON-style M syntax that lets you put data into an instance of your M type.

I’ll go more into MGrammar later on, but for now, let’s use one DSL that ships out of the box with Oslo: MSchema. MSchema is a DSL for T-SQL DDL (data definition language). If you learn MSchema, you never have to deal with T-SQL “CREATE TABLE” again. (For some of us this is a good thing.) MSchema is just one of many M grammars that will ship with Oslo; others will include MService, a DSL for working with WCF in C# and VB.NET.

I will model parts of the application with MSchema and then map some data to that MSchema using MGraph. When all is said and done, I will create database tables, views, and sample data from my M code. (M will transform all of that MSchema/MGraph code into SQL Server databases and data.) This database and metadata will be put into the Oslo repository. (More on that later too; arguably, this is one of the most important features of Oslo.)

The App

I will not give all the details of the application here, not because it is super-secret, but because they are still evolving. Also, I want to focus more on the process I took and the M code itself. In a nutshell, we are building an Oslo repository comparison tool with an M visualization engine as well as a data migration piece. Sorry to be vague, but only the first sprint or two are clear in my head; future sprints and versions will include a Visual Studio 2008/2010 plug-in, a repository migration wizard, and a contextual DSL using MGrammar. We are building the repository comparison piece in the first few sprints, and I will discuss it here.

The scenario for the repository comparison piece is this: a developer has modeled an application, transformed the MSchema and MGraph code into the repository, and a runtime (such as .NET) is interacting with that metadata and the repository. Now the developer wants to make changes to the repository (version II of their app) by writing some more M code. The first feature of this tool will compare the old M to the new M and point out the inconsistencies. (I am starting with some basic stuff to get my feet wet.)
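To make that first feature concrete, here is a minimal sketch of the comparison idea in Python. This is purely illustrative (the real tool will work against actual M in the repository, and every name here is hypothetical): reduce each version of a type to its field definitions and report what was added, removed, or changed.

```python
# Illustrative only: compare two versions of a type's fields (old M vs.
# new M, reduced here to name -> declared-type dictionaries) and report
# the inconsistencies -- the first feature of the comparison tool.
def diff_type(old, new):
    added   = {f: new[f] for f in new.keys() - old.keys()}
    removed = {f: old[f] for f in old.keys() - new.keys()}
    changed = {f: (old[f], new[f])
               for f in old.keys() & new.keys() if old[f] != new[f]}
    return added, removed, changed

v1 = {"UserID": "Integer64", "FirstName": "Text#15", "Password": "Text#10"}
v2 = {"UserID": "Integer64", "FirstName": "Text#25", "Email": "Text#50"}

added, removed, changed = diff_type(v1, v2)
print(added)    # {'Email': 'Text#50'}
print(removed)  # {'Password': 'Text#10'}
print(changed)  # {'FirstName': ('Text#15', 'Text#25')}
```

The real version will of course have to understand M semantics (renames, extents, constraints), not just dictionary keys, but the shape of the feature is the same.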

Modeling the Application

As I pointed out before, I was approaching this design process the wrong way. At first I wrote up some initial user stories and then started to model the domain around those stories using various tools (mostly on paper) so they could be translated into requirements for the developers on their first sprint. I was building a tool for Oslo, but I was not using Oslo. So I started over and did this the Oslo way.

I still started with a user story, but to accompany it, I began to model the domain using the M language. To be completely honest, I am not sure if this is the right way, but it felt like the right thing to do since, imho, it will be easier for the developers to understand these user stories, estimate them, and move them to the product backlog. It feels like a modified version of Scrum and DDD, but I am far from a purist.

While you are supposed to do the design as part of the sprint, I don’t think that modeling a few domain entities is a true design; I expect the team on the first sprint to completely refactor this (via M) as more issues and requirements come to light. Of course, I am not the typical user to write a user story. I don’t expect users to know M, so in the real world, where the user doesn’t know M, a developer will probably write the M code to accompany the user story, or do it as part of the sprint. As I play more with Oslo, this process will become clearer to me. I suspect that there is not going to be a right answer; it will be a matter of preference.

Anyway, the first thing that we need is an entity to deal with the users of the application. So I fired up Intellipad (or iPad) and I used MSchema to define a type called “ApplicationUser.” The type is defined below.

//MSchema to define a user type
type ApplicationUser
{
    UserID : Integer64 = AutoNumber();
    FirstName : Text#15;
    LastName : Text#25;
    Password : Text#10;
} where identity UserID;

Here is what it looks like in iPad:


This is the most basic of types, but I figured I would get my feet wet with an easy one. (And besides, I am lazy.) I am defining the fields that will make up the user type, using just the most basic ones for now (I am sure that the developers will add more later on). I defined UserID as an AutoNumber (identity), FirstName as a 15-character text field, LastName as a 25-character text field, and Password as a 10-character text field. (Yeah, yeah, I know I should use a hash with a random salt, but this app does not need rock-solid security.)
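To give a feel for what the transformation ultimately produces, here is a rough sketch in Python of the kind of CREATE TABLE statement the ApplicationUser type boils down to. This is not the actual M toolchain, and the exact T-SQL that M emits may differ in detail; it is just a conceptual picture of the MSchema-to-DDL mapping.

```python
# Conceptual sketch only: roughly the T-SQL DDL that the ApplicationUser
# MSchema type maps to. The real M compiler's output may differ in detail.
FIELDS = [
    ("UserID",    "bigint IDENTITY(1,1)"),  # Integer64 = AutoNumber()
    ("FirstName", "nvarchar(15)"),          # Text#15
    ("LastName",  "nvarchar(25)"),          # Text#25
    ("Password",  "nvarchar(10)"),          # Text#10
]

def create_table_sql(table, fields, identity="UserID"):
    """Build a CREATE TABLE statement from (name, sql_type) pairs."""
    cols = [f"  [{name}] {sqltype} NOT NULL" for name, sqltype in fields]
    cols.append(f"  CONSTRAINT [PK_{table}] PRIMARY KEY ([{identity}])")
    return f"CREATE TABLE [{table}] (\n" + ",\n".join(cols) + "\n)"

print(create_table_sql("ApplicationUser", FIELDS))
```

The point is that the `where identity UserID` clause and the `Text#n` constraints in the M code carry exactly the information a DDL generator needs.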

What I like about Oslo is that by defining the type here, I am giving the developers my intent. While they will most definitely rename, refactor, and reorganize this before it goes into production, they know the intent: the application will have a user, and that user can log in with a password. I think this is more natural for a developer to work with (since it is code!) than boxes and lines or a formal written spec, or at least it complements those traditional artifacts nicely.

Now I need an instance of this type. I can only truly get a grip on my type once I put some data into it. This is where other modeling platforms fall down for me. Once I play a little bit with the data, I realize my model is wrong, and I go back and add to it.

To add some data, I need to use MGraph. To me, this seems like a collection of ApplicationUser types, so I named it the most logical thing that came to my mind: ApplicationUserCollection. Not sure if this is the best name for this collection or not, but hey, I am learning and I know this will be refactored by me a few times before it is refactored by the developers many times. So I will leave it this way and see how it evolves.

To create an instance of my ApplicationUser type, I need to tell M which type I am binding to with this syntax: ApplicationUserCollection : ApplicationUser*;

Think of ApplicationUser as the class and ApplicationUserCollection as the implementation or the instantiated object. Not an exact analogy, but it should give you a feel. I could also bind another instance like so: SuperUsers : ApplicationUser*; however, we only need one instance.
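For readers coming from .NET, the analogy may be easier to see in ordinary code. Here is a hedged Python sketch (an analogy only, and the sample values are placeholders, not real data): the M type plays the role of the class, and each extent is a typed collection of instances of it.

```python
# Analogy only: an M type is like a class; an extent such as
# ApplicationUserCollection : ApplicationUser* is like a typed collection
# of instances of that class. Sample values below are placeholders.
from dataclasses import dataclass

@dataclass
class ApplicationUser:                     # the type
    first_name: str
    last_name: str
    password: str

application_user_collection = []           # the extent we will actually use
super_users = []                           # a second extent of the same type

application_user_collection.append(
    ApplicationUser("Steve", "Placeholder", "secret"))
print(len(application_user_collection))    # → 1
```

Just as with SuperUsers above, nothing stops you from declaring several extents of the same type; each is its own collection of instances.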

I won’t go too deep into how MGraph works, since Shawn Wildermuth has a great 3-part series on MSchema and MGraph here. Just notice that MGraph takes on this general shape (an extent bound to a type, containing instances made up of field/value pairs; the exact syntax may vary between CTPs):

//the general shape of MGraph instance data
CollectionName : TypeName*
{
    InstanceName {
        Field1 => Value1,
        Field2 => Value2
    }
}
Here is the implementation of my type:

//MGraph to get some test data in
ApplicationUserCollection : ApplicationUser*
{
    //using named instances (Steve, etc.); the field values are placeholders
    Steve { FirstName => "Steve", LastName => "...", Password => "..." },
    Vassimo { FirstName => "Vassimo", LastName => "...", Password => "..." },
    Zarko { FirstName => "Zarko", LastName => "...", Password => "..." },
    Todd { FirstName => "Todd", LastName => "...", Password => "..." }
}
When this is all compiled by the M parser, it will be transformed into T-SQL INSERT statements against my table, the table that was defined by my ApplicationUser type. We don’t have to worry about T-SQL and SQL Server at all right now, since all we are doing is modeling the application in MSchema and MGraph. We won’t bother converting this to T-SQL yet, since I guarantee the M code will change soon.
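As a rough picture of that step (Python, illustrative only; the actual M parser does this for you, and the field values below are placeholders), the named instances boil down to INSERT statements shaped like this:

```python
# Conceptual sketch only: the M toolchain (not this code) turns the MGraph
# instances into T-SQL INSERT statements; this shows the shape of that output.
def insert_sql(table, rows):
    """Build one INSERT statement per row dictionary."""
    stmts = []
    for row in rows:
        cols = ", ".join(f"[{c}]" for c in row)
        vals = ", ".join(f"'{v}'" for v in row.values())
        stmts.append(f"INSERT INTO [{table}] ({cols}) VALUES ({vals})")
    return stmts

# placeholder data standing in for the named MGraph instances
users = [
    {"FirstName": "Steve",   "Password": "..."},
    {"FirstName": "Vassimo", "Password": "..."},
]
for stmt in insert_sql("ApplicationUser", users):
    print(stmt)
```

One statement per named instance, columns taken from the type, values from the MGraph data; that is the whole trick.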

Since I am at such an early stage of the design phase and I only modeled one simple entity (and not even a domain entity for that matter, all applications have users), I am just going to save this file to disk via iPad. In theory I should push this M into the repository and then when I make changes I will have the opportunity to save my version history, etc. That seems like too much work at this early stage of the design process, so for now I am just saving different versions of the M file on disk and will push the M to the repository later when I am more confident in my design. Is this a best practice? I don’t know, but it feels right at this time to keep everything local on disk. Time will tell and my thinking on this may change. (It will be fun a year from now to re-read this blog post and compare my thinking now to then.)

Stay tuned, the next part will show a domain entity and some refactoring. As I make progress each day I will continue to post.

posted on Wednesday, February 11, 2009 11:32:47 AM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback
# Tuesday, February 10, 2009

Many of you have been asking, so here it is. You can download the session materials here.

posted on Tuesday, February 10, 2009 9:29:45 PM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback

I spoke at the very first meeting of the Cairo, Egypt-based .network user group back in 2007. It will be my pleasure to speak at their first ever code camp in Cairo on Feb 19th and 20th (my birthday). Register here.


posted on Tuesday, February 10, 2009 9:37:16 AM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback
# Friday, January 30, 2009

Earlier today the Oslo SDK January CTP was released on MSDN. A lot of people have been asking since the PDC, “What is Oslo?” Oslo is a new platform from Microsoft that allows you to build data-driven applications. Oslo revolves around the application’s metadata. As Chris Sells describes in a great white paper on Oslo:

Metadata can be defined as the data that describes an application, for example, how a home page is rendered or how the purchasing workflow operates. Your application data represents the state of the execution of an application, such as what Bob put in his shopping cart on your Web site or his shipping address in your checkout workflow.

To provide a common set of tools for defining metadata so that it can be stored in a place that provides the same set of features as normal application data, Microsoft is creating "Oslo," a platform for building data-driven applications. "Oslo" is composed of three elements: a family of languages collectively called "M," a visual data manipulation tool called "Quadrant," and a data store called the repository.

Telerik is building some cool Oslo utilities and I am in the middle of designing them. As I was talking to Chris about some of the specs the other day, he asked me, “What are you using to keep track of the metadata of your application in your design process?” I was like, “Pen, paper, whiteboard, Word, and Excel.” He asked why I was not using Oslo. Then it struck me: I was in .NET programmer mode. So last decade. While I am using Visual Studio 2008, WPF, SQL Server 2008, and the Oslo SDK to build an application for Oslo, I was not using Oslo to help build the application.

The application is in its earliest phases (just moving from an idea and drawings on a whiteboard to design). I confess I made my first mistake: I did not think about a model; I was thinking about the app. So I started over and began to model what the app would do using Oslo. How do you model an application using Oslo? You use the M language.

Specifically, at this phase you would use the MSchema portion of the M specification. I started by creating a schema using MSchema to hold some application artifacts. This requires a different way of thinking, but it is worth the effort, because now information about my application is stored in the repository and I will have version history and a much easier time generating the application when the time comes. (You can also use the MGrammar portion of the M specification to create a domain-specific language (DSL); however, that part of the process won’t come for this application until a little later on.)

As I make progress designing and building this application, I will post it here. You can follow along and learn from my mistakes. Stay tuned, look for the “Oslo” category on this blog.

posted on Friday, January 30, 2009 11:12:43 PM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback
# Thursday, January 29, 2009

Mary Chipman and I are doing a talk together at TechEd in Los Angeles this May on building solutions “without spending any money.” One of the tricks we will show is using an Access front end utilizing TVPs (table-valued parameters) from the back-end SQL Server. She posted about it on the Access team’s blog yesterday. Check it out here.

posted on Thursday, January 29, 2009 1:37:56 PM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback
# Wednesday, January 28, 2009

If you attended my user group on data driven RESTful apps, you can download the slides and code here. Enjoy!

posted on Wednesday, January 28, 2009 2:53:20 PM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback
# Monday, January 26, 2009

Due to my comment spam problem, the link to the ORM white paper I wrote got deleted. A month or so ago, I wrote a white paper for Telerik on ORMs in general and their ORM in particular. This white paper is mostly an intro to data access layers, what an ORM will give you and how they work. Here is the link.

posted on Monday, January 26, 2009 10:54:18 AM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback
# Tuesday, January 20, 2009

I will be speaking at the SQL Server User Group on Thursday at 6pm.

You must register to attend: http://www.clicktoattend.com/?id=134822

Location: Microsoft, 1290 Avenue of the Americas (the AXA building, bet. 51st/52nd Sts.), 6th floor

Directions: B/D/F/V to 47th–50th Sts./Rockefeller Ctr; 1 to 50th St./Bway; N/R/W to 49th St./7th Ave.

Session Info:
Applications today are expected to expose their data and consume data-centric services via REST. In this session we discuss ADO.NET Data Services (“Project Astoria”) and see how we can REST-enable your data. Then you will learn how to leverage existing skills related to LINQ and data access to customize the behavior, control flow, security model, and experience of your data service. We will then see how to enable data binding to traditional ASP.NET controls as well as Silverlight. Then, switching gears, we will look quickly at consuming REST services from any platform (including Ruby on Rails) using Visual Studio and the WCF REST Starter Kit. We will conclude with a discussion on developing offline applications with the ability to sync back to the online data service. This is a very demo-intensive session.

posted on Tuesday, January 20, 2009 11:42:37 AM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback
# Monday, January 19, 2009

If you are looking for the slides and code for my user group presentation, Data Access Hacks and Shortcuts, you can download them here. Please note, this session and its code are subject to some minor tweaks as the conference season kicks into high gear next month.

posted on Monday, January 19, 2009 6:03:51 PM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback