# Monday, February 23, 2009

I received an email from the Mix registration group that my third night at the hotel will be free. Good marketing tactic on the Mix team’s part: lowering the travel expense will help people justify the trip. Many have argued that the live in-person event is dead, due to a sharp recession and free ways to learn like webcasts, blogs, etc.

I think reports of the live event’s death are incorrect. Humans are social and need interaction. We need to go to events to talk to each other, complain about Microsoft’s data access strategy, and see if speakers will embarrass themselves. But I think that the days of the large industry trade show (CES) and large industry events (PDC, TechEd, Mix) are numbered. What will replace them? Code Camps.

The NYC .NET User Group that I co-moderate ran a Code Camp in January. There have been several other Code Camps in the past few months, and they have all been very popular and well attended. Code Camps started as a community-driven way to supplement the monthly user group meeting. Now people are attending in larger numbers since they can’t justify a travel budget. What I also found attendees liked about our Code Camp was its ability to present some alternative points of view (we had some open source sessions, and some sessions that would never make it to TechEd since they may have said a bad thing or two about MS). The attendees also liked the agility of the event, the open spaces, and the discussions.

Emerging markets are now doing Code Camps too. I just attended and spoke at the Cairo, Egypt-based .netWork’s code camp last week. Based about 45 km outside of Cairo, the free two-day event drew about 500 people who took off work and traveled to attend. Four international speakers came, as well as several local speakers. It was a great event run at a very low cost. Even better, it got technical education to people who desperately need it.

I think that in a down economy, Code Camps are going to be more and more important and continue to evolve. Industry events will not die, but they will change in size and scope to be smaller and more agile. Code Camps will force “Conference 2.0.” As a conference speaker myself, it will be great to see what that looks like.

posted on Monday, February 23, 2009 9:48:24 AM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback
# Tuesday, February 17, 2009

In Part I of this series we looked at the tool Telerik is building and how to model an entity in MSchema and MGraph. Part II dug deeper into the modeling process, and we saw the value of MGraph and data visualization in helping your model along. We did a little refactoring and are now more or less happy with the model for these domain entities. After modeling the application in M, I realize the power of a textual modeling language. Boxes and lines in UML do not excite me, but a textual modeling language makes complete sense.

So far we have ignored the fact that these entities will live in a database. You can push your M into the repository or you can push it to plain old TSQL. Let’s do that today.

Inside of iPad you can go to the “M Mode” menu option and choose “Generic TSQL Preview.” This will split the window, with the M code on one side and the TSQL on the other, as shown below. (Note: you can also choose M Mode|Repository TSQL Preview; however, I am still avoiding the Oslo repository at the moment. I have my M files under source control in TFS and will push to the repository a little later in the process. Once again, I am still learning, so this may or may not be a best practice.)

image

Let’s take a look at the TSQL produced.

This type that we build in Part I:

//mschema to define a user type
type ApplicationUser
{
    UserID : Integer64=AutoNumber();
    FirstName :Text#15;
    LastName : Text#25;
    Password : Text#10;      
} where identity UserID;

Will produce a CREATE TABLE TSQL statement like this:

create table [Telerik.MigrationTool].[ApplicationUserCollection]
(
  [UserID] bigint not null identity,
  [FirstName] nvarchar(15) not null,
  [LastName] nvarchar(25) not null,
  [Password] nvarchar(10) not null,
  constraint [PK_ApplicationUserCollection] primary key clustered ([UserID])
);
go

 

OK, a few things here. First, my table name is [modulename].[MGraph instance name].

Ugh! ApplicationUserCollection is a horrible name for a table. I incorrectly assumed that the type name would be the table name. (I guess I should have actually done the M labs at the last SDR instead of goofing off with Michelle Bustamante.) Well, this is new technology, so live and learn. :) I have to refactor all my types and instances. I have learned pretty quickly that “Collection” is not a good name.

Here is the renamed base type. I named it “UserType” since I can’t think of a better name; I will do this with all my types:

//mschema to define a user type
type UserType
{
    UserID : Integer64=AutoNumber();
    FirstName :Text#15;
    LastName : Text#25;
    Password : Text#10;      
} where identity UserID;

Here is the new MGraph, I am using ApplicationUser here instead of ApplicationUserCollection:

//mgraph to get some test data in
    ApplicationUser : UserType*;

    ApplicationUser
    {
        //using a named instance (Steve, etc)
        Steve {
        FirstName="Stephen",
        LastName="Forte",
        Password="Telerik"
        },
        Vassimo {
        FirstName="Vassil",
        LastName="Terziev",
        Password="123"
        },
        Zarko {
        FirstName="Svetozar",
        LastName="Georgiev",
        Password="456"
        },
        Todd {
        FirstName="Todd",
        LastName="Anglin",
        Password="789"
        }
    }

 

Now the M Mode|Generic TSQL Preview will show this:

create table [Telerik.MigrationTool].[ApplicationUser]
(
  [UserID] bigint not null identity,
  [FirstName] nvarchar(15) not null,
  [LastName] nvarchar(25) not null,
  [Password] nvarchar(10) not null,
  constraint [PK_ApplicationUser] primary key clustered ([UserID])
);
go

 

And the insert statements are also generated:

insert into [Telerik.MigrationTool].[ApplicationUser] ([FirstName], [LastName], [Password])
values (N'Stephen', N'Forte', N'Telerik');
declare @Telerik_MigrationTool_ApplicationUser_UserID0 bigint = @@identity;

insert into [Telerik.MigrationTool].[ApplicationUser] ([FirstName], [LastName], [Password])
values (N'Vassil', N'Terziev', N'123');

insert into [Telerik.MigrationTool].[ApplicationUser] ([FirstName], [LastName], [Password])
values (N'Svetozar', N'Georgiev', N'456');
declare @Telerik_MigrationTool_ApplicationUser_UserID2 bigint = @@identity;

insert into [Telerik.MigrationTool].[ApplicationUser] ([FirstName], [LastName], [Password])
values (N'Todd', N'Anglin', N'789');

Now that the entire TSQL has been generated, the next step is to load it into a database.

I opened SQL Management Studio and created a new database called oslotest1 as shown here:

create database oslotest1
go

Now I will copy the TSQL from the preview pane of iPad into Management Studio and run it. Fingers crossed. :)

As you can see in the image below, all my tables were created successfully.

image

Let’s take a look at some of the sample data. A simple SELECT * FROM ApplicationUser shows us:

image
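The screenshot isn’t reproduced here, but given the MGraph data above, the query and its result should look roughly like this (a sketch; the UserID values assume a fresh table with the inserts run in order, and the fully qualified table name assumes the generated [Telerik.MigrationTool] schema):

select * from [Telerik.MigrationTool].[ApplicationUser];

-- UserID  FirstName  LastName  Password
-- ------  ---------  --------  --------
-- 1       Stephen    Forte     Telerik
-- 2       Vassil     Terziev   123
-- 3       Svetozar   Georgiev  456
-- 4       Todd       Anglin    789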

As you can see, M creates a SQL Server schema, [Telerik.MigrationTool], out of the module name in our M file. This is a pretty cool feature (SQL 2005/08 schemas are not used enough; there is too much dbo floating around out there). I guess I can use an easier-to-work-with schema in the future, like MigrationTool instead of Telerik.MigrationTool.
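A hypothetical sketch of that rename (the module declaration isn’t shown in this post, but since the module name becomes the SQL schema, renaming the module should change the generated schema from [Telerik.MigrationTool] to [MigrationTool]):

//hypothetical: the module name becomes the SQL Server schema
module MigrationTool
{
    //types and instances go here, e.g.:
    type UserType
    {
        UserID : Integer64=AutoNumber();
        FirstName : Text#15;
        LastName : Text#25;
        Password : Text#10;
    } where identity UserID;
}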

Let’s now query some of the sample data in SQL Server. Here is the result of a query looking at Project ID #1 and the first run of that project, all from the data that we modeled in MGraph:

 

image
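A query along these lines would produce that result (a sketch only; the exact query I ran isn’t shown, the table names follow the MGraph instance names, and I am assuming the generated foreign key columns keep the field names ProjectID, ProjectRunID, and StatusID):

select p.ProjectName, r.ProjectRunDate,
       d.SourceTypeName, d.DestinationTypeName, s.StatusDS
from [Telerik.MigrationTool].[ProjectCollection] p
join [Telerik.MigrationTool].[ResultsCollection] r
    on r.ProjectID = p.ProjectID
join [Telerik.MigrationTool].[ResultsDetailCollection] d
    on d.ProjectRunID = r.ProjectRunID
join [Telerik.MigrationTool].[StatusCollection] s
    on s.StatusID = d.StatusID
where p.ProjectID = 1;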

I am pretty satisfied with the results of my model. I think the next step is to hand off the user stories and M code to the developers and get started. I will post their reactions; they know nothing about Oslo besides what they read in this blog. :) I will also post my progress and thinking on the repository. Now that we are going to be working with a team (and a team in another country), I think we can get some benefits from using the repository.

posted on Tuesday, February 17, 2009 10:48:48 AM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback
# Thursday, February 12, 2009

In Part I of this series, I talked about Oslo in general and about the tool Telerik is building for Oslo. Where we stand today: I have modeled a simple entity (User), and I still have to model some domain entities in MSchema and MGraph. The application I am modeling will allow a user to create a “project” that holds the connection strings to the two Oslo repositories they are comparing. Then, in a very Red Gate SQL Compare-like fashion, the tool will compare the entities in the repositories and report back a status, including showing the offending M code that is causing a problem side by side with the good M code. Let’s get started modeling my top-level domain with M.

As I am thinking about it now, I need a “project” entity. Here is my first stab at one.

//mschema to define a Project type
type Project
{
    ProjectID : Integer64 = AutoNumber();
    ProjectName : Text#25;
    ConectionStringSource : Text;
    ConectionStringDestination : Text;
    DateCompared: DateTime;
    Comment: Text?;
    ProjectOwner: ApplicationUser;
} where identity ProjectID;

You can see that I am making a reference to the ApplicationUser type in my “ProjectOwner” field. Down the line we will have this as a foreign key relationship in SQL Server, but we don’t have to worry about that now; for now, just realize that ProjectOwner refers back to the ApplicationUser type we built in Part I.
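When this eventually transforms to TSQL, I would expect ProjectOwner to become a foreign key column, roughly like this (a sketch extrapolated from the naming pattern the Generic TSQL Preview uses, not the actual generated DDL; the type mappings for unsized Text and DateTime are my guesses):

create table [Telerik.MigrationTool].[ProjectCollection]
(
  [ProjectID] bigint not null identity,
  [ProjectName] nvarchar(25) not null,
  [ConectionStringSource] nvarchar(max) not null,
  [ConectionStringDestination] nvarchar(max) not null,
  [DateCompared] datetime not null,
  [Comment] nvarchar(max) null,
  [ProjectOwner] bigint not null,
  constraint [PK_ProjectCollection] primary key clustered ([ProjectID]),
  constraint [FK_ProjectCollection_ApplicationUserCollection]
      foreign key ([ProjectOwner])
      references [Telerik.MigrationTool].[ApplicationUserCollection] ([UserID])
);
go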

Here is how the type looks in iPad:

image

Just like before, I need to see some data before I can really figure out what my type is doing. Call me old school or a “database weenie” but I just connect the dots better when I see some data. So using MGraph, I am showing the data here:

//this will define a SQL foreign key relationship
ProjectCollection : Project* where item.ProjectOwner in ApplicationUserCollection;

ProjectCollection
{
    Project1{
        ProjectName = "My Project 1",
        ConectionStringSource = "Data Source=.;Initial Catalog=MyDB1;Integrated Security=True;",
        ConectionStringDestination = "Data Source=.;Initial Catalog=MyDB2;Integrated Security=True;",
        Comment="Project Comment",
        DateCompared=2009-01-01T00:00:00,
        ProjectOwner=ApplicationUserCollection.Steve //direct ref to steve (FK)
    },
    Project2{
        ProjectName = "My Project 2",
        ConectionStringSource = "Data Source=.;Initial Catalog=MyDB1;Integrated Security=True;",
        ConectionStringDestination = "Data Source=.;Initial Catalog=MyDB2;Integrated Security=True;",
        Comment="Project Comment",
        DateCompared=2009-01-01T00:00:00,
        ProjectOwner=ApplicationUserCollection.Zarko //direct ref to Zarko (FK)
    }
}

Notice that we define a relationship between the ProjectOwner and the ApplicationUserCollection from yesterday. This gives us the ability to use the named instances of the users and even gives us IntelliSense as shown below:

image

We are now going to model the results of the comparison of the repositories. I envision a grid showing you each object, its status, name, and M code, and asking you to take some action. Let’s model the results. First we will need the Status lookup values:

//Status type
type ComparisonStatus
{
    StatusID:Integer64=AutoNumber();
    StatusDS:Text#25;
} where identity StatusID;

//mgraph to get some data in to the status
StatusCollection:ComparisonStatus*;

StatusCollection
{
    Status1{StatusDS="Exist Only in Source"},
    Status2{StatusDS="Exist Only in Destination"},
    Status3{StatusDS="Exist in Both, Identical Structure"},
    Status4{StatusDS="Exist in Both, Changes"}
}

Next I need to model the results with a results type.

//mschema for the results
type ComparisonResults
{

ProjectRunID: Integer64=AutoNumber();
ProjectRunDate: DateTime;
ProjectID:Project;   //FK to Project
SourceTypeName: Text?;
SourceTypeM: Text?; //is it possible to generate this on the fly? is there value in storing it?
DestinationTypeName: Text?;
DestinationTypeM: Text?; //is it possible to generate this on the fly? is there value in storing it?
StatusID: StatusCollection; //FK

} where identity ProjectRunID;

After I put some data into this type, I immediately realized that the user will run the project multiple times, so we will have to have a 1:M relationship between each run of the project and its result details. Meaning, when you get the results, there will be many types associated with each run. I will spare you the iterations I went through with MGraph, but because of MGraph, I realized that this model was flawed! Here is the refactored version:

    //wow, we need refactoring tools badly in iPad! :)   
    //mschema for the results

    type ComparisonResults
    {
        ProjectRunID: Integer64=AutoNumber();
        ProjectRunDate: DateTime;
        ProjectID:Project;   //FK to Project
    } where identity ProjectRunID;


    //this will define a SQL foreign key relationship
    ResultsCollection : ComparisonResults* where item.ProjectID in ProjectCollection;
    //mgraph for some test data
    ResultsCollection
    {
        Result1{
        ProjectRunDate=2009-01-01T00:00:00,
        ProjectID=ProjectCollection.Project1
        }
    }

Notice how we have some relationships stored back to ProjectCollection.

Now we need to model the details:

//mschema for details
type ComparisonResultDetail
{
    ProjectRunID:ComparisonResults; //FK
    TypeID: Integer64=AutoNumber();
    SourceTypeName: Text?;
    SourceTypeM: Text?; //is it possible to generate this on the fly? is there value in storing it?
    DestinationTypeName: Text?;
    DestinationTypeM: Text?; //is it possible to generate this on the fly? is there value in storing it?
    StatusID: StatusCollection; //FK
} where identity TypeID; //need a composite PK of ProjectRunID and TypeID

Now we need to add some data via MGraph. Remember, it was this step of adding data that led to the breakthrough I described above.

//this will define a SQL foreign key relationship, two FKs actually separated by a comma
ResultsDetailCollection : ComparisonResultDetail* where item.StatusID in StatusCollection,
     item.ProjectRunID in ResultsCollection;

ResultsDetailCollection
    {
        {
        ProjectRunID=ResultsCollection.Result1,
        SourceTypeName="Customers",
        SourceTypeM="m code here",
        DestinationTypeName="Customers",
        DestinationTypeM="m code here",
        StatusID=StatusCollection.Status1
        },
        {
        ProjectRunID=ResultsCollection.Result1,
        SourceTypeName="Orders",
        SourceTypeM="m code here",
        DestinationTypeName="Orders",
        DestinationTypeM="m code here",
        StatusID=StatusCollection.Status2
        } ,
        {
        ProjectRunID=ResultsCollection.Result1,
        SourceTypeName="Order Details",
        SourceTypeM="m code here",
        DestinationTypeName="Order Details",
        DestinationTypeM="m code here",
        StatusID=StatusCollection.Status3
        },
         {
        ProjectRunID=ResultsCollection.Result1,
        SourceTypeName="Products",
        SourceTypeM="m code here",
        DestinationTypeName="Products",
        DestinationTypeM="m code here",
        StatusID=StatusCollection.Status4
        }
    }

So today I modeled some domain entities and learned that when you play around with adding data via MGraph, you will learn about and evolve your model much more effectively. I suspect that showing this to the users will help; that is one of the goals of Quadrant. With this model, I still have not pushed anything into the repository yet; I am saving the data on disk in M files. I think that pushing to the repository may be important to do soon (time will tell if this is a best practice or not; remember, I am learning!). It is now time to start playing with the MGraph and MSchema transformations to TSQL, which will be the subject of Part III.

posted on Thursday, February 12, 2009 7:51:14 PM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback
# Wednesday, February 11, 2009

Even though I have been to two Microsoft strategic design reviews on Oslo, attended all the PDC sessions, and presented “A Lap around Oslo” at the MDC, I have been learning something new about Oslo every day. As I stated before on this blog, Telerik is building some cool Oslo tools for Microsoft and I am designing them. I have to deliver the spec for the first tool to the programmers next week, so I have been hard at work. I thought it would be cool if I posted my progression here in a transparent design process so you can see how I learned Oslo while building this tool, giving you the ability to learn from my many mistakes. (You can get Oslo, along with M and the repository here.)

Just to review, Oslo is a modeling platform for building data driven applications. Oslo consists of three major pieces:

  • A modeling language: M
  • A modeling and data visualization tool: Quadrant
  • A repository: SQL Server

The M language is very exciting. M is actually a little like XML and XSD, meaning that you never do anything in raw XML; you create an XML grammar (XSD) to give your XML some meaning. M ships with two tools to give your M some meaning: MGraph and MGrammar.

With Oslo you will create an M grammar using MGrammar; MGrammar is a contextual DSL creation language. MGrammar will convert your users’ or applications’ input (the users of your DSL) into MGraph. MGraph is a JSON-style M syntax that allows you to put data into an instance of your M type.

I’ll go more into MGrammar later on, but for now, let’s use one DSL that ships out of the box with Oslo: MSchema. MSchema is a DSL for TSQL DDL (data definition language). If you learn MSchema, you never have to deal with TSQL “CREATE TABLE” again. (For some of us this is a good thing.) MSchema is just one of many M grammars that will ship with Oslo; others will include MService, a DSL for working with WCF in C# and VB.NET.

I will model parts of the application with MSchema and then map some data to that MSchema using MGraph. When it is all said and done, I will create database tables, views, and sample data from my M code. (M will transform my MSchema/MGraph code into SQL Server databases and data.) This database and metadata will be put into the Oslo repository. (More on that later too; arguably, this is one of the most important features of Oslo.)

The App

I will not give all the details of the application here, not because it is super-secret, but because they are still evolving. Also, I want to focus more on the process I took and the M code itself. In a nutshell, we are building an Oslo repository comparison tool with an M visualization engine as well as a data migration piece. Sorry to be vague, but only the first sprint or two are clear in my head; future sprints and versions will include a Visual Studio 2008/2010 plug-in, a repository migration wizard, and a contextual DSL using MGrammar. We are building the repository comparison piece in the first few sprints, and I will discuss it here.

The repository comparison piece is for a developer who has modeled an application, transformed the MSchema and MGraph code into the repository, and has a runtime (such as .NET) interacting with that metadata and the repository. Now the developer wants to make changes to the repository (version II of their app) by writing some more M code. The first feature of this tool will compare the old M to the new M and point out the inconsistencies. (I am starting with some basic stuff to get my feet wet.)

Modeling the Application

As I pointed out before, I was approaching this design process the wrong way. First I was writing up some initial user stories and then starting to model the domain around those stories using various tools (mostly on paper) so they could be translated into requirements for the developers’ first sprint. I was building a tool for Oslo, but I was not using Oslo. So I started over and did this the Oslo way.

I still started with a user story, but to accompany the user story, I began to model the domain using the M language. I am not sure if this is the right way, to be completely honest, but it felt like the right thing to do since, imho, it will be easier for the developers to understand these user stories and then estimate them and move them to the product backlog. It feels like a modified version of Scrum and DDD, but I am far from a purist.

While you are supposed to do the design as part of the sprint, I don’t think that modeling a few domain entities is a true design; I expect the team on the first sprint to completely refactor this (via M) as more issues and requirements come to light. Of course, I am not the typical user to write a user story. I don’t expect users to know M, so maybe in the real world, where the user doesn’t know M, a developer will write the M code to accompany the user story, or do it as part of the sprint. As I play more with Oslo, this process will become clearer to me. I suspect that there is not going to be a right answer; it will be a matter of preference.

Anyway, the first thing that we need is an entity to deal with the users of the application. So I fired up Intellipad (or iPad) and I used MSchema to define a type called “ApplicationUser.” The type is defined below.

//MSchema to define a user type
type ApplicationUser
{
    UserID : Integer64=AutoNumber();
    FirstName :Text#15;
    LastName : Text#25;
    Password : Text#10;      
} where identity UserID;

Here is what it looks like in iPad:

image

This is the most basic of types, but I figured I would get my feet wet with an easy one. (And besides, I am lazy.) I am defining the fields that will make up the user type, just using the most basic ones for now (I am sure that the developers will add more later on). I defined UserID as an AutoNumber (identity), FirstName as a 15-character text field, LastName as Text#25, and Password as Text#10. (Yea, yea, I know I should use a hash with a random salt, but this app does not need rock-solid security.)

What I like about Oslo is that by defining the type here, I am giving the developers my intent. While they will most definitely rename, refactor, and reorganize this before it goes into production, they know the intent: the application will have a user, and that user can log in with a password. I think this is more natural for a developer to work with (since it is code!) than boxes and lines or a formal written spec, or at least complements those traditional artifacts nicely.

Now I need an instance of this type. I can only truly get a grip on my type once I put some data into it. This is where other modeling platforms fall down for me. Once I play a little bit with the data, I realize my model is wrong, and I go back and add to it.

To add some data, I need to use MGraph. To me, this seems like a collection of ApplicationUser types, so I named it the most logical thing that came to my mind: ApplicationUserCollection. Not sure if this is the best name for this collection or not, but hey, I am learning and I know this will be refactored by me a few times before it is refactored by the developers many times. So I will leave it this way and see how it evolves.

To create an instance of my ApplicationUser type, I need to tell M which type I am binding to with this syntax: ApplicationUserCollection : ApplicationUser*;

Think of ApplicationUser as the class and ApplicationUserCollection as the implementation, or the instantiated object. Not an exact analogy, but it should give you a feel. I could also bind another instance like so: SuperUsers : ApplicationUser*; however, we only need one instance.

I won’t go too deep into how MGraph works, since Shawn Wildermuth has a great 3 part series on MSchema and MGraph here. Just notice that MGraph takes on this format:

InstanceName
{
    DataInstanceName(Optional)
    {
        FieldName="Value"
    }
}

Here is the implementation of my type:

//MGraph to get some test data in
ApplicationUserCollection : ApplicationUser*;


ApplicationUserCollection
{
     //using a named instance (Steve, etc)
     Steve {
     FirstName="Stephen",
     LastName="Forte",
     Password="Telerik"
     },
     Vassimo {
     FirstName="Vassil",
     LastName="Terziev",
     Password="123"
     },
     Zarko {
     FirstName="Svetozar",
     LastName="Georgiev",
     Password="456"
     },
     Todd {
     FirstName="Todd",
     LastName="Anglin",
     Password="789"
     }
}


When this is all compiled by the M parser, it will be transformed into TSQL INSERT INTO statements for my table, the table that was defined by my ApplicationUser type. We don’t have to worry at all about TSQL and SQL Server now, since all we are doing is modeling the application in MSchema and MGraph. We won’t bother converting this to TSQL yet, since I guarantee the M code will change soon.

Since I am at such an early stage of the design phase and I have only modeled one simple entity (and not even a domain entity, for that matter; all applications have users), I am just going to save this file to disk via iPad. In theory, I should push this M into the repository; then when I make changes I would have version history, etc. That seems like too much work at this early stage of the design process, so for now I am just saving different versions of the M file on disk and will push the M to the repository later, when I am more confident in my design. Is this a best practice? I don’t know, but it feels right at this time to keep everything local on disk. Time will tell, and my thinking on this may change. (It will be fun a year from now to re-read this blog post and compare my thinking then to now.)

Stay tuned; the next part will show a domain entity and some refactoring. As I make progress each day, I will continue to post.

posted on Wednesday, February 11, 2009 11:32:47 AM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback
# Tuesday, February 10, 2009

Many of you have been asking, so here it is. You can download the sessions materials here.

posted on Tuesday, February 10, 2009 9:29:45 PM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback

I spoke at the very first meeting of the Cairo, Egypt-based .network user group back in 2007. It will be my pleasure to speak at their first ever code camp in Cairo on February 19th and 20th (my birthday). Register here.

Poster2

posted on Tuesday, February 10, 2009 9:37:16 AM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback
# Friday, January 30, 2009

Earlier today the Oslo SDK January CTP was released on MSDN. A lot of people have been asking since the PDC, “What is Oslo?” Oslo is a new platform from Microsoft that allows you to build data-driven applications. Oslo revolves around the application’s metadata. As Chris Sells describes in a great white paper on Oslo:

Metadata can be defined as the data that describes an application, for example, how a home page is rendered or how the purchasing workflow operates. Your application data represents the state of the execution of an application, such as what Bob put in his shopping cart on your Web site or his shipping address in your checkout workflow.

To provide a common set of tools for defining metadata so that it can be stored in a place that provides the same set of features as normal application data, Microsoft is creating "Oslo," a platform for building data-driven applications. "Oslo" is composed of three elements: a family of languages collectively called "M," a visual data manipulation tool called "Quadrant," and a data store called the repository.

Telerik is building some cool Oslo utilities, and I am in the middle of designing them. As I was talking to Chris about some of the specs the other day, he asked me: “What are you using to keep track of the metadata of your application in your design process?” I was like: “Pen, paper, whiteboard, Word, and Excel.” He said, “Why are you not using Oslo?” Then it struck me: I was in .NET programmer mode. So last decade. While I am using Visual Studio 2008, WPF, SQL Server 2008, and the Oslo SDK to build an application for Oslo, I was not using Oslo to help build the application.

The application is in its earliest phases (just moving from an idea and whiteboard drawings to design). I confess I made my first mistake: I did not think about a model; I was thinking about the app. So I started over and began to model what the app would do using Oslo. How do you model an application using Oslo? You use the M language.

Specifically, at this phase you would use the MSchema portion of the M specification. I started by creating a schema using MSchema to hold some application artifacts. This requires a different way of thinking, but it is worth the effort because now information about my application is stored in the repository, and I will have version history and a much easier time generating the application when the time comes. (You can also use the MGrammar portion of the M specification to create a domain specific language (DSL); however, that part of the process won’t come for this application until a little later on.)

As I make progress designing and building this application, I will post it here. You can follow along and learn from my mistakes. Stay tuned, look for the “Oslo” category on this blog.

posted on Friday, January 30, 2009 11:12:43 PM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback
# Thursday, January 29, 2009

Mary Chipman and I are doing a talk together at TechEd in Los Angeles this May on building solutions “without spending any money.” One of the tricks we will show is using an Access front end with table-valued parameters (TVPs) on the back-end SQL Server. She posted about it on the Access team’s blog yesterday. Check it out here.
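For those who haven’t seen them, a TVP lets you pass a whole set of rows to a stored procedure in a single call instead of one row at a time. A generic sketch of the SQL Server 2008 syntax (not from the talk; the type, procedure, and table names here are made up):

--define a table type that can be passed as a parameter
create type dbo.OrderIDList as table (OrderID int primary key);
go

--the TVP must be declared readonly
create procedure dbo.GetOrdersByID
    @IDs dbo.OrderIDList readonly
as
    select o.*
    from dbo.Orders o
    join @IDs i on o.OrderID = i.OrderID;
go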

posted on Thursday, January 29, 2009 1:37:56 PM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback
# Wednesday, January 28, 2009

If you attended my user group on data driven RESTful apps, you can download the slides and code here. Enjoy!

posted on Wednesday, January 28, 2009 2:53:20 PM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback