# Tuesday, August 31, 2010

In Part I we looked at when you should build your data warehouse and concluded that you should build it sooner rather than later to take advantage of optimized reporting and views. Today we will look at your options for building your data warehouse schema.

When architecting a data warehouse, you have two basic options: build a flat “reporting” table for each operation you are performing, or build with BI/cubes in mind and implement a “star” or “snowflake” schema. Let’s take a quick look at the first option and then we will take a look at the star and snowflake schemas.

Whenever the business requests a complex report, developers usually end up slowing the system down with a complex SQL statement or operation. For example, pretend the business wants this report out of our order entry (OLTP) system: show me the top ten customers in each market, including each customer’s overall rank. You would usually have to do all of this:

  1. Complex joins for unique customer
  2. Rollup the sales
  3. Ranking functions to determine overall rank
  4. Partition functions to segment the rank by country
  5. Standard aggregates to get the sales
  6. Dump all of this to a work table in order to pull out the top 10 (if you don’t do this, you will lose the overall rank)

A typical SQL statement to do steps 1-5 would look like this:

WITH CTETerritory
AS
(
   SELECT cr.Name AS CountryName, soh.CustomerID,
          SUM(soh.TotalDue) AS TotalAmt
   FROM Sales.SalesOrderHeader soh
   INNER JOIN Sales.SalesTerritory ter
      ON soh.TerritoryID = ter.TerritoryID
   INNER JOIN Person.CountryRegion cr
      ON cr.CountryRegionCode = ter.CountryRegionCode
   GROUP BY cr.Name, soh.CustomerID
)
SELECT *,
   RANK() OVER (ORDER BY TotalAmt DESC) AS OverallRank,
   RANK() OVER (PARTITION BY CountryName
                ORDER BY TotalAmt DESC, CustomerID DESC) AS NationalRank
FROM CTETerritory

Argh! No wonder developers hate SQL and want to use ORMs! (I challenge the best ORM to make this query!)

Instead you can create a table, let’s call it SalesRankByRegion, with the fields CountryName, CustomerID, TotalSales, OverallRank, and NationalRank, and use the above SQL as part of a synchronization/load script (sketched below) to fill your reporting table on a regular basis. Then your SQL statement for the above report looks like this:

SELECT * FROM SalesRankByRegion
WHERE NationalRank BETWEEN 1 AND 10
ORDER BY CountryName, NationalRank


That is more like it! A simple select statement is easier for the developer to write, the ORM to map, and the system to execute.
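
Filling SalesRankByRegion is just as simple. Here is a minimal sketch of the synchronization/load script (the column types and the full truncate-and-reload approach are my assumptions; an incremental load would work just as well):

-- Run once: create the reporting table.
CREATE TABLE SalesRankByRegion (
   CountryName  nvarchar(50) NOT NULL,
   CustomerID   int          NOT NULL,
   TotalSales   money        NOT NULL,
   OverallRank  int          NOT NULL,
   NationalRank int          NOT NULL
)

-- Run on a schedule: wipe and reload from the OLTP tables.
TRUNCATE TABLE SalesRankByRegion

;WITH CTETerritory AS
(
   SELECT cr.Name AS CountryName, soh.CustomerID,
          SUM(soh.TotalDue) AS TotalAmt
   FROM Sales.SalesOrderHeader soh
   INNER JOIN Sales.SalesTerritory ter
      ON soh.TerritoryID = ter.TerritoryID
   INNER JOIN Person.CountryRegion cr
      ON cr.CountryRegionCode = ter.CountryRegionCode
   GROUP BY cr.Name, soh.CustomerID
)
INSERT INTO SalesRankByRegion
      (CountryName, CustomerID, TotalSales, OverallRank, NationalRank)
SELECT CountryName, CustomerID, TotalAmt,
       RANK() OVER (ORDER BY TotalAmt DESC),
       RANK() OVER (PARTITION BY CountryName
                    ORDER BY TotalAmt DESC, CustomerID DESC)
FROM CTETerritory

Run the reload nightly, or as often as the business can tolerate stale rankings, and the reporting table stays current.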

The SalesRankByRegion table is a vast improvement over having to query all of the OLTP tables (by my count there are three tables plus the temp table). While this approach has its appeal, your reporting tables will start to proliferate very quickly: each new report wants its own table.

Your best option is to follow one of the two industry standards for data warehouse tables: a “star” or a “snowflake” schema. Using a schema like this gives you a few advantages. These schemas are more generic than SalesRankByRegion, which was a table built for one query/report, so you can run many different reports off each table. Another advantage is that you can build cubes very easily off a star or snowflake schema, as opposed to off a pile of SalesRankByRegion-style tables.

The design pattern for building true data warehouse tables is to build a “fact” table: a table that contains detail-level (or aggregated) “facts” about something in the real world, like an order or a customer. Inside the fact table you will also have “measures,” numeric values that describe the fact. To support your fact table you will have “dimension” tables. A dimension is a structure that categorizes your data, usually in the form of a hierarchy. A dimension table could be “time,” for example, with a hierarchy of OrderYear, OrderQuarter, OrderMonth, OrderDate, OrderTime.

There are tons of tutorials on the internet that show you how to build a star or snowflake schema and the difference between the two, so I will not repeat them here. (You may want to start here.) I’ll just give you the high-level view of a simple star schema.

Let’s say we have an order entry system, such as Northwind (the Microsoft SQL Server sample database). You can have a fact table that revolves around an order. You can then have three (or more) dimension tables that focus on time, product, and salesperson. The time dimension would roll up the order date by year, quarter, month, and date. The product dimension would roll up products by category. (In most systems you would have a much deeper hierarchy for products.) The salesperson dimension would roll up the employee, the employee’s manager, and the department they work in. The key of each dimension table would then appear as a foreign key in the fact table, alongside the measures (the numerical data describing the fact).

There is an example similar to this in Programming SQL Server 2008, a book where I am a co-author. Here is a modified version of that demo:

Dimension tables:

CREATE TABLE [dwh].[DimTime] (
[TimeKey] [int] IDENTITY (1, 1) NOT NULL Primary Key,
[OrderDate] [datetime] NULL ,
[Year] [int] NULL ,
[Quarter] [int] NULL ,
[Month] [int] NULL 
) 

CREATE TABLE [dwh].[DimProduct] (
[ProductID] [int] not null Primary Key,
[ProductName] nvarchar(40) not null,
[UnitPrice] [money] not null,
[CategoryID] [int] not null,
[CategoryName] nvarchar(15) not null
) 

CREATE TABLE [dwh].[DimEmployee] (
EmployeeID int not null Primary Key,
EmployeeName nvarchar(30) not null,
EmployeeTitle nvarchar(30),
ManagerName nvarchar(30)
)

Fact table:
CREATE TABLE [dwh].FactOrder (
[PostalCode] [nvarchar] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[ProductID] [int] NOT NULL ,
[EmployeeId] [int] NOT NULL ,
[ShipperId] [int] NOT NULL ,
[Total Sales] [money] NULL ,
[Discount] [float] NULL ,
[Unit Sales] [int] NULL ,
[TimeKey] [int] NOT NULL 
)
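
The DDL above leaves the star’s relationships implicit. Here is a sketch of wiring the fact table to its dimensions with foreign keys (my addition, not part of the book demo; a fuller version would also give ShipperId its own DimShipper):

ALTER TABLE [dwh].[FactOrder] ADD CONSTRAINT FK_FactOrder_DimTime
   FOREIGN KEY (TimeKey) REFERENCES [dwh].[DimTime] (TimeKey)

ALTER TABLE [dwh].[FactOrder] ADD CONSTRAINT FK_FactOrder_DimProduct
   FOREIGN KEY (ProductID) REFERENCES [dwh].[DimProduct] (ProductID)

ALTER TABLE [dwh].[FactOrder] ADD CONSTRAINT FK_FactOrder_DimEmployee
   FOREIGN KEY (EmployeeId) REFERENCES [dwh].[DimEmployee] (EmployeeID)

Some teams drop or disable constraints like these during bulk loads for speed; either way, they document the shape of the star.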

We have the basis of a star schema. Now we have to fill those tables and keep them up to date. That is a topic for Part III.

posted on Tuesday, August 31, 2010 7:30:42 AM (Eastern Daylight Time, UTC-04:00)
# Monday, August 30, 2010

Most developers are scared of “Business Intelligence,” or BI. They think that BI consists of cubes, pivot/drill-down apps, and analytical decision support systems. While those are very typical outcomes of a BI effort, many people forget about the first step: the data warehouse.

Typically this is what happens with a BI effort. A system is built, usually a system that deals with transactions. We call this an OLTP or on-line transaction processing system. Some time passes, reports are bolted on, and business analysts build pivot tables from “raw dumps” of data. As the system grows, reports start to slow down, since the system is optimized to deal with one record at a time. Someone, usually a CTO type, says: “we need a BI system.” A development effort is then spent to build a data warehouse, cubes, and some kind of analytical system on top of those cubes.

I make the argument that developers and project planners should embrace the data warehouse up front. When you design your OLTP system, also design the supporting data warehouse, even if you have no intention of building a full-fledged BI application with cubes and the like. This gives you two distinct advantages. First, you have a separate system that is optimized for reporting, one that allows the rapid creation of many new reports and takes load off the OLTP system. Second, when you do decide to build a BI system based on cubes, you will already have the hard part done: the data warehouse and its supporting ETL.

Since a data warehouse uses a flatter data model (more on this in Part II), you can even design your application to use both the OLTP system and the data warehouse as data sources. For example, when you have highly normalized, third normal form tables to support transactions, it is never easy to use those tables for reporting and displaying information. Those tables are optimized and indexed to support retrieving and editing (or adding/deleting) one record at a time. When you try to do things in aggregate, you start to stress the system, since it was designed to deal with one record at a time.
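
To make the contrast concrete, here is a sketch of the two access patterns (I am borrowing the AdventureWorks table names from Part II; any OLTP schema shows the same thing):

-- OLTP pattern: fetch one record by its key; a cheap index seek.
SELECT *
FROM Sales.SalesOrderHeader
WHERE SalesOrderID = 43659

-- Reporting pattern: aggregate over the whole table; a scan that
-- competes with live transactions for I/O and locks.
SELECT TerritoryID, SUM(TotalDue) AS TotalSales
FROM Sales.SalesOrderHeader
GROUP BY TerritoryID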

This design pattern is already in use today at many places. Consider your credit card company, for example. I use American Express, and I never see my transactions show up online for at least 24 hours. If I go buy something and phone American Express to ask “what was my last transaction,” they will tell me right away. If I look online, I will not see that transaction until the next business day. Why? When you call, the customer service representative is looking at the OLTP system, pulling up one record at a time. When you look online, you are looking at the data warehouse, a system optimized for viewing lots of data in a reporting environment.

You can take this to an extreme: if you run an e-commerce site, you can power the product catalog portion of the site with the data warehouse and the purchasing (inventory) system with the OLTP model. Optimize the site for browsing (database reads) and at the same time have super-fast purchasing (database writes). Of course you have to keep the purchasing/inventory (OLTP) and product display (data warehouse) databases in sync; I’ll talk about that in Part III. Next, I will take a look at how to build the data warehouse.

posted on Monday, August 30, 2010 7:33:02 PM (Eastern Daylight Time, UTC-04:00)
# Wednesday, August 25, 2010

A while ago I was asked by the publisher to be a tech editor of A Practical Guide to Distributed Scrum. Since agile luminaries like Ken Schwaber and Scott Ambler were also tech editors, I was honored to be chosen as well. Reviewing this book was a great experience, and I have re-read it since it was published. (Even though I was paid to be a tech editor/reviewer, the publisher sent me a free copy when the book shipped. Cool!)


You can learn a lot about using Scrum in a distributed environment from reading this book; it is the gold standard. If you have remote employees, offshore developers, or just a lot of offices where the product owner is in one location and the development team in another, this book is for you. The authors walk you through the process of setting up Scrum in a distributed environment, including planning, user stories, and the daily scrum. They give practical advice on how to deal with the problems specific to distributed Scrum teams, most importantly communication and coordination. The authors are from IBM and show some of the techniques IBM uses with its remote employees, offices, and contractors.

I have been doing Scrum in a distributed environment for almost 5 years now, and I still learned quite a bit by reading this book. I encourage you to read it too.

posted on Wednesday, August 25, 2010 6:50:07 AM (Eastern Daylight Time, UTC-04:00)
# Tuesday, August 24, 2010

I recently read Kanban by David J. Anderson. David is credited with implementing some of the first Kanban agile systems at various companies. In Kanban, he gives a great overview of what Kanban is and how it grew out of the physical manufacturing process at Toyota, and he offers practical advice on how to implement Kanban at your organization. David also shows you how to set up a Kanban board and provides several ways to model your system and manage the board.

In addition, David walks us through what the Lean movement is and how it relates to agile software development. He makes a very convincing case for tracking work in progress (WIP) and basing your system around that; Kanban attempts to limit WIP for better throughput. David freely admits that there is no actual scientific evidence yet proving that smaller WIP increases productivity and quality; however, he offers up his own case studies as well as others’.


What I found very helpful is that David reviews the popular Scrum agile methodology and pokes some holes in it. He shows some of the weaknesses of time boxing (the “sprint”), estimating, and the daily scrum, and offers up alternatives via Kanban. David reminds us that agile is a set of values, not a set of rules. (Some people using Scrum today don’t like any change; they are so invested in Scrum that they forget that Scrum is about change.) Scrum forces you to throw out your current system completely and replace it with Scrum. Kanban allows you to keep your existing process and make changes, changes that revolve around communication, WIP, and flow. Kanban will let your current methodology evolve, not completely revolutionize it.

I used a crude, early version of Kanban a few years ago at my startup in New York. (A blog post will come on this next month.) I have also used Scrum pretty extensively over the past few years and realize that neither system is perfect. Kanban is more flexible, and Scrum (in my opinion) makes it easier to get estimates to managers who value “deadlines.” (What managers don’t?) Both have strengths and weaknesses, and David points this out in his book. A few people mix and match and use a “Scrum-ban” system. Personally I have seen the best success with Kanban for system maintenance and Scrum for greenfield start-ups with new teams.

If you are practicing any agile methodology or want to improve your current system, read Kanban. It is worth a try, even if you only implement a few ideas from the book.

posted on Tuesday, August 24, 2010 3:35:54 AM (Eastern Daylight Time, UTC-04:00)
# Monday, August 16, 2010

I will be speaking at my 15th Software Developers Conference in the Netherlands on October 25th and 26th. For some reason the Dutch keep asking me to come back, even though I make fun of the Dutch pretty much full time. The SDC is special for me; the very first international conference that I ever spoke at was the SDC in 1998. I have been back every year (except 2000) and even did a few of the smaller one day conferences. Over the years I have done some crazy things, including showing up for my session after just coming back from the Red Light District in Amsterdam. (Hey what happens in Amsterdam, stays in Amsterdam…) Richard Campbell and I once did a session called “Mid-evening Technical session with Beer.” The abstract said “Bring beer and hear Richard and Steve talk about the latest technology.”


This year I will be doing a Scrum v Kanban v XP v Whatever smackdown that will really be a Q&A led by Remi, Joel, and me. I will also be doing a RIA Services 101 talk: no slides, just demos. If you are in Europe this fall, swing by.

posted on Monday, August 16, 2010 4:03:16 AM (Eastern Daylight Time, UTC-04:00)
# Friday, August 13, 2010

Thursday, August 19, 2010
Building Windows Phone 7 Games in 3D with XNA Game Studio 4.0

You must register at https://www.clicktoattend.com/invitation.aspx?code=149726 in order to be admitted to the building and attend.

Subject:
Why would you be forced to buy a Mac and learn yet another language to write mobile games? The truth is you can reuse your finely honed .NET and C# skills to write games that will run on Windows, Xbox 360, and the hot new kid on the block: Windows Phone 7. Enter XNA Game Studio 4.0. Join ActiveNick in this session as your fast track to the world of mobile game development, where we jump right away into the fun stuff. We’ll go through a quick recap of XNA Game Studio and dive right in. No, we won’t be building no Atari 2600-style 2D games; let’s mess around with the cool 3D stuff. We’ll cover designing games for mobile phones, adapting desktop & console XNA code for Windows Phone 7, tapping into the phone hardware, discussing media assets and the Content Processing Pipeline, and basically covering as much demo code as the evening will allow. Forget SharePoint and Entity Framework, this is the kind of coding you signed up for when you decided to go pro as a coding geek.

Speaker: 
Nickolas Landry, Infusion
Nickolas Landry is Practice Manager in New York for Infusion Development, a Microsoft Gold Partner which offers quality software development services, developer training, and consulting services for large corporations and agencies in North America, the UK, and Dubai (www.infusion.com). Known for his dynamic and engaging style, he is a frequent speaker at major software development conferences worldwide, a member of the INETA and MSDN Canada Speakers Bureaus, and a 6-year Microsoft MVP on Device Application Development. With over 18 years of professional experience, a software architect by trade and a career almost entirely dedicated to Microsoft technologies, Nick specializes in .NET mobility, Bing Maps & Location Intelligence, High-Performance Computing (HPC), Game Development with XNA, and Smart Clients. He has written multiple articles for CoDe Magazine and several .NET mobility courses for Microsoft, has been a technical editor for many books, and holds several professional certifications from Microsoft and IBM. www.twitter.com/ActiveNick

Date: 
Thursday, August 19, 2010

Time: 
Reception 6:00 PM, Program 6:15 PM

Location:  
Microsoft, 1290 Avenue of the Americas (the AXA building, bet. 51st/52nd Sts.), 6th floor

Directions:
B/D/F/V to 47th-50th Sts./Rockefeller Ctr
1 to 50th St./Bway
N/R/W to 49th St./7th Ave.

posted on Friday, August 13, 2010 3:42:24 AM (Eastern Daylight Time, UTC-04:00)
# Thursday, August 12, 2010

This coming October, I will be speaking at DevReach in Sofia, Bulgaria. DevReach is a great event and will be entering its 5th year. It is a two-day event with A-list speakers (excluding myself, of course). World-famous Scott Stanfield is the keynote speaker this year, and there will be some great BI content presented by Andrew Brust. Joel, Remi, and I will be leading a Scrum/Agile/Kanban/Scrum-but “smackdown” talk/discussion. At only 200 euros, it is the best bargain in Europe! You can register here.

I have spoken at every DevReach so far and will keep speaking there until they tell me they don’t want me anymore. DevReach is special to me: at the first DevReach, I played a very small role in helping conference founder Martin Kulov recruit speakers and plan the event. It was also at that first DevReach that I first met my current employer, Telerik.

I liked it so much, I stayed. ;) Watch out, it could happen to you….

posted on Thursday, August 12, 2010 6:41:58 AM (Eastern Daylight Time, UTC-04:00)
# Wednesday, August 11, 2010

Microsoft recently released a CTP of the cloud-based SQL Azure management tool, code-named “Houston”. Houston was announced last year at the PDC and is a web-based version of SQL Management Studio (written in Silverlight 4.0). If you are already using SQL Management Studio, there really is no reason to use Houston; however, having the ability to do web-based management is great. You can manage your database from Starbucks without the need for SQL Management Studio. Ok, that may not be a best practice, but hey, we’ve all done it. :)

You can get to Houston here. It will ask you for your credentials; log in using your standard SQL Azure credentials, but note that for “Login” you have to use the username@server format.


I logged in via Firefox and had no problem at all. I was presented with a cube control that allowed me to see a snapshot of the settings and usage statistics of my database. I browsed that for a minute and then went straight to the database objects. Houston gives you the ability to work with SQL Azure objects (tables, views, and stored procedures): you can create, drop, and modify them.


I played around with my tables’ DDL and all worked fine. Then I moved on to the data. I was surprised to find that you can open a .SQL file off your local disk inside of Houston!


I opened up some complex queries that I had written for Northwind on a local copy of SQL Server 2008 R2 and tested them out. The scripts all worked fine; however, there was no code formatting that I could figure out (hey, that is ok).

I wanted to test whether Houston supported selecting a piece of TSQL and executing only that piece. I was sure it would not work, but I tried it with two SELECT statements, highlighted one, and got back just one result. (I rearranged the statements, highlighted only the second one, and it still worked!) Just to be sure, I put in a SELECT and a DELETE statement, highlighted only the SELECT, and only that piece of TSQL executed.
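
My test was along these lines (a reconstruction, not the exact script; the Northwind table names are just for illustration):

-- Highlight only the SELECT and execute; if partial execution
-- works, the DELETE below never runs.
SELECT CustomerID, CompanyName
FROM Customers
WHERE Country = 'Germany'

DELETE FROM Customers
WHERE CustomerID = 'XXXXX'   -- deliberately left unhighlighted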


I then ran two SQL statements together with nothing highlighted and got back two result sets, so the team clearly anticipated this scenario!


All in all I am quite happy with the CTP of Houston. Take it for a spin yourself.

posted on Wednesday, August 11, 2010 6:16:33 AM (Eastern Daylight Time, UTC-04:00)
# Tuesday, August 10, 2010

Google and Verizon unveiled on Monday a proposal that would create two internets: the open one we know and love today, and a more expensive one with dedicated pipes and premium content and services. In theory it would work like this: if you wanted something like YouTube in 3D HD quality with special content (new movies, etc.), that content would only be available on a different set of pipes, pipes you would have to pay for. This would lead to a tiered, less open Internet.

As expected, Net Neutrality supporters went nuts. As reported by Wired, Free Press Political Adviser Joel Kelsey said:

Google and Verizon can try all they want to disguise this deal as a reasonable path forward, but the simple fact is this framework, if embraced by Congress and the Federal Communications Commission, would transform the free and open Internet into a closed platform like cable television. ... It’s a signed-sealed-and-delivered policy framework with giant loopholes that blesses the carving up of the Internet for a few deep-pocketed Internet companies and carriers …

I am torn on this issue. I consider myself a free market libertarian, and I know what Friedrich von Hayek would say: let Google and Verizon do what they want; tiered pricing is a way to deal with scarcity.

Von Hayek is right: there are only so many fat, fast pipes on the internet (scarcity), and if people are willing to pay for premium content and services, as with cable TV, then the market should allow for that. The theory also says that there will be positive externalities and that the innovation will trickle down to the free/open internet. This was the case with cable TV: cable started with HD TV and innovative programming, and “regular” free TV caught up.

On the other hand, the Internet is more important than cable TV. The Internet is a platform for business and entrepreneurship. It is also a platform for social change (and political protest in some countries). Living in China, I already live in a tiered environment. When I am home in Hong Kong, I can do whatever I want. When I travel 30 minutes north to Shenzhen, I am on the less open, firewalled internet. I see how people use the internet to create businesses and social change here in Hong Kong, and how that does not happen across the border. (Don’t be fooled by online entrepreneurship stories in China; it does not exist there as it does in more open countries.)

While my example of China is a politically charged one that deals more with censorship, the internet is a great way to level the playing field. With cloud computing and cheap skilled software programming labor in developing countries, just about anyone can start a business today and be the next Google. If only certain applications and services were available over the “premium” internet, innovation, entrepreneurship, and social change would all suffer.

posted on Tuesday, August 10, 2010 4:38:20 AM (Eastern Daylight Time, UTC-04:00)
# Monday, August 09, 2010

Last Thursday I did a Scrum session at VSLive on Microsoft’s campus in Redmond, WA. I lectured for about 30 minutes and then we went to Q&A, just how I like it. Actually, we had a true conversation, with people commenting on each other’s questions and comments. Here is what we talked about:

  • The Agile Manifesto and how it is just four items
    • The Agile Manifesto is about values, not rules
    • The values of the Agile movement: communication, delivering business value, collaboration, embracing change
    • How some agile practitioners are not really agile, they forgot the core values and are too rigid
  • Other agile methodologies like XP and Kanban
  • Where Scrum came from: Japan and the Harvard Business Review (1986)
  • The Scrum 101 stuff: the daily scrum, iterations, the team, backlogs
  • The world’s greatest project management tool: Microsoft Excel
  • A little on velocity and agile estimation
  • A lot on testing, where to put testers
    • One guy had his testers outside of the sprint, and it worked for him
    • One guy thought about staggering the testers one week behind the dev sprint (we had mixed reviews on that)
  • It is ok to change Scrum!
    • How the inventor of Scrum wants my head for that bullet ;)
    • The best approach is a “buffet table”
  • Lean processes at Toyota and how they relate to software development (a la Kanban)

Glad that we had a conversation rather than a straight lecture.

posted on Monday, August 09, 2010 8:41:18 AM (Eastern Daylight Time, UTC-04:00)
# Thursday, August 05, 2010

Yesterday I did the “Building RESTful applications with the Open Data Protocol” session at VSLive on Microsoft’s campus in Redmond, WA. We had a lot of fun; we did the following:

  • Looked at some public OData feeds listed at OData.org
  • We randomly picked a feed, the City of Vancouver street parking feed, and consumed it
    • We also discovered that they have weird primary keys
    • We also discovered that Firefox consumed the OData feed much faster than IE (this on Microsoft’s own network!)
  • Saw how to create a feed automatically from SQL Azure tables
  • Consumed a feed in Microsoft PowerPivot
  • Built a feed on the fly using the Entity Framework and WCF Data Services
  • Consumed that feed in ASP.NET and Silverlight
    • Also looked at Query Interceptors and Service Operations briefly
  • Talked about security, both at the service level and at the IIS/ASP level
  • Made fun of the previous speaker
  • Showed how you can create a feed using 3rd party tools

I wrapped up the talk with a discussion about when you would use OData compared to other technologies, such as RIA Services. My rule of thumb was that if you are building an application that you control and your users will consume, you should consider technology such as RIA Services (if you are using Silverlight) or ASP.NET, MVC, etc. If you want to expose parts of your application as a data feed and let others consume it and build applications around it, then consider OData.

You can download the slides and code here.

posted on Thursday, August 05, 2010 9:25:53 AM (Eastern Daylight Time, UTC-04:00)
# Wednesday, August 04, 2010

Microsoft has made two interesting announcements this summer: one is the WebMatrix initiative and the other, made yesterday, is Visual Studio LightSwitch. Both have driven developers to the point of dogma over the role of these tools.

WebMatrix, along with IIS Express and SQL Server Compact Edition, is a tool aimed at the geeky hobbyist or college kid in their dorm wanting to make a web application, or a dad wanting to build a web site for the youth soccer team. As part of WebMatrix there is ASP.NET Razor, a new streamlined ASP.NET view engine, making it easier to mesh C#/VB and HTML. Let’s be clear: WebMatrix is not targeting the professional developer. To quote from Scott Gu’s blog:

If you are a professional developer who uses VS today then WebMatrix is not really aimed at you - at least not for your "day job".  You might find WebMatrix useful for quickly putting a blog on the web or doing lightweight scripting on the side.  But it isn't intended or focused on hard-core professional or enterprise development.  It is instead aimed more for people looking to learn how to program and/or who want to get a site up and running on the web without having to write much code.

Ok, glad that we cleared that up. ;) Well, the story goes on. As part of the WebMatrix stack, Microsoft made some updates to the Microsoft.Data namespace. It was announced on this blog here and started a debate. One group on the blogs and Twitter, led by Oren Eini, was very critical of the new Microsoft.Data. I can sum up the online debate like this:

Developers: Wow, are you crazy! SQL is dead, ORMs will inherit the earth. These changes should have come in .NET 2.0, not in 2010!

Microsoft: Yes we get the ORM thing. The changes to Microsoft.Data are for WebMatrix and beginning developers. If you have already used ORMs and implement best practices and patterns, great, keep going, these changes are for a different audience.

On top of all of this, yesterday Microsoft released Visual Studio LightSwitch, Beta 1. LightSwitch, formerly known by the code name “Kitty Hawk,” is a RAD tool targeted at the non-professional developer who wants to build line of business applications.

Professional developers are asking: Why do I need WebMatrix? Or LightSwitch? Some debates have even gotten downright nasty. The answer is that WebMatrix and LightSwitch (and the changes to Microsoft.Data) are not for professional developers! A newbie at home or in a college dorm would use WebMatrix to build a web site. A geeky guy in a corporate job would use LightSwitch to build a business application. This is a good thing.

What Microsoft is doing is building a bridge to .NET and professional development. Without any formal computer science training, I was once this target market. For example, back about 18 years ago, I was a hobbyist hacker in my dorm room discovering PCs. (If that were me today, WebMatrix would target me; however, 18 years ago there was no web. <g>) About 16 years ago when I graduated university, I was that geeky guy in corporate who needed to build a line of business application. (If that were me today, LightSwitch would target me.) I used Lotus Script and 1-2-3, FileMaker Pro, and Excel and Access. Eventually I taught myself some VBA, and not too long after I “graduated” to VB, when VB 3.0 shipped the database compatibility layer (ok, I am now dating myself!). Fast forward a few years to VB 4.0 and 5.0, and I made the jump from hacker geek to professional developer. A few years later when .NET came out, I was well into my professional developer career.

The problem is that there is no such bridge to .NET today. Back in the mid-1990s, there was a bridge from hacker/corporate geek to professional developer: VBA. If you built some advanced formulas in Excel or some forms, reports, and database logic in Access, you would usually hit a wall and have to learn some VBA. This was in addition to your day job, you know, as a financial analyst or credit adjuster. Along the way, you might realize that the coding thing is really your game, not your day job. That happened to me. Today that bridge is gone and has been for years. WebMatrix and LightSwitch are an attempt to rebuild it. I just hope that today’s professional developers realize that.

Just as BMW has entry-level cars, and even completely different brands like Mini, for one market segment and the turbocharged, hand-built M series for another, Microsoft is segmenting the market, trying to build a bridge to .NET. I hope they succeed.

posted on Wednesday, August 04, 2010 9:30:56 AM (Eastern Daylight Time, UTC-04:00)
# Tuesday, August 03, 2010

If you are going to the Microsoft MVP Global Summit in late February 2011 in Seattle, Washington, or will just happen to be in the neighborhood, you should sign up for GeekGive. GeekGive is an organization that sponsors a one-day charity event in the community where a bunch of geeks are congregating for a conference. The first GeekGive project was back in June with Habitat for Humanity in New Orleans, where Microsoft TechEd was held.


At the MVP Summit, GeekGive will be supporting Northwest Harvest, Washington’s own statewide hunger relief agency. In New Orleans, many people wanted to help GeekGive but did not know about it or did not have enough time to plan their travel around the event. Well, the MVP Summit is now 208 days away, so you have well over six months to plan. See you there!

posted on Tuesday, August 03, 2010 2:10:32 AM (Eastern Daylight Time, UTC-04:00)
# Monday, August 02, 2010

A little over five weeks from now, I will be headed back to Nepal. I will be visiting the Hillary School in Khumjung and trekking to Gokyo Peak and Mt. Everest Base Camp. I am doing all of this to raise awareness for a charity I am involved in, Education Elevated. (Donate here!) We are raising money to follow up on our September 2009 trip to Chyangba village, where we built a library for the current school. Next April (2011) we will go back to Chyangba and distribute the school uniforms and textbooks that your last round of donations purchased. We will also start a campaign to raise money for a new building to house the school. Thanks to all of you who have donated!


PS: I’ll also be carrying in donated supplies for a high-altitude health clinic. If you want to donate, you can pay me directly via PayPal, and I will bring in over-the-counter drugs and medical supplies from Hong Kong.

posted on Monday, August 02, 2010 5:38:27 AM (Eastern Daylight Time, UTC-04:00)