
MACH5

After running two successful batches of the mobile-focused AcceleratorHK in Hong Kong, Telerik is announcing the Mach5 Accelerator in Silicon Valley. Mach5 will focus on startups doing HTML5 web or mobile development using our HTML5 framework, Kendo UI.

The accelerator will be located in our office on University Avenue in Palo Alto, the heart of Silicon Valley. We’ll run from January 6th until April 11th, 2014. Applications are open until November 22nd for the first batch: apply here.

The batch will be small, only three teams, but the benefits are huge. Besides office space, you’ll have a great 14-week program complete with Silicon Valley mentors, up to $25k USD in investment (in exchange for 4%-6% equity), and a customer development and MVP boot camp.
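
For teams weighing those terms, the implied valuation is simple arithmetic (my illustration; the only official numbers are the ones quoted above): post-money valuation = investment / equity fraction.

```python
# Implied valuation from the accelerator terms quoted above:
# post-money = investment / equity fraction (illustrative arithmetic only).

investment = 25_000  # up to $25k USD

for equity in (0.04, 0.06):
    post_money = investment / equity
    pre_money = post_money - investment
    print(f"{equity:.0%} equity -> post-money ${post_money:,.0f}, pre-money ${pre_money:,.0f}")
```

In other words, $25k for 4%-6% works out to a post-money valuation of roughly $417k-$625k.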

Telerik resources are at your disposal too. In addition to the mentors from Silicon Valley, Telerik will provide a senior developer from our Professional Services team onsite for a few weeks of the program to help the teams get started. In addition to the techie help, our demand generation, community, and “growth hacker” experts will provide assistance to the teams. While you are in the Valley for the program, tap into our Silicon Valley staff’s vast network. Lastly, our Video Production team will assist with some high-quality videos for the teams to use in their marketing and fundraising campaigns.

The best applicants are two-person startups, one techie and one business person, doing HTML5 development and willing to relocate to Silicon Valley for 14 weeks to work on the startup full time. Applications are open until November 22nd for the first batch: apply here.

posted on Monday, October 07, 2013 10:34:24 AM (Eastern Daylight Time, UTC-04:00)

This was the last “normal” week at AcceleratorHK, normal meaning that we ran our regularly scheduled program: 1:1 meetings, mentor visits, the Friday check-in, and other activities. The following week will be the final week to prep for Demo Day, which is on August 13th.

One more team released an MVP! Icevault, offline storage for online currencies. You can easily sign up: a bitcoin address is generated for you right away, and the private key is encrypted and securely saved offline for you.
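
In spirit, a cold-storage signup flow like the one Icevault describes might look like the sketch below. This is my simplification, not Icevault's code: a real Bitcoin address is derived from an ECDSA public key with Base58Check encoding, and a real system would encrypt with a proper cipher such as AES rather than this toy XOR construction.

```python
import hashlib
import secrets

# Toy sketch of an offline key-storage flow (NOT Icevault's actual code,
# and NOT real Bitcoin address derivation, which needs ECDSA + Base58Check).

# 1. Generate a random 256-bit private key offline.
private_key = secrets.token_bytes(32)

# 2. Derive a stand-in "address" (real addresses hash an ECDSA public key).
address = hashlib.sha256(private_key).hexdigest()[:34]

# 3. Encrypt the private key with a passphrase before storing it offline.
#    XOR with a PBKDF2-derived pad is only a demo; use AES in practice.
def encrypt(key_bytes: bytes, passphrase: str, salt: bytes) -> bytes:
    pad = hashlib.pbkdf2_hmac("sha256", passphrase.encode(), salt, 100_000, dklen=len(key_bytes))
    return bytes(a ^ b for a, b in zip(key_bytes, pad))

salt = secrets.token_bytes(16)
stored = encrypt(private_key, "correct horse battery staple", salt)

# Decryption is the same XOR, so the round trip recovers the key.
assert encrypt(stored, "correct horse battery staple", salt) == private_key
```

The point is simply that the private key never has to leave the offline machine in plaintext.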

We had a great mentor come in and visit us. Michele Leroux Bustamante spent about an hour and a half with each team over two days, plus sat in on the Friday check-in and provided valuable feedback to the teams on their presentations.


Michele had her original flights rescheduled due to a foul-up in San Francisco, so she extended her trip until Saturday and we all got to spend a little more time with her. That means she got to get down as the teams blew off some steam in LFK on Friday night. :)


This coming week is the last week of the program! We will have five different people come in to work with the teams on their Demo Day presentations, all while the teams keep building their products! It is crunch time, to say the least.

See you all at Demo Day on August 13th!

posted on Saturday, August 03, 2013 11:33:25 PM (Eastern Daylight Time, UTC-04:00)

As the program drives toward the Demo Day finish line, we had an action-packed week at AcceleratorHK! All six teams have an MVP up and running. You can check out three of them here:

We also had Hristo Neychev come in and spend a few days mentoring the teams. Hristo works at Telerik as the PM for Icenium and has a lot of experience mentoring startups at Launchub, an accelerator in Bulgaria. He spent an hour or two with each team, as well as extra time doing customer development of his own with the teams and other companies in Hong Kong using Icenium. Hristo also mentors teams on startup presentations, so he worked with each team on their presentations for 30 minutes on Friday before Prototype Day.


On Friday we had our second (and last) “Prototype Day,” when the teams make their Demo Day presentations to a group of mentors and take live Q&A on their business model. This is different from our Friday check-ins, where the teams may present what they did the prior week or practice an investor or customer pitch they have coming up. We had five awesome mentors come in to listen to the presentations and provide feedback:


The teams made pretty solid presentations and got a lot of feedback. Demo Day is only two weeks away and the teams should all be ready! Unfortunately, Friday was Paul’s last day at AcceleratorHK. :( After Prototype Day we went out for a few drinks to wish Paul well in his new life in LA.


Demo Day is August 13th, register here! See you all there…

posted on Saturday, July 27, 2013 9:58:39 PM (Eastern Daylight Time, UTC-04:00)

Last Friday a team representing the startup community of Hong Kong went to the appWorks Demo Day in Taipei, Taiwan.


appWorks is a venture firm that also runs a six-month incubator program in which 24 teams start out and get free co-work space and mentors. At the end of the six months, there is a demo day. For this batch (Batch #6), twenty teams each did a five-minute pitch in front of almost 700 people.


The teams ranged from offline retail of organic dog food to urban street tee-shirts, women's health, politics, bio hacking, customized baseball gloves, and much more. The first team to go (urban street tee-shirts) started with a break dance. What a way to start a Demo Day!


As opposed to traditional accelerator demo days, which tend to feature earlier-stage companies, this demo day had quite a few teams with a lot of traction. In addition, the investment climate in Taiwan is only strong for hardware, so the teams tend to go for things that generate revenue as soon as possible. You can see the investment climate’s effect on the startup ecosystem: very few taxi and other “Instagram”-style mass consumer apps, but rather more practical, more local, and less “big swing” companies. It was a great event to watch.

After the Demo Day, the HK team went to Taipei 101, the second tallest (for now!) building in the world and got some dinner.


We met up with the Cubieboard guys (similar to the Raspberry Pi) and I became the first paying customer of their second-generation board. I realized they were not kidding when they took photos of my money and sent them to their investor. This is a great device: a dual-core computer the size of an old PCMCIA card, complete with an SD card slot, USB ports, an Ethernet jack, and an infrared sensor, all for <$60 USD.


Then we headed over to the appWorks offices, where the HK teams pitched to the Taiwan teams, and some Taiwan teams that had not participated in Demo Day did the same. It was all done over beer and pizza, the fuel of startups.


The co-working space is open only to the incubator startups, and there is an additional floor that graduates can rent at below-market rates.


It was a great trip and we hope that by mixing the Taiwan and Hong Kong startup ecosystems we’ll open new markets for each other.

posted on Monday, July 08, 2013 6:51:01 AM (Eastern Daylight Time, UTC-04:00)

The intensity at AcceleratorHK is cranking up. This week was action packed and eventful. On Wednesday we had a trip to visit the offices of Hong Kong startup Frenzoo. At Frenzoo, founder Simon Newstead walked the teams through the early days at Frenzoo, its customer development process, and what it was like being in an accelerator himself. It was great going to visit a living, breathing startup in Hong Kong and “get out of the building.”


On Thursday, we had Roland Yi, director and General Counsel at Gilkron Limited as well as a law professor at HKU, come in and spend time with us on intellectual property (IP) rights and laws. We talked about patents (avoid!), trade secrets, copyrights, and trademarks. Very informative stuff for the teams.


On Thursday night we had a rooftop pool party to wish one of the team members well as he is headed back to the US.


On Friday night I did a presentation about raising money for startups. We covered angel investment, stock options, vesting, dilution, valuation, venture capital, liquidation preferences, and lots more. This session was open to the public, and despite being on a Friday night, we had a great crowd (and lots of beer). Of course, it was a PowerPoint-less presentation!


On Saturday we had a big day. We got up early and traveled to Shenzhen, China and visited the component markets in Huaqiangbei. These are the component markets for the global supply chain and they are something to be seen.


Here is a photo of some of the teams inside of SEG Plaza, one of the most famous of the component markets.


After a few hours at the component markets and the adjacent consumer electronics markets (lots of phone cases, batteries, chargers, and Bluetooth speakers were acquired…), we headed to an evening of teambuilding with the staff of Social Agent, whose founder, Mike Michelini, is a mentor at AcceleratorHK. We went bowling, played pool and ping pong with the staff, and had a great group dinner and drinks before heading home to Hong Kong.


A great week and more to come next week. Stay tuned…

posted on Sunday, June 23, 2013 4:55:15 AM (Eastern Daylight Time, UTC-04:00)

It is hard to imagine that we have already passed the one-month point! After five full weeks at AcceleratorHK, most of cohort 2 is starting to move past customer interviews and into serious MVPs. Paul and I keep reminding the teams that they are still testing assumptions with their MVPs, not building “beta” releases for “feedback.” (That classic mistake leads down the road to traditional product development.)

On Friday we had our first “Prototype Day,” when the teams make their Demo Day presentations to a group of mentors and take live Q&A on their business model. This is different from our Friday check-ins, where the teams may present what they did the prior week or practice an investor or customer pitch they have coming up. While Demo Day is a full two months away, we want to get everyone started and get feedback on their businesses from more folks than just the cohort, Paul, and myself. During the course of the program we have two Prototype Days, usually around the end of the first month and the end of the second month. (Prototype Day #2 is July 26th.)


We had four rock star mentors show up:

The six teams made their presentations and the mentors gave them tons of feedback. The mentors really challenged the teams to think through their models and underlying value propositions. The most surprising thing to the teams was that they suffered from the “curse of knowledge”: sometimes the mentors had no idea what a team’s value proposition was all about. Some mentors even provided feedback on the teams’ logos. :) It was great for Paul and me to take a week off from providing all the constructive criticism.


The teams soaked up the feedback and after a few hours of presentations and Q&A, most of us went to the local Japanese place for lunch.

After Prototype Day, we had a rooftop party scheduled; however, it had to be postponed due to rain. Instead, Team Portugal and I went to an MVP dinner and got to play with an “appcessory,” a device that turns your iPhone into a pinball machine.


We have a big week coming up: two mentor visits, the last public class in the “Early Venture Survival Series,” and of course all the regular 1:1s and check-ins! Stay tuned…

posted on Sunday, June 16, 2013 7:23:42 AM (Eastern Daylight Time, UTC-04:00)

On Tuesday night I visited and delivered a talk to the cohort at Launchub, Bulgaria’s first software startup accelerator. Launchub is financed by a seed fund made up of private and EU money and is hosted at a great co-work space, betahaus, in downtown Sofia. It was great visiting betahaus and meeting the teams. I quickly noticed that the Bulgarian accelerator is the exact opposite of AcceleratorHK: in Bulgaria, the teams are engineering heavy, while in Hong Kong, the teams are business-person/designer heavy. Too bad we are too far away for a merger. :)

I delivered a talk titled “Lessons Learned From a Career in Startups.” I spoke about raising money, how a business partner is like a wife/husband, how to align your staff’s expectations with your own, and then some general customer development (pivots, MVPs, and all the currently popular lingo). My bosses at Telerik are involved in the accelerator and are mentors, so I told some jokes about them too.

During the Q&A, Lyuben Belov, the program director, wanted me to put a team on the spot and have them do their pitch. I turned the tables on him and first asked Lyuben to pitch me on investing in Launchub. (One of the lessons was to be ready to go at any moment!) Lyuben did a fabulous job, and then we picked one team, Useful at Night, to pitch. (It is also cool since this team applied to AcceleratorHK but was already in Launchub. I invited the team to come to HK for a week and spend time with the AcceleratorHK cohort.)


Evelin Velev from the team did a great job with absolutely no prep time.

To round out the evening, I put one of my Telerik colleagues on the spot when a question came up about the future of hybrid development. Hristo Neychev is the director of BizDev for Icenium, so he had better be able to handle that one. :) He did not disappoint, and we had fun; I was making slides on the fly for him.


After the talk and lively Q&A session, we went to a downtown bar and had some food and drinks and talked all night long about startups, technology, and why I joined a Bulgarian Startup when I came to Telerik many years ago.

posted on Thursday, October 11, 2012 5:28:45 AM (Eastern Daylight Time, UTC-04:00)

While speaking on a panel last week at the BizSpark European Entrepreneurship conference in London, I mentioned how in 1999 we raised $36m of venture capital at Zagat in order to get from idea (expressed in PowerPoint) to paying customers. I asserted that back then, in the stone ages, you had to buy lots of servers (usually overcapacity if you had peaks and valleys in traffic), hire lots of expensive people, and spend a ton on marketing to reach the masses.

Then I explained how those numbers started to change due to the cloud and the infrastructure around it (Skype, outsourcing, etc.). I talked briefly about how I started a successful company in 2002 for only $300k of investment and another in 2007 for only $100k. I also recently invested in a company where the total raise was only about $25k, and in less than six months it went from idea to revenue-generating customers.

The panel moderator, David Rowan (Editor, Wired UK), then asked another panelist, Bernard Dalle, a longtime VC from Index Ventures, if his fund was seeing a slowdown in investment. He mentioned that his recent investments in Path and Flipboard raised millions of dollars. Is there a disconnect between what Bernard and I said?

Another panelist, Rob Fraser, CTO of Microsoft, mentioned how the cloud does change everything. Rob, Bernard, and I went on to explain that you still need to spend a lot of money, but the big, game changing difference is that you don’t need to spend it all up front.

At Zagat in late 1999, I spent well over a million dollars on infrastructure (server farm, switches, a priority-based load balancer, etc.) in order to be able to “scale” to the millions and millions of users we expected when we launched a few months later. As I “scaled” from 100 simultaneous users to 1,000 and then 5,000 over the course of a few months, I was still running on that multi-million-dollar infrastructure. Since we had spikes in traffic at lunch and dinner times (go figure) and after Super Bowl ads, etc., we had to have a large server farm. It took us a year before we started adding more servers to the farm to accommodate the nearly one billion monthly unique page views.

Contrast this with today’s startup economics. Today everything is cheaper and better. You can augment your staff with programmers in faraway places and keep in touch via Skype, etc. But most importantly, with the cloud, you pay as you go for server infrastructure.

Getting started today is virtually free. Just sign up with one of the incubator programs at AWS or Azure and you are ready to go live. Once you grow out of the simple startup incubator phase (and you will, pretty quickly), you pay only for the bandwidth/compute cycles that you need (which can peak and valley as you like). You can start out with only a few thousand dollars and slowly increase your infrastructure spending over time as you grow.

Our point on the panel was that you may well wind up spending the same amount of money as I did in 1999, but not all at once, most likely over the course of several years. This drastically changes the economics of startups: you no longer need to go to VCs for lots of money in order to get from idea to customers. Now you can get from idea to at least beta testers on your own dime (or a small amount of angel investment) and go to the VCs later on. If you never get to that later stage, you never would have had to spend that $20-$36m in VC money.
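
A back-of-the-envelope sketch makes the shape of the two spending curves clear (entirely made-up numbers, purely illustrative):

```python
# Illustrative comparison (made-up numbers): 1999-style up-front infrastructure
# vs. pay-as-you-go cloud spend that grows with traffic over 36 months.

upfront_total = 1_000_000  # buy the whole server farm on day one

# Start at $2k/month and grow ~15% per month as traffic grows.
monthly_cloud = [2_000 * 1.15**m for m in range(36)]
cloud_total = sum(monthly_cloud)

print(f"Up-front spend, month 1:  ${upfront_total:,.0f}")
print(f"Cloud spend over 3 years: ${cloud_total:,.0f}")
```

Either way you may eventually spend seven figures; the difference is that the cloud curve only grows if your traffic, and presumably your revenue, grows with it.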

Welcome to the new new startup economics.

posted on Thursday, June 14, 2012 12:31:34 AM (Eastern Daylight Time, UTC-04:00)

Twenty years ago when I entered the high tech industry, every aspiring young entrepreneur dreamed of building the next Microsoft and being the “next Bill Gates.” News articles told us that the next Bill Gates would probably come from Eastern Europe rather than from Silicon Valley (or Seattle where Microsoft is located). Ten years ago when Google got big and went public, every new entrepreneur wanted to be the “next Larry Page.” News articles told us that the next Larry Page would probably come from India or China rather than from Silicon Valley. As Facebook eyes its IPO next month, today young entrepreneurs hope to be the “next Mark Zuckerberg”. News articles now tell us that the next Mark Zuckerberg will come from Brazil, rather than from Silicon Valley.

While I am generally optimistic that the environment for entrepreneurship will only get better all over the world in the coming decades, it is important to realize that there are a number of things that make Silicon Valley unique and for that reason, it is more likely that the next Gates/Page/Zuckerberg will come from the Valley.

There are many things that a location needs in order to support entrepreneurship and its startups: access to capital, awesome infrastructure, a large talent pool, a world-class education system, rule of law, contract enforcement and property rights, transparency/free media, tax structure, modern labor laws, and an underlying geopolitical system that supports all of the above. You can’t have a successful startup if the local government is going to tax you too heavily, can’t enforce a contract, or is unstable and about to be overthrown in a revolution (though a revolution is probably good for entrepreneurship in the longer term!)

Most of the places in the world today are moving in the right direction. Some developing nations support all the items above in my list. Unfortunately, that is only the entrance ticket to a startup culture.

Many places that meet the above criteria have a startup community but lack a startup culture. A startup community is just that: a community of startups that help each other, hold regular meet-ups, share co-work spaces, run pitch nights, and even attract capital. What is lacking is the startup culture.

What is a startup culture? A culture that celebrates failure, encourages people to take risks, and has an ecosystem of startup support, from office space to legal services, accounting, design, advertising, and PR, that will work on equity only or at super-reduced rates.

Most importantly, you need a talent pool with several generations of people who have been through an “exit,” an acquisition or IPO. These people serve as the inspiration for new local startups (“I can’t believe that Bob from the neighborhood made it big at that local startup!”) as well as their mentors and even angel investors. The second- and third-generation folks are willing to work for equity/reduced wages and inspire others who have not had an exit to do the same. This includes not just the founders and developers, but every position in the company. The more people in your location who have been through an exit, the easier it is to build a new company.

My beloved hometown of New York and my adopted hometown of Hong Kong both have vibrant startup communities, but they are years away from building a proper startup culture. Why? They are both very expensive cities to live in, and all the money is in the finance and real estate industries. So if you are starting a new business in New York or Hong Kong, you are competing with the banks not only for your developers and marketing people, but also for office space, accountants, lawyers, etc. Only after several generations of startups reach the exit will the floodgates open and the ecosystem form.

Silicon Valley is one of the few places in the world where this ecosystem exists. I am watching as other locations try to build this ecosystem prematurely. Unfortunately, it will take time, potentially decades in some places.

Will the next Mark Zuckerberg come from Silicon Valley or somewhere else? I hope that he or she will come from somewhere else, however, my money is on Silicon Valley. Does this mean you should move, that your startup is doomed unless you are in Silicon Valley? No! All it means is that the odds are stacked against you, but with entrepreneurship the odds are always stacked against you anyway.

The company where I work, Telerik, started almost 10 years ago in Sofia, Bulgaria. At the time (sorry guys!) Sofia was a European backwater known more for its corruption and mafia than for high-tech entrepreneurship. Telerik has defied the odds, “made it,” and been selected as a Red Herring Global 100 company. How? By changing the culture and consistently earning the best-place-to-work-in-Bulgaria award. The odds were stacked against Telerik too.

posted on Wednesday, March 28, 2012 5:17:48 AM (Eastern Daylight Time, UTC-04:00)

A lot of people have posted tributes to Steve Jobs over the past week. I’ve seen him called the CEO of the Decade (something I agree with) and compared to Henry Ford (something I sort of agree with). I’d like to call attention to four lessons we can learn from Steve Jobs’ Apple, two positive and two negative. First the good:

Apple avoided falling into the trap of the Innovator’s Dilemma

Apple avoided falling into the trap of the Innovator’s Dilemma. In a nutshell, the Innovator’s Dilemma says the following (I am paraphrasing): when you invent something, first you are trying to penetrate a new market and convince people to buy your invention. At this stage you will do anything to get noticed. After a while, your invention becomes mainstream. Your profits become predictable. Your investors grow complacent. Then a new disruptive technology starts to show up here and there. You ask your customers (who are all mainstream consumers or businesses) what they want, and you build that for them. Pretty soon, you go out of business (or drastically lose share) because the new disruptive technology overtook you. You failed because you made good management decisions (focusing on profits, listening to customers, etc.), hence the dilemma. Henry Ford is credited with saying: “If I had listened to my customers, I’d have built a faster horse.”

Apple constantly churned out new products, defining new categories. The pace of innovation was breathtaking: as soon as a new iPhone was released, there were rumors of a newer and faster one. Some would say that Apple was going to cannibalize its older products with the new ones, but it forged ahead anyway, with the profits to show for it. Apple embraced disruptive technologies rather than fighting them.

Apple worked to create an experience, not just raw technology

When you buy an iPad, you are buying an experience. With the iTunes integration you can download apps, books, movies, magazines, and of course music. There is a whole ecosystem around Apple and the iPad; that is why Apple doesn’t OEM iOS to other vendors to build devices. Apple wants to control the experience.

Android, on the other hand, has no such ecosystem. Google builds the OS and lets the OEMs build the hardware. There are phones running Android that are much better than the iPhone, and there are tablets that are just as good as the iPad, but they don’t sell well. Why? There is no ecosystem. I went into the local electronics shop here in Hong Kong and played with the Lenovo and Samsung tablets, and there was no true “feel” to them; each was just a screen waiting for you to configure stuff on. Good for geeks, but not for consumers. My mom needs the simplicity of an ecosystem and an integrated experience.

Google, and by extension its OEMs, figured that slick and cool technology would be enough to win. Apple realized that good technology was not enough: users demanded an experience, and Apple gave it to them.

Now some lessons from things that Steve could have done better:

Apple suffers from the “Curse of the superstar CEO”

When I was in business school, I read a case study called “The Curse of the Superstar CEO.” The article stated that in recent years we have looked for leaders (CEOs) with a lot of charisma, and we tend to worship them like religious figures. The curse of the superstar CEO is very problematic: it leads to leadership succession problems and exaggerates the impact the CEO has on the company they are leading.

Steve Jobs was larger than life; the black turtleneck and jeans (which I liked) became a cultural icon. No matter how great a CEO Tim Cook turns out to be, he will always be compared to Steve Jobs and will always disappoint simply for not being Steve. (If you don’t believe me, just ask Steve Ballmer how he is doing not being Bill Gates.)

Apple took secrecy to an extreme

I understand wanting to keep things secret in a competitive marketplace. I also understand the value of trying to control the message. All that said, Apple took it to an extreme. They shut down fan rumor sites (by suing fans who were kids!), sent the police to people’s homes to look for a lost iPhone prototype, and never talked to the press.

While this creates a tremendous amount of buzz, it also leads to misaligned expectations. When the MacBook Air and the iPhone 4S were announced, their reviews and reception were not that great because people had been holding out for something more. The secrecy worked to generate buzz, but it did not always work out as a positive; taken to such an extreme, it can work against you. While sentiment toward Apple is still super positive and they can get away with a lot, that will not last forever.

posted on Monday, October 10, 2011 11:43:59 AM (Eastern Daylight Time, UTC-04:00)

Today is day 3 of running the Windows 8 Build tablet. Apparently I am the only person in Hong Kong with the Build Win8 tablet, and everyone I know in Hong Kong wants to play with it. Some Telerik customers read my blog post from yesterday and asked me if they could play with the tablet too, so I set up a meeting at a local pub in Hong Kong to let them. So today, the tablet went into the wild. :)

Stop 1: Starbucks

I was a little early to meet our customers, so I hung out at Starbucks. Just about all of the PCs in the Starbucks were Macs. I whipped out the WinPad and it turned a lot of heads. One guy even came up to me and asked me what type of tablet I had. I did an impromptu demo since I was in the middle of a tweet war with some Telerik colleagues back in Bulgaria. Passed the tech elite/Starbucks test. Then an SMS from our customer: time to meet at the pub.

Stop 2: The Pub

When the Telerik customers showed up at the pub, they went to work with the tablet. They liked the desktop mode option, and we got into a long discussion on Metro-only vs. non-Metro-only devices. We played a little with Visual Studio and looked at the references for a native XAML app. We spent a lot of time on Metro. We all agreed that if Win8 delivers as promised, Apple has a ton of competition on its hands. After an hour of playing and talking, I can say that it passed the enterprise customer test. We were late for a networking event, so it was time for the next pub.

Stop 3: The Next Pub

There is a monthly networking event in Hong Kong called Web Wednesday where the techies and social media types gather and talk shop. When we walked into the pub, I bumped into Furuzonfar (Foo-bar for short), a buddy of mine who is a student at Hong Kong University. Furuzonfar is an avid WP7 user and took the tablet for a test run. Pretty soon there was a circle of people around him, and they were playing with it for a long time. Passed the college kid test.

A guy from Intel came by and played with it for a while too. He was happy that it was running an Intel chip. :) Then some other WP7 enthusiasts came by and I had to snag a photo. Suddenly WP7 is a lot more compelling.


Furuzonfar was generating so much attention that a reporter from the Financial Times came by and wanted to know what was going on. I wrestled the tablet away from the college kids and the Intel dude and did a demo for the FT reporter. She particularly liked how you can snap an app to the side of the screen. As a Mac user, she was impressed, so it passed the reporter test.

Finally it was time for the tablet’s field trip to end, and I headed home. All in all, a lot of activity for a tablet in one evening. I head home to New York for the weekend and will see if it passes the hardest test of all: the Mom test.

posted on Wednesday, September 21, 2011 11:33:51 AM (Eastern Daylight Time, UTC-04:00)

Back in January, I argued that AppStores are not necessary as mobile economics mature and start to mimic web economics. Why do I need to download Skype from the AppStore when I can just go to Skype.com and do the same?


Apple changed the rules, and suddenly the AppStore looks like it may die as a toddler. Back in February, Apple implemented new rules for advertising revenue and media content. If your app is in the AppStore and you generate revenue from a new customer, you have to give Apple 30% of the revenue from everything you sell. As per Steve Jobs:

"Our philosophy is simple -- when Apple brings a new subscriber to the app, Apple earns a 30% share... When the publisher brings an existing or new subscriber to the app, the publisher keeps 100% and Apple earns nothing."

Talk about a finder’s fee! Take the Kindle, for example. If a new customer downloads the Kindle app on the iPad and buys a book for $10, Apple gets $3 from Amazon, killing its margins. The same goes for the New York Times, the Economist, the Financial Times, and other publications. What particularly vexed those publications is that Apple would tell the publisher absolutely nothing about the subscriber (Apple owns that data!), eliminating any ability to personalize marketing to their own subscribers!
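
That $10-book example generalizes to a one-line split. This is a sketch of the policy as described above, written in integer cents to avoid float rounding; it is not anything resembling Apple's actual billing code:

```python
# Revenue split under the AppStore rules described above, in integer cents.
APPLE_CUT_PERCENT = 30

def publisher_net_cents(price_cents: int, apple_acquired: bool) -> int:
    """What the publisher keeps on a sale: 70% if Apple brought the
    subscriber to the app, 100% if the publisher brought them directly."""
    if apple_acquired:
        return price_cents * (100 - APPLE_CUT_PERCENT) // 100
    return price_cents

# The $10 Kindle book: Amazon keeps $7.00 via the app, $10.00 direct.
assert publisher_net_cents(1000, apple_acquired=True) == 700
assert publisher_net_cents(1000, apple_acquired=False) == 1000
```

That 30% gap on every in-app sale is exactly what drove the publishers toward the web, as described below.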

Content Producers Strike Back

The content producers started to fight back. Amazon was the first to strike with its web based Kindle Cloud Reader. It is a web application that uses web standards (HTML5) to allow users to read (online or offline!) their books. You can install a link on your iPad home screen making it look like an app, but it is not. It is just a web site and you completely bypass the AppStore, allowing Amazon to keep 100% of the revenue and customer data.

Another popular content producer struck an even deeper blow to the AppStore. The Financial Times, the winner of the Apple Design Award in 2010, has done the same as Amazon and released a cloud based version of their popular iPad app. Then in a move that can only be described as insurrection, the Financial Times has pulled its (award winning!) iPad and iPhone apps altogether from the AppStore!

With such moves by industry leaders Amazon and the Financial Times, the floodgates are open for others to follow. Apple can’t block the web on its devices, so this is the end of the AppStore as we know it. Even if Apple comes back and says, “OK, OK, we will only take 3%, not 30%,” why would Amazon give Apple 3% when it can keep 100% for itself? Having tasted freedom, publishers will never come back.

It was nice knowing you AppStore. RIP.

posted on Tuesday, September 06, 2011 5:51:05 AM (Eastern Daylight Time, UTC-04:00)  #    Comments [7] Trackback

Thursday, June 16, 2011 
What's new in ASP .NET MVC 3.0

Subject: You must register at https://www.clicktoattend.com/invitation.aspx?code=155683 in order to be admitted to the building and attend.

Whether you are contemplating adding ASP.NET MVC to your toolbox or have already been using ASP.NET MVC 1 or 2, there is something for you in this session. John will present the major new features in ASP.NET MVC 3, which include Razor-based views, sessionless controllers, new SEO enhancements, new helper methods, and Dependency Injection enhancements, to name a few. In addition, John will illustrate how to incorporate IIS Express into your development efforts. Time will be allocated for general questions you may have regarding Visual Studio, general development topics, etc.
 


Speaker: John Petersen

John Petersen has been developing software for 20 years, starting with dBase, Clipper, and FoxBase+, thereafter migrating to FoxPro, Visual FoxPro, and Visual Basic. Other areas of concentration include Oracle and SQL Server, versions 6 through 2008. John is the Philadelphia Microsoft Practice Director for CEI America (www.ceiamerica.com), a Microsoft Gold Partner. From 1995 to 2001, he was a Microsoft Visual FoxPro MVP. Today, his emphasis is on ASP.NET MVC applications, and he is a current Microsoft ASP.NET MVP. In 1999, he wrote the definitive whitepaper on ADO for VFP developers. In 2002, he wrote the Absolute Beginner’s Guide to Databases for Que Publishing. John was a co-author of Visual FoxPro Enterprise Development from Prima Publishing with Rod Paddock, Ron Talmadge, and Eric Ranft, and a co-author of Visual Basic Web Development from Prima Publishing with Rod Paddock and Richard Campbell. In 2004, John graduated from the Rutgers University School of Law with a Juris Doctor degree. He passed the Pennsylvania and New Jersey bar exams and was in private practice for several years.


Date: Thursday, June 16, 2011 

Time: Reception 6:00 PM, Program 6:15 PM

Location: Microsoft, 1290 Avenue of the Americas (the AXA building, bet. 51st/52nd Sts.), 6th floor
Directions: B/D/F/V to 47th-50th Sts./Rockefeller Ctr
1 to 50th St./Bway
N/R/W to 49th St./7th Ave.

posted on Monday, June 06, 2011 3:49:21 PM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] Trackback


 

Telerik Australia event: Focus on Developer Productivity

Telerik, the market-leading provider of end-to-end solutions for application development, automated testing, agile project management, reporting, and content management across all major Microsoft development platforms, is coming to Australia.

We invite you for in-depth sessions with industry experts and Telerik Senior Leadership.   All attendees will receive a copy of Telerik JustCode, valued at $199.

Please note these are 4 separate seminars; you need to register for each one you intend to attend.

 

The Agile Buffet Table: Implementing your own Agile Process  with Microsoft ALM Tools

New to Agile? Having challenges implementing an agile process in your organization? Have you been using Scrum, but need to bend the rules to make it work in your organization? Can’t get the business to “buy-in”? Come and learn about implementing an agile process in your organization. You'll look at the “buffet table” of agile processes and procedures and learn how to properly decide “what to eat.”  We’ll start by defining XP, Scrum, Kanban and some other popular methodologies and then learn how to mix and match each process for various scenarios, including the enterprise, ISVs, consulting, and remote teams. Then take a look at agile tools and how they will aid in implementing your development process. You’ll see how Microsoft Team Foundation Server 2010 provides process templates for Agile that facilitate better planning and metrics. Learn how Microsoft’s application lifecycle management (ALM) tools can support your development process. Lastly, we will talk about how to “sell” agile to your business partners and customers. The speakers have a very interactive style so participation is encouraged and there will be plenty of time for Q&A.

PRESENTERS:

Stephen Forte, Chief Strategy Officer of Telerik

Tuesday, March 15, 2011 9:00 AM - 12:00 PM (GMT+1000)

REGISTER NOW

(by invite only event, use password: Telerik&&ALM2)

Location:

Citigate Central Sydney

169-179 Thomas Street

Haymarket, NSW

Sydney, 2000

Joel Semeniuk, Founder of Imaginet Resources and Microsoft Regional Director



Agile Testing

As more product teams move to Agile methodologies, the need for automated testing becomes essential to generate the velocity needed to ship fully tested features in short iterations. In this session we will look at the differences between traditional testing and agile testing, explore some tools and strategies that can help make your automation more productive as well as how to get the automation effort started for both new and existing agile projects.

PRESENTER:

Christopher Eyhorn, Executive VP of Telerik’s automated testing tools division

Tuesday, March 15, 2011 2:00 PM - 5:00 PM (GMT+1000)

REGISTER NOW

(by invite only event, use password: TestingTelerik)

Location:

Citigate Central Sydney

169-179 Thomas Street

Haymarket, NSW

Sydney, 2000



20 Things to Consider When Selecting a CMS

Choosing a CMS can be a daunting task. There are plenty of Content Management Systems to choose from, ranging in price from free to extremely expensive. In this crowded landscape, it can be difficult to find a CMS that effectively enables an organization to accomplish its goals. In this session, I will identify 20 things to consider when evaluating a CMS that will help you select the ideal one for your project.

PRESENTERS:

Gabe Sumner, Developer Evangelist at Telerik

Martin Kirov, Executive Vice President of the Sitefinity CMS division of Telerik

Tuesday, March 15, 2011 9:00 AM - 12:00 PM (GMT+1000)

REGISTER NOW

(by invite only event, use password: TelerikAustralia)

Location:

Citigate Central Sydney

169-179 Thomas Street

Haymarket, NSW

Sydney, 2000



Streamline Development with ASP.NET MVC Extensions

Tired of dealing with the bloated pages generated by your WebForms application? Wondering what the whole deal is with MVC? Already into MVC but want to get maximum performance and functionality out of your applications? In this presentation we will take a look at how ASP.NET MVC, together with the Telerik MVC Extensions, can have you developing high-performance, feature-rich applications while outputting lightweight and easily readable HTML.

PRESENTER:

Malcolm Sheridan, Microsoft awarded MVP in ASP.NET

Speeding up Development Using 3rd Party Controls

Learn how to cut Silverlight development time significantly using your new Telerik RadControls. As a TechDays attendee, you will receive a complimentary license for Telerik’s RadControls for Silverlight. This TurboTalk will demonstrate how you can speed up application development while adding more functionality to your Silverlight applications with the Telerik tools. See how high-performance data controls like RadGridView and RadChart can take your applications to the next level. See how layout controls like RadDocking and RadTileView can add both richness and increased functionality, helping you maximize screen real estate. And see how RadRichTextBox is unlocking Silverlight’s power to enable editing of HTML, DOCX, and XAML content. Jumpstart your development with the RadControls for Silverlight and get the most out of your new tools by joining this developer-to-developer talk.

PRESENTER:

Jordan Knight, Solution Architect | Microsoft MVP - Silverlight

Tuesday, March 15, 2011 2:00 PM - 5:00 PM (GMT+1000)

REGISTER NOW

(by invite only event, use password: DevelopersRock)

Location:

Citigate Central Sydney

169-179 Thomas Street

Haymarket, NSW

Sydney, 2000


posted on Wednesday, March 09, 2011 3:52:17 AM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback

Thursday, March 17, 2011
Line of Business Apps Made Easy with Microsoft LightSwitch

Subject:
You must register at https://www.clicktoattend.com/invitation.aspx?code=154306 in order to be admitted to the building and attend.
LightSwitch is the next-generation Microsoft tool for rapidly building business applications. In this session you will learn the basics of building an end-to-end LightSwitch application as well as customizing the user interface. LightSwitch ships with only a basic set of user interface controls, but many business applications require a level of sophistication and interaction that LightSwitch does not provide out of the box. You will learn how to integrate custom controls into the LightSwitch shell, how to build add-ins for a richer UI experience, and how to integrate a custom shell experience.

Speaker:
Jason Beres, Infragistics
As the Vice President of Product Management, Community, and Evangelism, Jason spearheads customer-driven, innovative features and functionality throughout all of Infragistics' products. Mr. Beres is a Microsoft .NET MVP, on the INETA Speakers Bureau, and the author of multiple books on various .NET technologies, the latest being Silverlight 4 Professional from Wrox Press.

Date:
Thursday, March 17, 2011

Time:
Reception 6:00 PM, Program 6:15 PM

Location: 
Microsoft, 1290 Avenue of the Americas (the AXA building, bet. 51st/52nd Sts.), 6th floor

Directions:
B/D/F/V to 47th-50th Sts./Rockefeller Ctr
1 to 50th St./Bway
N/R/W to 49th St./7th Ave.

posted on Monday, March 07, 2011 12:18:43 AM (Eastern Standard Time, UTC-05:00)  #    Comments [1] Trackback

Making Agile Development work in your organization

Telerik & e-Zest Solutions Ltd. invite you to two free seminars:

Agile Development

INTRODUCTION

The Agile methodology has been adopted by many organizations around the globe. Unfortunately, many still struggle with the various methodologies (XP, Scrum, Kanban, etc.) and can’t settle on just one. And while some organizations are successful in implementing Agile with their development teams, they tend to forget other vital parts of the process, mainly testing.

Implementing your own Agile Process Seminar at Pune, India

A session on how to choose which Agile methodology (or how to mix and match several pieces) to implement in your organization and how to do it.

Are you new to Agile? Have challenges implementing an Agile process in your organization? Have you been using Scrum, but need to bend the rules to make it work in your organization? Are you interested in using Kanban? What about XP? Can’t get the business “buy-in”? Come and learn about implementing an Agile process that suits all your organisational needs.

This “buffet table” session on Agile processes and procedures, aimed at teaching how to properly mix and match each process, will cover:

  • Defining XP, Scrum, Kanban, and other popular methodologies.
  • How to implement a custom process for the enterprise, ISVs, consulting, and remote teams.
  • How Agile tools aid in implementing your unique process.
  • How to “sell” Agile to your business partners and customers.

Seminar Coverage and Time Slots:

  • Free Registration: 9:00 am-9:55 am
  • Speaker Introduction: 9:55 am-10:00 am
  • Session on “The Agile Buffet Table”: 10:00 am-1:00 pm

Agile Testing Seminar at Pune, India

This session dives into the value of Agile testing, how to use automated Agile testing tools and how your organization will benefit from Agile testing.

As more product teams move to Agile methodologies, the need for automated testing becomes essential to generate the velocity needed to ship fully tested features in short iterations.

The Session will cover:

  • The differences between traditional testing and Agile testing.
  • Tools and strategies that can help make your automation more productive, as well as how to get the automation effort started for both new and existing Agile projects.

Seminar Coverage and Time Slots:

  • Free Registration: 2:30 pm-2:50 pm
  • Speaker Introduction: 2:50 pm-3:00 pm
  • Session on “Agile Testing”: 3:00 pm-5:00 pm
  • Conclusion of Program: 5:00 pm

SPEAKERS

Stephen Forte

Stephen Forte is the Chief Strategy Officer of Telerik, a leading vendor of developer and team productivity tools, automated testing, and UI components, and a certified scrum master. Involved in several startups, he was the co-founder of Triton Works (acquired by UBM in 2010) and the CTO and co-founder of Corzen, Inc. (acquired by Wanted Technologies (TXV: WAN) in 2007). He speaks regularly at industry conferences around the world and has written several books on application and database development, including Programming SQL Server 2008 (MS Press). Prior to Corzen, Stephen served as the CTO of Zagat Survey in New York City and was co-founder of the New York based software consulting firm The Aurora Development Group. He is currently a Microsoft MVP award recipient, an INETA speaker, and the co-moderator and founder of the NYC .NET Developer User Group.

Christopher Eyhorn

Christopher Eyhorn is the Executive VP of Telerik’s automated testing tools division, where his team is building the next generation of automated testing tools. Formerly co-founder and CEO of ArtOfTest, he has written automation frameworks and functional testing tools and has worked in a variety of roles ranging from developer to CEO within the company. Christopher has worked with companies of varying size and industry. He is a licensed pilot who loves to fly every chance he gets and truly appreciates the importance of hardware and software testing every time he takes off.

WHO SHOULD ATTEND?

Agile Buffet Table:
The discussion is advantageous for professionals using the Microsoft .NET platform, as well as Product Managers, Technical Directors, Project Managers, Architects, and Sr. Developers.

Agile Testing:
Professionals interested in learning how to make their testing efforts more efficient and produce more automated tests at the end of each sprint, as well as Project Managers, Software Quality Managers, Test/QA Leads, and Sr. Test Engineers.

Date & Time:

Tuesday, January 18th 2011
Agile Buffet Table, from 9:00 am – 1:00 pm
Agile Testing, from 2:30 pm – 5:00 pm

Venue:

Sumant Moolgaokar Auditorium, Ground floor,
MCCIA Trade Tower, ICC Complex, 403,
Near Senapati Bapat Road, Pune

Pre-registration: Mandatory

Kindly confirm your participation immediately by sending your contact details to seminar@e-zest.net

e-Zest Help line number: 020–25459802/03/04

e-Zest Solutions Ltd. (www.e-zest.net) is a CMMI Level 3 and ISO 9001:2008 certified product engineering and enterprise solutions provider, focused on solutions and services based on Microsoft .NET (3.0/3.5/4.0), Sun Java EE 5.0, and LAMP. e-Zest is also Telerik’s sales and training partner in India, a Microsoft Gold Certified Partner, and a Sun Associate Partner.

Telerik (www.telerik.com) is a leading vendor of ASP.NET AJAX, ASP.NET MVC, Silverlight, WinForms, and WPF controls and components, as well as .NET reporting, ORM, and CMS products, TFS tools, code analysis, and web application testing tools, building on its expertise in interface development and Microsoft technologies.

posted on Wednesday, January 12, 2011 1:34:00 AM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback

Happy New Year! As I enter my 9th year of blogging, I will open the year with more predictions. I started off last week with predictions in the Microsoft space. This week I will look at general industry trends. Today I start with the AppStore.

Apple has one, Android has one, Windows Phone 7 has one, and even BlackBerry has an AppStore. Apple has the most popular platform to date; Citibank predicts that Apple will sell $2 billion through its AppStore in 2011, and Gartner predicts the AppStore market will reach $4 billion in 2010.

AppStores are everywhere. My colleague Joel Semeniuk and many others argue that we will see a proliferation of AppStores for the many different platforms. I disagree.

Fred Wilson argues that mobile economics will start to look like web economics, meaning that as the mobile platforms mature and become mainstream, the behaviors and business models of mobile will mimic those of the web. We don’t rely on an AppStore to market web applications or to host comments and ratings; we rely on social media for that. On the web today, if we want an application, we don’t go to an AppStore, we go to a web site and download it. On that website there is always a “choose your platform” option, as shown on Skype’s home page below.

[Image: Skype’s “choose your platform” download options]

As behaviors on the mobile internet merge with behaviors on the “regular” web, we’ll see more vendors offering their products this way. (Google Voice avoided the Apple AppStore this way at one point when Apple was blocking it.) I can already bypass the Android Marketplace and download many apps directly. The most popular iPhone game, AngryBirds, is also available on Android, but they bypassed the Android Marketplace and went with GetJar downloads. (What is also interesting is that AngryBirds on Android is free, but that is a different conversation about paid vs. ad-supported content.)

It is not just about avoiding the commission you have to pay the AppStores, it is about controlling your brand and extending your brand across platforms. Skype, AngryBirds, and others want to control their interaction with their users and customers, not have Apple or Google control it. The content developers know you may have a PC at work, a Mac at home, and an Android in your pocket, so they want to interact with you directly, not through an intermediary.

I can’t see content developers giving up control. The reason the AppStore succeeded at first was that the mobile platform was new and there was only one important player (Apple), which only allowed you on its platform via the AppStore. Now mobile is everywhere and Apple is no longer the sole dominant player (Android actually has more market share). Of course Apple has tight control over the iPhone and that is not going to change anytime soon; however, as the other platforms emerge and gain market share, the web model will prevail, making the Apple AppStore look like Lycos and Excite in 1999.

Lastly, there is a technical pressure against AppStores as well. HTML5 is being positioned as a way to avoid having to write your app at least 3 times (iOS, Android, Windows Phone 7). While I don’t believe all of the hype behind HTML5, undoubtedly some companies will choose HTML5 over native apps. Those companies will easily avoid the AppStores, even the iPhone one.

2011 will see the beginning of this trend.

posted on Monday, January 03, 2011 7:35:01 AM (Eastern Standard Time, UTC-05:00)  #    Comments [1] Trackback

Last year I made some long-range predictions, saying that 2010 would be the turning point for a few trends. I was not saying that on December 31st, 2010 (today!) you would sit back and say I was right on all of these, but that by December 31st, 2011 or 2012 you will. The trends I spoke about last year were: .NET has hit the end of the road, BI is the next big thing, the cloud will emerge, Google and Apple will go to war, and content providers will strike back.

I still think these trends hold true. It looks like #4 (Google v Apple) and #5 (Content Providers strike back) already started to happen in 2010.

It is now time for some predictions in the Microsoft space and the industry in general. Today I will start with Microsoft, here are three bold predictions in the Microsoft space:

Windows Phone 7 will have more apps than both iPhone and Android in 2 years

Windows Phone 7 (WP7) is playing catch-up and will start to gain some market share in 2011. While it will still be a distant third behind Android and the iPhone in the smartphone category, it will shine in one area: apps. There are tons of apps out there for both iPhone and Android and only a few for WP7 as of this writing. That will change, and one story this year will be the explosion of quality apps for WP7.

We are already seeing great growth in the Windows Phone 7 Marketplace: over 5,000 apps less than two months after launch. It took Android over 5 months to reach 5,000. WP7 is growing at well over double the speed Android had at launch. While this alone does not mean much, if the trend continues as I think it will, Microsoft’s phone will have the most apps within a couple of years.
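A quick back-of-the-envelope check of that growth claim, using the approximate figures quoted above:

```python
# Rough apps-per-month pace to reach the first 5,000 marketplace apps.
wp7_pace = 5000 / 2      # roughly 5,000 apps in under two months
android_pace = 5000 / 5  # Android took over five months to reach 5,000

# WP7's early pace is about 2.5x Android's, i.e. "well over double".
print(wp7_pace / android_pace)  # 2.5
```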

Developers build applications on a platform for two reasons: the platform has reach and it is easy to develop for. WP7 will have broad reach as it gains market share this year, and developer ecosystems are in Microsoft’s DNA, not Apple’s or Google’s. Both the XBox (XNA) and Silverlight development platforms are quite easy and already have a tremendous number of developers. While developers will continue to develop for the iPhone and Android, within two years WP7 (or 8?) will have the most applications.

Windows 8 will ship a beta and the surprise story will be Silverlight, not HTML5

While Windows 7 is the most successful operating system Microsoft has shipped to date, we will see a Windows 8 beta this year, most likely at the PDC in the fall. Speculation is that Windows 8 will be based on HTML5 and not include any support for Silverlight, a hint as to where Microsoft has put its priorities. My bold prediction is that while HTML5 will be “everywhere” in Windows 8, Silverlight will ship as part of the core OS, putting it on equal footing with HTML5.

Office 365 will dominate over GoogleApps

Later this year Office 365 will launch and compete head-on with GoogleApps. Microsoft has the model right: online applications that integrate with the locally installed ones, Exchange integration, and managed support. At $6 per user, your business can have a fully functional Exchange, SharePoint, and Office solution without any IT costs. GoogleApps are good; however, they can’t compete with what Office 365 is offering.

posted on Friday, December 31, 2010 8:30:40 AM (Eastern Standard Time, UTC-05:00)  #    Comments [1] Trackback

At the end of each year we have time to look back and reflect on the year before and think about the year ahead. 2010 was once again a challenging year for Telerik as well as our customers. The economy continues to be stubborn in many parts of the world. Microsoft released new versions of .NET and Visual Studio that we had to be ready for. Silverlight 4 was released, then pronounced dead, then alive again, then Silverlight 5 previews were shown. Even Mother Nature in April gave us trouble; the eruption of Mt. Eyjafjallajökull in Iceland stranded over a dozen of our employees, including our CEO, on three continents. (Don’t feel too bad, some of them were stranded in Las Vegas!)

Looking back, 2010 was also a year of growth for Telerik. Our team grew very rapidly with lots of great new hires; we are already outgrowing our new office building in Sofia. We added two divisions (Testing and Team Productivity) and two offices (Austin, Texas, and Winnipeg, Canada). We added many new products, including the Windows Phone 7 Toolkit, JustCode, JustMock, and TeamPulse, and had major upgrades of some existing products, such as our Web Testing Framework, OpenAccess’ LINQ implementation, and the SiteFinity 4.0 release candidate. In partnership with Microsoft, at the Silverlight Firestarter a few weeks ago, Scott Guthrie announced our release of a great reference application for Silverlight: f!acedeck. (We are proud of the work we did on f!acedeck, so give it a try!)

We continue to push the envelope and drive innovation in our existing products. Our products won several awards in 2010, including the industry standard JOLT awards, Best of TechEd 2010 and “Best” by the Visual Studio Magazine Awards. Telerik was also selected as a Red Herring Global 100 company, the “Best Company to work for in Bulgaria”, and was a finalist in the prestigious European Business Awards.

Telerik also reached more customers at events in 2010. Our team traveled to all four corners of the globe to visit customers; for example, we had speakers and booths at five TechEds, in the USA, Europe, Australia, India, and Brazil. We sponsored major events in North America, South America, Europe, Africa, Asia, and Australia, visiting customers in some familiar places (such as Las Vegas) and some new ones (Brazil and Hong Kong). Besides the big events, we also sponsored and spoke at several CodeCamps, user groups, and other events around the world, including DevReach in our own backyard. Our evangelists and marketing teams would even travel to Antarctica to visit customers!

Looking forward to 2011, we hope that the world economy and the general business climate improve. We have some great releases of our products planned, such as SiteFinity 4.0 in January and our Q1 2011 release in March. We have a brand new product that is currently top secret, but you’ll get a preview of it soon. We also plan to open more offices worldwide and to attend and sponsor even more events. We are hitting the ground running in 2011: we are putting on an event in Pune, India, in January.

We hope to see you in person in 2011 since we do this all for you. I want to close by thanking all of you, our customers, for your dedication and support. Happy New Year!

Stephen Forte, CSO

Sofia, Bulgaria

posted on Thursday, December 16, 2010 7:47:31 AM (Eastern Standard Time, UTC-05:00)  #    Comments [4] Trackback

Telerik and e-Zest will be sponsoring two Agile seminars on January 18th at the MCCIA in Pune, India. Hope to see you there!

Seminars on
Agile Development and Testing
Tuesday January 18th 2011 @ MCCIA, Pune

e-Zest logo CMMI 


INTRODUCTION

The Agile methodology has been adopted at many organizations. Unfortunately, many still struggle with the various methodologies (XP, Scrum, Kanban, etc.) and can’t settle on just one. And while some organizations have had success implementing Agile with the development team, they tend to forget other vital parts of the process, mainly testing. We will present two separate seminars: one on how to choose which Agile methodology to implement in your organization (or how to mix and match several pieces) and how to do it, and a second that dives into the value of Agile testing, how to use automated Agile testing tools, and how your organization will benefit from Agile testing.

Morning Seminar: The Agile Buffet Table: Implementing your own Agile process

New to Agile? Having challenges implementing an agile process in your organization? Have you been using Scrum, but need to bend the rules to make it work in your organization? Can’t get the business to “buy-in”? Come and learn about implementing an agile process in your organization. You'll look at the “buffet table” of agile processes and procedures and learn how to properly decide “what to eat.” We’ll start by defining XP, Scrum, Kanban and some other popular methodologies and then learn how to mix and match each process for various scenarios, including the enterprise, ISVs, consulting, and remote teams. Then take a look at agile tools and how they will aid in implementing your development process. You’ll see how Microsoft Team Foundation Server 2010 provides process templates for Agile that facilitate better planning and metrics. Lastly, we will talk about how to “sell” agile to your business partners and customers. The speakers have a very interactive style so participation is encouraged and there will be plenty of time for Q&A.

Afternoon Seminar: Agile Testing

As more product teams move to Agile methodologies, the need for automated testing becomes essential to generate the velocity needed to ship fully tested features in short iterations. In this session we will look at the differences between traditional testing and agile testing, explore some tools and strategies that can help make your automation more productive as well as how to get the automation effort started for both new and existing agile projects.

Seminar Coverage and Time Slots:

  • Developer Event Registration: 9:00 am-9:55 am
  • Speaker Introduction: 9:55 am-10:00 am
  • Agile Development Event: 10:00 am-1:00 pm
  • Break: 1:00 pm-2:30 pm
  • Agile Testing Event Registration: 2:30 pm-3:00 pm
  • Speaker Introduction: 3:00 pm-3:10 pm
  • Agile Testing Event: 3:15 pm-5:00 pm
  • Conclusion of Program: 5:00 pm

WHO SHOULD ATTEND?


Agile Buffet Table: Developers and development managers, especially those using the Microsoft .NET platform.

Agile Testing: any Agile team member (dev or tester) interested in learning how to make their testing efforts more efficient and produce more automated tests at the end of each sprint.

FACULTY
Stephen Forte

Stephen Forte is the Chief Strategy Officer of Telerik, a leading vendor of developer and team productivity tools, automated testing, and UI components. Stephen is also a certified scrum master. Involved in several startups, he was previously the co-founder of Triton Works, which was acquired by UBM in 2010 (London: UBM.L), and the Chief Technology Officer (CTO) and co-founder of Corzen, Inc., which was acquired by Wanted Technologies (TXV: WAN) in 2007. Stephen also speaks regularly at industry conferences around the world. He has written several books on application and database development, including Programming SQL Server 2008 (MS Press). Prior to Corzen, Stephen served as the CTO of Zagat Survey in New York City and was co-founder of the New York based software consulting firm The Aurora Development Group. He is currently a Microsoft MVP award recipient, an INETA speaker, and the co-moderator and founder of the NYC .NET Developer User Group. Stephen has an MBA from the City University of New York.

Christopher Eyhorn

Christopher Eyhorn is the Executive VP of Telerik’s automated testing tools division, where he is building the next generation of automated testing tools. Formerly co-founder and CEO of ArtOfTest, he has written automation frameworks and functional testing tools and has worked in a variety of roles ranging from developer to CEO within the company. Christopher has worked with companies of varying size and industry. He is a licensed pilot who loves to fly every chance he gets and truly appreciates the importance of hardware and software testing every time he takes off.

posted on Wednesday, December 15, 2010 8:51:21 AM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback

If you are creating OData or WCF services in your application and have been using the Telerik Data Services Wizard, things just got a whole lot easier. As I have shown before, you can go from File|New to a new CRUD application in 30 seconds using the wizard. With the Q3 release last month, Telerik gives you more control over the wizard and its output. Some of the new features are: the ability to isolate your service in its own project, the ability to select which CRUD methods get created for each entity, and Silverlight data validation. Let’s take a look.

When you run the wizard, on its first page you now have the option, as shown here, to separate the service into its own class library. You can check the checkbox, type in a project name, and the Data Services Wizard creates the implementation files for the service in this new class library for you.

image

In previous versions, the wizard would create all of the CRUD operations for you automatically. Customers told us they would like more control over this process, for example, making some entities read-only. The Q3 version of the wizard now allows you to select which CRUD methods to generate for each entity.

image

Lastly, if you choose the automatic Silverlight application generation, the wizard will read the database validation rules and replicate them as client side validation rules, saving you a lot of configuration and coding!

image

Enjoy the new wizard’s improvements!

posted on Wednesday, December 08, 2010 8:35:15 AM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback

At TechEd Europe 2010 in Berlin, I will be doing three breakout sessions and one panel discussion: Silverlight v HTML5.

My breakout sessions are:

  • Scrum, but
  • Agile Estimation

This is the first time I am formally doing the Scrum, but session; however, Joel and I covered the material at our TechEd North America pre-con. We’ll (I invited Joel, even though he is not an “official” speaker, to do the session with me) walk the audience through a few slides about Scrum, Kanban, XP, “Agile is about values, not rules,” and the “buffet table” approach. After about 10 minutes of us blabbing, we will open it up to questions. We are prepared to speak about four scenarios: Scrum/Agile in the enterprise, consulting, remote (and outsourced) teams, and ISVs. Should be fun and interactive! Bring lots of questions!!!

The Agile Estimation talk is a repeat of my talk last year and we are doing it twice since there is a lot of demand.

The session times are below. When I am not doing sessions, I will be at the Telerik booth. You can find us at E83+E84, very close to where we were last year. I’ll be glad to talk to you about the sessions, agile in general, or the Telerik tools. We also have a special offer: as a gift to all TechEd Europe attendees visiting our booth, if you download the CTP of our WP7 controls now, you will receive a free license of those controls once they are released.

See you in Berlin!

Code

Session

Day

Time

DPR301

Scrum, but

Breakout Session

Stephen Forte

Having challenges implementing Scrum in your organization? Have you been using Scrum but need to bend the rules to make it work in your organization? Do you practice a little Scrum with a mix of Kanban? Then this session is for you! Come and learn about implementing Scrum, but with a few changes. We'll look at customizing Scrum in your environment and look specifically at how to implement Scrum for the enterprise, ISVs, consulting and remote teams.

Tuesday, November 9

2:30 PM - 3:30 PM

DPR201

Agile Estimation

Breakout Session

Stephen Forte

We’re agile, so we don’t have to estimate and have no deadlines, right? Wrong! This session will review the problem with estimations in projects today and then give an overview of the concept of agile estimation and the notion of re-estimation. We’ll learn about user stories, story points, team velocity, and how to apply them all to estimation and iterative re-estimation. We will take a look at the cone of uncertainty and how to use it to your advantage. We’ll then take a look at the tools we will use for Agile Estimation, including planning poker, Visual Studio TFS and much more.

Thursday, November 11

12:00 PM - 1:00 PM

DPR201 (R)

Agile Estimation (repeat)

Breakout Session

Stephen Forte

We’re agile, so we don’t have to estimate and have no deadlines, right? Wrong! This session will review the problem with estimations in projects today and then give an overview of the concept of agile estimation and the notion of re-estimation. We’ll learn about user stories, story points, team velocity, and how to apply them all to estimation and iterative re-estimation. We will take a look at the cone of uncertainty and how to use it to your advantage. We’ll then take a look at the tools we will use for Agile Estimation, including planning poker, Visual Studio TFS and much more.

Thursday, November 11

6:00 PM - 7:00 PM

posted on Friday, November 05, 2010 5:16:58 AM (Eastern Daylight Time, UTC-04:00)  #    Comments [2] Trackback

Today I spoke at DevReach in Sofia, Bulgaria, where I gave my RIA Services and OData talks.

In the RIA services talk, just like the other times I did it, I built a simple application from scratch. Here is what I did:

  • Added a Silverlight Business Application
  • Changed the Title to DevReaCH (I accidentally hit caps lock in the session)
  • Mapped an EF model to Northwind
  • Created a Domain Service
  • Wrote a business rule in said service
  • Made fun of Canada
  • Showed the client site generated code
  • Added a DataGrid and wrote code to fill it
  • Asked the audience if they thought the code would work
  • Fixed the bug I introduced in my code
  • Dragged and dropped a Data Source to datagrid with automatic binding
  • Added a data pager with no code
  • Added a filter with no code
  • Added a “Save” button with no code
  • Added Steve Jobs as a customer (and told the audience how much I hate him)
  • Went into the metadata class and added validation
  • Viewed the validation
  • Exposed the RIA Service as an OData feed
  • Told everyone about OData in <5 minutes (and said they were excused from my OData talk later in the day)

The OData talk was much the same as my TechEd talk, so you can download those slides and demos here.

I also recorded an episode of .NET Rocks with Richard and Carl.

Tomorrow is a Scrum talk with Joel.

Good times.

posted on Monday, October 18, 2010 9:55:59 AM (Eastern Daylight Time, UTC-04:00)  #    Comments [1] Trackback

Thursday, September 16, 2010
Paul Sheriff: Unit Testing Basics in Visual Studio

Subject:
You must register at https://www.clicktoattend.com/invitation.aspx?code=149443 in order to be admitted to the building and attend.
Everyone knows that they should be writing better test cases for their applications, but how many of us really do it? In Visual Studio, unit testing is an integrated part of the development environment, so there is no longer any reason to avoid test-driven development and automated unit testing. In this seminar you will learn how to architect your applications to make testing quicker and easier. You will also learn to use the tools in Visual Studio to help you do the testing.
You will Learn
1. How to architect for test driven development
2. Creating test cases
3. Using the Visual Studio Unit Testing tools.
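
To give a flavor of the third point, here is a minimal MSTest-style unit test of the kind the Visual Studio tooling runs; the Calculator class is a hypothetical example for illustration, not something from the talk:

```csharp
using Microsoft.VisualStudio.TestTools.UnitTesting;

// Hypothetical class under test -- not from the seminar materials.
public static class Calculator
{
    public static int Add(int a, int b) { return a + b; }
}

[TestClass]
public class CalculatorTests
{
    [TestMethod]
    public void Add_ReturnsSumOfBothOperands()
    {
        // Arrange / Act / Assert
        int result = Calculator.Add(2, 3);
        Assert.AreEqual(5, result);
    }
}
```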

Speaker:
Paul D. Sheriff is the President of PDSA, Inc. (www.pdsa.com), a Microsoft Partner in Southern California. Paul acts as the Microsoft Regional Director for Southern California, assisting the local Microsoft offices with several of their events each year and serving as an evangelist for them. Paul has authored literally hundreds of books, webcasts, videos, and articles on .NET, WPF, Silverlight, and SQL Server. Paul can be reached via email at PSheriff@pdsa.com. Check out Paul's new code generator 'Haystack' at www.CodeHaystack.com.

Date:
Thursday, September 16, 2010

Time:
Reception 6:00 PM , Program 6:15 PM

Location: 
Microsoft , 1290 Avenue of the Americas (the AXA building - bet. 51st/52nd Sts.) , 6th floor

Directions:
B/D/F/V to 47th-50th Sts./Rockefeller Ctr
1 to 50th St./Bway
N/R/W to 49th St./7th Ave.

posted on Monday, September 06, 2010 10:03:29 PM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] Trackback

See also:

In Part I we looked at when you should build your data warehouse and concluded that you should build it sooner rather than later to take advantage of reporting and view optimization. Today we will look at your options to build your data warehouse schema.

When architecting a data warehouse, you have two basic options: build a flat “reporting” table for each operation you are performing, or build with BI/cubes in mind and implement a “star” or “snowflake” schema. Let’s take a quick look at the first option and then examine the star and snowflake schemas.

Whenever the business requests a complex report, developers usually slow down the system with a complex SQL statement or operation. For example, pretend in our order entry system (OLTP) the business wants a report that says this: show me the top ten customers in each market including their overall rank. You would usually have to perform a query like this:

  1. Complex joins for unique customer
  2. Rollup the sales
  3. Ranking functions to determine overall rank
  4. Partition functions to segment the rank by country
  5. Standard aggregates to get the sales
  6. Dump all of this to a work table in order to pull out the top 10 (if you don’t do this, you will lose the overall rank)

A typical SQL statement to do steps 1-5 would look like this:

With CTETerritory
As
(
   Select cr.Name as CountryName, CustomerID, 
                Sum(TotalDue) As TotalAmt
   From Sales.SalesOrderHeader soh 
   inner join Sales.SalesTerritory ter
   on soh.TerritoryID=ter.TerritoryID
   inner join Person.CountryRegion cr 
   on cr.CountryRegionCode=ter.CountryRegionCode
   Group By cr.Name, CustomerID
)
Select *, Rank() Over (Order by TotalAmt DESC) as OverallRank,
Rank() Over
     (Partition By CountryName Order By TotalAmt DESC,
            CustomerID DESC) As NationalRank
From CTETerritory

Argh! No wonder developers hate SQL and want to use ORMs! (I challenge the best ORM to make this query!)

Instead you can create a table, let’s call it SalesRankByRegion, with the fields CountryName, CustomerID, TotalSales, OverallRank, and NationalRank, and use the above SQL as part of a synchronization/load script to fill your reporting table on a regular basis. Then your SQL statement for the above query looks like this:

SELECT * FROM SalesRankByRegion
WHERE NationalRank BETWEEN 1 AND 10
ORDER BY CountryName, NationalRank

The results look like:

clip_image001

That is more like it! A simple select statement is easier for the developer to write, the ORM to map, and the system to execute.

The SalesRankByRegion table is a vast improvement over having to query all of the OLTP tables (by my count, three tables plus the temp table). But while this approach has its appeal, your reporting tables will very quickly start to proliferate.

Your best option is to follow one of the two industry standards for data warehouse tables: a “star” or a “snowflake” schema. Using a schema like this gives you a few advantages. These tables are more generic than SalesRankByRegion, which was built for one query/report, so you can run many different reports off each table. Another advantage is that you can build cubes very easily off of a star or snowflake schema, as opposed to a pile of one-off reporting tables.

The design pattern for building true data warehouse tables is to build a “fact” table: a table that contains detail-level (or aggregated) “facts” about something in the real world, like an order or a customer. Inside the fact table you will also have “measures,” numeric values that describe a fact. To support your fact table you will have “dimension” tables. A dimension is a structure that categorizes your data, usually in the form of a hierarchy. For example, a “time” dimension could have a hierarchy of OrderYear, OrderQuarter, OrderMonth, OrderDate, OrderTime.

There are tons of tutorials on the internet that show you how to build a star or snowflake schema and the difference between them, so I will not repeat them here. (You may want to start here.) I’ll give you the high level on a simple star schema here.

Let’s say we have an order entry system, such as Northwind (the Microsoft SQL Server sample database). You can have a fact table that revolves around an order. You can then have three (or more) dimension tables that focus on time, product, and salesperson. The time dimension would roll up the order date by year, quarter, month, and date. The product dimension would roll up the product by product and category. (In most systems you would have a much deeper hierarchy for products.) The salesperson dimension would be a roll-up of the employee, the employee’s manager, and the department they work in. The key of each of these tables would then be a foreign key in the fact table, along with the measures (the numerical data describing the fact).

There is an example similar to this in Programming SQL Server 2008, a book where I am a co-author. Here is a modified version of that demo:

Dimension tables:

CREATE TABLE [dwh].[DimTime] (
[TimeKey] [int] IDENTITY (1, 1) NOT NULL Primary Key,
[OrderDate] [datetime] NULL ,
[Year] [int] NULL ,
[Quarter] [int] NULL ,
[Month] [int] NULL 
) 

CREATE TABLE [dwh].[DimProduct] (
[ProductID] [int] not null Primary Key,
[ProductName] nvarchar(40) not null,
[UnitPrice] [money] not null,
[CategoryID] [int] not null,
[CategoryName] nvarchar(15) not null
) 

CREATE TABLE [dwh].[DimEmployee] (
EmployeeID int not null Primary Key,
EmployeeName nvarchar(30) not null,
EmployeeTitle nvarchar(30),
ManagerName nvarchar(30)
)

Fact table:
CREATE TABLE [dwh].FactOrder (
[PostalCode] [nvarchar] (10) COLLATE SQL_Latin1_General_CP1_CI_AS NULL ,
[ProductID] [int] NOT NULL ,
[EmployeeId] [int] NOT NULL ,
[ShipperId] [int] NOT NULL ,
[Total Sales] [money] NULL ,
[Discount] [float] NULL ,
[Unit Sales] [int] NULL ,
[TimeKey] [int] NOT NULL 
)

We have the basis of a star schema. Now we have to fill those tables and keep them up to date. That is a topic for Part III.
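
To give a flavor of that load step, here is a minimal sketch that does a full reload of the time dimension from Northwind’s Orders table. It assumes the dwh.DimTime table above; a real load script would handle incremental updates and the other dimension and fact tables as well:

```sql
-- Sketch only: full (truncate-and-reload) fill of DimTime from
-- Northwind's Orders table. Assumes the dwh.DimTime table defined above.
TRUNCATE TABLE [dwh].[DimTime];

INSERT INTO [dwh].[DimTime] ([OrderDate], [Year], [Quarter], [Month])
SELECT DISTINCT
    o.OrderDate,
    YEAR(o.OrderDate),
    DATEPART(quarter, o.OrderDate),
    MONTH(o.OrderDate)
FROM dbo.Orders o
WHERE o.OrderDate IS NOT NULL;
```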

posted on Tuesday, August 31, 2010 7:30:42 AM (Eastern Daylight Time, UTC-04:00)  #    Comments [2] Trackback

I was lucky enough to get my hands on an advance copy of Kristin Arnold’s book Boring to Bravo, and I highly recommend it. This is a book about being a better presenter. It stands out because it is the first book I have seen that acknowledges the generational change in audiences and the consequences of that change (like embracing folks twittering during your talk rather than having them switch off their cell phones).

I have been a public speaker for 15 years, a professional one for over 13, and found this book very useful. I learned several things while reading it, including many things I am doing wrong! Based on the advice in the book, I am going to use some of the techniques at my two talks at VSLive in Redmond next week.

The book is a fun read with lots of checklists, sidebars, illustrations, and to-do lists. Kristin even quizzes you at the end of each chapter, often using the techniques she demonstrated in the chapter, a brilliant way to reinforce the points! She stresses energy and engagement with the audience and also makes you think of the small things (the side of the stage you walk in on, passive v active voice, using inclusive language, etc.) and how they affect the mindset of the audience. If you want to be a more engaging, dynamic speaker, read this book!

image

posted on Thursday, July 29, 2010 5:46:15 AM (Eastern Daylight Time, UTC-04:00)  #    Comments [1] Trackback

Yesterday Telerik released the Q2 version of OpenAccess ORM as well as the rest of the product line. Yesterday (Part I) I showed you the menus, Data Service Wizard, and new XAML templates. Today I will show you round tripping. Next week I will talk about RIA Services and model first.

OpenAccess Q2’s new LINQ implementation and Visual Designer introduce database round tripping. Now you can make changes in your model and persist them to your database tables. (You always had the ability to make a database schema change and refresh your model.) Let’s see this feature in action. The Model Schema Explorer gives you a view of your mapped entities. If you right-click on a table, you can choose Edit Table from the context menu to begin editing your entity.

image

This brings up the Edit Table dialog, where you can insert, edit, and delete columns and set metadata such as data types and nullability.

image

I’ll go ahead and add a CurrentCustomer column as a bit to indicate whether the customer is current or not. That is all there is to it, so I will right-click on my model and select Update Database From Model, as shown here.

image

This brings up the Schema Update Wizard, which allows you to execute the script right away or generate it and save it for later. It also gives you the choice between an update (ALTER TABLE) and a create (CREATE TABLE). I’ll choose to make an update and execute it now, and click Next.

image

After I tell the wizard which table to update the database with (make sure you have mapped your new column in the model and recompiled the project before running the wizard), I am presented with the script and given the ability to save it, execute it, or copy it to the clipboard.

image

Being a database geek, I am only going to copy to the clipboard the part of the code that I need, the ALTER TABLE command with the ADD column and run that against my SQL Server. I could let OpenAccess run the code for me, but as I said, I am a database geek and like complete control. The tool gives you whatever level of control you desire. Once you run the TSQL either via the wizard or by hand, the process is complete.

Enjoy!

posted on Friday, July 16, 2010 4:00:15 AM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] Trackback

The battle for mobile supremacy has really heated up. Apple and Google had round 1 back in January with the release of the Google Nexus One. With the release of the iPhone 4 and the Droid X, we are well into round 2. I am not going to debate which device is better or worse; that is for the market to shake out. Rather, I want to comment on how the popularity of each device is strengthening its underlying platform. The iPhone 4 and the iPad run on iOS 4. Google’s mainline devices run Android 2.1 or will be upgraded to Android 2.2 (“Froyo”). It has been reported that Google will release an iPad-style “Google Pad” based on Android 2.x as well. Developers are lining up to write applications for these two platforms, each expanding from the phone to a slate/tablet device. It is possible you may see netbook-style devices running iOS and Android soon. That said, looking ahead five years from now, which one will “win” the most mindshare?

Apple’s iOS is quite popular since the iPhone and iPad are selling so well, but developers are turned off by the AppStore’s approval process and Objective-C in general. Apple also maintains complete control over iOS; you can’t license it and put it on your own consumer electronics device. Android is more open and easier to program for since it uses the more mainstream Java language. You can also use Android on other devices (I know a company here in Hong Kong building a consumer electronics device based on Android). And Google’s marketplace is not restricted (hence you can download porn apps if you like).

In the long term my money is behind Google, for two reasons: it is easier to code for and it is more open. Eventually you will see applications appearing first on Android and then on the iPhone, with some never making it over for AppStore or Objective-C reasons. (This already happened with several World Cup focused applications.) Applications are what make a platform; a “cool” platform with fewer apps will still lose to a less “cool” platform with more apps. Think Mac v PC 15+ years ago.

Speaking of PCs, where is Microsoft in all of this? The Zune based Windows Phone 7 is not slated to come out any time soon. By the time WP7 ships we will be talking about iPhone 5.0 rumors, Android 3.0 rumors, and the next generation iPad. Microsoft has a lot of catching up to do.

posted on Friday, June 25, 2010 5:40:43 AM (Eastern Daylight Time, UTC-04:00)  #    Comments [2] Trackback

Tuesday, June 29, 2010
Parallel Programming with .NET 4 and Visual Studio 2010

Subject:
You must register at http://parallelcomputingtalk.eventbrite.com/ in order to be admitted to the building and attend.

In the past, introducing concurrency and parallelism into libraries and applications was difficult, time consuming, and error-prone. However, as the hardware industry shifts towards multi-core and manycore processors, the key to high-performance applications is parallelism. The .NET Framework 4 and Visual Studio 2010 offer solutions to help make coding, debugging, and profiling concurrent applications significantly easier. In this interactive deep-dive, we’ll examine Parallel LINQ-to-Objects (PLINQ), the Task Parallel Library (TPL), new coordination and synchronization types, and Visual Studio tooling support in order to provide a look at the next generation of parallel programming with .NET.
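
As a small taste of the PLINQ and TPL features the talk covers, here is a hedged sketch using the .NET 4 APIs (the specific query and task bodies are illustrative, not from the talk):

```csharp
using System;
using System.Linq;
using System.Threading.Tasks;

class ParallelDemo
{
    static void Main()
    {
        // PLINQ: parallelize a LINQ-to-Objects query with AsParallel()
        long evenSquareSum = Enumerable.Range(1, 1000)
            .AsParallel()
            .Where(n => n % 2 == 0)
            .Select(n => (long)n * n)
            .Sum();

        // TPL: run two independent tasks and wait for both to finish
        var t1 = Task.Factory.StartNew(() => 21);
        var t2 = Task.Factory.StartNew(() => 21);
        Task.WaitAll(t1, t2);

        Console.WriteLine(evenSquareSum);
        Console.WriteLine(t1.Result + t2.Result);
    }
}
```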

Speaker:
Stephen Toub, Microsoft
Stephen Toub is a Principal Program Manager on the Parallel Computing Platform team at Microsoft. He’s excited to be back in Manhattan, where he lived and worked for several years.

Date:
Tuesday, June 29, 2010

Time:
Reception 6:00 PM , Program 6:15 PM

Location: 
Microsoft , 1290 Avenue of the Americas (the AXA building - bet. 51st/52nd Sts.) , 6th floor

Directions:
B/D/F/V to 47th-50th Sts./Rockefeller Ctr
1 to 50th St./Bway
N/R/W to 49th St./7th Ave.

posted on Tuesday, June 22, 2010 4:48:59 AM (Eastern Daylight Time, UTC-04:00)  #    Comments [1] Trackback

Read the other posts in this series:

In the previous blog posts listed above, I showed how Telerik’s new LINQ implementation works with WCF RIA Services. I showed how to build your own Domain Service as well as build custom query methods. In this post I will show how to build a metadata class. (Note: future versions of the OpenAccess LINQ Implementation will produce the metadata class for you automatically.)

The WCF RIA Services metadata class is a separate class from the DomainService that contains information about the entities. In this class you can write custom validation logic, set attributes of the properties of the entities or indicate whether the property is to be generated on the client or not.

To create this class, add a new class file in Visual Studio and name it YourDomainServiceClassName.metadata.cs. For example, our DomainService is DomainService1, so the metadata class is DomainService1.metadata.cs.

Erase everything in the class and then replace it with the following, using the proper namespace in your project:

 

   1:  namespace SilverlightApplication6.Web
   2:  {
   3:      using System.ComponentModel.DataAnnotations;
   4:   
   5:      // The MetadataTypeAttribute identifies CustomersMetadata as the class
   6:      // that carries additional metadata for the Customers class.
   7:      [MetadataTypeAttribute(typeof(Customers.CustomersMetadata))]
   8:      public partial class Customers
   9:      {
  10:          internal sealed class CustomersMetadata
  11:          {
  12:              // Metadata classes are not meant to be instantiated.
  13:              private CustomersMetadata()
  14:              {
  15:              }
  16:              public string Address { get; set; }
  17:              public string City { get; set; }
  18:              public string CompanyName { get; set; }
  19:              public string ContactName { get; set; }
  20:              public string ContactTitle { get; set; }
  21:              public string Country { get; set; }
  22:              public string CustomerID { get; set; }
  23:              public string Fax { get; set; }
  24:              public string Phone { get; set; }
  25:              public string PostalCode { get; set; }
  26:              public string Region { get; set; }
  27:          }
  28:      }
  29:  }

As you can see, this class has each of the properties of your entity (lines 16-26); now you can set them as required, specify a length, or validate with a RegEx pattern. You can also specify that a property should not be sent down to the client. Of course you can specify much more sophisticated rules; you can even write your own methods.

Let’s do a quick example on the CompanyName property, we will set it to required, set an error message to be displayed if the field is not entered as well as set a length of 32. This is done with two attributes:

   1:  [Required(ErrorMessage = "CompanyName is Required!!")]
   2:  [StringLength(32)]
   3:  public string CompanyName { get; set; }
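
The RegEx validation mentioned earlier uses the same attribute pattern; a quick sketch (the five-digit PostalCode rule here is an illustrative assumption, not business logic from the sample database):

```csharp
// Illustrative only: restrict PostalCode to exactly five digits
[RegularExpression(@"^\d{5}$",
    ErrorMessage = "PostalCode must be exactly five digits")]
public string PostalCode { get; set; }
```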

Now when you perform databinding, RIA Services will enforce these rules for you on the client. For example, if we try to edit our data in the application built in Part II, RIA Services automatically adds validation for us and passes on the error message we specified in the attribute. (Note: you have to add an UpdateCustomer method to your DomainService1 class to enable editing.)
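
A minimal sketch of that UpdateCustomer method, assuming a LINQ to SQL-style context (the exact attach call depends on your data access layer, so treat this as a starting point rather than the definitive implementation):

```csharp
// Sketch: enables client-side editing. The attach pattern shown is the
// LINQ to SQL style and may differ for other data access layers.
public void UpdateCustomer(Customer currentCustomer)
{
    this.DataContext.Customers.Attach(
        currentCustomer, this.ChangeSet.GetOriginal(currentCustomer));
}
```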

clip_image002

Enjoy!

posted on Monday, June 21, 2010 6:55:05 AM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] Trackback

In my last blog post, I showed how Telerik’s new LINQ implementation works with WCF RIA Services. In that post I built a Domain Model from the Northwind database as well as a RIA Services Domain Service. I then showed the drag and drop features of RIA Services and created a simple Silverlight application with no code. Today we are going to take that example one step further by creating some custom server side Query Methods.

A query method is just a facility to query a data source. In RIA Services, you define a query method explicitly so it can be used on the client, and this is pretty straightforward. Let’s create a query method to query the Customer table by its primary key (CustomerID) in the database. To do this, open the project we used in the previous blog post and add this code to the DomainService class in the server project.

   1:      //This query method will return only 1 customer
   2:      [Query(IsComposable = false)]
   3:      public Customer GetCustomersByID(string customerID)
   4:      {
   5:          //must also include the Germany restriction
   6:          //to keep in sync with the GetCustomers business logic
   7:          return this.DataContext.Customers.SingleOrDefault
   8:              (c => c.CustomerID == customerID 
   9:                  && c.Country=="Germany");
  10:      }

 

This method will return one customer, and you need to specify that via the attribute IsComposable = false (line 2). Everything else is pretty basic: you have a method signature that accepts a parameter (line 3) and a LINQ statement that filters the data by CustomerID as well as by country (lines 8-9). We are filtering by country as well because in our original business logic (in Part I) we had a GetCustomers() method that filtered all of the records by the country Germany. This new GetCustomersByID method knows nothing of the GetCustomers() method, so we have to replicate that business logic here. (We have hard-coded the value of Germany; in a production application, you would most likely obtain this value from a database or a cookie after authentication.)

Let’s create a second query method, one that will filter the Customer data source by the ContactName field and return a collection, not a single item. We define an IQueryable collection of Customer as the return value in the method signature (line 3) and accept a parameter. This parameter is used in our LINQ statement to filter the data source (lines 9-10). In addition, just like the previous example, we must filter by the country Germany and also replicate the OrderBy of our GetCustomers() method (line 11).

   1:  //This query method will return a collection of customers
   2:  //filtered by the letters passed in on the contact name
   3:  public IQueryable<Customer> GetCustomersByLetter(string letter)
   4:  {
   5:      //must also include the Germany restriction
   6:      //to keep in sync with the GetCustomers business logic
   7:      //also since we are returning a collection, must
   8:      //respect the OrderBy as well from the business logic
   9:      return this.DataContext.Customers.Where
  10:          (c => c.ContactName.StartsWith(letter) == true
  11:              && c.Country == "Germany").OrderBy(c => c.CustomerID);
  12:  }

 

Now that we have defined two query methods, let’s wire them up to our XAML form in the Silverlight application.

In our Silverlight application, delete the grid that we had dragged onto the form in Part I. Replace it with two labels, two text boxes, two buttons and a grid (set the grid’s AutoGenerateColumns property to True.) Your XAML page should look something like this:

image

Now we have to write some code.

In the last blog post we were able to use the drag and drop features of RIA Services and not write any code. Today I will show you how to perform similar and more advanced functions with just a little bit of code. First we need two using statements in order to get working:

using SilverlightApplication6.Web;
using System.ServiceModel.DomainServices.Client;

Next we need to create a global variable for the RIA Services DomainService’s context.

   1:  //domain context for all RIA operations
   2:  private DomainService1 domainContext = new DomainService1();

 

Next we will load the grid with all of the data the first time the XAML form loads. We load the data by calling the GetCustomers() method we created in the previous blog post (we use the domainContext global variable in line 6.).

   1:  void MainPage_Loaded(object sender, RoutedEventArgs e)
   2:  {
   3:      //since we are going across the wire, must explicitly tell
   4:      //RIA Services that we are going to load data 
   5:      LoadOperation<Customer> loadOperation = 
   6:          domainContext.Load<Customer>(domainContext.GetCustomersQuery());
   7:      //the actual binding of the results, RIA takes care of the async
   8:      this.dataGrid1.ItemsSource = loadOperation.Entities;
   9:  }

 

This code does the same thing the drag and drop did in the previous blog post: call GetCustomers() (lines 5-6) and bind the results (line 8). Notice that in the codegen on the client, RIA Services appends the word “Query” to all query methods. In the previous blog post this binding was done automatically, but today we did it via code. If we run this, it will give us the following view:

image

Now let’s wire up the buttons so we can perform the filters. First we will wire up the button that searches by CustomerID. That button’s click event will call the GetCustomersByID query method (lines 11-13) and bind the results (line 15). We have to pass in the data the user entered in the text box; make sure in production to validate this data!

   1:  private void button1_Click(object sender, RoutedEventArgs e)
   2:  {
   3:      //disable the buttons during the async load
   4:      //to prevent the user from clicking twice while waiting
   5:      button1.IsEnabled = false;
   6:      button2.IsEnabled = false;
   7:   
   8:      //since we are going across the wire, must explicitly tell
   9:      //RIA Services that we are going to load data 
  10:      //Also here is where you pass the parameter in 
  11:      LoadOperation<Customer> loadOp = domainContext.Load
  12:          (domainContext.GetCustomersByIDQuery(textBox1.Text), 
  13:              CustomerLoadedCallback, null);
  14:      //the actual data binding, RIA takes care of the async
  15:      dataGrid1.ItemsSource = loadOp.Entities;
  16:  }

As part of the operation, RIA Services handles the asynchronous processing for you. The problem is that users are not used to async operations, so they may try to click the button more than once while waiting. We account for this by disabling the buttons (lines 5-6) until the operation is complete. We catch the end of the async operation in a callback function, which we pass as a parameter to the operation (line 13). The callback function is here:

   1:  //callback function for when the load is complete
   2:  private void CustomerLoadedCallback(LoadOperation<Customer> loadOperation)
   3:  {
   4:      //re-enable our buttons
   5:      //if you want to display an "IsBusy" graphic
   6:      //this is where you would remove it
   7:      button1.IsEnabled = true;
   8:      button2.IsEnabled = true;
   9:  }

 

Let’s run this and test it out. If you filter by “ALFKI”, the results look like this:

image

Now let’s do the same for the filter by ContactName. The code behind the button event is here:

   1:  private void button2_Click(object sender, RoutedEventArgs e)
   2:  {
   3:      //disable the buttons during the async load
   4:      //to prevent the user from clicking twice while waiting
   5:      button1.IsEnabled = false;
   6:      button2.IsEnabled = false;
   7:   
   8:      //since we are going across the wire, must explicitly tell
   9:      //RIA Services that we are going to load data 
  10:      //Also here is where you pass the parameter in 
  11:      LoadOperation<Customer> loadOp = domainContext.Load
  12:          (domainContext.GetCustomersByLetterQuery(textBox2.Text),
  13:              CustomerLoadedCallback, null);
  14:      //the actual data binding, RIA takes care of the async
  15:      dataGrid1.ItemsSource = loadOp.Entities;
  16:  }

Similar to the previous example, we are calling a query method, this time GetCustomersByLetter (lines 11-13), and passing in the value the user typed into the text box. When we run this and filter for all contacts that start with the letter H, it looks like this:

image

Hopefully with these two examples you can see the power of using Telerik’s new LINQ implementation and WCF RIA Services.

Enjoy!

posted on Friday, June 18, 2010 5:20:56 AM (Eastern Daylight Time, UTC-04:00)

With the Q1 release of Telerik OpenAccess ORM, Telerik released a brand new LINQ Implementation and supporting Visual Entity Designer. With the upcoming Q2 release next month, we will introduce full WCF RIA Services support. If you want to get started now you can wire up the services yourself pretty easily. Let’s take a look at how to get your feet wet with RIA Services and Telerik’s LINQ implementation.

Before you get started, you will need a few things installed:

  • Visual Studio 2010
  • Silverlight 4
  • WCF RIA Services for Visual Studio 2010
  • Northwind sample database
  • Telerik OpenAccess ORM Q1 Service Pack 1 or higher

Getting Started: The Easy Stuff

Let’s create a new Silverlight application first. In the New Silverlight Application dialog, check the “Enable WCF RIA Services” checkbox; this links the Silverlight client to the server project for RIA Services.

image

The next step is to create a new Telerik Domain Model in the server (ASP.NET) project; I have a detailed walkthrough here on how to do that. We’ll create the Domain Model by right clicking on the server project, selecting “Add”, and choosing the Telerik Domain Model from the menu. Then we will map all of the tables from Northwind using the wizard. We’ll also keep the default model name of NorthwindEntityDiagrams.

image

We’re in good shape. So far, if you have used the new LINQ implementation (or LINQ to SQL/EF for that matter), nothing here is new. Now let’s add the RIA Services stuff.

Housekeeping: Adding References

Since our RIA Services support is still beta, you have to wire up a few things manually, including some references. You need to add:

  • Telerik.OpenAccess.Ria.Extensions.dll (found under Browse: Program Files|Telerik|OpenAccess ORM|Bin)
  • System.ServiceModel.DomainServices.Server.dll
  • System.ServiceModel.DomainServices.Hosting.dll
  • System.ComponentModel.DataAnnotations.dll

image

Now we are ready to create the domain class.

Creating the Domain Class

Add a new Domain Service Class by right clicking and selecting Add|New Item and choose Domain Service Class.

image

Accept the defaults in the dialog and we are ready to go. (Note that at this time OpenAccess does not support generating the metadata class, but it will soon, possibly even before Q2.)

image

Once you accept this dialog, a new empty class is generated.

   1:      [EnableClientAccess()]
   2:      public class DomainService1 : DomainService
   3:      {
   4:      }

 

We need to add a using statement so we can make sure our DomainService uses the OpenAccess model: using Telerik.OpenAccess;

Now change the inheritance of DomainService1 to this:

   1:  [EnableClientAccess()]
   2:  public class DomainService1 : OpenAccessDomainService<NorthwindEntityDiagrams>
   3:  {
   4:  }

Now we have one last step to complete our DomainService: adding the CRUD methods. (In the future all of this will be done automatically for you!)

   1:  public IQueryable<Customer> GetCustomers() 
   2:  { 
   3:      return this.DataContext.Customers; 
   4:  }
   5:   
   6:  public void InsertCustomer(Customer c)
   7:  {
   8:      this.DataContext.Add(c);
   9:      this.DataContext.SaveChanges();
  10:  }
  11:   
  12:  public void DeleteCustomer(Customer c)
  13:  {
  14:      this.DataContext.Delete(c);
  15:      this.DataContext.SaveChanges();
  16:  }
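Update is missing from the CRUD block above. A hedged sketch of what it could look like, assuming OpenAccess’s AttachCopy pattern for reattaching a detached entity (the method name and body are my illustration, not from the original post):

```csharp
// Hypothetical update method (an assumption, not shown in the post):
// AttachCopy reattaches the detached entity sent back by the client,
// and SaveChanges persists the modification.
public void UpdateCustomer(Customer c)
{
    this.DataContext.AttachCopy(c);
    this.DataContext.SaveChanges();
}
```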

These are the methods of your DomainService. You can also add business logic here; let’s do that with our GetCustomers() query. I will write some business logic that filters the customers down to those in Germany. Of course you would have more complex business logic here; I just want to demonstrate the point. All clients that use this DomainService will inherit this business logic, even if you expose the service as an OData feed. Our implementation is here:

   1:  public IQueryable<Customer> GetCustomers() 
   2:  { 
   3:      return this.DataContext.Customers
   4:          .Where(c=> c.Country=="Germany")
   5:          .OrderBy(c=> c.CustomerID); 
   6:  }

 

Now you are done. Compile and let’s get cracking on a Silverlight client.

Creating the Silverlight Client

This is the easy part. We’ll use the RIA Services drag and drop features. Open MainPage.XAML in the Silverlight application and in the Data Sources window, drag and drop the Customer entity to the XAML form. (Tip: if the Data Sources window is blank or not showing up, you can manually force it to come up via the “Data” menu option on the main menu in Visual Studio.)

Once you drag and drop the entity to the form, a grid will automatically show up.

image

Now press F5 and see the application running.

image

That's it! We just created an OpenAccess-based RIA Services application!

Of course there is a lot more to RIA Services than just binding a grid; however, this demonstration should show you that once you create your DomainService class, all of the RIA Services “stuff” just works for you. In future posts we will look at more RIA Services features as well as creating a query method.
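As a preview of the query-method topic, here is a hedged sketch of parameterized query methods on the server, written the way the GetCustomersByID and GetCustomersByLetter methods consumed by the Silverlight client elsewhere in this archive would be declared (these bodies are my illustration, not the original post's code):

```csharp
// Sketch of parameterized query methods on the DomainService
// (hypothetical implementations). RIA Services would expose these on
// the client as GetCustomersByIDQuery(...) and
// GetCustomersByLetterQuery(...).
public IQueryable<Customer> GetCustomersByID(string customerID)
{
    return this.DataContext.Customers
        .Where(c => c.CustomerID == customerID);
}

public IQueryable<Customer> GetCustomersByLetter(string letter)
{
    return this.DataContext.Customers
        .Where(c => c.ContactName.StartsWith(letter))
        .OrderBy(c => c.ContactName);
}
```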

Enjoy!

posted on Thursday, June 17, 2010 9:06:30 AM (Eastern Daylight Time, UTC-04:00)

Yesterday Joel and I did the day-long Agile precon at TechEd in New Orleans, LA. We had a great crowd and were able to keep them engaged for 8 hours. You can download the materials here.

We used an “Agile presenting” technique where we put the agenda in an “Agenda Backlog” and we reprioritized after every sprint (agenda item) and let the audience decide what we would talk about next. To our surprise the audience voted against two planned sections and we did two new sections on the fly. We talked about:

  • Agile theory and agile methodologies (XP, Scrum, FDD, DSDM, *DD, Kanban, etc)
  • Intro to Scrum
  • Agile Estimation
  • Challenges to Implementing Agile in General
  • Challenges to Implementing Agile: In the Enterprise
  • Challenges to Implementing Agile: Remote Teams
  • Tools
  • QA and Documentation

We got into a discussion on what happens when the team finishes a sprint early: do you stop the sprint, or give the team more work to do? (Joel and I both go against the agile literature and give the team more work!)

We also took a few micro-breaks to rest our brains, talking about iPhone v. Android, how I buy Joel clothes, and movie quotes from The Matrix (“I know Kung Fu”) and What About Bob? (“Baby Steps”).

We also recommended a book, one of my favorite management books of all time: Peopleware. For the non-techies reading this blog (I don’t know why!): if you manage teams, this book is also for you.

Hope to do this seminar again soon!

posted on Monday, June 07, 2010 12:04:51 PM (Eastern Daylight Time, UTC-04:00)

Last week Telerik released a service pack for OpenAccess Q1. The service pack fixes a few issues with Telerik’s new LINQ implementation working under Visual Studio 2010. In addition, the service pack now ships with the Data Services Wizard; it is no longer a separate install. You can use the Data Services Wizard with traditional OpenAccess classes or the new LINQ implementation. Today I will show you a new feature of the wizard: automatically creating a styled Silverlight application from your domain model. Future blog posts will show how to use RIA Services as well as SQL Azure.

To get started, open up Visual Studio 2010 and create a new ASP.NET application. Create a new domain model by right clicking on the project, selecting Add|New Item, and choosing Telerik OpenAccess Domain Model from the dialog.

image

This will bring up the OpenAccess ORM Data Wizard to create your domain model by mapping tables to entities. Let’s use Northwind and map all tables and use the default model name: NorthwindEntityDiagrams. While there are other advanced features of the wizard, like T4 templates to override the default codegen, let’s just accept the defaults and finish.

image

Once you click Finish the wizard will add a new domain model to your project.

image

Now that we have our domain model, it is time to run the Data Services Wizard to automatically create a RESTful service using WCF Data Services. (The Data Services Wizard also gives us the ability to create a WCF endpoint.) This can be done from the Telerik|Data Services Wizard menu option on the Visual Studio main menu (or by right clicking on the EntityDiagrams1.rlinq file). The first page of the wizard asks you where the domain model lives and what project to place the new service in. We’ll do it all in the same project.

image

The next page of the wizard asks which entities you want to expose in your service, as well as whether you want to generate a Silverlight application. If you check this option, you are given the choice between the standard Microsoft Silverlight controls and the Telerik controls (if you have them installed). If you choose the Telerik controls, you can also pick a theme for your Silverlight application; the drop down shows a list of all of the installed themes. I’ll go ahead and choose the Windows7 theme.

image

After you click through the finish page of the wizard, the next step is to hit F5 and run your application. A basic, but styled, Silverlight application is created for you, complete with all of the CRUD methods. The Telerik version has all of the features you would expect: built-in sorting, filtering, grouping, etc. I know that you will want to change the baseline application, but this is a great start, eliminating all of the asynchronous CRUD code that you would otherwise have to write. Not bad for about 30 seconds of work!

image

Enjoy!

posted on Thursday, June 03, 2010 3:41:03 AM (Eastern Daylight Time, UTC-04:00)

I’ll be speaking at TechEd North America in New Orleans next week and hope to have a front row seat to some Microsoft announcements. Microsoft practices CDD, or “Conference Driven Development,” where major announcements have historically been made only at large trade shows like PDC, MIX, and TechEd. Since there is no PDC this year, TechEd is an opportunity for Microsoft to make some BI, developer, or IT pro announcements. There are two keynotes by “Microsoft executives” and I’ll be looking (hoping) for Microsoft to make the following announcements, in order of probability:

iPhone Development Kit

This one has been rumored for a while. While I am sure we may see some kind of minor Windows Phone 7 announcement at TechEd, is there going to be an iPhone SDK for Silverlight and Visual Studio 2010?

Azure Pricing Changes

Windows Azure seems to have been accepted by the tech community as something that is well architected and very stable, but the general consensus is that it is a tough sell to customers on both privacy/cloud and cost grounds. While I don’t have a major problem with Azure pricing, Microsoft could put more competitive pricing plans in place in order to do a land grab on adoption.

Silverlight 5.0 Roadmap

It is too early for a beta, but it would be great to have some info on Silverlight 5.0, what it will support, what it will look like, etc. Microsoft has been so quick to ship new iterations of Silverlight that people expect the same out of Silverlight 5.0. For example, Silverlight 2.0 shipped in October 2008 and Silverlight 3.0 beta appeared in March 2009, only 5 months later. Silverlight 3.0 shipped in July 2009 and Silverlight 4.0 beta 1 shipped in November 2009, only 4 months later. Silverlight 4.0 shipped in April 2010, so June 2010 is a reasonable timeframe for Microsoft to talk about Silverlight 5.0 features and design goals, if not a beta timetable.

Windows 8 Sneak Peek

OK, I admit it: I love Win7, have zero complaints, and don’t even have a wish list for Win8. But the geek inside of me wants a sneak peek of Win8. I am pretty sure this will not happen, but Vista shipped in January 2007 and we had a Windows 7 alpha in October 2008, so it’s not unheard of. I don’t need an alpha, just a sneak peek.

SQL Server 2012 Info

OK, if it is too soon to talk about Silverlight and Windows, then why not SQL Server 2012? TechEd is more of an IT pro conference and it is co-located with the BI Conference, so maybe, just maybe, we will get some info on SQL.next.

Steve Ballmer’s Retirement

Don’t laugh. At TechEd in Boston in 2006 it was announced that Bill Gates was retiring. Steve has been in charge a long time; while I don’t expect a leadership transition at this point in time, you never know.

Free Windows Phone 7

Hey, Google gave out new Android phones at I/O twice. Microsoft gave us all a tablet at the PDC. Why not a phone?

posted on Tuesday, June 01, 2010 10:52:51 PM (Eastern Daylight Time, UTC-04:00)

Telerik has released Service Pack 1 of OpenAccess ORM, available now for download. There are a lot of new features (and of course bug fixes) but the three most important are:

  • The new LINQ implementation works fully with Visual Studio 2010
  • The Data Services Wizard is now fully integrated with the product, no separate install
  • A beta of RIA Services support

I will be doing a blog post on each of these in the coming weeks (expect some delays with TechEd US in the way). One more thing to mention: the Data Services Wizard now generates a Silverlight client and gives you the ability to automatically style the application, a feature I previewed last week at the Sydney User Group.

Enjoy!

posted on Friday, May 28, 2010 3:02:32 AM (Eastern Daylight Time, UTC-04:00)

On Wednesday I presented an hour long introduction to Scrum, titled “To Scrum or not to Scrum”, at the PMI’s Project Management Day in Bucharest, Romania. It was a great event and I presented Scrum from a project manager’s point of view. About 15% of the audience was not from the IT field, so I also tried to present Scrum in a more generic way. (You can download the seminar slides here; they are the same slides I have used all year.)

I asked the audience to turn off their cell phones, but also asked them to stand up and take my photo while they did it, so I took their photo at the same time. This is about half the room; sorry to the other half. :)

IMG_20100526_102552

After the event I hung out with some of the PMI guys and walked down to Caru Cu Bere, a local restaurant/beer hall in the historic old town (there was even a statue of Dracula there!). On the walk down I saw Romania’s Arc de Triomphe.

IMG_20100526_200314

Looking forward to my next visit!

posted on Thursday, May 27, 2010 2:49:05 AM (Eastern Daylight Time, UTC-04:00)

I had a quick visit to Sydney this week, my first time back in something like 18 years. Flying in from Hong Kong was only about 8 hours, and when Adam Cogan picked me up from the airport, he took me directly to Watson’s Bay, where we went stand-up paddle boarding. The weather was cold and a storm was brewing, making conditions, well, a little crazy. TJ, Adam, and I risked hypothermia and had a blast!

IMG_0652 IMG_0666

On Wednesday morning, I did my now (in)famous Scrum seminar. You can download the seminar slides here. I said that the most important factors for success in implementing Scrum are the engagement of the product owner (both in writing the user stories and in the daily scrum) and accepting that it is OK to change Scrum. We spent a lot of time on estimation and team velocity as well. We had some laughs at Adam’s expense and got off topic with a quote of mine that “Windows is nothing without Excel!”

That evening I spoke at the Sydney .NET User Group, doing a “Silverlight Line of Business” talk. It ran about 2.5 hours; in the first half I did my WCF walkthrough with some extra bells and whistles, including building an Astoria service and consuming it. At the break I showed the Telerik Data Services Wizard and it was a huge hit, specifically the auto-generation of the Silverlight application. (I also showed a feature that will ship very soon that allows you to style your application via the wizard.)

After the break I talked about WCF RIA Services, which shipped the other day. Just like before, I ripped off Brad Abrams’ blog here and showed items 1-5, plus #8 with a PowerPivot client. The PowerPivot client seemed very popular!

It was a great trip and I hope to be back soon.

posted on Thursday, May 20, 2010 7:59:38 AM (Eastern Daylight Time, UTC-04:00)

Last year I went to Chyangba, Nepal, my Sherpa's village in a very remote section of Nepal and helped build a library for the local school. I was part of a charity effort and we raised a lot of money, a good portion from the Microsoft .NET Community. I’ll be headed back to Nepal this September and will help start a new drive to raise money for a new school building.

IMG_0490

While we were in Chyangba, Engineers Without Borders started a water project. An engineer from the US, Jim, arrived and started working on bringing running water to the 60 homes in Chyangba. This is a big deal since the common water tank usually has poor quality water. Well, 7 months later, the project is a success. Below is the full report from Jim, the engineer.

IMG_3344

 

Water is coming

We've had the system running off and on for about 10 days, but now it is completely online and serving all 56 houses.  It is also set up to serve two more houses that will be built in the coming months.  The people seem very happy and some of the older women especially are grateful to have taps in their houses.  I have a lot of kahtahs.  I'm in Kathmandu.  I arrived yesterday and leave in two days for the US.

All in all it seems to be working remarkably well.  I didn't get to measure the flow rate at every house, but the houses get 1 L in 7 to 13 seconds depending on their location relative to the tank that serves them.  It's a pretty good flow rate.  As Phula so eloquently puts it, "the people are really satisfaction with the water."

Boring Engineering Notes

There were a few bugs that we had to work through. 

1.  The pipeline from the intake to the reservoir was having low flow issues.  At times we were only getting .4 LPS down to the reservoir when we would measure 1 LPS or more at the intake.  Installing a control valve on the tank inlet and closing the air vent at the intake appears to have solved the problem.  I think the pipe was not flowing full and that caused the water to slow down when coming through some u-profiles.  Blowing into the air vent pipe at the intake would cause the pipeline to flow anywhere from 1.5 to 2 LPS for half an hour to 2 hours, but then the flow would drop to .4 LPS.  With the control valve, we are getting .83 LPS into the reservoir, which is enough water.   I measured the flow over 48 hours and it stayed constant.  I've been talking to Phula every day since I left the village; he said the water is still coming the same.  

2.  The pipeline from the reservoir to PB B1 does not flow well unless the tank is full.  First we were able to fix the problem by adjusting control valves to PB A1 and B1 to keep the tank full, but today Phula installed float valves in the three tanks in Community A and he said that works much better.  He said that keeps the tanks in Community B mostly full, which is good. 

3.  We had a blocked pipe on one of the house lines--it ended up being easy to find because it was before the first tee and the first tee was close to the PB.  We never found out what the block exactly was--I presume it was mud because dirty water came through the pipe we dug up and disconnected.  After rejoining it, the water was working fine.  We have had only tarps covering the PBs as we've been playing with valves and checking flows at all the tanks.  The roofs are too heavy to be moving on and off several times per day.  That allowed some dirt to get into the tanks, which is what must have caused the block. 

4.  Phula is currently cleaning all the tanks and putting the roofs on, but it will take a few days.  We had built a rim around the water tank at each PB for the roof to fit into, but we have had some tolerancing issues--some of the roofs don't fit inside the rim because the corners aren't perfectly square, despite multiple measurements and a few inches of safety margin.  The rims will have to be chipped slightly and then replastered.  It is really only a cosmetic issue; I liked the idea of rims to help keep dirt from sneaking in under small spaces between the roof slab and the top of the water tank, but it is difficult to do things with much precision here, at least with these masons.  The masons have generally refused to use levels, claiming they could eyeball it.  Some of the tops of the tanks are a little sloped or uneven, and the roof does not fit snugly.  For now I think putting a tarp under the roof will be sufficient, but it is a repair job that we should address in the future.

5.  There are seven houses that are sometimes affected by a weird air block.  If the tank above them is drained and then refilled, water won't flow to the houses.  However, the problem is an easy fix--the houses take their pipe outside, open the tap, and once the water comes, close the tap and put it back into the house.  The pipes enter the houses by climbing up the outside stone wall, through a hole in the rafters, and then back into the kitchen.  The people seem to prefer this to tunnelling through the mud and stone walls or foundations.  The seven houses are the five houses in Community C, one house in Community A, and one house in Community B.  The house in Community A is probably not far enough below the tank that serves it.  At only 7 meters of elevation difference, it's right at the limit of the design recommendations.   The water level in the tank adds about 80 cm of head.  However, the water flows well once the pipeline has been reset--that house gets 1 L in 13 seconds.  The house in Community B is served by an HDP pipe PB and is 12 meters below it.  The pipe tanks are small and they don't add much head to the outflow, but the pipe into that house also climbs steeply up into the house.  The five houses in Community C are more of a mystery to me:  blowing into the air vent at the PB C1 outlet or taking the pipe outside and opening the tap fixes the problem.  PB C1 is small and only adds about 40 cm of head to the outflow, but the pipes in Community C climb straight up two meters into the top floor of the houses, which is more than in most places.  All of the houses are between 23 and 29 meters below the tank, which seems like it should be more than enough to force the air out of the pipeline. However, all of the houses are affected simultaneously--it makes me wonder if there is some air block before the first tee.  
There is a small U-profile of 115 cm, which seems like it should be insignificant as it occurs 9 meters below the tank, but maybe because the water level in the tank is relatively short it makes a big difference.  None of the houses get this problem when we leave the water running continuously, which we only did the last 48 hours I was in Chyangba.  I will continue to check in with Phula about it, but he has not reported any more problems.  In this case, I don't fear not hearing about problems due to the Sherpa cultural taboo on disappointing a guest; the villagers have not hesitated in the past to tell me--sometimes quite rudely--that their water was not coming.

Work Left to be Done

Right now, Phula is working on some tasks like building a fence at the reservoir and at the intake; cleaning all the tanks and putting on the roofs and making sure they fit; organizing and cleaning the tools; and backfilling the partially buried joining and tee areas (which were not filled in so we could see if there were any leaks once we got the water running).  He thinks he will be finished in one week.  I left him with a specific list and I have a copy of it; he signed an MOU stating that he will email pictures for me to review before he will be paid by Pem.  I get the sense that he is doing a good job with it.  He does have some pride in the work he has done, but I think he doesn't express it when I am around because he focuses on trying to guilt trip me into getting him either more money from EWB or a visa to the USA.  It's annoying and disappointing, but it's part of this game.  At least he does the work. 

Sishakhola

Due to the troubleshooting, I did not have time to survey Sishakhola.  I think an adequate survey time would be three or four days, and I didn't have the time.  It was more important to me to get Chyangba's system working well enough to where I felt comfortable leaving it with the village and Phula.  I know that may disappoint some of you and I know I said I would make the time for it, but respect that I have worked every day for 7 months straight in considerably frustrating conditions.  We really need to see how well Chyangba's system works in the long run before we start on Sishakhola, and an inadequate survey would only create the same problems for Charlie that I had.  Additionally, surveying is more than measuring; when you do it, the people expect you to come back and build the system.  I don't know what is going on with money and commitments at home and I am certainly not ready to make a commitment to Sishakhola myself.  Speaking of you Charlie, I really think you should come for a short visit first, see the village, and know what you are getting into before you spend 3-4 months here.  You will benefit a lot from what I have learned and that will make it easier for you than it was for me, but this work is difficult. 

Trip in the Fall and the Training Course

I think a trip in the fall is still very important to check on the Chyangba system and to make sure it is being used properly and the people are taking care of it.  Phula and I did a one day training course the day before I left; the people impressed me with how quickly they learned, how handy they are, and that they have the patience to fiddle with valves and pipe wrenches.  They can be incredibly creative and come up with simple and effective solutions with limited resources--they made tweezers out of bamboo to clean the tap drains, for example.  While the villagers and those on the training course seem like they know how to do everything to take care of the system, I worry that they don't understand why they must care for it.  They complain a lot about their old government system--that there wasn't enough cement used or that it was a "Nepali project"--but the reality is that system worked surprisingly well, and got away with fewer and smaller PBs.  Had the pipe been adequately buried, it would probably still be functioning.  The real problem is that the people in Chyangba never took any responsibility or leadership for trying to fix the old system.  They could have easily remade roofs for the PBs from relatively inexpensive local materials like slate and wood--concrete is not always necessary.  They could also have made fences from local wood around the PBs to protect them.  What they had before was good enough that they could live with it--there was no motivation among the villagers to try to improve it.  That's what worries me about the future of this system--their lack of motivation, not lack of knowledge, skills, or resources.  I hope that between 40 days of labor from each house and 500 rupees from every house, they will value their own efforts and money and will want to take care of it.  I wish I had more time to spend on the training course, but I had to spend the time to fix the bugs in the system.  
Phula will create a new water council from the training course members that has several women and fewer monkey brains--it will be important to keep working with them in the future.

A trip in the fall could focus on reviewing the training course and maintenance requirements and surveying Sishakhola.  A bigger intake and a collection chamber would capture more water and would make the flow to reservoir greater and will probably be necessary if Chyangba's spring serves Sishakhola; that could also be done in the fall.  As I keep in touch with Phula and learn more about how the system works, it may be that float valves everywhere would make it better.  If that were the case, we would need to build 5 more PBs in the fall.  I'm hoping that won't be necessary.

Jim

posted on Monday, May 03, 2010 3:10:17 AM (Eastern Daylight Time, UTC-04:00)

Web 2.0 has made our world more transparent. President Barack Obama has a Twitter account, as does Communist Dictator Fidel Castro. (Britney Spears has more followers than both Obama and Fidel put together, but that is another story.) Product roadmaps are now public as is proposed legislation. The world is a much more open place.

One company that has not gotten the transparency memo is Apple. They are so secretive that they sue their customers for publishing blogs that speculate about what new products are coming out. The tremendous secrecy surrounding Apple has served it well and I see no reason why that will change.

That said, the saga of the lost iPhone is starting to get real ugly. By now you know the story: last month, an Apple employee lost a next generation prototype of the iPhone 4G at a bar and the person who found it sold it to Gizmodo for $5000. Gizmodo promptly put an exclusive scoop on their web site reviewing the phone.

When that review went live, Apple went ballistic and said they want the phone back. To their credit, Gizmodo gave it back, but kept the web page up. Apple was not satisfied and then sent the police to raid the Gizmodo writer’s house and the police seized computers, hard drives, etc. Apple apparently is going after the person who found the phone and sold it to Gizmodo.

California, and several other US states, has a Shield Law: a law protecting a journalist from having to reveal their source. Journalists are protected by free speech and obtain secret information all the time. While the ethics of buying the phone from the person who found it in the bar are somewhat questionable, doing so does not break any laws since the phone was lost, not stolen. The person who found the phone tried to return it to Apple, but did not have his calls returned. (Apparently he even tried an alphabetical search on Facebook for someone to talk to, but Apple is uber secret.) When Apple did not get back to him, he sold it to Gizmodo.

Nothing Apple can do now will make the leak and product review go away. Going after Gizmodo is like going after the New York Times for publishing the Pentagon Papers: there is no chance they are going to beat 230 years of free speech and free press. Apple has no case there. They can’t go after the person who sold the phone since Gizmodo is protected from revealing their source by California's Shield Law. Apple has no case there either. With each legal move and police raid, Apple looks more and more arrogant. What should they do? Take the high road: drop it and move on. Apple should also enjoy the free publicity.

posted on Wednesday, April 28, 2010 6:57:09 AM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] Trackback

After a great beta cycle, Telerik is proud to announce today the commercial availability of the OpenAccess Data Service Wizard. You can download it and install it with Telerik OpenAccess Q1 2010 for both Visual Studio 2008 and 2010 RTM. If you are new to the Data Service Wizard, it is a great tool that lets you point a wizard at your OpenAccess-generated data access classes and automatically build a WCF, Astoria (WCF Data Services), REST, or AtomPub collection endpoint, complete with CRUD methods where applicable.

4-1-2010 1-39-01 PM

If you are familiar with the Data Service Wizard already, there will be two new surprises in the release version.

If you generated a domain model with the new OpenAccess Visual Entity Designer, you have only one file added to your project, mydomainmodel.rlinq for example. The first surprise of the new Data Service Wizard is that if you right-click on the domain model in Visual Studio, you can use an “express” version of the Data Service Wizard and generate your service with just one click! This is pretty awesome: you can create your domain model from a database and create a service in well under 60 seconds.

4-1-2010 1-30-24 PM

Surprise number two is that if you are using the new Visual Entity Designer, we now give you the option, in both the full wizard and the right-click “express” version, to create a new Silverlight application as a consumer of your new service. The Wizard will generate a Silverlight application with the full CRUD methods for you. You can go from File|New Project in Visual Studio to a full domain model generated from the database, a full WCF or Astoria service, and a fully functional CRUD Silverlight client in under 60 seconds!

4-1-2010 1-47-12 PM

The Silverlight application generation feature is a very “1.0” feature and we have big plans for it moving forward. We look forward to your feedback on what to add to this application generation feature next. While I expect you to put your own skin on it and write some validation code, the application we build for you is a great starter and will save you from having to write all of the asynchronous CRUD code in your client. Visit our forums and let us know what you think.

Lastly, when OpenAccess releases its Q1 Service Pack later this month, the Data Service Wizard will be part of the main product install, so there is no need for a separate install moving forward. Our release cycle will now be in sync with OpenAccess and we have a lot planned for Q2, I will post an updated roadmap here soon.

Technorati Tags: ,
Bookmark and Share
posted on Thursday, April 08, 2010 9:58:27 AM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] Trackback

This week Telerik released a new LINQ implementation that is simple to use and produces domain models very fast. Built on top of the enterprise-grade OpenAccess ORM, it can connect to any database that OpenAccess can connect to, such as SQL Server, MySQL, Oracle, SQL Azure, VistaDB, etc. While this is a separate LINQ implementation from traditional OpenAccess Entities, you can use the visual designer without ever interacting with OpenAccess; however, you can always hook into the advanced ORM features like caching and fetch plan optimization if needed.

Just to show off how easy our LINQ implementation is to use, I will walk you through building an OData feed using the “Data Services Update for .NET Framework 3.5 SP1”. (Memo to Microsoft: P-L-E-A-S-E hire someone from Apple to name your products.) How easy is it? If you have a fast machine, are skilled with the mouse, and type fast, you can do this in about 60 seconds via three easy steps. (I promise in about 2-3 weeks that you can do this in less than 30 seconds. Stay tuned for that.)

Step 1 (15-20 seconds): Building your Domain Model

In your web project in Visual Studio, right click on the project and select Add|New Item and select “Telerik OpenAccess Domain Model” as your item template. Give the file a meaningful name as well.

image

Select your database type (SQL Server, SQL Azure, Oracle, MySQL, VistaDB, etc.) and build the connection string. If you already have a connection string saved in Visual Studio, this step is trivial. Then select your tables, enter a name for your model, and click Finish. In this case I connected to Northwind and selected only Customers, Orders, and Order Details. I named my model NorthwindEntities and will use that in my DataService.

image

Step 2 (20-25 seconds): Adding and Configuring your Data Service

In your web project in Visual Studio, right click on the project and select Add|New Item and select “ADO .NET Data Service” as your item template and name your service.

image

In the code behind for your Data Service you have to make three small changes. Add the name of your Telerik Domain Model (entered in Step 1) as the type parameter of your DataService (shown on line 6 below as NorthwindEntities) and uncomment line 11, adding a “*” to expose all entities. Optionally, if you want to take advantage of the Data Services 3.5 updates, add line 13 (and change IDataServiceConfiguration to DataServiceConfiguration on line 9).

   1:  using System.Data.Services;
   2:  using System.Data.Services.Common;
   3:   
   4:  namespace Telerik.RLINQ.Astoria.Web
   5:  {
   6:      public class NorthwindService : DataService<NorthwindEntities>
   7:      {
   8:          //change the IDataServiceConfiguration to DataServiceConfiguration
   9:          public static void InitializeService(DataServiceConfiguration config)
  10:          {
  11:              config.SetEntitySetAccessRule("*", EntitySetRights.All);
  12:              //take advantage of the "Astoria 3.5 Update" features
  13:              config.DataServiceBehavior.MaxProtocolVersion = DataServiceProtocolVersion.V2;
  14:          }
  15:      }
  16:  }

 

Step 3 (~30 seconds): Adding the DataServiceKeys

You now have to tell your data service what the primary keys of each entity are. To do this you create a new code file containing a few partial classes. If you type fast, copy and paste from your first entity, and use a refactoring productivity tool, you can add these 6-8 lines of code or so in about 30 seconds. This is the most tedious step, but don’t worry, I’ve bribed some of the developers and our next update will eliminate this step completely.

Just create a partial class for each entity you have mapped and add the attribute [DataServiceKey] on top of it along with the key’s field name. If you have any complex properties, you will need to declare them as primitive types, as I do in line 15. Create this as a separate file; don’t manipulate the generated data access classes, in case you want to regenerate them again later (even though that would be much faster).

   1:  using System.Data.Services.Common;
   2:   
   3:  namespace Telerik.RLINQ.Astoria.Web
   4:  {
   5:      [DataServiceKey("CustomerID")]
   6:      public partial class Customer
   7:      {
   8:      }
   9:   
  10:      [DataServiceKey("OrderID")]
  11:      public partial class Order
  12:      {
  13:      }
  14:   
  15:      [DataServiceKey(new string[] { "OrderID", "ProductID" })]
  16:      public partial class OrderDetail
  17:      {
  18:      }
  19:   
  20:  }

 

Done! Time to run the service.

Now, let’s run the service! Select the svc file and right click and say “View in Browser.” You will see your OData service and can interact with it in the browser.
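For example, assuming the service is named NorthwindService.svc and hosted locally (the host and port below are placeholders), standard OData URI conventions let you explore the feed straight from the address bar:

```
http://localhost:1234/NorthwindService.svc/            -- service document (lists the entity sets)
http://localhost:1234/NorthwindService.svc/Customers   -- all customers as an Atom feed
http://localhost:1234/NorthwindService.svc/Customers('ALFKI')/Orders
http://localhost:1234/NorthwindService.svc/Customers?$filter=Country eq 'Germany'
```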

image

Now that you have an OData service set up, you can consume it in one of the many ways that OData is consumed: using LINQ, the Silverlight OData client, Excel PowerPivot, PHP, etc.
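As a sketch of the LINQ option, here is a minimal .NET console client using the “Astoria” client library (System.Data.Services.Client). The service URI and the hand-written Customer proxy class below are assumptions for illustration; normally you would generate the proxy with DataSvcUtil.exe:

```csharp
using System;
using System.Data.Services.Client;
using System.Data.Services.Common;
using System.Linq;

// Hand-written client-side proxy for the Customers entity set (hypothetical;
// usually generated from the service metadata by DataSvcUtil.exe).
[DataServiceKey("CustomerID")]
public class Customer
{
    public string CustomerID { get; set; }
    public string CompanyName { get; set; }
    public string Country { get; set; }
}

class ODataClientDemo
{
    static void Main()
    {
        // Placeholder URI -- substitute the address of your own service.
        var ctx = new DataServiceContext(new Uri("http://localhost:1234/NorthwindService.svc"));

        // CreateQuery<T> returns a LINQ-enabled DataServiceQuery; the where
        // clause is translated into a $filter option on the request URI.
        var germans = from c in ctx.CreateQuery<Customer>("Customers")
                      where c.Country == "Germany"
                      select c;

        foreach (var customer in germans)
            Console.WriteLine(customer.CompanyName);
    }
}
```

Enumerating the query issues the HTTP GET and materializes the returned Atom entries into Customer objects.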

Happy Data Servicing!

Technorati Tags: ,,

Bookmark and Share
posted on Saturday, March 13, 2010 4:29:07 AM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback

Love LINQ to SQL but are concerned that it is a second class citizen? Need to connect to more databases other than SQL Server? Think that the Entity Framework is too complex? Want a domain model designer for data access that is easy, yet powerful? Then the Telerik Visual Entity Designer is for you.

Built on top of Telerik OpenAccess ORM, a very mature and robust product, Telerik’s Visual Entity Designer is a new way to build your domain model that is very powerful and also really easy to use. How easy? I’ll show you here.

First Look: Using the Telerik Visual Entity Designer

To get started, you need to install the Telerik OpenAccess ORM Q1 release for Visual Studio 2008 or 2010. You don’t need to use any of the Telerik OpenAccess wizards, designers, or using statements. Just right click on your project and select Add|New Item from the context menu. Choose “Telerik OpenAccess Domain Model” from the Visual Studio project templates.

image

(Note to existing OpenAccess users, don’t run the “Enable ORM” wizard or any other OpenAccess menu unless you are building OpenAccess Entities.)

You will then have to specify the database backend (SQL Server, SQL Azure, Oracle, MySQL, etc) and connection.

image

After you establish your connection, select the database objects you want to add to your domain model. You can also name your model; by default it will be NameofyourdatabaseEntityDiagrams.

image

You can click Finish here if you are comfortable, or tweak some advanced settings. Many users of domain models like to add prefixes and suffixes to classes, fields, and properties, as well as handle pluralization. I personally accept the defaults; however, I hate how DBAs force underscores on me, so I click on the option to remove them.

image

You can also tweak your namespace, mapping options, and define your own code generation template to gain further control over the generated code. This is a very powerful feature, but for now I will just accept the defaults.

 image

When we click finish, you can see your domain model as a file with the .rlinq extension in the Solution Explorer.

image

You can also bring up the visual designer to view or further tweak your model by double clicking on the model in the Solution Explorer. 

image

Time to use the model!

Writing a LINQ Query

Programming against the domain model is very simple using LINQ. Just set a reference to the model (line 12 of the code below) and write a standard LINQ statement (lines 14-16). (OpenAccess users: notice that you don’t need any using statements for OpenAccess or an IObjectScope, just raw LINQ against your model.)

   1:  using System;
   2:  using System.Linq;
   3:  //no need for an OpenAccess using statement
   4:   
   5:  namespace ConsoleApplication3
   6:  {
   7:      class Program
   8:      {
   9:          static void Main(string[] args)
  10:          {
  11:              //a reference to the data context
  12:              NorthwindEntityDiagrams dat = new NorthwindEntityDiagrams();
  13:              //LINQ Statement
  14:              var result = from c in dat.Customers
  15:                           where c.Country == "Germany"
  16:                           select c;
  17:   
  18:              //Print out the company name
  19:              foreach (var cust in result)
  20:              {
  21:                  Console.WriteLine("Company Name: " + cust.CompanyName);
  22:              }
  23:              //keep the console window open
  24:              Console.Read();
  25:          }
  26:      }
  27:  }

Lines 19-24 loop through the result of our LINQ query and display the results.

image

That’s it! All of the super powerful features of OpenAccess are available to you to further enhance your experience, however, in most cases this is all you need.

In future posts I will show how to use the Visual Designer with some other scenarios. Stay tuned.

Enjoy!

Technorati Tags: ,,

Bookmark and Share
posted on Thursday, March 11, 2010 9:26:16 AM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback

By now there have been a lot of blog posts on Windows Azure billing. I have stayed out of it since I figured that the billing scheme would generate some sticker shock on our end and some rethinking on Microsoft's end. For the most part it has, but I now want to tell my story since I think most early Azure users are thinking along my lines.

When Windows and SQL Azure went live, I wanted to deploy an application using some of Telerik’s products to “production”. I put my free MSDN hours into the Azure system for billing and uploaded the application. I actually could not get it to work and left it up there, figuring I would get back to it and fix it later. Periodically I would go in and poke around with it and eventually fixed it. For the most part I had nothing more than an advanced “Hello World” and simple Northwind data over forms via SQL Azure up there.

Recently, I received a bill for $7 since I went over my free 750 hours by about 65 hours. (I guess I had a test and a production account running at the same time for a while.) Even though for the most part I had no hits other than my own few visits, I still incurred charges since I left my service “live” in production. My bad; I learned a lesson as to how Azure works. Luckily, it was only a $7 lesson.

It was then that I realized that I was guilty of treating Windows Azure as a fancy web hosting account. The problem is that Windows Azure is not web hosting, but rather a “web operating system” or a “cloud” service hosting and service management environment. We’re not only paying for hosting, we are paying for Azure to manage our application for us--much like enterprise middleware 10-15 years ago, but for the cloud. I now look at Azure differently, and this is good since I will use it differently (and how it was intended). I am guessing that other developers with $7 bills in their inbox this month will do the same.

That said, I was in Redmond a month or two ago and had a chance to talk to the head of MSDN. I complained about how the MSDN subscription offer was only for 8 months, etc. He told me that for the first time in Microsoft’s history, they have hard physical assets that have to be paid for with this service. If they want to give me a free copy of Windows, it costs Microsoft nothing except the bandwidth for me to download it (which is a fixed cost). I get that, and I am sure that there will be a cost-effective MSDN-Azure “developer only” subscription option in the future. Or at least there should be. :)

Technorati Tags:

Bookmark and Share
posted on Tuesday, March 09, 2010 5:23:54 AM (Eastern Standard Time, UTC-05:00)  #    Comments [2] Trackback

If you have been following me on Facebook, you know that last week I traveled to Vancouver, Canada, to watch the Winter Olympics. I love to take photos and videos and of course took a million of them. The problem is that I apparently broke the law well over 100 times while I was up in Canada. These laws and their enforcement need to be updated.

Let’s start with a photo of Scott Stanfield and me being the ugly Americans wearing our Team USA jerseys at a hockey game (USA crushed Norway 6-1!). A friend’s wife took it for us using my personal camera. While I did not ask Scott if I could post it, having known me for 10 years, he knows that if you pose for a photo with me, it will be online--so permission is implied. Nothing wrong with this photo, right?

image

According to the International Olympic Committee (IOC), this is a borderline case. While it is ok to take the photo of ourselves at the venue, live action is going on in the background. Good news for us is that you can’t see it in the photo. I am safe, the IOC won’t send lawyers to shut this blog down.

Now take a look at this photo:

image

Similar in nature to the one above of Scott and me, this photo is of a spectator in the stands. Sure, this crazy cow-bell-ringing Swiss dude did not give me his permission, but that is between him and me, not the IOC. (Trust me, he wants to be photographed!)

I posted this photo on a sports blog along with a small video of the same (to show the world how exciting and crazy curling--yes, curling--is, and how rowdy the Swiss fans are with their cow bells!)

Not so fast, according to the IOC. They sent me a nasty-gram legalese email and made me pull the photos and video down. You can see the ice in the lower right-hand corner, the “articles of play” (the stones used by the curlers), one of the Olympic judges, and the Olympic logos. I am violating the IOC’s copyright right now, just posting it here again. (And YOU can go to jail just for looking at it!)

WTF?????

The old school copyright laws are out of date. There is a difference between me downloading movies and me taking a photo at a live sporting event. (Or any live event for that matter.) My views on the RIAA and MP3s are well known (they are pure evil), however, let’s take a minute to think about the copyright at the Olympics.

I understand that NBC and other broadcasters paid the IOC a lot of money for the exclusive rights to show the Olympics on TV. I also understand that without that money, the Olympics would be difficult to stage. If I recorded an entire event, or even a very important small part of an event (like the winning shot for the hockey gold medal), I understand that that takes away from NBC’s exclusive coverage.

That said, that is not what I am doing. I was taking photos and videos of the atmosphere, the venue, the fans, and the surroundings. While at times I did get some live action in my frame, mostly it was stuff that the TV cameras did not care about. For example, most readers of this blog are technology-savvy people who think that curling is a waste of time. I went to Canada believing the same thing. After attending curling, I was in awe of its strategy, its skill, and the excitement of plays coming down to the wire. I enjoyed it so much, I went to a second match!

image

I was also blown away by the crowd. At the US-Swiss men’s game, the Swiss spectators were out of control. (Switzerland had a huge come-from-behind win on the last extra-end shot.) It was like the 7th game of the World Series (or the final match at the World Cup for you non-Americans), chanting over and over at the top of their lungs: Go Swiss! Pounding the floor with their feet over and over. Boom boom boom! And the cow-bells. Oh the cow-bells! Singing the Swiss national anthem after the match. Totally awesome! I captured the essence of this sheer excitement in the photo above. The IOC wants me to remove it.

Here is an example where a law is meant to protect a party (the IOC), yet my violation of that law is actually helping the “protected” party. My photos are free advertising for the IOC. In addition, with my enthusiasm, I am helping spread the word about curling, showing how much fun the Olympics are in person, and bringing more attention to the Olympics in general. Someone who was not interested in curling and the Olympics may decide to go to the Olympics in 2012 or watch them on TV because of my blog post and photo. Or someone may google Olympic curling, be brought to an Olympic site, and possibly buy something or watch a video--a sponsored video that brings in revenue to the IOC. More to the point, the collection of photos by thousands of spectators on Flickr, Facebook, blogs, etc., not just mine, will bring in even more to the IOC. The more people that violate the copyright, the more value for the IOC is created.

By violating the law, I am helping the IOC make money. If I follow the law, I am doing economic harm to the IOC in lost potential profits and free advertising. The system is clearly broken. The more photos on Flickr, Facebook, blogs, etc., the better off the IOC is. Copyright laws and their enforcement need to change and catch up with digital media and social networking.

image

Technorati Tags:

Bookmark and Share
posted on Tuesday, March 02, 2010 4:38:01 AM (Eastern Standard Time, UTC-05:00)  #    Comments [2] Trackback

Telerik has released the latest beta of the OpenAccess Data Service Wizard. We now support Visual Studio 2010 RC! You can also choose to use WCF 4.0 as one of the services you can build. Based on your feedback we also added a new feature: the ability to automatically generate dependent entities.

Download it today and give us your feedback. Next stop is the full release as part of our 2010 Q1 release of OpenAccess. See the OpenAccess roadmap here.

 image

Technorati Tags:

Bookmark and Share
posted on Monday, February 22, 2010 10:13:20 PM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback

Fifteen years ago I was a programmer on Wall Street. Times were good: it was the boom economy, and Fidelity Investments, where I worked, was flush with cash as the Dow had just hit 4,000 for the first time. (Yes, you read that right.) I had a great office in the (now gone) World Trade Center looking at the river, and I coded client-server applications all day. We were waiting for the conversion from 16-bit to 32-bit with the arrival of Windows 95. Except for arguing with my annoying co-worker Ronald, who wanted to write his own grid (I wanted to buy a grid, so it is funny that 15 years later I work at a component company), life was good. I was a good programmer and I used to dream of being CTO of Fidelity Investments one day.

Then one day one of my buddies and I went to an event for IT professionals hosted by Netscape. It was about the Internet, the browser, and this new Java thing. At the session, they turned my entire 3-tier, client-server world upside down. “Dude, they are talking about going back to the days of Rumba dumb terminals,” my friend said to me. The speaker kept saying that the browser was going to be ubiquitous. (I had to look up ubiquitous when I got home.) A very tall guy from Sun said that “The Network is the Computer.”

I went home that night, canceled my AOL account, and joined pipeline.net, an ISP that allowed you to surf the “real” web with Netscape Navigator 1.0 via dialup. Over the next few weeks I took a class on Java, taught myself HTML, and put up a web page. (Full disclosure: I abused the <Blink> tag. Sorry, I know some of you now think less of me.) Later that year, when Fidelity did not embrace the Internet fast enough for me, I quit and started my own business to focus on “the Internet and databases.”

Somewhere around 1998, the guy from Netscape was proven right: the browser was ubiquitous. Every Super Bowl ad had a “www” at the bottom, as did every magazine ad. HTML ruled the world. It continues to rule the world to this day. It is hard to believe that HTML is only on version 4.

Then came the iPhone. Web pages on the small screen just don’t work well. Enter the world of applications, or apps. So today, instead of web pages, we interact with the sites we like through apps. Use Facebook on the web? Download the app. Need a currency converter, weather notifications, or even news and sports scores? There is an app for those as well. No longer do you need to go to a web page; you use a native application on the device you are holding. This will only proliferate with the iPad and the rumored Google gPad.

I have never been a believer in 100% “The Network is the Computer” or “back to the dumb terminal” browser-only computing. Hardware is too fast and too cheap not to take advantage of local graphics APIs, local memory, and even local storage for caching and backup. Why code to the least common denominator? Why should you have “Google Docs” just in a browser when you can take advantage of the local device for spell check, rendering, and cache? A hybrid approach is the best bet: the ultimate storage is in the cloud, but the application stores a cached version locally and also has a local app that takes advantage of the local API and rendering engine. This is what all the apps on my Android phone do now, from TripIt to Facebook to a simple currency converter (which I can use offline).

HTML and web page dominance are now over. A whole generation of users is growing up using devices and interacting with the Internet only via apps. Apps are our future; we are now living in the App Economy, as BusinessWeek puts it.

Apps are the new HTML.

Technorati Tags:

Bookmark and Share

posted on Thursday, February 11, 2010 4:01:40 AM (Eastern Standard Time, UTC-05:00)  #    Comments [1] Trackback

The content middle men (Hollywood studios, record labels, book publishers, etc) are suspicious of digital content. I predicted last month that they will fight back this year against digital distribution, most notably against Netflix and Amazon. Last month Warner Brothers held Netflix hostage and threatened to withhold its content unless Netflix held back new releases for 30 days. Netflix had no choice but to capitulate.

Inspired by their motion picture brothers’ success with Netflix, book publisher Macmillan recently held Amazon hostage. They threatened to withhold their entire collection of books, print and digital, unless Amazon raised its prices for the Kindle. Amazon resisted, but lost, and yesterday had to capitulate as well. New books from Macmillan will now cost between $12.99 and $14.99 on the Kindle. (FYI, Steve Jobs of Apple said that the iPad’s pricing model will be identical to Amazon’s.) I am now embarrassed that my first book was a Macmillan imprint.

You can’t blame Netflix and Amazon; they had a gun to their heads. They are pioneering a new way to legally consume digital content, so we always knew that the middle men would fight back. While the studios hold all the power today, that will not be the case tomorrow. People who use Netflix never go back to the old model; same with the Kindle. (I say, if it is not on the Kindle, it doesn’t exist.) As Kindles, iPods, iPads, Sony eReaders, etc., all grow in numbers, the studios and publishers will no longer be in a superior position, and the market will remember the barriers they are putting up today. This day is almost here: my 68-year-old uncle now streams movies with Netflix. My parents get the Kindle. My mom has an iPod. An entire generation is now growing up with iTunes and Kindles--my 13-year-old niece will not leave the house without her iPod and Kindle.

Today consumers and innovation lost a battle. But the war is far from over.

Technorati Tags:
posted on Monday, February 01, 2010 6:04:38 AM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback

Last week Telerik released the Data Service Wizard Beta 1. It will automatically create the endpoints for an Astoria, WCF, or RESTful service for you. New in the beta of the Data Service Wizard is the ability to automatically generate the DataServiceKey attribute required to make relationships in Astoria work.

When you use "Astoria" (ADO.NET/WCF) Data Services, by default Astoria tries to map the primary keys in your entities using a naming convention. This mapping is required for your service to work. It works out of the box for the Entity Framework; however, if you are using LINQ to SQL or Telerik OpenAccess, it may not, since some of your tables may have a primary key that does not map to the CLR primitive types that follow the Astoria convention for key mapping. (Order Details in Northwind bombs, for example, since both parts of its composite key are entities and not primitive CLR types.)

There is a very simple fix for this. You have to make your entity a partial class and then decorate it with the DataServiceKey attribute, passing the key field name(s) in the attribute’s constructor. Recently we added support for this in the Data Service Wizard: by default we do this for you by automatically adding a “DalDataServiceKeys.cs“ (or VB) file to your data access layer project.

image

The code is shown below for our DalDataServiceKeys.cs file in the Telerik.OA.DAL project above. You will notice on line 36 we even convert the complex type to a primitive CLR type so Astoria can handle it.

   1:  namespace Telerik.OA.DAL
   2:  {
    using System.Data.Services.Common;

    /// <summary>
    /// Category Class Data Service Key Fix
    /// </summary>
    [DataServiceKey("CategoryID")]
    public partial class Category
    {
    }
    /// <summary>
    /// Customer Class Data Service Key Fix
    /// </summary>
    [DataServiceKey("CustomerID")]
    public partial class Customer
    {
    }
    /// <summary>
    /// Employee Class Data Service Key Fix
    /// </summary>
    [DataServiceKey("EmployeeID")]
    public partial class Employee
    {
    }
    /// <summary>
    /// Order Class Data Service Key Fix
    /// </summary>
    [DataServiceKey("OrderID")]
    public partial class Order
    {
    }
    /// <summary>
    /// OrderDetail Class Data Service Key Fix
    /// </summary>
    [DataServiceKey(new string[] { "OrderID", "ProductID" })]
    public partial class OrderDetail
    {
    }
    /// <summary>
    /// Product Class Data Service Key Fix
    /// </summary>
    [DataServiceKey("ProductID")]
    public partial class Product
    {
    }
    /// <summary>
    /// Region Class Data Service Key Fix
    /// </summary>
    [DataServiceKey("RegionID")]
    public partial class Region
    {
    }
    /// <summary>
    /// Shipper Class Data Service Key Fix
    /// </summary>
    [DataServiceKey("ShipperID")]
    public partial class Shipper
    {
    }
    /// <summary>
    /// Supplier Class Data Service Key Fix
    /// </summary>
    [DataServiceKey("SupplierID")]
    public partial class Supplier
    {
    }
    /// <summary>
    /// Territory Class Data Service Key Fix
    /// </summary>
    [DataServiceKey("TerritoryID")]
    public partial class Territory
    {
    }
}

This will enable you to use Astoria with OpenAccess for all of the tables in your database. I converted my Tech*Ed “Data Access Hacks and Shortcuts” session demo from the Entity Framework to OpenAccess and Astoria in less than 5 minutes. (I will show it and give away the code on my blog in a week or two.)
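With the keys in place, the Astoria endpoint itself stays tiny. Here is a minimal sketch of what such a service class might look like; NorthwindContext is an assumed wrapper class exposing IQueryable&lt;T&gt; properties over the OpenAccess scope (an illustrative name, not generated code), and the wide-open access rule is for demo purposes only:

```csharp
using System.Data.Services;

// a minimal sketch; NorthwindContext is an assumed context class, not wizard output
public class Northwind : DataService<NorthwindContext>
{
    public static void InitializeService(IDataServiceConfiguration config)
    {
        // open all entity sets for read/write while developing the demo;
        // lock this down before going to production
        config.SetEntitySetAccessRule("*", EntitySetRights.All);
    }
}
```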

image

Enjoy!

posted on Saturday, January 23, 2010 6:24:52 AM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback

Telerik is proud to announce that the Data Services Wizard beta was released today. If you have used the wizard while it was a Telerik Labs project, you will notice a ton of new features and improvements. If you are new to the wizard, now may be a good time to give it a try and give us your feedback.

The Data Services Wizard works with Telerik OpenAccess Q3 or higher and Visual Studio 2008. Our next beta, due in February, will support Visual Studio 2010 and WCF 4.0. The wizard will create a service layer for you using “Astoria” 1.0, the latest version of “Astoria”, plain WCF, or the WCF REST or AtomPub project templates. You can get a walkthrough here.

To highlight some of the new features, I will give you some screen shots below.

First we made the navigation and project selection much easier. Now you can select your data access layer and your service project in one simple screen.

image

You asked for it, we delivered it: we are proud to announce Visual Basic .NET support!

image

We have also made the code preview page optional. As you can see, we generate VB code. :)

image

Here is the completed Astoria service:

image

We’ll post more how-tos and videos soon.

Enjoy!

posted on Wednesday, January 20, 2010 4:03:36 AM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback

Thursday, January 21, 2010
Leveling the LINQ to XML Playing Field

You must register at https://www.clicktoattend.com/invitation.aspx?code=144593 in order to be admitted to the building and attend.

Subject: 

This talk covers a wide range of techniques for working with XML in .NET. We’ll start with streaming techniques and XmlDocument, run through a quick introduction of general LINQ mechanics, and then examine how LINQ to XML greatly enhances XML access in .NET 3.5. Learn how to combine streamed XmlReader access with LINQ to XML and see how these old and new technologies integrate with one another in a very elegant way by implementing a simple custom iterator. We’ll work through demos in both C# and VB .NET, and also examine XML literals (an extremely handy VB-only feature) along the way.
Contrary to popular belief, all LINQ providers are not created equal. In fact, LINQ to XML has in one way proven to be the “weakest LINQ” of all. Unlike other major LINQ providers which give you strongly-typed objects, LINQ to XML offers no typed schema definitions (and thus, no type safety) for your code. There isn’t much recourse to this beyond writing code gen tools, using 3rd party solutions, or gambling on the LINQ to XSD provider (an MS incubation project). Lenni will demonstrate how the LINQ to XSD provider fills the gaping schema hole left by LINQ to XML. Attend this session (no prior LINQ knowledge required) and get the full LINQ story for LINQ to XML.

Speaker: 
Leonard Lobel
Leonard Lobel is a principal consultant at twentysix New York, a Microsoft Gold Certified Partner. Programming since 1979, Lenni specializes in Microsoft-based solutions, with experience that spans a variety of business domains, including publishing, financial, wholesale/retail, health care, and e-commerce. Lenni has served as chief architect and lead developer for various organizations, ranging from small shops to high-profile clients. He is also a consultant, trainer, and frequent speaker at local user group meetings, VSLive, SQL PASS, and other industry conferences. Lenni is also lead author of the MS Press book "Programming Microsoft SQL Server 2008".

Date: 
Thursday, January 21, 2010

Time: 
Reception 6:00 PM , Program 6:15 PM

Location:  
Microsoft , 1290 Avenue of the Americas (the AXA building - bet. 51st/52nd Sts.) , 6th floor

Directions:
B/D/F/V to 47th-50th Sts./Rockefeller Ctr
1 to 50th St./Bway
N/R/W to 49th St./7th Ave.

posted on Monday, January 18, 2010 8:31:21 PM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback

The Microsoft developer community has always been a giving group. The amount of free time we give away writing blog posts, answering questions in forums, organizing user groups and code camps, and speaking at events is pretty amazing.

I have also been impressed by all of the MVPs who run 5ks for charity or organize other events for charity. In the past I have organized a few events that helped raise money for charity including cancer research, a school in rural Nepal, and the Indonesian Tsunami relief fund. Each time I have asked my peers in the Microsoft developer community to donate time, money, or even just a simple blog post to raise awareness. Each time I have always been impressed by just how vast and generous the response has been.

I have decided to organize a Facebook group, MVPs for Charity, and will ask all MVPs, user group leaders, active community members, and Microsoft employees to join. In this group, I hope we can all keep each other informed of what we are doing for charity, as well as call on each other for help whenever there is a charity event or need.

After the disaster last week in Haiti, some of my peers in the Microsoft developer community asked me if I was going to organize another auction or fundraising drive. I am giving money to two charities: the Clinton Bush Haiti Fund (at the request of President Obama, Bill Clinton and George W. Bush are raising money) and http://www.yele.org/. I ask you all to choose a fund and donate as well.

posted on Sunday, January 17, 2010 9:46:12 PM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback

Have a startup? Want free software? The Microsoft BizSpark Camp is for you. You have to sign up by Monday. See below for more details.

Via Sanjay Jain

BizSpark Camp

With several successful Microsoft BizSpark Incubation Weeks behind us (Azure Atlanta, Win7 Boston, Win7 Reston, CRM Reston, CRM Boston, Win7 Irvine, Mobility Mountain View), we are pleased to announce the Microsoft BizSpark Camp for Windows Azure in New York, NY during 28–29 January 2010. Based upon your feedback we have made several changes, including offering a cash prize, a compressed time commitment, and much more. We hope you're tapping into the growing BizSpark community.

The current economic downturn is putting many entrepreneurs under increasing pressure, making it critical to find new resources and ways to reduce costs and inefficiencies. The Microsoft BizSpark Camp for Windows Azure is designed to offer the following assistance to entrepreneurs.

· Chance to win cash prize of $5000

· Nomination for BizSpark One (an invitation only program) for high potential startups

· Learn and build new applications in the cloud, or use interoperable services that run on Microsoft infrastructure to extend and enhance your existing applications, with the help of on-site advisors

· Get entrepreneurial coaching from a panel of industry experts

· Generate marketing buzz for your brand

· Create an opportunity to be highlighted at the upcoming launch

We are inviting nominations from BizSpark Startups interested in Windows Azure Platform that target one or more of the following:

The Microsoft BizSpark Camp for Windows Azure will be held at the Microsoft Technology Center, New York, NY from Thu 1/28/2010 to Fri 1/29/2010. The event consists of ½ day of training, 1 day of active prototype/development time, and ½ day for packaging/finishing and reporting out to a panel of judges for various prizes.

This event is a no-fee event (plan your own travel expenses) and each team can bring 3 participants (1 business person and 1–2 developers). It is required to have at least 1 developer as part of your team.

To participate in the BizSpark camp, you must submit your team for nomination to Sanjay or your BizSpark Sponsor. Visit Sanjay’s blog for details on how to submit your nomination by Monday, January 18th, 2010. Nominations will be judged according to the strength of the founding team, originality and creativity of the idea, and ability to leverage Windows Azure Scenarios.

You may want to enroll into Microsoft BizSpark, an exciting new offering that enables software startups to leverage Microsoft development and platform technologies to deliver next generation web and Software + Services applications. For details see here.

posted on Friday, January 15, 2010 4:41:18 AM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback

Last week Telerik released the December CTP of the Data Services Wizard. I posted on my blog a video that shows how to get started, however, for those of you that like walkthroughs better, here is one using a WCF Data Services (Astoria) service.

Getting Started: Mapping Data With OpenAccess

To get started, first download and install the Data Services Wizard. After that, fire up Visual Studio and create a new Class library application named Telerik.DSW.Demo.Astoria.DAL. Run the OpenAccess “Enable Project to use ORM” wizard and then run the Reverse Mapping Wizard and map to your database. For this demo I mapped the Northwind database.

Map as many tables as you like. You can also manually remove the complex properties (the Customer collection from the Order entity, for example) via the mapping wizard if you want to use that entity in your service. (Don’t worry, we will have a solution for this in our January beta!)

Note: the wizard may not have created the ObjectScopeProvider in your DAL project. If you don’t have one, choose Telerik|Open Access|Configuration|Connection Strings from the main menu. Then select the ObjectScopeProvider check box shown in the dialog below and click OK.

image

Next Up: Using the Wizard

The WCF Data Service that the wizard will create has to reside in another (Web) project. So let’s create a Web project named Telerik.DSW.Demo.Astoria.Web.

image

Now it is time to start the wizard. Just select from the main menu Telerik|Data Services Wizard.

image

This will bring up the first page of the wizard, Select DAL Project. Here you select the name of the project that has your OpenAccess entities. Select the DAL project and click Next.

image

The Select Data Service screen is where you have to enter in some important information.

First, put in the namespace; this is the namespace of your web project. (Future versions of the wizard will default to this namespace.) Then enter the name of your service; I chose Northwind as my creative service name. Also select which entities to generate as part of your service; I chose Customer, Order, and OrderDetail. Lastly, select which type of service to create, in this case a WCF Data Service. (Our wizard has not yet caught up with the new name, so you have to select ADO.NET Data Service (Astoria).)

image 

After you click next, you can preview the generated code on the View Output screen.

image

Click Next, and you will be asked to choose which project to add the service to on the Finish screen. Select the web project, Telerik.DSW.Demo.Astoria.Web, and click Next.

image

Now the wizard does a lot of work for you.

First it sets a reference to the WCF Data Services libraries (System.Data.Services and System.Data.Services.Client). Next it sets a reference to the DAL project for you (in our case Telerik.DSW.Demo.Astoria.DAL) and also to the Telerik OpenAccess DLLs (Telerik.OpenAccess and Telerik.OpenAccess.Query). Lastly, the wizard creates the Northwind.cs OpenAccess reference file as well as the actual data service (.svc and .cs) files.

image

The last step is to run the service. Just right click on the SVC file and choose View in Browser from the context menu. You will see your RESTful service come up in the browser. From here you can set up a client to consume the service in ASP.NET, Silverlight, or any other Microsoft or non-Microsoft technology.

image
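Once the service is running, you can also exercise it straight from the browser’s address bar using the standard Astoria URI conventions. For example (the host and port here are whatever the development web server assigned, so treat them as placeholders; entity set names follow the entities you mapped):

```
http://localhost:1234/Northwind.svc/Customers
http://localhost:1234/Northwind.svc/Customers('ALFKI')
http://localhost:1234/Northwind.svc/Customers('ALFKI')/Orders
http://localhost:1234/Northwind.svc/Orders?$top=10&$orderby=OrderDate
```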

Enjoy!

posted on Tuesday, December 15, 2009 5:06:18 AM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback

This week we released our third “alpha” CTP build of our Telerik OpenAccess Data Services Wizard on Telerik Labs. We have received tons of feedback on the tool and look forward to more. The wizard’s development team and the entire OpenAccess team have come up with a roadmap and would like some feedback from the community on it. In the spirit of transparent design, I am going to publish the entire roadmap here for community review. Of course all of this can change blah blah blah. (The lawyers made me say that.)

Beta 1: January 2010

  • Using T4 Code generation instead of text templates
  • VB.NET code generation (we had a ton of requests for this one!) 
  • Add WCF Data Services 1.5 full integration using the new Data Service Provider (of course we need to make changes to OpenAccess core to implement the DSP, however, the DSP is currently not fully documented by Microsoft, so we don’t know how much work this is just yet.)
  • Generate only the primitive types and prevent generation of complex entities for all services. (WCF barfs on complex types and entities.)

Beta 2: February 2010

  • Full support for WCF RIA Services
  • Full support for Azure Services
  • If you use the wizard to build a service, give you the option to automatically create a Silverlight client along with some code generation to consume the service to get you started
  • ASP.NET Dynamic Data support

Release: Telerik Q1 2010 Release

  • Full integration with OpenAccess’ installer. The wizard will no longer be a separate install; it will just install as part of OpenAccess
  • Use of OpenAccess internal APIs
  • Visual Studio 2010 support
  • WCF 4.0 Support. We currently require the WCF 3.5 REST Starter Kit for two of our output modes: REST Collection and AtomPub. .NET 4.0 mode will eliminate this dependency and give you the option to produce WCF 4.0 REST services, etc.

After the Q1 release the roadmap is not super clear. I will assume that core development will slow down a little and that we will then focus mostly on adding support for new service types. I am sure that by next spring, there will already be some new service types to support like the release build of WCF RIA Services, maybe a new CTP of Astoria, etc.

Drop me a line and let me know what you think! Also, the tech support team is adding a separate Data Service Wizard forum in the Telerik support forums, so even though the wizard is technically a “lab” project and not supported, drop by and ping the team with your questions there. We’ll answer all your support questions there as well.

Enjoy and thanks for any feedback you send.

posted on Thursday, December 10, 2009 8:10:33 AM (Eastern Standard Time, UTC-05:00)  #    Comments [1] Trackback

We are proud to announce our December CTP of the Telerik OpenAccess Data Services Wizard (formerly known as the WCF Services Wizard). The Data Services Wizard (DSW) allows you to easily create a CRUD data service layer for your application. The DSW does this by using a data access layer already built with OpenAccess and automatically generating the C# code for your endpoints. The types of endpoints you can create are:

  • WCF Data Services (FKA Astoria and also FKA ADO. NET Data Services)
  • Raw WCF endpoints
  • WCF REST Collection endpoints
  • WCF ATOMPub endpoints

This version of the Data Service Wizard is very robust; we have made lots of changes based on your feedback. Our #1 piece of feedback from customers has been to integrate the DSW with OpenAccess and Visual Studio. I am proud to announce that in this version the DSW is fully integrated with Visual Studio!

We have also started to integrate the DSW into the OpenAccess product itself. No longer is the DSW a standalone product; the wizard is now located under the “Telerik” menu in Visual Studio and looks like the other OpenAccess wizards. Integration will get tighter in the near future, and the wizard will soon just be part of OpenAccess proper, not a separate download.

The basics of the wizard are the same. The DSW will ask you which project your OpenAccess entities are located in and then which entities you will expose in your endpoint.

ScreenShot1

After you choose the entities you want to expose and what type of service you want to build (WCF, Astoria, etc), you can preview the code that is generated.

ScreenShot2

A major improvement with the December CTP is that you can now automatically insert these service endpoint files into a project, eliminating the manual step of copying them on over. This will make building the services so much easier!

image011

Go grab the wizard here. It is still considered a Telerik Labs project, but we will move it to a fully supported beta in January with our next build. On the docket for the next build (early January) are:

  1. Using T4 Code generation instead of text templates
  2. VB.NET code generation
  3. Add WCF Data Services 1.5 full integration (using the new Data Service Provider)
  4. Prevent the generation of complex entities for all services

Future versions of the product will be fully part of OpenAccess and will also support RIA Services, Azure Services, and Visual Studio 2010 and WCF 4.0. Download today and send us feedback!

posted on Monday, December 07, 2009 9:39:31 AM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback

Telerik Reporting is a great reporting package. If you are using it, you may be happy to know that you can use Telerik OpenAccess as a data source. Let’s take a look at how to use it with a SQL Azure database as a back end.

Getting Started

First you need to map your SQL Azure tables to OpenAccess entities. I demonstrated this before on my blog; if you have not used OpenAccess and SQL Azure yet, read this post. (Don’t worry, I’ll wait.) What I did for this demo is create a library project called Telerik.Reporting.DAL and map all of the Northwind tables in my SQL Azure database to OpenAccess entities in a class library.

Next I created another library project, called Telerik.Reporting.RptLib, to contain the reports. This project will hold all of your reports. It is good to create all of the reports inside one isolated project so you can reuse them in an ASP.NET solution as well as in other projects, like Silverlight. After creating this project, I ran the OpenAccess enable project wizard to get all of the proper references set up to use OpenAccess. In addition, I set a reference to the Telerik.Reporting.DAL project so we can see the entities we just mapped.

Creating a Report

Now it is time to create a report. Just right click on the project, select Add|New Item, and choose the Telerik Report Q3 2009 template from the Reporting category.

image

The Telerik Reporting Wizard will come up. Select the option to build a new report and a new data source. For a data source type, select Business Object from the options available.

image

Next drill down into the Telerik.Reporting.DAL namespace and then select the Customer entity.

image

The next few pages of the wizard ask you how to lay out your report and which data fields to use. The wizard is self-explanatory, so I will not go into detail here; just choose any style that you like and show some customer fields on the report. I chose to show Customer ID and Company Name in a standard report. My report looks pretty basic in design view:

image

 

Wiring up the data

We are almost there. The next step is to write a little LINQ code behind your report to wire up the OpenAccess data source with your report. Right click on the design canvas of the report and select “View Code.” That will take you to the code behind. Create a function like the one below that uses the OpenAccess LINQ implementation and returns all of the customers. (We could write more complex LINQ queries here if we wanted to.)

   1:  public static List<Customer> GetCustomers()
   2:  {
   3:      //LINQ Query to fetch all of the Customers via OpenAccess            
   4:      //data context (IObjectScope)
   5:      IObjectScope dat = ObjectScopeProvider1.GetNewObjectScope();
   6:      //Get an IList of Customers
   7:      List<Customer> result = (from c in dat.Extent<Customer>() select c).ToList();
   8:      return result;
   9:  }

The function above returns a List of Customers. The IObjectScope in line 5 is the data context and the LINQ statement is in line 7.

But we have one problem. While we set all of the proper references, we are missing a bunch of using statements. You can manually add them here:

using System.Collections.Generic;
using System.Data;
using System.Linq;
using Telerik.OpenAccess;
using Telerik.Reporting.DAL;

 

Or you can use the new Telerik JustCode tool to organize and fix your using statements:

image

You need to add one more line of code to your report. Add the following to the report’s constructor; it will wire up your function as the data source of the report:

DataSource = GetCustomers();
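In context, the constructor ends up looking something like this (a sketch; the InitializeComponent call is the designer-generated layout setup that the report template already created for you):

```csharp
public partial class CustomerRpt : Telerik.Reporting.Report
{
    public CustomerRpt()
    {
        // designer-generated setup for the report layout
        InitializeComponent();
        // wire the LINQ results up as the report's data source
        DataSource = GetCustomers();
    }
}
```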

The ReportViewer Control

Now it is time to view the report. To do that, create a new ASP.NET application called Telerik.Reporting.Web. Set a reference to the Telerik.Reporting.RptLib project where the report you just created is located. After that, drag a Telerik ReportViewer control from the toolbox onto the ASP.NET web form.

image

All you need to do now is add one line of code to the page load event (or a button click event, etc.) to load the CustomerRpt we just built into the ReportViewer.

if (!IsPostBack) { ReportViewer1.Report = new CustomerRpt(); }

Next step is to run it and see your report in action!

image

Enjoy!

posted on Tuesday, December 01, 2009 9:24:03 AM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback

Last week at PDC, Telerik launched a new product named JustCode. It was a stunning success since we had a lot of people show up for the launch and we gave away 1,000 free licenses at the event. Back in March at a planning meeting at Telerik HQ, we decided that we would embark on an Apple style “secret” strategy and go for the most buzz at launch.

Back at that meeting in March we decided that secrecy, which goes against a lot of our values at Telerik, would be required for the most buzz at the launch. But keeping a secret is not easy in the world of Twitter, blogs, and Facebook. Back in March only the development team, their closest buddies, and senior management knew about JustCode. But that had to change as we started to dogfood JustCode a month or so later at Telerik. We were certain that if we communicated the goal of keeping it secret, there would not be any leaks. At the same time we decided to extend JustCode outside Telerik to a handful of vendors (super thanks to Imaginets for not only building the Work Item Manager and Dashboard, but doing it with clunky pre-alphas of JustCode). A little later on, we also gave a super early look to the Telerik MVPs and DevReach speakers. Nobody let the news out.

Nobody that is, except me- one of the architects of the “secret” plan.

The first boo-boo I made was mentioning it to a fellow Telerik employee back in March, just after that meeting. Oops, but no big deal; it was at least in the family.

The next snafu was in Durban, South Africa, back in August. I was speaking at TechEd South Africa and used my non-presenter VPC for one of my demos, since it had the particular Silverlight 3.0 bits on it and my presentation VPC had only Silverlight 2.0. (Long story, but a different demo needed SL 2.0 at the time; remember, SL 3.0 had only shipped the week before.) My non-demo VPC of course had an early JustCode alpha on it since we were dogfooding it at Telerik. Most of you know me and know that I love to write a lot of code in my sessions. Well, I had to do some refactoring in one of my talks and boom, without thinking, used JustCode on stage. Big oops! Luckily the handful of folks who came up to me afterwards to ask “do you have a super fast beta of Resharper on your machine?” were sworn to secrecy and kept their word. (The free license I promised them also didn’t hurt.)

Next came the awesome video product teaser that generated a lot of buzz. If you didn’t see it, watch it here.

Next came Basta in Germany in September. Our marketing team printed up flyers with all of our products on them. Somehow JustCode made it onto the flyer! After a few frantic calls back and forth, the team at Basta decided that we would test the waters and give the flyers out. The first person who noticed it asked if we supported F#. Everyone who came to the booth was sworn to secrecy. After Basta the flyers were destroyed. (Look for a few of them on eBay; they are now collector’s items.)

Last was the day of the launch itself. I was wearing a JustCode tee shirt well before the launch. I was filming an MSDN video and also speaking at my BOF talk, so Stefan decided to put tape over the “Code” on my tee shirt to generate some buzz. It worked, but I took the tape off and put it back on about an hour before the launch to reposition it for the unveiling, and the C and E were now showing, so people were able to guess.

Despite my attempts to sabotage our well laid plans, the launch went great. Note to Telerik: next time don’t tell me the secret product launch!

posted on Monday, November 23, 2009 3:22:34 AM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback

Today at the PDC, Microsoft announced a new SQL Azure developer tool that is still pre-alpha: Code Name Houston.  Houston is a web based developer tool for SQL Azure databases. Built in Silverlight and hooked into the SQL Azure developer portal, Houston allows you to rapidly create tables, views, procedures, add data, delete data, etc. It kinda reminds me of Microsoft Access, but in a good way. This tool is not for admin stuff like adding users, just rapid database development in the cloud.

Houston is not available yet, but was demoed at PDC. Building a table was done very fast. It was not demoed, but I did see a button for import and export of data. When asked about general availability, no dates were given but calendar 2010 was indicated as the target. Can’t wait…

posted on Thursday, November 19, 2009 6:24:28 PM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback

Following the lead of the Department of Justice (DOJ) in Washington, DC, the Attorney General of the State of New York, Andrew Cuomo, has brought a lawsuit against Intel, calling them a monopolist. While Intel has recently settled legal claims with rival AMD (mostly due to patent disputes as well as some anti-competition charges), Cuomo is suing Intel on the grounds that they are a monopoly and have stifled competition.

While Intel’s market share is huge, over 80% of chips sold are “Intel Inside”, the free market has regulated the industry very nicely and led to innovation. Intel and its cheap, low-powered Atom processor started the netbook revolution (I now see as many netbooks as Macs in Starbucks). Look at the progress with multi-core and x64 architecture. (Actually, three years ago I thought AMD’s x64 chips were better since their high-end chips had more cores at the time. I remember buying an AMD-based 2x quad-core x64 SQL Server machine in that time frame and being impressed that AMD’s multicore server chips were so much better.)

Over ten years ago, I lobbied the US Congress against the DOJ’s case against Microsoft on similar grounds. At the time, did Microsoft do some bad “evil empire” things that they were able to do because they were so big? Yes. Enough to warrant an anti-trust legal battle? No. The free market was able to sort it out on its own, far better than the legal remedies brought by the DOJ. When Microsoft got all big and lazy with dominant Internet Explorer market share, boom, Firefox came out of nowhere and handed Microsoft its lunch. Now Microsoft is starting to invest and innovate in the browser space, but has to deal with not only Firefox, but Chrome and Safari. The free market did loads more to spur innovation and regulate Microsoft than the anti-trust trial ever dreamed of doing! Same with Intel: allow the free market to decide, not lawyers.

Fellow New Yorker and good friend Andrew Brust wrote an opinion here. Andrew is a registered Democrat and I am a registered Republican. We both agree on this issue. The last time we agreed on a political issue, DOS was the primary operating system in use.

Let the free market regulate the industry and don’t let the government stifle innovation. Sign a petition here.

posted on Sunday, November 15, 2009 4:38:07 AM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback

I have a simple demo application that uses ADO.NET Data Services as a data service back end for a Silverlight application. My ADO.NET Data Service uses the Entity Framework to map the Northwind database tables Customers, Orders, and Order Details. Once the Silverlight application sets a service reference to the ADO.NET Data Service, you can use the client side LINQ libraries to build your application. My application looks like this: it has a drop down filled with customers, a grid with Orders, and a grid with Order Details. As you click on each one, it filters the rest.

image

The LINQ statement for the drop down looks something like this:

   1:  //this uses the LINQ to REST proxy (servicereference1)
   2:  NorthwindEntities dat = new NorthwindEntities(
   3:      new Uri("Northwind.svc", UriKind.Relative));
   4:   
   5:  //linq query to get customers in ComboBox
   6:  var customers = from c in dat.Customers
   7:                  orderby c.CustomerID
   8:                  select c;

 

Pretty basic LINQ stuff. What I would like to do next is bind my drop down combobox to customers. There is one catch: since we are in Silverlight, this processing has to be done asynchronously, so the data binding code has to live elsewhere.

There are a few ways to do this. The most straightforward is to set up a delegate and catch an event; another is to use a code block and catch the event right in the same method.

While both of these solutions are fine, I don’t like them. They look funny and pollute my data access code with tons of async communication plumbing. And for each LINQ statement, we end up with a lot of repetitive, similar looking code. Every bone in my body wants to make that generic and write it only once.
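For context, here is a sketch of the delegate-and-event pattern described above, using the standard DataServiceQuery async API (the handler name and the Dispatcher marshaling back to the UI thread are my illustrative choices, not part of the demo):

```csharp
// kick off the query asynchronously, passing the query itself as state
var query = (DataServiceQuery<Customers>)customers;
query.BeginExecute(OnCustomersFetched, query);

// ...then finish in a separate callback, marshaling back to the UI thread
void OnCustomersFetched(IAsyncResult ar)
{
    var finished = (DataServiceQuery<Customers>)ar.AsyncState;
    List<Customers> results = finished.EndExecute(ar).ToList();
    Dispatcher.BeginInvoke(() => CustomerCbo.ItemsSource = results);
}
```

This pair has to be repeated for every query in the page, which is exactly the repetition the AsyncLINQManager below factors out.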

Enter the AsyncLINQManager class I wrote. Forget about the details of this class for now; I will list it below in full. For now let’s show how to use a LINQ statement with the helper. First you have to create an instance of the AsyncLINQManager and then register an event. (No getting around the events!) You can do this in the page load handler:

//ref to the linq manager
alm = new AsyncLINQManager();
//register an event so we can do the databinding
alm.OnEntityFetched += Page_OnEntityFetched;

Now your LINQ statement needs one more line of code. Here is the same LINQ statement from above, passing customers to the AsyncLINQManager:

   1:  //this uses the LINQ to REST proxy (servicereference1)
   2:  NorthwindEntities dat = new NorthwindEntities(
   3:      new Uri("Northwind.svc", UriKind.Relative));
   4:   
   5:  //linq query to get customers in ComboBox
   6:  var customers = from c in dat.Customers
   7:                  orderby c.CustomerID
   8:                  select c;
   9:  //call async functions for the linq query
  10:  alm.LinqAsync(customers);

Line 10 is the only new line of code. Now the LINQ manager will take care of all of the async processing for us and we just have to put our data binding code in Page_OnEntityFetched() shown here:

   1:  //this event handler will do the actual databinding
   2:  void Page_OnEntityFetched(EntityEventArgument args)
   3:  {
   4:      switch (args.TypeName) //we get this info from the event
   5:      {
   6:          case "Customers":
   7:              CustomerCbo.ItemsSource = args.returnedList;
   8:              break;
   9:          case "Orders":
  10:              dg.ItemsSource=args.returnedList;
  11:              break;
  12:          case "Order_Details":
  13:               dg_Details.ItemsSource = args.returnedList;
  14:              break;
  15:   
  16:      }
  17:  }

 

You will notice that we do all of our data binding here, for all of our LINQ statements. This is the value of the AsyncLINQManager: all of my binding code is now in the same place. (I am sure that there will be some who disagree, but hey, build a better AsyncLINQManager and blog about it and I will link to it. :) )

Now let’s take a look at the code to query the orders. Notice that it calls the same LINQ manager and then comes back to Page_OnEntityFetched() to do the binding:

   1:  //orders
   2:  private void AsyncBindOrdersCbo(string customerid)
   3:  {
   4:   
   5:  //this uses the LINQ to REST proxy (servicereference1)
   6:  NorthwindEntities dat = new NorthwindEntities(
   7:      new Uri("Northwind.svc", UriKind.Relative));
   8:   
   9:  //linq query to filter the Orders in the grid
  10:  var orders = from o in dat.Orders
  11:               where o.Customers.CustomerID == customerid
  12:               orderby o.OrderDate
  13:               select o;
  14:   
  15:      alm.LinqAsync(orders);
  16:   
  17:  }

What I really like is that you can write a simple LINQ statement like you are used to, pass the result to the AsyncLINQManager for processing, and then have one event handler take care of all of your data binding. To me, the code is cleaner, and your developers can write LINQ statements almost like normal (minus that one extra line of code) and forget about all of the async plumbing.

The code for the AsyncLINQManager is here. All it is doing is sending out the async request, catching it, and then returning an IList and object name in the event args.

   1:  using System;
   2:  using System.Linq;//for IQueryable
   3:  using System.Data.Services.Client;//for DataServiceQuery
   4:  using System.Collections;//for ILIST
   5:   
   6:  namespace LinqUtilities
   7:  {
   8:      //ASYNC Linq stuff
   9:      public class AsyncLINQManager
  10:      {
  11:          //see the EntityEventArgument class below for the event args
  12:          public delegate void EntityFetchCompleted(EntityEventArgument args);
  13:          //developer must register this event in the UI code to catch the IList
  14:          public event EntityFetchCompleted OnEntityFetched;
  15:   
  16:          //pass in linq query object for execution
  17:          public void LinqAsync<T>(IQueryable<T> qry)
  18:          {
  19:              //generic async call to start the linq query
  20:              DataServiceQuery<T> dsq = (DataServiceQuery<T>)qry;
  21:              //Call the code async and assign OnFetchComplete to handle the result
  22:              dsq.BeginExecute(OnFetchComplete<T>, dsq);
  23:           }
  24:   
  25:          //method to handle the async result
  26:          void OnFetchComplete<T>(IAsyncResult result)
  27:          {
  28:              //catch the status of the async call
  29:              DataServiceQuery<T> dsq =(DataServiceQuery<T>)result.AsyncState;
  30:              //if a handler is registered, stuff the data into an untyped List
  31:              if (OnEntityFetched != null)
  32:              {
  33:                  //delegate for event
  34:                  OnEntityFetched(new EntityEventArgument
  35:                                 { returnedList = dsq.EndExecute(result).ToList() });
  36:              }
  37:          }
  38:   
  39:      }
  40:      
  41:      
  42:      //event args class for the event on the client to 
  43:      //see what linq query they are handling
  44:      public class EntityEventArgument : EventArgs
  45:      {
  46:          public IList returnedList { get; set; }
  47:          public string TypeName
  48:          {
  49:              get { return returnedList.Count == 0 ? string.Empty : returnedList[0].GetType().Name; }
  50:          }
  51:   
  52:      }
  53:  }

You can download the sample code and the AsyncLINQManager code from my “DataAccess Hacks and Shortcuts” session demos here.

Enjoy!

posted on Tuesday, November 03, 2009 4:42:33 AM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback

Last month I showed how to manually migrate schema and data from SQL Server 2008 to SQL Azure. I also mentioned the SQL Azure Migration Wizard developed by George Huey in that post. Since then George has updated the tool (currently on 0.2.7) and it now has some great features that make it far more compelling than creating and fixing scripts generated by SQL Server.

To get started, the tool will allow you to select a SQL Server database and then ask you to select which objects to migrate.

image

It will then do an analysis of your database and attempt to fix the problems. For example, if you have a table with a TEXT column, it will automatically make it a varchar(max) and will also unbind XML schemas and UDTs. It will remove keywords (like PAD_INDEX as shown below) not supported by SQL Azure.
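As a hypothetical before-and-after sketch (table and column names made up), the wizard's rewrite amounts to something like this:

```sql
-- Before: valid SQL Server 2008 DDL, but TEXT and PAD_INDEX are rejected by SQL Azure
CREATE TABLE dbo.Notes(
    NoteID int NOT NULL,
    Body text NULL,
    CONSTRAINT PK_Notes PRIMARY KEY CLUSTERED (NoteID) WITH (PAD_INDEX = OFF)
)

-- After: roughly what the wizard generates
CREATE TABLE dbo.Notes(
    NoteID int NOT NULL,
    Body varchar(max) NULL,
    CONSTRAINT PK_Notes PRIMARY KEY CLUSTERED (NoteID)
)
```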

image

After you connect to SQL Azure, the tool makes a script for you. You have the option to run the script as is, save it to disk, or modify it in the window and run it in real time. The scripts have good comments telling you about potential problems like XML data types, etc. After you adjust the script, you can execute it, and the tool will let you know whether it succeeded or failed.

image

Now that you have migrated the schema, the hard part is over. Next you can migrate the data. The easiest way to do this so far is via Red Gate SQL Data Compare, as I showed last time, or by scripting the data with the SQL Server script wizard. To get there, right-click on a database in SQL Server 2008’s SQL Server Management Studio and select Tasks|Generate Scripts. After you select your database you are brought to the Choose Script Options dialog. Turn off all of the options except “Script Data” and on the next screen choose only the tables you moved using the SQL Azure Migration Wizard.

image

After you choose the tables to move over, you can then choose to send the script to a new window. You will notice that the only thing that was scripted was the INSERT INTO statements.

image

Now select the tab that contains your query and choose Query|Connection|Change Connection from the main menu. Enter the login credentials for your SQL Azure account, then click on the Options tab and enter the name of the database you are connecting to. After you connect, you can attempt to run your query. Just make sure your script is compatible with SQL Azure before you run it: if you removed any XML data types, etc., using the migration tool, you will have to do the same to your script. There is always some cleanup to do, but it is pretty straightforward. For really large databases, you may want to highlight sections of the script and run it a table or so at a time to prevent timeouts against SQL Azure. You will also have to make sure the script sequences the INSERTs to match the foreign key constraints. SQL Server is smart enough to put Orders before Order Details, but it does not do this for every object.
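For example, with the Northwind foreign keys, the INSERTs must run parent-first. This illustrative fragment (made-up key values, abbreviated column lists) shows the required sequence:

```sql
-- Customers has no dependencies, so it goes first
INSERT INTO Customers (CustomerID, CompanyName) VALUES ('ALFKI', 'Alfreds Futterkiste');
-- Orders references Customers.CustomerID
INSERT INTO Orders (OrderID, CustomerID) VALUES (10643, 'ALFKI');
-- [Order Details] references Orders.OrderID, so it must come last
INSERT INTO [Order Details] (OrderID, ProductID, UnitPrice, Quantity)
VALUES (10643, 28, 45.60, 15);
```

Run the child INSERTs before the parents and the script fails with a foreign key constraint violation.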

image

When you are done you can view your database using the SQL Azure Manager I talked about yesterday.

image

Enjoy!

posted on Thursday, October 08, 2009 5:52:46 AM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] Trackback

I have just completed an amazing 8-day journey to the village of Chyangba, in a remote mountain area of Nepal. Chyangba is a village of about 55 homes inhabited by the ethnic group called Sherpa. Most of you will know Sherpas as the folks who guide climbers up Mt. Everest, and several famous Sherpas come from Chyangba. My friend and guide in 2003 and 2008, Ngima Sherpa, comes from Chyangba, and I was going to visit him. In addition, I was working with a charity called Education Elevated to help fix up a school and set up a library. While in Chyangba, I worked on the school and library for 4 days.

Getting to Chyangba

Getting to Chyangba is not easy. We had to fly on a 16-seat Twin Otter from Kathmandu to Phaplu. Phaplu has an “airport” consisting of a dirt strip and a dude with binoculars and a radio. After landing in Phaplu we trekked a few hours (mostly in the dark) to our camp.

IMG_0329

Our camp was visited by some local kids in the morning and had great views of the valley. We then trekked the whole next day and finally arrived in Chyangba.

Visiting the School and Library

Upon arrival, all the school children were lined up waiting for us. We then walked around the school and library for a few hours and took hundreds of photos. Imagine hiking for 7 hours and going directly into a photo shoot. :)

IMG_0372

The kids are super cute.

IMG_0490

Project Planning

We started to size up the job ahead of us. Here is a photo of the empty room we will convert into the school library.

IMG_0376

Being geeks, we decided to be agile and use the Scrum methodology: we would re-assess the situation twice a day and see how far we got. We took stock of what furniture we had in the building (school desks, etc.) and, since we are MVP geeks, we decided to use a GUID system (globally unique identifier, for Tanya and my mom, the only two non-techies I know who read my blog). We put the benches into four categories: good enough; reinforce; take apart and put back together with some new wood; and ask Roger (the Scrum master). Here is a photo of a school bench with GUID #8.

IMG_0418

Getting to Work

Roger the carpenter and general contractor (and scrum master) worked wonders. We computer geeks just hung around and he told us what to do. Before I knew it I was taking apart school chairs, benches, desks, etc, and rebuilding them. I got pretty good with a hammer.

IMG_0472

We continued for a few days, constantly reassessing. I did not think we could fix all of the furniture in the four days we had as well as build a library (shelves, tables, and desks), but Roger kept us on target. We had electricity from 9:30am to about 2pm each day, so he was able to use a power saw. Awesome. But the kids were attracted to it like moths to a light, so I had to distract them by balancing wood on my head. As the week progressed I got better and was able to balance an entire bench on my head while standing on one foot (in the Dancing Shiva position, for you yogis).

IMG_3193

The kids started to imitate me.

IMG_0487

Sprints 6 and 7

We did two sprints a day. Sprint 6 was on day 3 and we (mostly Roger) installed the shelves. We brought about 100 lbs of books and started to stack the shelves. After that some of us read to the kids and helped them practice counting in English.

IMG_0465

Sprint 7 was awesome. We gave out all of the school uniforms to the kids. (In Nepal you can’t go to school if you don’t have a uniform.)

IMG_0496

After we gave out the uniforms, the kids all ran to change and then did a little dance for us. Afterward we celebrated, and I taught some of the kids the fist bump.

IMG_0504

Leaving :(

After spending the last few hours with the kids, helping them read and count, we departed for a final meal at our campsite. The Sherpas cooked us a chocolate cake; I have no idea how they did that over a campfire. We then went to one of the locals’ houses for a party and drank the local drinks: Chang and Roxi. They are evil drinks. Apparently it is a Sherpa custom to refill your drink immediately after you take a sip. I have no idea how much Chang I drank, but I think I can still feel it. We then turned the house into a Sherpa disco and danced the night away to local music. (Sherpas can get down.)

The next day we had a final going-away ceremony with the whole village, and they put tons of Buddhist khata scarves and flowers on us. Since we were mostly going downhill, we trekked the whole way back to Phaplu in one day. We treated ourselves to $5-a-night hotel rooms and flew back to Kathmandu the next day.

IMG_0539

This was a great experience: we spent a week in a local village, a village not even on the map, and made a difference. For geeks, we did the best we could, which was far more than I thought we could do. I hope the tech community can donate a lot in small amounts; it only takes $10 to buy a school uniform or a few books so a kid can go to school. You can donate here. :)

posted on Monday, October 05, 2009 8:30:25 AM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] Trackback

I like the idea of a database in the cloud. We have sort of been doing it for years, connecting to a SQL Server database over port 1433. With SQL Azure, we take this one level further and let Azure worry about the hardware, so we don’t have to worry about scaling out the bandwidth, the RAID arrays, etc.

Last week I showed the basics on how to migrate data from a SQL Server 2008 database to SQL Azure. Yesterday I showed using SQL Azure with Telerik OpenAccess  and the WCF REST Toolkit. Today I will show how to build a simple REST based application using ADO.NET Data Services (aka Astoria.)

To get started we need a SQL Azure database. See my previous blog post about the CTP and getting data into your SQL Azure database. Once you have a SQL Azure database all set up let’s get to work.

The next thing we need to do is create a new Web project and create our Entity Framework data model. I’ll go ahead and create an Entity Data Model against my local SQL Server 2008 Northwind database, which has the same schema as my SQL Azure one. This is because the Entity Framework designer chokes on SQL Azure (or at least my version does!). I will map:

  • Customers
  • Orders
  • Order Details

Now that my EDM is all set up, I will go in and change the connection string in my web.config to use SQL Azure. Here is my new connection string:

<add name="NorthwindEntities" 
connectionString="metadata=res://*/Northwind.csdl
|res://*/Northwind.ssdl|
res://*/Northwind.msl;
provider=System.Data.SqlClient;
provider connection string=&quot;Data Source=tcp:tpzlfbclx123.ctp.database.windows.net;
Initial Catalog=Northwind_Lite;
Integrated Security=False;UID=Stevef;PWD=GoMets!;
MultipleActiveResultSets=False&quot;"
providerName="System.Data.EntityClient"/>

You have to manipulate the EF connection string: put the SQL Azure server name from your CTP in the “Data Source”, put the database name in the Initial Catalog, turn off integrated security, and put in the UID/PWD from the CTP. I set MARS to false since SQL Azure does not support MARS.

Now let’s create the Astoria Service. Add a new “ADO.NET Data Service” to your project. I named mine NwindRestService.

image

Astoria couldn’t make it any easier to get the service up and running. All you need to do is set the name of your EDM on line 2 (in our case NorthwindEntities) and set the access permissions on line 8. I just uncommented the generated line and put in an “*” so all of my entities inherit the AllRead access rule. With that we are good to go!

   1:  //Enter the name of your EDM (NorthwindEntities)
   2:  public class NwindRestService : DataService<NorthwindEntities>
   3:  {
   4:      public static void InitializeService(IDataServiceConfiguration config)
   5:      {
   6:          //Must set up the AccessRule, here I allow read only access
   7:          //to all entities. I can also do this one by one.
   8:          config.SetEntitySetAccessRule("*", EntitySetRights.AllRead);
   9:      }
  10:  }

 

For a reality check, let’s run the service in the browser. Being RESTful, Astoria will let you browse all of the Customers by typing in this URL:

http://localhost:1075/NwindRestService.svc/Customers

We should see this:

image

I also edited my first row in Northwind (ALFKI) to say “SQL Azure” at the end of the customer name so I know I am working with the SQL Azure and did not mess up my connection strings. That is it, you now have a RESTful service that is hooked up to SQL Azure.
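Because Astoria makes everything addressable by URI, you can also drill into individual entities and related data, or filter and sort, right from the browser using the standard ADO.NET Data Services query conventions (the port number is just whatever Visual Studio assigned my project):

```
http://localhost:1075/NwindRestService.svc/Customers('ALFKI')          -- one customer by key
http://localhost:1075/NwindRestService.svc/Customers('ALFKI')/Orders   -- that customer's orders
http://localhost:1075/NwindRestService.svc/Customers?$filter=Country eq 'Germany'
http://localhost:1075/NwindRestService.svc/Customers?$orderby=CustomerID&$top=10
```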

The hard part is over. Now let’s build a simple ASP.NET client to consume the RESTful data.

First you have to set a reference to your service. This will give you a proxy to write some LINQ (to Astoria) code against.

image

 

Next we will create a simple ASP.NET GridView control and bind some data to it on the page load event. (Sure we can do a lot more, but this is just to get our feet wet with SQL Azure.)

   1:  //the address of our service
   2:  Uri url = new Uri("http://localhost:1075/NwindRestService.svc/", UriKind.Absolute);
   3:  //a ref to our proxy 
   4:  ServiceReference1.NorthwindEntities dat = 
   5:          new ServiceReference1.NorthwindEntities(url);
   6:   
   7:  //link statement to get the data, can use WHERE, Orderby, etc
   8:  var customers =
   9:      from c in dat.Customers
  10:      where c.Country == "Germany"
  11:      orderby c.CustomerID
  12:      select c;
  13:   
  14:  //bind to the grid
  15:  GridView1.DataSource = customers;
  16:  GridView1.DataBind();
 

This is pretty basic code. Line 2 is a Uri reference to our service (which is technically in the same project, but it could, and should, be in a different project). Lines 4-5 set up a reference to the proxy we created; this is also our data context, representing the Astoria service. Lines 8-12 are a simple LINQ to Astoria statement to filter for the German customers (looks like LINQ to SQL? That is the point!), and lines 15-16 are where we bind to the ASP.NET GridView. Our GridView looks like this; notice the ALFKI record says it is coming from SQL Azure:

image

That is all there is to it. Enjoy.

posted on Thursday, September 10, 2009 5:39:59 AM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] Trackback

Feeds are part of what powers Web 2.0. You can use RSS or Atom to syndicate your content and allow readers to subscribe to it; feeds power Twitter and Facebook as well as CNN and the New York Times. Atom is a popular alternative to RSS, and just about every blog and “RSS” reader supports it. When you talk about Atom, you are actually talking about two things: the Atom Syndication Format, an XML language for feed definitions, and the Atom Publishing Protocol (AtomPub), a very simple HTTP protocol for creating and updating feed-based content.
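For reference, a minimal feed in the Atom Syndication Format looks like this (the values are made up; `<id>`, `<title>`, and `<updated>` are required on both the feed and each entry):

```xml
<?xml version="1.0" encoding="utf-8"?>
<feed xmlns="http://www.w3.org/2005/Atom">
  <title>Example Feed</title>
  <id>urn:uuid:60a76c80-d399-11d9-b93c-0003939e0af6</id>
  <updated>2009-09-08T05:09:58Z</updated>
  <entry>
    <title>First Post</title>
    <id>urn:uuid:1225c695-cfb8-4ebb-aaaa-80da344efa6a</id>
    <updated>2009-09-08T05:09:58Z</updated>
    <summary>Hello from Atom.</summary>
  </entry>
</feed>
```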

The WCF REST Starter Kit allows you to create services that produce an ATOM feed or expose your data as a service via the Atom Publishing Protocol, and it gives you a Visual Studio template to get you started.

You can use Telerik OpenAccess as a data source for the collections of your ATOMPub service. To do so you have to wire up some code in your svc file back to your OpenAccess entities. This can be accomplished by using the OpenAccess WCF Wizard.

Creating the ATOMPub Service and a Silverlight Front End

To get started, let’s create four projects in Visual Studio, one data access layer using OpenAccess, one AtomPub service project using the new WCF REST Toolkit Visual Studio template, and a Silverlight Web/Client.

image

Then you can use the Telerik OpenAccess WCF Wizard to automatically create the SVC and CS files for an ATOMPub service (the arrow above). Once you have that, and the service project has a reference to the DAL project so it can talk to it, view the service in the browser for a reality check. The URI follows the usual REST format of http:// server name / service name / resource name, for example:

http://localhost:54669/NorthwindAtomPubService.svc/Customers will return a list of all the customers, and if you add a /CustomerID to the end of the URL, like this: http://localhost:54669/NorthwindAtomPubService.svc/Customers/ALFKI, you will bring up one individual customer.

image

Now let’s consume this from a Silverlight application. Pretty easy stuff, but first we have to create a XAML grid:

<data:DataGrid x:Name="dataGridCustomers" 
               AutoGenerateColumns="False" ItemsSource="{Binding}">
    <data:DataGrid.Columns>
        <data:DataGridTextColumn 
            Binding="{Binding Path=CompanyName}" Header="Company Name">
        </data:DataGridTextColumn>
        <data:DataGridTextColumn 
            Binding="{Binding Path=ContactName}" Header="Contact Name">
       </data:DataGridTextColumn>
    </data:DataGrid.Columns>
</data:DataGrid>

After you build your XAML grid, add a service reference to the AtomPub service and start writing some code. Just like before, I will have a LoadData() method to fill the grid. Of course, being Silverlight, it has to be asynchronous. Here is the code.

   1:  private void LoadData()
   2:  {
   3:  //the URL of our ATOM Pub service
   4:  string uri = "http://localhost:54669/NorthwindAtomPubService.svc/Customers";
   5:  //set up the web request
   6:  HttpWebRequest request = HttpWebRequest.Create(
   7:      new Uri(uri)) as HttpWebRequest;
   8:  //HTTP GET (REST uses the standard HTTP requests)
   9:  request.Method = "GET";
  10:  //begin the async call in a code block
  11:  request.BeginGetResponse(ar =>
  12:      {
  13:          Dispatcher.BeginInvoke(() =>
  14:              {
  15:                  //catch the HTTPWebRequest
  16:                  HttpWebResponse response = 
  17:                      request.EndGetResponse(ar) as HttpWebResponse;
  18:                  //get the customers back
  19:                  var result = 
  20:                      Customer.GetCustomersFromAtom20Stream(response.GetResponseStream());
  21:   
  22:                  //stuff the customers into a LIST
  23:                  List<Customer> customers = new List<Customer>();
  24:   
  25:                  foreach (var customer in result)
  26:                  {
  27:                      customers.Add(customer);
  28:                  }
  29:                  //bind the LIST to the Silverlight DataGrid
  30:                  dataGridCustomers.DataContext = customers;
  31:              });
  32:      }, null);
  33:  }

 

This code is easier than it looks. We start off with an HttpWebRequest for the service (lines 6-7) using an HTTP GET (line 9) and a code block to handle the async call (starting on line 11). This code block works like a callback method: inside it we catch the HttpWebResponse asynchronously and get the results into the implicitly typed local variable on lines 19-20. After that it is just basic List stuff, filling the list and setting it as the source of the DataGrid.

image

Pretty easy. Since that was so easy, let’s shake it up a little bit: how about we make the back-end database SQL Azure instead of SQL Server 2008?

Connecting Telerik OpenAccess to SQL Azure

Turns out that using SQL Azure with Telerik OpenAccess is pretty easy. If your SQL Azure schema is the same as your SQL Server 2008 schema, all you have to do is change the connection string in the OpenAccess DAL project’s app.config. Let’s change our connection string in the DAL project to use SQL Azure like this:

   1:  <connection id="Connection.Azure">
   2:    <databasename>Northwind_Lite</databasename>
   3:    <servername>tcp:tpzlfbclx123.ctp.database.windows.net</servername>
   4:    <integratedSecurity>False</integratedSecurity>
   5:    <backendconfigurationname>mssqlConfiguration</backendconfigurationname>
   6:    <user>Stevef</user>
   7:    <password>gomets</password>
   8:  </connection>

 

Line 3 is the server you get from the SQL Azure CTP program, and the user name and password are what you set up when you joined the CTP. (Don’t have a SQL Azure CTP? Sign up here!)

That is it. When I run it I get the same results, except that the data is coming from SQL Azure and not from SQL Server 2008. I went into my Northwind_Lite database in SQL Azure and edited the first Customer row so I always know the data is coming from SQL Azure:

Update Customers 
Set CompanyName= 'Alfreds Futterkiste SQL Azure'
Where CustomerID='ALFKI'

Now when I run the project, I see the data from SQL Azure:

image

That is all there is to it!

You can get the WCF REST Starter Kit here, the Telerik OpenAccess WCF Wizard here, and the code from this blog post here.

Enjoy!

posted on Tuesday, September 08, 2009 5:09:58 AM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] Trackback

SQL Data Services underwent a massive change with the last CTP: it eliminated the entity “bag” container (or whatever they called that thingie) and moved to a fully relational, cloud-based model. Another major change was a new and more appropriate name: SQL Azure. You can get the CTP and have access to what I call SQL Server Lite in the cloud. Since SQL Azure supports the full relational model along with stored procedures and views, you can connect to it with a regular old ADO.NET connection string like the following one, allowing you to code against SQL Azure with .NET the same way you did with plain old SQL Server.

Server=tcp:tpzlfbclx1.ctp.database.windows.net;Database=Northwind_Lite;User ID=Stevef;Password=myPassword;Trusted_Connection=False;

Once you are all signed up for the CTP you can go into the web based admin tools and create a database. I created a database called Northwind and another one called Northwind_Lite for testing.

image

To be honest, I am not sure what else you can do in the web interface, so you have to connect via SQL Server Management Studio to create your database schema. Here is the first problem: SQL Azure does not support the Object Explorer view you get in SQL Server Management Studio, so you will have to hack a little bit.

Connecting to SQL Azure with SQL Server Management Studio

This is not as easy as it sounds. :) Since you can’t connect through the object explorer, you will have to open a new TSQL Query window.

image

In the login dialog, enter the server name from the CTP’s connection string and the user name and password that you chose to administer the CTP.

image

SQL Azure does not support the “USE” statement, or the ability to change databases on your connection, so you have to cheat and use some of the advanced options when logging in. Select the “Options >>” button on the login dialog and then select “Connection Properties”. Under the “Connect to database” option, select the database that you want to work with, since the default will be the master database and most likely you will not be building any applications in master.

image

After you connect you will get an error about the inability to apply connection settings, which you can ignore.

image

You will notice right away that there is nothing in your database as the following SQL statement will show:

select * from sys.objects

We now have to migrate some database objects from our SQL Server database to SQL Azure.

Migrating Existing SQL Server Objects to a SQL Azure Database

It would be cool if there were some easy way to migrate your databases to SQL Azure in this CTP. There is not. I suspect that in future CTPs this will not be a problem. But for now, you have to get creative. Some hacks and shortcuts are in order.

To get started, let’s just copy over one table. Open your local SQL Server in the Object Explorer, drill down to the Northwind database, and then to the Customers table. Right-click and select Script Table as|CREATE To|Clipboard and you will have a nice CREATE TABLE statement on your clipboard.

 

image

Then paste the TSQL into the Query Window that is connected to your SQL Azure database. Here is what my generated TSQL looks like:

   1:  USE [Northwind]
   2:  GO
   3:   
   4:  /****** Object:  Table [dbo].[Customers]    Script Date: 09/04/2009 03:35:38 ******/
   5:  SET ANSI_NULLS ON
   6:  GO
   7:   
   8:  SET QUOTED_IDENTIFIER ON
   9:  GO
  10:   
  11:  CREATE TABLE [dbo].[Customers](
  12:      [CustomerID] [nchar](5) NOT NULL,
  13:      [CompanyName] [nvarchar](40) NOT NULL,
  14:      [ContactName] [nvarchar](30) NULL,
  15:      [ContactTitle] [nvarchar](30) NULL,
  16:      [Address] [nvarchar](60) NULL,
  17:      [City] [nvarchar](15) NULL,
  18:      [Region] [nvarchar](15) NULL,
  19:      [PostalCode] [nvarchar](10) NULL,
  20:      [Country] [nvarchar](15) NULL,
  21:      [Phone] [nvarchar](24) NULL,
  22:      [Fax] [nvarchar](24) NULL,
  23:   CONSTRAINT [PK_Customers] PRIMARY KEY CLUSTERED 
  24:  (
  25:      [CustomerID] ASC
  26:  )
  27:  WITH
  28:   (
  29:  PAD_INDEX  = OFF, 
  30:  STATISTICS_NORECOMPUTE  = OFF, 
  31:  IGNORE_DUP_KEY = OFF, 
  32:  ALLOW_ROW_LOCKS  = ON, 
  33:  ALLOW_PAGE_LOCKS  = ON) ON [PRIMARY]
  34:  ) 
  35:  ON [PRIMARY]
  36:   
  37:  GO
  38:   

We already know that SQL Azure does not support USE, so eliminate lines 1 and 2 and press F5. You will see that line 5 is also not supported, so eliminate that and keep going by pressing F5 again. You will see that ANSI_NULLS, PAD_INDEX, ALLOW_ROW_LOCKS, ALLOW_PAGE_LOCKS, and ON [PRIMARY] are not supported, so you will have to eliminate them as well. Your new trimmed-down SQL Azure script looks like this:

   1:  SET QUOTED_IDENTIFIER ON
   2:  GO
   3:  CREATE TABLE [dbo].[Customers](
   4:      [CustomerID] [nchar](5) NOT NULL,
   5:      [CompanyName] [nvarchar](40) NOT NULL,
   6:      [ContactName] [nvarchar](30) NULL,
   7:      [ContactTitle] [nvarchar](30) NULL,
   8:      [Address] [nvarchar](60) NULL,
   9:      [City] [nvarchar](15) NULL,
  10:      [Region] [nvarchar](15) NULL,
  11:      [PostalCode] [nvarchar](10) NULL,
  12:      [Country] [nvarchar](15) NULL,
  13:      [Phone] [nvarchar](24) NULL,
  14:      [Fax] [nvarchar](24) NULL,
  15:   CONSTRAINT [PK_Customers] PRIMARY KEY CLUSTERED 
  16:  (
  17:      [CustomerID] ASC
  18:  )WITH 
  19:      (STATISTICS_NORECOMPUTE  = OFF, 
  20:          IGNORE_DUP_KEY = OFF) 
  21:  ) 
  22:  GO
  23:   

Run this and you will have a new Customers table! Unfortunately there is no data in there, but we will get to that soon.

image

If you are moving a lot of tables, foreign key constraints, etc., you should use the SQL Azure Migration Wizard developed by George Huey. This tool, available on CodePlex, will assist you in migrating your SQL Server schemas over to SQL Azure. Wade Wegner blogged about it here, including an instructional video.

Unfortunately there is no such tool for migrating data that I know of. Time for the next hack.

Migrating Data from SQL Server to SQL Azure

I thought that maybe I could cheat the same way I altered the connection settings and use SSIS to migrate the data. I chose the ADO.NET option and entered all of the connection information, but it bombed. Then I tried my old reliable tool, Red Gate’s SQL Data Compare. No go. But it was worth a try, since it got me thinking. I created a new local database called “Azure_Staging” and ran the same CREATE TABLE script there, creating a blank Customers table. I then ran SQL Data Compare using the full Customers table in Northwind as my source and the newly created blank Customers table in Azure_Staging as the destination.

Of course SQL Data Compare found 91 missing rows and I launched the Synchronization Wizard.

image

Click through it and on the 3rd page, click on the “View SQL Script…” button and copy and paste the generated SQL.

image

Copy and paste just the 91 INSERT INTO statements into your SQL Azure Query Window and run it. Now we have data in SQL Azure!
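For illustration, here is what one of those generated statements looks like. The column list is assumed from the standard Northwind Customers schema (the CREATE TABLE script above shows most of it), and the exact script SQL Data Compare generates for you may differ slightly:

INSERT INTO [Customers] ([CustomerID], [CompanyName], [ContactName], [ContactTitle],
    [Address], [City], [Region], [PostalCode], [Country], [Phone], [Fax])
VALUES (N'ALFKI', N'Alfreds Futterkiste', N'Maria Anders', N'Sales Representative',
    N'Obere Str. 57', N'Berlin', NULL, N'12209', N'Germany', N'030-0074321', N'030-0076545')
GO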

image

Unfortunately this is not the best situation, having to manually create some TSQL scripts, but this is an early CTP. I am sure that future CTPs will make this much easier.

posted on Friday, September 04, 2009 4:19:51 AM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] Trackback

Developers have been using the REST specification for some time. If you are using Microsoft tools, ADO.NET Data Services, aka Astoria, is a very popular way to work with REST data. What you may not know is that Astoria works on top of WCF, and you can write your own REST services outside of the Astoria model using WCF. WCF 3.5 SP1 gives us quite a few hooks to build our own RESTful services; however, it still takes a lot of manual wiring up by the developer. By now you all should know that I hate plumbing code.

Microsoft introduced the WCF REST Starter Kit, a set of WCF extensions and Visual Studio project templates that eliminate the plumbing code when building RESTful services with WCF. The five Visual Studio templates are:

  • REST Singleton Service
  • REST Collection Service
  • ATOM Feed Service
  • ATOMPub Service
  • HTTP Plain XML Service

    As explained in the developer’s guide to the WCF REST Starter Kit, the templates do the following:

    REST Singleton Service: Produces a service that defines a sample singleton resource (SampleItem) and the full HTTP interface for interacting with the singleton (GET, POST, PUT, and DELETE) with support for both XML and JSON representations.

    REST Collection Service: Similar to the REST Singleton Service, only it also provides support for managing a collection of SampleItem resources.

    Atom Feed Service: Produces a service that exposes a sample Atom feed with dummy data.

    AtomPub Service: Produces a fully functional AtomPub service capable of managing collections of resources as well as media entries.

    HTTP Plain XML Service: Produces a service with simple GET and POST methods that you can build on for plain-old XML (POX) services that don’t fully conform to RESTful design principles, but instead rely only on GET and POST operations.

    While the REST Singleton is interesting, it is only useful if you are exposing a single item, so the REST Collection is better suited to interacting with a database-driven dataset. Similarly, the Atom Feed template only exposes a read-only feed similar to RSS, so the AtomPub Service is the more useful of the two. The POX template is a good option if you need to do something custom.

    While the WCF REST Starter Kit also provides some client libraries for easier interaction with your RESTful data, we will focus on the creation of the services.

    You can use Telerik OpenAccess as a data source of your REST Collection service. To do so you have to wire up some code in your svc file. Sound like a lot of work? Enter the OpenAccess WCF Wizard I wrote about before.

     

    If you create a project in Visual Studio to contain your data access layer and another to contain the REST Collection (using the new REST Collection template available from the WCF REST Starter Kit), you can point the Telerik OpenAccess WCF Wizard at the data access layer project and then automatically generate the svc file and the corresponding CS file (shown by the arrow in our Visual Studio solution below).

    image

    Just for a sanity check, let’s run our service by selecting the svc file and choosing “View in Browser”. You should see the RESTful XML representation as shown below (make sure you turn off feed reading view in IE):

     image

    Now let’s consume this service from a Silverlight application. The WCF REST Starter Kit provides the developer with two classes, HttpClient and HttpMethodExtensions, to help you consume the WCF RESTful service. Unfortunately they are not supported in Silverlight (or at least I can’t figure it out. :) )

    We’ll use plain old HttpWebRequest instead. But first I will create a grid in my XAML code like so:

    <data:DataGrid x:Name="dataGridCustomers" AutoGenerateColumns="False" ItemsSource="{Binding}">
        <data:DataGrid.Columns>
            <data:DataGridTextColumn Binding="{Binding Path=CompanyName}" Header="Company Name">
            </data:DataGridTextColumn>
            <data:DataGridTextColumn Binding="{Binding Path=ContactName}" Header="Contact Name">
            </data:DataGridTextColumn>
        </data:DataGrid.Columns>
    </data:DataGrid>

    I will create a LoadData() method to load the data on the page load or a “refresh” button event. Being Silverlight, of course we will use some asynchronous processing.

       1:  private void LoadData()
       2:  {
       3:      //address of your REST Collection service
       4:      string url= "http://localhost:60613/Customers.svc";
   5:      //set up the web request
       6:      HttpWebRequest rest = HttpWebRequest.Create(new Uri(url)) as HttpWebRequest;
       7:      //HTTP GET (REST uses the standard HTTP requests)
       8:      rest.Method = "GET";
       9:      //async callback
      10:      rest.BeginGetResponse(new AsyncCallback(ReadAsyncCallBack), rest);
      11:  }

     

    First we have to set a reference to our service in lines 4 and 6. Then we tell the HttpWebRequest to use an HTTP GET (line 8); this is the power of REST: it uses the standard HTTP verbs (GET, POST, PUT, and DELETE). On line 10 we begin our asynchronous call to ReadAsyncCallBack(), shown here.

       1:  private void ReadAsyncCallBack(IAsyncResult iar)
       2:  {
       3:      
       4:      //catch the HttpWebRequest
       5:      HttpWebRequest rest = (HttpWebRequest)iar.AsyncState;
       6:      HttpWebResponse response = rest.EndGetResponse(iar) as HttpWebResponse;
       7:      
       8:      var result = Customer.GetCustomersFromAtom10Stream(response.GetResponseStream());
       9:      //code block to handle the async call
      10:      this.Dispatcher.BeginInvoke( () =>
      11:          {
      12:              //build a collection (customers)
      13:              var customers = 
      14:                new System.Collections.ObjectModel.ObservableCollection<Customer>();
      15:              foreach (var customer in result)
      16:              {
      17:                  customers.Add(customer);
      18:              }
      19:              //bind to the grid when done
      20:              this.dataGridCustomers.DataContext = customers;
      21:          });
      22:  }

    ReadAsyncCallBack() is the handler for the asynchronous call we made in LoadData(). We obtain a reference to the HttpWebRequest (lines 5-6) and then get the results back on line 8. Then we use a code block to build an ObservableCollection of Customers, fill it in a loop over the results (lines 15-18), and bind the data to the grid on line 20. The results are data-bound to the grid as shown below.

    image

    Since the Silverlight browser HTTP stack doesn’t support the HTTP verbs that do an update (PUT and DELETE), we can’t do updates without a wrapper on the server. So we will stop the demo with just the read operation. Remember, if you are using a non-Silverlight client such as ASP.NET, you can use the HttpClient and HttpMethodExtensions classes for an easier client coding experience.
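    For reference, consuming the same service from a desktop or ASP.NET client with the starter kit’s HttpClient looks roughly like the sketch below. This is from memory and is an approximation of the Microsoft.Http API in the starter kit, so check the kit’s own samples for the exact signatures; the URL reuses the port from the Silverlight code above.

    //requires a reference to Microsoft.Http.dll from the WCF REST Starter Kit
    using (HttpClient http = new HttpClient("http://localhost:60613/"))
    {
        //HTTP GET against the REST Collection service
        HttpResponseMessage response = http.Get("Customers.svc");
        //throws if the status code is not a success code
        response.EnsureStatusIsSuccessful();
        //the Atom feed as a string; you could also deserialize it
        string atomXml = response.Content.ReadAsString();
    }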

    Grab the Telerik OpenAccess WCF Wizard here and the sample code from this blog post here.

    posted on Thursday, September 03, 2009 8:59:03 AM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] Trackback

    I have been a big fan of using the RESTful ADO.NET Data Services (aka Astoria). If you want to use Astoria out of the box without any modifications, you really have to use the Entity Framework. But as long as you are using a data access library that supports IEnumerable and IQueryable (and IUpdatable, to be useful for CRUD), you can use Astoria with just a little bit of extra work.

    Telerik OpenAccess supports LINQ and IUpdatable and can be used with Astoria as shown here. In a nutshell, you first create your data access layer, then a web site to host your ADO.NET Data Service, and then a client (in our case a Silverlight app). You would have a solution that looks like this:

    image

    The problem is that if you use anything other than the Entity Framework, you need to do a little extra plumbing. In our case using OpenAccess, you have to create the IQueryable interfaces for all of your entities manually. You would need something like this:

       1:  public IQueryable<Customer> Customers
       2:  {
       3:      get
       4:      {
       5:          return this.scope.Extent<Customer>();
       6:      }
       7:  }

     

    The scope variable you see on line #5 is an instance of IObjectScope. IObjectScope is how OpenAccess implements the actual data context. According to the OpenAccess team, you have to wrap these IQueryable interfaces, plus a few extra lines of code, in an “OADataContext” class. To be honest, I don’t want to do that; too much plumbing code.
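    For the curious, the hand-written version of that OADataContext class would look something like the sketch below. The ObjectScopeProvider1 factory name is an assumption based on OpenAccess’s generated code of that era, and the entity list is illustrative; your project will differ.

    //sketch of the plumbing class the wizard generates for you
    public class OADataContext
    {
        //IObjectScope is the OpenAccess data context
        private readonly IObjectScope scope = ObjectScopeProvider1.GetNewObjectScope();

        //one IQueryable property per entity that Astoria should expose
        public IQueryable<Customer> Customers
        {
            get { return this.scope.Extent<Customer>(); }
        }

        public IQueryable<Order> Orders
        {
            get { return this.scope.Extent<Order>(); }
        }
    }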

    Enter the OpenAccess WCF Wizard I wrote about before.

    You can use the Wizard to generate the OADataContext file as well as the Astoria service file. Just point the wizard at the DAL project, select Astoria from the menu, generate the files, and add them to the project (it is ok to overwrite the original .cs file supporting the .svc file; just make sure you have the correct namespaces).

    image

    Now your Astoria service will look like this:

       1:      [System.ServiceModel.ServiceBehavior(IncludeExceptionDetailInFaults = true)]
       2:      public class WebDataService : DataService<AstoriaDataContext>
       3:      {
       4:          protected override void HandleException(HandleExceptionArgs args)
       5:          {
       6:              base.HandleException(args);
       7:          }
       8:   
       9:          // This method is called only once to initialize service-wide policies.
      10:          public static void InitializeService(IDataServiceConfiguration config)
      11:          {
      12:              //let all the entities be full access (CRUD)
      13:              config.SetEntitySetAccessRule("*", EntitySetRights.All);
      14:          }
      15:      }
      16:  }

    This is pretty standard Astoria stuff, line #13 does all of the Astoria magic.

    Now let’s look at our Silverlight client.  First let’s create some XAML, I will just show you the DataGrid code here:

       1:  <data:DataGrid Grid.ColumnSpan="2" x:Name="dataGridCustomers" AutoGenerateColumns="False" ItemsSource="{Binding}">
       2:      <data:DataGrid.Columns>
       3:         <data:DataGridTextColumn Binding="{Binding Path=CompanyName}" Header="Company Name"></data:DataGridTextColumn>
       4:         <data:DataGridTextColumn Binding="{Binding Path=ContactName}" Header="Contact Name"></data:DataGridTextColumn>
       5:      </data:DataGrid.Columns>
       6:  </data:DataGrid>

     

    As I wrote the other day, you still have to use a service and call the service asynchronously. In our Silverlight application we will then set a service reference to our Astoria Service, allowing us to write LINQ statements against the RESTful Astoria service.

    image

    Just like before, we will have a private LoadData() method that our page load and “refresh” button will call. This will use LINQ to talk to the Astoria service and fetch all of the Customers. First we set up a LINQ data context (lines 4-5) and then a LINQ query (lines 8-9). Then we use a code block (lines 13-14) to handle the async callback (instead of a separate event handler) and perform the actual binding to the grid.

       1:  private void LoadData()
       2:  {
       3:      //this uses the LINQ to REST proxy (WebDataService)
       4:      AstoriaDataContext dat = new AstoriaDataContext(
       5:          new Uri("WebDataService.svc", UriKind.Relative));
       6:      
       7:      //LINQ statement to get the data; can use Where, OrderBy, etc.
       8:      var customers = 
       9:          (from c in dat.Customers select c) as DataServiceQuery<WebDataService.Customer>;
      10:   
      11:      //use code block to perform the binding at the end of the async call
      12:      //casting the Customers to a LIST
      13:      customers.BeginExecute(
      14:          (ar) => dataGridCustomers.DataContext = customers.EndExecute(ar).ToList(), null);
      15:  }

    When we run our code, we will get data right away:

    image

    We can also perform an update. We have a global collection called editedCustomers; we trap the BeginningEdit event of the Silverlight grid and put an instance of each row (a Customer) that was edited into this collection. Then we provide an update button that will call UpdateData(), shown below. UpdateData() loops through each dirty customer in the editedCustomers collection (lines 8-11) and updates them using the Astoria LINQ services (lines 10-11). Inside of our loop we use another code block (lines 12-25) to catch the async update. We use a counter to figure out when we are done (lines 16-22), since in an async model your 3rd update out of 5 can be the last one to finish!

     

       1:  private void UpdateData()
       2:  {
       3:      //this uses the LINQ to REST proxy (WebDataService)
       4:      AstoriaDataContext dat = new AstoriaDataContext(
       5:          new Uri("WebDataService.svc", UriKind.Relative));
       6:   
       7:      //editedCustomers is a local collection containing only the dirty records
       8:      foreach (WebDataService.Customer customer in editedCustomers)
       9:      {
      10:          dat.AttachTo("Customers", customer);
      11:          dat.UpdateObject(customer);
      12:          //code block to handle the async call to updates
      13:          dat.BeginSaveChanges(
      14:              (ar) =>
      15:              {
      16:                  updatedCounter++;
      17:                  //once we are finished with all items in the collection, we are done
      18:                  if (updatedCounter == this.editedCustomers.Count)
      19:                  {
      20:                      MessageBox.Show("Customers have been changed successfully!", "Saving Changes", MessageBoxButton.OK);
      21:                      this.editedCustomers.Clear();
      22:                  }
      23:   
      24:                  dat.EndSaveChanges(ar);
      25:              }
      26:              , null);
      27:      }
      28:  }

     

    That is all it takes to build an Astoria-based Silverlight application using OpenAccess. The OpenAccess WCF Wizard takes care of building the OpenAccess-specific plumbing DataContext class for you, as well as the Astoria service. Soon I will post another video by .NET Ninja-in-training Peter Bahaa on this demo, as well as some more code examples using the WCF REST Starter Kit and the OpenAccess WCF Wizard.

    The code is available here. Enjoy!

    posted on Tuesday, August 25, 2009 11:31:37 AM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] Trackback

    Microsoft’s Silverlight 3.0 is a great new platform for building line-of-business applications. With every new technology advancement, we always seem to lose something. With Silverlight we lose dirt-simple data access, since Silverlight does not support System.Data.SqlClient and talking to a database directly. This forces us into a service-oriented architecture and an asynchronous model. While this is definitely a best practice, it sometimes takes a little getting used to.

    With Silverlight we have to wrap up our data access layer into WCF services (or Astoria, RIA Services, or something similar). It is also pretty standard to use some kind of ORM like the Entity Framework or Telerik OpenAccess to map your database tables and then expose those entities as part of your WCF service. Mapping tables to entities may save you time; however, the part that bothers me is that there is a lot of generic “plumbing” code that has to get written for the WCF service to function.

    That is why Telerik created the OpenAccess WCF Wizard. The Telerik OpenAccess WCF Wizard is a tool that will automatically create the C# “plumbing” code and necessary project files for using OpenAccess entities with the following services:

    • Astoria (ADO .NET Data Services)
    • WCF
    • REST Collection Services (WCF REST Starter Kit)
    • ATOM Publishing Services (WCF REST Starter Kit)

    Using the wizard is pretty easy. If you already have a project with OpenAccess-mapped entities in it, all you have to do is point the wizard to that location and have it generate the .svc and .cs files of the WCF service (or Astoria, REST Collection, or ATOM Pub service) as shown below. This will eliminate the need for you to write all of the WCF plumbing code.

    image

     

    We’ll map the Northwind tables, run the wizard, and then create a WCF service project. Once you create the service, replace the .cs file with the one generated by the wizard and then create a new Silverlight project (or any project that wants to communicate with your OpenAccess-enabled WCF service). Then set a Service Reference to the WCF service and you can start to consume it. Your projects in the solution will look like this:

    image

    Where:

    • OA.WCFWizard.Demo.DAL is the project containing your OpenAccess entities (mapped to Northwind tables).
    • OA.WCFWizard.Demo.SL is the project containing your Silverlight project, including the service reference to your WCF service. (First arrow.)
    • OA.WCFWizard.Demo.WCFService is the project containing the WCF service created by the WCF wizard (second arrow.)
    • OA.WCFWizard.Demo.Web is the ASP.NET site hosting your Silverlight application.
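    Under the covers, the generated WCF service is a standard contract/implementation pair. Below is a hypothetical sketch of what the contract could look like; the operation names are inferred from the client code later in this post (ReadCustomersAsync(0, 100)), and the actual wizard output will differ:

    //hypothetical sketch only -- the wizard generates the real contract
    [ServiceContract]
    public interface ISampleWCFService
    {
        //paged read: skip startingRowIndex rows, return up to maximumRows
        [OperationContract]
        List<Customer> ReadCustomers(int startingRowIndex, int maximumRows);

        //persist changes to a single customer
        [OperationContract]
        void UpdateCustomer(Customer customer);
    }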

    Writing some Silverlight code to consume the WCF service

    Consuming the service is pretty easy once you have the service reference set up. We’ll start with some basic XAML, but I will spare you the major XAML details since I am not a XAML expert. Here is my datagrid in XAML, notice that I am using data binding and predefined columns to make life a little easier:

    <data:DataGrid x:Name="dataGridCustomers" Grid.ColumnSpan="2" AutoGenerateColumns="False" ItemsSource="{Binding}">
         <data:DataGrid.Columns>
             <data:DataGridTextColumn Binding="{Binding Path=CompanyName}" Header="Company Name"></data:DataGridTextColumn>
             <data:DataGridTextColumn Binding="{Binding Path=ContactName}" Header="Contact Name"></data:DataGridTextColumn>
         </data:DataGrid.Columns>
    </data:DataGrid>

    When filled, it looks like this:

    image

    Pretty basic stuff, if you want more advanced looking grids and controls, talk to a designer. :)

    Ok, so how can we easily consume this service and bring this data from Northwind via OpenAccess and WCF to the Silverlight grid? Once you have a service reference set up (NorthwindWCFService) in your Silverlight project, you have to do a few things to call it. I created a LoadData() method to contain the code to fill the grid. We can call this on the page load event as well as on the UI with a “refresh” button, or after any “save” method.

    Inside of the LoadData() method, first you have to create an instance of the proxy, or “client,” as shown on line 4 of the code block below. Once you have this set up, you then have to register an event handler for the method you are going to call, as shown on line 6. This means that when you call your ReadCustomers method asynchronously, the client_ReadCustomersCompleted event handler will fire when ReadCustomers is done. This is where you put your binding code (more on that in a second). Lastly we have to call our ReadCustomers method; we can only call the asynchronous version of this method, as shown on line 9.

       1:  private void LoadData()
       2:  {
       3:      //ref to our service proxy
       4:      NorthwindWCFService.SampleWCFServiceClient wcf = new NorthwindWCFService.SampleWCFServiceClient();
       5:      //register the event handler-can move this up if you want
       6:      wcf.ReadCustomersCompleted += new EventHandler<NorthwindWCFService.ReadCustomersCompletedEventArgs>(client_ReadCustomersCompleted);
       7:      //make an async call to the ReadCustomers method of our WCF service
       8:      //get only the first 100 records (default)
       9:      wcf.ReadCustomersAsync(0, 100);
      10:  }

     

    The code for the event handler is pretty straightforward: we just set the datagrid’s DataContext to the e.Result of the event handler. This will contain an ObservableCollection of Customer objects that was defined automatically for you via the proxy when you created your service reference.

       1:  void client_ReadCustomersCompleted(object sender, OA.WCFWizard.Demo.SL.NorthwindWCFService.ReadCustomersCompletedEventArgs e)
       2:  {
       3:      dataGridCustomers.DataContext = e.Result; 
       4:  }

     

    That is it! I will soon post a video here showing how to do this in more detail as well as an update method since our grid is editable (it is super easy.) Next week I will show some examples using ADO .NET Data Services (Astoria) where the wizard will also automatically create the IObjectScope OpenAccess and Astoria plumbing code for you.

    posted on Wednesday, August 19, 2009 6:04:20 PM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] Trackback

    I am over in Bangalore, India, speaking at the Great Indian Developer Conference, and as I get on stage for my first session, my laptop does not project to the monitor. Oh well, I guess I have to reduce my five-gazillion-by-one-trillion screen resolution. Still not working. I tried the old reliable: rebooting. Still no dice. We try another laptop just to make sure it is me, not the monitor; sure enough, it is me.

    I was the first speaker at the conference and now the conference organizer is sweating. He offers his laptop and I say as long as you have SP1 on it. He said, Windows XP SP1? I was like, not that SP1, Visual Studio 2008 SP1. No dice. Now I was sweating (it was 40C/104F). Did I mention that my session is now 5 minutes late? I determine it is my Win7 video driver and give up trying.

    I decide to let fate take over. I make an announcement: “Anyone in the audience have a laptop that I can borrow? One that has a lot of RAM and Microsoft Virtual PC 2007 installed?” Blank stares. Now I am getting nervous; it brought me back to a time in 2001 when I demoed Beta 2 of .NET without .NET installed on my machine. Time to hand-wave and make jokes about George Bush. (That always worked in Egypt.) Then my hero showed up. Prashant lent me his laptop and we got going, and life was good. I had to borrow the generic AV laptop for my Scrum session later in the day, and Satheesh lent me his for my last session on data access hacks and shortcuts. In Belgium at TechDays, Joel did an agile talk with no slides: I wrote the slides on the fly (we were being agile!) Now I will start speaking at conferences without a laptop! (Er, maybe not.)

    Last night in my hotel the TV talked about a prison riot. Don’t ask me why, but prison riots always get my attention. I watched the story and it turns out that the inmates were not complaining about the conditions, they were complaining that they were not allowed to watch cricket. Yes, cricket.

    So I started to pay attention. The next story was about a huge win by Chennai in the Indian Premier League (IPL). (Yes, more cricket.) Then the next story was about a flamboyant Bollywood star who owns a team. They were caught with Paris Hilton or something, but the point was the news wanted to know how this would affect his team. More cricket. Did I mention that there are major national elections going on in India tomorrow? These elections will determine who is the next Prime Minister, but the news can only talk about cricket.

    So I did some more investigation. The IPL was started last year. It is an Indian professional club-based cricket league, with teams representing cities. This is a new concept in India and has been wildly successful. The opening matches were only played a few days ago and season two is under way. Talking to a finance guy about the IPL today, I discovered that the larger markets attracted larger investors who spent a ton of money and have huge payrolls (sounds like the Yankees). So the smallest market, Rajasthan, the team with the smallest payroll, is the defending champion (sounds almost like the Tampa Bay Rays).

    I was about done with my IPL education when I came across this blog post by fellow Regional Director Vinod Unny. The IPL web site, a site with more hits than you can imagine, streams the matches using Silverlight. The site also has a pretty cool interactive Silverlight-based scoreboard where you can get real-time stats and drill down into a player’s history. There are even tons of photos using Deep Zoom. Pretty awesome stuff (even though it is cricket!)

    IPLT20.com is estimated to get over 400 million unique page views from 45 million visits and 10 million unique visitors during this tournament. A huge win for Silverlight, and proof that I can’t get away from technology ever, even when investigating a prison riot…

    posted on Wednesday, April 22, 2009 12:14:54 PM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] Trackback

    In Part I of this series we looked at the tool Telerik is building and how to model an entity in MSchema and MGraph. Part II dug deeper into the modeling process, and we saw the value of MGraph and data visualization in helping your model along. We did a little refactoring and are now more or less happy with the model for these domain entities. After modeling the application in M, I realize the power of a textual modeling language. Boxes and lines in UML do not excite me, but a textual modeling language makes complete sense.

    So far we have ignored the fact that these entities will live in a database. You can push your M into the repository or you can push it to plain old TSQL. Let’s do that today.

    Inside of iPad you can go to the “M Mode” menu option and choose “Generic TSQL Preview.” This will split your view, with the M code on one side and the TSQL on the other, as shown below. (Note: you can also choose M Mode | Repository TSQL Preview; however, I am still avoiding the Oslo repository at the moment. I have my M files under source control in TFS and will push to the repository a little later in the process. Once again, I am still learning, so this may or may not be a best practice.)

    image

    Let’s take a look at the TSQL produced.

    This type that we built in Part I:

    //mschema to define a user type
    type ApplicationUser
    {
        UserID : Integer64=AutoNumber();
        FirstName :Text#15;
        LastName : Text#25;
        Password : Text#10;      
    } where identity UserID;

    Will produce a CREATE TABLE TSQL statement like this:

    create table [Telerik.MigrationTool].[ApplicationUserCollection]
    (
      [UserID] bigint not null identity,
      [FirstName] nvarchar(15) not null,
      [LastName] nvarchar(25) not null,
      [Password] nvarchar(10) not null,
      constraint [PK_ApplicationUserCollection] primary key clustered ([UserID])
    );
    go

     

    Ok, a few things here. First, my table name is [modulename].[MGraph instance name].

    Ug! ApplicationUserCollection is a horrible name for a table. I incorrectly assumed that the type name would be what we have as a table name. (I guess I should have actually done the M labs at the last SDR instead of goofing off with Michelle Bustamante.) Well this is new technology, so live and learn. :) I have to refactor all my types and instances. I guess I have learned pretty quickly that “collection” is not a good name.

    Here is the renamed base type. I named it “UserType” since I can’t think of a better name; I will do this with all my types:

    //mschema to define a user type
    type UserType
    {
        UserID : Integer64=AutoNumber();
        FirstName :Text#15;
        LastName : Text#25;
        Password : Text#10;      
    } where identity UserID;

    Here is the new MGraph; I am using ApplicationUser here instead of ApplicationUserCollection:

    //mgraph to get some test data in
        ApplicationUser : UserType*; ApplicationUser
        {
            //using a named instance (Steve, etc)
            Steve {
            FirstName="Stephen",
            LastName="Forte",
            Password="Telerik"
            },
            Vassimo {
            FirstName="Vassil",
            LastName="Terziev",
            Password="123"
            },
            Zarko {
            FirstName="Svetozar",
            LastName="Georgiev",
            Password="456"
            },
            Todd {
            FirstName="Todd",
            LastName="Anglin",
            Password="789"
            }
        }

     

    Now the M Mode|Generic TSQL Preview will show this:

    create table [Telerik.MigrationTool].[ApplicationUser]
    (
      [UserID] bigint not null identity,
      [FirstName] nvarchar(15) not null,
      [LastName] nvarchar(25) not null,
      [Password] nvarchar(10) not null,
      constraint [PK_ApplicationUser] primary key clustered ([UserID])
    );
    go

     

    And the insert statements are also generated:

    insert into [Telerik.MigrationTool].[ApplicationUser] ([FirstName], [LastName], [Password])
    values (N'Stephen', N'Forte', N'Telerik');
    declare @Telerik_MigrationTool_ApplicationUser_UserID0 bigint = @@identity;

    insert into [Telerik.MigrationTool].[ApplicationUser] ([FirstName], [LastName], [Password])
    values (N'Vassil', N'Terziev', N'123');

    insert into [Telerik.MigrationTool].[ApplicationUser] ([FirstName], [LastName], [Password])
    values (N'Svetozar', N'Georgiev', N'456');
    declare @Telerik_MigrationTool_ApplicationUser_UserID2 bigint = @@identity;

    insert into [Telerik.MigrationTool].[ApplicationUser] ([FirstName], [LastName], [Password])
    values (N'Todd', N'Anglin', N'789');

    Now that we have the entire TSQL script, the next step is to load it into a database.

    I opened SQL Management Studio and created a new database called oslotest1 as shown here:

    create database oslotest1
    go

    Now I will copy the TSQL from the preview pane of iPad, paste it into SQL Management Studio, and run it. Fingers crossed. :)

    As you can see in the image below, all my tables were created successfully.

    image

    Let’s take a look at some of the sample data. A simple SELECT * FROM [Telerik.MigrationTool].[ApplicationUser] shows us:

    image

    As you can see, M creates a SQL Server schema, [Telerik.MigrationTool], out of the module name in our M file. This is a pretty cool feature (SQL 2005/08 schemas are not used enough; there is too much dbo floating around out there). I guess I can use an easier-to-work-with schema name in the future, like migrationtool instead of telerik.migrationtool.

    Let’s now query some of the sample data in SQL Server. Here is a query looking at Project ID #1 and the first run of that project, all from the data that we modeled in MGraph:

     

    image

    I am pretty satisfied with the results of my model. I think the next step is to hand off the user stories and M code to the developers and get started. I will post their reactions; they know nothing about Oslo besides what they read in this blog. :) I will also post my progress and thinking on the repository. I think that now that we are going to be working with a team (and a team in another country than me), we can get some benefits from using the repository.

    posted on Tuesday, February 17, 2009 10:48:48 AM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback

    In Part I of this series, I talked about Oslo in general and about the tool Telerik is building for Oslo. Where we stand today is that I have modeled a simple entity (User) and I still have to model some domain entities in MSchema and MGraph. The application I am modeling will allow a user to create a “project” that holds the connection strings to the two Oslo repositories they are comparing. Then, in a very Red Gate SQL Compare-like fashion, the tool will compare the entities in the repositories and report back a status, including showing the offending M code that is causing a problem side by side with the good M code. Let’s get started modeling my top-level domain with M.

    As I think about it now, I need a “project” entity. Here is my first stab at one.

    //mschema to define a Project type
    type Project
    {
        ProjectID : Integer64 = AutoNumber();
        ProjectName : Text#25;
        ConectionStringSource : Text;
        ConectionStringDestination : Text;
        DateCompared: DateTime;
        Comment: Text?;
        ProjectOwner: ApplicationUser;
    } where identity ProjectID;

    You can see that I am making a reference to the ApplicationUser type in my “ProjectOwner” field. Down the line we will have this as a foreign key relationship in SQL Server, but we don’t have to worry about that now, for now we just realize that a ProjectOwner will refer back to the ApplicationUser type we build in Part I.

    Here is how the type looks in Intellipad (informally, “iPad”):

    image

    Just like before, I need to see some data before I can really figure out what my type is doing. Call me old school or a “database weenie,” but I just connect the dots better when I see some data. So using MGraph, I am showing the data here:

    //this will define a SQL foreign key relationship
    ProjectCollection : Project* where item.ProjectOwner in ApplicationUserCollection;

    ProjectCollection
    {
        Project1{
            ProjectName = "My Project 1",
            ConectionStringSource = "Data Source=.;Initial Catalog=MyDB1;Integrated Security=True;",
            ConectionStringDestination = "Data Source=.;Initial Catalog=MyDB2;Integrated Security=True;",
            Comment="Project Comment",
            DateCompared=2009-01-01T00:00:00,
            ProjectOwner=ApplicationUserCollection.Steve //direct ref to steve (FK)
        },
        Project2{
            ProjectName = "My Project 2",
            ConectionStringSource = "Data Source=.;Initial Catalog=MyDB1;Integrated Security=True;",
            ConectionStringDestination = "Data Source=.;Initial Catalog=MyDB2;Integrated Security=True;",
            Comment="Project Comment",
            DateCompared=2009-01-01T00:00:00,
            ProjectOwner=ApplicationUserCollection.Zarko //direct ref to Zarko (FK)
        }
    }

    Notice that we define a relationship between the ProjectOwner and the ApplicationUserCollection from yesterday. This gives us the ability to use the named instances of the users and even gives us IntelliSense as shown below:

    image

    We are now going to model the results of the comparison of the repositories. I envision a grid showing you each object, its status, name, and M code, and asking you to take some action. Let’s model the results. First we will need the Status lookup values:

    //Status type
    type ComparisonStatus
    {
        StatusID:Integer64=AutoNumber();
        StatusDS:Text#25;
    } where identity StatusID;

    //mgraph to get some data in to the status
    StatusCollection:ComparisonStatus*;

    StatusCollection
    {
        Status1{StatusDS="Exist Only in Source"},
        Status2{StatusDS="Exist Only in Destination"},
        Status3{StatusDS="Exist in Both, Identical Structure"},
        Status4{StatusDS="Exist in Both, Changes"}
    }

    Next I need to model the results with a results type.

    //mschema for the results
    type ComparisonResults
    {
        ProjectRunID: Integer64=AutoNumber();
        ProjectRunDate: DateTime;
        ProjectID: Project;   //FK to Project
        SourceTypeName: Text?;
        SourceTypeM: Text?; //is it possible to generate this on the fly? is there value in storing it?
        DestinationTypeName: Text?;
        DestinationTypeM: Text?; //is it possible to generate this on the fly? is there value in storing it?
        StatusID: StatusCollection; //FK
    } where identity ProjectRunID;

    After I put some data into this type, I immediately realized that the user will run the project multiple times, so we need a 1:M relationship between a run of the project and its result types. Meaning, when you get the results, there will be many types associated with each result. I will spare you the iterations I went through with MGraph, but because of MGraph I realized that this model was flawed! Here is the refactored version:

        //wow, we need refactoring tools badly in iPad! :)   
        //mschema for the results

        type ComparisonResults
        {
            ProjectRunID: Integer64=AutoNumber();
            ProjectRunDate: DateTime;
            ProjectID:Project;   //FK to Project
        } where identity ProjectRunID;


        //this will define a SQL foreign key relationship
        ResultsCollection : ComparisonResults* where item.ProjectID in ProjectCollection;
        //mgraph for some test data
        ResultsCollection
        {
            Result1{
                ProjectRunDate=2009-01-01T00:00:00,
                ProjectID=ProjectCollection.Project1
            }
        }

    Notice how we have some relationships stored back to ProjectCollection.

    Now we need to model the details:

    //mschema for details
    type ComparisonResultDetail
    {
        ProjectRunID:ComparisonResults; //FK
        TypeID: Integer64=AutoNumber();
        SourceTypeName: Text?;
        SourceTypeM: Text?; //is it possible to generate this on the fly? is there value in storing it?
        DestinationTypeName: Text?;
        DestinationTypeM: Text?; //is it possible to generate this on the fly? is there value in storing it?
        StatusID: StatusCollection; //FK
    } where identity TypeID; //need a composite PK of ProjectRunID and TypeID
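    The comment above notes that what we really want is a composite primary key on ProjectRunID and TypeID. If the M grammar in this CTP supports a multi-column identity clause (I have not verified the exact syntax), the declaration might look something like this:

    //hypothetical: a composite identity, if the CTP grammar allows it
    type ComparisonResultDetail
    {
        ProjectRunID: ComparisonResults; //FK
        TypeID: Integer64=AutoNumber();
        SourceTypeName: Text?;
        SourceTypeM: Text?;
        DestinationTypeName: Text?;
        DestinationTypeM: Text?;
        StatusID: StatusCollection; //FK
    } where identity(ProjectRunID, TypeID);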

    Now we need to add some data via MGraph. Remember, it was while working with MGraph above that I had my breakthrough.

    //this will define a SQL foreign key relationship, two FKs actually separated by a comma
    ResultsDetailCollection : ComparisonResultDetail* where item.StatusID in StatusCollection,
         item.ProjectRunID in ResultsCollection;

    ResultsDetailCollection
    {
        {
            ProjectRunID=ResultsCollection.Result1,
            SourceTypeName="Customers",
            SourceTypeM="m code here",
            DestinationTypeName="Customers",
            DestinationTypeM="m code here",
            StatusID=StatusCollection.Status1
        },
        {
            ProjectRunID=ResultsCollection.Result1,
            SourceTypeName="Orders",
            SourceTypeM="m code here",
            DestinationTypeName="Orders",
            DestinationTypeM="m code here",
            StatusID=StatusCollection.Status2
        },
        {
            ProjectRunID=ResultsCollection.Result1,
            SourceTypeName="Order Details",
            SourceTypeM="m code here",
            DestinationTypeName="Order Details",
            DestinationTypeM="m code here",
            StatusID=StatusCollection.Status3
        },
        {
            ProjectRunID=ResultsCollection.Result1,
            SourceTypeName="Products",
            SourceTypeM="m code here",
            DestinationTypeName="Products",
            DestinationTypeM="m code here",
            StatusID=StatusCollection.Status4
        }
    }

    So today I modeled some domain entities and learned that when you play around with adding data via MGraph, you will learn about and evolve your model much more effectively. I suspect that showing this data to the users will help too; that is one of the goals of Quadrant. I still have not pushed this model into the repository; I am saving the data on disk in M files. I think that pushing to the repository may be important to do soon (time will tell if this is a best practice or not; remember, I am learning!). It is now time to start playing with the MGraph and MSchema transformations to TSQL; that will be the subject of Part III.

    posted on Thursday, February 12, 2009 7:51:14 PM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback

    Earlier today the Oslo SDK January CTP was released on MSDN. A lot of people have been asking since the PDC, “What is Oslo?” Oslo is a new platform from Microsoft that allows you to build data-driven applications. Oslo revolves around the application’s metadata. As Chris Sells describes in a great white paper on Oslo:

    Metadata can be defined as the data that describes an application, for example, how a home page is rendered or how the purchasing workflow operates. Your application data represents the state of the execution of an application, such as what Bob put in his shopping cart on your Web site or his shipping address in your checkout workflow.

    To provide a common set of tools for defining metadata so that it can be stored in a place that provides the same set of features as normal application data, Microsoft is creating "Oslo," a platform for building data-driven applications. "Oslo" is composed of three elements: a family of languages collectively called "M," a visual data manipulation tool called "Quadrant," and a data store called the repository.

    Telerik is building some cool Oslo utilities and I am in the middle of designing them. As I was talking to Chris about some of the specs the other day, he asked me: “What are you using to keep track of the metadata of your application in your design process?” I was like: “Pen, paper, whiteboard, Word, and Excel.” He asked why I was not using Oslo. Then it struck me: I was in .NET programmer mode. So last decade. While I am using Visual Studio 2008, WPF, SQL Server 2008, and the Oslo SDK to build an application for Oslo, I was not using Oslo to help build the application.

    The application is in its earliest phases (just moving from idea and whiteboard drawings to design). I confess I made my first mistake: I did not think about a model; I was thinking about the app. So I started over and modeled what the app would do using Oslo. How do you model an application using Oslo? You use the M language.

    Specifically, at this phase you would use the MSchema portion of the M specification. I started by creating a schema using MSchema to hold some application artifacts. This requires a different way of thinking, but it is worth the effort because now information about my application is stored in the repository, and I will have version history and a much easier time generating the application when the time comes. (You can also use the MGrammar portion of the M specification to create a domain specific language (DSL); however, that part of the process won’t come for this application until a little later on.)
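    To give a flavor of what that looks like, here is a simplified, hypothetical MSchema sketch of an artifact-tracking type (this is my illustration only, not the actual schema I am building; the module, type, and field names are all made up):

    //hypothetical artifact-tracking schema, for illustration only
    module AppDesign
    {
        type Artifact
        {
            ArtifactID : Integer64 = AutoNumber();
            Name : Text#50;
            Description : Text?;
        } where identity ArtifactID;

        ArtifactCollection : Artifact*;
    }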

    As I make progress designing and building this application, I will post it here. You can follow along and learn from my mistakes. Stay tuned, look for the “Oslo” category on this blog.

    posted on Friday, January 30, 2009 11:12:43 PM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback

    Due to my comment spam problem, the link to the ORM white paper I wrote got deleted. A month or so ago, I wrote a white paper for Telerik on ORMs in general and their ORM in particular. This white paper is mostly an intro to data access layers, what an ORM will give you and how they work. Here is the link.

    posted on Monday, January 26, 2009 10:54:18 AM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback

    Ever since the release of the Entity Framework and the Linq to SQL team’s move to the ADO.NET team we have been hearing about Linq to SQL being dead. The ADO.NET team (which owns the EF as well) released the roadmap where they said:

    “We’re making significant investments in the Entity Framework such that as of .NET 4.0 the Entity Framework will be our recommended data access solution for LINQ to relational scenarios.  We are listening to customers regarding LINQ to SQL and will continue to evolve the product based on feedback we receive from the community as well.”

    This caused an uproar as you might imagine. So a few days later Tim Mallalieu, PM of the Linq to SQL and Linq to EF teams clarified by saying this:

    “We will continue make some investments in LINQ to SQL based on customer feedback. This post was about making our intentions for future innovation clear and to call out the fact that as of .NET 4.0, LINQ to Entities will be the recommended data access solution for LINQ to relational scenarios….We also want to get your feedback on the key experiences in LINQ to SQL that we need to add in to LINQ to Entities in order to enable the same simple scenarios that brought you to use LINQ to SQL in the first place.”

    Sounds pretty dead to me. If you don’t believe me, just talk to any FoxPro developer, VB 6 developer, or any Access developer who loves the Jet engine. They are all still “supported” as well. As this has shaken out over the past month or so, two camps have emerged:

    “I told you so!” and “No way man, it is part of the framework; it will be supported for 10 years!”

    Well they are both wrong.

    The “I told you so” crowd is claiming victory. While Linq to SQL may be dead, it has a lot of traction in the developer community. According to DataDirect Technologies’ recent .NET Data Access Trends Survey (November 24th, 2008), 8.5% of production .NET applications use Linq to SQL as their primary data access method. While this number is not huge, you can’t ignore these developers voting with their feet by using Linq to SQL in their applications.

    The “It is in the Framework” crowd also has it wrong. Just because something is in the Framework does not mean it will have a bright future. Windows Forms is in the framework, yet WPF is the “preferred” UI for Windows apps. ADO.NET is in the framework, and Linq to SQL and EF are supposed to replace it. Is anyone using System.Object anymore, or are we all using generics?

    So what should the Linq to SQL developer do? Throw it all away and learn EF? Use nHibernate?

    No. The Linq to SQL developer should continue to use Linq to SQL for the time being. If the next version of the EF is compelling enough for a Linq to SQL developer to move to EF, their investment in Linq to SQL is transferable to Linq to Entities. If Linq to SQL developers are to move in the future, Microsoft will have to provide a migration path, guidance, and tools/wizards. (The EF team has started this process with some blog posts, but the effort has to be larger and more coordinated.) When should Linq to SQL developers move to the EF? When this happens:

    • The EF feature set is a superset of the Linq to SQL feature set
    • Microsoft provides migration wizards and tools for Linq to SQL developers

    If Microsoft is serious about the Entity Framework being the preferred data access solution in .NET 4.0, they will have to do a few things:

    • Make EF 2.0 rock solid. Duh.
    • Explain to us why the EF is needed. What is the problem that the EF is solving? Why is EF a better solution to this problem? My big criticism of the EF team, and the feedback I gave them at the EF Council meeting, is that they are under the assumption that “build it and they will come” and have not provided a compelling story as to why one should use EF. Make that case to us!
    • Engage with the Linq to SQL crowd. This group can continue to provide feedback to the EF team since Linq to SQL has many features that EF/Linq to Entities needs.
    • Engage with the nHibernate crowd. DataDirect Technologies’ survey says that 18% of all .NET production applications use a framework like nHibernate, OpenAccess, or Spring.NET. (They also included ASP.NET AJAX in this question, which is strange to me.) While you may not win all of these people over, you should find out what they like about their tools.
    • Engage with the “stored procedure” crowd. The EF team has said on several occasions that “Nobody is building an application using stored procedures and straight ADO anymore.” According to DataDirect Technologies’ survey, almost 65% of .NET developers are using straight stored procedures and ADO.NET, and 14% are using the Enterprise Library, which is just a wrapper for stored procedures and ADO.NET. I am not attacking or defending this architectural decision, but the EF team has to realize that if this many of their customers are using this approach, there needs to be guidance, training, and migration tools, not to mention a compelling reason to move to EF.

    How will this shake out? I can’t tell you since I have no idea. The EF team (and the nHibernate crowd) talk like the train has already arrived at the destination, while in reality it has not even left the station. We are still at the station buying tickets (to an unknown destination). Stay tuned.

    posted on Sunday, December 07, 2008 10:14:31 AM (Eastern Standard Time, UTC-05:00)  #    Comments [9] Trackback

    I have been a fan of the cloud since Amazon released its first APIs. We have been waiting for Microsoft to enter the cloud space, and we have seen some pieces drip out over the last year: Astoria (while not cloud itself, it is a RESTful service that allows us to be cloud ready), Live Mesh (which people mistake for a consumer offering, but which is actually a development platform), and SQL Server Data Services (SSDS).

    Last week at PDC, Microsoft spoke about Windows Azure, its cloud services platform. It will consist of web application and service hosting, .NET Services (basically the BizTalk services stack), storage, and data services (SSDS, now just SDS). Some developers at the PDC said, “This is like the ASP model ten years ago; Azure is doomed to fail.” So the question is, will Azure succeed where the ASP model failed?

    The ASP model was about a generic application hosted by an ISP and sold to you as a service. Picture this: instead of installing accounting software from ADP, you would log on to the ADP web site and use the software as a service. This model did not completely fail, but it did not deliver on its mission. It was a lot of .com hype, and it was 10-15 years ahead of its time in both the technology and the business acceptance.

    While things like Live Services and hosted Exchange are part of Azure, Azure is not about the ASP model; it is about hosting your app, services, and data in the cloud. There is a need for this: Amazon EC2 and S3 are quite successful, even with the crowd you would think would never put their data in the cloud: banks. It will take time, but companies will buy into this paradigm and make the shift. The first thing to go into the cloud en masse will be hosted Exchange, then file server capabilities, then applications, then data. Small businesses will go first. It may take years for the shift to be complete, but it will happen. It just makes sense to have your applications hosted in the cloud; why bother worrying about the infrastructure? Infrastructure will be a commodity by 2012. By then most new apps will be hosted in the cloud or will use the cloud infrastructure for .NET Services or data.

    Only 12 years too late! During the .com era, when I was the CTO of a large .com, I spent 65% of my time worrying about infrastructure (bandwidth, RAID arrays, load balancing, switches, etc.). Years later at Corzen, supporting our spidering engines, I still spent about 50% of my time on infrastructure (the only reason it was lower than in the .com era was virtualization). Now when I need more bandwidth or more load balancing, it is just a click of a button. Sure, it is not going to be quite that easy, but even if Azure delivers on 50% of its vision, it will reduce my focus on infrastructure by 95%.

    .NET Services (formerly BizTalk Services) in the cloud will get adopted by developers as it matures and as apps get moved to the cloud. SQL Services will get adopted in version 2, when you can use more relational features such as tables, joins, and views instead of the “flexible entities” approach of the first release.

    Bottom line is that Azure will succeed, but it will take time for the world to catch up to Microsoft’s vision. Amazon (and to some degree Google) have paved the way.

    posted on Monday, November 03, 2008 9:56:29 AM (Eastern Standard Time, UTC-05:00)  #    Comments [2] Trackback

    A few weeks ago the PDC people sent the Regional Directors the goals and theme of today’s keynote by Don and Chris. They said: “It will be Don and Chris writing code in Notepad for an hour!”

    I replied that I hate when Don codes in Notepad, I think it is a silly thing to do. It only confuses the audience and sends a message that this technology is just a hack since you have to use Notepad. Then the other Regional Directors chimed in with loads of reasons why Notepad is not good for a PDC keynote.

    Don and Chris got the message and used Visual Studio, and the keynote was a lot of fun. Don started the keynote with the goal of writing a service that iterates through all of the processes running on the demo box. Chris then wrote a method that kills a process via the service. Don then said, “No presenter would ever use Notepad to deliver a session.” He then opened four instances of Notepad and wrote a call to the service to kill all four Notepad instances. Don then said, “Regional Directors around the world are now applauding.” (We did!) Reading between the lines, Don was saying: “OK RDs, I took your advice, but I am still Don Box!” I love that.

    Don and Chris then pointed the same service at a Live Mesh desktop (where it deleted a folder named notepad). They then hosted the service in Windows Azure, giving them a service running in the cloud that manipulated processes on their desktop. (In the real world you would not do this, but hey, it is pretty cool.)

    After being called out by Don Box in the keynote, I spent the rest of the day looking at the Oslo stuff. Oslo is a new modeling platform for application development that revolves around a language (M), a tool (Quadrant), and a repository (SQL Server).

    M is a language in which you can model your application and data as well as create your own contextual Domain Specific Languages (DSLs). The DSL piece is the best part of M, but I will not write anything about it until later this week so as not to steal the thunder of the M DSL session on Thursday at PDC.

    Here is a simple sneak peek at M in general. I model a type called Employee (which will map to a SQL table) and insert data in code. (At the end of the day this code will produce TSQL.) While this may not appear all that sexy, it will be far more efficient for a team to develop, implement, and maintain applications and services this way (and just imagine putting a DSL on top of it!). You can use a DSL to map text to data, so you can import data really easily or have end users write data for you.

    module HRApplication
    {
        type Employee
        {
            Id : Integer32 = AutoNumber();
            Name : Text;
            Title : Text;
            Salary : Integer32;
        } where identity(Id);

        Employees : Employee*
        {
            { Name = "Vassil", Title = "CEO", Salary = 1000000 },
            { Name = "Steve", Title = "CSO", Salary = 250000 },
            { Name = "Todd", Title = "CE", Salary = 500000 }
        }
    }

    Go get M and the Oslo SDK here: msdn.microsoft.com/oslo.

    Lastly, I ran into my old friend Dan Fernandez and he had me dress up as Dr. Code.

    stevef

    posted on Tuesday, October 28, 2008 7:41:29 PM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] Trackback

    Thousands of developers have flooded LA for the 2008 PDC. The first keynote yesterday highlighted the Windows cloud services platform called Azure. Microsoft is finally getting serious about the cloud by offering storage, hosting, SQL, and .NET services in the cloud. This changes the economics of producing software as well as how we think about infrastructure. Hosting, bandwidth, storage, and management are now a commodity.

    In addition to the cloud, Microsoft has so far shown C# 4.0 and the .NET Framework 4.0, including the Dynamic Language Runtime (DLR). C# 4.0 is entering the world of dynamic languages by adding a static type called dynamic. (Pause for effect.) C# can now interoperate very easily with Ruby and Python, and can do things like COM interop much more easily thanks to support for default and optional parameters.

    This actually seems like a small thing; however, along with generics, delegates, and the LINQ features (anonymous types, lambdas, etc.) added in 2.0 and 3.0, it means you can now eliminate JavaScript in your Silverlight applications and use C# 4.0 in the code-behind. This can change the way we program for the web in a big way.

    C# is being influenced by the dynamic languages like JavaScript, Ruby, and Python. This is a good thing. We even got a sneak peek at C# 5.0 where C# looks extremely dynamic. C# is becoming the best of both worlds.

    posted on Tuesday, October 28, 2008 1:46:48 PM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] Trackback

    This week is the meeting of the Data Programmability Group's Advisory Council. I'll be headed out to Seattle to participate in a conversation with the Data Programmability team on the next version of Microsoft's data access strategy, including the Entity Framework.

    Roger Jennings today pointed out that my dismissal of ORM in general led him to wonder why I was chosen for the Data Programmability Group's Advisory Council. My pal Julie Lerman emailed me a few months ago saying, "I did not know you were a DDD guy?"

    I was glad that Danny Simmons asked me to be on the council since I have participated in several data access councils at Microsoft over the years (including one with Roger Jennings about 11 years ago). I've watched Microsoft move from ODBC, DAO, RDO, ODBCDirect, OLE DB, and ADO.NET, and now to a more conceptual model.

    Sure, I am not a true DDD guy and I do tend to dismiss ORM in general, so my views will insert a different viewpoint into the conversation. Why have a council where everyone thinks the same? The whole point of this conversation is to have a dialog and listen to each other (and learn from each other). By discussing our use cases with Microsoft, we can help them make better design decisions and refine our own views. Anything else is just dogma.

    posted on Sunday, July 27, 2008 10:35:40 PM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] Trackback

    Thursday, July 17, 2008
    "Speaker Idol" Competition

    Subject: 
    You must register at https://www.clicktoattend.com/invitation.aspx?code=129952 to be admitted to the building
    Five technical presentations, with a panel of judges including Mary Jo Foley
    1. Zino Lee: Introduction to F#
    We will see what F# is and what features/differences there are between F# and "imperative" languages like C#
    2. James Curran: Castle Monorail
    MonoRail is a MVC framework for Web Development inspired by ActionPack. It is part of the large Castle Project which includes the Windsor IoC container and the ActiveRecord data mapper.
    3. John Carnevale: Upgrading Legacy Code
    Learn how to read the code and determine a path of action to be taken with the code upgrade. See how to evaluate what could be upgraded, what to rewrite and when to start all over from scratch.
    4. Bill Fugina: Arithmetic in Generic Classes
    Bill will show some examples of the benefits of doing arithmetic in generic classes and some utility classes and interfaces that make it extremely easy to do so
    5. Gerardo Arevalo: Casual and More Hard-Core WCF
    In his demonstration, Gerardo is going to quickly build a pair of WCF client applications to demonstrate the use of the MVP design pattern to guarantee a contract with the clients and facilitate unit testing

    Speaker: 
    Zino Lee has been working at Wall Street investment banks for the past 12 years, and has been doing .NET for 4 years. He is currently a VP and manages a group that takes care of all GUI work for a trading desk. In graduate school at NYU he did some OCaml work when the F# project started.
    James M. Curran is a Senior Developer at BarnesAndNoble.com and as a hobby, the Owner/Operator of NJTheater.com which is being converted into a MonoRail based site (under-development version viewable at www.njtheater.org). Previously, he was a Microsoft MVP for VisualC++.
    John Carnevale is working at Purvis systems stationed at the FDNY converting legacy code to .NET for the Starfire system.
    Bill Fugina works as a software developer for Coleman Insights, a music industry market research company in Research Triangle Park, North Carolina. He visits the office three or four days each month and otherwise telecommutes from his home office in Windsor Terrace, Brooklyn.
    Gerardo Arevalo is relatively new to the New York (Tri-State) area. He is from El Salvador, lived in North Florida, then packed up for the North East to be closer to the techno

    Date: 
    Thursday, July 17, 2008

    Time: 
    Reception 6:00 PM , Program 6:15 PM

    Location:  
    Microsoft , 1290 Avenue of the Americas (the AXA building - bet. 51st/52nd Sts.) , 6th floor

    Directions:
    B/D/F/V to 47th-50th Sts./Rockefeller Ctr
    1 to 50th St./Bway
    N/R/W to 49th St./7th Ave.

    posted on Monday, July 14, 2008 2:43:50 PM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] Trackback

    Unless you have been living under a rock, you should already know about the controversy over the nHibernate mafia's "Vote of No Confidence" in the Microsoft Entity Framework. The manifesto says that the upcoming Entity Framework (to be released soon via .NET 3.5 SP1) is just not usable. As far as ORMs go, it makes some valid points against EF, in order of (my) importance:

    • Lack of lazy loading (The EF team said that they had "explicit" lazy loading, but that was just playing with words, and the team has since retracted that claim.)
    • Lack of persistence ignorance (at best this is inelegant design, at worst it will cripple the performance of your application and make it a burden to maintain)
    • Too many merge conflicts when using a model in a source control environment
    • Bad design for TDD development (and in my opinion, CI as well)

    The manifesto also brings up some Domain Driven Design issues that I feel are less important than the above four bullets. To their credit, the normally reserved EF team has responded quickly and nicely to the manifesto. On the heels of last week's transparent-design-process announcement, Tim Mallalieu gave a detailed reply, which has started a good discussion and wiki. They are listening, and that is good. Since the Entity Framework team is starting work this week on v.next, they have also put together an advisory council including Eric Evans, Martin Fowler, Paul Hruby, Jimmy Nilsson, and myself.

    To some degree the vote of no confidence worked; they got Tim's attention. I think the manifesto has some very valid points, but I also think it lacks professionalism. Anyone can say something sucks; it is more important to give valid feedback, suggest solutions, and engage in a dialog. (The vote of no confidence was just too in-your-face, and a dialog was started only because of the professionalism of the EF team.) In addition, there have been some mafia-style attacks on anyone who does not agree with them, most recently against the always honest and open Julie Lerman.

    So this blog post sounds like an endorsement of the Entity Framework and ORMs in general, right?

    Wrong.

    My first problem with ORMs in general is that they force you into an "objects first" box: design your application, then click a button, and magically all the data modeling and data access code will work itself out. This is wrong because it makes you very application centric, and a lot of the time a database model is going to support far more than your application. In addition, an SOA environment will also conflict with an ORM.

    I prefer to build the application's object model and the data model at about the same time, with a "whiteboarding" approach that outlines the flows of data and functionality across the business process and problem set. Maybe it is the MBA talking, but I tend to be "business and customers first" when I design a system. (Those of you who know me know that I have designed some very large and scalable systems in my day.) I usually like to "follow the money," as Ron Rose of Priceline taught me 10 years ago. In addition, I prefer a very agile design process where you are constantly making changes to your design in the early sprints; the database and the object model are both part of this process.

    My second major problem with ORMs is that they are a solution to a problem that should not be solved. Developers who write object-oriented and procedural code in C# and Java have trouble learning the set-based mathematical theory that governs the SQL language. Developers are just plain old lazy and don't want to code SQL since it is too "hard." That is why you see bad T-SQL: developers try to solve the problem their way, not in a set-based way.
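    The difference between the two mindsets is easy to show in miniature. Here is a minimal sketch (Python with the standard library's sqlite3 module; the orders table and the 10% adjustment are invented purely for illustration) contrasting the procedural row-at-a-time habit with the single set-based statement that does the same work:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE orders (id INTEGER PRIMARY KEY, total REAL, region TEXT)")
conn.executemany("INSERT INTO orders (total, region) VALUES (?, ?)",
                 [(100.0, "East"), (200.0, "West"), (300.0, "East")])

# Procedural habit: pull every row to the client, loop, write back one at a time.
for row_id, total in conn.execute(
        "SELECT id, total FROM orders WHERE region = 'East'").fetchall():
    conn.execute("UPDATE orders SET total = ? WHERE id = ?", (total * 1.1, row_id))

# Set-based: one statement, and the engine does all the work.
conn.execute("UPDATE orders SET total = total * 1.1 WHERE region = 'West'")

rows = [round(t, 2) for (t,) in conn.execute("SELECT total FROM orders ORDER BY id")]
print(rows)  # both approaches apply the same 10% adjustment
```

    Both forms produce the same result, but the set-based statement is shorter, stays in the engine, and gives the optimizer something to work with; that is the whole point of thinking in sets.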

    The premise of EF, LINQ, and NHibernate (and of Luca Bolognese's original premise with ObjectSpaces) is that set-based theory causes an "impedance mismatch" between data access and all the other (more procedural) coding we do, and that it's ORMs to the rescue to resolve the impedance mismatch.

    So ORMs are trying to solve the issue of data access in a way that C# and VB developers can understand: objects, procedural code, etc. That is why they are doomed to fail. The further you abstract developers from thinking in a set-based way, having them write procedural code and letting the computer (the ORM) convert it to a set-based form, the worse off we will be over time.

    What I am saying (and have been saying for a long time) is that we should accept, no, embrace the impedance mismatch!  While others are saying we should eradicate it, I say embrace it.

    ORM tools should evolve to get closer to the database, not further away.

    One of the biggest hassles I see with LINQ to SQL is the typical many-to-many problem. If I have tables of ocean liners, vessels, and ports, I'll typically have a relational linking table that connects the vessels and ports via a sailing. (Can you tell I am working with ocean freight at the moment?) The last thing I want at the object layer is three tables! (And then another table to look up the ocean liner that operates the vessel.) Unfortunately, this is what most tools give me. Actually, I don't even want one table; I want to hook object functionality to underlying stored procedures. I really want a port object with a vessel collection that also contains the ocean liner information. At least the Entity Framework does this; however, I have major concerns about the performance of the code it produces.
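    The shape of that many-to-many hop can be sketched in a few lines. The schema below is a hypothetical stand-in for the real one (the liner, vessel, and port names are invented): one set-based query with three joins flattens the linking table away, and a little folding produces the port-with-vessels object I actually want at the object layer.

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.executescript("""
CREATE TABLE liners  (id INTEGER PRIMARY KEY, name TEXT);
CREATE TABLE vessels (id INTEGER PRIMARY KEY, name TEXT,
                      liner_id INTEGER REFERENCES liners(id));
CREATE TABLE ports   (id INTEGER PRIMARY KEY, name TEXT);
-- The linking table: one row per sailing connects a vessel to a port.
CREATE TABLE sailings (vessel_id INTEGER REFERENCES vessels(id),
                       port_id   INTEGER REFERENCES ports(id));
INSERT INTO liners  VALUES (1, 'Maersk');
INSERT INTO vessels VALUES (1, 'Emma', 1);
INSERT INTO ports   VALUES (1, 'Rotterdam'), (2, 'Singapore');
INSERT INTO sailings VALUES (1, 1), (1, 2);
""")

# One set-based query collapses the three-table hop into the shape the
# object layer wants: port -> vessel -> operating liner.
rows = conn.execute("""
    SELECT p.name, v.name, l.name
    FROM ports p
    JOIN sailings s ON s.port_id = p.id
    JOIN vessels  v ON v.id = s.vessel_id
    JOIN liners   l ON l.id = v.liner_id
    ORDER BY p.name
""").fetchall()

# Fold the flat rows into a port object with a vessel collection.
ports = {}
for port, vessel, liner in rows:
    ports.setdefault(port, []).append({"vessel": vessel, "liner": liner})
```

    The folding step is trivial; the part worth owning is the query, which is exactly the part most ORM tools want to generate for you.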

    The ironic thing I’m now seeing is developers who are lazy and don't want to learn SQL using tools that will produce SQL for them. The SQL is bad, and now those same anti-SQL lazy developers are struggling to read pages of generated and near-unreadable SQL trying to solve performance problems. They’re dealing with SQL that’s more verbose and orders of magnitude harder to understand than what was needed in the first place!

    So where do we go from here? We can't just keep pretending that this mismatch problem can be solved and keep throwing good money after bad. As Ted Neward said, this is the Vietnam of computer science.

    I say that developers should start embracing the impedance mismatch. I also say that Microsoft and other ORM vendors need to realize that embracing the mismatch is a good thing and design ORM tools that allow it. (This is the advice I will give at the EF council.) I am not anti-ORM (even though I did trash LINQ on stage at TechEd last year), but I am pro using the right tool and technology for the right job. ORM is real good for CRUD and real bad at other things. Let's start from there. To quote Ted, "Developers [should] simply accept that there is no way to efficiently and easily close the loop on the O/R mismatch, and use an O/R-M to solve 80% (or 50% or 95%, or whatever percentage seems appropriate) of the problem and make use of SQL and relational-based access (such as "raw" JDBC or ADO.NET) to carry them past those areas where an O/R-M would create problems."
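    Ted's 80/20 split is easy to picture in code. In this hedged sketch (Python/sqlite3; the Customers wrapper and table are invented for illustration and are not any real ORM's API), a toy ORM-style class handles the single-row CRUD it is good at, while the set-based report steps around the wrapper and goes straight to SQL:

```python
import sqlite3

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE customers (id INTEGER PRIMARY KEY, name TEXT, region TEXT)")

class Customers:
    """Toy ORM-style wrapper: fine for simple single-row CRUD, nothing more."""
    def insert(self, name, region):
        cur = conn.execute(
            "INSERT INTO customers (name, region) VALUES (?, ?)", (name, region))
        return cur.lastrowid
    def get(self, row_id):
        return conn.execute(
            "SELECT name, region FROM customers WHERE id = ?", (row_id,)).fetchone()

repo = Customers()
repo.insert("Acme", "East")
repo.insert("Globex", "East")
repo.insert("Initech", "West")

# For anything set-based (reports, bulk updates), step around the wrapper
# and write the SQL directly rather than fight the abstraction.
report = conn.execute(
    "SELECT region, COUNT(*) FROM customers GROUP BY region ORDER BY region").fetchall()
```

    The point is not the toy wrapper but the boundary: simple CRUD through the abstraction, set-based work in SQL.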

    Hopefully "cooler heads will prevail."

     

    Update 6/29/08

    My fellow RD Greg Low brought to my attention that my discussion of LINQ to SQL above is very similar to an email he sent not too long ago. At Triton Works (the ocean containers) we used his example in our own entity-model debate, and I realized after he pointed it out to me that the words above are just too close to his, so I am giving him the credit above. (Though he did not call programmers lazy and anti-SQL; I did.)

    FYI, we went with POAN or Plain Old ADO.NET, ocean containers are old school.

    The inspiration for this post came not only from the public debate on the EF vote of no confidence but also from a private email discussion on NHibernate, EF, and LINQ to SQL between Greg, Andrew Brust, Rocky Lhotka, Sten Sundblad, Joel Semeniuk, Barry Gervin, and myself. I won't tell you who was on which side. If you want to know, you will have to ask them. Hint, hint: Canadians are lazy and anti-SQL. :)

    posted on Friday, June 27, 2008 9:04:49 AM (Eastern Daylight Time, UTC-04:00)  #    Comments [4] Trackback

    Looking over the PDC content for this fall you can definitely see a trend: Cloud Services. This is to counter Amazon's S3 offering.

    First let's define what cloud computing is and what it means. Cloud computing is not Gmail, Hotmail, or even Google Documents. (This is why you won't see Microsoft Office moved to the cloud in a very robust way; that is just web access for popular productivity software.) Cloud computing is replacing an application's hardware and plumbing infrastructure with a service hosted by someone else. For example, let's look at a typical corporate system:

    • Back-end data storage (SQL Server, Oracle)
    • Back-end application server (IIS + .NET, COM+/ES, Tuxedo, even CICS (that dates me))
    • Front end machine to access the application (Either a workstation via a browser or windows app, or mobile device)

    It gets expensive for small and even large firms to build the network, firewalls, domain services, user accounts, and of course the hardware (with its underlying RAID arrays, UPSes, racks, power, etc.). Sometimes this is a barrier to growth (or even entry) for small to medium organizations. Not only do you have to learn all about subnets and RAID arrays, you also have to learn about UPS and power strategy, and manage a development project. (Man, I just want to write some stored procedures; who cares about a UPS and a RAID array!)

    This is a problem, especially for startups. You have to build a rack before you even build a company and a product. That is an expensive, up-front fixed cost. More than likely you will build a rack at overcapacity so you can grow, since it is time consuming to add more capacity later on. Also, you usually have to hire people to help you figure this out. A rack is expensive to build and expensive to maintain.

    Cloud computing is providing the back-end infrastructure, the first two bullets, to firms as a service. The beauty of this model is that you pay for what you use: bandwidth, storage, and clock cycles. So a startup does not have to spend $20,000 on a new rack; they can pay a monthly cost to host their application in the "cloud" and only pay for what they need. In the beginning this will be a small fee, since a startup has no customers, and then the fee will grow (as revenue grows, hopefully!). Amazon S3 is attracting many startups and is changing the dynamics of funding a startup, since they don't need as much expensive hardware and IT expertise.

    So many critics of Microsoft say that they are missing the boat by not putting MS Office up in the cloud to compete with Google Apps. While Microsoft may have to compete with Google one day in this space (I personally prefer offline Office plus a service like SharePoint to share my documents), that day is not today. The real battle will be over the true cloud infrastructure and ultimately the stack.

    Selling customers on your cloud platform locks them into your technology. So if Microsoft offers storage (SQL Server Data Services), application synchronization (Live Mesh), workflow/relay (BizTalk Services), and application-tier (IIS as a service? or at least VPCs) services, they will get developers to use the Microsoft stack (SQL, .NET, etc.) for another generation. Throw in hosted Office and Exchange, and they are really hooked.

    Judging from the PDC content, with sessions all around Microsoft Cloud services, I think that they get it. Too bad we have to wait until October, but I sure hope that it is good stuff.

    posted on Friday, June 20, 2008 11:05:42 AM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] Trackback

    On May 22nd and 23rd  the New York chapter of the International Association of Software Architects (IASA) held its first annual two-day IT Architect Regional Conference where I was a speaker.  This event received the highest rating of any conference run by IASA to date.  It was considered so successful that we are already planning the 2009 version. 

    IASA is a vendor-neutral association of, for, and by architects all over the world. The New York IASA chapter is one of the most successful IASA chapters in the United States. Sponsors of the event included Sun, Oracle, Microsoft, and Robert Half. Although this was a regional conference, attendees came from as far away as Nashville, Tennessee; Boston; and Redmond, Washington.

    The conference featured over 30 speakers from sponsor companies, the local architect community, and others, arranged in common keynote sessions and four breakout tracks (Enterprise Architecture, Software Architecture, Infrastructure Architecture, and Architecture Fundamentals). Keynote presentations included: Software Plus Services, presented by Joseph Williams, Chief Technology Officer, Microsoft Enterprise Services; Interesting Real-world Architectures and the Handbook of Software Architecture, presented by Grady Booch, Chief Scientist, IBM Corporation; The Next Generation SOA Grid--Not Your MOM's Bus, presented by David Chappell, Vice President and Chief Technologist, Oracle; and Technology Strategy in the World, presented by Paul Preiss, President of IASA. The breakout sessions also featured many presentations by local architects from companies such as Bloomberg, Group Health, Hartford Insurance, WeightWatchers, and Microsoft.

    For more information about the conference agenda see http://www.iasahome.org/web/itarc/nyc/agenda.

    posted on Sunday, June 08, 2008 5:40:39 PM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] Trackback

    My lunch with Bill Gates yesterday at Tech*Ed in Orlando was very cool. His first word to us was "super" as was his second.

    After snapping a few photos, we got right down to it. There were about 12 of us from the community who were invited, and we were eager to chat. First question: "Hey Bill, what are you going to do with your free time?" Bill indicated that he is going to be "super" busy and will be working full time on his foundation.

    We then got to talking about schools and how he envisions most schools around the world becoming textbook-free and just using computers, and about how university-level learning will be done via the Internet for lectures. In poor countries, "best of breed" lectures can be put on DVD and repeated. Bill was blown away by the organic chemistry distance learning made available by MIT and wants to bring this model to everyone. He said that the traditional lecture will die at the university level. He was extremely passionate about teachers, schools, and education in general.

    Andrew Brust asked about the UN, and Bill complained that there were too many agencies with more TLAs than Microsoft. (WHO, WFP, etc..)

    I got to finally get a word in edgewise. :) I had lots of questions sent to me by readers of this blog ranging from "What's next" to "Why not just buy the Mets, that is charity at this point."

    I asked about microfinancing. The answer surprised me. While Bill's foundation has given $300 million in total to microfinance, Bill said that micro-lending carries interest rates that are too high, and the opportunities to make loans are not as abundant as you would think. Bill's real passion was for micro-savings.

    He said that too many poor people have no access to banks due to distance and bank fees. So banks become something only the rich have access to in many countries. (Including the US in many regards too; witness check-cashing shops in the inner city.) Bill went on to say that most of the poor will stash their currency somewhere or buy jewelry, only to have it stolen or inflated away. Some will buy livestock as a way to store their wealth and save, only to have it stolen or come down with disease. Bill described a system he worked on that will allow poor people in remote areas to make micro-deposits in the bank via a local retailer. They can then view their balance via that retailer or on their cell phone if they have one, and spend the money via local retailers or via an ATM. He also spoke about the need for a quality interest rate on the micro-savings. This was really amazing; all you hear about these days is micro-loans, but Bill turned the tables on everyone: he wants to combine micro-loans with micro-bank accounts. Makes complete sense.

    The last question, from Kate Gregory, was about how Bill deals with the non-type-A personalities of the public and volunteer sectors. Bill basically indicated that he is results oriented and brings the same passion and project management skills to his non-profit work.

    I can't say that I had lunch with Bill Gates, since he did not really eat; we just kept asking him questions, and he never really got to eat his lunch. All in all he spent about 1.5 hours with us, and it was great. I have to say that it was really inspirational to talk the whole time with him about non-technical issues (not a single techie question was asked). Here is a man who is one of the greatest technical minds (still!) around and the chairman of a Fortune 500 company, and we talked about his passion for his foundation. I am looking forward to what his foundation will do with him working there full time.

    posted on Wednesday, June 04, 2008 7:48:15 AM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] Trackback

    On Monday I will be speaking before representatives of the US House of Representatives about legacy systems. The question is whether to invest in brand-new systems and technology or just to glue things together on top of old systems. It boils down to a question of public policy: should Congress pass laws mandating this, or should it give some autonomy (and budget to attract some talent) to its in-house IT staffs?

    I am working with the Association for Competitive Technology on this issue. We feel that scrapping the old and leapfrogging over a generation or two of technology is the best bet. Get some creative destruction on Capitol Hill from new IT systems. Treat the IT departments like a business, not a governmental agency. Give them budgets and goals and have them develop the applications and processes required. A little autonomy can go a long way.

    posted on Sunday, May 18, 2008 10:01:39 PM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] Trackback

    Thursday, May 15, 2008
    WPF Beyond the Basics: Playing Tricks with the Visual Tree

    Subject: 
    You must register at https://www.clicktoattend.com/invitation.aspx?code=126267 in order to be admitted to the building and attend.
    The Visual tree is one of the core concepts of the WPF framework. All things visible in a WPF application are objects from the Visual tree. In this talk I'll give a quick overview of the Visual Tree and then get into interesting ways of manipulating it. We will also look into the styling and templating aspects of visuals. The ideas presented here should be immediately useful to custom-control developers and application developers in general. The session will be very hands-on with cool demos and live coding! The techniques discussed here were used in my blog posts on ElementFlow, GlassWindow, Drag 'n' Drop with attached properties, Genie Effect, etc.

    Speaker: 
    Pavan Podila
    Pavan Podila has worked on a wide variety of UI technologies with current focus on WPF/Silverlight, Flash/Flex and DHTML. He has a Bachelors and Masters degree in Computer Science with specialization in Graphics and Image Processing. He has been working with .Net since 2004 and WPF since 2005. In the past he has worked with Java Swing, Eclipse plugins, AJAX UI frameworks and Trolltech Qt. His primary interests are in 2D/3D Graphics, Data Visualization, UI architecture and computational art. He blogs actively on http://blog.pixelingene.com.

    Date: 
    Thursday, May 15, 2008

    Time: 
    Reception 6:00 PM , Program 6:15 PM

    Location:  
    Microsoft , 1290 Avenue of the Americas (the AXA building - bet. 51st/52nd Sts.) , 6th floor

    Directions:
    B/D/F/V to 47th-50th Sts./Rockefeller Ctr
    1 to 50th St./Bway
    N/R/W to 49th St./7th Ave.

    posted on Tuesday, May 13, 2008 7:38:08 AM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] Trackback

    I have always watched the development community's fascination with Ruby on Rails with much concern. It seemed like it was gaining much popularity because it was easy to use and spit out web sites based on an easy to use framework rather quickly. What's wrong with that?

    A lot. Rails makes it easy to build an application by drag and drop and to stitch things together with some glue code. It gives you a platform for most of the plumbing and never forces you to understand the mechanics of objects or other more sophisticated coding techniques. This leads to some fast and easy web sites that don't scale past the RoR framework. Great for a fun site or a prototype, but not so good if you need to scale past what the RoR framework has to offer.

    Some sites are learning this the hard way. Twitter has had some major outages recently and some very public scaling problems. They are mostly a RoR shop, and there are rumors that they are going to swap out RoR, rumors that they of course deny. If Twitter moves away from Ruby, it could do much damage to Rails' adoption at startups with large aspirations. I am not saying that all of Twitter's problems are caused by RoR; some very large consumer-facing sites are built on Rails. Rather, they are a byproduct of using an application framework (not to be confused with an API framework like .NET or J2EE) to build a large public site. Rails gives you a framework and makes it simple to build sites that fit into that general framework. Once you step off the reservation, you are in for a world of hurt. If you are building a site that fits the Rails mold and you have good engineers, you may be able to scale to a gazillion users, but you lose most of the ease of use of Rails by doing so. If you are building a site that does not fit the Rails mold, then you will have scaling issues, mostly because Rails was not designed to do what you want it to do.

    Some in the Rails community have broken ranks; the most entertaining is Zed Shaw, a god in the RoR community, with his infamous exit rant "Rails Is a Ghetto" back in January.

    What I am really saying is that there are no shortcuts. You have to learn how to code and use platforms that scale to the goals of your application. Sometimes this means writing your own code and object model and data access layer.

    posted on Friday, May 02, 2008 11:19:30 AM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] Trackback

    Thursday, April 17, 2008
    2008 Community Launch: Show Me The Data

    Subject: 

    You must register at https://www.clicktoattend.com/invitation.aspx?code=126827 in order to be admitted to the building and attend.
    With the release of Visual Studio 2008 and the .NET Framework 3.5 comes a dizzying array of facilities for storing, querying and presenting data. Between new features in ADO.NET’s core; LINQ; The Entity Framework; new ASP.NET Data source and data bound controls; and the new data binding models in WPF and Silverlight 2, there are now so many new data features, that it presents a bit of a crisis. How are you supposed to learn all of these new technologies, much less continue to use the older ones with mastery? The answer is to understand each of these data access and data binding technologies in the context of the others. Many common concepts exist between these models and many of them can be combined. If you learn the generalities, you'll be able to master the specifics that interest you.
    With that in mind, this session will start with a quick look at ADO.NET, typed datasets, Windows Forms and ASP.NET (including ASP.NET AJAX) data binding, and the enhancements to them in Visual Studio 2008. We'll then look at LINQ to DataSets, LINQ To SQL, The Entity Framework and LINQ to Entities and see how to use them with the old binding models. We'll finish with a look at WPF, its rich data binding model and how well it translates to Silverlight 2.0.

    Speaker:  Andrew J. Brust
    Andrew J. Brust is Chief, New Technology at twentysix New York, a Microsoft Gold Certified Partner in New York City. Andrew is lead author of Programming Microsoft SQL Server 2005 (Microsoft Press), serves as Microsoft Regional Director for New York and New Jersey, is a Visual Basic MVP and a member of Microsoft’s Business Intelligence Partner Advisory Council. Andrew is a Vice-Chairman of the New York Software Industry Association (NYSIA), a member of INETA’s Speaker Bureau and is a highly rated speaker at conferences throughout the U.S. and internationally. Often quoted in the technology industry press, Andrew has 20 years' experience programming and consulting in the Financial, Public, Small Business and Not-For-Profit sectors. He can be reached at andrew.brust@26ny.com.

    Date:  Thursday, April 17, 2008

    Time:  Reception 6:00 PM , Program 6:30 PM

    Location:   Microsoft , 1290 Avenue of the Americas (the AXA building - bet. 51st/52nd Sts.) , 6th floor

    Directions: B/D/F/V to 47th-50th Sts./Rockefeller Ctr
    1 to 50th St./Bway
    N/R/W to 49th St./7th Ave.

    Swag List:

    • 5 NFR Launch Kits, including:
        o NFR, legal copy of Windows Server Enterprise 2008 (64-bit and 32-bit)
        o NFR copy of VS 2008 Standard Edition
        o SQL 2008 CTP (64-bit and 32-bit)
        o Voucher for eval-only SQL 2008 Standard, redeemable when SQL 2008 is generally available
    • 2 NFR copies of Windows Vista Ultimate with SP1 with Windows Live Services, including a 90-day trial of Windows OneCare
    • 3 vouchers for a free 1-year subscription to TechNet Plus Direct
    • 1 Windows Server 2008 Application Readiness Resource Kit
    • 1 SQL Server 2008 Technical Readiness Kit
    • 1 .NET Framework 3.5 Developer Resource Kit
    • 2 copies, Virtualization For Dummies
    • MS Learning Solutions 40% off exam vouchers
    posted on Sunday, April 13, 2008 2:07:35 PM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] Trackback

    The dot com era was crazy. Companies that had no business plan, no revenue, no customers, but a great team, web site and investors would IPO for $100 million. Everyone had stock options and got rich on paper. Once all of these companies went bankrupt and were delisted in the crash of April 2000, everyone was poor again since their options were underwater and worthless. The common phrase is “I wallpapered my house with my useless stock options.”

    I too wallpapered my apartment with Zagat Survey stock options. I was the Chief Technology Officer for two years during the .com era and saw it all. I got there as a consultant in 1998, when the company had just 30 employees and the server for the web site was under Sal's desk. (Sal being the entire IT department at the time.) When I joined as CTO in late 1999, I helped my colleagues secure $34 million in venture funding from General Atlantic and Kleiner Perkins and build a great team.

    The place became a true .com, with 27-year-old Harvard MBAs running around, employees bringing their dogs to work, an air hockey table, and a web site that had one mission: drive traffic. The company swelled up to 200 people, but I built out an amazing web farm and a .NET application a year before .NET shipped. We filed for an IPO. Then the crash happened. I had to preside over massive layoffs and the eventual loss of my own motivation, and I left in January 2002 to start Corzen.

    Today it was announced that Zagat is up for sale at a valuation of at least $200 million. When General Atlantic and KPCB invested at the height of the .com bubble, Zagat was valued at $96 million. That means that all the employees and former employees with vested stock options (including myself) now have .com options that are above water. Well above water. I am going to scrape down the wallpaper and deposit them into my brokerage account (I hope Fidelity Investments does not mind the glue). I guess the .com era is not over if some companies are still paying out.

    Why would Zagat sell? They do a nice little business of book sales (an estimated 5.5 million books sold a year) and online paid subscriptions. The problem is that Zagat is so Web 1.0. While it is technically user-generated content (the ratings come not from reviews but from surveys), Zagat is still stuck in the Web 1.0 mindset (no one pays for content anymore! Wait, that was Web 1.0 too!) and has to compete with Chowhound, Facebook applications, blogs, and scores of other user-generated sites. Its business model is obsolete in a Web 2.0 world. It is adapt or die. Or adapt or sell to the highest bidder and let them figure out how to make Zagat 2.0.

    posted on Monday, January 14, 2008 9:49:20 PM (Eastern Standard Time, UTC-05:00)  #    Comments [1] Trackback

    At the start of a new year we have an opportunity to be reflective and think about the experience of the past year and how we can spot trends and apply any lessons learned in the new year. At the start of 2008 I am equally reflective on the past year and what it has taught me. A lot has happened in the last year: I completed 75% of my MBA degree, I sold my company Corzen and find myself mired in a new startup, I traveled so much that the government had to give me a new passport, and I attended many weddings and unfortunately a few funerals of friends and loved ones.

    While a lot has gone on, I find myself looking at the impact of technology on my life and the world in general. In a year when blogs have helped shape the presidential debates and VoIP has made communication so much easier, the world has gotten smaller. Microsoft released new versions of Windows, Office, and Visual Studio, and as usual I got to travel the world to explain them to developers. In the past year I had the pleasure of visiting many countries and several parts of the United States. As I visit these places, I develop close friendships. I seem to attend more weddings overseas than at home!

    Because of technology, the world is smaller. You realize just how small the world is when major news becomes personal. For example, minutes after Benazir Bhutto was killed, I received several text messages and emails from my friends and colleagues in Pakistan. A bomb goes off in Hyderabad, India, and Kim Tripp texts me that she is ok, since she knows that I know she is there.

    Why I Love Technology

    My career in technology is completely accidental. I was studying for a PhD in history and went to Wall Street after I graduated from university to earn some money before graduate school. One day I was in my manager's office as he wrote 20 reviews in an MS Word template, hitting "Save" instead of "Save As…" each time. He asked me to retrieve the documents (but asked me not to read them, since they were my and my colleagues' annual reviews and bonuses). I told him it was impossible, since he had overwritten them all. He told me to report to the IT department the next day for a new (and better) job. My knowledge of Save As from DOOM games got me my first technology job!

    I love technology because technology is a great disruptive force. It levels the playing field. It creates new business models. It breaks up monopolies. It makes the world smaller. Think of life 20 years ago in the United States. A political leader in another country is killed. What do you do? Turn on the TV and get the "official" version of the story at 6pm or 11pm. In 2008 we get instant stories from local sources, with videos of the event, almost immediately on blogs from folks on the scene. We also have CNN and other networks. What if you want to call your loved ones overseas to see if they are ok? AT&T would charge you $3.55 a minute to connect to Pakistan. In 2008, there is no more AT&T as we knew it, and the call is free on Skype, or just $0.02 a minute on VoIP.

    Take the music industry. In the past you had to deal with the big, evil monopoly of the RIAA. In 2008, artists are promoting their own music on MySpace and their own web sites, and MP3 files are available for $0.99 on iTunes, or free if you are willing to break the law (and I still download free music to protest the RIAA). Now Radiohead has broken the mold, bypassing the RIAA and the record labels by posting their new album on the web and letting you name your own price to download it. How is that for a "strategic inflection point" for an industry?

    The list goes on and on. Just try looking for a job today, who uses the newspaper anymore? Or the Yellow Pages? Technology creates a new opportunity for us all.

    Do you believe that software can change the world?

    I had the pleasure of working, in a small way, on a project that can greatly help society. Microsoft sponsored a project built by InterKnowlogy for The Scripps Research Institute. The Scripps Research Institute in La Jolla, California, is one of the largest private, nonprofit biomedical research organizations in the US and a world leader in the study of the structure of biological molecules. Scientists at Scripps Research wanted a better way to organize biological research information and share it with their colleagues. InterKnowlogy developed an application built on .NET 3.0 with WPF and Windows Vista, giving scientists a powerful tool to visualize and annotate research results. The application allowed for faster scientific collaboration, easier access to data, and a dynamic development process. (You can read the full case study on Microsoft.com.)

    I came across this application about 18 months ago. It used technology to break down barriers in cancer research. In the past, if a doctor was looking at a sample, they would annotate it and then mail it to other doctors, who would look at it and mail it to more doctors. This is called "peer review" and is very important, but it takes a ton of time. InterKnowlogy built an app that used SharePoint, Office 2007, and WPF to make this collaboration instant and permanent. The application is speeding up peer review and collaboration to levels not imagined just a few years ago. It was so impactful that Tim, the owner of InterKnowlogy, got to help Steve Ballmer in New York with the Vista launch. I was invited to hang with the big boys since Tim, via technology, is a good friend of mine.

    I then suggested to someone at Microsoft that they should help pay for phase II of the application. They liked it so much that they “hired” me (for free!) to recruit a virtual team of four developers overseas to help Tim with Phase II. I put out a call for developers on my blog, nothing else. I got hundreds of responses. Ultimately I referred four developers, one each from: Egypt, Mexico, Poland, and India. Microsoft paid their salaries and Tim gave them tasks to do. They worked on it for six months and came up with an amazing application. We went on .NET Rocks this summer to talk about it.

    Later this year I met the Polish developer in Bulgaria at a conference. Tim hired him and now he is working full time at InterKnowlogy. When he met me he told me point blank that I changed his life. I was moved by that and realized the power of technology. Not only did we work together to cure cancer by empowering doctors and researchers, we were helping people in other countries get new jobs that make a difference and more money, all from home.

    How Technology Will Change the Future

    This is the tip of the iceberg. This is what little old me could accomplish in 2007; I was able to put together a team of developers from three continents and really help cure cancer (the doctor from Scripps will probably get the Nobel Prize) without leaving my house. What can you do?

    posted on Wednesday, January 02, 2008 6:44:35 PM (Eastern Standard Time, UTC-05:00)  #    Comments [1] Trackback

    Yesterday Microsoft released BizTalk Services CTP to the public. This one little CTP changes everything. Honest.

    In the Beginning

    It used to be very hard to build a distributed application. The pioneers in this field were Napster, Seti@Home and ICQ. To make applications like this work, you needed to have clients identify themselves and a message relay on the back end. Think of a switchboard for your applications, telling user 1 how to communicate with user 2. If you wanted to build these connected systems, it would require a lot of infrastructure, and a tremendous amount if your application became popular. This has always been the barrier to entry.

    The Enterprise Service Bus

    As time passed and web services came onto the scene, things got easier. A lot of the hard part of the plumbing was taken care of: transport was easy via HTTP, and speaking the same language was easy with SOAP. As things got easier, people started pushing web services to the limit, and the vendors started to really support building connected systems. After nearly a decade of XML and SOAP, most developers take it for granted. Enterprises now rely on this technology.

    Enter the Enterprise Service Bus (ESB). ESB is an acronym (we always need a TLA, don’t we?) that is hot right now. The notion is that you have a set of shared services in an enterprise that act as a foundation for discovering, connecting and federating services. This is the natural evolution of the technology: as enterprises rely on it and themselves grow more federated, they will standardize discovering, connecting and federating services.

    Internet Service Bus

    As Clemens argued yesterday, the release of the BizTalk Services CTP creates the first Internet Service Bus. Clemens says:

    Two aspects that make the idea of a "service bus" generally very attractive are that the service bus enables identity federation and connectivity federation. This idea gets far more interesting and more broadly applicable when we remove the "Enterprise" constraint from ESB it and put "Internet" into its place, thus elevating it to an "Internet Services Bus", or ISB. If we look at the most popular Internet-dependent applications outside of the browser these days, like the many Instant Messaging apps, BitTorrent, Limewire, VoIP, Orb/Slingbox, Skype, Halo, Project Gotham Racing, and others, many of them depend on one or two key services must be provided for each of them: Identity Federation (or, in absence of that, a central identity service) and some sort of message relay in order to connect up two or more application instances that each sit behind firewalls - and at the very least some stable, shared rendezvous point or directory to seed P2P connections. The question "how does [MSN] Messenger work?" has, from an high-level architecture perspective a simple answer: The Messenger "switchboard" acts as a message relay.

    Changing Business Models

    In order to build distributed applications (and make money!) you have to scale to support the load your users and customers will add. This forced businesses to spend a disproportionate amount of money on hardware and too much time building the plumbing software. Let’s take my business, Corzen, for example. Corzen collects specific data from the internet via spidering (like Google). Corzen then crunches the data: we de-dupe it, aggregate it, match it with Dun & Bradstreet and US Govt data, and then apply some statistical models. We then deliver this “crunched” data to customers on a weekly basis.

    The value add is the “crunching” of the data. Where do you think 75% of my technology budget goes? You guessed it: the spidering infrastructure. We spider 24x7 and collect between 4 and 10 million records each week. As you can imagine, our customers want more and more sites to be spidered, and more frequently. We spent a year building an amazing XML-based, queued, batch-oriented distributed application with WCF. We can load URLs into a queue and then go to a web page where we can send that job to several spider servers via WCF. Our spider servers are low-cost Virtual Private Servers at ISPs around the world running a simple spidering engine that uses reflection to dynamically compile C# code instructions and apply RegEx patterns. When Corzen was young, we manually started these engines via RDP. As we scaled, we had to build this system.
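    The pipeline described above (a central URL queue fanned out to simple spider engines that apply RegEx patterns) can be sketched in miniature in Python. This is purely illustrative: the page contents and pattern are made up, and Corzen's real system used WCF and dynamically compiled C#.

```python
import re
from queue import Queue

# Hypothetical stand-in for pages a spider server would fetch over HTTP.
PAGES = {
    "http://example.com/jobs/1": "<li class='job'>DBA - New York</li>",
    "http://example.com/jobs/2": "<li class='job'>C# Developer - Boston</li>",
}

# A spidering "instruction": the regex pattern to apply to each page.
JOB_PATTERN = re.compile(r"<li class='job'>(.*?)</li>")

def spider(url_queue, results):
    """One spider engine: drain the queue, applying the regex to each page."""
    while not url_queue.empty():
        url = url_queue.get()
        html = PAGES[url]          # a real engine would fetch over HTTP here
        results.extend(JOB_PATTERN.findall(html))

url_queue = Queue()
for url in PAGES:
    url_queue.put(url)             # the central job loader fills the queue

records = []
spider(url_queue, records)         # in practice, fan this out to many engines
print(records)                     # raw records, ready for the "crunching"
```

    In the real system the queue and the engines live on different machines, which is exactly the plumbing that an Internet Service Bus takes off your hands.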

    How does an Internet Service Bus change the Corzen business model and cost structure? Every few weeks we get new requests from customers to add more sites to our list, and we have to add spidering capacity. Since spidering is so basic, I have always wanted to have our customers spider for us (or fan out the spidering to third parties for a small fee). I could then offer our customers a discount on their monthly fees based on the amount of spidering they run for us.

    This is a win-win-win: they get more data faster, and I lower my overhead and pass the savings on to the customer. This drastically changes my focus. I spend 50% of my time managing the spidering, worrying about capacity, and expanding the spidering infrastructure. Corzen’s cost structure changes, as does the relationship with our customers: we become partners in the data acquisition “plumbing” and Corzen can focus on the analytics, which is what the customers really want us to do. In other words, we get out of the raw material business (spidering) and focus on the manufacturing of the product (the analytics). Corzen was recently acquired by a company that does spidering as well, but not the analytics. This is an exciting move, since we can join forces on spidering capacity and Corzen can focus more on the analytics.

    Today, the problem with having our customers do spidering (or paying third parties to spider for us in their idle time) is simple: with over 100 customers, try getting our WCF application to work on all of their servers, through our firewall and theirs. BizTalk Services solves this problem. BizTalk Services will provide a globally addressable name for Corzen’s service and securely expose that service to the Internet from behind a firewall or NAT as shown here.

    As Dennis argues:

    Use the Relay at http://connect.biztalk.net. We’ve shipped an SDK with a few samples showing you how you use the relay and identity services together. If you’re familiar with Windows Communication Foundation, you’ll find this trivial to use (by design!). Basically, your service opens at a URI on the connect.biztalk.net machines. Then a client connects to that URI and can start sending messages. We don’t want to be in the way of your app, so our relay will immediately try to establish a direct connection between clients. More details on how this all works in a later post. Here’s a quick diagram that describes it at a high level.
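    The relay pattern in the quote (a rendezvous point that forwards messages between clients that each sit behind firewalls) can be reduced to a toy in-process sketch. This Python snippet only illustrates the switchboard idea; it is not the actual connect.biztalk.net API:

```python
class Relay:
    """Toy switchboard: endpoints register under a name; messages are forwarded."""
    def __init__(self):
        self.inboxes = {}

    def register(self, name):
        # Like opening a service at a URI on the relay.
        self.inboxes[name] = []

    def send(self, to, message):
        # The relay forwards the message; sender and receiver never
        # connect to each other directly (both may sit behind firewalls).
        self.inboxes[to].append(message)

    def receive(self, name):
        return self.inboxes[name].pop(0)

relay = Relay()
relay.register("service")          # the service opens its endpoint on the relay
relay.register("client")
relay.send("service", "hello")     # the client sends via the relay, not directly
print(relay.receive("service"))    # -> hello
```

    The real relay goes one step further, as the quote notes: it uses the rendezvous point only to broker a direct connection between the two parties whenever it can.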

     

    Microsoft vs. Google

    Everyone always likes to compare the “war” between Microsoft and Google. Maybe it is Google’s stock price, or the popularity of its search engine and Gmail. I look at the companies as light years apart. At its core, Google is an advertising company (highlighted by its recent acquisition of DoubleClick). Sure, Google will expose some APIs for developers; however, everything it does, from Gmail to search, is to gain more eyeballs for its ads.

    Microsoft is a company about selling Windows. What has made Windows so popular is that Microsoft gives developers amazing tools to build applications around Windows, Office and the Internet. The strength of Microsoft is the developer community surrounding its products. You always hear about the next great thing that is going to “take Microsoft down.” The only thing that will take Microsoft down is a company with a compelling platform that also provides tools for developers to create applications on that platform.

    The latest thing to come take Microsoft down is Software as a Service. Think Google Spreadsheet: businesses will all use the spreadsheet in the sky and store their data on Google’s servers. Ditto with Gmail; why bother with Exchange?

    Microsoft does have a cool differentiating factor: its hybrid approach. Microsoft is offering Software + Services.  As I said in eWeek, with such a huge commitment to the OS and other installed software already, Microsoft is actually in a position to deliver software and a service on top of it.

    The marketplace wants a hybrid solution, and Microsoft is the only one who can deliver it in the short to medium term, giving Microsoft a competitive advantage. Everyone thought Google Docs would kill Office, but in reality, while Google Docs is cool, enterprises have issues today with using it offline and inside of a browser (copy and paste is strange, and so is right-clicking in a cell). Personally, I use Google Spreadsheet to keep track of simple things but Microsoft Excel for the more processor-intensive operations. In addition, I work offline a tremendous amount, and I don’t trust Larry and Sergey to store my very sensitive documents.

    Think of Excel as software + services. Excel can be sold and run on your computer. You can store your documents locally or up in the cloud. Processor intensive operations can utilize your local super-fast Pentium 1million Processor and your 2 GB of RAM. Collaborative efforts can be handled by the cloud.  Additional services like statistical number crunching or anything that needs to be distributed can be handled by the cloud. Multiple editors and viewers, in the cloud. You get the point.

    This is where BizTalk Services come in. It is an early way for developers to deliver Software plus Services.

    The Future of Business

    As I said, this changes everything. We can all agree that distributed applications are the future. In order to make money you have to scale to support the load your users and customers will add. This forced businesses to spend a disproportionate amount of money and focus on technology (and pay CTOs way too much!)

    We are so focused on technology that CEOs and venture capitalists are desperately trying to learn the basics of technology, time they should be spending working through business models and looking for competitive advantages.

    BizTalk Services, and ultimately all of the Software plus Services offerings (from other vendors too, not just Microsoft), will change the way we do business in 5-10 years. Imagine if every business had to run its own switchboard for its office phones: an oil company or a bank would have to develop the technical expertise to run the phones. That infrastructure is solved by the phone company (and now VoIP!). In the future, businesses will only have to focus on their core businesses, and most software will run locally with services up in the cloud, drastically reducing the internal investment in core IT infrastructure. It’s a brave new world out there.

    posted on Thursday, April 26, 2007 3:33:17 PM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] Trackback

    Thursday, February 15, 2007
    Amazon Web Services Presentation: Web-Scale Computing

    Subject:  What's possible in a post-Web 2.0 world? Innovation continues at a mind-bending pace, and this presentation by Mike Culver from Amazon Web Services will showcase some thought-provoking new directions that web services are headed in. The presentation will provide an overview of Amazon Web Services, including a web service named Mechanical Turk that allows computers to make requests of people, an online storage service, and more. You’ll also see a C# coding demo that enables you to understand what’s involved when you want to consume one of these services. Amazon spent ten years and over $2 billion developing a world-class technology and content platform that powers Amazon web sites for millions of customers every day. Most people think “Amazon.com” when they hear the word; however, developers are excited to learn that there is a separate technology arm of the company, known as Amazon Web Services or AWS. Using AWS, developers can build software applications leveraging the same robust, scalable, and reliable technology that powers Amazon's retail business. AWS has now launched ten services with open APIs for developers to build applications, with the result that over 200,000 developers have registered on Amazon's developer site to create applications based on these services.

    Speaker:  Mike Culver, Amazon Web Services

    Mike Culver joined Amazon Web Services after many years at Microsoft, where he managed a team of developer evangelists who helped launch the .NET Framework. At Amazon, Mike is focused on developer-to-developer engagements and how Web services can be used in new and innovative ways. When not up to his eyeballs in code and technology, Mike can be found at the airport flying his 1947 Luscombe taildragger. In fact, he runs a flying website about General Aviation (www.PopularAviation.com).

    Date:  Thursday, February 15, 2007

    Time:  Reception 6:00 PM , Program 6:15 PM

    Location:   Microsoft , 1290 Avenue of the Americas (the AXA building - bet. 51st/52nd Sts.) , 6th floor
    Directions: B/D/F/V to 47th-50th Sts./Rockefeller Ctr
    1 to 50th St./Bway
    N/R/W to 49th St./7th Ave.

    posted on Monday, February 12, 2007 10:31:18 AM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback

    We return to Barcelona for TechEd 2006!  Of course I will be doing some sessions (see below) as well as judging the “Speaker Idol” contest.

    See you in Spain!

    SQL312 T-SQL Querying: Tips and Techniques

    Stephen Forte , Richard Campbell

    Wed Nov 8 10:45 - 12:00

    Take your queries to the next level! This interactive session focuses solely on advanced querying techniques to get the most out of your SQL Server. See a series of real-world examples to extract data from your databases in ways you've never seen before. Techniques demonstrated include an ultra-fast way to do crosstab queries in SQL Server, running totals and ranking. Along the way you'll get some insight into how SQL Server works and the new capabilities in SQL Server 2005.

     

     

    SQL407 XQuery Deep Dive: How to Write and Optimize Your XQuery

    Stephen Forte

    Thu Nov 9 09:00 - 10:15

    SQL Server 2005 provides deeply integrated native support of XML. Besides storing the data as XML, it provides XQuery support as the key to unlock the information stored inside the XML document. This session gives you an introduction to SQL Server's XML and XQuery support and it demonstrates how to write and optimize your XQuery expressions. In particular, it discusses the use of XML Indices and how to read XQuery generated query plans.

     

     

    SQLWD04 The Query Governor: SQL CLR in Action

    Richard Campbell , Stephen Forte

    Thu Nov 9 17:30 - 18:45

    See how .NET takes SQL Server 2005 to a whole new level! In this Whiteboard Discussion, learn how to build a query governor: a set of tools for evaluating whether or not a query should be run. Most query governors are simple limiters, automatically cancelling queries when they run too long or aborting queries with too high a cost. The CLR makes it possible to programmatically evaluate the cost of a query without executing it! Combined with some techniques for determining the state of the server, you can build a governor that is flexible and smart. This interactive Whiteboard Discussion makes it easy to explore different applications of this technology beyond the query governor.

     

    posted on Tuesday, October 31, 2006 3:35:47 PM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] Trackback

    I will be speaking at the NY Metro SQL Server Users Group on Thursday at 6pm on XQuery in SQL Server. Hope to see you all there.

    Topic:

    Using XQuery to Retrieve and Manipulate XML Data with SQL Server 2005

    Speaker:

    Stephen Forte, Microsoft Regional Director

    Date:

    6:00 PM on October 26, 2006

    Place:

    Microsoft Office in Manhattan

     

    The Axa Financial Building

     

    1290 6th Avenue, NY, NY

    Due to new security guidelines at the building, you will have an easier time getting in if you confirm your attendance via email to joelax@dbdirections.com. Otherwise you'll have to wait until someone comes downstairs to sign you in. Also, remember to have a photo ID with you.


    Blogs, web services and general interoperability have driven the proliferation of XML in recent years. With all of that XML out there, there needs to be an easy way to incorporate XML data with SQL Server relational data.

    This session will look at how to use XQuery to retrieve and manipulate XML data inside the database. We'll start with a look at the new XML datatype in SQL Server 2005, then at validating with XML Schema (XSD), and then at creating XML indexes for use with XQuery statements. After a brief look at the W3C XQuery specification, we'll quickly move to SQL Server’s implementation of XQuery 1.0. We'll incorporate XQuery in SELECT and WHERE clauses to retrieve information, as well as see how to manipulate XML data with XQuery DML.
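    SQL Server's XQuery support runs inside the engine, through methods on the xml datatype such as query() and value(). As a language-neutral illustration of the same path-based retrieval idea, here is an XPath query over a small document using Python's standard library (an analogy only, not T-SQL):

```python
import xml.etree.ElementTree as ET

# A document like one you might store in a SQL Server 2005 xml column.
doc = ET.fromstring("""
<orders>
  <order id="1"><total>250</total></order>
  <order id="2"><total>75</total></order>
</orders>
""")

# Path-based retrieval, analogous to an XQuery expression in a SELECT clause.
ids = [o.get("id") for o in doc.findall("order")]
totals = [int(o.findtext("total")) for o in doc.findall("order")]
print(ids, sum(totals))
```

    In SQL Server, the same shape of expression would live inside a query against the xml column, and an XML index would keep it fast.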

    Pizza and refreshments will be served at the meeting, and there will be a drawing for several giveaways.

    posted on Tuesday, October 24, 2006 8:29:58 AM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] Trackback

    Our next meeting is Thursday. Because of security you now have to register for this free event! Register here.

    Thursday, October 19, 2006
    CAB and the Smart Client Software Factory

    Subject:  Microsoft Patterns & Practices Team’s Composite UI Application Block (CAB) and Smart Client Software Factory (SCSF) ease the development of modular, extensible, and maintainable smart clients.

    Starting with a general, theoretical overview of smart clients, we’ll quickly move into a deep examination of CAB centered on working code. We’ll dig into the anatomy of CAB/SCSF, uncovering some key design patterns used in the toolset: Model-View-Presenter, Publish-Subscribe, and Dependency Injection. Throughout the talk we’ll share best practices and consider design decisions for achieving modularity and extensibility in your own smart clients with the CAB/SCSF tools and guidance.

    By the end of the tour, those new to CAB should find their learning curves greatly reduced. Intermediate-to-advanced CAB hackers will take away some hard-fought tips and tricks for taking their composite smart clients and plug-in architectures to the next level.

    Speaker:  David Laribee, President, Xclaim Software

    David Laribee is President of Xclaim Software, an ISV offering document, claim, and policy management software for the commercial property and casualty insurance industry. He has 10+ years of experience designing, developing, and architecting enterprise applications with Microsoft technologies. David has worked with the .NET Framework since day zero, in internal IT, product development, and rapid prototyping contexts across a wide variety of industries. He writes about agile practices, software architecture, and the business of software on his blog at http://laribee.com/.

    Date:  Thursday, October 19, 2006

    Time:  Reception 6:00 PM , Program 6:30 PM

    Location:   Microsoft , 1290 Avenue of the Americas (the AXA building - bet. 51st/52nd Sts.) , 6th floor
    Directions: B/D/F/V to 47th-50th Sts./Rockefeller Ctr
    1 to 50th St./Bway
    N/R/W to 49th St./7th Ave.
     

    posted on Tuesday, October 17, 2006 4:25:50 PM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] Trackback

    Thursday, January 19, 2006 2:30 PM - Thursday, January 19, 2006 8:00 PM (GMT-05:00) Eastern Time (US & Canada)
    Language: English-American

    Microsoft Corporation
    Central Park Conference Room
    1290 Avenue of the Americas
    6th Floor Microsoft Facility New York New York 10104
    United States

    General Event Information
    Products: Visual Studio.

    Recommended Audience: Developer and IT Professional.

    Do you want to speak directly to the team that has brought Visual Studio 2005 to market?

    Prashant Sridharan, Group Product Manager, Developer Marketing, and the NY .net User Group are pleased to announce an in-depth look at Visual Studio 2005 and what it can do for Enterprise Customers. The session will feature a project room where you can get assistance on your projects from Microsoft and community experts.

     Agenda

    • 2:30 - 3:00 PM - Registration/Welcome
    • 3:00 - 4:00 PM – Visual Studio Team System + Team Foundation Server
    • 4:00 - 4:45 PM – What’s New for Web Developers? (ASP.NET)
    • 4:45 - 5:00 PM Coffee break
    • 5:00 - 5:45 PM – What’s New for Smart Client Developers? (Windows Forms/Click-Once Deployment)
    • 5:45 - 6:15 PM Pizza

    The session will be followed by the NY .net User Group meeting.  This will feature a presentation on: What's New in the .Net Framework?

    For more information on the NY User Group meetings, please visit: http://www.nycdotnetdev.com/

    posted on Thursday, January 12, 2006 2:22:49 PM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback

    My brother, Richard Campbell, and Carl will be in town tomorrow night, come on and check it out:

    .NET Rocks NYC!

    .NET Rocks NYC! at our very own user group. Be part of the fun. Register here:
    https://msevents.microsoft.com/CUI/EventDetail.aspx?EventID=1032280187&Culture=en-US

    Friday, October 14, 2005 6:00 PM - Friday, October 14, 2005 9:00 PM (GMT-05:00) Eastern Time (US & Canada)
    Language: English-American

    Microsoft Corporation
    1290 Avenue of the Americas
    6th Floor New York, New York 10104
    United States


    General Event Information
    Products: .NET.

    Recommended Audience: Developer.

    That's right, America! Carl Franklin, Richard Campbell, Geoff the sound guy, and a makeshift podcasting crew are hitting the highway in an RV on a coast-to-coast road trip from Boston to San Francisco October 12th to November 7th, 2005!

    They'll be hosting evening events and producing DNR shows in 18 cities: Boston, Hartford, New York, Philadelphia, Baltimore, Washington DC, Raleigh, Atlanta, Jacksonville, Nashville, Memphis, Dallas, Houston, Austin, Phoenix, San Diego, and Los Angeles; and ending at the launch of Visual Studio .NET 2005 in San Francisco!! 

    In each city, a sneak peek at new and exciting things coming in Visual Basic 2005 and Mobility Development in Visual Studio 2005, and lots of giveaways including DNR swag, sponsor software, and even mobile devices!! AND post-event DNR interviews with local developers who are doing cool things with .NET 1.1 and the beta of 2.0!

    There will be parties along the way! Of course, they'll be blogging and podcasting photos and video (for the next DNR Movie), and a new .NET Rocks! show online every day during the road trip! Ok, maybe not EVERY day, but they're producing a show in every city!

    posted on Thursday, October 13, 2005 5:26:38 AM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] Trackback

    SQL Server 2005 Notification Services (SSNS) is what you would traditionally call “middleware,” or part of an application server. SQL Server is traditionally a “back end,” or an infrastructure component of your architecture. SSNS is middleware provided by Microsoft as part of the SQL Server 2005 package. Microsoft has not traditionally provided a lot of middleware, but ever since the success of “Viper,” or Microsoft Transaction Server (MTS), Microsoft has been providing more and more reliable and scalable middleware. Since SSNS is middleware and not part of the core database engine, it is a separate setup option that is turned off by default when you initially set up your SQL Server 2005 server. You will need to select SSNS at initial setup to install it.

    If you choose to install SQL Server 2005 Notification Services, all of its components and support files install in a separate subdirectory (typically the \Notification Services subdirectory of your default SQL Server installation). If you are like me and need to know what everything is, you can see by inspecting the subdirectory that SSNS is made up of a few components; the important ones are explained here:

    microsoft.sqlserver.notificationservices.dll: the actual guts of Notification Services. This is a managed .NET assembly that contains the core code for SSNS and the built-in providers you use to retrieve data and create notifications and subscriptions.

    NSService.exe: the executable shell around microsoft.sqlserver.notificationservices.dll used for running as a Windows Service. Each instance of NSService.exe runs as a Windows Service and can be managed by SQL Server Management Studio. Instances of NSService.exe are independent of SQL Server 2005 instances.

    Providers: the extensible framework that SSNS is built around, consisting of event providers, formatters, and delivery protocol providers.

    Nscontrol.exe: a utility program used when you compile your application; it generates the SQL Server database and objects that the Windows Service application uses to retrieve data and create notifications.

    XML Schemas: when you generate an SSNS application, you use XML-based configuration files. The schemas provided as part of the framework validate those documents.

    Sample applications: more than with most other tools, the community has used the sample applications to fully show off the power of Notification Services. We will explore them as well.

    This is good stuff. More later…

    posted on Tuesday, August 09, 2005 4:03:48 PM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] Trackback

    In October 2003, Microsoft released Visual Studio Tools for Office (VSTO). This new group of class libraries brings .NET Framework-based development to Word and Excel 2003 by enabling developers to write managed code in C# or VB .NET that responds to events within the Word and Excel automation models. While not as integrated as Visual Basic for Applications (VBA), VSTO builds on the tradition of VBA and COM-based automation and provides developers with significant benefits for building Office solutions, including a familiar coding experience, improved deployment, and improved security.

    The Visual Studio 2005 release of VSTO brings significant enhancements to the development of solutions based on Excel and Word 2003. Building on top of VSTO 2003, Visual Studio 2005 Tools for Office will address some of the biggest hurdles facing Office solution developers today, including separation of data and view elements, deployment of Office solution assemblies, server-side and offline scenarios, and seamless integration with the Visual Studio toolset.

    One of the primary successes of VSTO 2005 is the separation of “data” from “view” in Office documents in order to simplify the creation of Office-based solutions. Today, Excel spreadsheets and Word documents consist of an amalgamation of either cell values or text (representing the data) and descriptive information about that data, such as the font (representing the view). Because Word and Excel have no built-in concept of programmatic data the way Microsoft Access does, developers are limited in their ability to effectively develop solutions around the data stored within the documents.

    VSTO 2005 will separate the data from the view in Office documents by enabling data to be embedded as an independent XML data island. This provides a well understood and easily addressable structure that developers can rely on when programming in addition to the benefit of offline support for views of the data. The developer is able to separate presentation (view) and data, and is thus able to update the data directly without concern of writing presentation code. Typed data sets will be used to provide a schema-oriented programming model for interacting with the data island, ensuring IntelliSense support for the managed code being written. Data binding will be used between the data island and the view to keep these two in sync. The developer will also be able to add validation code to the data that is independent from the document view components. Typed DataSets are now exposed as partial classes so you can add validation code very easily and encapsulate it as part of the DataSet itself.

    Programming directly against data by way of an XML schema-based model provides improved productivity for developers over previous coding paradigms. Code that works with data does not need to address the Excel and Word object models at all. This simplifies much of the code involved in building Office solutions and shields your data code from changes in the document. The resulting code is loosely coupled because it does not rely on hard-coded references to specific cells, ranges and tables that can be arbitrarily moved around by end users; rather, your code directly accesses the XML data island.
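    Here is a tiny sketch of that loose coupling, with hypothetical data and names (this is not the VSTO API, just the idea of addressing a typed data island instead of cell coordinates):

```python
import xml.etree.ElementTree as ET

# The "document": imagine a spreadsheet view plus this embedded XML data island.
data_island = ET.fromstring(
    "<expenses><trip dest='NYC' amount='400'/></expenses>"
)

# Data code talks to the island, never to a cell address like "B7", so the
# user can rearrange the spreadsheet view without breaking this code.
trip = data_island.find("trip")
trip.set("amount", "450")          # update the data; the view re-syncs later

print(ET.tostring(data_island, encoding="unicode"))
```

    The view layer's only job is then to bind to the island, which is exactly the data-binding role VSTO 2005 assigns to the document.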

    Working with XML data islands enables new server-side opportunities. Most importantly, the data island embedded in the document can be manipulated without starting the individual Office application. This is a major shift from the current model, under which, in order for code to manipulate the contents of a document, Excel or Word must be installed and running. This limitation blocked many solutions from being created, such as programmatically creating Office documents from within an ASP.NET application.

    The VSTO 2005 runtime will support instantiation on a server without the need to instantiate and run Excel or Word. The data island in a document can then be manipulated from the server-side code as any XML data can. When the user opens the associated Office document the view would be re-synchronized with the data island and the user would be automatically presented with the updated data. In this scenario, Excel and Word are not needed to write to the data on the server, but rather only to view it on the client, limiting potential security holes. This updated model will also provide higher scalability and the ability to perform high performance batch processing of multiple documents (such as T&E documents) containing data islands on the server.

    Storing the data in a data island also provides a way to enable rich offline scenarios. When a document is first requested from the server or first opened by the user, the data island will be filled with the most recent data. The data island can then be cached in the document and made available offline. The data could then be manipulated by the user and by code without a live connection. When the user reconnects, the changes to the data could be propagated back to a server data source by code that you provide.

    In addition to improving the data programming model, VSTO 2005 introduces enhancements to the way developers programmatically access user interface elements, such as ranges, lists, and bookmarks. Developers can write code today to manipulate these elements, but they are limited by the extent to which the Office object models expose events, properties, and methods. For example, the Excel object model provides a WorkSheet_Change event, but does not provide similar events for individual cells or ranges, creating the need for additional code to handle the occurrence of a change to a specific element. VSTO 2005 introduces enhancements to the Excel and Word object models in the area of these user interface elements. Elements such as cells, ranges, lists, and bookmarks will become first-class controls that are easily accessible in your code. Each control will be uniquely identified, will enable data binding, and will provide a more complete event model, making it easier for the developer to manipulate Word and Excel.
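    The gap described above is essentially an observer-pattern problem: today you filter one document-wide WorkSheet_Change event, whereas VSTO 2005 promises a change event per element. A minimal sketch of a per-cell change event (illustrative Python, not the actual VSTO object model):

```python
class Cell:
    """A 'first-class control': uniquely identified, with its own change event."""
    def __init__(self, name):
        self.name = name
        self.value = None
        self._handlers = []

    def on_change(self, handler):
        self._handlers.append(handler)

    def set(self, value):
        self.value = value
        for handler in self._handlers:   # fire the event for this cell only
            handler(self.name, value)

changes = []
total_cell = Cell("TotalCost")
total_cell.on_change(lambda name, v: changes.append((name, v)))
total_cell.set(99)                       # no document-wide event filtering needed
print(changes)
```

    With per-element events like this, the extra dispatch code you write today against WorkSheet_Change simply disappears.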

VSTO 2005 will also make it much easier to develop applications for Excel and Word with Visual Studio. With VSTO 2003, developers wrote managed code in Visual Studio .NET and then had to switch to Excel or Word in order to create the user interface. In VSTO 2005, Excel and Word will be hosted directly in the Visual Studio 2005 IDE as designers, with live documents. Developers will be able to design Office documents within the Visual Studio environment using the full collection of Windows Forms controls in Excel and Word by simply dragging and dropping managed controls, including third-party controls, from the Toolbox. Just as in other Visual Studio designers, double-clicking a managed control in Excel or Word will invoke the code view, in which customizations can be written inside the auto-generated event handler for that control.

    Managed control hosting within Word and Excel documents, combined with Excel and Word integration within the Visual Studio IDE, will make Office just another target platform for Visual Studio developers in addition to Windows Forms, ASP .NET, Mobile and Web Services.

What does this mean for InfoPath? I'm not sure.

     

    posted on Wednesday, August 03, 2005 8:49:29 AM (Eastern Daylight Time, UTC-04:00)  #    Comments [92] Trackback

    Come to mine today:

    DBA304  Advanced Querying Techniques, Tips & Tricks Using Transact-SQL
    Speaker(s): Richard Campbell, Stephen Forte
    Session Type(s): Breakout
    Track(s): Database Administration
    Day/Time: Monday, June 6 3:15 PM - 4:30 PM Room: S 310 A
    Take your querying to the next level! This session gets away from the fundamentals of SQL queries and into the hard stuff. See two experts in SQL Server compare and contrast querying techniques between SQL Server 2000 and SQL Server 2005. This session has a series of real world examples to show how creative SQL queries can generate solutions in record time. Some techniques you'll learn include how to do crosstab queries that take seconds to execute instead of hours, exploiting sub-queries and taking advantage of self-joining. Along the way, get some insight into how SQL servers work, as well as how SQL Server 2005 is going to make advanced querying even easier.
    posted on Monday, June 06, 2005 12:44:10 AM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] Trackback

     

So last night during the geek night session at the SDC, the Dutch, inspired by Richard Campbell, called me out on my SMO Backup and Restore GUI and its progress meter. They thought I was hacking it, rather than actually showing the true progress of the backup. Here is the progress meter in action; as the database backup makes progress, we update the progress meter:

    [screenshot: the progress meter updating during a database backup]

To do a backup programmatically you can use SMO (see yesterday's post). Begin by setting up the variables:

     

Server svr = new Server(); //assuming the local server
Backup bkp = new Backup();

Cursor = Cursors.WaitCursor;

Then you have to set the device to back up to and which database to back up. Notice in the comments the code for the progress meter:

     

try
{
    string strFileName = txtFileName.Text;
    string strDatabaseName = txtDatabase.Text;

    bkp.Action = BackupActionType.Database;
    bkp.Database = strDatabaseName;

    //set the device: File, Tape, etc.
    bkp.Devices.AddDevice(strFileName, DeviceType.File);
    //set this when you want to do an incremental backup
    bkp.Incremental = chkIncremental.Checked;

    //progress meter setup
    progressBar1.Value = 0;
    progressBar1.Maximum = 100;
    progressBar1.Value = 10;

    //SMO raises the PercentComplete event as the backup runs;
    //we update the progress meter in the event handler

    //raise the event every 10 percent of progress
    bkp.PercentCompleteNotification = 10;
    //wire up the event handler that increments the progress meter
    bkp.PercentComplete += new PercentCompleteEventHandler(ProgressEventHandler);

    //this does the backup
    bkp.SqlBackup(svr);
    //alert the user when it is all done
    MessageBox.Show("Database Backed Up To: " + strFileName, "SMO Demos");
}
catch (SmoException exSMO)
{
    MessageBox.Show(exSMO.ToString());
}
catch (Exception ex)
{
    MessageBox.Show(ex.ToString());
}
finally
{
    Cursor = Cursors.Default;
    progressBar1.Value = 0;
}

     

     

Here is the ProgressEventHandler; notice that I made it generic enough to call from both the backup and restore methods!

     

public void ProgressEventHandler(object sender, PercentCompleteEventArgs e)
{
    //move the progress bar to the reported percent complete
    progressBar1.Value = e.Percent;
}
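The restore side is not shown in the post, but the same handler wires up the same way on SMO's Restore class; a minimal sketch, assuming the same svr, txtFileName, and txtDatabase from the backup code above:

```csharp
//minimal restore sketch reusing the shared ProgressEventHandler;
//assumes svr and the form controls shown in the backup example
Restore rst = new Restore();
rst.Database = txtDatabase.Text;
rst.Devices.AddDevice(txtFileName.Text, DeviceType.File);
rst.PercentCompleteNotification = 10;
rst.PercentComplete += new PercentCompleteEventHandler(ProgressEventHandler);
rst.SqlRestore(svr);
```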

     

    posted on Tuesday, May 31, 2005 6:28:53 AM (Eastern Daylight Time, UTC-04:00)  #    Comments [1] Trackback
    DBA304  Advanced Querying Techniques, Tips & Tricks Using Transact-SQL
    Speaker(s): Richard Campbell, Stephen Forte
    Session Type(s): Breakout
    Track(s): Database Administration
    Day/Time: Monday, June 6 3:15 PM - 4:30 PM Room: S 310 A
    Take your querying to the next level! This session gets away from the fundamentals of SQL queries and into the hard stuff. See two experts in SQL Server compare and contrast querying techniques between SQL Server 2000 and SQL Server 2005. This session has a series of real world examples to show how creative SQL queries can generate solutions in record time. Some techniques you'll learn include how to do crosstab queries that take seconds to execute instead of hours, exploiting sub-queries and taking advantage of self-joining. Along the way, get some insight into how SQL servers work, as well as how SQL Server 2005 is going to make advanced querying even easier.
     
    posted on Friday, April 29, 2005 8:13:37 AM (Eastern Daylight Time, UTC-04:00)  #    Comments [1] Trackback

    The auction should go live in 24 hours. Bookmark this post as I will put the link here and the start date as soon as we have it!

     

This auction is a charitable contribution. Bidders will pay for an hour each of consulting time to benefit the tsunami victims of Banda Aceh. ALL of the money will go to help the victims (and there is a tax benefit for you as well).

     

    Bid for an hour of a .NET Celebrity Consultant’s time. Winners can pick the brain of a .NET Expert for an hour (highest bidders will be first in the “draft” for the consultant assigned to them). Winners can call, email or IM the consultant and use the hour to answer that nagging question, do a code review, or just get some general .NET advice.

There will be 30 winning bids. eBay rules require that all 30 winners pay the lowest winning bid price. So you are required to pay the lowest bid amount, but are encouraged to pay your final bid (we will invoice you for both; it is your choice how much you want to donate to IDEP, with payments via PayPal). Here are the participants: RDs and INETA speakers from all 6 continents and 12 countries!

    Michelle Leroux Bustamante, Kimberly L. Tripp, Jonathan Goodyear, Andrew Brust, Richard Campbell, Adam Cogan, Malek Kemmou, Jackie Goldstein, Ted Neward, Kathleen Dollard, Hector M Obregon, Patrick Hynds, Fernando Guerrero, Kate Gregory, Joel Semeniuk, Scott Hanselman, Barry Gervin, Clemens Vasters, Jorge Oblitas, Stephen Forte, Jeffrey Richter, John Robbins, Jeff Prosise, Deborah Kurata, Goksin Bakir, Edgar Sánchez, Thomas Lee, J. Michael Palermo IV, Vishwas Lele, and John Lam:

    Bios:

    Michelle Leroux Bustamante (CAN/USA),

Michelle Leroux Bustamante is Principal Software Architect of IDesign Inc., Microsoft Regional Director for San Diego, Microsoft MVP for XML Web Services, and BEA Technical Director. She has over a decade of experience developing applications with VB, C++, Java, C#, and VB.NET and working with related technologies such as ATL, MFC, and COM. At IDesign, Michelle provides training, mentoring, and high-end architecture consulting services, focusing on Web services, scalable and secure architecture design for .NET, and interoperability. She is a member of the International .NET Speakers Association (INETA), a frequent conference presenter, conference chair for SD's Web Services and .NET tracks, and is frequently published in several major technology journals. Michelle is also Program Advisor for UCSD Extension and is the .NET Expert for SearchWebServices.com.

     

    Jeffrey Richter (USA)

    Jeffrey Richter is a co-founder of Wintellect, a training, debugging, and consulting firm dedicated to helping companies build better software, faster. Over the years, Jeffrey has consulted for many companies including Intel, DreamWorks and Microsoft. In fact, for Microsoft, he has contributed both design and code to the following products: Windows (all 32-bit and 64-bit versions), Visual Studio .NET, Microsoft Office, TerraServer, the .NET Framework, "Longhorn" and "Indigo". Even today, Jeffrey is still consulting with Microsoft's .NET Framework team (since October 1999) and XML Web Services and Messaging Team ("Indigo") (since January 2003). He is the author of several best selling .NET and Win32 programming books. Jeffrey is also a contributing editor to MSDN Magazine where he authors the .NET column and has written many feature articles.

     

    Jeffrey holds both helicopter and airplane licenses and is a member of the International Brotherhood of Magicians. He also enjoys playing drums and keyboards. He attends concerts regularly to indulge his passion for jazz bands. He also loves to travel and explore new places. Jeffrey can usually be found tinkering with some new technology living his life on the bleeding edge. His lot in life is to always want to purchase something that should be shipping any day now. Jeffrey lives in Bellevue, WA with his wife Kristin, their son Aidan, and their cat Max.

     

    Kimberly L. Tripp (USA)

    Kimberly is a SQL Server MVP and a Microsoft Regional Director and has worked with SQL Server since 1990. Since 1995, Kimberly has worked as a Speaker, Writer, Trainer and Consultant for her own company SYSolutions, Inc. (www.SQLskills.com) where she focuses on creating interesting and educational content around building scalable and available SQL Server-based systems. Focusing mostly on performance tuning and availability, Kimberly frequently writes for SQL Server Magazine, was a technical contributor for the SQL Server 2000 Resource Kit and co-authored the MSPress title SQL Server 2000 High Availability. Kimberly has lectured for Microsoft Tech*Ed, SQL Server Magazine Connections, PASS and VSLive and is consistently a top rated speaker. Kimberly works closely with Microsoft to provide new and interesting technical resources including the SQL Server 2000 High Availability Overview DVD – featuring more than 9 hours of in-depth technical content, demos and peer chats with MVPs.

     

    Clemens Vasters (Germany)

Clemens Vasters is co-founder and Chief Technology Officer of the Germany-based developer services firm newtelligence AG. newtelligence specializes in providing world-class developer education on Microsoft technologies as well as architectural consulting, architectural review, and project coaching services. Clemens Vasters has over 14 years of professional experience as a developer and architect, is the author of several books, principal architect of the Microsoft examples “Proseware” and “FABRIQ”, and is one of Europe’s most popular conference speakers on Microsoft technologies. In 2004 alone, he spoke at over 50 events in 24 countries, including Microsoft TechEd USA, Microsoft TechEd Europe, the Microsoft Longhorn Developer Preview, and the Microsoft EMEA Architect Forum. That year he also received the Microsoft MVP Award 2004 as Solution Architect and the “Outstanding Microsoft Regional Director Award” for his contribution to the Microsoft Regional Director program.

     

    Scott Hanselman (USA)

Scott Hanselman is Chief Architect and Voyager SDK Product Manager at the Corillian Corporation, an eFinance enabler. He has twelve years of experience developing software in C, C++, VB, COM, and most recently in VB.NET and C#. Scott is proud to have been Oregon's MSDN Regional Director for the last four years, developing content for, and speaking at, Developer Days and the Visual Studio.NET Launch in both Portland and Seattle. Scott was in the top 5% of audience-rated speakers at TechEd in Dallas and spoke at PDC 2003. Scott also presented at the Windows Server 2003 and VS.NET 2003 Launches in Seattle. He's spoken internationally on Microsoft technologies in Asia and Africa, and has co-authored three books from Wrox Press. In 2001, Scott spoke on a 15-city national tour with Microsoft, Compaq, and Intel featuring Microsoft technologies and evangelizing good design practices. In 2002, he was a highly rated speaker at TechEd Malaysia, giving three sessions, including one on the .NET Framework. Last year, Scott spoke at the Windows Server 2003 Launch event in four PacWest cities. Scott and Corillian also participate in a number of working groups with the Web Services Interoperability Organization (WS-I). His thoughts on the Zen of .NET, programming, and Web services can be found on his blog at http://www.computerzen.com.

     

    John Robbins (USA)

    John Robbins is a cofounder of Wintellect, where he heads up the consulting and debugging services side of the business. He also travels the world teaching his Debugging .NET Applications and Debugging Windows Applications course so that developers everywhere can learn the techniques he uses to solve the nastiest software problems known to man. As one of the world's recognized authorities on debugging, John takes an evil delight in finding and fixing impossible bugs in other people's programs. John is based in New Hampshire USA, where he lives with his wife, Pam, and the world-famous debugging cats, Pearl and Chloe. In addition to being the author of the books Debugging Microsoft .NET and Windows Applications (Microsoft Press 2003) and Debugging Applications (Microsoft Press, 2000), John is a contributing editor for MSDN Magazine, where he writes the Bugslayer column. He regularly speaks at conferences such as Tech-Ed, VSLive, and DevWeek. Prior to founding Wintellect, John was one of the early engineers at NuMega Technologies (now Compuware NuMega), where he played key roles in designing, developing, and acting as project manager for some of the coolest C/C++, Visual Basic, and Java developers' tools on the market. The products that he worked on include BoundsChecker (versions 3, 4, and 5), TrueTime (versions 1.0 and 1.1), TrueCoverage (version 1.0), SoftICE (version 3.24) and TrueCoverage for Device Drivers (version 1.0). He was also the only developer at NuMega with a couch in his office. Before he stumbled into software development in his late 20's, John was a paratrooper and Green Beret in the United States Army. Since he can no longer get adrenaline highs by jumping out of airplanes in the middle of the night onto unlit, postage-stamp-size drop zones carrying full combat loads, he rides motorcycles at high rates of speed - much to his wife's chagrin.

     

    Jonathan Goodyear (USA)

    Jonathan is the President of ASPSOFT, Inc. He has been working with .NET since before it was made available to the general public. Jonathan is a contributing editor for both Visual Studio Magazine and asp.netPRO Magazine, and frequently speaks at major technology conferences such as VSLive, ASP.NET Connections and .NET user groups through the International .NET Association (INETA). Jonathan wrote one of the first books about .NET development, Debugging ASP.NET, by New Rider's Publishing, and appeared in a video, Visual Studio .NET - An Introduction, by WatchIT.com. Jonathan has been awarded Most Valuable Professional (MVP) status by Microsoft and has recently been named a Regional Director.

     

    Andrew Brust (USA)

    Andrew J. Brust is Chief, New Technology at Citigate Hudson, Inc., a Microsoft Gold Certified Partner specializing in Business Intelligence and custom database applications built with .NET, SQL Server, and other Microsoft technologies. Prior to joining Citigate Hudson, Andrew was the President of Progressive Systems Consulting, which he founded in 1994 and merged with Citigate Hudson in 2004. Andrew is Microsoft's Regional Director for New York and New Jersey, a contributing editor to Visual Studio Magazine, a regular speaker and Conference Chair at VSLive and a featured speaker at other conferences throughout the U.S. and internationally. Andrew has over 15 years of experience programming and consulting in the Financial, Public, Small Business, and Not-For-Profit sectors. Mr. Brust is a Vice-Chairman of the New York Software Industry Association (NYSIA) Board.

     

    Richard Campbell (Canada)

Richard Campbell has been working with computers since 1977 and is the president of Campbell & Associates, based in Vancouver, British Columbia, Canada. He consults for a number of companies in North America, focusing on building high-performance, large-scale web sites using Microsoft technology. Richard is also one of the technical editors for Access/VB/SQL Advisor and co-author of the Advisor Answers column. In addition to consulting and writing, Richard speaks at conferences around the world.

     

    Malek Kemmou (Morocco)

Abdelmalek Kemmou (Malek), founder and CEO of kemmou consulting, a consulting company based in Casablanca, Morocco, is a senior consultant, a skilled trainer, and a leader of the developer community in North Africa. Malek is recognized as an expert on Microsoft technology (certified on most Microsoft products and technologies: 23 MCP certifications at last count), with a strong background in other technologies (J2EE/Java, Perl, C++, Cisco, etc.) and certifications in some of them (Sun Certified Java Programmer, Cisco Certified Network Professional). He specializes in solution architectures, integration technologies, and mobility. Malek speaks at various international conferences around the world.

     

     

    Goksin Bakir (Turkey)

Goksin Bakir is the founder of Yage Ltd., a software company based in Istanbul, mainly dedicated to vertical manufacturing markets and consultancy. He acts as the Chief Software Engineer for Yage. He also works as a part-time faculty member in the MIS department of Bogazici University. In addition to his more than 15 years of experience in the IT industry, Goksin is the Microsoft Regional Director for the Middle East and Africa. MSDN Regional Directors are independent third-party advocates of Microsoft technologies whose mission is to inform, educate, and congregate the Windows development community.

    Goksin also acts as the Regional Manager for INETA Middle East and Africa. He is in the INETA Speakers Bureau and has spoken to many groups worldwide.

Goksin’s main areas of interest are Microsoft technologies and software architecture, including .NET application development, security, internet/intranet application architecture, and B2B application integration. He has completed many projects with his teams; his recent work includes enterprise solutions on .NET technologies. Goksin is a frequent speaker at international conferences and seminars such as TechEd, MDC, PDC, and NDC. He also contributes to IT publications such as PC Week and IT Weekly, and has received many excellence awards from leading software and IT companies.

     

    Adam Cogan (Australia)

Adam Cogan is the Chief Architect at SSW, a Microsoft Certified Partner specializing in Office and .NET solutions. At SSW, Adam has been developing custom solutions since 1990 for businesses across a range of industries such as government, banking, insurance, and manufacturing, for clients such as Microsoft, Quicken, and the Fisheries Research and Development Corporation. Adam develops with Microsoft technologies such as SQL Server 2000, WinForms and WebForms using both VB.NET and C#, Access 2002, Outlook 2002/Exchange Server 2000, and now Office 2003, often using n-tier architecture. One of his latest projects was the Smart Tag implementation for Quicken Australia. Adam is one of only two Microsoft Regional Directors in Australia. In this role, he regularly presents for Microsoft, such as at TechEd USA and Australia, and visits Microsoft headquarters in Seattle to learn the latest on Microsoft strategic directions and to undertake training in development technologies.

     

    Edgar Sánchez (Ecuador)

Edgar Sánchez is co-founder and CEO of Logic Studio, a software house based in Ecuador specializing in the development of custom software solutions using object-oriented technologies. Edgar has been writing business applications for almost 20 years, coming from Pascal and C in text screens, through PowerBuilder, Lotus Notes, and Java, and finally to .NET, which he has embraced enthusiastically since its first betas. Currently Edgar helps big projects define their architecture and select the tools used at every tier, but he still loves to write real-world code as often as possible. He has a penchant for numerical computing and functional programming, but in his real life he loves playing with his kids, jogging, reading novels, and going to the movies.

     

    Jackie Goldstein (Israel)

Jackie Goldstein is the principal of Renaissance Computer Systems, specializing in consulting, training, and development with Microsoft tools and technologies. He has 20 years of experience developing and managing software applications in the U.S. and Israel, and is known for his ability to help developers understand and take advantage of new technologies. Jackie is a Microsoft Regional Director, the founder of the Israel VB User Group, and a featured speaker at international developer events including VSLive!, TechEd, Microsoft Developer Days, and SQL2TheMax. Jackie also works closely with Microsoft both in Israel and in the U.S. and was awarded Microsoft's "Regional Director of the Year." He is a member of the INETA Speakers Bureau and the lead co-author of a book on database programming with .NET ("Database Access with Visual Basic.NET", Addison-Wesley, ISBN 0-67232-3435). At the end of 2003, Microsoft had Jackie do a VB Upgrade Tour in 10 different cities throughout Europe. In December 2003, Microsoft recognized Jackie as a Software Legend!

     

    Ted Neward (USA)

As a technical lead and architect, I have led and managed teams implementing client/server solutions using a variety of languages, tools, and operating systems. I have complete software lifecycle experience, from conception through beta test, ship, and maintenance, along with excellent communication skills, interacting with technical and non-technical audiences. I bring passion to software development, because I love what I do. As an instructor, I spend significant amounts of time bringing developers "up to speed" on the latest technologies: Java, C#, servlets, JSPs, EJB, and more. I've mentored programmers who knew nothing of Java and turned them into Java experts by project's end.

     

    Kathleen Dollard (USA)

    Kathleen Dollard is a consultant, author, trainer, and speaker. She’s been a Microsoft MVP since 1998, wrote “Code Generation in Microsoft .NET” (Apress) and is a regular contributor to Visual Studio Magazine. She speaks at industry conferences such as VSLive, DevConnections, and Microsoft DevDays as well as local user groups. She’s the founder and principal of GenDotNet. Her passion is helping programmers be smarter in how they develop by learning to use Visual Studio, XML related technologies, .NET languages, code generation, unit testing, and other tools to their full capacity. She’s currently working on full life cycle improvements, such as better debugging and capturing business intent in metadata and test definitions. When not working, she enjoys woodworking, snowshoeing, and kayaking depending on the outdoor temperature.

     

    Hector M Obregon (Mexico)

    Hector Obregon is the Chief Executive Officer and co-founder of emLink, a Mexico City based ISV providing mobility solutions to clients nationally. Hector is also the Microsoft Regional Director for the Mexico City region. He speaks regularly at industry conferences like Developer Days, Comdex, and TechEd. Hector has served as the technical architect of mobile solutions including Sales Force Automation, Mobile Collection, and Software Distribution and Support solutions. Prior to emLink, Hector served as the CEO of Air-Go Technologies in San Francisco, CA.

     

    Patrick Hynds (USA)

    Patrick Hynds, MCSD, MCSE+I, MCDBA, MCSA, MCP+Site Builder, MCT, is the Microsoft Regional Director for Boston and the CTO for CriticalSites. Named by Microsoft as a Regional Director, he has been recognized as a leader in the technology field. An expert on Microsoft technology (with at last count 55 Microsoft certifications) and experienced with other technologies as well (Websphere, Sybase, Perl, Java, Unix, Netware, C++, etc.), Patrick previously taught freelance software development and Network Architecture. He has been a successful contractor who enjoyed mastering difficult troubleshooting assignments. A graduate of West Point and a Gulf War veteran, Patrick brings an uncommon level of dedication to his leadership role at CriticalSites. He has experience in addressing business challenges with special emphasis on security issues involving leading-edge database, web and hardware systems. In spite of the demands of his management role at CriticalSites, Patrick stays technical and in the trenches acting as Project Manager and/or developer/engineer on selected projects throughout the year.

     

    Fernando Guerrero (Spain)

Fernando G. Guerrero is one of the founders of Solid Quality Learning. He has worked with SQL Server since 1993, and for more than 20 years he has been designing information systems and providing training and mentoring in many countries worldwide. His teaching experience started in 1981, when he was a lecturer at the Universidad Politécnica de Valencia, and continued through many international projects where mentoring was a key part of his tasks, until he became a Principal Technologist and SQL Server Product Consultant at QA plc., the leading IT training and consulting company in the United Kingdom. Fernando has presented at numerous conferences including TechEd, PASS, VBUG, VBITS, the MCT Conference, SQL Server Magazine Live, and DevWeek. Fernando is a Civil and Hydrologic Engineer, MCDBA, MCSE+I, MCSD, MCT, a SQL Server MVP (Most Valuable Professional), and an INETA speaker. His book "Microsoft SQL Server 2000 Programming By Example", from QUE, was published in April 2001.

     

    Kate Gregory (Canada)

Kate Gregory is the Microsoft Regional Director for Toronto and a founding partner of Gregory Consulting. Based in Peterborough, Ontario, Gregory Consulting has been providing consulting and development services throughout North America since 1986, specializing in software development with leading-edge technologies, integration projects, and technology transfer. They also provide training, mentoring, and technical writing services. Kate Gregory is the author of over a dozen books, including Microsoft Visual C++ .NET 2003 Kick Start. She teaches .NET, XML, UML, and C++ and is in demand as an expert speaker, with numerous cross-Canada tours for Microsoft Canada and sessions at DevDays, TechEd (USA, Europe, Africa), and VSLive Toronto, among others. Kate is a C++ MVP, a founding sponsor of the Toronto .NET Users Group, the founder of the East of Toronto .NET Users Group, a member of the INETA speakers bureau, and a member of the adjunct faculty at Trent University in Peterborough.

     

    Joel Semeniuk (Canada)

    Joel Semeniuk is a founder and VP of Software Development at ImagiNET Resources Corp, a Manitoba based Microsoft Gold Partner in Ecommerce and Enterprise Systems. Joel is also the Microsoft Regional Director for Winnipeg, Manitoba. With a degree in Computer Science from the University of Manitoba, Joel has spent the last twelve years providing educational, development and infrastructure consulting services to clients throughout North America. Joel is the author of "Exchange and Outlook: Constructing Collaborative Solutions", from New Riders Publishing and contributing author of "Microsoft Visual Basic.NET 2003 KickStart" from SAMS. Joel has also acted as a technical reviewer on many other books and regularly writes articles for .NET Magazine and Exchange and Outlook Magazine on a variety of infrastructure and development related topics.

     

     

    Barry Gervin (USA)

    Barry Gervin is a Principal Consultant and Instructor with ObjectSharp. He is a technical leader with over 15 years experience helping development teams design and build large software projects. Barry is skilled in the Architecture, Design and Development of Databases and Distributed Systems. Barry is also a MS Regional Director.

     

    Jorge Oblitas (Peru)

Jorge Oblitas is a Portals and Intranets solutions consultant from Peru who has worked closely with major financial and telecommunications companies in his country to create Knowledge Management solutions that help them work better. His Intranet functional designs have been recognized internationally by Microsoft. One of the biggest reasons people hire his services is to get his help deciding what they need, building a proposal, and evaluating companies to do the project. He then works closely with the company and the solution provider to get the best results. Jorge has been the Microsoft Regional Director for Peru since 2000 and was a member of the Microsoft Corporation Partner Advisory Council in Portals and Intranets in 2002. He has spoken at many events in Peru and other Andean-region countries, and at Microsoft TechEd Latin America. His topics include Intranets, Knowledge Management, education, project management, and Web solutions.

     

    Stephen Forte (New York)

    Stephen Forte is the Chief Technology Officer and co-founder of Corzen, Inc, a Manhattan (USA) based provider of online market research data for Wall Street Firms. Stephen is also the Microsoft Regional Director for the NY Metro region. He speaks regularly at industry conferences like Tech*Ed, North Africa Developers Conference and other conferences around the world. He has written several books on database development and currently is writing the MS Press book SQL Server 2005 Core Developers Guide. Prior to Corzen, Stephen served as the CTO of Zagat Survey in New York City and also was co-founder and CTO of the New York based software consulting firm The Aurora Development Group. He currently is the co-moderator and founder of the NYC .NET Developer User Group.

     

     

     Jeff Prosise (USA)

    Jeff Prosise is a co-founder of Wintellect - a training, consulting and debugging firm that specializes in Microsoft .NET technologies -  where he makes his living programming Microsoft .NET and teaching others how to do the same. His latest book, Programming Microsoft .NET, was published by Microsoft Press in May 2002. His previous book, Programming Windows with MFC, has won awards for readability and is widely considered to be the definitive work on MFC programming. A former engineer who discovered after college that programming is immeasurably more fun than designing lifting fixtures and computing loads on mounting brackets, today Jeff travels the world teaching ASP.NET programming and enlightening conference audiences about the new platform. He works closely with Microsoft developers in Redmond, WA, to track the development of the .NET Framework. Jeff is a contributing editor to MSDN Magazine, where he writes feature articles about Microsoft .NET and authors the Wicked Code column, and to asp.netPRO magazine, where he writes the monthly Ask the PRO column. And in 2000, Jeff cofounded Wintellect to provide .NET consulting and education services to developers everywhere. In his off-time, Jeff enjoys spending time with his wife and three kids, attending church, going scuba diving, playing softball, and jamming with garage bands. During his quieter moments, he dreams of playing baseball for the Atlanta Braves and playing guitar like Stevie Ray Vaughan. But in his heart, he realizes that writing code is the next best thing.

     

    Deborah Kurata (USA)

    Deborah Kurata is a software architect, designer and developer and the author of several books, including ‘Best Kept Secrets in .NET’ (Apress), 'Doing Objects in Visual Basic 6.0' (SAMS) and 'Doing Web Development: Client-Side Techniques' (Apress). She speaks at conferences, such as VSLive, DevDays, and TechEd, is co-chair of the local East Bay.NET user group, and writes for MSDN and CoDe magazine. She is a member of the INETA Speaker’s Bureau and a Microsoft Most Valuable Professional (MVP). Her preferred INETA topics include .NET architecture, object-oriented design and development, and best practices for .NET development. She especially enjoys giving her ‘Best Kept Secrets’ talk at user groups because it provides tips and tricks for all levels of software developers. Deborah is cofounder of InStep Technologies Inc., a professional consulting firm that focuses on turning your business vision into reality using Microsoft .NET technologies.

     

     

     Thomas Lee (UK)

    Thomas is Chief Technologist at QA, the UK's largest independent training and consulting firm. A respected author with over 20 years experience as a Microsoft practitioner, Thomas is responsible for the strategic development of QA's training Portfolio and the delivery of specialist training sessions on advanced design and architecture for Windows 2000 and Windows .NET Server. Thomas attended Carnegie Mellon University in America. He then worked for industry-leaders Comshare, ICL and Andersen Consulting. Thomas has led the development of a variety of technical training courses covering the range of Microsoft system products and spent time working for Microsoft in Redmond assisting in the development of the Windows 2000 enterprise courses. Thomas is Windows Editor of ESM Magazine and Security Editor for FYI magazine. Thomas is a Fellow of the British Computer Society, a Microsoft MSDN Regional Director, and a Microsoft MVP.

     

    John Lam (USA)

    John is a Partner at ObjectSharp. He blends practical experience with deep technical knowledge: he has shipped 9 software products over the years. He is a recognized authority on COM, XML and .NET. He has co-authored two books: Essential XML and the Delphi Developers Handbook. He was a technical columnist for PC Magazine. He was an invited speaker at many international conferences over the years, including the PDC, Tech-Ed, Win-Dev, VS Live!, VS Connections, and WinSummit. He has contributed original research to the academic computer science community that was presented at the 1st International Aspect Oriented Software conference in the Netherlands. He maintains a popular blog, at www.iunknown.com, which draws over 7000 unique visitors per week.

     

    J. Michael Palermo IV (USA)

     J. Michael Palermo IV is currently the lead developer for kbAlertz.com, and operations manager for myKB.com. Michael is also a developer instructor at Interface Technical Training, a Microsoft Gold Partner in Phoenix, AZ. Michael has been endorsed by Microsoft as a Microsoft Solutions Framework Practitioner & Trainer. He has been awarded the title of Microsoft Regional Director, Most Valuable Professional (MVP) for XML technologies, and is a member of ASP Insiders. Michael's passion is sharing technology information with the community. He has also set time aside for co-authoring several books, engaging developers at user groups and speaking at DevDays and MSDN events.

     

    Vishwas Lele (USA)

    Bio here soon!

    posted on Friday, January 21, 2005 11:28:03 AM (Eastern Standard Time, UTC-05:00)  #    Comments [3] Trackback

When you think of the Microsoft Regional Directors, what usually comes to mind are really amazing speakers at conferences like DevDays, Tech*ED (last year two RDs were the #1-rated speakers at two Tech*Eds), and the North Africa Developers Conference. You think of great authors and things like .NET Rocks.

     

RDs are truly great. But even if you collect 150 amazing colleagues from around the world and put them all in a room together, there would be nothing without the proper organization. Leadership is key. General George Washington was less of a man (and General) without Martha (just ask the soldiers she brought socks to).

     

For the past three years I have had the pleasure of working with the best PM at Microsoft, Eileen Crain. As of today Eileen is no longer the PM of the Regional Director Program, or as I like to refer to it, the “RD Mom”. She has gone on to bigger and better things at Microsoft.

     

Eileen worked behind the scenes to make sure the RDs got speaking engagements, time in front of large customers, and every other kind of exposure. She would also always offer to take us out to dinner when we were in town! (Or drive me home when I drank too much.) Whether it was planning a new marketing initiative or the RD party at Tech*ED, she did it very thoroughly.

     

The RD program has been around for over 10 years, and in years past you only heard of RDs at DevDays. Eileen worked really hard to make us known, and it worked. In the last few years the visibility of the RDs has grown, and it is all due to Eileen.

     

Eileen has also been someone I could turn to for business advice and even personal advice. She would even pick up the phone at 2am when I was complaining about “the girl” or when a group of RDs would call (in a drunken stupor) from Cairo, Casablanca, or Kuala Lumpur.

     

    Eileen I will miss you and I wish you the best.

     

PS: People usually have to think really hard to figure out my titles, but this one was a “you had to be there” moment. In Dallas at Tech*ED 2003, a bunch of RDs (led by me!) got on stage, did “Killing Me Softly” karaoke, and dedicated it to Eileen.

    posted on Wednesday, December 01, 2004 12:52:06 PM (Eastern Standard Time, UTC-05:00)  #    Comments [12] Trackback

My favorite city, Kathmandu, is under a blockade by the Maoist rebels. This is a problem; the rebels have been targeting Kathmandu more and more in recent months (starting with the general strike and such when I was there a year ago). A few days ago, five bombs went off in a hotel I stayed in back in 2002. This is going to kill tourism on the South Col route to Everest and force climbers to attempt Everest via the China side.

    posted on Thursday, August 19, 2004 10:25:43 AM (Eastern Daylight Time, UTC-04:00)  #    Comments [10] Trackback

$613 million, that is. The European Commission has fined Microsoft a record $613 million. What are they going to do with the money? Further subsidize Airbus? Further subsidize French farmers? Lower German taxes? Give the money to Linux “research”? Send troops to Iraq?

     

I think that Microsoft is a victim of anti-American sentiment in Europe right now. The fine is excessive. It surpasses fines the Commission has imposed on price-fixing cartels, and it sends the wrong message about antitrust enforcement priorities.

     

    The US Attorney General’s Office agrees with me. "Imposing antitrust liability on the basis of product enhancements and imposing 'code removal' remedies may produce unintended consequences," US Assistant Attorney General Pate said. "Sound antitrust policy must avoid chilling innovation and competition even by 'dominant' companies. A contrary approach risks protecting competitors, not competition, in ways that may ultimately harm innovation and the consumers that benefit from it."

     

Come on now, Media Player? It sucks; everyone downloads MusicMatch or WinAmp anyway. IE beat Netscape because Netscape took way too long to innovate (there were years between releases). Nobody really uses Media Player.

     

So, European Commission, you have shown your true colors. Maybe the US should fine Airbus for dumping and price fixing.

    posted on Thursday, March 25, 2004 10:12:04 AM (Eastern Daylight Time, UTC-04:00)  #    Comments [12] Trackback

    With the power of the Pentium IV Processor, Scott Hanselman could be Jesus

     

That is something Scott Hanselman and I came up with at TechEd Malaysia; we tend to have a lot of fun together. I have always had issues with partying too hard with Scott, but anyway, he will be in town to speak at the NYC .NET Developers Group this Thursday night. Here is his topic:

     

    Zen and the Art of Web Services (or How I Stopped Worrying and Learned to Love WSDL)

     

    Will Web Services save the world? More importantly, will they save you time? Are Web Services just a bunch of hooey? We’ll separate the good from the bad and dig into the WHY of Web Services and the HOW of the .NET Framework. We’ll go low level and sniff packets on the wire and we’ll go high level and design business documents with XML schema. We'll auto-generate Business Domain Objects and Messages. We’ll discuss the meaning of the WS*.* specifications, interoperability and get our heads around the "Zen" of Web Services and see where .NET succeeds and where it falls down. This talk will be as technical as you want it to be, but it will also be valuable for the Business Person or Project Manager who really wants to answer the question "Web Services: So What?" Doesn’t sound like the typical Users Group meeting, does it? You’ll just have to come by and find out!

    posted on Monday, January 12, 2004 5:44:53 PM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback

    How the das Blog Cache Engine Works (And a Caching tip in General)

     

Clemens and I may disagree on SQL Server vs. XML storage, but the das Blog cache engine is real simple and we agree on it. We cache the main page for one day (86,400 seconds) and vary the cache by parameter on the date (and on none). This way the page stays in cache for either one day or until it is edited again (via a comment or an additional blog entry).

     

I was putting together some samples for the MDC in Cairo next week and made a real simple page that caches based on the query string (VaryByParam) and a file dependency (an XML file). Here is an example in a simple page using an ASP.NET DataGrid against Northwind:

     

    <%@ Page language="c#" Codebehind="Cache_VaryByParam_Filedep.aspx.cs" AutoEventWireup="false" Inherits="DataGridCSharp.CachingDataGridFile" %>

    <%@ OutputCache duration="86400" varybyparam="CustomerID" %>

    <!DOCTYPE HTML PUBLIC "-//W3C//DTD HTML 4.0 Transitional//EN" >

    <HTML>

    <HEAD>

    <title>CachingDataGrid</title>

    <meta name="GENERATOR" Content="Microsoft Visual Studio .NET 7.1">

    <meta name="CODE_LANGUAGE" Content="C#">

    <meta name="vs_defaultClientScript" content="JavaScript">

    <meta name="vs_targetSchema" content="http://schemas.microsoft.com/intellisense/ie5">

    </HEAD>

    <body MS_POSITIONING="GridLayout">

    <form id="Form1" method="post" runat="server">

    <asp:DataGrid id="DataGrid1" style="Z-INDEX: 101; LEFT: 16px; POSITION: absolute; TOP: 72px" runat="server"></asp:DataGrid>

    <asp:Label id="Label1" style="Z-INDEX: 102; LEFT: 24px; POSITION: absolute; TOP: 40px" runat="server"

                       Width="440px">Label</asp:Label>

    </form>

    </body>

    </HTML>

     

     

Then all we do is add a file dependency; in the case of das Blog it is the actual XML file that stores the data. This way the page stays in cache for either one day or until that file changes (via a comment or an additional blog entry). Here is the code-behind for the simple Northwind example above.

     

          // Requires: using System.Data; and using System.Data.SqlClient; in the code-behind
          private void Page_Load(object sender, System.EventArgs e)
          {
              // The page directive sets the OutputCache to 1 day, varied by the query string:
              //   <%@ OutputCache duration="86400" varybyparam="CustomerID" %>
              // This file dependency evicts the cached page whenever the file changes.
              Response.AddFileDependency(Server.MapPath("Contacts.xml"));

              Label1.Text = "Page Generated At: " + System.DateTime.Now.ToString();

              // Simple databinding for testing
              SqlConnection conn = new SqlConnection("server=(local);uid=sa;pwd=;database=Northwind");
              conn.Open();

              // Parameterize the query instead of concatenating the query string value,
              // which would open the page to SQL injection.
              SqlCommand cmd = new SqlCommand(
                  "SELECT * FROM Orders WHERE CustomerID = @CustomerID", conn);
              cmd.Parameters.Add(new SqlParameter("@CustomerID", Request.QueryString["CustomerID"]));

              // Open a DataReader (closing the connection when the reader closes) and bind it
              SqlDataReader dr = cmd.ExecuteReader(CommandBehavior.CloseConnection);
              DataGrid1.DataSource = dr;
              DataGrid1.DataBind();
              dr.Close();
          }
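The duration-plus-file-dependency invalidation that the OutputCache directive and AddFileDependency give you can be sketched generically. This is a minimal illustration of the idea (the class and method names here are my own, not part of das Blog or ASP.NET): an entry is served from cache until either its age exceeds the duration or the file it depends on is modified.

```python
import os
import time


class FileDependentCache:
    """Entries expire after max_age seconds, or as soon as a watched file changes,
    mirroring OutputCache duration + AddFileDependency behavior."""

    def __init__(self, max_age=86400):
        self.max_age = max_age
        # key -> (value, stored_at, dependent_path, mtime_at_store_time)
        self._store = {}

    def put(self, key, value, depends_on=None):
        mtime = os.path.getmtime(depends_on) if depends_on else None
        self._store[key] = (value, time.time(), depends_on, mtime)

    def get(self, key):
        entry = self._store.get(key)
        if entry is None:
            return None
        value, stored_at, path, mtime = entry
        # Invalidate on age (the "duration" part)...
        if time.time() - stored_at > self.max_age:
            del self._store[key]
            return None
        # ...or when the dependent file was modified (the file-dependency part).
        if path is not None and os.path.getmtime(path) != mtime:
            del self._store[key]
            return None
        return value
```

A caller would `put` the rendered page keyed by the query string value and point `depends_on` at the backing XML file; touching that file (say, by posting a comment) drops the cached page on the next `get`.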

    posted on Sunday, January 11, 2004 11:36:27 PM (Eastern Standard Time, UTC-05:00)  #    Comments [0] Trackback

    SCO is Desperate

     

SCO, which is suing IBM over Linux (and is threatening more lawsuits against corporate Linux users), yesterday attacked the GNU GPL (General Public License), under which Linux is distributed. In an open letter from SCO CEO Darl McBride, SCO said that the GPL violates the United States Constitution (as well as some U.S. copyright and patent laws).

     

    They are bringing in the US Constitution to this debate? Please.

     

    Here is my open letter to SCO:

     

    Dear Darl McBride,

     

    Drop the damn lawsuit already.

     

    Regards,

    Stephen Forte

    New York, NY

    posted on Friday, December 05, 2003 6:52:50 PM (Eastern Standard Time, UTC-05:00)  #    Comments [14] Trackback

    Great Panels @ PDC Today

    The RDs are covering some panels at the PDC today, check out www.pdcbloggers.net for their reviews.

    Title Speaker(s) RD
    Making it Sizzle: Enabling and Building Next-Generation User Experiences on Windows “Longhorn” David Massy; Pablo Fernicola; Tjeerd Hoek; Chris Anderson; Michael Wallent Thomas Lee
    Designing the CLR Brad Abrams; Anders Hejlsberg; Christopher Brumme; Patrick Dussud; James Miller; Jonathan Hawkins; Sean Trowbridge; George Bosworth Paul Sheriff
    Choosing The Right Business Integration Technologies Donald Farmer; Scott Woodgate; Alex Weinert; Joe Sharp Andrés Fontán García , Mike Snell
    Real World Innovation:  From Idea to Product Phil Fawcett; John Lefor; Lili Cheng; John Breese; Jeff Erwin; Katie Drucker; Renee Labran Joel Semeniuk
    Connected at the Edge: Building Compelling Peer-to-Peer Applications Robert Hess; Amar Gandhi; Oliver Sharp; Kim Cameron; Shaun Pierce; Gursharan Sidhu  
    Client Architecture: The Zen of Data-Driven Applications Michael  Pizzo; Alex Hopmann; Jeremy Mazner; Mike Deem; Quentin Clark; William Kennedy Edgar Sánchez, Terry Weiss
    Mobile Application Development and Distribution:  Innovation and Opportunity Irwin Rodrigues; Chee Chew; David Jones; Bruce E. Johnson; Laura Rippy Jon Box, Chris Kinsman
    Put The Power Inside: Hosting the CLR in Your Application Balaji Rathakrishnan; Mahesh Prakriya; Christopher Brumme; Christopher Brown; Dmitry Robsman; Ramachandran Venkatesh; Mark Alcazar Abdelmalek Kemmou
    High Performance Computing on Windows: Taking Care of Business David Lifka; Kang Su Gatlin; George Spix; Andrew Lumsdaine; Max Giolitti  
    “Indigo:” What’s Next for Connected Apps and Web Services Don  Box; Oliver Sharp; Omri Gazitt; Joe Long; John Shewchuk; Eric Zinda Ingo Rammer
    Computing on the Beach: Visions of Mobility Donald Thompson; Tara Prakriya; Bert Keely; David Groom; Otto Berkes; Arif Maskatia Abdelmalek Kemmou
    Rocking the Web with ASP.NET “Whidbey” Scott Guthrie; Rob Howard; Jon Box; Shanku Niyogi; Thomas Lewis; Nikhil Kothari; Dmitry Robsman Jon Box (panelist), Carlos R. Guevara
    The Future of .NET Languages Paul Vick; Rob Relyea; Anders Hejlsberg; Brandon Bray; Erik Meijer; Daniel Thorpe; Raphael Simon; Basim Khadim  Jackie Goldstein
    Architecture Panel:  What is Service-Oriented Analysis and Design Michael Burner; Brent Carlson; Mark Driver; Martin Fowler Scott Hanselman, Michele Leroux Bustamante
    Security Panel: What’s Next? Directions in Security Jason Garms; James  Hamilton; Carl Ellison; Howard Schmidt Thomas Lee, Patrick Hynds

    posted on Thursday, October 30, 2003 6:29:13 PM (Eastern Daylight Time, UTC-04:00)  #    Comments [0] Trackback

    WinFS at PDC

The PDC will be all about Longhorn, Yukon, and Whidbey (and maybe some Web Services crap too). Since I am writing the Yukon book for MS Press and have been playing with it almost all year, and Whidbey is now in alpha, Longhorn is what I want to see the most of. As a developer, WinFS seems the most important.

So the future of the file system in Windows is WinFS. The hints are that WinFS will "leverage database technologies." What exactly does that mean? Hmmmm. SQL Server?

    Here are the sessions to look at:

    WinFS: File System and Storage Advances in Windows "Longhorn": Overview

    Track: Client   Code: CLI201
    Room: Room 150/151/152/153   Time Slot: Tue, October 28 2:00 PM-3:15 PM
    Room: Room501ABC   Time Slot: Wed, October 29 2:00 PM-3:15 PM
    Learn about the next generation storage platform for Windows! In "Longhorn" we're advancing the File System into a Storage Platform for storing structured, file and XML data. Leveraging database technologies, the "Longhorn" storage platform manages data for organizing, searching and sharing. The storage platform also allows for data synchronization across other "Longhorn" and foreign data sources. The new storage platform supports rich managed "Longhorn" APIs as well as Win32 APIs.

    WinFS: File System Integration

    Track: Client   Code: CLI326
    Room: Room 152/153   Time Slot: Wed, October 29 11:30 AM-12:45 PM
    Speakers: Sanjay Anand
    This session provides an overview of the File System and Security features of WinFS, including but not limited to a drilldown into the WinFS namespace, file system integration and Win32 support. We also cover the WinFS security model including authentication, authorization and encryption features that help you secure your data as well as build security into your applications. Learn how you can integrate your file-based content into WinFS using WinFS property promotion infrastructure or build support for integrating with WinFS search capabilities.

    WinFS: Schemas and Extensibility

    Track: Client   Code: CLI322
    Room: Room 409AB   Time Slot: Wed, October 29 10:00 AM-11:15 AM
    Speakers: J. Patrick Thompson, Toby Whitney
    The WinFS schemas are the data and API definition that ship with Windows. The Windows Schemas define documents, contacts, system and person tasks, and much more. Learn about the thinking behind the designs of the Windows Schemas and how you can extend the schemas that ship with Windows, create your own schemas, and extend WinFS.

    WinFS: Schemas, Extensibility and the Storage User Experience

    Track: Client   Code: CLI323
    Room: Room 409AB   Time Slot: Wed, October 29 2:00 PM-3:15 PM
    Speakers: Nat Ballou
    Windows "Longhorn" introduces an entirely new user storage experience and model around the storage of user's data. Get an introduction to new concepts such as: dynamic sets, static sets, and views, with a quick overview of the "Longhorn" storage user experience. Focus on how you can present application-specific data in Windows as well as re-use "Longhorn" components to build rich "Longhorn" applications.

    WinFS: Using Windows "Longhorn" Storage ("WinFS") in Your Application (Part 1)

    Track: Client   Code: CLI320
    Room: Room 409AB   Time Slot: Tue, October 28 3:45 PM-5:00 PM
    Speakers: John Ludeman
    The preferred method of access to the advanced features of the new Windows Future Storage (WinFS) is through the WinFS API. This session starts by covering the broad set of concepts that form the foundation of the WinFS API design, and then delve into specific code examples. You will be able to write a simple application against WinFS by the time this session is complete. The walk-through includes connecting to the store, basic enumeration and queries, saving changes back to the store and the associated transactional semantics. Folder and Filestream access are also discussed. Basic data change notification scenarios round out the core examples.

    WinFS: Using Windows "Longhorn" Storage ("WinFS") in Your Application (Part 2)

    Track: Client   Code: CLI321
    Room: Room 409AB   Time Slot: Tue, October 28 5:15 PM-6:30 PM
    Speakers: Mike Deem
    In part 2 of the WinFS API session, we jump right into the deep end and cover the advanced features of the WinFS API, including rich view support, support for XML types, asynchrony, using the "Avalon" data binding support, using the interfaces from COM, how to build your own schemas and extensions on WinFS, the different relationship lifetimes and the associated semantics. A key component of the WinFS architecture will allow for ISVs to extend the same base schemas to maximize information sharing or even create their own schemas. How and where to extend WinFS is discussed, along with the schema and API creation process. Part 1 should be considered a prerequisite for taking this session.

    posted on Friday, October 24, 2003 4:14:26 PM (Eastern Daylight Time, UTC-04:00)  #    Comments [1] Trackback

Well, the Maoists' 3-day general strike is over, but it left Kathmandu a mess. At least two bombs went off yesterday and power was lost several times. The Army was all around the city all day today. Garbage and debris are everywhere.

Well, the trek to Everest was not as dangerous as the Maoists in Kathmandu; except for a severe sunburn and about a 22-pound weight loss, I am fine. No altitude sickness (I stayed under 19,000') and no "runs" or anything like that. I did accidentally delete all the messages in my inbox, so I have no idea who sent me email while I was away. Oh well, ORCSweb Team to the rescue (like always)!

    Off to India, more on the trip soon!

    posted on Sunday, September 21, 2003 5:43:53 AM (Eastern Daylight Time, UTC-04:00)  #    Comments [24] Trackback

After 21 days of hiking in the fresh air, without hearing any automobiles or seeing any paved roads, phones, or electricity, where all work was done by human or animal power, it was kind of strange getting back to the busy city of Kathmandu today to witness a 3-day general strike. 2.2 million people live here, but a general strike called over the Maoist rebellion has reduced the city to a standstill: no cars, and sometimes even no power.

Soon I will be reunited with my laptop and have a high-speed connection in India, more news to come...

    posted on Saturday, September 20, 2003 8:12:05 AM (Eastern Daylight Time, UTC-04:00)  #    Comments [14] Trackback

Today Scott Case, fellow RD Tim Huckaby, and I went to the Kuala Gandah Elephant Conservation Centre in central Malaysia. This facility, run by the Malaysian government, takes endangered elephants and relocates them to protected natural rainforest where they roam just about free. The centre also looks after orphaned elephants. We got to spend the day with some of the relocated elephants that have not yet entered the general population. I am talking up close in nature with some serious elephants; at times it was quite intimidating, like when we had to run out of their way! That said, this was one of the most amazing things I have ever done in my life.

     

    First we got to hang out with an orphan baby female elephant. She was very tame and really enjoyed having us pet her and play with her. She especially liked when we would put our hands in her mouth. At 20 months old she was already over 1,000 pounds!

     

Then we went into the preserve and hung out with five adults and a young elephant. This was a totally wild experience. After that we got the chance to bathe and hand-feed the elephants. After washing and feeding them, they treated us to rides, on land and in the river. While in the river the elephants liked to throw us overboard; we were told by our guide Razali (a very cool dude) that it was a sign of affection.

     

When we were all done, we visited another preserve and saw a nearly extinct bear (who loved me), some deer, and other cool animals. This was quite a unique experience.

     

What a great way to spend my off day at TechEd. Well, it is back to work tomorrow: five sessions in 3 days!!

     

    posted on Monday, August 25, 2003 2:25:43 PM (Eastern Daylight Time, UTC-04:00)  #    Comments [10] Trackback

    They call this FUD

    Stephen Forte’s Testimony to the New York City Council, April 29, 2003

    Thank you all today for taking time to hear my testimony. My name is Stephen Forte, I was born and raised in New York City and am 31 years old. At the age of 23 I founded a high-tech consulting firm called The Aurora Development Group, which was sold 5 years later. I also served as the Chief Technology Officer of Zagat Survey here in New York from late 1999 until January 2002. Last April I co-founded Corzen, based up at Union Square where I currently serve as Chief Technology Officer.

I have had to do the economic and technical analysis of whether to use Open Source in my operations twice: once at Zagat, where we had a 5 million dollar technology budget but not enough time and money to meet our deadlines for an IPO, and again when I founded Corzen last year with only $300,000 of initial investment. Time to market and saving money were very important at Corzen, especially since I did not get a paycheck until December 2002.

As a small business owner and a resident of the City of New York for over 31 years, I appreciate the magnitude of the current budget shortfall. It may be tempting to make a blanket policy stating that the City must only use Open Source software to save money. On the surface, Microsoft Windows for a typical server machine configuration costs approximately $6,000; Linux costs nothing. Surely Linux is cheaper. Isn't it?

On the surface it appears that way. But once you dive into the details, you will see that Open Source is not free, and that while it may have a place in your organization as well as mine where the technology fits, there should be no blanket "Open Source only" policy. This is a policy I would strongly urge the Council not to spend any taxpayer time considering. Here is why.

A benchmark recently performed by the non-profit TMC (www.tmc.org) compared a Linux machine running the WebLogic or WebSphere middle-tier software to a Windows Server machine running Microsoft's .NET middle tier. At this moment in time there is no viable open source "middle tier" component to compete with those commercial offerings. Just a technical note: the middle tier is what makes your custom applications work: the web server, the application server, the runtime environments, and the programming languages.

    The TMC broke down the cost of the server machine into three components: the hardware, the operating system (Windows or Linux), and the infrastructure (.NET, WebLogic, or WebSphere). Since the hardware was the same in all benchmarks, the differentiating costs were due to the operating system and the infrastructure.

    It turns out that the cost of the operating system is relatively insignificant in the overall server costs. Of the total WebLogic server cost of $76,990, only $5,990  was attributable to Windows. Of the total WebSphere server cost of $84,990, again, only $5,990 was for Windows. In neither case was the cost of Windows more than 8% of the total server cost.
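As a sanity check on those percentages, using only the dollar figures quoted above, the arithmetic works out like this:

```python
# Dollar figures quoted from the TMC benchmark discussion above
windows_license = 5990   # cost attributable to Windows in each configuration
weblogic_total = 76990   # total WebLogic server cost
websphere_total = 84990  # total WebSphere server cost

# Windows' share of each total server cost
weblogic_share = windows_license / weblogic_total    # about 7.8%
websphere_share = windows_license / websphere_total  # about 7.0%
```

Both shares come in under the 8% ceiling cited in the testimony, which is the point: the operating system is a small slice of the total server cost.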

    However the use of Linux does have one dramatic cost consequence. It eliminates the possibilit