I still owe you some architectural overview discussions, but first I wanted to “converge the timeline” and make these posts current by bringing you up to where we are at in the project today. The architectural posts will come soon (really!). I have been somewhat buried in develop/deploy mode, so documentation of all sorts has languished, but that will change soon.
Right now, we are working towards our first milestone.
Milestone 1 Composition
At a glance, it’s a very easy milestone to hit, and contains limited functionality. It is however very significant because it will prove out the processes we are putting in place, as well as the foundation we’re laying down, for both hardware and software. As a firm believer in iterative development, this approach suits me fine, and it has proven its value as we have moved through the process.
From a BizTalk perspective, the initial milestone consists of:
• Dynamic transformation. Only two external sources will be available for map selection in this initial release: hard-coding of a map name based on the receive location, and the ability to call the BizTalk rules engine, passing through some data about the message being transformed. Subsequent releases will have more selection options, and I will detail those and the mechanism we’ll be using in future posts.
• Exception handling. The version of exception handling being deployed is extremely simplistic compared to what's coming. The reason this is here at all right now is that there is a chance of failures in dynamic transformations (e.g. an invalid map name, a map that isn't deployed, a mapping failure), so we had to provide something.
• Heartbeat. Think of this as a “ping service”. It’s simply a Web service that calls a BizTalk orchestration, and the orchestration echoes back what it was sent as a parameter. Braindead simple, but it will allow us to give the Java side of the house a WSDL they can hit to prove interoperability, and it gets us deploying an initial Web service into our environment (along with virtual directories, app pools, and all the other little challenges that can involve). We are also bundling in a simple UI to confirm the Web service is working.
• JMS pipeline component. We have a JMS pipeline component that promotes/demotes JMS header properties into/out-of message content. To test this we need to pull a JMS message off an MQ queue, and deposit it in another queue. JMS header properties on the outbound message should be preserved and be identical to the inbound ones.
• Functional tests. From my side we have identified four tests that will be run. Given time constraints, this will only be partially automated, but the automated ones will push NUnit and BizUnit onto the QA servers. Ultimately, the goal is to have most, if not all, tests run fully-automated inside NUnit.
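To make the dynamic transformation bullet concrete, here is a minimal sketch of the two map-selection strategies in Python. Everything in it is illustrative: the `RECEIVE_LOCATION_MAPS` table, the map names, and `call_rules_engine` are hypothetical stand-ins, not the actual BizTalk implementation.

```python
# Strategy 1: hard-coded map name keyed by receive location (hypothetical names).
RECEIVE_LOCATION_MAPS = {
    "RcvOrdersFromMQ": "Maps.OrderToCanonical",
    "RcvInvoicesHttp": "Maps.InvoiceToCanonical",
}

def call_rules_engine(message_props):
    """Stand-in for Strategy 2: a call to the BizTalk rules engine, which would
    inspect data about the message and return a map name (or nothing)."""
    if message_props.get("MessageType") == "Order":
        return "Maps.OrderToCanonical"
    return None

def select_map(receive_location, message_props):
    # Prefer the hard-coded mapping; fall back to the rules engine.
    map_name = RECEIVE_LOCATION_MAPS.get(receive_location)
    if map_name is None:
        map_name = call_rules_engine(message_props)
    if map_name is None:
        # This is exactly the failure case the exception-handling bullet covers.
        raise LookupError("no map could be resolved for this message")
    return map_name
```

The fallback ordering (hard-coded first, rules engine second) is an assumption made for the sketch; the real selection mechanism will be detailed in a future post.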
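The JMS pipeline component test boils down to a round-trip invariant: promote the JMS headers into message context on receive, demote them back onto the outbound message, and verify the headers are identical to the inbound ones. This little Python simulation captures that invariant; the header names and dict-based message shape are made up for the example and bear no relation to the component's actual code.

```python
def promote_headers(message):
    """Copy JMS header properties into the message context (simulated as a dict)."""
    message["context"] = dict(message["headers"])
    return message

def demote_headers(message):
    """Write context values back out as JMS headers on the outbound message."""
    message["headers"] = dict(message["context"])
    return message

inbound = {
    "headers": {"JMSCorrelationID": "abc-123", "JMSType": "Order"},
    "context": {},
    "body": "<Order/>",
}

# Round trip: headers must survive promote/demote unchanged.
outbound = demote_headers(promote_headers(inbound))
assert outbound["headers"] == {"JMSCorrelationID": "abc-123", "JMSType": "Order"}
```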
From a non-BizTalk perspective, the initial milestone includes:
• Systinet UDDI registry running on AIX (that's “Unix” to you purely-MSFT people who may not know :))
• AmberPoint Web services management, also running on AIX (including client and service metrics collection)
From an infrastructure perspective, the initial milestone includes:
• High availability MQ Series (AIX only for now, a Windows HA MQ cluster will follow)
• High availability BizTalk (self-clustered, as is done with BizTalk servers, and tuned with multiple MessageBoxes)
• High availability SQL Server (clustered, and tuned)
This is a formal, structured environment. We will not have the luxury of admin control over QA boxes, nor will we be doing our own deployments (we're not allowed to touch the boxes). In order to succeed, this needs to be done in a meticulous and well documented way, and that’s what we’re doing.
Our first stop was to deploy to a “QA” Virtual Machine. This marked the first installation that was not on my (the developer’s) VM. We have multiple MSIs, and as expected, the install did not work properly the first time. We missed some dependencies and had to solve some non-development issues, so we iterated through this process several times before we got it right. The fact that we are using a virtual machine for this stage saved us lots of time: we can snapshot the machine, do an install, determine what’s missing, revert to the snapshot, and do it all over. We could achieve the same result using a physical machine and something like Ghost, but this is much, MUCH more convenient.
The steps I went through are:
• I have both a Dev and a QA VM running at the same time (we’re using VMWare)
• In the Dev VM, I build the MSIs and drag them into a shared folder on the host
• I switch to the QA VM, and revert to a “clean” state by restoring a snapshot
• I drag the MSIs over from the shared host folder and do the install
• I load and run the NUnit tests inside the NUnit GUI
• I repeat as necessary to correct any issues
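The revert → install → test loop above is simple enough to script. Here's a rough Python sketch of one iteration, assuming VMware's `vmrun` CLI for snapshot revert and power-on; the VMX path, snapshot name, and installer command are all placeholders, and the runner is injected so the sequence can be exercised without a real VM.

```python
import subprocess

def deploy_iteration(vmx_path, snapshot, msi_install_cmd, run=subprocess.run):
    """One pass of the loop: revert the QA VM to its clean snapshot,
    power it on, then run the installer. Stop at the first failure so
    the missing dependency can be identified before the next iteration."""
    steps = [
        ["vmrun", "revertToSnapshot", vmx_path, snapshot],  # back to "clean"
        ["vmrun", "start", vmx_path],                       # power the VM on
        msi_install_cmd,                                    # e.g. a batch file running the MSIs
    ]
    for step in steps:
        if run(step).returncode != 0:
            return False  # fix the install, then repeat the whole iteration
    return True
```

In practice the NUnit GUI step stays manual for now, which matches the "partially automated" state of the functional tests.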
The whole process worked very well, and I was highly productive. Without virtual machines, this whole process would have taken MUCH longer. It was also pretty cool that I was able to work on this while riding the BART (Bay Area Rapid Transit, a train I take most days). I can’t imagine BizTalk development without VM technology!
Build/Deploy Team Composition
From a team perspective, it was at this stage that I involved Tom Canter of MidTech Partners, who has been working alongside me on this project. Up until this point, the division of labor between us was that I did the BizTalk architecture and development (building out the core ESB Engine), while he defined and deployed the complex server infrastructure that all this will run on. We work very well together, with a clean division of labor and complementary skill sets. When explaining our roles to people, I just say “I'm software, Tom's hardware”. Between us, we can do pretty much anything that needs to be done.
Tom and I started working together on the deployment as we neared the end of the “create the installation bits” stage. I got it to the point where I was happy with the project names and project granularity, and had preliminary installations working. Tom took it from there and fleshed out the scripting part, continually moving towards the goal of “single click deployment” for the ESB engine and functional tests. He had a lot of challenges with file paths: we wanted to be able to install in any environment, including multi-BizTalk environments, and we didn’t know which drive or path we’d be installing to when we created the MSIs. In addition, the functional test receive locations needed to reflect the installation path, and we had other path dependencies in the BizUnit/NUnit test cases (which are XML files). Then, we had two key variants on the installs: a runtime install needed for Dev, QA, Staging, and Production, and a developer install that any developer could run in their VM and start developing against their own personal ESB core engine. Getting this installation right was a non-trivial task!
Moving on: next steps
After we got everything installing properly in a VM and successfully ran the unit tests, we deployed to a “Dev” physical machine, working side by side with an IT Ops team member. This was the point at which we discovered that we didn’t have appropriate rights to the SQL Server (hey, you can’t think of everything!), which caused us a minor but frustrating delay while a request form for permissions worked its way through the IT process.
It was my hope that we would have captured all the steps by then, and that this process would be just a matter of following a script. By this point we had automated as much as possible, trying to get as close as we could to “one click deployment”. We did this through a combination of MSI post-installation steps and WMI scripts called from the same batch file that installs the MSIs, with Tom working his scripting magic where required.
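The "one click deployment" driver can be pictured as a single sequence: quiet `msiexec` installs for each MSI, followed by the post-install scripts, halting on the first failure. This Python sketch shows the shape of that sequence under stated assumptions: the MSI names and script name are placeholders, and the real driver is a batch file, not Python.

```python
import subprocess

def one_click_deploy(msis, post_install_scripts, run=subprocess.run):
    """Run each MSI quietly, then each post-install (e.g. WMI) script,
    stopping at the first non-zero exit code so the failing step is obvious."""
    commands = [["msiexec", "/i", msi, "/qn"] for msi in msis]
    commands += [["cscript", script] for script in post_install_scripts]
    for cmd in commands:
        if run(cmd).returncode != 0:
            return False  # surface the failure instead of continuing blindly
    return True
```

Injecting the runner keeps the sequencing logic testable without touching a real Windows Installer.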
This is the stage that we are currently at.
Once this is done, the next step will be for the operations person to deploy, following a script and by himself, into a QA environment. This is a bit more interesting because 1) we are not involved in the deployment and 2) this is the first multi-BizTalk environment the solution gets to.
However, my role in this part is done. Once we get it deployed, we will need some documentation and guidance to show other developers how to leverage the ESB’s capabilities in their own projects. This is not a case of “if you build it, they will come”. To be effective, there needs to be an “on-boarding” process to help people “get on the bus”.
As boring as that may sound to some technical people, it is every bit as important to the success of the project as the software is. If we don't have effective guidance, we will fail. So, I will be switching back to that as my next priority.
Somewhere along the way, on a plane or a train, or late at night in a hotel room, I’ll find the time to post some architectural information.