I was at Microsoft's Strategic Architect Forum in Redmond last week, where I had a chance to meet and interact with a hand-picked group of some of the top software architects from around the world. It was an incredible opportunity to hear new ideas, as well as exchange thoughts on my favorite topics such as Service-Oriented Architecture and, of course, Enterprise Service Bus.
I have some thoughts that I’d like to share…
1. Agile development and Service-Oriented Infrastructures
The message here was loud and clear, and I heard it from several people: agile approaches to SOI work, and nothing else does. The principles of SOA have been with us long enough that many enterprises have now adopted and deployed Service-Oriented Infrastructures, and lessons learned are emerging. Some initiatives succeeded, others failed. The common thread was that initiatives that embraced the principles of agile development, and took an approach of continual iterative deployment, were successful. Others that took a more “boil the ocean” approach failed. I can see how some people would think that a large waterfall effort is warranted, since they are fundamentally effecting organizational change and laying down a whole new infrastructure. However, the real-world experiences I heard showed that this approach consistently fails.
I am currently on a project where we are mapping out an SOI architecture. Our approach is agile. We are doing some minimal (but adequate) overall documentation/diagrams/use cases now, but in about 8 weeks (it would be 6 weeks if not for the holiday season) we expect to have our infrastructure laid down and our first entity service deployed. For the first service we will pick the simplest one we can implement, but something that flexes the infrastructure. In our case, this means that we will require BizTalk 2006 to be deployed (high availability), and we will be communicating with SAP. Ideally, I’d also like to see the BAM portion deployed and have communications with Microsoft CRM, but these are secondary goals, and will likely be pushed off into the second wave. Once the initial service is done, we expect subsequent deployments to happen at a rapid pace, in 4-week sprints, or faster.
2. Event-driven architectures
I was intrigued by the title “demystifying event-driven architectures”, so I attended this brainstorming roundtable led by John Evdemon. It turned out to be one of the best parts of the conference for me, as it got me thinking about new patterns. First off, I realized there’s nothing new here. If you’ve been following my posts here about architecting BizTalk solutions using loose coupling through the MessageBox, and if you’ve seen my posts about implementing an ESB using BizTalk Server, then you’ve been reading about event-driven architecture. I’ve been doing it for years, just not using those words to describe it. Essentially, a publish/subscribe architecture *is* an event-driven architecture. When a message is published, it is an event, and subscribers (event handlers) react to it. Simple, huh? So what got me thinking?
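To make the “publish/subscribe *is* event-driven” point concrete, here is a minimal sketch of the pattern in Python. This is not BizTalk code, and the `MessageBox` class and its methods are illustrative names of my own, not any product API: a message gets published, and every subscriber whose filter matches reacts to it.

```python
# Minimal publish/subscribe sketch: publishing a message IS the event,
# and matching subscribers (event handlers) react to it.
# All names here are illustrative, not BizTalk APIs.
from typing import Callable, Dict, List, Tuple

class MessageBox:
    """Routes each published message to subscribers whose predicate matches."""

    def __init__(self) -> None:
        # Each subscription is a (filter predicate, handler) pair.
        self._subscriptions: List[Tuple[Callable[[Dict], bool],
                                        Callable[[Dict], None]]] = []

    def subscribe(self, predicate: Callable[[Dict], bool],
                  handler: Callable[[Dict], None]) -> None:
        self._subscriptions.append((predicate, handler))

    def publish(self, message: Dict) -> None:
        # Publication is the event; every matching handler fires.
        for predicate, handler in self._subscriptions:
            if predicate(message):
                handler(message)

box = MessageBox()
received = []
# Subscribe to "OrderPlaced" messages only.
box.subscribe(lambda m: m.get("type") == "OrderPlaced",
              lambda m: received.append(m["order_id"]))

box.publish({"type": "OrderPlaced", "order_id": 42})  # handled
box.publish({"type": "Heartbeat"})                    # no match; ignored
```

The decoupling is the key property: the publisher knows nothing about who is listening, which is exactly why new event handlers can be added without touching existing services.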
Over the past year I’ve had lots of discussions about the best way to implement communications between ESBs. This makes sense in situations where you have geographically dispersed ESBs (e.g., the US ESB, EU ESB, APAC ESB), or departmental ESBs (the HR ESB, Manufacturing ESB, OrderProcessing ESB). A good visualization is "ESB Islands".
This model could be drawn as a series of hubs with lines connecting them.
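One way to picture how those islands get connected is a bridge subscription: a subscriber on one hub whose only job is to forward selected events to a peer hub. The sketch below is my own illustration under assumed names (`Hub`, `subscribe`, `publish`), not any ESB product's API.

```python
# "ESB islands" sketch: two independent hubs, connected by a bridge
# subscription that relays selected events. Illustrative names only.
from typing import Callable, Dict, List

class Hub:
    def __init__(self, name: str) -> None:
        self.name = name
        self.handlers: List[Callable[[Dict], None]] = []

    def subscribe(self, handler: Callable[[Dict], None]) -> None:
        self.handlers.append(handler)

    def publish(self, event: Dict) -> None:
        for handler in list(self.handlers):
            handler(event)

us_esb, eu_esb = Hub("US"), Hub("EU")

# The bridge: only "Order" events cross from the EU island to the US island.
eu_esb.subscribe(lambda e: us_esb.publish(e)
                 if e.get("type") == "Order" else None)

seen_in_us = []
us_esb.subscribe(lambda e: seen_in_us.append(e["id"]))

eu_esb.publish({"type": "Order", "id": "EU-1"})  # crosses the bridge
eu_esb.publish({"type": "Local", "id": "EU-2"})  # stays on the EU island
```

The filtering in the bridge matters: each island handles its own traffic locally and only relays events that are relevant beyond its borders, which keeps the lines between the hubs from becoming a firehose.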
One of the commonly mentioned event-driven scenarios is RFID. In that model you could have huge amounts of data out at endpoints (items moving between warehouses, leaving stores, sensor data, etc.). The volume of this data could potentially be overwhelming, and the value could be diminished due to the sheer volume. One way to solve this data-flood issue is to do aggregation at the perimeter. This is the idea behind the BizTalk Server 2006 R2 RFID approach: data gets aggregated at the edges, and only the aggregation gets relayed. The event-driven aspect here is that when a scanner reads an RFID tag, it fires an event.
This model can be drawn as a series of concentric circles, with raw data at the outer edges becoming “condensed” as it moves towards the center. By the time the flood of raw data makes it to the center, it has been distilled down into meaningful data that is business-relevant.
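The edge-aggregation idea can be sketched in a few lines. This is not the BizTalk RFID API, just an assumed illustration: thousands of raw tag-read events are collapsed at the perimeter into one compact, business-relevant summary, and only that summary travels toward the center.

```python
# Edge-aggregation sketch (illustrative, not a BizTalk RFID API): a flood
# of raw tag reads at the perimeter is condensed into per-SKU counts, and
# only the small summary message is relayed toward the central ESB.
from collections import Counter
from typing import Dict, List

def aggregate_reads(raw_reads: List[Dict]) -> Dict[str, int]:
    """Collapse individual tag-read events into per-SKU counts."""
    return dict(Counter(read["sku"] for read in raw_reads))

# 1,500 raw scanner events at a warehouse edge...
raw = [{"sku": "toothpaste"}] * 1200 + [{"sku": "soap"}] * 300

# ...become one compact message for the hub instead of 1,500 events.
summary = aggregate_reads(raw)
print(summary)  # {'toothpaste': 1200, 'soap': 300}
```

The trade-off is latency versus volume: the edge decides how often to flush its summary inward, so the center sees distilled, business-relevant counts rather than every individual tag read.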
Where this really gets me thinking is when you combine those two models. I’m still wrapping my head around it, but it seems to me that this would be an incredibly powerful approach to architecting a broad range of distributed applications. I love the notion of a tube of toothpaste being sold in Singapore triggering an RFID scanner event that causes messages to bounce around in distributed ESBs, eventually contributing to a message at a corporate ESB in another part of the world. A few years ago scenarios like that would have been impossible, but now this is the world we’re moving towards. Are you doing something like this today? I’d love to hear about it, as I see this as being the next logical step for some companies (manufacturing, consumer goods, etc.) in a highly evolved enterprise messaging/integration backbone, and as a way to leverage ESB approaches beyond conventional core application integration usage.