Monday, August 22, 2011

Key Learnings - Using EDA to implement the core SOA principle of "loose coupling"

A lot has been said about how SOA and EDA are separate "architectural styles", and architectural solutions often consider only one or the other. However, there is a real benefit to using both paradigms in unison to solve business problems.


Business events are the core concept driving any EDA implementation. Decoupling business applications, and the business functions and processes embedded in them, is the core theme of SOA. SOA implementations rely on a standards-based web services technology stack and canonical business documents (XML).

However, it is my contention that an enterprise that does not invest in web services or ESB technology can still leverage EDA-style business events to implement loosely coupled business services, provided it makes the effort to analyze its business events and create canonical representations of the key ones. This could mean defining business events that encapsulate a business concept along with an associated state indicator or action indicator. These business events can then trigger constructs such as event handlers that act as a facade, or layer of indirection, to execute a business function via an application API.
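A canonical business event of this shape can be sketched very simply. This is a minimal illustration, not a prescribed schema: the field names (`concept`, `state`, `action`) and example values are my own assumptions about what "business concept plus state/action indicator" could look like.

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Optional

# Canonical business event: a business concept plus either a state
# indicator (what the concept's state now is) or an action indicator
# (what should be done about it). Frozen, so events are immutable facts.
@dataclass(frozen=True)
class BusinessEvent:
    event_id: str
    concept: str                  # e.g. "Inventory", "PurchaseOrder"
    state: Optional[str] = None   # state indicator, e.g. "BELOW_MINIMUM"
    action: Optional[str] = None  # action indicator, e.g. "REPLENISH"
    payload: dict = field(default_factory=dict)
    issued_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc))

event = BusinessEvent(event_id="evt-001", concept="Inventory",
                      state="BELOW_MINIMUM",
                      payload={"sku": "ABC-123", "on_hand": 4, "minimum": 25})
```

Because the event is self-describing, any subscriber that understands the canonical shape can interpret it without knowing anything about the producing application.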

It must be noted that the terms event producer/event consumer and publisher/subscriber are being used loosely here to denote the initiator of the event and the owner of the business behavior that "knows" how to deal with it. In the SOA realm these would be the service consumer and the service provider, respectively.

The key to leveraging this model in SOA is the use of self-describing canonical business events that are subscribed to by independent event listeners. These listeners, and the event handlers they delegate to, insulate event producers and consumers from the complexity of interpreting the events. Producers/publishers and consumers/subscribers are thus decoupled from one another via canonical business events as well as messaging technologies.

Either layer, the event publisher or the event subscribers, can be altered as long as the contract expressed by the canonical business events is adhered to. Messaging configuration consoles also allow publishers and subscribers to be connected via metadata as opposed to hard-wiring these connections in code. Event handlers act as event adapters: they translate the event and invoke the required API call to deal with it. In doing so they serve as a business facade that hides the workings of the business application and allows it to change without affecting the event producers.
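The handler-as-adapter idea can be sketched as a small registry keyed on the canonical event's concept and state, standing in for the metadata-driven wiring a messaging console would provide. All names here (`handles`, `replenish_stock`, the stand-in for a purchasing API call) are hypothetical, for illustration only.

```python
# Registry mapping (concept, state) -> handler. In a real deployment this
# wiring would live in messaging metadata, not in code.
HANDLERS = {}

def handles(concept, state):
    """Decorator that registers a handler for a canonical event type."""
    def register(fn):
        HANDLERS[(concept, state)] = fn
        return fn
    return register

@handles("Inventory", "BELOW_MINIMUM")
def replenish_stock(event):
    # The facade: translate the canonical event into an application API
    # call. The string stands in for purchasing_api.create_po(sku).
    sku = event["payload"]["sku"]
    return f"PO raised for {sku}"

def dispatch(event):
    handler = HANDLERS.get((event["concept"], event["state"]))
    if handler is None:
        raise LookupError("no handler registered for this event")
    return handler(event)

result = dispatch({"concept": "Inventory", "state": "BELOW_MINIMUM",
                   "payload": {"sku": "ABC-123"}})
```

The publisher never learns which application fulfils the event; swapping the purchasing system means re-registering one handler, with no change to producers.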

To recap: if loose coupling is a core SOA principle that promotes business agility, then event handlers invoked via messaging technology and canonical business events can deliver that goal. The enterprise does not need to invest in a SOAP stack right away. If desired, technology standardization and interoperability can be introduced in a later phase by turning these event handlers into web services. This two-phased approach lets the enterprise defer investment in the technology stack without sacrificing business agility or the ability to offer novel business capabilities.
 
Please feel free to drop me a note.
thanks.
surekha -

Tuesday, August 09, 2011

Launching MDM as Part of Larger Initiatives

Let me walk through a situation that may be common in many organizations. You, as an IT visionary, recognize the need for an MDM initiative but are having difficulty securing funding due to cost concerns or simply organizational inertia. You decide to ride the coattails of another major initiative whose success is somewhat related to the successful implementation of MDM. Is it a good idea? I would suggest yes, but be careful about setting expectations and getting the right level of buy-in, or your successful execution of the MDM initiative may be perceived as not so successful. This can happen because of the personalities involved, or because MDM implementation risks were not included in your schedule or cost estimates. You can be sure that you are going to run into data issues, and the amount of data you will have to deal with will always be more than you think. All these issues may impact the larger initiative and will reflect badly on the MDM effort.

Make sure all stakeholders understand that while MDM is expected to support the larger initiative, it has its own benefits and its goals are much larger in scope. Also, include additional time and cost to mitigate the risks.


Ashok

Saturday, July 02, 2011

Business Event Subscriber Responsibility?

I thought I would write about some of the expectations of a business event subscriber. These would be rules that the business event publisher could depend on; together, these sets of rules form part of the basis for Event Driven Architecture (EDA) based implementations.


Subscribers of business events and business alert notifications often assume that the business event publisher is responsible for ensuring that duplicate events and repeat alert notifications are suppressed. However, to protect itself the subscriber has to be able to "analyze" the business event to determine whether an erroneous event/alert was sent by the business event provider, the messaging architecture, or the enterprise service gateway. The subscriber (a business application or business process) has to "know" when to discard a business event as a duplicate, and when to re-apply the same business event, which could have been re-issued as a result of a change in business policy or an increase in a business alert threshold. Knowing whether an event transmission is real or a false notification ensures that the outcomes of EDA implementations are valid.

For instance, if a "low inventory alert" is received by the Purchasing Process, it may react by transmitting a PO to the supplier. However, if the same Purchasing Process gets a second alert a few seconds or minutes later, chances are it will choose to ignore it. Not examining the alert more closely may cause it to be discarded erroneously. It is possible that, to optimize and tune response processing, the subscribing process looks only at the product and quantity to de-duplicate alerts received within a certain period, causing the second alert to be discarded. The business impact of ignoring the second alert would be a failure to act on an increase in the "minimum product level". The bottom line: the subscribing process has to know how to differentiate between fundamental business rule changes and duplicate alerts, and must not let a compute-process optimization overrule business policies.
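The failure mode above can be made concrete. In this sketch, de-duplication keys only on product and quantity, so a re-issued alert carrying a raised minimum-stock threshold looks like a duplicate; comparing the threshold field as well separates a true duplicate from a policy change. The field names (`product`, `quantity`, `min_level`) are illustrative, not from any real alert schema.

```python
# Cache of the last alert seen per (product, quantity) key.
seen = {}

def is_duplicate(alert):
    """True only if this alert repeats an earlier one with no policy change."""
    key = (alert["product"], alert["quantity"])
    previous = seen.get(key)
    seen[key] = alert
    # Same product/quantity with an unchanged threshold is a true duplicate;
    # a changed threshold signals a business-policy change to re-apply.
    return previous is not None and previous["min_level"] == alert["min_level"]

first  = {"product": "P1", "quantity": 4, "min_level": 25}
repeat = dict(first)                                        # true duplicate
policy = {"product": "P1", "quantity": 4, "min_level": 40}  # raised threshold

results = [is_duplicate(a) for a in (first, repeat, policy)]
```

A de-dup rule that ignored `min_level` would have returned `True` for the third alert and silently discarded the new business policy.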

In general, in an event driven architecture the disconnectedness of the publisher/subscriber pair places a burden on both parties to ensure there exists a mechanism, and rules, to enable "semantic" interpretation of the message payload that carries the business event or alert. Without investing in the definition and analysis of these semantics, the loose coupling of EDA would deliver a scalable technical architecture but would not yield the right business outcome.

It must be noted that standards such as WS-Eventing, WS-BaseNotification and WS-Notification help define the WSDL and XML Schemas for the mechanics of loosely coupled, interoperable EDA implementations, but the business semantics and rules for constructing and consuming the events are not really well laid out. Not having had exposure to specifications such as ebXML and the OAGIS business documents, I assume some B2B interactions may have more mature semantic definitions, but for the most part I think these rules of engagement still need to be fleshed out by the participating parties.

Thoughts and feedback on this topic would be very useful.
surekha -

Sunday, May 29, 2011

Service granularity and service reuse - why information semantics are key


In the following blog, which refers to the Amazon CEO's letter to the shareholders, one is truly amazed to read how the Amazon team constructs a single product detail page from a combination of 200 to 300 services!

Amazon's architects may have truly found a way to identify the right level of service granularity so that services can be combined into composite offerings without distorting the information in the combined service. This so-called semantic dissonance has led many of us architects to create coarse-grained services that preserve the quality of information, but at the cost of reuse!

This is the balancing act: how to define a service with the right grain of information encapsulated in it, such that once it is combined with another service, the composite is still able to deliver meaningful information at the grain of the services originally combined. This is what we mean by the semantics that govern how to combine two or more granular services into a meaningful composite that continues to encapsulate valid and accurate information. It may be technically possible to create a composite or a single coarse-grained service from two granular services (as long as the canonical models/payloads have common elements), but the combination may overlook key business constraints, business rules, regulations and algorithms that render the final response invalid and inaccurate.

A very simple example: combining product sales for a region with product marketing dollars to yield a service that delivers marketing efficacy may be a great new service concept. However, if the product sales service does not provide information about the tenure of the product company in the region, or the presence of similar competitor products in the region, then the marketing efficacy service cannot account for the lift in sales accurately. The reason is that it cannot attribute the efficacy of the marketing dollar to the "novelty" factor (where the product is the first to enter the market in this region, or debuts in the region at a very attractive "entry price point"). Both of these artificially suggest that the marketing campaign was far more successful than it might really be; the price and novelty had more to do with the sales lift.
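A toy sketch of that composite service makes the point: joining the two granular payloads on region is technically trivial, but the semantic rule, excluding regions where novelty confounds attribution, has to be added explicitly or the composite reports invalid numbers. All field names and figures here are invented for illustration.

```python
# Granular payloads: per-region sales lift (with market context) and
# per-region marketing spend.
sales = {"north": {"lift": 0.30, "new_to_region": True},
         "south": {"lift": 0.12, "new_to_region": False}}
spend = {"north": 100_000, "south": 100_000}

def marketing_efficacy(sales, spend):
    """Lift per $100k spent, with a semantic guard for novelty."""
    result = {}
    for region, s in sales.items():
        if s["new_to_region"]:
            # Semantic guard: a product new to the region inflates lift,
            # so efficacy cannot be attributed to marketing dollars.
            result[region] = None
        else:
            result[region] = s["lift"] / (spend[region] / 100_000)
    return result

efficacy = marketing_efficacy(sales, spend)
```

Without the `new_to_region` field in the sales payload, the naive join would report the "north" campaign as more than twice as effective as "south", which is exactly the distortion the post describes.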
 
The use of information by the business to make strategic decisions, the study of information flow across lines of business, and the alignment of business information to strategic business activities are key to business architecture. By extension, it is in the realm of business architecture to govern service interactions and the creation of composite services so as to guard against semantic dissonance.

Please share with me your thoughts.

surekha -
 

How to take a Transaction Based Vendor Relationship to the next level?

In the following blog, I talk about "What is the definition of a strategic partnership with a vendor?" and outline a few thoughts on how large enterprises should engage with their vendor partners and what the expectations of such a relationship should be. A fellow blogger of mine expressed his disappointment with vendor partners and their focus on selling new product as opposed to helping maximize the returns from existing IT assets.

I wholeheartedly agree that the pressure from vendors can be exasperating. I was thinking about whether there is a way to turn this around. What if, as an example, your IT product vendor were told that they would have to provide expertise (at no cost) to solve current interoperability issues or resolve performance bottlenecks between their product and one other product, while showcasing an operations administration and management/monitoring (OAM) tool? This would be their elite product engineering or premium consulting service participating on-site, mentoring your team, not the "support" organization that has to troubleshoot remotely.

Another example: if this is a packaged solution vendor with a long deployment cycle, then the vendor can be asked to offer (again for free) analysis tools, processes, best practices and other accelerator kits that improve speed to market for their product, with the chance to demonstrate components of their next generation product. Again, the vendor has to part with IP (intellectual property) and address your pain point, in exchange for the chance to demo a product that could be a fit in "your" enterprise.


The result would be the ability to extend or enhance the life of existing investments while evaluating the vendor's product, processes and expertise in a real-life scenario. Most importantly, you are putting their resolve to the test: are they able, and willing, to be your strategic partner and not just a transaction-based product vendor?

Of course, I would be remiss not to state that your VMO, PMO and legal departments would have to help ensure this is a fair and scientific discovery process, and that the licensing models are conducive to both parties.

As always your comments are welcome. 

Surekha -

Tuesday, October 05, 2010

Should Architects aspire to be Product Managers?

One of the interesting trends I am observing is architects aspiring to be Product Managers. I have recently come across multiple PMs who were architects, and have also been approached by a few architects who are interested in becoming one.

Following are my thoughts on when Architects should consider PM as their career path.
  1. A true Business Architect with the ability to map the technology strategy to align with the Business Strategy
  2. Good understanding and hopefully first hand experience interacting with the real customer (not the business units)
  3. Good understanding of revenue and business model
  4. Passionate about, and a believer in, the role of the product in the customer's life (whatever they are using the product for)
  5. Ability to influence cross-functional teams and get everyone passionate and focused on the product
  6. Willing to change course mid-stream based on customer/market feedback
  7. Ability to ride up and down the market roller coaster
  8. Ability to stay singularly focused on a single product/portfolio

Do not take on the role:
  1. assuming that the PM gets to define the product and everyone else will follow without question (the PM is responsible for bringing everyone on board)
  2. without considering whether one would be a great Architect vs. a good Product Manager (focus on what one is better at doing - great advice given to me by my manager)
  3. if one does not want to deal with constant communication with customers / leadership teams
  4. assuming that the grass is greener there - doing what one does best shall reap the right rewards

Just my point of view, and this could also apply to taking on the role of a Business Liaison in an IT organization.

- Yogish

Monday, June 14, 2010

Enterprise Architecture (EA) team's role in Solution Delivery

Hello Fellow Architects - I am writing this blog in response to Todd Biske's blog entry on the fact that Enterprise Architecture Must Assist in Delivery. I am in complete agreement with Todd on this aspect of EA's role in the enterprise. Participating in the delivery of business solutions is a value-add that cannot be discounted. Not only does it add to EA's credibility, it allows EA to be viewed as an ally, which makes it easy for EA to stay in touch with the "goings-on" in the enterprise. This partnership and visibility are key to identifying cross-domain synergies and cross-business-process impacts across all the granular projects.

As the leader of EA, I have tried to instill this behavior in my team. My team is now seen as a much more "useful" partner by delivery, and not just as a "watch-dog". Some of this perception change was a result of EA working with delivery on practical multi-step solutions/alternatives for EA standards compliance. We reached out to delivery teams that were under a severe time crunch to come up with remedial action plans incorporating tolerable architecture compromises, ones that would not jeopardize the stability and flexibility of the solution while still delivering business value (on time!). Putting such intermediate architecture solutions in place, with hooks for building upon these constructs, increased buy-in from both the application delivery teams and their business partners. The result: the business partners got to test a more robust solution and were willing to give the project team adequate time in the next phase. In addition, we were "invited" to review plans for the next phase, where we could inspect program plans that included time to work on the agreed-upon architecture compliance roadmap. I have to admit that EA did go down to a much more technical design level to help with integration and infrastructure, in order to communicate our architecture roadmap and recommendations.

EA achieved not just inclusion in the process; the work also forced our architects to stay in touch with reality. This education allowed our Enterprise Architects to understand the limitations and constraints of both the enterprise and the technology. Learnings from this experience were then taken into consideration in refining the technical reference implementation for the EA strategy. Finally, this education has made EA a more formidable team, able to bring this knowledge to the evaluation of vendor "marketecture" and thus keep irrelevant products out of the enterprise.


Your feedback is invaluable.

Thanks.

Surekha -

Thursday, February 18, 2010

I have been busy developing globalized SaaS products

It has been a while since I last blogged, especially after joining Intuit in August 2008. For those not familiar with the Intuit engineering culture, Intuit is one of the most admired companies out there, and I can attest to that (feel free to drop me a line if you want more details).




OK, enough about where I work. Just a quick reminder to the readers of this blog: the comments on the blog represent my own personal views and not those of my employer or any third party (with that disclaimer in place, I can now continue :) ).

For over a year now I have been working on a personal finance tool called Intuit Money Manager, which was launched last month in India through our partner Money Control - click here to learn more about the product and click here to get started with a 90 day free trial.


So what has this got to do with the Strategic Use of Information Technology? Well, for those interested in managing your own money (like me): tools like Quicken, Money (Microsoft withdrew the product from the market last year) and Mint.com have been very useful in helping plan our finances.

Intuit Money Manager takes only a few minutes for consumers in India to get started, and with capabilities like account aggregation, auto-categorization, goal setting, trend review (income & spending) and investment tracking, it provides a comprehensive PFM tool configured for the Indian market. For folks in the US, I would point to Mint.com, which provides similar capabilities; I have personally been using it since launch.

In subsequent blogs I shall try to provide some additional insights on developing globalized SaaS products and how adopting Service Oriented Architecture has helped us achieve this at a much faster pace than traditional development methodologies. This approach is not adopted just by our product teams inside the company: Intuit also provides third-party developers access to services on the Intuit Partner Platform, and I would recommend you check it out, especially if you want to develop and launch SaaS-based consumer or small business products rapidly.

More to follow later and as usual feel free to drop me a line with your comments/feedback.

- Yogish

Tuesday, July 07, 2009

Issues with SOA Adoption

Here is my attempt to identify some of the reasons for failure to adopt SOA. This time the focus is on not having a holistic SOA enabling infrastructure.

Many large enterprises try to reduce vendor lock-in by not having a single provider for their entire SOA development/deployment stack. This philosophy works great from a risk management perspective. However, the same risk management strategy directly competes with the "speed to market" gains promised by SOA.

1. Not having a unified platform that facilitates seamless integration across the service orchestration layer, the application layer, the data layer, etc. leads to long system integration and debugging cycles.
2. Not having a centralized facility for end-to-end management and monitoring of services can cause long outages, and hampers the ability to track information/transactions flowing across the various layers of the service architecture (i.e. the service orchestration layer, the application/business logic layer, the data layer, etc.).
3. Not having a holistic SOA governance suite that enables discovery of existing service assets at design time and provides service utilization information at runtime causes service proliferation issues.

The following blog by my colleague and co-blogger Yogish prompted me to address this issue, as it speaks to the integration strategy of Oracle and BEA, two big names in the space of SOA infrastructure.

Analysis on Oracle’s BEA integration strategy

On the one-year anniversary of Yogish's blog, I am being optimistic in assuming that this acquisition will lead to a holistic SOA platform that encompasses service interactions at design time and runtime. I am also hopeful that a single SOA platform will provide seamless integration across the various layers of the architecture: the service orchestration/mediation layer, the business/application logic layer and the data layer. In addition, my hope is that a stronger player such as Oracle (following the BEA acquisition) will start pushing for "SOA standards" and start holding other SOA players accountable for staying compliant with them, in much the same way that other vendors will now put more pressure on a stronger SOA contender such as Oracle to abide by these same standards. This "peer pressure" will hopefully make interoperability an achievable goal.

In the event Oracle is able to pull together a holistic SOA stack, here are some advantages for its customers -

1. Having one vendor support the end-to-end stack enables a customer to find the right support, whether for SOA product suite integration or for the tooling that lets an enterprise cut down its service lifecycle timeline (service design, service deployment) and improve its time to market.
2. Having a player such as Oracle, which has traditionally focused on scalability, reliability and end-to-end monitoring, will provide customers with a SOA platform robust enough to meet stringent SLAs at runtime, making service availability more predictable and less of a guessing game.
3. Having one vendor provide a holistic service governance suite will allow an enterprise to reuse its enterprise service assets (if authored appropriately), thus enabling the business to compose existing services to offer new capabilities.

In conclusion, I want to state that I am not aligned with Oracle or any other SOA player; I merely want to comment on issues that have hampered SOA adoption and how they might be addressed. I believe that a unified SOA infrastructure platform is a key capability needed to truly realize the full potential of SOA.

Thanks for listening.
surekha -

Sunday, May 17, 2009

Role of Events in taking Proactive Action

In exploring the role of events: is it possible to achieve predictive analysis that provides rapid response and enables proactive action?

One possibility is to track how humans handle event exceptions, capture their processing logic, and turn it into business logic. This allows one to perform event correlation and to automate exception handling. Here, event handling can take the form of rapid response or proactive action. Further, analysis of precursor events (i.e. events that occurred just prior to the exception) could allow predictive alerts to be raised to circumvent exception situations, thus enabling proactive action to be taken.

If sensors and RFID technology are the first steps toward event capture and event processing, then the addition of event analysis and event composition (in the Complex Event Processing style) is the next step in the evolution, and exception-based learning with proactive, action-based event emission may be considered a more advanced step still in the progression of EDA.

Many transportation companies, carriers and just-in-time supply chain providers could adopt EDA for rapid response or even proactive action. For example, combining weather-based events, traffic flow patterns, etc. can be used to ensure the quality of the goods being transported and minimize wastage in transit. Furthermore, containers that transport organic food with no preservatives could use special "sensors" that detect the emission of gases and chemicals within the shipping container chambers to assess the freshness and ripeness of the produce. If these events indicate rapid ripening, proactive action-based events can be sent to the shipping containers to lower the temperature, etc., retaining the freshness of the produce in transit with minimum damage. (This example is only illustrative, as I am not an expert on this subject.)
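The container example can be sketched as a tiny event-processing loop: incoming sensor readings are checked against a ripening threshold, and a proactive command event is emitted for any container breaching it. The threshold, field names and event vocabulary here are all invented for illustration; real produce-monitoring rules would come from domain experts.

```python
# Assumed ripening threshold: ethylene gas concentration, in ppm, above
# which produce is treated as ripening too fast (illustrative value).
ETHYLENE_LIMIT_PPM = 1.5

def proactive_actions(readings):
    """Turn raw sensor events into proactive command events."""
    commands = []
    for r in readings:
        if r["ethylene_ppm"] > ETHYLENE_LIMIT_PPM:
            commands.append({"container": r["container"],
                             "action": "LOWER_TEMPERATURE",
                             "reason": "rapid ripening detected"})
    return commands

readings = [{"container": "C1", "ethylene_ppm": 2.1},
            {"container": "C2", "ethylene_ppm": 0.4}]
commands = proactive_actions(readings)
```

The point of the sketch is the shape of the loop: current-state events in, proactive actionable events out, with the decision rule sitting where the codified human exception-handling logic would go.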

It must be noted that traffic and weather events are combined with information about product preservation rules, then correlated and processed to preserve sensitive consumer products and their quality, but only after this type of behavior has been observed in a human actor and the exception processing logic has been codified for automation. EDA in this case is used to track human exception processing and then automate that behavior, all the while depending on incoming current-state events and outgoing proactive actionable events.

It seems a very plausible use of EDA, so I am curious how many of you are using EDA to solve similar use cases.

As always your input is very valuable.
surekha -