Monday, December 29, 2008

Doing a lot more with a lot less in the current environment

Given the current market conditions, all business executives are once again challenging their technology teams to do a lot more with a lot less. Following are some of my thoughts around this (nothing new - just going back to the basics).
  • Focus on solving your customers' needs - independent of the industry.
  • Do not scale back on innovation. Markets typically look different coming out of a recession than they did going in. Only those who focus on innovation will come out as winners.
  • Focus on your core business - defer adjacent markets until the market recovers (unless they are related to the innovation).
  • Do not focus on developing new infrastructure (unless absolutely needed). Build out the infrastructure incrementally as new products and offerings are upgraded and rolled out to customers.
  • Do not invest heavily in large packaged applications. As the entire market is going through a shift, it may not be wise to invest in large packaged applications. They will consume a lot of resources (and $$$) without any major differentiation. It may be easier to subscribe to the same applications from a SaaS provider.
  • Do invest in a few strategic initiatives.
  • Do not attempt to crawl before you run - "run baby run". The company that runs the fastest to meet its customers' demands WILL WIN.

Just some of my final thoughts to close out 2008, and as usual please do feel free to drop me a line with your comments and/or feedback.

Wishing everyone a very happy and prosperous new year.

Yogish

Wednesday, December 03, 2008

Key Learnings from recycling

Over the long weekend I decided to recycle all the boxes that were in my garage, and following were my observations:
  • I had not recycled the boxes for a very long time - so it was a complete mess in the garage.
  • About a decade back you could just put the boxes out and the garbage collectors would take them for recycling - not any more. We now need to break down the boxes, flatten them and stack them in the provided container in a systematic manner - otherwise they WILL leave them behind.
  • As I was breaking down the boxes, I realized that the technology has changed. It is very easy to flatten and stack them.
  • However, there were a couple of boxes where I needed to use brute force, cut them and even stand on them to forcefully flatten them. Some of them were recently manufactured ones - guess they did not use the latest technology.
  • These days - everything seems to come in a box, no matter what you buy.
  • It is important to recycle the boxes on a weekly basis - instead of waiting for a long time. The longer you wait the worse it gets.

Hmm!! - haven't I heard this before? :)

Yogish

Saturday, November 08, 2008

Architecture Organization Patterns

Over the past couple of years I have observed that companies from the high-tech industry are adopting two distinct patterns for organizing their architecture team.

Centralized Organization:
  • This is true for most IT organizations led by a CTO-IT, VP of EA or Chief Enterprise Architect
  • All EA members report to the head of the EA team
  • Individual EAs focus on a core technology, an IT function (such as networks and operations) or a business domain
  • Business Domain Architects report (dotted line) to the head of LOB-IT (typically the Divisional CIO)
  • LOB-IT may also have Architects who report (dotted line) to members of the EA team
As IT organizations need to focus on what is best for the enterprise - rather than on individual business units - this is the preferred organization structure adopted by IT organizations.

Federated Organization:
  • Most high-tech companies have adopted this model for their lines of business (with IT adopting the Centralized Organization)
  • Typically there is a Chief Architect for the company who reports to the CTO of the organization
  • Every business unit has a Chief Architect who directly reports to the Head of Engineering of the business unit and dotted line to the Chief Architect of the company.
  • The Chief Architect sometimes reports directly to the GM of the Business Unit.
  • The GM of the Business Unit and the CTO typically report to the CEO
  • The head of engineering reports to the GM or is also the head of the Business Unit (varies - no consistent pattern observed)
  • The success or failure of this pattern is directly dependent on the leadership skills of the Chief Architect of the company
  • The Chief Architects of the Business Units also play an important role and will be effective only if they constantly communicate with each other
As GMs like to have a sense of ownership, this organization pattern makes sense for the high-tech industry. If the Architecture team is centralized under one Chief Architect, my observation has been that the GMs then hire their own Chief Architects (under some other title), which creates huge organizational conflicts.

Some Interesting Observation:
  • One of the companies, during its transformational phase, transferred all of its key resources - including development managers, developers, DBAs, Operations, etc. - to the architecture team. It made a public statement that the Architecture team was core, reorganized the rest of the teams, and over the following three years transferred the folks back out to the new organization - using the Architecture team to retain the best resources during the transformation phase.
  • There is a 50% split between companies that make the shared-resources team part of the Architecture team and those that keep it as an independent team.
  • Over the last 12 months, a lot more companies in Silicon Valley have been looking for Chief Architects responsible for both Business and Technology Architecture. However, they do not explicitly mention Business Architecture in the job title, even though the job description includes Business Architecture responsibilities.
Just some of my observations about Architecture Organization Patterns, and as usual please do feel free to drop me a line with your comments and/or feedback.

Yogish

Saturday, October 25, 2008

Comparing Current Financial Crisis to SOA - Continued

I agree completely with Yogish's views as posted on "Comparing Current Financial Crisis to SOA" and his advice on treading lightly and having solid justification prior to undertaking SOA style projects.


A) An upfront investment has to be made in performing the right level of business process analysis and business architecture to pick the right SOA service candidates.


B) One should not embark on SOA with a highly visible project, as these types of projects operate under unwarranted pressure for delivery and with little patience for "architectural constructs" - SOA or otherwise!!


C) Any high-return process with established KPIs and benchmarks is a good candidate for a first-time SOA implementation. Here gains are measurable (in the form of cost avoidance, cost reduction, etc.) and can be directly tied to SOA-enabling these operational processes.

D) One has to invest in marketing the success of these projects and the services that were responsible for the quantifiable returns delivered by following SOA principles. Tying the benefits of SOA style services to the benefits rendered by these projects helps the business relate to abstract constructs such as SOA, and so offers a better chance for service adoption. Also, these business services that deliver efficiencies can be shown to offer speed-to-market gains for upcoming projects by encouraging reuse.

E) As always one should not assume technology will be the magic bullet. An assessment has to be made to understand if all components of the process/ business capability have to be refactored into services. One has to be open to leveraging legacy assets to reduce risk even if all that can be achieved is establishing a document based interaction or a mediated invocation of the legacy system.

Your feedback is invaluable.
Thank you!!
Surekha -

Saturday, October 18, 2008

Comparing current financial crisis to SOA

Comparing the current financial crisis with SOA was one of the topics we briefly discussed at the Community of Practice working group of the SOA Consortium. Following are some of my thoughts on this topic:
  • The current financial crisis is based on a flawed foundation - the sub-prime mortgages
  • Investment bankers and mortgage companies composed new and complex derivatives and resold them all across the globe - all in the name of new investment models
  • Most of the executives of the involved companies, as well as government agencies, reviewed some aspects of the new world and, based on what they knew then, thought it was all OK.
  • No one seriously took time to review the potential business risk and take corrective action before it was too late
  • The investments were so interdependent that it was impossible to understand the implications or the impact of letting any one of them fail. Example: there is no rationale for why Lehman Brothers was allowed to fail while AIG was rescued.

...and we see the result today. Doesn't this sound familiar?

Following are my recommendations on how to make sure that our SOA adoption does not take the same path (some will).

  • Make sure that all SOA adoption is based on a sound business and technical foundation. Document the business architecture (design) and develop a technology road map (architecture) to meet these goals.
  • Do not hype and push rapid adoption without thinking through the life cycle.
  • Ensure that there is proper governance around your SOA adoption - over-communicate and be a bit more conservative on the risks.
  • It is not necessary to adopt SOA for all implementations - I would rather recommend making sure that some critical business capabilities (applications) are provided (built) using the traditional model.
  • Do not depend on services that are more than two levels deep (Reference: SOA Theorem #4: Service Hierarchy should not exceed more than three levels). This is to ensure that you understand each of the services and their performance, which will help you take corrective action if required.
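The hierarchy-depth rule above is easy to check mechanically if you keep a record of which services call which. The sketch below is hypothetical - the registry format and service names are invented for illustration, not taken from any real SOA tool:

```python
# Hypothetical sketch: verify that no service dependency chain in a
# registry exceeds three levels (per SOA Theorem #4). The registry maps
# each composite service to the lower-level services it invokes.

def max_chain_depth(service, dependencies, seen=()):
    """Length of the longest dependency chain starting at `service`."""
    if service in seen:  # guard against circular dependencies
        raise ValueError(f"cycle detected at {service!r}")
    deps = dependencies.get(service, [])
    if not deps:
        return 1
    return 1 + max(max_chain_depth(d, dependencies, seen + (service,))
                   for d in deps)

# Example registry (invented names): composite -> services it calls
registry = {
    "order-capture": ["customer-lookup", "pricing"],
    "pricing": ["tax-calculation"],
}

for svc in registry:
    depth = max_chain_depth(svc, registry)
    assert depth <= 3, f"{svc} has a {depth}-level hierarchy"
```

Running a check like this as part of governance makes the "no more than three levels" rule enforceable rather than aspirational.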

These are just some of my initial thoughts, and as usual please do feel free to drop me a line with your comments and/or feedback.

- Yogish

Monday, October 13, 2008

Canonical Models and Services

In thinking about canonical models one mostly thinks about an enterprise-worthy or industry-compliant representation of a business concept or a business entity. Often this canonical-model-compliant payload is packaged in a response envelope that is returned by the service provider. However, the question is whether the term canonical model can be used to define the "business request" or if it is limited to the "business response". This blog explores the canonical request models that might be used to alter the behavior of enterprise services, using search services as an example.


Let us look at a "search service" where one could think of a "context based search request" that is made to the search facility to make the search behavior of the service providers more efficient. In addition, this "search context" could also be used to express the specific search needs of the service consumer. Given this, the question is whether the canonical model could be used to define a request model that is a representation of the service provider's search/browsing parameters and the service consumer's usage context. Here the search capabilities of a search service could use search/browse related grammar rules to interpret the search context that is embedded into the request by the consumer. The context provided by the consumer allows the service provider to "accurately" and "efficiently" interpret the request.


For example, search semantics embedded in a search service that returns cross-sell product options could include information like "customer preferences", "type of credit card used by customer", "shipment processing preferences", "customer purchase history", etc. This type of "customer information" represents usage semantics embedded in the request and allows the provider to return cross-sell product options that are geared to the specific customer being referred to by the consumer call. This in turn allows compilation of product suggestions which incorporate the profitability levels and purchasing ability of this particular customer. The "customer information" is the usage context that is sent in with the search request. The search results are thus customized to the consumer context without the consumer needing to make multiple calls to get the desired results. Therefore, one could see how the canonical request model has not only the standard search parameters supported by the provider but also allows the consumer to express its search context and how it might apply the results of the search/browse call.


In addition, the canonical-request-model-based search context could be used by the consumer to drive processing efficiencies in the provider. These request-based keywords/context can help the provider eliminate or short-circuit certain types of processing as well. For example, a canonical request model could support "new customer search" vs "existing customer inquiry" keywords to help alter the provider's audit processing and past-inquiry lookup behavior. Here a consumer call with the "new customer search" context could suggest to the provider that the audit/security and past-inquiry lookup be disabled, thus positively impacting the response time of the search call.
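A minimal sketch of this idea follows. All names here (the `SearchRequest` shape, the context keywords, the processing steps) are invented for illustration, assuming a provider that skips audit and past-inquiry lookup when the consumer declares a new-customer search:

```python
# Illustrative sketch: a canonical search request carrying consumer
# usage context, which lets the provider short-circuit processing.

from dataclasses import dataclass, field

@dataclass
class SearchRequest:
    keywords: list                              # standard search parameters
    context: str = "existing_customer_inquiry"  # consumer usage context
    preferences: dict = field(default_factory=dict)

def handle_search(request):
    """Provider side: the context keyword alters the processing plan."""
    steps = ["match-products"]
    if request.context != "new_customer_search":
        # audit trail and past-inquiry lookup only apply to customers
        # the provider already knows about
        steps = ["audit-log", "past-inquiry-lookup"] + steps
    return steps

# A new-customer search skips the audit and lookup steps entirely
assert handle_search(
    SearchRequest(["router"], context="new_customer_search")
) == ["match-products"]
```

Note that the provider's interface never changes; only the payload of the canonical request steers its behavior, which is the point of the pattern.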


In conclusion, we can see that consumer usage semantics allow the provider to tailor its search behavior to the needs of the consumer without changing the provider's service interface.

Please give me your feedback on this topic. Also, I would be interested in knowing whether or not you have leveraged these concepts in your industry vertical.




Surekha -
Thank you!!

Monday, October 06, 2008

Organization Skills assessment and Reference Architecture

The other day I was participating in an engineering skills assessment discussion and it dawned on me that mapping the skills to a reference architecture makes it a much easier and simpler discussion to have. The reference architecture enabled us to first list the categories and then drill into each of them to define the skills, as well as the leveling required, for the Architect, Developer, QA and Manager positions.



Key Learnings:
  • Having a reference architecture also helps in organization (engineering) skills assessment
  • The skills assessment/mapping is independent of governance and organization. However, governance drives where these resources belong

Friday, September 26, 2008

SOA Consortium announces SOA Case Study Winners

The SOA Consortium and the CIO Magazine this week announced the winners of the SOA Case Study competition. Please click here for the results.

Tuesday, September 23, 2008

Organizational Issues with SOA/EA ...

Last time I wrote about how the shortage of critical skills in the developer community is an obstacle to the full realization of SOA's potential. I got some irate responses from developers saying that it is the short-term focus of IT management that is to blame. I must say that I did not mean to single out the developer community as the primary reason for not getting the most out of SOA, EA, etc. There is plenty of blame to go around, including but not limited to over-hyping vendors, under-performing technologies, cost pressures from the executive suite, etc. As far as the management part of the equation is concerned, the biggest issue is the drive to deliver short-term results. We live in a world of instant gratification at the expense of long-term viability. Wall Street has proven it again and again that people are willing to throw away their long-term future to realize short-term gains. Bonuses are not tied to something that will pay off in the long term. So, while everyone recognizes the value of architecture, how many CIOs do you know who come from an Enterprise Architecture background?

I find that we in EA are constantly in a defensive posture, trying to convince everyone that following EA best practice will deliver results in the long term, while others who are solving day-to-day problems get all the accolades (and promotions to senior management positions). EA must find a way to go beyond being a niche player in an organization - either always looking for the "low hanging fruit" for the latest alphabet mix, or something that everyone merely tolerates - to becoming a true player in transforming the organization. For that to happen, we need people at the CIO level who have a strong bias towards Architecture, and for that to happen we as Enterprise Architects have to learn to not only talk about SOA, BPM and CEP but also be able to talk about revenue growth, profit growth, costs, and the operational aspects of business in terms understood by the senior executives who decide who gets to occupy the office where IT decisions are made.

Ashok Kumar

Sunday, August 31, 2008

Business Architecture – Process Architecture and Information Architecture!

First the problem statement - typically the Line of Business (LoB) owned the business processes while IT owned the data/information aspects. This helps explain why these two key aspects were never in sync. The industry is now realizing the need for alignment between the process and the informational aspects and has created a new discipline of "Business Architecture" to encompass business process and business information.

Here the idea would be to leverage the business information flow in an optimal manner to drive business process definition, business process engineering and business process optimizations as opposed to shoe-horning this crucial business information into the business process. It must be noted that business information can take forms such as rules to transform business data, business decisions made in the context of an exception in the process, regulatory influences on a process or short-circuiting rules that enable a process to either be aborted without a detrimental effect on the process or rules for enabling a process to be completed quickly in “special business situations”. It is the access to this contextual business information that makes business process automation possible without losing the knowledge base of the subject matter experts.

A key goal of Business Architecture is to bring about business process efficiencies by accessing the right information. As was mentioned earlier, business architecture allows sharing of business information (an enterprise asset) in the context of a business process (a LoB asset). Business architecture also helps highlight lost opportunities by drawing up scenarios where lack of business information availability (due to a "not invented here" complex or lack of proper stewardship) prevents business process efficiencies from being realized. In addition, a business process without the decision-influencing contextual information still leaves the LoB user to make one-off decisions that may be based on invalid or insufficient data, leading to process execution inconsistencies.

Furthermore, in an exceedingly inter-related enterprise, or even an extended enterprise, sub-optimal decisions made in any business process lead to contradictions in the rest of the enterprise. From an upstream process perspective, these lead to business strategies being interpreted erroneously. From a downstream perspective, the business events being emitted by the siloed business process may have insufficient or improper information for execution, leading to more exceptions in the downstream steps of the process. This may slow down the entire business process chain, having a negative impact on the business.

Another key goal of Business Architecture is to ensure that the business information captured as part of Business Process Optimization efforts is consistent in its reporting. Here business process decisions made before and after the process reengineering efforts have to be captured consistently and accurately. This baseline allows the process efficiencies to be quantified. If the business information that is emitted in the form of business-process-based events is not made available beyond the LoB process boundary, then sub-optimal process execution in the downstream steps could overturn the effects of any optimization work.

Information has to be captured consistently, emitted in a timely fashion, and finally the events have to be interpreted accurately to ensure that the business strategy and the optimizations envisioned by the business in implementing the value chain activities are in fact resulting in competitive advantage. This knowledge enables further process improvements and makes it easier to deal with business process adjustments, especially when the enterprise is faced with making a dramatic shift to deal with changing market or regulatory conditions.

Finally, business process management, business process optimization and business activity monitoring need access to a well-thought-out MDM philosophy and strategic analytical marts that can be accessed via informational business services. These types of information access services can combine real-time operational BI, real-time business events and analytical sources to "close the informational loop"!!! Please see my whitepaper on this topic as well - Closing the Loop: Using SOA to Automate Human Interaction!!

As always thank you for your feedback.
surekha -

Friday, August 29, 2008

Architecture tenets of High Cohesion and Loose Coupling

Architecture tenets of High Cohesion and Loose Coupling – Both of these tenets are related to one construct i.e. that of a “Contract”.

The term contract in information technology involves the definition of high-level interfaces in the form of a coarse-grained set of operations that have well-known inputs, outputs, and clear exceptions or faults. The contract hides all of the details of implementation and allows these hidden implementation details to behave as one cohesive unit - in that it provides support for "high cohesion". By extension, in separating the client (or consumer, or caller) of the contract from the implementation details it provides support for "loose coupling".
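As a small sketch of the idea, consider the following hypothetical contract (the service and operation names are invented for illustration). The consumer sees only the interface and its one declared fault; everything else is free to change:

```python
# A minimal sketch of a "contract": a coarse-grained interface with
# well-known inputs, outputs and a declared fault. Names are invented.

from abc import ABC, abstractmethod

class AccountServiceFault(Exception):
    """The declared fault: the only error a consumer must handle."""

class AccountService(ABC):
    """The contract. Consumers depend on this, never on implementations."""

    @abstractmethod
    def get_balance(self, account_id: str) -> float: ...

class SqlAccountService(AccountService):
    """One implementation; its details stay hidden behind the contract."""

    def __init__(self, rows):
        self._rows = rows  # stand-in for a database

    def get_balance(self, account_id):
        try:
            return self._rows[account_id]
        except KeyError:
            # internal errors surface only as the declared fault
            raise AccountServiceFault(account_id)

svc: AccountService = SqlAccountService({"A-100": 42.0})
assert svc.get_balance("A-100") == 42.0
```

Swapping `SqlAccountService` for any other implementation of `AccountService` leaves every consumer untouched - which is exactly the loose coupling the contract buys.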

This concept of contract works at any of the following levels:
1. sub-system interface (for example, a persistence sub-system)
2. component interface (for example, a remote monitor)
3. layers of architecture (for example, business layer vs. presentation layer)
4. infrastructure service (for example, a messaging service)
5. SOA style business service (for example, a customer account self-service)

Furthermore, the concept works whether the implementation is a local call or a remote component call, as long as the "contract" is honored. Experienced architects also insist on unidirectional contract-based communication even between the layers of the architecture - with communication only being allowed to the very next layer down. The concept is that the more volatile layers interact with the more stable layers' contracts without skipping levels. This level of indirection acts as a check for the entire system, as the volatility of the top-level layers and the communications from those volatile layers are limited to the adjacent layer alone, without affecting multiple aspects of the system when these layers change.

This concept is the key driver of the Model View Controller (MVC) pattern, wherein the presentation layer is allowed to talk to the interface or contract of the controller layer (or façade layer) alone, but not to the interface of the business logic layer or the data logic layer. Also, the façade or controller layer is not allowed to communicate with the client or presentation layer on its own initiative. The contract dictates that it is the client or the presentation layer that is responsible for initiating the communication.
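The unidirectional, adjacent-layer-only rule can be sketched as follows. The layer classes and method names are invented for illustration; the point is that each layer holds a reference only to the layer directly below it:

```python
# Sketch of unidirectional, adjacent-layer-only communication:
# presentation -> controller (facade) -> business. Names are invented.

class BusinessLayer:
    def price(self, sku):
        return {"WIDGET": 9.99}.get(sku)  # stand-in for real pricing logic

class ControllerLayer:
    """Facade: the only contract the presentation layer may use."""

    def __init__(self, business):
        self._business = business          # talks only one layer down

    def quote(self, sku):
        p = self._business.price(sku)
        return f"{sku}: ${p:.2f}" if p is not None else f"{sku}: unavailable"

class PresentationLayer:
    def __init__(self, controller):
        self._controller = controller      # never reaches BusinessLayer

    def render(self, sku):
        return self._controller.quote(sku)

ui = PresentationLayer(ControllerLayer(BusinessLayer()))
assert ui.render("WIDGET") == "WIDGET: $9.99"
```

Because `PresentationLayer` never holds a reference to `BusinessLayer`, a change to the business layer's internals can ripple at most one layer up - which is the stability argument made above.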

Advantages of adhering to the Contract:
A) Implementation details can change without negatively affecting the consumer or the client. Loose coupling facilitated by the contract protects the client or consumer. Also, since the behavior is highly cohesive (all hidden behind the contract in one well-knit codebase), any alteration to business rules or behavioral logic is embedded in this codebase, thus ensuring the completeness of applying the rule/logic change. Without this it is very possible that part of the logic is embedded in the consumer codebase and part of it is placed in the communication or mediation layer (leading to the anti-pattern of low cohesion and high coupling).
B) System integration testing and performance optimizations are easier when there is a known finite set of operations, inputs and outputs that will be allowed by the contract.
C) Understanding the interactions and invocations for the consumer becomes easier due to the known and pre-configured set of operations that are published on the contract. This helps to make the system or application more deterministic.
D) As long as the contract is not broken in making behavioral enhancements new consumers can be entertained without having to create newer versions of the codebase. Of course, this can also mean addition of newer audit and tracking capabilities in compliance with internal or regulatory policies without affecting or "informing" the consumer.
E) Since the consumer is not interacting with multiple internal points of the codebase, the system interactions and resource utilizations per consumer/ client call are more quantifiable and predictable. This makes it easy to scale the system for availability. This contract or interface then becomes the single point of entry for all interactions and is thus the only point that needs to be monitored to assess system resource utilizations. In addition, provisioning of system resources becomes more scientific as all calls of a certain operation take a known amount of time and resources given that the inputs are also quantifiable.

As can be seen, a simple construct such as “Design by Contract” when taken seriously and to its logical conclusion renders a great deal of architectural stability, robustness and extensibility.

As always your comments are welcome.
surekha -

Monday, August 11, 2008

Organizational Issues with SOA

One of the barriers to full realization of SOA potential is shortage of critical skill sets needed to successfully implement SOA initiatives. Let's be honest about the reality of the situation. Development teams typically are made up of outsourcing partners, temporary consultants, and employees. They all have varying degrees of training, skills and motivations when it comes to delivering a solution. These teams are responsible for carrying out the vision, approaches and processes laid out by the EA team. In general, the EA teams do a good job of laying out the target architecture, governance processes, best practice etc. However, the developer community is generally focused on getting things working in the shortest time possible with little regard to making sure the services have the right level of de-coupling and are designed and developed correctly for future re-use.

Having a strong governance structure can help, but relying too much on governance leads to a situation where the governance body itself becomes more of a micromanager than an oversight entity.

In my opinion, the right team structure is one where at least a few key members (preferably in a permanent capacity) have the leadership and communication skills and a full understanding and appreciation of SOA. These members can act as mentors and provide the necessary oversight to make sure services deliver on the promise of business agility.

Ashok Kumar

Thursday, August 07, 2008

Best Practices: Master Data Management

Following are some of the best practices for adopting (note - not implementing) Master Data Management solutions within an Enterprise.

Understand the Business Context (semantics) prior to picking a solution

As per my earlier blog on EA, BPM, SOA and MDM, it is very important to understand the business context, including the semantics of what each of the business units, departments and entities mean when they refer to MDM entities such as Customers or Products. For example, marketing deals with leads, sales with opportunities/accounts, and services with paying customers. Should all these entities be referred to as customers throughout the business process? Or should the master entity be referred to as Organization? A common vocabulary goes hand-in-hand with master data. For example, what does a sign reading "Available" on a building mean? Is the building fully occupied and available for purchase? Or does it mean that the entire building is unoccupied and available for rent? The business context needs to be clearly defined in terms of business units, departments and business processes.

Develop a comprehensive entity model.

Not only should one define the common attributes for the master data, one should also map the entire set of attributes and where they are used. For example, in customer service the customer is used as reference data, whereas in order management it is transactional data.
The outcome is a set of models and documents that can be used by both Business and IT.
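One lightweight way to capture such a mapping is a simple usage table, from which the common attributes fall out automatically. Everything here (the entity, functions and attribute names) is a made-up example, not a prescribed format:

```python
# Hypothetical entity-usage map for a Customer master: for each
# consuming function, record whether the entity is used as reference
# or transactional data, and which attributes it touches.

customer_usage = {
    "customer-service": {"role": "reference",
                         "attributes": ["name", "support_tier"]},
    "order-management": {"role": "transactional",
                         "attributes": ["name", "billing_address",
                                        "credit_limit"]},
}

def attributes_shared_everywhere(usage):
    """Candidate common attributes for the master entity: those that
    every consuming function uses."""
    sets = [set(u["attributes"]) for u in usage.values()]
    return set.intersection(*sets)

assert attributes_shared_everywhere(customer_usage) == {"name"}
```

Even at this toy scale, the intersection makes the "common attributes" discussion concrete, and the `role` field records the reference-vs-transactional distinction per consumer.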

Establish data governance process right up front.

It is important to establish the data governance process right upfront, especially as it may require dedicated resources from all the business units as well as from IT to manage the master data. The approach that worked for us is as follows:

  • The Executive Data Leadership Team meets on a periodic basis (once a month) to establish data policies and standards and to set priorities for the data quality team
  • The Data Stewardship (Quality) Team comprises the business operations people who manage data quality on a day-to-day basis. This business team enforces and ensures data quality across the enterprise.
  • The Enterprise Data Program Team is responsible for developing and managing the programs and business rules in the various technology tools.
Integrate with 3rd party data providers

It is important for enterprises to consider bringing in 3rd party data providers such as D&B and Factiva for a better understanding of the market. For example, on a typical morning between 9:00am and 11:00am:
  • 706 businesses will move
  • 578 businesses will change their phone numbers
  • 60 businesses will change their name
  • 49 businesses will shut down
  • 10 businesses will file bankruptcy
  • 1 business will change ownership
Source: Dun & Bradstreet (2004)

3rd party data from D&B could be used to leverage the legal name as the name of the organization, to feed knowledge management systems with news feeds as well as market statistics by industry, geography and demography, and to map these to existing sales.

Pick the right data standardization and matching engine

This is a key technology that will either make or break the quality of your data. I agree that developing the business rules and configuring the matching tool is more important - however from a technology point of view - I would dedicate substantial amount of time evaluating and testing the tools with existing data before picking a technology. My preference would be to use one of the following matching engines:

Trillium, First Logic (now SAP), IBM or SAS.

In cases where an MDM product is packaged with a different matching engine, I would be tempted to externalize the data quality to one of the above-mentioned data matching engines. Just my preference - the other quality engines may have gotten better. Do your own evaluation.

Expose all MDM functionality as services
Expose all functionality such as data (address) standardization, data matching, update master and propagate master key.
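A toy sketch of such a service facade follows. The class, operations and the matching rule (simple case-insensitive name equality) are all invented placeholders - a real deployment would delegate matching to an engine like the ones named above:

```python
# Illustrative sketch: MDM capabilities (standardize, match, update
# master, return the master key) behind one coarse-grained service.

class MdmService:
    def __init__(self):
        self._master = {}   # master_key -> golden record
        self._next_key = 1

    def standardize(self, record):
        """Placeholder standardization: trim and upper-case values."""
        return {k: v.strip().upper() for k, v in record.items()}

    def match(self, record):
        """Return the master key of a matching record, if any.
        (Placeholder rule; a real engine would do fuzzy matching.)"""
        for key, golden in self._master.items():
            if golden["NAME"] == record["NAME"]:
                return key
        return None

    def update_master(self, record):
        record = self.standardize(record)
        key = self.match(record)
        if key is None:                 # no match: create a new master
            key = self._next_key
            self._next_key += 1
            self._master[key] = record
        return key                      # key propagated to source systems

mdm = MdmService()
k1 = mdm.update_master({"NAME": " Acme Corp "})
k2 = mdm.update_master({"NAME": "acme corp"})
assert k1 == k2  # duplicates resolve to one master key
```

Exposing these four operations as services, rather than burying them inside one application, is what lets every source system share the same golden record and master key.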

As usual please do feel free to drop me a line with your comments and/or feedback.

- Yogish

Saturday, August 02, 2008

Enterprise Architecture, BPM, SOA and Master Data Management (MDM)

One of the best practices for Enterprise Architecture teams is to redo the enterprise road map on a periodic basis. It is typically reviewed and updated during the yearly budgeting cycle, and my preference is to perform this activity every 18 months. The best practice (and the traditional approach) is to first document the as-is, next develop the target or future state (architecture) and finally develop a short term (6 months), mid term (12 months) and long term (18 months) road map. Preferably an actionable road map that ties back to the business initiatives.

It is good to document the as-is (or current reality) across all the domains such as Business Context, Applications, Technology, Organization and Funding. Typically the business context is best understood by identifying and mapping the key business processes at a high level.


This approach not only helps establish a common vocabulary between business and IT by identifying the key business processes, it also helps identify the key enterprise data objects (entities) such as Customers, Contacts, Products and Orders. Based on the priorities of each of the business processes, the next steps would be to drill down into one or all of the business processes as illustrated below.
Once again, it is not necessary to use a Business Process Modeling tool (however, using one would be helpful later); the objective is to clearly identify and document the next level of detail. The next step is to perform the gap analysis on each of the activities as illustrated below.

This approach enables both business and IT to clearly visualize the existing gaps, impact areas and cost estimates, which helps in developing the priorities and the investment plan. In addition, it is also important to identify and illustrate the list of applications/solutions that support a given business process, as well as their perception within Business and IT, as illustrated below.
As we develop the actionable road map to the future state, one thing is obvious: no matter what we implement or adopt, whether it is packaged applications, BPM or SOA, key enterprise data crosses the silos both organizationally and across applications. It is for this reason that there is a critical need for adopting Master Data Management across the enterprise.

It is very important to spend some time understanding and mapping these enterprise data objects in the context of the business process. It would also be very helpful to develop a high-level data model and transaction (CRUD) matrix associated with the business process before initiating the activity of selecting/implementing an MDM solution.
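A CRUD matrix need not be elaborate; even a simple mapping is enough to surface which process creates an entity and is therefore a candidate system of record. The processes and entities below are hypothetical examples.

```python
# Hypothetical CRUD matrix: business process -> entity -> operations performed
crud_matrix = {
    "Order Capture":     {"Customer": "CRU", "Order": "C",  "Product": "R"},
    "Order Fulfillment": {"Order": "RU",     "Product": "RU"},
    "Customer Service":  {"Customer": "RU",  "Order": "RUD"},
}

def masters_of(entity):
    """Processes that Create the entity -- candidates for the system of record."""
    return [p for p, ents in crud_matrix.items() if "C" in ents.get(entity, "")]

print(masters_of("Customer"))  # → ['Order Capture']
```

Reading the matrix the other way (which processes merely Read or Update an entity) immediately shows where the master data must be propagated once an MDM hub is introduced.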

I have seen a lot of examples where companies have embarked on an MDM project without developing the architecture (see my blog on Blueprinting Information Architecture for more details) and not met the desired business outcome. The two other primary reasons for MDM failures are:
  • Failure to develop the data governance model up front involving all the impacted business units
  • Assuming that a packaged application could be modified to be the master data for the enterprise.
To keep this blog short, I plan to blog more about my thoughts on BPM & SOA later; meanwhile, please do feel free to review my CDI-MDM Summit presentation on my experience with MDM/CDI.

As usual please do feel free to drop me a line with your comments and/or feedback.

- Yogish

Tuesday, July 29, 2008

Key learning from Home Entertainment/Automation that can be applied to SOA/SaaS

Unlike the previous generation, where technology innovation was driven by enterprise needs, over the past few years technology innovation such as smart phones, multimedia, game consoles and social networking has been driven by consumers. In short, the consumers have gone digital, whether it is HDTV, Blu-ray, home automation, smart phones, media servers or IMS (IP Multimedia Services). The vendors manufacturing these devices and services understood that unless there are standards adopted by the industry, consumers would not adopt these technologies (especially as they are not cheap). It is for this reason that most of the large vendors (hardware, software, manufacturers, protocol stack providers, etc.) got together to form the Digital Living Network Alliance (DLNA).


Their objective was to resolve the following consumer challenges:
  • Products designed for the home should be easy to install, provide noticeable user value and be affordable
  • Products must interoperate with each other without requiring the consumer to undergo complex setup and configuration for connections between devices
  • Digital home products must interoperate with each other and with existing CE devices such as TVs and stereos.
Doesn't this sound very similar to the current IT Operations challenges?

The above diagram illustrates DLNA's view of the customer's needs (source: DLNA). In order to be vendor neutral and provide the consumer the ability to control any service (yes! they call it services), the DLNA members standardized the technology stack (as shown below - source: DLNA).

The key learning here is that the vendors adopted Peer-to-Peer (P2P) for device discovery, control and media management. For now they have all adopted UPnP, especially as most existing devices at home (desktops, laptops, storage devices, game consoles, IP-based TVs, stereo systems, network hubs, etc.) support UPnP. Some of the vendors, such as Microsoft, support both UPnP and WS-Discovery, and in the long term (once the backward compatibility issues are addressed) the industry may migrate to WS-Discovery.

For those getting ready to purchase TVs, Phones or other Digital Devices, I would recommend you verify that they are DLNA Certified.

The next obvious question is: what does this have to do with SOA/SaaS? Well, why not use this same approach for deploying services? It would make life much easier for IT Operations and potentially eliminate the need for additional ESB hops in the network. Yes! I am back on this topic :). The services-oriented approach is basically a P2P architecture, i.e. a consumer invokes a producer. As most of the large software vendors have made a commitment to adopt Service Component Architecture (SCA) and are developing SCA runtime engines, it would be great if they could adopt either UPnP, WS-Discovery or some other P2P technology. The following diagram illustrates the joining of a new node/service to the P2P network.

Benefits:
Following are the benefits of adopting this approach:
  • Based on the SCA standards, a unique (logical) service name is used both for defining and invoking services. There is no need to know the EPR (physical location) at the time of deployment.
  • Multi-cast availability of a service whenever an instance comes up - dynamic configuration does not require IT Operations or tools (even if they are automated) to change configurations.
  • Service maps (for a predetermined domain/network) are maintained at each node. A complete map could be maintained in the super peer (read up on P2P architecture for more details).
  • Eliminates the need for a Service Registry in production. As each instance of the nodes and services is maintained dynamically by the super peer, there isn't a need to maintain and administer a Service Registry.
  • Eliminates the need for a separate monitoring agent on each node, especially as each instance could update its own service performance details in the P2P map.
  • A universal administration tool could be used to configure one or all the instances at any node and propagate the changes across the network.
  • As the consuming services would know the EPR of the producing service, this eliminates the need for an ESB.
The Newton Component Model (key technologies: OSGi, Jini, SCA) is the only runtime engine I know of that supports both P2P (Jini) and SCA. The Apache Tuscany project does claim to support a JXTA (P2P) binding, but I have not researched it as yet.
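To make the super-peer idea concrete, here is a minimal in-memory sketch (hypothetical names; a real implementation would use multicast announcements and Jini-style leases) of a super peer maintaining the service map as instances come and go:

```python
import time

class SuperPeer:
    """Minimal super-peer sketch: maintains the map of live service instances."""

    def __init__(self):
        self.service_map = {}  # logical service name -> {endpoint: last_heartbeat}

    def announce(self, service_name, endpoint):
        """Called (e.g. via multicast) whenever a new instance comes up."""
        self.service_map.setdefault(service_name, {})[endpoint] = time.time()

    def resolve(self, service_name):
        """Consumer resolves a logical name to a live EPR -- no registry needed."""
        endpoints = self.service_map.get(service_name, {})
        return next(iter(endpoints), None)

    def evict(self, max_age_seconds):
        """Drop instances that have stopped heart-beating."""
        now = time.time()
        for name, eps in self.service_map.items():
            self.service_map[name] = {ep: t for ep, t in eps.items()
                                      if now - t <= max_age_seconds}

peer = SuperPeer()
peer.announce("OrderService", "http://node1:8080/orders")
peer.announce("OrderService", "http://node2:8080/orders")
print(peer.resolve("OrderService"))  # → http://node1:8080/orders
```

Because instances register themselves and are evicted when they stop heart-beating, the map stays current without any manual registry administration - which is exactly the operational benefit claimed in the list above.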

Just my thoughts and as always do feel free to drop me a line with your comments and/or feedback.

- Yogish

Thursday, July 24, 2008

Key Best Practices - REST Assured????

The purpose of this blog is to find out if there is a place for REST in the realm of “Business Services”.

First off, my definition of REST: "Get", "Put", "Post" and "Delete" operations performed on "Resources" identified by URIs, transmitted over HTTP(S).

REST by nature has a very simple service operation set, with the complexity all embedded in the resource URI. The operations that REST allows are NOT business-user-friendly and hence do not really belong in the Service Repository that an end user references to discover business services. To accommodate complex business behavior and to compensate for the finite list of RESTful operations, the resource identifiers have to be fairly complex.
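The point can be illustrated with a small (hypothetical) table mapping business-friendly operation names to RESTful verb/URI pairs; note how the business meaning ends up encoded in the resource identifier rather than in the operation:

```python
# Hypothetical mapping of business-friendly operations onto RESTful verb + URI
# pairs. The business semantics migrate into the resource identifier.
business_to_rest = {
    "createCustomer":      ("POST",   "/customers"),
    "getCustomerByRegion": ("GET",    "/customers?region={region}&status=active"),
    "updateCreditLimit":   ("PUT",    "/customers/{id}/credit-limit"),
    "deactivateCustomer":  ("DELETE", "/customers/{id}"),
}

def to_rest(operation, **params):
    verb, template = business_to_rest[operation]
    return verb, template.format(**params)

print(to_rest("updateCreditLimit", id=42))  # → ('PUT', '/customers/42/credit-limit')
```

A business user browsing a repository would recognize "updateCreditLimit"; they would not recognize a bare PUT against a templated URI - which is the discoverability gap described above.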

SOA, on the other hand, allows a fairly diverse interface definition that resembles the business syntax as closely as it can. The canonical model that is shared is standardized and is not as diverse as the resource URIs of the REST architecture. The canonical models for the most part are XML.

Another key difference lies in the protocol and messaging support for REST. REST is limited to HTTP/HTTPS, which by nature is stateless and has no standardized protocol-level interception model for more enterprise-grade behavior such as guaranteed delivery of messages and asynchronous messaging semantics. SOAP, and the standardized metadata associated with the SOAP envelope, on the other hand, can be interpreted by the Web Services SOAP stack to enable behaviors such as Reliable Messaging, Transaction Management, Addressing, and Notification with a level of interoperability across vendor platforms.

Having said that, it is possible to find a home for RESTful architecture and to leverage one of its strong points in terms of a simplified and standardized interface. REST may be a good architecture model or construct to follow in the service implementation layer or at the service adapter layer. This helps to keep the service implementation the same across any type of service interface definition. In the event there is a need for additional transactional integrity then these behaviors could be taken care of in the service facade or in the service mediation layer while keeping the actual implementation in the simplified RESTful architectural format.

If we are looking at a SOA service operation being exposed to the world and using a RESTful service implementation layer, then we would need to deal with translating the canonical models and parameters of the SOA-style service into URIs. This translation behavior includes extracting the request parameters from the request canonical model and transforming them into verbs and values in the format of a query string, i.e. a URI.

One option is to have the specific URIs needed by the service implementation layer to be translated to the input URIs as specified by the RESTful service implementation layer. Here the service interface layer or the service mediation layer performs the translation of the input parameter into the specific URI and initiates the invocation of the appropriate service implementation layer.

The other option is to have the service interface layer or the service mediation layer translate the input into a generic URI for calling the RESTful implementation layer which in turn does the mapping internally to the required operational version of the URI. This improves the durability of the service interface layer or the service mediation layer. In addition, it provides a mechanism for doing internal operation overloading and polymorphism.
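A sketch of the translation step common to both options: the mediation layer extracts parameters from the canonical (XML) request and flattens them into a query-string URI. The element names and base path below are illustrative assumptions, not a real canonical model.

```python
import xml.etree.ElementTree as ET
from urllib.parse import urlencode

# Hypothetical canonical request as it might arrive at the service interface layer
canonical_request = """
<CustomerSearchRequest>
    <Region>WEST</Region>
    <Status>ACTIVE</Status>
</CustomerSearchRequest>
"""

def canonical_to_uri(xml_text, base="/customers"):
    """Flatten a canonical request's elements into a RESTful query-string URI."""
    root = ET.fromstring(xml_text)
    params = {child.tag.lower(): child.text.strip() for child in root}
    return f"{base}?{urlencode(sorted(params.items()))}"

print(canonical_to_uri(canonical_request))
# → /customers?region=WEST&status=ACTIVE
```

Under the first option this function would emit the specific URI expected by the implementation; under the second it would emit a generic URI and leave the operation-specific mapping to the RESTful layer itself.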

Regardless of which option is chosen, I would think that embedding RESTful services in the implementation layer would be a good option while not exposing painfully abstract behavior to the business user who is used to seeing more of English like business syntax.

Here is a link to another blog of mine on the topic in the context of Service Orientation!

Even though I agree that there is a place for REST as it may simplify (by providing a finite set of operations) there are infrastructural capabilities that do not exist with the REST and without these the business agility piece is going to be hard to deliver. Here are some additional thoughts on the topic that an enterprise architect may have to resolve prior to embarking on a RESTful implementation.

  1. Identify which layer deals with the interpretation of the get(Resource) call. Is it the service provider (SP) interface or is it the service implementation layer?
  2. Would the SP be dealing with versioning of the Resource thereby creating a grand facade that does internal resource mapping via a URL redirect style substitution? This would be one way of protecting the service consumer (SC) and to accomplish backward compatibility.
  3. Identify how the SC gets the handle or URL to the "right" resource. How does it "know" what to expect from the Resource and how does it "know" to express its semantic expectations for the SP Resource that is being called upon? Does it do a "Get(SearchByCriteriaResource)" call with semantics embedded in the Request Resource?
  4. Is the get(RightResource) another call and somehow the metadata held in the response Resource is expected to have the "right metadata" for each of the provider Resources (that meet the SC search criteria) in order to help the SC choose the right SP? If this is the case, then following this invocation would be a secondary SC to SP interaction to fetch the chosen SP Resource.
  5. Identify if there would be a standardized URI with search criteria specification. This might be good to define up-front to enable support of Jini-like or SaaS-like interactions.
  6. Identify and define how one would deal with asynchronous interactions. Would it be via the use of "ack/no-ack post notifications" and "post response Resource" calls? How would this be handled? Would the SC have to provide a pre-set "call back" Resource identifier to which the SP would reply?
  7. Would existing messaging infrastructures still work, as most of them have proprietary protocols or else support JMS/RMI?
  8. Identify a URI to which the two-part synchronous post HTTP call can be brokered. This would enable pseudo-asynchronous interaction semantics. Is the centralized broker URI all that is needed to broker the two-part synch call for faking the asynch interaction semantics, especially if the architecture does not have to deal with message delivery guarantees? I assume "post" might have a void return type communication model to enable releasing of the calling address space resources?
  9. Would one have to leverage any "Registry", ESB-like middleware or "service grid"-like infrastructures? If so, how would resource proxying work? Would the proxy Resources "forward" their calls to the right Resource?
  10. Would this mean that the middleware and mediation layer products need to know how to deal with this type of interaction semantic, where the "Registries", ESBs and "service grids/service marketplaces" are all metadata-driven Resource proxies with built-in support for REST - as opposed to service proxies, as would be the case in SOA-style architectural models?

Thanks in advance for your feedback.

surekha -

Tuesday, July 22, 2008

Mashup, WOA, SOA, SaaS, REST and the kitchen sink

These days there is a lot of jargon thrown around, and the following is my attempt to make sense of it all and put it together in simple terms. A couple of years back some of us, the early adopters of SOA, put together the Enterprise SOA Maturity Model (Presentation). The maturity model for IT organizations (in simple terms) consisted of three levels, starting with developing web applications, followed by composite (aggregated) applications and finally maturing to end-to-end (automated) business processes.

It is great to observe that some in the community are very strong advocates for Web-Oriented Architecture (WOA) as a first step towards adopting SOA. This basically validates our initial assumption that business will only fund those projects that demonstrate value, and providing web-based solutions is the easiest way to get their buy-in. For those not familiar with WOA, you may want to read this blog on What is WOA - The Future of Services-Oriented Architecture. What WOA does is simplify the architecture and approach - however, there is still a need to ensure consistent architecture and governance across the enterprise.

The above diagram illustrates the architecture approach for a WOA-based sample application. To keep it simple, this sample application integrates with Identity Management, Stock Information and a Business Process. The RESTful approach makes it simpler and easier. However, the deployment model can very quickly become pretty complex, especially with the introduction of an ESB as a mediation layer. This is how I have deployed in the past and may adopt the model again in the near future. However, as the number of services increases, it increases the burden on the Architects as well as IT Operations to manage not only the complexity of service dependencies, but also the complexity of multiple instances of the same services in the production environment. Couple that with the introduction of an ESB as a mediation layer and the configuration of the load balancers and firewalls between the various sub-zones in the network, and this model could become unmanageable (even with an SOA Repository + CMDB) very quickly.

Alternate Approach:
Last year, while working with my colleagues, we came across Tuple Spaces (an implementation of the associative memory paradigm for parallel/distributed computing), the theoretical underpinning of the Linda language developed by David Gelernter and Nicholas Carriero at Yale University. The original Linda model requires four operations that individual workers perform on the tuples and the tuplespace:
  • in atomically reads and removes—consumes—a tuple from tuplespace
  • rd non-destructively reads a tuple from tuplespace
  • out produces a tuple, writing it into tuplespace
  • eval creates new processes to evaluate tuples, writing the result into tuplespace
This simple approach of handling tuples (objects) in memory should enable us to integrate the enterprise using a simple (limited) set of operations, similar to the RESTful style.




The above diagram illustrates a simpler approach to integrating the enterprise (or enabling composite applications). From a business context, the business always deals with entities such as Customer, Order, User, Inventory and Product. This approach enables the IT project teams to understand and define the business context in the business's language.


How does this work?

There are various technologies available; for this example, let's take JavaSpaces. Every object has its own unique space. The operations that can be performed on a space are limited to write() - create/update the space, read() - read the space without removal, take() - read and delete from the space, and notify() - send an event whenever a space changes (write() or take()). Of course, there are additional commands, and some vendors have provided a SQL interface to these objects.
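The space semantics can be sketched in a few lines. The toy class below is an in-memory stand-in only; real JavaSpaces adds distribution, leases and transactions, all omitted here.

```python
class Space:
    """Toy in-memory sketch of JavaSpaces-style semantics (no distribution,
    leases or transactions, which a real JavaSpaces implementation provides)."""

    def __init__(self):
        self._entries = []
        self._listeners = []

    def write(self, entry):
        """Create/update: put an entry into the space and fire notifications."""
        self._entries.append(entry)
        self._fire("write", entry)

    def read(self, template):
        """Non-destructive read of the first entry matching the template fields."""
        return next((e for e in self._entries if self._matches(e, template)), None)

    def take(self, template):
        """Read and remove a matching entry."""
        entry = self.read(template)
        if entry is not None:
            self._entries.remove(entry)
            self._fire("take", entry)
        return entry

    def notify(self, listener):
        """Register a callback fired on every write() or take()."""
        self._listeners.append(listener)

    @staticmethod
    def _matches(entry, template):
        return all(entry.get(k) == v for k, v in template.items())

    def _fire(self, op, entry):
        for listener in self._listeners:
            listener(op, entry)

space = Space()
events = []
space.notify(lambda op, e: events.append(op))
space.write({"type": "Order", "id": 1, "status": "NEW"})
order = space.take({"type": "Order", "status": "NEW"})
print(order["id"], events)  # → 1 ['write', 'take']
```

The notify() hook is what makes the BAM scenario discussed below possible: any interested party can observe state changes without the producing application knowing about it.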

Impact to IT Operations
As organizations move to the object (space) model, it becomes easier for IT Operations to manage multiple instances of the distributed (in-memory) data grid, eliminating the need to know and understand the service dependency matrix. This approach does not eliminate the need for an ESB, but it does limit the need for multiple (logical) hops and reduces the number of (proxy) services that need to be maintained and managed by IT (development and operations).

Impact to BPM
Business is very much interested in knowing, preferably in real time, the status of their business. Unfortunately, the current BPM products can provide BAM capabilities only for those business processes that run on their own instance. None of the existing products has the capacity to provide end-to-end near real-time BAM capability.

This approach enables IT to provide business monitoring and management functionality by leveraging the notify() capability of JavaSpaces.

In my opinion, in the long term the BPM vendors shall be providing the modeling and simulation capability and the Business Process Execution shall be powered by distributed (in-memory) object infrastructure.


Impact to SaaS
Today most of the SaaS platforms (also referred to as PaaS) are object driven. The objects and their relationships drive not only the user interface but also the Business Process and Rules management. They are already one step ahead of the rest of the industry. However, most SaaS infrastructure and platform teams spend a lot of time and effort dealing with multi-tenancy.

Distributed object technologies provide the capability to map the objects (spaces) to data sources (databases, JCA, WS, files, etc.), so one could potentially implement a multi-tenant SaaS solution powered by an existing packaged application at the back end. Something that the existing SaaS companies should not ignore.

Business State Machines
As discussed previously, business deals in the context of entities (or business objects), and their decision-making criteria are based on something happening (events). A combination of Event Servers and Business State Machines may be sufficient to provide end-to-end business process management (run-time).

Impact to Master Data Management (MDM)
In my opinion, MDM is a must-have solution for any enterprise that is interested in managing and monitoring their end-to-end business processes, independent of whether they adopt SOA or not. As MDM solutions deal with the most common business entities (objects) such as customers, contacts, orders and products, the distributed (space) object model fits in very well. Address standardization, matching, reading the entire customer record and performing real-time aggregation (analytics) become much easier to develop and manage. An approach that existing MDM vendors should look into. While most of the MDM solutions provided by vendors are easier to implement (compared to 5 years back, when we had to build it all within IT), the biggest challenge with these solutions is defining and maintaining the Master Data in a business context.


Summary:
Even though the traditional approach scales to meet large enterprise needs, it is too complex and difficult to provide true agility. The distributed space concept has the ability to facilitate business agility through pluggable modules (based on event/state change notifications) and to map the objects back to a source system. This not only changes the game but also takes the industry one step closer to the reality of cloud computing.

Just my thoughts and as always do feel free to drop me a line with your comments and/or feedback.

- Yogish

Sunday, July 13, 2008

3 Ps of Strategic IT

While working at The Coca-Cola Company we were introduced at orientation to the 3 Ps - People, Products & Price - which translated into something as follows:

"we need the right people to produce the right product to sell at the right price"

I would agree that this is true even today for any enterprise. In my opinion, the following are the 3 Ps key to a strategic IT.

People:
As always, people are the key assets of any organization, especially the ones that make a difference. This is true not only for folks in leadership positions but also for the developers, system administrators and administrative assistants, as well as the security guard at the data center. It is important to identify the exceptional and key people within an enterprise and do whatever it takes (within reason) to keep them.

Platform:
Even though business keeps asking for solutions, which could be packaged or custom applications, the platform on which they are developed is key. In my experience, even if we adopt a packaged solution for a specific vertical, the customization required to tailor it to the company's specific needs is still very expensive - and as for upgrades, forget about it. Might as well do a brand new implementation.

It is heartening to see that all major solution vendors have been focusing on migrating their packaged applications to an open standards and tools based platform - which will make customization easier as well as future-proof upgrades.

Process:
When I refer to process, it is not the business process (which is also key) but the process put in place to enable change. Due to the fast-changing business environment, businesses will need to transform themselves much more rapidly or risk going out of business. This requires IT to help facilitate innovation - a key process that needs to be put in place and made known to the entire organization. And just like any other business process, this innovation/transformation process should also be reviewed on a periodic basis.

Just my thoughts and as always do feel free to drop me a line with your comments and/or feedback.

- Yogish Pai

Friday, July 11, 2008

Business Agility and Business Driven EA Domain Models

For my presentation on the topic of SOA and Business Agility at the IC - local chapter event last month, I had slightly refined the Business Agility Domain Model. The pdf version of the slides on the domain model is available here and is based on my original blog on Defining and Measuring Business Agility. I have been planning for a long time to develop a spreadsheet that goes along with it - maybe someday I shall get to it.

Over the last few months I have been helping my customers understand the role and importance of Enterprise Architecture and came up with this EA domain model (also referred to as the "circle of happiness" by one of my potential customers :) ).


The Business Driven Enterprise Architecture consists of the following domains:
  • Business Architecture (or Business Design)
  • Competency Center -Check out this SOA Podcast on SOA COE by Melvin Geer published by the SOA Consortium.
  • Enterprise Architecture - check out all my blogs on this topic
  • Governance - Corporate, IT, Enterprise Architecture, SOA and so on (whatever makes sense)
  • Life cycle management includes all life cycles such as Program Management, Project (application) Life cycle, SOA Life cycle and Services Life cycle
  • Finally - Environment Industrialization. We came up with this term at BEA-IT to ensure consistent systems environments throughout the application life cycle. The systems environment included networks, firewalls, OS (versions and patches), servers, data centers and disaster recovery sites for all environments from development to production.
Hope this was helpful, and as always do feel free to drop me a line with your comments and/or feedback.

- Yogish

Thursday, July 10, 2008

SaaS requirements for Large Enterprises

Thanks to Annie once again for forwarding me the article on the SaaS star who left SAP for Salesforce.com. Executives moving to competitors is nothing new; it was not long back that most CEOs of large software companies in Silicon Valley were Oracle alumni. As Larry could not retain them, he went and bought them :). Sorry - I digress.

While reading the article - the following quote caught my eye.

“SAP doesn’t have a SaaS strategy,” he (Steve Lucas) told me. “They don’t have a single piece of paper that states what their SaaS strategy is.”


I don't know Steve, but if he had already made this statement to the executives of SAP while he was there, then kudos to him; otherwise this is Salesforce.com PR at work (and very good at that).

Don't get me wrong - I like Salesforce.com; they are a great company and the market leader in the SaaS environment. A couple of months back I did develop an "Application Portfolio and Time Reporting" prototype - it took me a couple of days to build, it was very easy to develop (no code required) and I shall gladly share it with anyone who is interested.

The focus on AppExchange (or Platform-as-a-Service) by SalesForce.com is pioneering and a great endeavor. However, in my opinion, most SaaS solution providers (and platforms) are focused on and excellent at delivering point solutions - such as Sales Force Automation (CRM like SalesForce.com), Financials, HR, Inventory Management (CRM and ERP like NetSuite), Payroll, and Financial and Tax Reporting (such as Intuit) - targeting small and medium businesses. In addition, there are most probably hundreds of other start-ups currently in the process of developing SaaS platforms (Coghead, etc., and I expect to hear from the rest of them). SAP has very clearly stated that their SaaS solution, Business ByDesign, is targeted at small and medium businesses (for now). As for Oracle, Larry Ellison already has a stake in SalesForce.com and NetSuite and will acquire one or both of them after they demonstrate that they can scale out to large enterprises (or $$$$ - whichever comes first).

Following are some of the high-level SaaS (platform) requirements (on paper/blog :) ), in my opinion, for large enterprises. These requirements are focused primarily on the platform, rather than the application/solution.

  • The platform should meet all the technical requirements such as availability, reliability, security, standards, etc. (most SaaS solutions do support this)
  • Decouple the platform into User Interaction (portal), Business Logic and Domain (data) layers. Not supported by any single SaaS provider (that I know of).
  • User Interaction - only the presentation is provided and hosted by the SaaS provider and all the business logic and data is stored somewhere else. Sample use case could be: User Interaction hosted by SalesForce.com, Business Logic hosted by SAP (Business By Design) and Data hosted by Oracle on-demand.
  • Use standard development tools such as Eclipse, with code deployable both on-site and on the SaaS platform. Today, in most cases it is not possible to reuse code between SaaS and in-house development. It would be great if the same code could be deployed both on the SaaS platform and on-premise infrastructure.
  • Unrestricted near real-time data services will be key to meeting the needs of large enterprise.
Let me try and break this down in more detail:

User Interaction:
  • Support multi-channels - supported by most of the current providers
  • On-line/off-line capability - none of them do so as yet. I did like the Alchemy product initiated by BEA in 2004 - which was sadly discontinued after Adam Bosworth left BEA. There needs to be the capability to develop a solution once that supports multi-channel as well as off-line/on-line use.
  • Develop using standard tools and deploy it on SaaS or in-house infrastructure. Major SaaS vendors do support this for Perl, PHP or Ruby but not for Java or .Net (that I know of) - smaller ones do support Java, Flex, AJAX, etc.
Business Logic
  • Based on standards such as BPEL, XPDL or native code (Java, Perl, PHP, Ruby or .Net). Most major SaaS vendors do not as yet support standards but do support native code. In addition, developers can use their on-line (graphical) developer tools to model the business process/logic. However, the code is not deployable within the enterprise.
  • Ability to decouple the business rules/policies from the business process. Supported by developing custom code - expect this to mature over the next two years. The Business Rules engine (such as ILog or Drools) could be deployed within the enterprise.
Domain (data) Layer
  • To me this is the data grid that supports distributed data (which also includes the meta-data that drives the entire solution)
  • Developer should be able to model any simple or composite object and deploy this on the grid.
  • Based on the operations (CRUD) the data layer shall perform the operation on the appropriate data source
  • Ability to support events and alerts to trigger action (support Event Driven Architecture) which could also be used as a Business State Machine.
  • Provide real-time performance - a key requirement for next-generation BI solutions
  • Other than SalesForce.com and Coghead - I have not come across other players that provide this capability. Their solutions are still transactional and need to be extended to provide near real-time performance. It would be ideal for the SaaS providers to leverage tools like Oracle Coherence (Tangosol), GigaSpaces, JavaSpaces or the SAP in-memory database in combination with EII tools such as MetaMatrix, Composite or AquaLogic DSP to provide the foundation of this layer.
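The bullets above can be sketched as a small Java example. This is an illustrative sketch only, with invented names and an in-memory stand-in for the grid: the caller issues CRUD operations against logical object types, the layer routes each operation to the data source registered for that type, and subscribers are notified so events can drive downstream action:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.function.Consumer;

// Abstraction over a grid node, RDBMS adapter, or remote service.
interface DataSource {
    void put(String key, Map<String, Object> record);
    Map<String, Object> get(String key);
}

// In-memory stand-in so the example stays self-contained.
class InMemorySource implements DataSource {
    private final Map<String, Map<String, Object>> store = new HashMap<>();
    public void put(String key, Map<String, Object> record) { store.put(key, record); }
    public Map<String, Object> get(String key) { return store.get(key); }
}

class DataLayer {
    private final Map<String, DataSource> sourcesByType = new HashMap<>();
    private final List<Consumer<String>> listeners = new ArrayList<>();

    void register(String objectType, DataSource source) { sourcesByType.put(objectType, source); }
    void subscribe(Consumer<String> listener) { listeners.add(listener); }

    // CRUD operations are routed by object type; each state change
    // publishes an event for Event Driven Architecture consumers.
    void create(String objectType, String key, Map<String, Object> record) {
        sourcesByType.get(objectType).put(key, record);
        listeners.forEach(l -> l.accept("CREATED " + objectType + "/" + key));
    }

    Map<String, Object> read(String objectType, String key) {
        return sourcesByType.get(objectType).get(key);
    }
}

public class DataLayerDemo {
    public static void main(String[] args) {
        DataLayer layer = new DataLayer();
        layer.register("Customer", new InMemorySource());
        layer.subscribe(event -> System.out.println("alert: " + event));
        layer.create("Customer", "42", Map.of("name", "Acme"));
        System.out.println(layer.read("Customer", "42").get("name"));
    }
}
```

In a real grid the `DataSource` implementations would front Coherence, GigaSpaces or an EII tool, but the routing-plus-events shape of the layer stays the same.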
Over the next few years - I expect large enterprises to start adopting point SaaS solutions, with IT organizations primarily focusing on integrating the enterprise. I hope by that time we shall have a standards-based platform that needs one and only one development tool and gives IT organizations the option of deploying the solution either within the enterprise or on the platform of a SaaS provider.

Maybe - what we really need is a Services-Oriented Operating System :).
As always, do feel free to drop me a line with your comments and/or feedback.

- Yogish

Thursday, July 03, 2008

Best Practice: 5 things that the CIO should focus on

Following are the 5 things that the CIO should focus on in the current environment.

1. Focus on the Demand Side of business
As per my earlier video blog on the Changing Role of C-level Executives - CIOs need to focus on the two sides of the business: the demand side and the supply side. Today most CIOs are focusing on the supply side, i.e. helping reduce cost, timely delivery of projects, outsourcing and off-shoring and so on. Yes! CIOs do also focus on the demand side - but it is not necessarily the primary focus for the majority of them.

The demand side basically means focusing on the business demands for new or modified solutions, widely known as Business Agility (read my take on Defining and Measuring Business Agility). CIOs would do well to establish the Business Architecture discipline within the Enterprise Architecture team to help understand and prioritize the demand side of the business.

2. Establish a strong centralized function
Agreed, LOB-IT is essential to provide adequate services at the business level, but there is also a need for a strong centralized function. The two primary centralized functions would be:

Program Management Office: The leadership of the PMO should be close (preferably at the same location) to the CIO, and the others close (at the same location) to the LOB-IT leadership. I have seen multiple examples of remote and distributed PMO organizations that have not worked very well. It is not because of individual capability - in my opinion, there need to be face-to-face discussions between the PMO, the LOB-IT leadership team and the LOB business operations teams. The PMO should work in partnership with the Enterprise Architecture team to help prioritize and optimize at the enterprise level (instead of at the LOB level).

Enterprise Architecture: This is a key function, and once again - like the PMO - the EA team, especially the Business Architecture team members, needs to be close to the business. One more point to note: this is the only team in IT that is paid and expected to look at the long-term picture (not just the current FY or quarter) as well as influence the optimization of the enterprise.

Other centralized functions would be IT Operations, Application Development (works for some organizations) and QA/RM teams.

3. Outsourcing
As Ashok pointed out in his blog on SOA and Outsourcing - despite many of its shortcomings, outsourcing is here to stay. Sometimes, due to resistance from direct reports and other organizational dynamics, CIOs may defer outsourcing (and off-shoring) some of the non-core functions such as Data Center Management, Help Desk and Application Support.

I agree that outsourcing is a difficult decision but a required one. Make sure that you have adequate controls and processes in place, and one key learning (based on my experience): for the first few months the service may deteriorate or take longer - until the outsourced/off-shore team comes up to speed.

4. Enable Innovation
This is a key ingredient for success - not just within IT but also within the business. One of my observations is that some/most IT organizations have a tendency to throw cold water on solutions put together by tech-savvy business folks. The typical pushback from IT consists of statements like:
  • we were not involved so we cannot support this
  • we have no idea what it does and its impact on the data center - so we cannot take ownership of it
  • we could redo the entire solution the right way and it would cost $$$$
  • ....
The CIO needs to step in and make sure that IT organizations do accept and support business solutions - even if they were built by tech-savvy business folks.

5. People
The primary assets of any organization are its people. The CIO needs to clearly define and communicate the IT principles, identify primary and secondary (those that could potentially be outsourced) organizations, as well as provide adequate training.

These are the 5 things that the CIO should focus on for establishing a Strategic IT.

As always, your comments are welcome and please feel free to drop me a line.

Yogish Pai

Tuesday, July 01, 2008

Analysis on Oracle's BEA integration strategy

Oracle presented its BEA product integration strategy earlier this morning, and following is my take on their approach. Their market message about Fusion, Applications and Database did not change - this was more about integrating the product stacks. On the whole this is good news for existing Oracle customers who have standardized on Applications and Fusion, but not necessarily for the ones who standardized on BEA's stack (especially Portal and AquaLogic).

Following is how Oracle categorized the combined product stacks:

  • Strategic Products: IMO these are the primary products where they shall continue investing - great news for those who have standardized on these products
  • Continue and Converge Products: IMO - they will increase the support price and shall keep it going for the next 9 years. Customers who have standardized on these products WILL have to migrate to a different product within 9 years.
  • Maintenance Products - EOL products
Oracle classified their products into the following categories, and I shall comment on each of them:

Development tools:

JDeveloper will be their primary development tool, and for those who have invested in IBM, TIBCO, SAP or other products - developers will need to develop composite solutions using two environments (JDeveloper and Eclipse). Yes! Oracle will still support Eclipse, but not at a level which IT organizations would prefer - especially the ones that have not standardized exclusively on Oracle.

One important point to note - they will no longer support Beehive. Beehive is the set of proprietary controls from BEA that was put into open source. This is key for all development on WebLogic Portal and WebLogic Integration. If Beehive is EOL - this also means the EOL of WLP and WLI (as one cannot develop solutions without controls on these two products).

Application Server:
This is great news for both BEA and Oracle customers - WebLogic will be the standard app server with Oracle's SCA runtime support. Plus - this also beefs up the WLS development team, which was trimmed down substantially by BEA to create AquaLogic. In addition, their JRockit (JVM) provides a differentiation that others cannot match as yet, both for near real-time performance and for virtualization.


Services Oriented Architecture:
Oracle Data Integrator will still be their primary tool for data integration. No mention of AquaLogic Data Services Platform (looks like it is EOL - without any formal announcement).

Convergence of AquaLogic Service Bus and Oracle's Service Bus into one single product. This is great: an ESB with excellent support for WS, JBI, SCA, etc. and the ability to expand. IMO - this integration will take over 2 years, and standardizing on either one will do for now.

The other Oracle products are great and are capabilities that were missing in the BEA stack - these products are complementary.

BEA WLI - continued support but no further development. Not very good news for the large telecom and financial industry customers who have standardized on WLI. They will need to find an alternative - some other EAI or messaging platform like TIBCO, MQ, webMethods or SAP XI. ActiveMQ (open source) may also be an option for those who leveraged WLI for messaging.


Business Process Management:

It is great to see that Oracle has kept ALBPM - I was concerned they may not continue it for long. However, there is a lot of work to be done in ALBPM to integrate it with Oracle Web Center (especially as they are no longer going to continue with ALUI) for human workflow. This could leave the door open for the competition (TIBCO, Vitria, SAP, IBM or Microsoft) to walk away with their user interaction business. In addition, ALBPM will also need to fortify its support for BPEL as well as transaction management. With additional resources and expertise from Oracle - ALBPM is expected to continue being a market leader.

Overall - this is a step in the right direction for Oracle.

Enterprise 2.0 Portal:
In my opinion - this is where Oracle got it wrong. A lot of large customers have standardized on WLP and/or ALUI. Oracle putting these in maintenance mode will force ALL BEA customers to migrate to another platform within the next five years. Most customers shall stop new development on the BEA Portal products, which leaves the door open for their competitors. In addition, by losing WLP they just lost integration with other content management solutions like Documentum, Interwoven, FileNet, etc. Agreed, they still have their own content management solution (with which they will have tight integration), and maybe their intention is to sell the entire stack to the customer.

Customers who have standardized on Oracle apps will most probably go with Oracle Web Center and those who have standardized on SAP applications will most probably standardize on SAP Portal. As for the rest of the customers - it is open season. The race will still be between IBM, Oracle, Microsoft and Open Source (Web 2.0 products).

Good to see that Ensemble and Pathways are still primary products for Oracle.

Identity Management:
Now they do have a complete stack, and I am still not sure whether customers will standardize on Oracle's stack. Agreed, they may be one of the market leaders for security, but personally I may still prefer a combination of CA and/or RSA products.

System Management:
Again, now they have a complete story, but they are not market leaders in any of these products. Guess Oracle may have to acquire BMC to complete the stack :). Alternately, they could just go after CA and get both management and security at the same time :).

SOA Governance:
Glad to see that they kept ALER intact. Not sure why they still OEM the Service Registry, especially as all they need to do is provide UDDI v3 support in ALER and they could own the entire stack.

One area that Oracle will have to address over the next year or two is IT Governance / Application Portfolio Management. Currently ALER does integrate with the CA and HP IT Governance tools - however, at some point Oracle will want to own this application too (another acquisition :) ).

Service Delivery Platform:
This is very good news for the telecommunications industry. The combined portfolio is very attractive and something that will get the industry excited. The biggest challenge that they will most probably face is the lack of qualified resources in the market. However, knowing Oracle - they will overcome this problem with aggressive training and education programs.



Overall - this is good for Oracle and for most of the customers. However, it will be challenging for those BEA customers who had standardized on WLP and/or ALUI (Plumtree).

Disclosure: I was employed by Oracle between 1996 and 2000 and by BEA between 2002 and 2007. I am currently an independent consultant and am neither on contract with nor on the payroll of any of their competitors. This is purely my opinion based on my experience in the industry.


Your comments are always welcome, and do feel free to drop me a line.

Yogish Pai

Tuesday, June 24, 2008

SOA and Outsourcing

Despite many of its shortcomings, outsourcing is here to stay. Businesses are addicted to outsourcing, and outsourcing is viewed as low risk (we have someone to pass the buck or blame if things don't go right). IT departments have come to view outsourcing as a normal mechanism by which costs can be lowered. While this is true most of the time, sometimes uncontrolled outsourcing can limit IT's ability to take advantage of emerging trends such as SOA. Let's see how things can get ugly fast. In a typical data center outsourcing scenario, the outsourcer will probably assign a pool of resources to manage the infrastructure, with the assumption that individual familiarity with the environment is irrelevant as long as you have robust processes (read: bureaucracy) in place. It works in theory, but the reality is that in a shared services environment you simply cannot assign just any systems administrator to troubleshoot problems. Familiarity with the environment is essential, or else you risk impacting many applications. At least at this juncture, most outsourcers are good at monitoring and troubleshooting applications, but not services.

Those of you in IT management who are negotiating contracts with outsourcers must make sure there is enough flexibility in the contract to accommodate appropriate monitoring and troubleshooting in a shared services environment. The other option is to have your employees fill the gaps to make sure your SOA infrastructure is operating smoothly.

Ashok Kumar

IT Engagement Model

One of the key ingredients for success is clearly defining the roles and responsibilities within IT. There are multiple stakeholders in IT, with each doing their best to provide the highest level of support to the business. Most of the time this results in people stepping over each other - especially as there generally is not a clear definition of everyone's tasks. Most project failures are due to the confusion in the definition of the roles between the PMO, the Project Manager and the Enterprise Architects, resulting in overlapping responsibilities and a lack of decision making.

It is important to do this in the context of the Services Life Cycle, and I did publish a short presentation on this topic a couple of months back. This presentation is a summary of the SOA Practitioners Guide Part 3: Introduction to Services Life Cycle, with the addition of the IT Engagement Model slide (14) shown below.

This is the RACI (Responsible, Accountable, Consulted, Informed) slide listing all the roles and responsibilities of the various actors across the services lifecycle. It is pretty generic in nature and could also be applied to non-SOA lifecycles too.
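To make the idea concrete, a RACI matrix can be captured as a simple data structure that teams can query and keep under version control. The sketch below is illustrative only - the phases and role assignments are invented for the example and are not taken from the published slide:

```java
import java.util.LinkedHashMap;
import java.util.Map;

// The four RACI assignment types.
enum Raci { RESPONSIBLE, ACCOUNTABLE, CONSULTED, INFORMED }

public class RaciMatrix {
    // phase -> (role -> assignment); LinkedHashMap preserves lifecycle order.
    private final Map<String, Map<String, Raci>> matrix = new LinkedHashMap<>();

    void assign(String phase, String role, Raci raci) {
        matrix.computeIfAbsent(phase, p -> new LinkedHashMap<>()).put(role, raci);
    }

    Raci lookup(String phase, String role) {
        return matrix.getOrDefault(phase, Map.of()).get(role);
    }

    public static void main(String[] args) {
        RaciMatrix m = new RaciMatrix();
        // Example assignments only - a real matrix would cover every
        // lifecycle phase and every actor.
        m.assign("Service Design", "Enterprise Architect", Raci.ACCOUNTABLE);
        m.assign("Service Design", "Project Manager", Raci.CONSULTED);
        m.assign("Service Delivery", "PMO", Raci.ACCOUNTABLE);
        System.out.println(m.lookup("Service Design", "Enterprise Architect"));
    }
}
```

Encoding the matrix as data makes it easy to check that every phase has exactly one Accountable role, which is the usual RACI sanity check.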

Your comments are always welcome, and do feel free to drop me a line.

- Yogish Pai