Tuesday, July 29, 2008

Key learning from Home Entertainment/Automation that can be applied to SOA/SaaS

Unlike the previous generation, where technology innovation was driven by enterprise needs, over the past few years technology innovation - smart phones, multimedia, game consoles and social networking - has been driven by consumers. In short, consumers have gone digital, whether it is HDTV, Blu-ray, home automation, smart phones, media servers or IMS (IP Multimedia Subsystem). The vendors manufacturing these devices and services understood that unless standards were adopted by the industry, consumers would not adopt these technologies (especially as they are not cheap). It is for this reason that most of the large vendors (hardware, software, manufacturers, protocol stack providers, etc.) got together to form the Digital Living Network Alliance (DLNA).


Their objective was to resolve the following consumer challenges:
  • Products designed for the home should be easy to install, provide noticeable user value and be affordable
  • Products must interoperate with each other without requiring the consumer to undergo complex setup and configuration to connect devices
  • Digital home products must interoperate with each other and with existing CE devices such as TVs and stereos.
Doesn't this sound very similar to the current IT Operations challenges?

The above diagram illustrates DLNA's view of the customer's needs (source: DLNA). In order to be vendor neutral and give the consumer the ability to control any service (yes, they call them services), the DLNA members standardized the technology stack (as shown below - source: DLNA).

The key learning here is that the vendors adopted Peer-to-Peer (P2P) for device discovery, control and media management. For now they have all adopted UPnP, especially as most existing devices at home (desktops, laptops, storage devices, game consoles, IP-based TVs, stereo systems, network hubs, etc.) support UPnP. Some vendors, such as Microsoft, support both UPnP and WS-Discovery, and in the long term (once the backward compatibility issues are addressed) the industry may migrate to WS-Discovery.

For those getting ready to purchase TVs, Phones or other Digital Devices, I would recommend you verify that they are DLNA Certified.

The next obvious question is: what does this have to do with SOA/SaaS? Well, why not use the same approach for deploying services? It would make life much easier for IT Operations and could potentially eliminate the need for additional ESB hops in the network. Yes, I am back on this topic :). The services-oriented approach is basically a P2P architecture, i.e. a consumer invokes a producer. As most of the large software vendors have committed to adopting Service Component Architecture (SCA) and are developing SCA runtime engines, it would be great if they could also adopt UPnP, WS-Discovery or some other P2P technology. The following diagram illustrates a new node/service(s) joining the P2P network.
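
To make the idea concrete, here is a minimal sketch (plain JDK sockets, no UPnP or WS-Discovery stack) of how a service instance might announce itself on a multicast group when it starts up, so that peers and the super peer can update their service maps. The group address, port and message format are invented for illustration only.

    import java.net.DatagramPacket;
    import java.net.InetAddress;
    import java.net.MulticastSocket;
    import java.nio.charset.StandardCharsets;

    // Hypothetical example: a service instance announces its logical name and
    // endpoint (EPR) on a multicast group so that peers/super-peers can update
    // their service maps. A real deployment would use UPnP (SSDP) or WS-Discovery.
    public class ServiceAnnouncer {
        private static final String GROUP = "239.255.255.250"; // assumed group address
        private static final int PORT = 4446;                  // assumed port

        public static void main(String[] args) throws Exception {
            String announcement =
                "SERVICE-AVAILABLE;name=urn:example:OrderService;epr=http://host-a:8080/orders";
            byte[] payload = announcement.getBytes(StandardCharsets.UTF_8);
            try (MulticastSocket socket = new MulticastSocket()) {
                DatagramPacket packet = new DatagramPacket(
                        payload, payload.length, InetAddress.getByName(GROUP), PORT);
                socket.send(packet); // peers listening on the group pick this up and update their maps
            }
        }
    }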

Benefits:
Following are the benefits of adopting this approach:
  • Based on the SCA standard, each service has a unique (logical) name used both for definition and invocation, so there is no need to know the EPR (physical location) at deployment time.
  • A service's availability is multicast whenever an instance comes up - this dynamic configuration does not require IT Operations or tools (even automated ones) to change configurations.
  • Service maps (for a predetermined domain/network) are maintained at each node. A complete map could be maintained in the super peer (read up on P2P architecture for more details).
  • Eliminates the need for a service registry in production. As each instance of a node and its services is tracked dynamically by the super peer, there is no need to maintain and administer a separate service registry.
  • Eliminates the need for a separate monitoring agent on each node, especially as each instance could update its own service performance details in the P2P map.
  • A universal administration tool could be used to configure one or all instances at any node and propagate the changes across the network.
  • As the consuming service knows the EPR of the producing service, this eliminates the need for an ESB.
Newton (a component model built on OSGi, Jini and SCA) is the only runtime engine I know of that supports both P2P (Jini) and SCA. The Apache Tuscany project does claim to support a JXTA (P2P) binding, but I have not researched it as yet.
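
For illustration, here is a hedged sketch of what the service map kept by a super peer might look like: a logical (SCA) service name mapped to the set of live endpoints learned from multicast announcements like the one above. All names are hypothetical, and a real implementation would also handle leases/heartbeats to age out dead instances.

    import java.util.Collections;
    import java.util.Map;
    import java.util.Set;
    import java.util.concurrent.ConcurrentHashMap;
    import java.util.concurrent.CopyOnWriteArraySet;

    // Hypothetical super-peer service map: logical (SCA) service name -> live endpoints.
    // Entries are added when an instance multicasts its availability and removed when
    // it announces shutdown (or misses a heartbeat).
    public class ServiceMap {
        private final Map<String, Set<String>> endpointsByService = new ConcurrentHashMap<>();

        public void register(String logicalName, String epr) {
            endpointsByService
                    .computeIfAbsent(logicalName, k -> new CopyOnWriteArraySet<>())
                    .add(epr);
        }

        public void deregister(String logicalName, String epr) {
            Set<String> eprs = endpointsByService.get(logicalName);
            if (eprs != null) {
                eprs.remove(epr);
            }
        }

        // A consumer looks up the EPRs by logical name and invokes the producer directly (P2P).
        public Set<String> lookup(String logicalName) {
            return endpointsByService.getOrDefault(logicalName, Collections.emptySet());
        }
    }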

Just my thoughts, and as always do feel free to drop me a line with your comments and/or feedback.

- Yogish

Thursday, July 24, 2008

Key Best Practices - REST Assured????

The purpose of this blog is to find out whether there is a place for REST in the realm of “Business Services”.

First off, my definition of REST: "GET", "PUT", "POST" and "DELETE" operations performed on "Resources" that are identified by URIs and transmitted over HTTP(S).

REST by nature has a very simple service operation set, with all the complexity embedded in the resource URI. The operations that REST allows are NOT business user-friendly and hence do not really belong in the Service Repository that an end user references to discover business services. To accommodate complex business behavior, and to compensate for the finite list of RESTful operations, the resource identifiers have to be fairly complex, as illustrated below.
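
As a hypothetical illustration of this point, consider how a business query such as "find gold-tier customers in California with open orders" ends up encoded entirely in the resource URI, since GET is the only verb available. The host, path and parameters below are invented:

    import java.io.BufferedReader;
    import java.io.InputStreamReader;
    import java.net.HttpURLConnection;
    import java.net.URL;

    // Hypothetical example: the business intent lives entirely in the URI,
    // not in a business-friendly operation name.
    public class RestLookup {
        public static void main(String[] args) throws Exception {
            URL url = new URL("http://api.example.com/customers?tier=gold&state=CA&hasOpenOrders=true");
            HttpURLConnection conn = (HttpURLConnection) url.openConnection();
            conn.setRequestMethod("GET"); // the only "operation" visible to the consumer
            try (BufferedReader in = new BufferedReader(new InputStreamReader(conn.getInputStream()))) {
                String line;
                while ((line = in.readLine()) != null) {
                    System.out.println(line); // the representation of the matching resources
                }
            }
        }
    }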

SOA, on the other hand, allows a fairly diverse interface definition that resembles the business syntax as closely as it can. The canonical model that is shared is standardized and is not as diverse as the resource URIs of the REST architecture. The canonical models are, for the most part, XML.

Another key difference lies in the protocol and messaging support. REST is limited to HTTP/HTTPS, which by nature is stateless and has no standardized protocol-level interception model for enterprise-class behavior such as guaranteed message delivery and asynchronous messaging semantics. SOAP, and the standardized metadata associated with the SOAP envelope, can on the other hand be interpreted by the Web Services SOAP stack to enable behaviors such as Reliable Messaging, Transaction Management, Addressing and Notification, with a level of interoperability across vendor platforms.

Having said that, it is possible to find a home for RESTful architecture and to leverage one of its strong points: a simplified and standardized interface. REST may be a good architectural model or construct to follow in the service implementation layer or at the service adapter layer. This helps to keep the service implementation the same across any type of service interface definition. In the event there is a need for additional transactional integrity, these behaviors could be taken care of in the service facade or in the service mediation layer, while keeping the actual implementation in the simplified RESTful architectural format.

If we are looking at a SOA service operation being exposed to the world with a RESTful service implementation layer, then we would need to deal with translating the canonical models and parameters of the SOA-style service into URIs. This translation includes extracting the request parameters from the request canonical model and transforming them into verbs and values in the format of a query string, i.e. a URI, as sketched below.
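
Here is a minimal, hypothetical sketch of that translation step, assuming the parameters have already been extracted from the request canonical (XML) model into name/value pairs. The resource path and parameter names are invented:

    import java.io.UnsupportedEncodingException;
    import java.net.URLEncoder;
    import java.util.LinkedHashMap;
    import java.util.Map;

    // Hypothetical sketch: flatten parameters taken from the request canonical model
    // into a query string for the RESTful service implementation layer.
    public class CanonicalToUriTranslator {

        public String toResourceUri(String baseResource, Map<String, String> canonicalParams)
                throws UnsupportedEncodingException {
            StringBuilder uri = new StringBuilder(baseResource);
            char separator = '?';
            for (Map.Entry<String, String> p : canonicalParams.entrySet()) {
                uri.append(separator)
                   .append(URLEncoder.encode(p.getKey(), "UTF-8"))
                   .append('=')
                   .append(URLEncoder.encode(p.getValue(), "UTF-8"));
                separator = '&';
            }
            return uri.toString();
        }

        public static void main(String[] args) throws Exception {
            Map<String, String> params = new LinkedHashMap<>();
            params.put("customerId", "C-1001");   // pulled from the canonical request model
            params.put("status", "OPEN");
            String uri = new CanonicalToUriTranslator()
                    .toResourceUri("http://api.example.com/orders", params);
            System.out.println(uri); // http://api.example.com/orders?customerId=C-1001&status=OPEN
        }
    }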

One option is to translate the input directly into the specific URIs required by the RESTful service implementation layer. Here the service interface layer or the service mediation layer performs the translation of the input parameters into the specific URI and initiates the invocation of the appropriate service implementation.

The other option is to have the service interface layer or the service mediation layer translate the input into a generic URI for calling the RESTful implementation layer, which in turn does the mapping internally to the required operational version of the URI. This improves the durability of the service interface layer or the service mediation layer. In addition, it provides a mechanism for internal operation overloading and polymorphism, as sketched below.
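
A hedged sketch of this second option, with invented resource paths, might look like the following: the mediation layer always calls the generic resource, and the RESTful implementation layer resolves it internally to the current (versioned) operational resource, which is what keeps the interface layer stable across versions.

    import java.util.HashMap;
    import java.util.Map;

    // Hypothetical sketch of option two: a generic resource is mapped internally to
    // the current operational (versioned) resource. The mappings would normally be
    // metadata-driven rather than hard-coded.
    public class GenericResourceRouter {
        private final Map<String, String> internalMapping = new HashMap<>();

        public GenericResourceRouter() {
            // invented mappings for illustration
            internalMapping.put("/orders/search", "/internal/v2/orders/search");
            internalMapping.put("/customers/search", "/internal/v3/customers/search");
        }

        public String resolve(String genericResource) {
            String internal = internalMapping.get(genericResource);
            if (internal == null) {
                throw new IllegalArgumentException("No internal resource mapped for " + genericResource);
            }
            return internal; // the implementation layer invokes this versioned resource
        }
    }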

Regardless of which option is chosen, I would think that embedding RESTful services in the implementation layer is a good option, as long as painfully abstract behavior is not exposed to the business user, who is used to seeing English-like business syntax.

Here is a link to another blog of mine on the topic in the context of Service Orientation!

Even though I agree that there is a place for REST, as it may simplify things (by providing a finite set of operations), there are infrastructural capabilities that do not exist with REST, and without these the business agility piece is going to be hard to deliver. Here are some additional questions that an enterprise architect may have to resolve prior to embarking on a RESTful implementation.

  1. Identify which layer deals with the interpretation of the get(Resource) call. Is it the service provider (SP) interface or the service implementation layer?
  2. Would the SP deal with versioning of the Resource, thereby creating a grand facade that does internal resource mapping via a URL-redirect-style substitution? This would be one way of protecting the service consumer (SC) and accomplishing backward compatibility.
  3. Identify how the SC gets the handle or URL to the "right" resource. How does it "know" what to expect from the Resource, and how does it "know" how to express its semantic expectations for the SP Resource being called upon? Does it do a "Get(SearchByCriteriaResource)" call with the semantics embedded in the request Resource?
  4. Is the get(RightResource) another call, where the metadata held in the response Resource is expected to carry the "right metadata" for each of the provider Resources (that meet the SC's search criteria) in order to help the SC choose the right SP? If so, this invocation would be followed by a secondary SC-to-SP interaction to fetch the chosen SP Resource.
  5. Identify whether there would be a standardized URI with a search criteria specification. This might be good to define up-front to enable the support of Jini-like or SaaS-like interactions.
  6. Identify and define how one would deal with asynchronous interactions. Would it be via "ack/no-ack post notifications" and "post response Resource" calls? How would this be handled? Would the SC have to provide a pre-set "call back" Resource identifier to which the SP replies (see the sketch after this list)?
  7. Would existing messaging infrastructures still work, given that most of them have proprietary protocols or else support JMS/RMI?
  8. Identify a URI to broker the two-part synchronous post HTTP call. This would be for enabling pseudo-asynchronous interaction semantics. Is a centralized broker URI all that is needed to broker the two-part synchronous call for faking the asynchronous interaction semantics, especially if the architecture does not have to deal with message delivery guarantees? I assume "post" might have a void-return-type communication model to enable releasing of the calling address space's resources.
  9. Would one have to leverage any "Registry", ESB-like middleware or "service grid"-like infrastructure? If so, how would resource proxying work? Would the proxy Resources "forward" their calls to the right Resource?
  10. Would this mean that the middleware and mediation layer products need to know how to deal with this type of interaction semantic, where the "Registries", ESBs and "service grids/service marketplaces" are all metadata-driven Resource proxies with built-in support for REST, as opposed to service proxies as would be the case in SOA-style architectural models?
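
On the asynchronous interaction question (item 6 above), a hedged sketch of the callback style might look like this: the service consumer exposes a pre-agreed callback resource, passes its URI with the original request, and the provider later POSTs the response Resource to that URI. The path, port and use of the JDK's built-in HTTP server are illustrative assumptions, not a prescribed implementation.

    import com.sun.net.httpserver.HttpExchange;
    import com.sun.net.httpserver.HttpHandler;
    import com.sun.net.httpserver.HttpServer;
    import java.io.IOException;
    import java.io.InputStream;
    import java.net.InetSocketAddress;

    // Hypothetical callback resource exposed by the service consumer. The provider
    // POSTs the "response Resource" here once the asynchronous work completes.
    public class CallbackResourceEndpoint {
        public static void main(String[] args) throws IOException {
            HttpServer server = HttpServer.create(new InetSocketAddress(8081), 0);
            server.createContext("/callbacks/order-status", new HttpHandler() {
                public void handle(HttpExchange exchange) throws IOException {
                    try (InputStream body = exchange.getRequestBody()) {
                        byte[] response = body.readAllBytes(); // the posted response Resource
                        System.out.println("Received async response resource: " + new String(response));
                    }
                    exchange.sendResponseHeaders(204, -1); // acknowledge, no body
                }
            });
            server.start();
        }
    }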

Thanks in advance for your feedback.

surekha -

Tuesday, July 22, 2008

Mashup, WOA, SOA, SaaS, REST and the kitchen sink

These days there is a lot of jargon thrown around, and following is my attempt to make sense of it all and put it together in simple terms. A couple of years back some of us, the early adopters of SOA, put together the Enterprise SOA Maturity Model (presentation). The maturity model for IT organizations (in simple terms) consisted of three levels, starting with developing web applications, followed by composite (aggregated) applications, and finally maturing to end-to-end (automated) business processes.

It is great to observe that some in the community are very strong advocates for Web-Oriented Architecture (WOA) as a first step towards adopting SOA. This basically validates our initial assumption that business will only fund projects that demonstrate value, and providing web-based solutions is the easiest way to get their buy-in. For those not familiar with WOA, you may want to read this blog on What is WOA - The Future of Services-Oriented Architecture. What WOA does is simplify the architecture and approach - however, there is still a need to ensure consistent architecture and governance across the enterprise.

The above diagram illustrates the architectural approach for a WOA-based sample application. To keep it simple, this sample application integrates with Identity Management, Stock Information and a Business Process. The RESTful approach makes it simpler and easier. However, the deployment model can very quickly become pretty complex, especially with the introduction of an ESB as a mediation layer. This is how I have deployed in the past and may well adopt the model again in the near future. However, as the number of services increases, it increases the burden on the architects as well as IT Operations to manage not only the complexity of service dependencies, but also the complexity of multiple instances of the same services in the production environment. Couple that with the introduction of an ESB as a mediation layer and the configuration of the load balancers and firewalls between the various sub-zones in the network, and this model can become unmanageable (even with an SOA Repository + CMDB) very quickly.

Alternate Approach:
Last year, while working with my colleagues, we came across Tuple Space (an implementation of the associative memory paradigm for parallel/distributed computing), the theoretical underpinning of the Linda language developed by David Gelernter and Nicholas Carriero at Yale University. The original Linda model defines four operations that individual workers perform on tuples and the tuplespace:
  • in atomically reads and removes—consumes—a tuple from tuplespace
  • rd non-destructively reads a tuple from tuplespace
  • out produces a tuple, writing it into tuplespace
  • eval creates new processes to evaluate tuples, writing the result into tuplespace
This simple approach of handling tuples (objects) in memory should enable us to integrate the enterprise using a simple (limited) set of operations, similar to the RESTful style.




The above diagram illustrates a simpler approach to integrating the enterprise (or enabling composite applications). From a business context, the business always deals with entities such as Customer, Order, User, Inventory and Product. This approach enables the IT project teams to understand and define the business context in the business's own language.


How does this work?

There are various technologies available; for this example, let's take JavaSpaces. Every object (business entity) has its own space. The operations that can be performed on a space are limited to write() - create/update an entry in the space, read() - read an entry without removing it, take() - read and remove an entry, and notify() - register to receive an event whenever a matching entry is written to the space. Of course, there are additional operations, and some vendors have provided a SQL interface to these objects. A sketch of these operations follows.
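
Here is a minimal sketch of these operations, assuming a Jini/JavaSpaces implementation (e.g. GigaSpaces or Blitz) is on the classpath and the space has already been discovered (lookup code omitted). The Order entry, its fields and the timeouts are invented for illustration.

    import net.jini.core.entry.Entry;
    import net.jini.core.lease.Lease;
    import net.jini.space.JavaSpace;

    // Minimal JavaSpaces sketch: write, read and take an Order entry.
    public class OrderSpaceExample {

        // Entries are plain objects with public fields; null fields act as wildcards in templates.
        public static class Order implements Entry {
            public String orderId;
            public String status;
            public Order() { }
            public Order(String orderId, String status) {
                this.orderId = orderId;
                this.status = status;
            }
        }

        public static void process(JavaSpace space) throws Exception {
            // write(): put a new Order into the space
            space.write(new Order("O-1001", "NEW"), null, Lease.FOREVER);

            // read(): non-destructive read of any NEW order (waits up to 1 second)
            Order template = new Order(null, "NEW");
            Order found = (Order) space.read(template, null, 1000);

            // take(): read and remove the order so exactly one worker processes it
            // (may return null if nothing matched within the timeout)
            Order claimed = (Order) space.take(template, null, 1000);
            if (claimed != null && found != null) {
                System.out.println("Processing " + claimed.orderId + " (also saw " + found.orderId + ")");
            }
        }
    }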

Impact to IT Operations
As organizations move to the object (space) model, it becomes easier for IT Operations to manage multiple instances of the distributed (in-memory) data grid, eliminating the need to know and understand the service dependency matrix. This approach does not eliminate the need for an ESB, but it does limit the need for multiple (logical) hops and reduces the number of (proxy) services that need to be maintained and managed by IT (development and operations).

Impact to BPM
The business is very much interested in knowing the status of the business, preferably in real time. Unfortunately, the current BPM products can provide BAM capabilities only for those business processes that run on their own instance. None of the existing products has the capability to provide end-to-end, near real-time BAM.

This approach enables IT to provide business monitoring and management functionality by leveraging the notify() capability of JavaSpaces, as sketched below.
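
As a hedged sketch (the remote-listener export plumbing that Jini requires is omitted), a monitor that registers interest in "SHIPPED" orders via notify() might look like this; the entry type is the hypothetical Order from the earlier sketch.

    import java.rmi.RemoteException;
    import net.jini.core.event.RemoteEvent;
    import net.jini.core.event.RemoteEventListener;
    import net.jini.core.event.UnknownEventException;
    import net.jini.core.lease.Lease;
    import net.jini.space.JavaSpace;

    // Hypothetical BAM hook: the space calls back whenever a matching entry is written.
    public class ShippedOrderMonitor implements RemoteEventListener {

        public void notify(RemoteEvent event) throws UnknownEventException, RemoteException {
            // A BAM dashboard could increment counters or push this update to the business user.
            System.out.println("An order was shipped; sequence number " + event.getSequenceNumber());
        }

        public static void register(JavaSpace space, ShippedOrderMonitor listener) throws Exception {
            OrderSpaceExample.Order template = new OrderSpaceExample.Order(null, "SHIPPED");
            space.notify(template, null, listener, Lease.FOREVER, null);
        }
    }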

In my opinion, in the long term the BPM vendors will provide the modeling and simulation capabilities, while business process execution will be powered by a distributed (in-memory) object infrastructure.


Impact to SaaS
Today most of the SaaS platforms (also referred to as PaaS) are object driven. The objects and their relationships drive not only the user interface but also the business process and rules management. They are already one step ahead of the rest of the industry. However, most SaaS infrastructure and platform teams spend a lot of time and effort dealing with multi-tenancy.

Distributed object technologies provide the capability to map the objects (spaces) to data sources (databases, JCAs, web services, files, etc.), so one could potentially implement a multi-tenant SaaS solution powered by an existing packaged application at the back end. Something the existing SaaS companies should not ignore.

Business State Machines
As discussed previously, business deals in the context of entities (or business objects), and their decision-making criteria are based on something happening (events). A combination of event servers and business state machines may be sufficient to provide end-to-end business process management (at run time).

Impact to Master Data Management (MDM)
In my opinion, MDM is a must-have solution for any enterprise interested in managing and monitoring its end-to-end business processes, independent of whether it adopts SOA or not. As MDM solutions deal with the most common business entities (objects), such as customers, contacts, orders and products, the distributed (space) object model fits in very well. Address standardization, matching, reading an entire customer record and performing real-time aggregation (analytics) become much easier to develop and manage - an approach that existing MDM vendors should look into. Most of the MDM solutions provided by vendors are easier to implement than 5 years back (when we had to build it all within IT); the biggest challenge with these solutions is defining and maintaining the master data in a business context.


Summary:
Even though the traditional approach scales to meet large enterprise needs, it is too complex and makes it difficult to provide true agility. The distributed space concept has the ability to facilitate business agility through pluggable modules (based on event/state change notifications) and to map the objects back to a source system. This not only changes the game but also takes the industry one step closer to the reality of cloud computing.

Just my thoughts, and as always do feel free to drop me a line with your comments and/or feedback.

- Yogish

Sunday, July 13, 2008

3 Ps of Strategic IT

While working at The Coca-Cola Company, we were introduced at orientation to the 3 Ps - People, Products & Price - which translated into something as follows:

"we need the right people to produce the right product to sell at the right price"

I would agree that this is true even today for any enterprise. In my opinion, the following are the 3 Ps key to a Strategic IT.

People:
As always, people are the key assets of any organization, especially the ones that make a difference. This is true not only for folks in leadership positions but also for the developers, system administrators, administrative assistants, as well as the security guard at the data center. It is important to identify the exceptional and key people within an enterprise and do whatever it takes (within reason) to keep them.

Platform:
Even though the business keeps asking for solutions, which could be packaged or custom applications, the platform on which they are developed is key. In my experience, even if we adopt a packaged solution for a specific vertical, the customization required to tailor it to the company's specific needs is still very expensive, and as for upgrades - forget about it. You might as well do a brand new implementation.

It is heartening to see that all major solution vendors have been focusing on migrating their packaged applications to an open, standards- and tools-based platform - which will make customization easier as well as future-proof upgrades.

Process:
When I refer to process, it is not the business process (which is also key) but the process put in place to enable change. Due to the fast-changing business environment, businesses will need to transform themselves much more rapidly or risk going out of business. This requires IT to help facilitate innovation - a key process that needs to be put in place and made known to the entire organization. And just like any other business process, this innovation/transformation process should be reviewed on a periodic basis.

Just my thoughts, and as always do feel free to drop me a line with your comments and/or feedback.

- Yogish Pai

Friday, July 11, 2008

Business Agility and Business Driven EA Domain Models

For my presentation on SOA and Business Agility at the IC - local chapter event last month, I had slightly refined the Business Agility Domain Model. The PDF version of the slides on the domain model is available here and is based on my original blog on Defining and Measuring Business Agility. I have been planning for a long time to develop a spreadsheet to go along with it - maybe someday I shall get to it.

Over the last few months I have been helping my customers understand the role and importance of Enterprise Architecture, and I came up with this EA domain model (also referred to as the "circle of happiness" by one of my potential customers :) ).


The Business Driven Enterprise Architecture consists of the following domains:
  • Business Architecture (or Business Design)
  • Competency Center -Check out this SOA Podcast on SOA COE by Melvin Geer published by the SOA Consortium.
  • Enterprise Architecture - check out all my blogs on this topic
  • Governance - Corporate, IT, Enterprise Architecture, SOA and so on (whatever makes sense)
  • Life cycle management includes all life cycles such as Program Management, Project (application) Life cycle, SOA Life cycle and Services Life cycle
  • Finally - Environment Industrialization. We came up with this term at BEA-IT to ensure a consistent systems environment throughout the application life cycle. The systems environment included networks, firewalls, OS (versions and patches), servers, data centers and disaster recovery sites for all environments from development to production.
Hope this was helpful, and as always do feel free to drop me a line with your comments and/or feedback.

- Yogish

Thursday, July 10, 2008

SaaS requirements for Large Enterprises

Thanks to Annie once again for forwarding me the article SaaS Star leaves SAP for SalesForce.com. Executives moving to competitors is nothing new; it was not long ago that most CEOs of large software companies in Silicon Valley were Oracle alumni. As Larry could not retain them, he went and bought them :). Sorry - I digress.

While reading the article - the following quote caught my eye.

“SAP doesn’t have a SaaS strategy,” he (Steve Lucas) told me. “They don’t have a single piece of paper that states what their SaaS strategy is.”


I don't know Steve, but if he had already made this statement to the executives of SAP while he was there - then kudos to him; otherwise this is Salesforce.com PR at work (and very good PR at that).

Don't get me wrong - I like Salesforce.com; they are a great company and the market leader in the SaaS environment. A couple of months back I did develop an "Application Portfolio and Time Reporting" prototype - it took me a couple of days, it was very easy to develop (no code required), and I shall gladly share it with anyone who is interested.

The focus on AppExchange (or Platform-as-a-Service) by SalesForce.com is pioneering and a great endeavor. However, in my opinion, most SaaS solution providers (and platforms) are focused on and excellent at delivering point solutions - such as sales force automation (or CRM, like SalesForce.com), financials, HR, inventory management (CRM and ERP, like NetSuite), payroll, financial and tax reporting (such as Intuit) - targeting small and medium businesses. In addition, there are probably hundreds of other start-ups currently in the process of developing SaaS platforms (Coghead, saas, etc. - and I expect to hear from the rest of them). SAP has very clearly stated that their SaaS solution, Business ByDesign, is targeted at small and medium businesses (for now). As for Oracle - Larry Ellison already has a stake in SalesForce.com and NetSuite and will acquire one or both of them after they demonstrate that they can scale out to large enterprises (or $$$$ - whichever comes first).

Following are some of the high-level SaaS (platform) requirements (on paper/blog :) ) that, in my opinion, large enterprises have. These requirements are focused primarily on the platform, rather than the application/solution.

  • The platform should meet all the technical requirements such as availability, reliability, security, standards, etc. (most SaaS solutions do support this)
  • Decouple the platform into User Interaction (portal), Business Logic and Domain (data) layers. Not supported by a single SaaS provider (that I know of).
  • User Interaction - only the presentation is provided and hosted by the SaaS provider, and all the business logic and data are stored somewhere else. A sample use case could be: User Interaction hosted by SalesForce.com, Business Logic hosted by SAP (Business ByDesign) and Data hosted by Oracle On Demand.
  • Use standard development tools such as Eclipse, with code that could be deployed both on-site as well as on the SaaS platform. Today in most cases it is not possible to reuse code between SaaS and in-house development. It would be great if the same code could be deployed both on the SaaS platform as well as on on-premise infrastructure.
  • Unrestricted near real-time data services will be key to meeting the needs of large enterprises.
Let me try and break this down in more detail:

User Interaction:
  • Support multi-channel - supported by most of the current providers
  • Online/offline capability - none of them do so as yet. I did like the Alchemy product initiated by BEA in 2004, which was sadly discontinued after Adam Bosworth left BEA. There needs to be a capability to develop a solution once that supports both multi-channel as well as offline/online use.
  • Develop using standard tools and deploy on SaaS or in-house infrastructure. Major SaaS vendors do support this for Perl, PHP or Ruby, but not for Java or .Net (that I know of) - smaller ones do support Java, Flex, AJAX, etc.
Business Logic
  • Based on standards such as BPEL, XPDL or native code (Java, Perl, PHP, Ruby or .Net). Most major SaaS vendors do not as yet support the standards but do support native code. In addition, developers can use the vendors' online (graphical) developer tools to model the business process/logic. However, the code is not deployable within the enterprise.
  • Ability to decouple the business rules/policies from the business process. Today this is supported by developing custom code - expect this to mature over the next two years. The business rules engine (such as ILOG or Drools) could be deployed within the enterprise.
Domain (data) Layer
  • To me this is the data grid that supports distributed data (which also includes the metadata that drives the entire solution)
  • Developers should be able to model any simple or composite object and deploy it on the grid.
  • Based on the operation (CRUD), the data layer shall perform the operation on the appropriate data source
  • Ability to support events and alerts to trigger actions (supporting Event-Driven Architecture), which could also be used as a business state machine.
  • Provide real-time performance - a key requirement for next-generation BI solutions
  • Other than SalesForce.com and Coghead, I have not come across other players that provide this capability. Their solutions are still transactional and need to be extended to provide near real-time performance. It would be ideal for the SaaS providers to leverage tools like Oracle Coherence (Tangosol), GigaSpaces, JavaSpaces or SAP's in-memory database, in combination with EII tools such as MetaMatrix, Composite or AquaLogic DSP, to provide the foundation of this layer. A hypothetical sketch of such a domain layer contract follows this list.
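
To make the requirement concrete, here is a hypothetical sketch of what a domain (data) layer contract for such a platform might look like; every type and method name below is invented for illustration, and the routing to the appropriate data source is assumed to live behind the interface.

    import java.util.List;
    import java.util.Map;

    // Hypothetical domain (data) layer contract: CRUD over modeled business objects
    // plus change notification, with the mapping to the actual data source
    // (database, JCA, web service, file) hidden behind the interface.
    public interface DomainDataService {

        String create(String objectType, Map<String, Object> attributes);

        Map<String, Object> read(String objectType, String id);

        void update(String objectType, String id, Map<String, Object> attributes);

        void delete(String objectType, String id);

        // Query against the grid for near real-time reporting/BI.
        List<Map<String, Object>> query(String objectType, Map<String, Object> criteria);

        // Event-driven hook: invoked whenever an instance of the object type changes,
        // which could also feed a business state machine.
        void subscribe(String objectType, ChangeListener listener);

        interface ChangeListener {
            void onChange(String objectType, String id, Map<String, Object> newAttributes);
        }
    }
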
Over the next few years I expect large enterprises to start adopting point SaaS solutions, with IT organizations primarily focusing on integrating the enterprise. I hope by that time we shall have a standards-based platform that needs one and only one development tool and gives IT organizations the option of deploying the solution either within the enterprise or on the platform of a SaaS provider.

Maybe what we really need is a Services-Oriented Operating System :).
As always, do feel free to drop me a line with your comments and/or feedback.

- Yogish

Thursday, July 03, 2008

Best Practice: 5 things that the CIO should focus on

Following are the 5 things that the CIO should focus on in the current environment.

1. Focus on the Demand Side of business
As per my earlier video blog on the Changing Role of C-level Executives, CIOs need to focus on two sides of the business: the demand side and the supply side. Today most CIOs are focusing on the supply side, i.e. helping reduce costs, timely delivery of projects, outsourcing and off-shoring, and so on. Yes, CIOs do also focus on the demand side - but it is not necessarily the primary focus for the majority of CIOs.

The demand side basically means focusing on the business's demand for new or modified solutions, widely known as Business Agility (read my take on Defining and Measuring Business Agility). CIOs would do well to establish the Business Architecture discipline within the Enterprise Architecture team to help understand and prioritize the demand side of the business.

2. Establish a strong centralized function
Agreed, LOB-IT is essential to provide adequate services at the business level, but there is also a need for a strong centralized function. The two primary centralized functions would be:

Program Management Office: The leadership of the PMO should be close (preferably at the same location) to the CIO, and the others close to (at the same location as) the LOB-IT. I have seen multiple examples of remote and distributed PMO organizations that have not worked very well. It is not because of individual capability - in my opinion, there need to be face-to-face discussions between the PMO, the LOB-IT leadership team and the LOB business operations teams. The PMO should work in partnership with the Enterprise Architecture team to help prioritize and optimize at the enterprise level (instead of at the LOB level).

Enterprise Architecture: This is a key function, and once again, like the PMO, the EA team - especially the Business Architecture members - needs to be close to the business. One more point to note: this is the only team in IT that is paid and expected to look at the long-term picture (not just the current FY or quarter) as well as to influence the optimization of the enterprise.

Other centralized functions would be IT Operations, Application Development (this works for some organizations) and the QA/RM teams.

3. Outsourcing
As Ashok pointed out in his blog on SOA and Outsourcing, despite many of its shortcomings, outsourcing is here to stay. Sometimes, due to resistance from direct reports and other organizational dynamics, a CIO may defer outsourcing (and off-shoring) some of the non-core functions such as data center management, help desk and application support.

I agree that outsourcing is a difficult decision, but a required one. Make sure that you have adequate controls and processes in place, and note one key learning (based on my experience): for the first few months the service may deteriorate or take longer - until the outsourced/off-shore team comes up to speed.

4. Enable Innovation
This is a key ingredient for success - not just within IT but also within the business. One of my observations is that some/most IT organizations have a tendency to throw cold water on solutions put together by tech-savvy business folks. The typical push-back from IT consists of statements like:
  • we were not involved so we cannot support this
  • we have no idea what it does and its impact on the data center - so we cannot take ownership of this
  • we can redo the entire solution the right way and it would cost $$$$
  • ....
The CIO needs to step in and make sure that the IT organization accepts and supports business solutions - even if they were built by tech-savvy business folks.

5. People
The primary assets of any organization are its people. The CIO needs to clearly define and communicate the IT principles, identify primary and secondary (those that could potentially be outsourced) functions, as well as provide adequate training.

These are the 5 things that the CIO should focus on for establishing a Strategic IT.

As always, your comments are welcome, and please feel free to drop me a line.

Yogish Pai

Tuesday, July 01, 2008

Analysis on Oracle's BEA integration strategy

Oracle presented its BEA product integration strategy earlier this morning, and following is my take on their approach. Their market message about Fusion, Applications and Database did not change - this was more about integrating the product stacks. On the whole this is good news for existing Oracle customers who have standardized on Applications and Fusion, but not necessarily for the ones who standardized on BEA's stack (especially Portal and AquaLogic).

Following is how Oracle categorized the combined product stacks:

  • Strategic Products: IMO these are the primary products where they shall continue investing - great news for those who have standardized on these products
  • Continue and Converge Products: IMO they will increase the support price and keep these going for the next 9 years. Customers who have standardized on these products WILL have to migrate to a different product within 9 years.
  • Maintenance Products: EOL products
Oracle classified their products into the following categories, and I shall comment on each of them:

Development tools:

JDeveloper will be their primary development tool, and for those who have invested in IBM, TIBCO, SAP or other products, developers will need to develop composite solutions using two environments (JDeveloper and Eclipse). Yes, Oracle will still support Eclipse, but not at a level which IT organizations would prefer - especially the ones that have not standardized exclusively on Oracle.

One important point to note - they will no longer support Beehive. Beehive is the set of proprietary controls from BEA that was put into open source. This is key for all development on WebLogic Portal and WebLogic Integration. If Beehive is EOL, this also means the EOL of WLP and WLI (as one cannot develop solutions on these two products without controls).

Application Server:
This is great news for both BEA and Oracle customers - WebLogic will be the standard app server, with Oracle's SCA runtime support. Plus, this also beefs up the WLS development team, which was trimmed down substantially by BEA to create AquaLogic. In addition, their JRockit JVM provides a differentiation that others cannot match as yet, both for near real-time performance and for virtualization.


Services Oriented Architecture:
Oracle Data Integrator will still be their primary tool for data integration. No mention of AquaLogic Data Services Platform (looks like it is EOL - without any formal announcement).

AquaLogic Service Bus and Oracle's Service Bus will converge into one single product. This is great - an ESB with excellent support for WS, JBI, SCA, etc. and the ability to expand. IMO this integration will take over 2 years, and standardizing on either one will do for now.

The other Oracle products are great and provide capabilities that were missing in the BEA stack - these products are complementary.

BEA WLI - continued support but no further development. Not very good news for the large telecom and financial industry customers who have standardized on WLI. They will need to find an alternative - some other EAI or messaging platform like TIBCO, MQ, webMethods or SAP XI. ActiveMQ (open source) may also be an option for those who leveraged WLI for messaging.


Business Process Management:

It is great to see that Oracle has kept ALBPM - I was concerned they might not continue it for long. However, there is a lot of work to be done in ALBPM to integrate it with Oracle WebCenter (especially as they are no longer going to continue with ALUI) for human workflow. This could leave the door open for the competition (TIBCO, Vitria, SAP, IBM or Microsoft) to walk away with their user interaction business. In addition, ALBPM will also need to fortify its support for BPEL as well as transaction management. With additional resources and expertise from Oracle, ALBPM is expected to continue being the market leader.

Overall - this is a step in the right direction for Oracle.

Enterprise 2.0 Portal:
In my opinion, this is where Oracle got it wrong. A lot of large customers have standardized on WLP and/or ALUI. With Oracle putting these in maintenance mode, ALL BEA customers will be forced to migrate to another platform in the next five years. Most customers will stop new development on the BEA Portal products, which leaves the door open for their competitors. In addition, by losing WLP they just lost integration with other content management solutions like Documentum, Interwoven, FileNet, etc. Agreed, they still have their own content management solution (with which they will have tight integration), and maybe their intention is to sell the entire stack to the customer.

Customers who have standardized on Oracle apps will most probably go with Oracle WebCenter, and those who have standardized on SAP applications will most probably standardize on SAP Portal. As for the rest of the customers, it is open season. The race will still be between IBM, Oracle, Microsoft and open source (Web 2.0 products).

Good to see that Ensemble and Pathways are still primary products for Oracle.

Identity Management:
Now they do have a complete stack, but I am still not sure whether customers will standardize on Oracle's stack. Agreed, they may be one of the market leaders for security, but personally I may still prefer a combination of CA and/or RSA products.

System Management:
Again, they now have a complete story, but they are not market leaders in any of these products. I guess Oracle may have to acquire BMC to complete the stack :). Alternatively, they could just go after CA and get both management and security at the same time :).

SOA Governance:
Glad to see that they kept ALER intact. Not sure why they still OEM the Service Registry, especially as all they need to do is provide UDDI v3 support in ALER and they could own the entire stack.

One area that Oracle will have to address over the next year or two is IT Governance / Application Portfolio Management. Currently ALER does integrate with the CA and HP IT Governance tools - however, at some point Oracle will want to own this application too (another acquisition :) ).

Service Delivery Platform:
This is very good news for the telecommunications industry. The combined portfolio is very attractive and something that will get the industry excited. The biggest challenge that they will most probably face is the lack of qualified resources in the market. However, knowing Oracle, they will overcome this problem with aggressive training and education programs.



Overall, this is good for Oracle and for most of the customers. However, it will be challenging for those BEA customers who had standardized on WLP and/or ALUI (Plumtree).

Disclosure: I was employed by Oracle between 1996 and 2000 and by BEA between 2002 and 2007. I am currently an independent consultant and am neither on contract with nor on the payroll of any of their competitors. This is purely my opinion, based on my experience in the industry.


Your comments are always welcome, and do feel free to drop me a line.

Yogish Pai
