Building for tomorrow

“We’re like most fifteen-month-olds. We wake up and cry at night and sometimes we fill our diapers but a lot of the time we bring smiles to people’s faces.”

That cute analogy is just one way Calgary-based Bas Wheeler describes Celero Solutions, a joint venture formed in January 2003 among five co-operative organizations to provide IT services to about 400 credit unions across Canada’s prairie provinces. As Celero’s CEO, he also calls it “a grand experiment” because five IT organizations have been integrated into one with no head office, but with 350 employees in offices in Calgary, Winnipeg, Saskatoon and Regina. Celero’s owners are Credit Union Central of Alberta, Credit Union Central of Saskatchewan, Credit Union Central of Manitoba, CU Electronic Transaction Services and Co-operative Trust Company of Canada. Together, the parent organizations represent a $25-billion co-operative financial services sector in western Canada.

Wheeler explains that the Celero initiative started in 1999 when, as executive vice-president of Credit Union Central Alberta, he was talking with his counterparts in the other four organizations about the Y2K issue and it became clear they were repeating the same work multiple times. Between the five organizations there were at least two data centres, and sometimes five of any given type of application, from banking systems to email, office systems, telecommunication networks, Internet banking solutions and interactive voice response systems.

A financial projection of IT expenditures, assuming each organization continued on its own over the next five years, indicated that creating Celero would bring efficiency gains over three to five years amounting to 15 to 20 per cent of the $60 million to $70 million in combined annual IT spending. With customers’/members’ support in principle for boosting efficiency by eliminating duplicated IT work, the existing five management teams created a new management team with a common business-based vision for their offspring organization.
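In rough terms (these figures are only approximations derived from the percentages and spending range quoted above, not numbers from the projection itself), the anticipated savings work out to:

\[
0.15 \times \$60\,\text{million} \approx \$9\,\text{million}
\quad\text{to}\quad
0.20 \times \$70\,\text{million} \approx \$14\,\text{million per year.}
\]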

Now, Celero is in the throes of providing full IT services while rationalizing and consolidating “quite a grab bag of stuff…to free up duplicate work and get to the backlog of what we want to get done,” as Wheeler puts it. Wheeler says they are on their way to achieving that, although their benchmarks are moving targets. Progress is difficult to measure against the initial benchmarks and targets because, after just one year, the new IT organization is responding to a changing world and delivering what wasn’t delivered before.

Essentially, Celero provides retail and wholesale banking systems, debit and credit card processing services, network management, security, software development and technology procurement services. Wheeler says one of their first big initiatives is to select the core retail banking system of the future, which in turn will drive the rationalized infrastructure and surrounding systems. He cites a 25-item list of criteria that includes such specifics as being open, able to integrate, backed by a vendor that is in for the long haul, not bleeding-edge current or “running on any version of PowerPoint,” and already implemented in a Canadian financial institution.

They are also working at building a single common culture out of the at least five cultures of the original organizations. They don’t yet have a portal because of the cost, says Wheeler, but they supplement face-to-face meetings with Webcasts using services from WebEx Communications, Inc. To complicate matters, each organization had its own processes and procedures, right down to time sheets. In standardizing these, Celero management is trying not to choose any one of the five existing sets of processes and procedures in use. “Everyone believes in standards as long as they are their standards,” says Wheeler.

He also finds a “grab and go” approach is effective. He says he was told in advance that one never gets done if one tries to develop a perfect set of procedures. So, rather than analyzing excessively, they try to pick among the best practices now and fix later. “You need speed over certainty in making decisions,” he stresses. “Staff needs that certainty even though you may change later. You can’t give an answer after six weeks of analysis.”

He says it has also been helpful that they resolved up front in 1999 who would be the management leader of Celero and what roles each management team member would take, so there was no pressure to compete for various posts. He adds that building such an infrastructure requires committed champions in charge of the initiative at each of the original organizations who really believe in the new venture. “You can’t do it half-heartedly,” he cautions.

Central support, remote access

Five years ago, another financial services firm was facing the challenge of finding a safe, efficient and economical way to give real-time access to independent advisors spread over some 7,000 kilometres and a range of computing environments. The company’s choices back then regarding its core infrastructure set the stage for the firm to take advantage of today’s opportunities.

The Berkshire Group of companies includes the independent financial planning organization Berkshire Investment Group Inc., the independent investment dealer Berkshire Securities Inc. and Berkshire Insurance Services Inc., which provides a range of insurance products aimed at wealth preservation. Until last November, from a base in Burlington, Ont., Berkshire provided its 550 financial advisors and 110 locations across Canada with essential access to four applications bundled together on the company’s integrated portal. Since then, a merger with the TWC Group of Companies Inc. of Radville, Sask., has swelled that user base to about 1,000 advisors across some 200 locations.

At press time, Berkshire was in the process of merging TWC into its infrastructure and expected this to be completed by June. Craig Henshaw, Berkshire vice-president of information technology and operations, says their efforts are quick and seamless compared to other consolidators who take years to bring their assets together.

He explains that the infrastructure they built five years ago and enhanced three years ago enables them to integrate another company’s assets relatively seamlessly with the company’s advisors and business partners. “From a merger and acquisition standpoint, a lot of our competitors are really struggling because at the same time they’re acquiring and trying to merge these firms, they are also looking at replacing and building their core internal infrastructures,” he says. “They’re trying to put new back office systems in place at the same time they’re trying to bring new companies on board. They’ve got a spaghetti mix of technology. We took the (approach) we’re going to build first our infrastructure and technology and then go to the marketplace and try to increase our assets. We felt it is easier to bring people on board to an infrastructure that is already built and stabilized as opposed to trying to do both at the same time.

“We set out five years ago with a fairly specific vision at a business level of where we felt our industry was going to be going and then translated that into a very specific technology vision,” he explains. “I think we’ve just been fortunate that the business vision turned out to be right and subsequently the corresponding technology visions have also turned out to be right. So, decisions that we made five years ago have really paid off for us now.” Those decisions included selecting the Dataphile real-time mutual funds/securities trading system, financial planning software, x.eye.wealth.manager as the CRM/contact management solution, and automated forms and documentation.

The Dataphile system was originally designed to operate over a WAN environment for branch locations. While Berkshire’s corporate branches are similar to a traditional bank/broker technology infrastructure and have WAN capabilities, the majority of the associate branches run independently and lack that access. Cost and support issues made an extensive VPN environment an unattractive solution.

Instead, Berkshire chose to create an independent Internet environment by implementing a Citrix NFuse infrastructure to deliver non-Web-based applications remotely into a Web browser. This included a 10Mbps pipe connecting the head office HP/Unix environment to the public Internet and high-speed connections from independent advisors at remote branches or strategic partners to the public Internet.

Henshaw says advisors purchase and support their own PCs. “We support our corporate products but not the local PC. So with Citrix, support doesn’t disappear; it just centralizes.” Although he admits that it can be challenging to draw the line between what is a local PC issue and what is a corporate application issue, centralization greatly relieves the logistics of supporting multiple locations. Certainly, resources are required to manage the Berkshire Citrix farm of 14 servers and 1,000 potential users, with hundreds of concurrent sessions at any one time. Henshaw reports that Citrix load balancing spreads users evenly across the Microsoft terminal servers.
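As a rough illustration of the kind of least-loaded assignment a load balancer performs in a farm like this, here is a minimal sketch; it is not Citrix’s actual algorithm or Berkshire’s configuration, and the class names and session counters are purely hypothetical.

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of least-loaded assignment across terminal servers.
// Citrix Load Manager uses its own metrics; this only illustrates the idea.
class TerminalServer {
    final String name;
    int activeSessions;

    TerminalServer(String name) { this.name = name; }
}

class FarmLoadBalancer {
    private final List<TerminalServer> farm = new ArrayList<>();

    void addServer(TerminalServer s) { farm.add(s); }

    // Pick the server with the fewest active sessions.
    TerminalServer leastLoaded() {
        TerminalServer best = farm.get(0);
        for (TerminalServer s : farm) {
            if (s.activeSessions < best.activeSessions) best = s;
        }
        return best;
    }

    // Place the next user's session on the least-loaded server.
    TerminalServer assignSession() {
        TerminalServer target = leastLoaded();
        target.activeSessions++;
        return target;
    }
}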

Henshaw sees NFuse as making a critical difference to the Citrix implementation. He explains that in a typical Citrix environment, the head office applications are connected by a LAN to a Citrix server-farm middle tier behind a firewall, and a second firewall buffers the server farm from the public Internet. Citrix in effect adds an icon on the desktop of ICA (Independent Computing Architecture) local clients, i.e. workstations at remote branches, so they can access what seems to them like a local application on their desktops.

He says NFuse takes that typical Citrix environment to the next level by making the browser the base. The NFuse server, which sits in front of the server farm but still behind the firewall in a DMZ, eliminates the local client that has to be installed on the remote workstation; the browser serves as the client. “NFuse is equal to a traffic cop establishing a connection to the Citrix farm server,” he adds. “Not all applications within the portal are truly Web-based behind the scenes but NFuse has been a key tool in helping us deliver this integrated application suite to the local desktop over the Web in a manner that all the applications look and feel Web-based.”
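The “traffic cop” role can be pictured as a browser-facing gateway in the DMZ that authenticates a request, asks the farm for a server (for instance via the load balancer sketched earlier) and hands back the connection details the browser needs. The sketch below is purely illustrative; it is not NFuse’s actual code or interfaces, and the class names are invented for the example.

// Illustrative only: a DMZ gateway brokering browser requests to a farm server.
class ConnectionTicket {
    final String serverAddress;
    final String sessionToken;

    ConnectionTicket(String serverAddress, String sessionToken) {
        this.serverAddress = serverAddress;
        this.sessionToken = sessionToken;
    }
}

class BrowserGateway {
    private final FarmLoadBalancer balancer;   // from the earlier sketch

    BrowserGateway(FarmLoadBalancer balancer) { this.balancer = balancer; }

    ConnectionTicket launch(String userId, String application) {
        // 1. Authenticate the user against the corporate directory (omitted here).
        // 2. Pick a farm server to host the published application.
        TerminalServer target = balancer.assignSession();
        // 3. Return the details the browser needs to connect; no locally
        //    installed client is required on the remote workstation.
        return new ConnectionTicket(target.name, userId + ":" + application);
    }
}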

That’s an important distinction when it comes to recruiting advisors. Henshaw says that advisors like the technology and it facilitates new advisors getting on board easily with Berkshire. “From a business standpoint, one of the key components of this infrastructure is our ability for new advisors to join our firm with ease,” he says.

Sticking to a long-term vision

Still another firm’s infrastructure choices made 10 years ago have remarkably stood the test of time and the rigours of a dramatically changing marketplace. IT executives at New York-based AXA Financial Services LLC figure they’ve saved about US$55 million by adhering to a blueprint for the services-based computing architecture the company first laid out in 1990.

Since then, the technologies have changed, but the central vision of a scalable, future-proof IT architecture based on reusing a core set of software-based services has stayed intact. The payoff has come from the ability to quickly and cost-efficiently develop, deploy and manage new applications across myriad customer channels, which the US$480 billion insurer says gives it an edge in an industry better known for its IT conservatism.

The three key reusable services the AXA IT architecture provides to all applications are common data access, data translation and security. These were initially developed by internal software developers who created proprietary code. Over time, however, as new technologies have become available from commercial IT vendors, AXA has adopted a strategy of swapping out home-grown programs for off-the-shelf software. But the company remains true to its original IT blueprint for reusable services.
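The pattern amounts to programming against a stable service interface so the implementation behind it can be replaced without touching the applications. The following is a minimal sketch under that assumption; the interface and class names are hypothetical and not AXA’s actual code.

// Hypothetical sketch: applications depend only on the service interface,
// so a home-grown implementation can later be swapped for a packaged product.
interface DataAccessService {
    String fetchCustomerRecord(String customerId);
}

class HomeGrownDataAccess implements DataAccessService {
    public String fetchCustomerRecord(String customerId) {
        // original proprietary code path (details omitted)
        return "record-from-legacy-store:" + customerId;
    }
}

class VendorDataAccess implements DataAccessService {
    public String fetchCustomerRecord(String customerId) {
        // call into the off-the-shelf product instead (details omitted)
        return "record-from-packaged-product:" + customerId;
    }
}

class PolicyInquiryApp {
    private final DataAccessService dataAccess;   // injected; the app doesn't care which

    PolicyInquiryApp(DataAccessService dataAccess) { this.dataAccess = dataAccess; }

    String lookUp(String customerId) {
        return dataAccess.fetchCustomerRecord(customerId);
    }
}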

AXA started with mainframe technology, then moved to client/server and is now Web-based. Originally, AXA used IBM’s SNA to provide communications and common data access to front-end systems directly from the mainframe. In the second phase, it eliminated direct connections between workstations and the mainframe. It also switched from IBM’s OS/2 operating system to Windows on its servers and front-end PCs. In the third phase, object-oriented interfaces based on CORBA and TCP/IP technology for communications were key.

Most recently, AXA brought in Microsoft Metadirectory Services to synchronize all of its other directories and migrated from CORBA to J2EE.

AXA IT executives say that in each transition, the mainframe legacy transactions, customer data and agent data, which together represent the biggest investment, have been carried forward.

For example, the need for a service to aggregate customer data across multiple accounts residing on one or more of AXA’s eight major computer systems remains the same, regardless of whether an agent is requesting it via a voice-response unit or the Web. AXA’s eight major systems include those that it uses for retail and wholesale distribution, sales applications, life annuity services, broker/dealer systems, applications used by AXA’s financial services planning group and client-facing applications for customer inquiries and self-service via the Web.
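To make the aggregation idea concrete, here is a minimal, hypothetical sketch of a service that fans a request out across several back-end systems and merges the results regardless of the requesting channel. The interfaces and names are illustrative only, not AXA’s.

import java.util.ArrayList;
import java.util.List;

// Hypothetical sketch of channel-independent aggregation: whether the request
// comes from a voice-response unit or the Web, the same service gathers a
// customer's holdings from each back-end system and merges them.
interface AccountSystem {
    List<String> accountsFor(String customerId);
}

class CustomerAggregationService {
    private final List<AccountSystem> systems;

    CustomerAggregationService(List<AccountSystem> systems) {
        this.systems = systems;
    }

    List<String> aggregate(String customerId) {
        List<String> all = new ArrayList<>();
        for (AccountSystem system : systems) {
            all.addAll(system.accountsFor(customerId));   // fan out, then merge
        }
        return all;
    }
}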

By separating the technology from the business intelligence and intellectual capital (the customer data, the agent data and the ability to perform transactions), AXA claims to be able to take advantage of new and better technologies as they emerge without interrupting the delivery of key services.

A major infrastructure upgrade, completed in 2003, from AXA’s legacy CORBA environment to IBM’s WebSphere MQ message-based technology serves as a prime example. AXA wanted a single software product to deliver a common way for applications to access data in back-end systems, AXA reports. This approach further streamlines the integration of new applications. It also enables new services, such as allowing users to perform a full complement of financial inquiries and transactions via the Web. AXA uses Candle Corp.’s PathWAI application infrastructure management software to mask the complexity of WebSphere and to manage the messaging services that need to take place among different AXA applications, outside of the applications themselves.
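Message-based access of this kind can be sketched with the standard JMS API, which WebSphere MQ supports. The example below assumes a JNDI-bound connection factory and queue; the JNDI names and the request format are placeholders, and this is not AXA’s actual integration code.

import javax.jms.Connection;
import javax.jms.ConnectionFactory;
import javax.jms.MessageProducer;
import javax.jms.Queue;
import javax.jms.Session;
import javax.jms.TextMessage;
import javax.naming.InitialContext;

// Hypothetical sketch: an application requests back-end data by putting a
// message on a queue instead of calling the back-end system directly.
public class DataRequestSender {
    public static void main(String[] args) throws Exception {
        InitialContext ctx = new InitialContext();
        ConnectionFactory factory = (ConnectionFactory) ctx.lookup("jms/RequestCF");
        Queue requests = (Queue) ctx.lookup("jms/DataRequests");

        Connection connection = factory.createConnection();
        Session session = connection.createSession(false, Session.AUTO_ACKNOWLEDGE);
        MessageProducer producer = session.createProducer(requests);

        TextMessage request = session.createTextMessage("customerId=12345;scope=allAccounts");
        producer.send(request);   // the messaging layer routes this to the back end

        connection.close();
    }
}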

AXA’s new Web portal is another example of how the company was able to quickly incorporate existing services into a new application. The portal, which serves the sales staff, integrates AXA’s e-business services with Siebel Systems Inc.’s CRM software and AXA’s services-based architecture. The system combines Siebel’s sales, marketing, call centre and analytical products with Sun Microsystems Inc.’s Sun ONE Portal Server.

Now being piloted by about 100 beta testers, the project took just a year to complete, thanks to the ability to take architecture-based services, such as the ability to aggregate customer data across multiple accounts, and plug that capability into the Siebel system rather than create it again from scratch each time a new application is added. Clearly, it pays to build an IT infrastructure on a business vision of your industry and your company’s strengths and challenges, and then stick to that vision.

— With file from Julia King, Computerworld (U.S.)
