Feature: Operating environment

With a user base of 35,000 students and staff, the National University of Singapore (NUS) houses one of the region’s largest campus networks. Managing it, however, is no easy task, and anything that makes life easier for the school’s IT department is more than welcome.

“We have about 170 IT staff spread across the campus,” says Professor Lawrence Wong, director, National University of Singapore Computer Centre (NUSCC). “It sounds like a lot, but it is actually about one IT person to 200 users whereas in the U.S., a typical university has a ratio of one to 100 or even one to 50.”

To better manage the network with its limited resources, NUSCC has decided to migrate all of its Windows users, who represent some 60 percent of its user base, to Windows XP, Microsoft Corp.’s latest operating system, launched in Singapore last month.

“We try to standardize the operating system, otherwise support is problematic,” says Wong. “If you have to cater to different versions of legacy systems, it just increases your costs.” He aims to have NUS’ 5,000 institutional systems up and running on Windows XP by the end of the year, with the remaining 15,000, mostly student notebooks, upgraded over the course of 2002.

Wong expects to reap immediate benefits from the exercise. “The main considerations for our upgrade are cost and quality of support,” he says. “From the service provision perspective, tools like Remote Assistance would help us provide more responsive support in the event that users have problems.”

Remote Assistance is one of the key features that Microsoft has incorporated into Windows XP. It enables users to share their display with helpdesk staff instead of relying on verbal explanations and interpretation. “Now instead of trying to imagine the situation, a visual channel allows us to dispense much more accurate advice,” Wong says. “This saves a lot of time and resources.”

The feature even allows the helpdesk to remotely take over the desktop. “If the user is comfortable, he can even hand control over to the helpdesk staff who can then run diagnostic tests,” says Wong. “It is a very powerful mechanism to help us work with users who have problems.”

The end user is also expected to benefit from a more intuitive and easy-to-use interface, adds Ben Tan, product manager, Windows XP, Microsoft Singapore.

Comfort level

Perhaps even more important than the ability to conduct remote diagnostics is Windows XP’s ability to make users comfortable with that prospect. NUS had earlier tried to implement a product called Systems Management Server (SMS), which includes detailed hardware and software inventory, software metering, software distribution and installation, and remote troubleshooting tools.

However, Wong soon found that the rollout could not move ahead because many users felt that “big brother” was watching over them. “We encountered a lot of resistance from users to activating SMS because of that,” says Wong. “XP’s Remote Assistance improves the situation significantly, as there is a mechanism for sharing at different degrees which SMS did not have.”

With this fine-tuning capability, the computer center would be able to sell the idea of remote diagnostics. “The user will always be the one who initiates the request for service, and even then he can fine-tune his system to allow different degrees of openness. The ability to tune that is at the user level,” says Wong. “We have to have some degree of user sensitivity.”

One barrier to a completely smooth upgrade process was the status of the available hardware. “We needed to upgrade the RAM of all the PCs as Windows XP is quite memory hungry,” says Wong. “Our existing PCs only have anywhere between 64MB and 128MB of RAM.”

According to Wong, the absolute minimum for decent performance is 128MB of RAM, and in the end, NUSCC decided to increase every PC’s memory by 256MB. “The price difference between upgrading by 128MB and by 256MB was quite marginal,” he says.

Processors, however, posed no problem: the existing Pentium II 300MHz and 450MHz chips are sufficient to handle the new operating system.

Server considerations

On the server side, NUS is also considering Windows .NET Server, the server counterpart to XP, which is scheduled to launch early next year.

According to Microsoft, this version improves on Windows 2000 in several key areas. It will be easier to deploy, allowing Active Directory to be rolled out across wide area networks. It will be faster, and will serve the Web more efficiently with Internet Information Server 6.0 built in. It will be more reliable, with enhancements such as hot-add memory support and hardware error detection, prediction, and correction. And it will be more manageable, with support for remote administration of servers, which means the servers do not require a keyboard, monitor, mouse or even a video card.

Enhancements or not, NUS is not jumping in immediately. “We have to be a bit more cautious because the impact of any bug is much higher,” Wong says. “We have certainly started testing, but to make a conscious decision to move from NT to XP server is quite another thing.”

While Wong considers the performance of .NET server to be adequate, security and robustness are the other considerations that need to be addressed. “Time and again, we hear of Microsoft being targeted for viruses and so on,” says Wong, whose Windows NT servers at NUS handle high volumes of file and print loads, as well as heavy messaging traffic. “We have one million e-mails transacted daily.” So, naturally, he is concerned.

The ‘Other’ OS

Questions about .NET Server’s security and robustness aside, industry experts and chroniclers of the IT business agree that Microsoft Windows is poised to dominate the server platform.

That is, if not for open-source based Linux. According to research firm International Data Corp., Microsoft won 41 percent of new server licenses in 2000, compared with Linux’s 27 percent. Other Unix variants collectively secured 13.9 percent, while Novell’s NetWare managed 13.8 percent.

If Microsoft had its way, the world would believe that Linux is merely a fast fading open-source fad, and at best a rogue computing platform whose sole purpose in life is to eke out a subsistence as a platform for non-mission-critical Web servers.

However, the fact remains that the Linux operating system is being used in some of the largest and most powerful clusters in the world.

For example, the oil exploration unit of petroleum giant Royal Dutch Shell Group is working with IBM to build a massive Linux-based supercomputer that will link together 1,024 servers.

Its subsidiary, Shell International Exploration & Production, plans to use the clustered system to analyze seismic data and other geophysical information as part of its efforts to find new supplies of oil. The system will provide Shell with more than 2 teraFLOPS (trillion floating-point operations per second) of computing power.

In Asia, the Institute of High Performance Computing (IHPC) in Singapore and the Center for Large Scale Computation (CLC) in Hong Kong are using Linux to automate sophisticated number-crunching tasks. IHPC’s Linux cluster is approaching 100 gigaFLOPS of theoretical peak performance, while CLC’s has sustained 14.1 gigaFLOPS.

Applications being run on these systems include computational fluid dynamics, financial engineering, large-scale optimization, weather forecasting and pollution modelling, clearly a different league from run-of-the-mill Web servers handling millions of hits an hour.

Beyond the Web

“Linux is already a proven Web server in a cluster environment,” says Cheok Beng Teck, director, plans & strategic development, IHPC.

“We experiment with very high-end engineering software and not things like Apache Web server,” Cheok adds. “Most of our jobs have very big memory size requirements, to solve many millions of simultaneous equations.”

Singapore’s Ministry of Manpower, for example, is working with IHPC to roll out COVES (collaborative one-stop virtual engineering services) on Linux. COVES is a tool used to identify and assess the risk of toxic chemical exposure on workers by simulating air flows, smoke movement and fire spread within a building.

According to CLC, Linux is bringing supercomputing into the mainstream. “What makes this really exciting is that companies are now in a position to use this huge increase in computer performance to achieve real innovation in their products,” says Kenneth Chow, associate director of CLC, and chief operating officer of Cluster Technology Ltd.

CLC is a joint venture of the Chinese University of Hong Kong and Cluster Technology, a service provider for users of high performance computing. CLC works with companies to use high performance computing to increase productivity and to gain a competitive advantage in the marketplace.

“Up to now, costs have been prohibitively high for hardware, software and technical expertise for commercial and industrial applications,” says Chow. “Our solution is to provide the expertise to implement real-life applications on Linux clusters.”

Chow sees CLC as a computer farm for graphics/animation, simulation and optimization, all of which he thinks provides the best short-term potential to popularize the use of clusters. “It will be made available to clients on demand for specific applications,” he says.

CLC’s cluster, which comprises 17 dual-processor Pentium III servers from Silicon Graphics, Inc. (SGI), was implemented in the first quarter of this year and has proved very robust. “Apart from scheduled maintenance, the machine has been running continuously now for eight months without a single crash,” says Chow.

While IHPC still uses traditional supercomputing systems like massively parallel processing (MPP) and symmetric multiprocessing (SMP) machines, clustering, especially with Linux, promises significant benefits for the research and educational institution. Lam Khin Yong, chief executive officer, IHPC, says: “We have to make the best use of current technology, the SMP systems. But we must also continue to make ourselves relevant by exploring new technologies like Linux.”

Cost benefit

One of the reasons the institute is upbeat about Linux is its typically low hardware cost. “For equivalent processing power, a Linux cluster is one quarter the price of an SMP machine,” says Cheok. “An SMP machine costs between US$20,000 and US$45,000 per gigaFLOP (a billion FLOPS), while a gigaFLOP on a Linux cluster is just the price of a PC.”

A cluster’s inherent scalability also allows the institutions to easily manage growth requirements. “SMP machines must be bought in a fixed number of processors,” says Cheok. “If I have 32 processors, and need 33, I cannot just buy the 33rd one, I have to buy another 32-processor box.”

For Linux clusters, processors can be increased one at a time where a person only needs to “plug in the node, and reconfigure the system,” says Cheok.

In terms of delivered performance, Linux clusters tend to achieve a smaller fraction of their theoretical peak than SMP machines, but CLC has still attained fairly high efficiency. “Our cluster’s performance is 14.1 gigaFLOPS, which is not earth-shattering as an absolute number, but we are quite proud of the fact that this is a high percentage (63 percent) of the theoretical peak of 22.4 gigaFLOPS,” says Chow.
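Chow’s efficiency figure is simply the ratio of sustained to theoretical peak performance; a quick sketch using the cluster’s published numbers:

```python
# Cluster efficiency: sustained performance as a share of theoretical peak,
# using CLC's figures (14.1 of 22.4 gigaFLOPS).
measured_gflops = 14.1   # sustained result
peak_gflops = 22.4       # theoretical peak of the 17-node cluster

efficiency = measured_gflops / peak_gflops
print(f"{efficiency:.0%} of theoretical peak")  # → 63% of theoretical peak
```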

Other advantages include high availability, because each node can function independently of the others. “If two nodes go down, you just shut them down and reboot,” says Cheok, who also notes that as an open-source system, Linux benefits from a great deal of shared knowledge, and systems can be modified and customized for specific needs.

However, a Linux cluster poses its own problems for IHPC. One is the additional skill required to break a computational job into smaller pieces to exploit parallelism across the cluster, something the shared-memory SMP architecture handles transparently. “A PhD trained in mechanics will not face much difficulty writing code on SMP machines, but for Linux clusters, you need to do message passing,” says Lam. “So he will need not only knowledge of the hard sciences, but proficient computer science skills as well.”
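The message passing Lam refers to is typically done with a library such as MPI on real clusters. The basic scatter/compute/gather pattern can be sketched on a single machine with Python’s standard multiprocessing module; the ranks, chunking scheme and sum-of-squares workload here are purely illustrative, not IHPC’s actual code:

```python
# Message-passing sketch: a "root" process splits work among workers,
# each worker computes a partial result and sends it back.
from multiprocessing import Process, Queue

def worker(rank, chunk, results):
    # Each "node" works on its own piece of the problem...
    partial = sum(x * x for x in chunk)
    # ...then sends its partial result back to the root.
    results.put((rank, partial))

def parallel_sum_of_squares(data, n_workers=4):
    results = Queue()
    chunks = [data[i::n_workers] for i in range(n_workers)]  # scatter
    procs = [Process(target=worker, args=(r, chunks[r], results))
             for r in range(n_workers)]
    for p in procs:
        p.start()
    total = sum(results.get()[1] for _ in procs)             # gather
    for p in procs:
        p.join()
    return total

if __name__ == "__main__":
    print(parallel_sum_of_squares(list(range(1000))))  # → 332833500
```

On a real cluster each worker would run on a separate machine, and the queue would be replaced by network sends and receives (for example, MPI’s send and receive calls).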

Another problem is the lack of software. “A lot of third-party software, especially for virtual product development, has already been built for the SMP environment and is being ported to Linux,” says Lam. “However, that takes time, and we cannot shut down operations just to experiment with new things.”

In fact, IHPC is still upgrading its SMP resources and has plans for a 1-teraFLOP system.

High growth

Still, IHPC expects Linux to lead the way. It has configured its 100 Linux boxes into two clusters mounted on three racks. One rack can take up to 40 nodes, each comprising a single Pentium III processor with 1GB of RAM. Implemented in April this year, the Linux clusters are rapidly approaching maximum utilization even as the economy takes a downturn.

“High performance computing allows companies to develop products faster with more accuracy,” says Lam. “So, in a recession, you want to get things done quickly and accurately, and the way to do it is through simulation.”

CLC also expects to grow its cluster rapidly. “We firmly believe that clusters of commodity PCs will claim more and more of the high performance computing market in the future,” says Chow. “And this will continue to drive down price.”

The center is planning a major hardware upgrade within the next six to 12 months that will at least quadruple its computing performance. “We are closely following developments in microprocessor technology, and have developed our own set of benchmarks for evaluating machine performance,” says Chow.

“As a research institute, we have to push state-of-the-art technology like Linux,” says IHPC’s Lam. “We don’t want to keep solving problems and working with companies if they become routine.”

“If we are just comfortable with SMP, in three to four years, we may find ourselves irrelevant,” Lam adds.

A Matter of Choice

Hardware vendors, at least most of them, are hedging their bets. Hewlett-Packard Company, for one, sells a “platform of choice” strategy, meaning HP has machines that can run Windows, Linux and UNIX (specifically HP’s flavour of UNIX, HP-UX). “What we are saying within that strategy is that we don’t see a single OS that can meet all situations and all needs; any one of the three, in the right situation, can be the most cost-effective OS to deploy,” says Ross Templeton, manager, Solution and Software Programs, Unix Systems, Business Customer Organisation, HP Asia Pacific.

Templeton concedes that Linux is a highly suitable environment for certain kinds of applications, and for development. “But when you come to deploying applications, assuming you need a highly robust, highly scalable mission-critical environment, you run it on HP-UX,” says Templeton. “We make it very easy for customers to port and migrate what they’ve developed on Linux to run on HP-UX.”

So fret not. There’s a place for every OS, after all.