12 crackpot tech ideas that may transform IT

Technologies that push the envelope of the plausible capture our curiosity almost as quickly as the would-be crackpots who dare to concoct them become targets of our derision.

Tinkering along the fringe of possibility, hoping to solve the impossible or apply another’s discovery to a real-world problem, these free thinkers navigate a razor-thin edge between crackpot and visionary. They transform our suspicion into admiration when their ideas are authenticated with technical advances that reshape how we view and interact with the world.

IT is no stranger to this spirit of experimentation. An industry in constant flux, IT is pushed forward by innovative ideas that yield advantage when applied to real-world scenarios. Sure, not every revolutionary proposal sets the IT world afire. But for every dozen paper-based storage clunkers, there’s an ARPAnet to rewrite IT history — itself a timeline of what-were-they-thinkings and who-would-have-thoughts.

It’s in that tenor that we take a level-headed look at 12 technologies that have a history of raising eyebrows and suspicions. We assess the potential each has for transforming the future of the enterprise.

1. Superconducting computing How about petaflops performance to keep that enterprise really humming? Superconducting circuits — which have zero electrical resistance and therefore dissipate virtually no heat — would certainly free you from any thermal limits on clock frequencies. But who has the funds to cool these circuits with liquid helium as required? That is, of course, assuming someone comes up with the extremely complex schemes necessary to interface this circuitry with the room-temperature components of an operable computer.

Of all the technologies proposed in the past 50 years, superconducting computing stands out as psychoceramic. IBM’s program, started in the late 1960s, was cancelled by the early 1980s, and Japan’s Ministry of International Trade and Industry’s attempt to develop a superconducting mainframe was dropped in the mid-1990s. Both resulted in clock frequencies of only a few gigahertz.

Yet the dream persists in the form of the HTMT (Hybrid Technology Multi-Threaded) program, which takes advantage of superconducting RSFQ (rapid single flux quantum) logic and should eventually scale to about 100GHz. Its proposed NUMA (non-uniform memory access) architecture uses superconducting processors and data buffers, cryo-SRAM (static RAM) semiconductor buffers, semiconductor DRAM main memory, and optical holographic storage in its quest for petaflops performance. Its chief obstacle? A clock cycle that will be shorter than the time it takes to transmit a signal across an entire chip.
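The scale of that obstacle is easy to check with back-of-envelope arithmetic: even at the speed of light in vacuum, a signal covers only about 3mm during one 100GHz clock cycle, less than the width of a typical chip (and on-chip signals propagate slower still). A quick illustrative calculation:

```python
# Back-of-envelope check: how far can a signal travel in one clock cycle?
# Values are illustrative; on-chip propagation is slower than c in vacuum.
C = 3.0e8            # speed of light, m/s (upper bound on signal speed)
CLOCK_HZ = 100e9     # HTMT's projected ~100GHz clock

cycle_time = 1.0 / CLOCK_HZ               # seconds per cycle: 10 picoseconds
max_distance_mm = C * cycle_time * 1000   # millimetres covered per cycle

print(f"One cycle lasts {cycle_time * 1e12:.0f} ps")
print(f"A signal covers at most {max_distance_mm:.1f} mm per cycle")
# A typical die is 10-20 mm across, so a signal cannot cross the whole
# chip within a single 100GHz cycle -- the obstacle the text describes.
```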

So, unless you’re the National Security Agency, which has asked for US$400 million to build an HTMT-based prototype, don’t hold your breath waiting for superconducting’s benefits. In fact, the expected long-term impact of superconducting on the enterprise remains within range of absolute zero.

2. Solid-state drives Solid-state storage devices — both RAM-based and NAND (Not And) flash-based — have long held promise as worthwhile alternatives to conventional disk drives, despite the healthy dose of skepticism they inspire. By no means new, these devices will be integrated into IT in earnest only when the technologies fulfill their potential and go mainstream.

Volatility and cost have been the Achilles’ heel of external RAM-based devices for the past decade. Most come equipped with standard DIMMs, batteries, and possibly hard drives, all connected to a SCSI bus. And the more advanced models can run without power long enough to move data residing on the RAM to the internal disks, ensuring nothing is lost. Extremely expensive, the devices promise speed advantages that, until recently, were losing ground to faster SCSI and SAS drives. Recent advances, however, suggest RAM-based storage devices may pay off eventually.

As for flash-based solid-state devices, early problems — such as slow write speeds and a finite number of writes per sector — persist. Advances in flash technology, though, have reduced these negatives. NAND-based devices are now being introduced in sizes that make them feasible for use in high-end laptops and, presumably, servers. Samsung’s latest offerings include 32GB and 64GB SSD (solid-state disk) drives with IDE and SATA interfaces. At $1,800 for the 32GB version, they’re certainly not cheap, but as volume increases, pricing will come down. These drives aren’t nearly the speed demons their RAM-based counterparts are, but their read latency is significantly faster than that of standard hard drives.
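The "finite number of writes per sector" problem mentioned above is what wear-leveling firmware mitigates: the controller spreads erase/write cycles across all blocks rather than hammering a few hot ones. A toy sketch of the idea (purely illustrative, not any vendor's algorithm):

```python
# Toy wear-leveling allocator: always write to the least-erased block.
# Purely illustrative -- real SSD controllers also remap logical
# addresses, track bad blocks, and garbage-collect.

class WearLeveler:
    def __init__(self, num_blocks: int):
        self.erase_counts = [0] * num_blocks

    def write(self, data: bytes) -> int:
        """Pick the block with the fewest erases, erase it, and write."""
        block = min(range(len(self.erase_counts)),
                    key=lambda b: self.erase_counts[b])
        self.erase_counts[block] += 1   # each write costs one erase cycle
        return block                    # physical block chosen

wl = WearLeveler(num_blocks=4)
for i in range(8):                      # 8 writes spread over 4 blocks
    wl.write(b"payload")
print(wl.erase_counts)                  # wear spreads evenly: [2, 2, 2, 2]
```

Because no block's erase count ever runs far ahead of the others, the device wears out uniformly instead of losing individual hot sectors early.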

The state of the solid-state art may not be ready for widespread enterprise adoption yet, but it’s certainly closer than skeptics think.

3. Autonomic computing A datacenter with a mind of its own — or more accurately, a brain stem of its own that would regulate the datacenter equivalents of heart rate, body temperature, and so on. That’s the wacky notion IBM proposed when it unveiled its autonomic computing initiative in 2001. Of the initiative’s four pillars, which included self-configuration, self-optimization, and self-protection, it was self-healing — the idea that hardware or software could detect problems and fix itself — that created the most buzz. The idea was that IBM would sprinkle autonomic-computing fairy dust on a host of products, which would then work together to reduce maintenance costs and optimize datacenter utilization without human intervention.
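Stripped of the fairy dust, the self-healing pillar is a control loop: monitor a component, detect a fault, take a corrective action. A bare-bones sketch (all names here are illustrative, not IBM APIs; IBM's own managers followed a fuller monitor-analyze-plan-execute cycle):

```python
# Minimal self-healing control loop: probe components, restart on failure.
# Component names and methods are hypothetical, for illustration only.

class Component:
    def __init__(self, name: str):
        self.name = name
        self.healthy = True
        self.restarts = 0

    def probe(self) -> bool:          # monitor: is the component alive?
        return self.healthy

    def restart(self) -> None:        # execute: the "healing" action
        self.restarts += 1
        self.healthy = True

def heal_once(components):
    """One pass of the loop: detect failed components and fix them."""
    repaired = []
    for c in components:
        if not c.probe():             # analyze: probe failed, plan: restart
            c.restart()
            repaired.append(c.name)
    return repaired

db = Component("db2")
web = Component("websphere")
db.healthy = False                    # simulate a fault
print(heal_once([db, web]))           # only the failed component is touched
```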

Ask IBM today, and it will hotly deny that autonomic computing is dead. Instead it will point to this product enhancement (DB2, WebSphere, Tivoli) or that standard (Web Services Distributed Management, IT Service Management). But look closely, and you’ll note that products such as IBM’s Log and Trace Analyzer have been grandfathered in. How autonomic is that?

The fact is that virtualization has stolen much of the initiative’s value-prop thunder: namely, resource optimization and efficient virtual server management. True, that still involves humans. But would any enterprise really want a datacenter with reptilian rule over itself?

4. DC power The warm, humming bricks that convert AC from the wall into the DC used by electronics are finally drawing some much-deserved attention — from datacenter engineers hoping to save money by wasting less energy. The waste must often be paid for twice: first to power equipment, then to run the air conditioner that removes the heat produced. One solution is to create a central power supply that distributes DC directly to rack-mounted computers. But will cutting out the converters catch on, or is the buzz surrounding DC to the datacenter destined to fizzle?

Researchers at the Department of Energy’s Lawrence Berkeley National Laboratory have built a prototype rack filled with computers that run directly off 380-volt DC. Bill Tschudi, principal investigator at the lab, says that the system uses 15 percent less power than do servers equipped with today’s most efficient power supplies — and that there can be even greater savings when replacing the older models still in use in most enterprises. If the server room requires cooling, as it does everywhere except in northern regions in the winter, the savings can double, because the air-conditioning bill also can be cut by 15 percent.
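The doubling claim follows from simple arithmetic: if cooling consumes roughly as much power as the IT load itself, then a 15 percent cut in server draw trims the air-conditioning bill by about the same fraction, doubling the absolute savings. A rough model (the 100kW load and the 1:1 cooling ratio are assumptions for the example, not figures from the lab):

```python
# Rough model of the DC-distribution savings described above.
# Assumes cooling load scales 1:1 with IT load -- an illustrative
# simplification, not a Lawrence Berkeley measurement.

IT_LOAD_KW = 100.0        # hypothetical server-room draw
SAVINGS_FRACTION = 0.15   # 15 percent less power with 380-volt DC
COOLING_RATIO = 1.0       # watts of cooling per watt of IT load (assumed)

it_saved = IT_LOAD_KW * SAVINGS_FRACTION    # 15 kW saved at the servers
cooling_saved = it_saved * COOLING_RATIO    # 15 kW less heat to remove
total_saved = it_saved + cooling_saved      # 30 kW: the savings double

print(f"IT savings: {it_saved:.0f} kW; with cooling: {total_saved:.0f} kW")
```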

Others are working on bringing additional DC savings to the enterprise. Nextek Power, for instance, is building a system that integrates the tradit