Five-year plan: 8 problems IT must solve

There’s a reason that so many businesses create five-year plans: If they’re reasonable, they’re achievable. Setting goals within that timeframe allows room for prioritization and opportunities to deal with the unexpected.

Of course, it’s a little harder to develop a five-year plan for the entire IT industry. But the best place to begin is to consider the needs of IT people who map out the plans and get the job done every day.

That’s what we’ve attempted to do in creating this agenda of eight needs IT must attend to over the next five years. You won’t find a lot of lofty talk about cloud computing or exotic technologies in the lab — just a wish list for solutions to the big problems that get in the way of IT doing its job.

We don’t expect everyone — or even a majority of people who read this article — to agree with our picks. Each person has his or her own axes to grind. We hope to hear about yours in the comments to this article.

IT fix No. 1: A solution to the desktop
Organizations large and small have been dealing with the increasingly creaky model of fat-client desktops since the dawn of the PC age. Admins still rove from desk to desk to upgrade and troubleshoot, and despite endpoint security advances, each desktop or laptop remains a big, fat target for hackers.

A variety of potential replacement technologies have made waves, but none with the all-encompassing feature set required of a no-brainer choice.

Take thin client computing. It fits some usage models very well, such as call centers and data entry applications, but most knowledge workers won’t tolerate what amounts to a shared instance of Windows. Other forms of centralized client computing, such as ClearCube’s blade workstations, are in the same boat, meeting 100 percent of the requirements for a few markets and 0 percent for others.

The solution for this problem may come in the form of VDI (virtual desktop infrastructure), a slicker and higher-performance thin client model, an amalgam of the two, or something completely different. Bottom line: We need a new paradigm that extends the PC tradition of personal empowerment, yet sustains centralized IT control, security, and management.

Ultimately, portability should be part of the package, too. That way, users can take their desktop environments with them and work without a connection; when that connection is restored, so is IT control. Will some kind of secure, client-side virtual machine be the solution? Will users bring their own laptops or tablets and run that “business VM”? Maybe. But nothing like that is close to getting widespread traction yet.

IT fix No. 2: Disk-free virtualization servers
If you were to go back in time just 10 years and tell people that a 64-bit, 48-core server with 512GB of RAM would be available for relatively cheap in 2010, they’d look at you funny, then wonder aloud about the possible uses of such a beast. Almost overwhelmingly, the answer to that question today is virtualization.

There’s no doubt that virtualization is the path of IT for the foreseeable future. An essential part of that vision is huge multicore servers, each housing dozens of virtual servers, but the default configuration of those servers is nowhere near purpose-built for virtualization. It’s time to change the defaults.
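
To put numbers to “dozens of virtual servers,” here’s a back-of-the-envelope consolidation sketch; the per-VM sizing and the 4:1 vCPU overcommit ratio are illustrative assumptions on our part, not vendor guidance.

```python
# Rough consolidation math for the 48-core, 512GB host described above.
# Per-VM sizing and the overcommit ratio are assumptions for illustration.

HOST_CORES = 48
HOST_RAM_GB = 512
VCPU_OVERCOMMIT = 4      # assumed 4:1 vCPU-to-physical-core ratio
VM_VCPUS = 4             # assumed allocation per VM
VM_RAM_GB = 16

cpu_bound = (HOST_CORES * VCPU_OVERCOMMIT) // VM_VCPUS  # 192 vCPUs -> 48 VMs
ram_bound = HOST_RAM_GB // VM_RAM_GB                    # 512GB -> 32 VMs

print(f"CPU-bound capacity: {cpu_bound} VMs")
print(f"RAM-bound capacity: {ram_bound} VMs")
print(f"Effective capacity: {min(cpu_bound, ram_bound)} VMs")  # RAM is the ceiling
```

Even with conservative sizing, one box comfortably hosts dozens of servers, which is exactly why the default hardware configuration matters so much.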

Most servers may now be aimed straight at the virtualization market, but they’re still constructed for a single-server role. The additional hardware, heat, power, and size of these boxes don’t do any good in a virtualized environment, and we could easily do away with them. Virtualization hosts need only three items: CPU, RAM, and I/O. Hypervisors can and should boot from internal flash devices or at the very least a 1.8-inch SSD, but the need for physical disk — along with all its cooling and power requirements — can be jettisoned.

A few entries in the server market fit this model to a degree, but they’re all blades meant to reside in the appropriate chassis, all with local disk. In five years, I expect ordering a blade chassis or a server with local disk to be the rarity, while diskless virtualization host servers will be the norm, with virtualization and SANs as common as keyboards.

Bring those servers to the smallest reasonable size possible, then go forth and virtualize. In 10 years, we’ll tell our kids about way back when you could buy a server with an internal hard drive.

IT fix No. 3: Cheap WANs
Far too many remote offices in today’s world remain connected by ancient TDM technology. When dialup ruled the scene, those 1.544Mbps T1 circuits looked huge, but now they’re abysmally slow, yet cost at least as much as they did 15 years ago. There’s no excuse for it.
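
Just how slow becomes obvious with a little arithmetic; the 1GB payload below is an arbitrary example, and the figures ignore protocol overhead.

```python
# Transfer-time comparison between a T1 and the faster circuits discussed here.

FILE_GB = 1  # assumed payload: a 1GB file

def transfer_minutes(file_gb: float, link_mbps: float) -> float:
    bits = file_gb * 8 * 1000**3            # decimal gigabytes to bits
    return bits / (link_mbps * 1000**2) / 60

for name, mbps in [("T1 (1.544Mbps)", 1.544),
                   ("100Mbps fiber", 100),
                   ("1Gbps fiber", 1000)]:
    print(f"{name}: {transfer_minutes(FILE_GB, mbps):.1f} minutes per {FILE_GB}GB")
```

That works out to roughly 86 minutes per gigabyte on a T1 versus about a minute at 100Mbps, an eternity when you’re moving real data between offices.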

If Verizon and others can roll out fiber to the home, they can certainly roll out fiber to the business. Whether your remote offices are in the middle of a city or the middle of the woods, there’s bound to be fiber nearby. Barring that, the strides made in delivering high-speed data circuits over copper in the past decade make that lowly T1 look even older and slower.

The major problem is that carriers have no impetus to move away from the T1 and T3 cash cow. They’ve been milking those circuits for eons and have established them as high-price, highly reliable circuits — and they are. However, the wheel of technology has moved well beyond their capabilities.

In five years, it should cost no more to connect an office in the Michigan suburbs to an office in Virginia with a 100Mbps or 1Gbps pipe riding over a common carrier than it does to set up today’s T1. And these links should be just as reliable as the T1 ever was.

IT fix No. 4: A complete reworking of software licensing
I can probably count on one hand the number of IT professionals and end users who’ve ever read an entire EULA. No doubt software licenses will always be written for the lawyers first and the users second, but the array of licensing schemes used by the huge range of software companies is far too complicated, and those schemes can even interfere with IT’s ability to keep the lights on.

When working to resolve a high-profile problem in software or hardware, there’s nothing quite so frustrating as discovering that the problem is related to licensing, either of the product itself or in some other area that inhibits normal use.

This one may be a little ambitious for a five-year plan, but the cacophony of licensing is deafening at many organizations and needs to be completely overhauled. I’m not going to pretend to have the answers here, but if we can make all our client-server applications run over TCP/IP, we can come up with a common licensing framework that any software development house could use. This was the idea behind products like FLEXlm (now FlexNet Publisher), but it needs to be a freely available service constructed by a consortium of large and small software development companies.

Imagine running a single license server that was responsible for all the commercial software in use, top to bottom. Talk about simple.
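
To make the idea concrete, here’s a minimal sketch of what a universal checkout/checkin service might look like. The product names and seat counts are hypothetical, and a real framework would need authentication, persistence, and a network protocol on top.

```python
# A toy model of a single license server fronting every vendor's entitlements.

class LicenseServer:
    def __init__(self, entitlements: dict):
        self.entitlements = entitlements       # product -> seats purchased
        self.checked_out = {}                  # product -> seats in use

    def checkout(self, product: str) -> bool:
        in_use = self.checked_out.get(product, 0)
        if in_use >= self.entitlements.get(product, 0):
            return False                       # no seats left for this product
        self.checked_out[product] = in_use + 1
        return True

    def checkin(self, product: str) -> None:
        if self.checked_out.get(product, 0) > 0:
            self.checked_out[product] -= 1

# One server, every product, top to bottom.
server = LicenseServer({"cad-suite": 5, "db-engine": 2})
assert server.checkout("cad-suite")
assert server.checkout("db-engine")
server.checkin("db-engine")
```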

IT fix No. 5: The end of the password
The days of the alphanumeric password are already over, but nobody seems to have noticed yet. As you bounce from site to site, application to application, OS to OS, you’ll find a wide variety of password strength requirements. Some are ridiculously lax, like the banking sites that refuse to accept special characters in passwords; others demand passwords so complex that users almost always have to write them down. Both extremes result in the same problem: shamefully low security.
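
A rough entropy calculation shows why both extremes backfire; the lengths and alphabet sizes below are illustrative assumptions.

```python
import math

def entropy_bits(length: int, alphabet: int) -> float:
    """Bits of entropy for a uniformly random password."""
    return length * math.log2(alphabet)

# Lax policy: 8 characters, lowercase letters and digits only (no specials).
print(f"{entropy_bits(8, 36):.0f} bits")   # ~41 bits: readily crackable

# Strict policy: 14 characters from all 94 printable ASCII characters.
print(f"{entropy_bits(14, 94):.0f} bits")  # ~92 bits: strong, but rarely
                                           # memorized without writing it down
```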

There’s also the annoyance of trying to enter strong passwords on mobile devices. With or without a physical keyboard, it can present a significant challenge. No matter how you cut it, passwords are just a bad idea.

But what can replace them? Smart cards and USB keys are great for one network or one device, but the problem is bigger than that. In a world of cloud services, iPads, and the Chrome OS, tokens aren’t the answer. It may be that the only “something you are” as convenient and portable as a password — and that could conceivably be applied across many systems and devices — is biometric authentication. But then every client device would need to be fitted with the required fingerprint or iris scanner.

Biometrics are also problematic from a user standpoint. Although I don’t necessarily share this concern, I’ve heard several people mention that they’d rather not lose a thumb to a villain who’s trying to crack into their bank account. Then there’s the problem that if your biometric signature were ever compromised, you couldn’t just reset it, since it’s, well, attached and reasonably permanent.

Voice recognition, facial recognition, or any other form of recognition will have to supplant the common password eventually — let’s hope it’s sooner rather than later.

IT fix No. 6: Spam
If it were possible to redirect the time and effort poured into antispam and antimalware code over the last 10 years, we’d already have colonies on Mars and probably a new form of renewable energy.

As it stands, however, we’re not much better off than we were five years ago. The volume of spam has stayed fairly consistent, at somewhere between 95 and 98 percent of all email. The number of spam emails that actually reach recipients’ mailboxes may have decreased somewhat, thanks to better filtering techniques and the army of humans employed at antispam companies to flag common spam, but the problem continues unabated.

At these volumes, spam isn’t merely an annoyance — it constitutes a legitimate reduction in available services to an organization, whether that be reduced bandwidth due to inbound spam, increased costs due to additional servers or services required to contend with the deluge, or simply the time lost when legitimate emails wind up buried in a junk mail box or lost forever.

The unfortunate reality is that methods to reduce or eliminate spam have been around for a while, such as strict whitelisting or ISPs charging a small fee per email, but they’re so draconian they would all but destroy the concept of email. We don’t need to throw the baby out with the bathwater, but we can’t keep putting our fingers in the dike and shaking our heads sadly.

A bazillion different “solutions” to this problem revolve around email filtering. For example, greylisting institutes a delay period for unknown senders and causes spam blasts to miss their targets. Meanwhile, good ol’ whitelisting and blacklisting add plenty of manual effort and can cause problems with reliable email delivery. But none of those solutions do anything about the vast herds of spam flitting around the Internet, chewing up bandwidth and computing resources the world over. If they work at all, they merely prevent spam from hitting our inboxes, which is a Band-Aid, not a fix.
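
For the curious, here’s roughly how the greylisting trick works: a minimal sketch with an assumed delay window, not production MTA code.

```python
import time

GREYLIST_DELAY = 300           # seconds an unknown sender must wait (assumed)
seen = {}                      # (ip, sender, recipient) -> time first seen

def smtp_verdict(ip: str, sender: str, recipient: str) -> str:
    """Temporarily reject mail from unseen triplets; accept the retry."""
    triplet = (ip, sender, recipient)
    now = time.time()
    first = seen.setdefault(triplet, now)
    if now - first < GREYLIST_DELAY:
        return "451 temporary failure, try again later"  # real MTAs retry
    return "250 OK"            # spam cannons typically never come back
```

Legitimate mail servers queue and retry after a 451 response; most spamware fires once and moves on, so the blast never lands. Of course, the spam was still generated and transmitted, which is precisely the Band-Aid problem.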

IT fix No. 7: Virtualized application appliances
Installing a new and expensive line-of-business server application shouldn’t require two weeks of training. It should be delivered ready to go, with all the requisite dependencies, patches, and other detritus that commonly accompanies these massive collections of code.

Rather than be saddled with an install DVD and a future filled with hours of watching progress bars scrape their way across a screen, we need a virtual machine that can be imported and fired up immediately. In many cases, those dreary installation procedures are occurring on VMs anyway, so let’s skip the middleman. Instead, designate the virtual machine as the default application delivery mechanism rather than a Windows installer or a tarball.
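
To illustrate how painless that could be, here’s the “import and fire up” workflow sketched with VirtualBox’s command-line tools as a stand-in hypervisor; the appliance file and VM name are hypothetical.

```python
import subprocess

APPLIANCE = "vendor-app-1.0.ova"   # hypothetical vendor-supplied appliance
VM_NAME = "vendor-app"

# Import the packaged VM and boot it; no installer, no progress bars.
subprocess.run(["VBoxManage", "import", APPLIANCE,
                "--vsys", "0", "--vmname", VM_NAME], check=True)
subprocess.run(["VBoxManage", "startvm", VM_NAME, "--type", "headless"],
               check=True)
```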

The time, effort, complexity, and support costs that could be saved by more companies taking this approach are significant. That’s not to say there shouldn’t also be a way to do a standard installation, but as a default, go with the VM.

IT fix No. 8: IPv6
It’s not lost on me that if I’d written this article five years ago, this would definitely have made the list, yet we’re not any closer to widespread IPv6 adoption.

Part of the problem is that we’ve become far too comfortable with our cozy, phone-number-length IPv4 addressing. After all, 192.168.1.100 is much simpler to recognize and remember than 3eff:4960:0:1001::68.

It’s also true that the vast majority of IT organizations have gotten along fine within their internal reserved IP ranges for the past decade or so. The burden of not just a massive renumbering effort but also verifying that every application and service functions properly over IPv6 is more than daunting. It’s basically a nonstarter for all but the biggest IT budgets.
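
Pieces of that verification burden are at least scriptable. Here’s a minimal sketch that checks whether a service answers over IPv6 at all; the hostname and port are placeholders.

```python
import socket

def reachable_over_ipv6(host: str, port: int, timeout: float = 3.0) -> bool:
    """Return True if any of the host's IPv6 addresses accepts a connection."""
    try:
        infos = socket.getaddrinfo(host, port,
                                   socket.AF_INET6, socket.SOCK_STREAM)
    except OSError:
        return False                     # no AAAA record at all
    for family, socktype, proto, _, addr in infos:
        try:
            with socket.socket(family, socktype, proto) as s:
                s.settimeout(timeout)
                s.connect(addr)
                return True
        except OSError:
            continue                     # try the next address
    return False

print(reachable_over_ipv6("app.example.com", 443))  # placeholder service
```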

Thus, the problem with IPv6 is that there’s no perceptible benefit for most shops, but a mountain of effort required to get there. When IT budgets are already tight, that’s just not going to happen.

But those problems may be overshadowed by the larger problem of disappearing IPv4 address space. The shortage may not seem like a big deal at the moment, but addresses are being eaten up at an alarming rate, particularly as China extends Internet service to outlying areas. And of course, there’s the massive number of Internet-connected mobile devices.

If there’s any hope of making a real push for IPv6 throughout the computing world, it has to happen soon. Every day we remain fat, dumb, and happy with our IANA private ranges and port address translation at the firewall is another day spent ensconced in the inebriation of IPv4. That simply cannot scale.
