Companies like EMC that build Flash into their storage arrays are missing a trick, says David Flynn
LONDON — The real advantage of high-density NAND Flash lies less in its raw storage capacity than in its ability to accelerate application workloads, according to Flash storage vendor Fusion-io.
The company’s chief executive, David Flynn, said that companies like EMC are approaching the problem from the wrong direction, integrating Flash into their storage arrays to speed up the read/write process, rather than harnessing its memory capabilities.
“The thing that has been the choke point or the constraint on how much useful work you can get from a server hasn’t been the amount of processing, but the amount of data that you can feed it,” said Flynn. “RAM is a way to feed it data, and disks are a way to feed it data, but RAM is too small and disks are too slow, so either way you can’t get large quantities of data into the processor fast.”
“I’m not saying it gets rid of your disk drives or memory, it allows those to be optimised – one for lots of capacity and the other for performance,” said Flynn. “So you still have RAM, you just don’t need as much of it. You still have disks, you just don’t need as many spindles.”
Flynn said the power is in being able to purchase performance separately from capacity, and scale the two independently. With today’s storage arrays, improving performance means buying lots more mechanical disks, and you get capacity even if you didn’t ask for it, he said. Satisfying those needs separately is cheaper, because you don’t end up overbuying capacity in order to supply performance.
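Flynn's coupling argument lends itself to a back-of-envelope calculation. The sketch below is illustrative only – the per-spindle figures are assumptions, not numbers from the article – but it shows how buying mechanical disks for performance forces you to overbuy capacity:

```python
import math

# Assumed figures for a 15k RPM enterprise drive (illustrative, not sourced):
DISK_IOPS = 180      # random IOPS per spindle
DISK_CAP_TB = 0.6    # capacity per spindle, in TB

def disks_needed(target_iops, target_tb):
    """Spindles required when performance and capacity are coupled.

    With a mechanical array, the same disks must satisfy BOTH constraints,
    so you buy whichever count is larger.
    """
    by_iops = math.ceil(target_iops / DISK_IOPS)
    by_capacity = math.ceil(target_tb / DISK_CAP_TB)
    return max(by_iops, by_capacity)

# A workload needing 50,000 IOPS but only 10 TB of data:
n = disks_needed(target_iops=50_000, target_tb=10)
excess_tb = n * DISK_CAP_TB - 10
print(f"{n} spindles, {excess_tb:.1f} TB of capacity you didn't ask for")
```

Under these assumed numbers, the performance requirement, not the capacity requirement, dictates the disk count, and the difference is the capacity overbuy Flynn describes.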
“The biggest limiting factor on how many VMs you can put on a server is how much memory you put in it. If you had a huge amount of memory, you wouldn’t care about your storage performance,” said Flynn. “This gives you that memory capacity, so you don’t have to worry about the performance in your storage and can size it solely for capacity.”
By giving companies the ability to add performance from outside the SAN, where vendors control the markup, Flash lets them scale more cost-effectively and get more value from the software licences they buy, Flynn added.
“This isn’t just about the fundamental change of the medium, it’s about the change of the market to an open market,” he said. “The open models win.”
EMC is now also working on ‘Project Thunder’, which it describes as a purpose-built, low-latency, server-networked, Flash-based appliance that is scalable and shareable. “Project Thunder will deliver I/Os measured in millions and timed in microseconds,” EMC said.