Future-storage solutions must be adaptable and flexible


Business managers used to look at data and ask, “How cheaply can we store this stuff?” Not any more. Now the question is more likely to be, “How fast can we process it and get the analysis back?”

There’s more data, and more pressure on making sense out of it all.

Consider that 90% of all data in existence was created in the past two years. Data is growing at around 65% a year, which means it roughly doubles every 18 months. In other words, we are talking about exabytes and zettabytes of data, with yottabytes not far behind, and for practical purposes the old boundaries on capacity no longer apply.

The vast quantities of data generated by mobile devices, smart sensors, the Internet of Things, connected vehicles and more demand solutions that go beyond adding rows upon rows of last-generation hardware and software to crunch the numbers. Performance, agility and flexibility are now crucial parts of platform architecture.

In 2009, all-flash storage was seen as a niche technology that would stay that way. The economics of disk looked too good to be threatened by another medium.

Seven years later, it is clear that all-flash arrays will replace Tier 1 and Tier 2 storage, and possibly Tier 3 as well. The future of disk lies in archival data.

Fujitsu are there now with SolidFire. Our all-flash array can be customised: in size, by scaling out from four nodes to 100; in capacity, from 20 terabytes to many petabytes; and in speed, from 200,000 IOPS (input/output operations per second) to millions.

With SolidFire and its Quality of Service (QoS) architecture, both SQL (Structured Query Language) and NoSQL (Not only SQL) database workloads run predictably without unnecessary expenditure, because SolidFire lets you build your system to meet your specific capacity and performance demands.

At the same time, SolidFire is adaptable and flexible. For example, its Quality of Service controls make it simple to mix and match almost any workload type within the shared infrastructure while delivering predictable performance to each application.

Administrators can choose to run many instances of one type of workload or run any combination of block storage workloads without compromising performance.
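To make that concrete, here is a minimal sketch of how per-volume QoS floors and ceilings might be set through the SolidFire Element JSON-RPC API. The endpoint, credentials, account ID and the specific IOPS figures below are illustrative assumptions, not recommendations for any particular workload.

```python
import requests

MVIP_URL = "https://storage.example.com/json-rpc/9.0"  # cluster management endpoint (assumed)
AUTH = ("admin", "password")                            # cluster admin credentials (assumed)

def element_call(method, params):
    """Send one JSON-RPC request to the Element API and return its result."""
    payload = {"method": method, "params": params, "id": 1}
    response = requests.post(MVIP_URL, json=payload, auth=AUTH, verify=False)
    response.raise_for_status()
    return response.json()["result"]

# A latency-sensitive SQL database volume: a high guaranteed IOPS floor.
element_call("CreateVolume", {
    "name": "sql-prod-01",
    "accountID": 1,                   # assumed tenant account
    "totalSize": 2 * 1024**4,         # 2 TiB, in bytes
    "enable512e": True,
    "qos": {"minIOPS": 15000, "maxIOPS": 50000, "burstIOPS": 75000},
})

# A bulk NoSQL analytics volume on the same cluster: modest floor, firm ceiling.
element_call("CreateVolume", {
    "name": "nosql-analytics-01",
    "accountID": 1,
    "totalSize": 8 * 1024**4,         # 8 TiB, in bytes
    "enable512e": True,
    "qos": {"minIOPS": 2000, "maxIOPS": 10000, "burstIOPS": 15000},
})
```

Because each volume carries its own minimum, maximum and burst IOPS settings, the two workloads can share the same cluster without one starving the other.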

Scalable databases, including read-heavy and write-heavy instances, can be consolidated onto SolidFire and protected with QoS.

Do you need to create dozens of distributed web application instances using a rapid cloning process, and then double the number of workloads quickly and without affecting the performance and availability of the running instances?

Not a problem.

Or you might want to stage a production database to a test/development environment, while it is running, without slowing the production workload.

Easy as that.
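As a rough illustration of that staging workflow, the sketch below clones a running production volume and then gives the clone its own QoS policy, reusing the element_call() helper from the earlier example. The volume ID and names are hypothetical, and the exact fields returned by CloneVolume are assumed.

```python
# Clone the production database volume while it stays online; the clone is
# created by SolidFire's rapid cloning process rather than a bulk data copy.
clone = element_call("CloneVolume", {
    "volumeID": 42,                   # assumed ID of the production volume
    "name": "sql-prod-01-dev",
})

# Give the test/dev clone its own, lower QoS policy so it cannot contend
# with production traffic for performance.
element_call("ModifyVolume", {
    "volumeID": clone["volumeID"],
    "qos": {"minIOPS": 500, "maxIOPS": 5000, "burstIOPS": 8000},
})
```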

To learn more about how SolidFire is the all-rounder you need for your future storage needs, read the white paper.