In this special guest feature, Jim D’Arrezo, CEO of Condusiv, discusses how the massive adoption of “big data” and “fast data” as essential parts of retail operations has revolutionized the industry, but has also driven up IT spending, putting stress on retailers’ already-thin margins. Jim earned his BA from Johns Hopkins University and an MBA from Fordham University before embarking on a long and distinguished career in high technology, with an impressive list of stops that includes serving on the team that introduced the IBM Personal Computer, joining Compaq Computer as an original corporate officer in the 1980s, and later serving as president and COO of Radiant Logic.
It’s estimated that by 2025, the amount of data most enterprises need to manage will have increased more than 50-fold[1]. With that increase, of course, will come a massive shift in how businesses use that data, as many enterprises are now transitioning from “big data” to “fast data.” In other words, the focus shifts from having tons of data available to being able to use that data rapidly, accurately and for a host of applications in daily operations.
But, while fast data has opened up many new avenues for making a profit, it’s also resulted in serious hikes in IT spending that threaten to cut into those profits in many industries. In fact, big data performance issues are estimated to drive up IT spending by hundreds of millions of dollars over the next few years, with retail spending leading the pack.
A decade ago, omnichannel retail – providing a seamless customer shopping experience both in-store and online – was practically unheard of, if not impossible. Now, it’s a necessity for almost any retailer; omnichannel customers may account for only 7% of customers in the US, but they make 27% of retail purchases[2]. Keeping them satisfied requires a complex network of fast, accurate data: retailers need to know exactly where their products are at any given time and must integrate data from various sources virtually instantly. And while the average accuracy of retail inventory is only 65%, omnichannel fulfillment requires closer to 95% accuracy for a seamless experience[3].
Returns alone are enough to bog down virtually any retailer; the return rate for online purchases nears 30%[4], and returns require serious amounts of data and processing power as retailers use fast data to integrate e-commerce systems with behind-the-scenes logistics. This is in addition to the huge profit loss that returns traditionally impose on businesses, leaving them with what are often razor-thin margins.
It’s clear that the need for high-performing big data has never been bigger, but where does that leave smaller businesses and those without the computing power to handle such operations?
As more and more businesses make the shift to fast data, they find themselves needing to make upgrades to their infrastructure to accommodate the sheer amount of data and processing power necessary. But as they do, it’s also becoming clear that spending money on hardware upgrades isn’t always the answer – and often ends up being a waste of that money, threatening the profits they’ve managed to make in spite of high return rates and increasing fraud and chargebacks.
Instead, it’s evident that software solutions are necessary. Increasing data speeds via software saves retailers time and energy, as well as precious dollars, that would otherwise be spent on hardware upgrades. Speeding up processing performance also means less waiting for customers, resulting in fewer complaints and happier customers, as well as less downtime for retailers – and less downtime means greater profits.
How Software Can Improve Storage Performance
So how do you deal with these performance issues and get systems to rapidly handle the massive amounts of data retail calls for? You may be asking: why not just buy and install newer, faster storage and compute power? If you have the money to spend, this can be a good temporary solution. But you’ll still have network pipeline challenges, as there will simply be too much data for your system to handle. It’s no different than being stuck in traffic simply because the road you’re on isn’t built to handle the number of cars taking it.
Three main issues cause I/O bottlenecks: your data pipelines, as mentioned, but also the sheer amount of non-application I/O overhead, as well as file system overhead. Each of these alone can degrade data performance by 30 to 50%; combined, they spell trouble for virtually any hardware setup. The fact is, much of the I/O overhead is caused by the software running your systems and applications.
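To see why three bottlenecks of 30 to 50% each are so damaging in combination, here is a minimal back-of-the-envelope sketch. It is purely illustrative: the 30–50% figures are the estimates above, and the assumption that the three degradations stack multiplicatively and independently is mine, not a measurement.

```python
def combined_throughput(degradations):
    """Return the fraction of baseline throughput remaining after
    applying each fractional degradation in sequence (assumed
    independent and multiplicative)."""
    remaining = 1.0
    for d in degradations:
        remaining *= (1.0 - d)
    return remaining

# Best case: each of the three bottlenecks costs 30%.
best = combined_throughput([0.30, 0.30, 0.30])
# Worst case: each costs 50%.
worst = combined_throughput([0.50, 0.50, 0.50])

print(f"best case:  {best:.1%} of baseline throughput")
print(f"worst case: {worst:.1%} of baseline throughput")
```

Under these assumptions, even the best case leaves roughly a third of baseline throughput, and the worst case leaves about an eighth – which is why throwing faster hardware at the problem only moves the ceiling rather than removing the overhead.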
So, what do you do to deal with the oncoming flood of data as an IT department head or system admin? Since software is causing much of the problem, look to software to fix it before rushing out to purchase new hardware. There are solutions that address storage performance at the operating system, file system and application levels.
Certainly, new hardware can buy improved performance for a time, but it will soon be bogged down by denser, faster data and more demanding applications, as has been shown to happen time and time again. When that happens, having the right software solutions at your disposal will get you back up and running at much better speeds.
1. Kolsky, Esteban, “What to do with the data? The evolution of data platforms in a post big data world,” ZDNet, September 13, 2018.
2. Garcia, Krista, “Omnichannel Investments Paying Off,” eMarketer Retail, February 22, 2018.
3. Thau, Barbara, “Is The ‘RFID Retail Revolution’ Finally Here? A Macy’s Case Study,” Forbes, May 15, 2017.
4. Reagan, Courtney, “A $260 billion ‘ticking time bomb’: The costly business of retail returns,” CNBC, December 16, 2016.