With Chips Scarce, Now is the Time for a Data Management Redo

In this special guest feature, Nathan Wilson, Channel Marketing Manager at Redstor, outlines why the current chip shortage presents a great time to review your data management strategy. Nathan is responsible for developing, driving and implementing channel marketing activity for the company and its channel partner community. He has been with the company for nearly nine years, and is based in the UK.

One of the commodities hit hardest by the ongoing global supply chain disruptions is the semiconductor. Shortages are expected to continue into 2023 as supplies fail to keep up with the demands of a smarter, more connected world.

Few industries do not rely on chips these days. Everything from cars and smartphones to more mundane items like washing machines and light bulbs is being digitized and incorporated into the worldwide information network. This in turn is driving demand for more traditional data-driven devices like servers and network switches. Shipments of new chips are already being delayed by weeks, if not months, and this will continue to weigh on an economy that has become increasingly dependent on the smooth flow of data.

To counter this trend, the enterprise must do a better job of streamlining data flows so they do not overwhelm back-end hardware systems. Better data management has already dramatically lowered costs and improved service delivery, particularly in highly complex environments. To continue that success, and perhaps alleviate disruptions on the consumer end until the chip shortage eases, organizations will have to move from piecemeal improvements in data management to more fundamental, strategic enhancements.

There are key challenges to keep in mind when it comes to hardware-based data management and protection. 

Delayed upgrades

The hardware lifecycle usually runs between three and five years. While many organizations have already begun long-term shifts in their data management strategies, the recent supply chain crunch has likely upended those plans by creating a more immediate need for better data management. Meanwhile, the same supply issues are lengthening the wait list for critical components, delaying the point at which effective management and protection can be implemented.

This leaves most organizations with only two courses of action: spend more to maintain and/or retrofit existing hardware to meet today’s challenges, or renew subscriptions on legacy software. Both measures deliver limited results because they force aging platforms to handle modern data loads they were never designed to accommodate.

Singular failure

Another problem with most legacy architectures is that they rely on server-based backups that introduce single points of failure in the data chain. Backup is vital, of course, but the cascading effect of a single failure cannot be overstated.

Cloud-based backup has grown in popularity in recent years, in part because of its greater resiliency. Yet many organizations that have embraced the cloud still rely on traditional architectures built around removable storage hardware, mostly tape. With new hardware on back-order, these solutions remain single points of failure in the event of data overload, operator error, hardware failure or the increasingly common incidence of malware. Malware in particular poses a growing threat to hardware-dependent approaches to data management, as cyber-criminals evolve malicious code to target network-attached backups and increase the likelihood of a ransom payment.

Capacity constraints

Organizations that house data operations in their own data centers run the constant risk of running out of storage. This increases the need not just to manage available capacity but to plan for continued growth, which again is being hampered by the short supply of new hardware.

This situation is becoming more untenable by the day given the dramatic acceleration of data volumes. Current estimates point to a worldwide data load of 180 zettabytes by 2025, which means any limits on future storage will act as a drag on competitiveness and could even hamper the entire business model.

With the expansion of physical storage constrained, IT departments must devote greater resources to mundane, time-consuming data management tasks: finding and deleting duplicate and unnecessary data, placing controls on the growth of new data, and archiving data to low-cost, high-capacity storage both on-premises and in the cloud. All the while, additional storage capacity comes at a premium right now, and organizations must still ensure that adequate protection policies accompany data wherever it goes.
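
To make the first of those housekeeping tasks concrete, here is a minimal sketch, in Python, of how an IT team might surface duplicate files by content hash before deciding what to delete. The scan root is a hypothetical path, and the script only reports candidates rather than removing anything.

```python
# A minimal sketch, assuming a hypothetical scan root; it reports duplicate
# candidates and deletes nothing, so results can be reviewed before any cleanup.
import hashlib
from collections import defaultdict
from pathlib import Path


def file_digest(path: Path, chunk_size: int = 1 << 20) -> str:
    """Hash a file in fixed-size chunks so large files never need to fit in memory."""
    digest = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(chunk_size), b""):
            digest.update(chunk)
    return digest.hexdigest()


def find_duplicates(root: str) -> dict:
    """Group every file under `root` by content hash; groups with >1 entry are duplicates."""
    groups = defaultdict(list)
    for path in Path(root).rglob("*"):
        if path.is_file():
            groups[file_digest(path)].append(path)
    return {h: paths for h, paths in groups.items() if len(paths) > 1}


if __name__ == "__main__":
    for h, paths in find_duplicates("/data/fileshare").items():  # hypothetical path
        print(f"{h[:12]}: {len(paths)} identical copies")
        for p in paths:
            print(f"  {p}")
```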

Modern challenges, modern solution 

An intelligent, serverless, cloud-native suite of services is the best way to deliver the levels of management and protection that data-driven enterprises need in today’s business environment. Not only does this model protect data within minutes of its creation, it can be implemented without the need for new hardware.

With this approach, enterprises will enable a 21st-century architecture capable of delivering a wide range of benefits, including:

  • Cloud-first deployment models that eliminate direct dependence on hardware
  • A streamed, on-demand recovery process with direct serverless access to data
  • Resource provisioning times of 15 minutes or less
  • A built-in archive to instantly expand the capacity of legacy primary storage
  • Protection for all data, including data generated by SaaS environments such as M365 and Google Workspace, as well as data slated for migration to the cloud
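
As a rough illustration of the cloud-first model described above, and not a depiction of Redstor’s actual service, the following Python sketch copies a local file share directly to S3-compatible object storage, with no backup server or new hardware in the path. The bucket name, key prefix and local path are assumptions made for the example, and boto3 stands in for any S3-compatible client.

```python
# A minimal cloud-first backup sketch: bucket, prefix and local path are hypothetical,
# and boto3 is used only as a generic S3-compatible object storage client.
import boto3
from pathlib import Path


def backup_directory(root: str, bucket: str, prefix: str) -> None:
    """Copy every file under `root` to object storage, mirroring the local layout."""
    s3 = boto3.client("s3")
    base = Path(root)
    for path in base.rglob("*"):
        if path.is_file():
            key = f"{prefix}/{path.relative_to(base).as_posix()}"
            s3.upload_file(str(path), bucket, key)
            print(f"backed up {path} -> s3://{bucket}/{key}")


if __name__ == "__main__":
    backup_directory("/srv/fileshare", "example-backup-bucket", "daily")
```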

Best of all, businesses do not have to wait months to create this environment. Complete end-to-end online onboarding takes only a few minutes, and systems can be up and running in as little as a day, with no hardware requirements or upfront costs. Additional savings come from cloud-ready archiving that extends the capacity of existing hardware.

