In this special guest feature, Geoff Tudor, VP and GM of Vizion.ai, offers five key business issues enterprises are facing and how we can resolve them. Geoff has over 22 years of experience in storage, broadband, and networking. Previously, as Chief Cloud Strategist at Hewlett Packard Enterprise, he led CxO engagements for Fortune 100 private cloud opportunities, driving 10X growth to over $1B in revenues and positioning HPE as the #1 private cloud infrastructure supplier globally. Geoff holds an MBA from The University of Texas at Austin and a BA from Tulane University, and is a patent-holder in satellite communications. He resides in Austin, TX with his wife and two children.
Data analytics has established itself as the lifeblood of organizations. The sky seems to be the limit for a host of new applications, including predictive maintenance, fraud prevention, self-driving cars, face- and object-recognition security cameras, and augmented reality. The problem is that 85% of Big Data analytics and Machine Learning (ML) projects fail. Given the years enterprises have invested in analytics platforms, why is that? Largely because enterprises increasingly feel hemmed in by data sprawl and by a technology they thought would unshackle their ability to derive actionable insights from the data at hand. What’s going wrong, and what can we do to put it right?
Within the next decade, Artificial Intelligence (AI) is set to drive $15.7 trillion of GDP growth, as data analytics matures and businesses benefit from insights they didn’t think possible only a decade ago. The majority of data is now produced by machines, not people, powering everything from your toothbrush to critical scientific analysis in space. Suddenly, it seems, AI is everywhere. But the promise of AI, and of the data analytics it enables, is falling flat.
That’s because about 85% of all corporate data analytics and Machine Learning (ML) projects fail, according to Gartner, whether through a lack of human skills, operational resources, or production planning. And not every business can afford to hire a chief data scientist who commands a quarter-of-a-million-dollar salary, let alone compete for one while such specialists are in short supply.
Until now, AI and ML-based systems, and the data they produce, have been primarily the preserve of the data scientist. Businesses rely on data scientists to plan the analysis of the data and to construct the ML model required to get the results, and that is a very specialized skill-set. But what if AI and ML could be simplified for everyone, so that business analysts as well as data scientists could discover the results they need? Achieving that would also deliver data insights to internal customers and partners, because organizations could recruit from a far deeper pool of candidates.
So, how does the average enterprise bridge the gap between promise and reality, and get ahead with data analytics, when the odds of failure seem so stacked? In short, the operationalization of AI and the ML models it uses needs to be made simpler. That way, everyone can access the value of AI and ML without encountering any of the complexity that has derailed so many projects before now.
Here are five key business issues enterprises are facing, and how we can resolve them.
- Complexity. Open source software frameworks are everywhere, providing enterprises with all of the ‘doorknobs’ they need to open the way to the data analysis results they want. But that upside has an immediate downside: the data fabric of these tools – from data streaming through frameworks and clusters to ML models – is getting far too complex. If AI is to deliver on its promise, it needs to be simplified; otherwise it will not reach the critical mass analysts are calling for. We need one simple user interface in which users can select the connectors, dashboards, and applications to public, private, or hybrid cloud services to automatically import data. Then, we need to be able to leverage open source solutions such as Elasticsearch and log analytics without needing to manage nodes, shards, replicas, tiering, and other costly deployment details (a sketch of what those details look like follows this list).
- Cost. We may love it, but let’s say it aloud: Elasticsearch, the most popular data search engine, needn’t cost as much as it does. Businesses that are starting their data analytics journey need the TCO barrier lowered. And if you’re a larger enterprise using Splunk or Sumo Logic, you already understand the cost and complexity involved in managing Elasticsearch. Open source technologies are great, but managing them keeps pushing the ongoing price tag up.
- Storage. Another facet of data analytics that pushes costs up is storage. Typically, as an enterprise service grows, databases increase in size, more databases are created, and more data is ingested and managed – all of which requires additional block storage, and yet more money. If we can reduce data redundancy here, all the better.
- Capacity. Enterprises should no longer have to pay high cloud bills for infrastructure they simply aren’t using. Yet with the average hosted Elasticsearch service, enterprises must buy the fixed amount of infrastructure they plan to use, and only the software is provided as a managed service. They are, in effect, committing to a minimum fixed cost that could be thousands of dollars per month, whether 1% or 95% of the capacity is used. Even then, if demand peaks higher than anticipated (and that happens a lot if your business is doing well), the organization has little choice but to buy even more infrastructure.
- The needle in the haystack. All of this complexity means that finding the data you need to take advantage of real-time, emerging opportunities is truly challenging. That’s a product of the way analytics has grown up: as new functionality became available, enterprises adopted it. The problem is that, over time, enterprises have layered on so many systems and so much software that finding that elusive single source of truth has become painfully slow (the second sketch below shows how simple the query itself ought to be).
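To make the “deployment details” point from the complexity item concrete, here is a minimal sketch of what a team must decide for itself on a self-managed cluster. It assumes the official Python client (elasticsearch 8.x), a local cluster, and a hypothetical index name `logs-app`; the specific settings values are illustrative, not recommendations.

```python
# Minimal sketch of self-managed Elasticsearch setup, assuming the
# official Python client (elasticsearch 8.x) and a local cluster.
# The index name "logs-app" and all settings values are illustrative.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

# Every one of these knobs is a capacity and cost decision the operator
# owns: too few shards limits parallelism, too many wastes memory, and
# each replica doubles the storage footprint of the data it copies.
es.indices.create(
    index="logs-app",
    settings={
        "number_of_shards": 3,      # fixed at creation; changing it later means reindexing
        "number_of_replicas": 1,    # redundancy vs. storage-cost trade-off
        "refresh_interval": "30s",  # slower refresh = cheaper ingest, staler search
    },
)
```

A managed, pay-per-use service would make all three of those decisions (and the tiering and node sizing behind them) disappear from the user’s view.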
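By contrast, the “needle in the haystack” query itself is trivial once the data sits in one searchable place. Another hedged sketch, using the same client and hypothetical `logs-app` index, and assuming the ingested documents carry `message` and `@timestamp` fields:

```python
# Sketch: find recent error events across ingested logs. The field names
# ("message", "@timestamp") are assumptions about the ingested documents.
from elasticsearch import Elasticsearch

es = Elasticsearch("http://localhost:9200")

resp = es.search(
    index="logs-app",
    query={
        "bool": {
            "must": [{"match": {"message": "error"}}],
            "filter": [{"range": {"@timestamp": {"gte": "now-15m"}}}],
        }
    },
    size=10,
)
for hit in resp["hits"]["hits"]:
    print(hit["_source"]["message"])
```

The hard part was never writing this query; it was operating the layers underneath it.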
How do we solve all of these issues? Older technology is not driving but draining the enterprise, failing to deliver on the initial promise of analytics. New technology is emerging, however, that strips those layers away, simplifying data analytics, reducing the cost, and delivering in an instant the answers businesses need to be successful. So why not make it easy to consume data analytics using the SaaS model? Why not indeed.
At the end of the day, all that matters is the data, not layers and layers of systems and software that fuel complexity and cost. We think data analytics should be consumable as a service, where the customer pays only for exactly what they use. It needs to deliver all of the varied functionality of open source, while using a unifying underlying data fabric that spans multiple analytics and ML applications.
Most of all, it should eliminate the do-it-yourself hassles and gripes that over time have set data analytics on the wrong course toward complexity and chaos. In its place, we see a future where using data analytics – and supercharging your business to outperform and out-compete – is as simple as plugging in to recharge your toothbrush.