Big Data Industry Predictions for 2024

Welcome to insideAI News’s annual technology predictions round-up! The big data industry has significant inertia moving into 2024. In order to give our valued readers a pulse on important new trends leading into next year, we here at insideAI News heard from all our friends across the vendor ecosystem to get their insights, reflections and predictions for what may be coming. We were very encouraged to hear such exciting perspectives. Even if only half actually come true, Big Data in the next year is destined to be quite an exciting ride. Enjoy!

[NOTE: please check back often as we’ll be adding new content to this feature article into March 2024]

Daniel D. Gutierrez – Editor-in-Chief & Resident Data Scientist

Analytics

The landscape of advertising analytics is poised for a seismic shift with the evolution of omni-channel commerce. The traditional silos between online and offline consumer interactions are crumbling, paving the way for a truly omni-channel consumer. While physical/digital walls are falling down across the consumer’s journey, walled gardens and consumer privacy will still loom large, complicating analytics. This growth of the omni-channel consumer will demand a recalibration of marketing measurement models. The traditional digital last-click attribution will make way for a more nuanced approach, recognizing the influence of multiple touchpoints along the customer journey. This shift will bring forth a more accurate representation of the incremental value each channel contributes to creating and converting consumer demand. Privacy concerns will loom large, necessitating a delicate balance between data-driven personalization and respecting user privacy. Striking this equilibrium will be crucial to maintaining consumer trust while harnessing the full potential of omnichannel analytics. The future of advertising analytics in the era of omnichannel e-commerce will be characterized by a convergence of data, a redefinition of attribution, and a delicate dance with privacy. It’s not just a transformation; it’s a revolution in how we understand, interpret, and leverage consumer data for the art and science of advertising. – Skye Frontier – SVP of Growth at Incremental

Multi-modal interfaces for analytics – the winning formula for enterprise collaboration: Analytics is a team sport. And like every sport, we’ll see organizations seek platforms that allow each player to harness their strengths by giving flexibility to various professionals in various roles across the business. This will give rise to multi-modal interfaces that empower every user in the analytics value chain to engage with data intuitively and successfully based on their role – all while increasing collaboration across the decision-making process. The result? A winning formula for every organization. – Suresh Vittal, Chief Product Officer, Alteryx

Artificial Intelligence

AI won’t replace low-code, but enhance it for improved outcomes: For years, low code has given citizen developers the ability to create applications without coding experience. Now ChatGPT has brought to horizon a promise of dramatic productivity gains for writing code. However, simply using ChatGPT to write code that a developer would have otherwise written is not solving the productivity problem at the right scale. The problem of reuse and maintenance remain unaddressed. Many months of developer time is taken up in absorbing upgrades from upstream teams, executing tech stack upgrades, implementing redesigns to uplevel their app to modern UI/UX patterns etc. Therefore, AI will not replace low-code but rather be used in tandem with low-code to improve productivity. Next year we will see enterprise software vendors using a combination of computer vision or a trained model to understand patterns and then triggering generative code within their low code platforms. – Vikram Srivats with WaveMaker

Ownership will become the key determinant of success in whether companies’ AI initiatives actually take off in 2024: Businesses were eager to begin adopting generative AI in 2023, particularly as they saw the immediate impacts it had on internal productivity. But in the new year we’ll begin to see that, while it’s easy for companies to play around with AI, actually driving business impact takes much more than that. Companies delegating AI exploration without a clear problem or dedicated team tend to falter, leading to ineffective outcomes. Ownership will become the key determinant of success in whether companies’ AI initiatives actually take off in 2024 and beyond. When a business owner takes a vested interest in digital innovation, identifies a specific challenge, and assembles a team for experimentation and action, the likelihood of success surges. Ownership will be the key driver of who will succeed in harnessing AI’s transformative potential and who won’t. – Raj De Datta, AI visionary and CEO at Bloomreach

From Enterprise AI to Zero-Trust AI: In 2024, we will see a significant shift in how enterprises approach AI, from focusing on performance to emphasizing accountability. As AI becomes more integrated into critical decision-making processes, organizations will prioritize ensuring the accuracy and reliability of AI outputs. This shift will lead to the development of “zero-trust AI,” where the validation of data sources and the transparency of AI-induced modifications become paramount. The goal will be to create AI systems whose operations and decisions are not just effective but also understandable and reviewable by all stakeholders, thereby fostering a culture of trust and responsibility around AI usage. – David Boskovic, founder and CEO of Flatfile

New techniques will improve efficiency for AI inference: Increased context length and large datasets require architectural innovations. Reducing the number of bits in the parameters reduces both the memory footprint and the latency of LLMs. To address this, we’ll see more model size distillation techniques, over-training past the Chinchilla optimal settings, incorporating GPTQ, more Grouped-Query Attention, more specialized models or a combination of large and small models will deliver better results with less RAM, along with techniques like GGML and NF4 (as mentioned). – d-Matrix, VP of Product, Sree Ganesan

AI will continue to boom, and we will see adaptations in almost every area of our lives. While it will undoubtedly make our lives easier in many ways, we will see an uptick in error rates because this technology is only as smart as the language it’s been trained on. AI will inevitably replace more people and jobs, but the good news is that it will also create more jobs. In a few years, we will see many IoT devices generating huge volumes of high-cardinality data. With AI, the possibilities are virtually endless, and we are only now starting to explore them. – Jason Haworth, CPO, Apica

AI had quite the year in 2023, dominating the headlines with major analyst firms predicting its significant impact over the years to come. But to be successful in 2024 and beyond, AI will be forced to rely on the very sources many fear the technology will replace: people and data. Retail data is highly complex and dynamic with siloed information that is constantly in flux, whether it’s consumer buying behaviors, delayed shipments, product shortages or labor demands. Teams equipped with retail order and inventory data management systems, will play a major role next year to help produce and maintain clean, accurate and accessible data needed for businesses to take full advantage of AI. – Nicola Kinsella, SVP of global marketing at Fluent Commerce 

Organizations will appoint a chief AI officer to oversee the safe and responsible use of AI: In 2024, organizations will increasingly appoint senior executives to their leadership teams to ensure readiness for AI’s security, compliance, and governance implications. As employees become more accustomed to using AI in their personal lives, through exposure to tools such as ChatGPT, they will increasingly look to use AI to boost their productivity at work. Organizations have already realized that if they don’t empower their employees to use AI tools officially, they will do so without consent. Organizations will, therefore, appoint a chief AI officer (CAIO) to oversee their use of these technologies in the same way many have a security executive, or CISO, on their leadership teams. The CAIO will center on developing policies and educating and empowering the workforce to use AI safely to protect the organization from accidental noncompliance, intellectual property leakage, or security threats. These practices will pave the way for widespread adoption of AI across organizations. As this trend progresses, AI will become a commodity, as the mobile phone has. – Bernd Greifeneder, Chief Technology Officer and Founder, Dynatrace    

2024 will be the year of the AI and data C-Suite leader: If 2023 is the year that enterprise AI burst onto the scene, then 2024 will be a year of consolidation as businesses look to understand how to use it to gain a competitive advantage and comply with inevitable future regulations. To future-proof AI deployments, organizations will increasingly look to build out a role at the C-Suite level to oversee both AI innovation and compliance, but that won’t necessarily be in the form of a Chief AI Officer. Instead, AI will likely create a new generation of Chief Data Officers where existing data leaders develop new skill sets. Just as we’ve seen the rise of Chief Data and Analytics Officers, we could be about to see the start of a fresh generation of Chief Data and Artificial Intelligence Officers focused on ensuring the data foundations of AI models are compliant with new legislation and of a high enough quality to gain the business a competitive advantage. What’s certain is the rise of AI Governance committees, taking cross-functional roles in ensuring safe and efficient enterprise AI and partnering with Legal, Ethics, Security, and Privacy constituencies in the same way that Data officers have in years past. – Satyen Sangani, CEO and co-founder of Alation

The shift to more a deliberate, outcome-focused use of AI will create interesting dilemmas around the notion of appropriate use. It’s very different to use AI capabilities to enhance existing work patterns according to standards that are widely accepted, compared to coming up with “AI-first” use cases that are novel, and where the benefits and possible unintended consequences are yet to be understood. This is where impact assessments and responsible AI guidelines will become an invaluable tool in the value strategy and risk formulation of those use cases, particularly in the legal tech space where the stakes are high. – Emili Budell-Rhodes, Director of Engineering Culture, LexisNexis

AI for employers and employees – We know that AI technologies can minimize manual tasks, improve employee workloads, and allow teams to focus on more strategic initiatives. Capitalizing on such automation will continue to be key in 2024, but it will have a unique role for an organization’s HR department. While AI can better streamline potential employees through the hiring process and keep prospective employees more engaged, job candidates with even just a baseline of AI knowledge are going to be extremely sought after as organizations establish their own AI strategies. On the other hand, job candidates will start to analyze how employers and their organizations as a whole are using AI. People are simply accustomed to its integration in the rest of their lives, and they’ll want to ensure they are being set up for success with these advanced technologies in their careers as well. – Laura Hanson, CHRO, insightsoftware

AI’s Ugly Side is Further Revealed: The 2024 Presidential Election is one example of how the coming year will reveal more of AI’s nefarious capabilities. Expect to see deepfakes and other AI-generated disinformation designed to influence the election emerge at an alarming rate. If used by savvy threat actors, it’s possible these images could become compelling propaganda, creating a veritable wilderness of mirrors for voters, who will have trouble discerning reality from carefully crafted disinformation. This will be a growing focus area as the candidates’ campaigns kick into high gear.  Perhaps no better example of the technology’s ugly side exists than AI-generated abuse imagery, which has been increasing in recent months. We’ll see more attention focused on preventing this in 2024, with a cluster of new solutions released to address the issue. Of course, we can also expect hackers to increasingly leverage AI for their bread-and-butter campaigns—attacking organizations and employees to exfiltrate sensitive data. Think threat actors leveraging the technology to improve their malware code or relying on generative AI to craft more legitimate phishing emails. As this happens, organizations will need to adjust their training—for example, poor grammar, once a hallmark of phishing campaigns, will no longer serve as a red flag, thanks to generative AI– Mike Wilson, founder and CTO of Enzoic

Peak AI hype will fade. However, the most innovative and competitive companies will take on the true challenge of AI’s digital disruption – its people. The paramount skill sought by companies will be “good judgement,” elevating it from a soft skill to a crucial human-in-the-loop necessity. Companies will realize that their AI adoption challenge isn’t access to technology, it’s access to people with the skills and bandwidth to stand up the programs. – Upwork,  Tim Sanders, VP of client strategy 

AI website generators will ultimately be seen as toys, or at least a little better than off-the-shelf templates/themes. In the next year, AI’s promise will primarily be realized through the cooperative support of humans who would otherwise be expected to handle the entire tasks themselves. In other words, AI will be leveraged primarily in areas where the user is already competent at the task. Think diagnostic support for physicians, writing support for marketers, coding support for engineers, and content moderation automation still sometimes subject to human review escalation. That said, there are exceptions. Additionally, 2024 will have the first AI price wars. It’s going to be like early VMs on the cloud all over again as vendors vie for spending. I agree with Google’s internal letter: there is no moat here for any company willing to spend prolifically on AI product development. – David Strauss, CTO and co-founder of Pantheon

The application scenarios of artificial intelligence aim to improve individuals’ productivity within teams. Therefore, in 2024, businesses are likely to purchase or use more AI enterprise applications to enhance efficiency. Currently, there are many AI applications for individual use, but there has yet to be a development management system that can effectively define AI’s role, allowing AI and humans to collaborate within the system. AI will help complete many repetitive tasks in project management, such as workflow transitions, creating management process templates, and summarizing documents. Developers should be capable of introducing AI to improve productivity while ensuring that the quality of work is maintained or at least remains unchanged. In 2024, it will be crucial to recognize the limits and scope of AI’s capabilities and to introduce AI appropriately into workflows. There will be multi-role system software involving AI, where system workflows will be reconstructed, and the software will continue to add value. If we define AI’s role and have a need for transparency in all processes, the significance of system software remains. Before the involvement of AI, system software described human collaboration workflows in certain fields. With AI, workflows will be restructured, and the system will accelerate. Given today’s LLM capabilities and stability, and the significance of the human role in most systems, AI currently appears as a Copilot, meaning it requires a human mentor in the system. In the future, AI may independently complete tasks within certain specific capabilities. This is similar to an airplane’s Autopilot; when parts of the system work can be automated, workflows will be restructured, and software will continue to add value. – Yingqi Wang, CEO and co-founder of ONES.com

AI Arms Race Continues: The thirst for more resources to power LLMs will remain unquenched in 2024. This will drive hyper scalers to source more GPUs and compute time from other hyper scalers as they race to train their models. Over time, this growing demand will become a significant problem limiting the potential of LLM and AI apps. – Rupert Colbourne, CTO of Orbus Software

AI Regulation: We’ll start to see AI regulations in 2024: For example, there have been discussions around monitoring the frontier model developments that consume lots of GPU compute. There will also need to be guardrails in place against DeepFakes on the internet given the 2024 presidential election. We think the efforts will make AI safer similar to how the FDA regulates the drug industry. – Tim Shi, co-founder and CTO of Cresta

There’s no doubt that AI will continue to revolutionize modern applications. On one hand, AI brings excitement and new opportunities, but on the other, it will also bring new hurdles for operation and observability. Different from traditional applications, AI tech stacks include new components, like LLMs and vector data stores, and generate additional telemetry to track such as quality and cost. Troubleshooting and optimizing these new types of applications require new solutions. That’s why, we are pioneering AI observability with the recent introduction of New Relic AI Monitoring (AIM), the industry’s first end-to-end AI monitoring solution. This offers engineering teams unprecedented visibility and insights across the AI application stack, making it easier to troubleshoot and optimize their AI applications for performance, quality, cost, responsible use of AI, and forthcoming AI regulations and standards. – Camden Swita, Senior Product Manager at New Relic

As the enterprise AI imperative only gains momentum in 2024, many organizations in highly regulated industries will find the public cloud limiting or prohibitive for AI and ML agendas, and we will see a surge in data repatriation and management. These regulated industries (such as financial services, healthcare, etc.) must consider data sovereignty and security policies when devising their AI/ML workloads and may require hybrid cloud or on-prem deployments. In addition, for high performance workloads, latency from cloud based applications just won’t be acceptable. For accelerating time-to-value, many will present opportunity to partners experienced in designing, building, deploying, and managing enterprise solutions (inclusive of software, compute/GPU, data technologies, and infrastructure) which support advanced workloads with optimal performance and stability at scale. – Mark Adams, president and CEO of SGH

In 2024, Enterprises Get A Double Whammy from Real-Time and AI – More Cost Savings and Competitive Intelligence: AI-powered real-time data analytics will give enterprises far greater cost savings and competitive intelligence than before by way of automation, and enable software engineers to move faster within the organization. Insurance companies, for example, have terabytes and terabytes of data stored in their databases, things like documentation if you buy a new house and documentation if you rent. With AI, in 2024, we will be able to process these documents in real-time and also get good intelligence from this dataset without having to code custom models. Until now, a software engineer was needed to write code to parse these documents, then write more code to extract out the keywords or the values, and then put it into a database and query to generate actionable insights. The cost savings to enterprises will be huge because thanks to real-time AI, companies won’t have to employ a lot of staff to get competitive value out of data. – Dhruba Borthakur, Co-Founder and CTO of Rockset

Emerging technologies like generative AI have so intensified the explosion of data that a majority of business leaders expect data storage needs to double by 2025, according to Hitachi Vantara’s Data Infrastructure report. In addition, the surge of sprawled environments creating siloed data is exacerbating the strain on existing infrastructure, with 76% of surveyed business leaders expressing concerns that their current infrastructure will be unable to scale to meet impending demands. The ability to derive insights from your data is another major concern, as it requires robust architecture that allows you to collate the data from various sources and forms and convert that data into knowledge. Today’s enterprises recognize the imperative of addressing these needs to stay competitive. 2024 will usher in a new era of enterprise innovation: the emergence of unified data ecosystems that integrate into existing infrastructure and leverage AI to intelligently synthesize vast volumes of data across all distributed environments, granting more visibility and interoperability of your data to unlock deeper insights faster and respond to market needs quickly. This ability to derive more complete insights from data, and at a pace significantly faster than ever before, will be the answer to ultimate business growth, removing barriers to innovation, and reducing complexity and costs in the process. – Bharti Patel, SVP, Head of Engineering at Hitachi Vantara

In 2024 we’ll see AI will move beyond the hype cycle and put IT efficiency into overdrive: Like any other new technology, AI is still going through a hype cycle. People are beginning to better understand what AI looks like and in 2024, we’ll move beyond the hype to more valid use cases. One result of this is that CIOs will need to show that they’re not using AI for AI’s sake. As we see IT pros embrace AI to automate workflows and boost efficiency, CIOs need to focus on arming their teams with the AI tools to better their business and optimize IT workflows across the teams. – Freshworks CIO, Prasad Ramakrishnan 

The Future of AI Adoption & Roadblocks: AI adoption will accelerate, and it’ll spread. We will continue to see big advances in the capability of models, and our understanding of how they work will increase, which will itself unlock new advances. We’ll see more models tuned to specific use cases, from code to DNA, to CAD, to chemical structure, to image analysis. We’ll also see better integrations and user experience design within applications and workflows, much beyond a text box in which one types prose. Making models ‘natural’ to use may actually become the most impactful development, just like tuning and wrapping GPT-3 into a chat app made it usable for millions of users. Investments and funding for the companies building generative AI technologies will not slow down in the next year, even with the state of the financial system. What could slow down the development of generative AI, however, is the unavailability of enough hardware to satisfy the demand. In this case, only the biggest companies, or those that already own a large amount of hardware, will be able to continue developing new approaches at scale. – Alex Chabot-Leclerc, Ph.D., VP of Digital Transformation at Enthought 

Shallow AI solutions will be exposed: Overly complicated SaaS add-ons and features that claim to automate, but really just have an “AI sticker on top” will be exposed after detract from productive working hours. Users are getting smarter when it comes to AI, and a recent survey shows that a majority of IT pros (71%) are using AI to support their own workload. Relentless app rationalization and scrutiny is critical, especially in the new AI era. – Freshworks CIO, Prasad Ramakrishnan 

2024 is the year “AI building AI” will become mainstream: While the demand for AI is growing, most people don’t realize that the current generation of AIs require a great deal of manual work to build and use. Popular AI models such as ChatGPT and Stable Diffusion require dozens of specialized data scientists to prepare data and write code, and this routine work can take several months. We were promised that AI would automate mundane tasks, yet sometimes it takes more mundane work to set up an AI to do these tasks. What if we used AI to automate this process? I predict that in 2024 AI-building AIs will become mainstream tools. And as a result, we will see an explosion in productivity, because people will use natural language to interact with AI systems and they will build AIs without coding skills. The end result is that all of this automation will free up people’s time to do more human-facing, value-added tasks instead of routine, mundane tasks. – Colin Priest, Chief Evangelist at FeatureByte

AI Implementation Challenges: Tech leaders today are being challenged to adopt AI to drive further advancements for their organizations. Many are grappling with the question of, “Where and how do we deploy AI?” AI will accelerate many things by filling in gaps of information, but it’s not a complete substitute for data-driven decision-making or analysis. We must also consider the distinction between the value of generative AI and other tools like machine learning and pattern recognition. It will be imperative heading into the new year that we take a deeper look at the technology our organizations already have in place to determine if it is truly scalable with AI. Before diving full in, we must ask how our businesses can specifically benefit from investing in AI and if it will provide the right outcomes. – Ian Van Reenen, CTO, 1E

The struggle for AI profitability will continue — and that’s okay: Companies building massive AI applications are not going to turn a profit any time soon, and that means the only people that can actually run them are companies with insane cash balances, like Google and Microsoft. But these companies will continue to fight their way through this in 2024 and run losses for a very long period of time, until the economies of scale bring the price of chips and processing down. Something to consider as these companies move forward is how open source fits into all of this. The risk for these larger companies is the possibility that they’ll make this sizable investment in their models — and then the models that actually end up winning are open source ones. So it will be critical for them to think about how to create differentiation in their models that go beyond what the open source community will tackle. – Raj De Datta, AI visionary and CEO at Bloomreach

Ethical frameworks and regulation are necessary for AI and not just a distraction for organizations as they pursue their bottom line. We cannot avoid AI, as it’s the only way we can scale our operations in the asymmetrical cyber battlefield. Ethical frameworks and regulatory governance will become critically important to help AI function efficiently and equitably. Every new piece of software or service will have an AI or ML element to it.  Establishing best practices for ethics in AI is a challenge because of how quickly the technology is developing, but several public- and private-sector organizations have taken it upon themselves to deploy frameworks and information hubs for ethical question. All of this activity is likely to spark increasing amounts of regulation in the major economies and trading blocks which for a while which could lead to an increasingly piecemeal regulatory landscape at least for now.  It’s safe to predict that the current “Wild West” era of AI and ML will fade quickly, leaving organizations with a sizable compliance burden when they want to take advantage of the technology.  – Nick Savvide, Director of Strategic Accounts, Asia Pacific, Forcepoint

As boardrooms and C-suites intensify their focus on AI, the spotlight will magnify the imperative to resolve underlying data issues: In 2024, more CEOs and boardrooms will increasingly realize that data is the linchpin for AI’s success. I’m witnessing a seismic shift in the executive mindset; for the first time in years, CEOs actively seek to increase their technology spend, particularly in AI, as they see great promise. CEOs are not merely intrigued by AI’s potential; they’re captivated by the promise of GenAI to redefine the very fabric of how we conduct business—from revolutionizing customer experiences to optimizing supply chains and bolstering risk management. The allure of AI is undeniable; it holds the key to unlocking new markets, saving millions, and catapulting companies into a league of their own. However, the sobering truth that every CIO understands is that AI is not a plug-and-play miracle. The Achilles’ heel lies within our data—the most valuable yet underperforming asset due to its fragmented nature. Investments in AI are futile without unifying and managing our data to ensure it’s clean, connected, and trustworthy. The path to AI’s promise is paved with data unification. It’s about transforming data into a singular, interoperable product that can truly catalyze digital transformation and harness AI’s transformative power. – Manish Sood, Founder, CEO and Chairman of Reltio

2024 will be the year of adaptability and useability of AI tools: 2023 was the year of cautious experimentation of AI tools but in 2024 organizations will shift their focus towards responsible deployment. While much remains that companies don’t fully understand about AI, along with its associated risks, there are many opportunities to take advantage of moving forward in business and life. Falling behind in the AI adoption race can pose significant challenges for organizations. However, there is no one-size-fits-all model for organizations to follow. Technology leaders will need to assess which use cases benefit from the integration of new AI tools and which tools are better left untouched. They will also need to ensure that GenAI tools are used in a safe and responsible way governed and controlled by organizational governance processes. This strategic approach ensures that AI adoption aligns with an organization’s unique goals and needs. – Barry Shurkey, CIO at NTT DATA

Businesses Use AI to Gain a Competitive Advantage Amid Unpredictable Markets: The most crucial AI trend in 2024 will be the rise in utilizing AI to gather contextual information, which will enable organizations to better understand their environment and gain a leg up amid unpredictable markets, supply chain challenges, and talent management issues. Organizations, drowning in dashboards, will leverage natural language processing to unlock and transform data into actionable insights, empowering leaders to predict risks, manage costs, and enhance operational effectiveness. –  Jeff Moloughney, CMO, Digital.ai 

AI is Recession and Inflation Proof: Despite economic headwinds or tailwinds, interest in AI will remain strong in 2024 regardless of which way the economy turns. AI’s potential to drive innovation and competitive advantage is a must-have, with its own line item in the budget. Measuring the ROI on AI will be critical and practical use cases will be put under the microscope. For example, proving out how AI can make everyday tasks like data analysis cheaper and more broadly available to business users will be key. Likewise, investors will be more wary of AI companies. – Arina Curtis, CEO and co-founder of DataGPT

Ensuring AI Integrity in a Trustless World: With the proliferation of AI technologies like deepfakes and automated content generation, there’s an increasing need for mechanisms to verify AI. Web3 technologies offer a solution to this challenge by providing a framework for transparent, verifiable AI operations. This shift will be crucial for industries that are increasingly relying on AI, ensuring that AI remains a trustworthy tool despite the decentralized and often opaque nature of its operation. – Blane Sims, Head of Product of Truebit

Next-gen AI personalization: AI will be increasingly personalized, and companies like Google and Apple will evolve their AI-based tools (Siri and Bard) to enable digital assistants that can have extensive dialogues via voice commands with consumers. The banking industry will follow suit, developing AI-based technology that will help financial institutions deliver highly personalized, proactive, and consultative digital interactions with customers. – Daniel Haisley, EVP of Innovation at Apiture

Potential Meets Pragmatism: AI and ROI in 2024: As IT leaders assess new technologies to help automate mundane tasks and optimize resources, they’ll focus on modern platforms that include advanced AI capabilities, including generative AI, to achieve cost-saving. But as we dive into the new year, it’s crucial for organizations to also take precautions to avoid shiny object syndrome, where exciting new opportunities turn from distraction to detriment. They must take into consideration the strategic prioritization of investments, for example through forming task forces such as investment committees, to optimize spending in the correct technology areas to reach their business goals. It seems inevitable that adoption of AI-enabled technologies will continue to expand and accelerate. But the reality in 2024, is that successful organizations will need to focus more time and efforts in first understanding where AI might actually provide maximum ROI. To control their IT roadmap, eliminate unnecessary spending, and with an eye on profitability, we’ll see successful leaders accelerate their adoption of solutions and services to streamline operations, enhance efficiency, and liberate their teams to concentrate on strategic business operations. – Steven Salaets, CIO, Rimini Street 

2024 will be the year that small businesses turn to AI: Over the past year, we have seen many large companies take advantage of the AI “gold rush,” while most small businesses have yet to embrace it. AI is a rapidly evolving tool that improves operational efficiency and productivity, and its benefits are undeniable. In 2024, more small business owners will likely start implementing these tools directly into their companies, and more of the applications they rely on will use AI to augment existing functionality. By leveraging AI to automate many traditionally time-consuming tasks such as invoicing, data entry, and scheduling, small business owners can spend less time on administrative work and more time focusing on growing their business and delivering a superior customer experience. – Forrest Zeisler, CTO & Co-founder of Jobber

With the advancements in AI and natural language processing, we’re entering a new era of data communication. We’re shifting from a query-based data extraction approach to a real dialog with our data. By embracing this conversational approach, we can uncover hidden patterns, identify trends, and make data-driven decisions with ease. It’s no longer about wrestling with numbers and formulas; it’s about having a genuine conversation with our most valuable asset. – Alex Bebin, MD, Clinical Solutions and AI Leader at MDClonewww.mdclone.com

60% of workers will use their own AI to perform their job and tasks. Businesses are scrambling to capitalize on the AI opportunity, but they won’t innovate fast enough to outpace widespread employee usage of consumer AI services — also known as bring-your-own-AI (BYOAI). Enterprises should focus on building a strategy to manage and secure BYOAI now, while they develop formal company-sanctioned AI resources. – Forrester

Access, scale, and trust: In 2024, the three biggest challenges that AI companies will face are access to AI tools, scalability within specific industries, and user trust in popular AI tools. We’ve seen the question of trust emerge in 2023, and that will be even bigger in 2024 when we see the impact of the AI Act. – Dan Head, CEO of Phrasee

2023 was the year of AI promises — 2024 will be the year of AI action. We will start to see the tangible outcomes of the initiatives companies have been putting in place and discover their impact on customers. Those who have chosen to invest in resources and identify opportunities for AI to work collaboratively with human intelligence (as opposed to replacing it) will be the ones who emerge ready to capture the market. – Laura Merling 

In 2024, we can expect to see a move towards automating the data collection process on the construction site. Today, teams are burdened to get the project done on time and within budget – while still keeping safety and quality requirements in mind. With AI, both computer vision and generative AI, companies will be able to structure and standardize their data across the entire lifespan of a project. Whether it’s during the design process with building information modeling (BIM) and drawings, inputting credit cards to purchase materials, or validating insurance information to protect workers and the project, the construction industry works with a vast amount of data. We are already beginning to see general contractors leverage data in unique ways to improve their business but a lot of the data is unstructured and isn’t used to its full potential. It’s reported that nearly 20% of time on a typical project is spent just searching for data and information. AI will be able to solve this problem through automated data collection, allowing individuals to spend more time and resources on pulling insights from their data to mitigate risk and improve the business. – Rajitha Chaparala, VP of Product, Data and AI of Procore

CX Gets a Facelift with AI: AI will help agents contribute to success by answering questions faster and better, resolving problems on first contact, communicating clearly, and leaving the customer feeling satisfied. This will lead to new CX strategies centered around AI to design, execute, and measure new or reimagined customer service experiences. According to Forrester, the key to many of 2024’s improvements will be behind-the-scenes GenAI, which augments customer service agents’ capabilities. – Sreekanth Menon, Genpact’s Global AI/ML Services Leader

AI to Drive Real-Time Intelligence and Decision Making: Next year will be foundational for the next phase of AI. We’ll see a number of new innovations for AI, but we’re still years away from the application of bigger AI use cases. The current environment is making it easy for startups to build and prepare for the next hype cycle of AI. That said, 2024 is going to be the year of chasing profitability. Due to this, the most important trend in 2024 will be the use of AI to drive real-time intelligence and decision-making. This will ultimately revolutionize go-to-market strategies, derisk investments, and increase bottom-line value. – Mike Carpenter, CEO and Co-Founder at XFactor.io

Companies will have top-down mandates on the adoption of AI in 2024: Many team leaders will come back from the holidays to find mandates from their CEO and CFO with pointed targets that AI adoption should achieve. Expectations like reducing Opex by 20%, increasing CSAT/NRR by 10%, and generating 10% topline revenue through AI-based products and experiences will be at the forefront. In service of these objectives, some C-suite teams will appoint an AI leadership role to mimic the success of digital transformation winners in the previous decade. We anticipate Chief AI Officer or similarly titled roles will become common as organizations grapple with how to rapidly integrate this new technology into legacy operations. This new role will be somewhat contentious with the increasingly fractional role of the CIO. Whether CIOs can deploy enough automation to carve out a strong focus on AI or ultimately cede that territory to this newcomer in the C-suite is something to watch closely. – Sean Knapp, CEO of Ascend.io

Over the past few years, the CTO role has become the bridge between the tech-savvy and the business-savvy, charged with enabling the right solutions to create the best overall business outcomes. This comes with its communication challenges as the CTO needs to navigate how to translate tech into an ROI for the organization’s board and C-suite. In 2024, the ability to educate their C-level colleagues will become even more important as artificial intelligence (AI) technologies become commonplace. The CTO will not only need to be able to collaborate with the tech side of the business to ensure what is realistically possible in the realm of AI but will need to communicate on a business level its potential – both from employee productivity and product standpoint. – Bernie Emsley, CTO at insightsoftware

The “Tiger Team” Approach for the AI Revolution – As businesses race to integrate AI into their internal processes, one strategy gaining traction is the concept of the “tiger team” — a small, specialized group entrusted with piloting AI initiatives before their full-scale implementation. This approach allows companies to navigate the complexities of AI without disrupting the entire organization. More companies will build a tiger team to focus on AI implementation that is both pragmatic and forward-thinking. This approach not only mitigates potential challenges but also cultivates a culture of innovation, ensuring that companies are well-equipped to thrive in the age of artificial intelligence. – Rafay Systems co-founder and CEO Haseeb Budhani 

Automation and AI will tackle asset visibility – As adoption of AI, ML and automation continue, organizations will increasingly be finding new use cases. One area where I expect to see this play out is in automating processes around asset visibility, which has always been a big challenge. Let’s say you are part of a security team, and you have unidentified assets on your network; you notice that one of these assets has touched a certain application, but you don’t know the asset owner to determine if this access is legitimate. Rather than setting off a chain reaction where the security team reaches out to the application owner, who then must go on a fact-finding mission to determine who owns the asset and why it’s accessing the application, an automated process using AI will be able to quickly identify the asset owner and intent. This move toward a more automated and less manual process will be incredibly helpful to both security and compliance teams. – Erin Hamm, field chief data officer, Comcast Technology Solutions

AI will streamline and improve document-based workflows in 2024 – Document-based workflows will continue to be streamlined as a result of AI. In 2023, development teams and smaller organizations, such as startups, began to leverage AI more frequently to expedite operations and make their workflows easier to track and manage. Running these workflows without the help of AI will reduce the quality of work being done as well, leading to degraded performance and a lack of innovation in the cloud. As big cloud players, like Google, continue to scale solutions like DocumentAI and Bard to further streamline paper or doc-based processes, development teams and executives will need to double down on AI as a business-critical technology. – Miles Ward, CTO at SADA

Artificial General Intelligence (AGI) will begin taking the spotlight in 2024. This means AI will learn to accomplish the intellectual tasks that you and I can do. Even more, AGI technology will enable machines to surpass human capabilities in most tasks we define as economically valuable today. AGI will be the connective tissue between people, machines and data – changing the way we work.  Remedial tasks will increasingly be handled by AI – allowing more meaningful jobs to be fulfilled by humans, ultimately increasing efficiency and producing higher-quality business outcomes. We are living in a unique moment in time, with a front row seat to the AI revolution. As such, those in the tech industry, government, business, education, healthcare – have the responsibility to work together, put the necessary guardrails in place and encourage innovation – laying the groundwork for how our world will operate in the future. – Umesh Sachdev, CEO of Uniphore

AI will further turn data into actions – Every platform has a ton of data, analytics, and dashboards it touts. But we are continually hearing from customers and prospects that they are already overwhelmed with the amount of data they have, and don’t know what to do with it all. Furthermore, data is disjointed in various systems, making it hard to synthesize and make sense of, oftentimes requiring resources like a business analyst to interpret. We predict AI applications that incorporate collating data from multiple sources and providing the action to take based on this data will be the most impactful going into 2024. – Parth Mukherjee, Global VP of Product Marketing & GTM Strategy at Mindtickle

AI will empower more data-driven benchmarks to improve engineering performance in 2024: In 2023, tech companies scrambled to reorganize their engineering resources in response to ChatGPT and the looming recession and they found out that they are severely unequipped to make informed workforce decisions. Too many leaders lack insight into what their development organization is doing and who the top performers are. Companies need to become more data-driven in 2024 to gain a holistic view of their existing human resources and understand how tech workers are contributing. AI allows organizations to track more data points and development benchmarks to compile a data-driven evaluation of development team productivity and help identify top performers. In order to enact changes within engineering teams – whether to increase productivity, shift resources to new opportunities, or reduce costs – tech leaders need more reliable and intelligent ways to track developer productivity so they can ask the right questions and make intelligent decisions. – Cory Hymel, Vice President of Product & Research, Gigster

AI is fundamentally changing the way we live and work, and this transformation will only accelerate over the next 12 months – and well beyond that. In 2024, the conversation around AI will become more nuanced, focused on the different types of AI, use cases and – crucially – what technological foundation we need to put in place to make the AI-powered world of the future a reality. Advanced, purpose-built chips will play a critical role in allowing today’s AI technologies to scale and in fueling further advancements in their deployment. The CPU is already vital in all AI systems, whether it is handling the AI workload entirely or in combination with a co-processor, such as a GPU or an NPU. As a result, there will be a heightened emphasis on the low power acceleration of these algorithms and the chips that run AI workloads in areas of intense compute power, like large language models, generative AI, and autonomous driving. In 2024 we will also see even more advancements when it comes to running AI on the CPU, and the rise of AI at the edge, as the industry looks at how to deliver AI in a sustainable way and move data around more efficiently. There have been great strides in delivering the technologies needed for developers to create secure AI workloads at the edge, and we look forward to seeing what’s possible with increased on-device compute and AI in more cost-efficient solutions. It is clear that, in every area, from sensors, smartphones, and software-defined vehicles, to data centers and supercomputers, the future of AI will need to be built on the world’s most advanced semiconductor technologies. The advancements of AI and chips are therefore intrinsically interlinked and will be a focal point for the whole technology industry in 2024. – Gary Campbell, EVP Central Engineering, Arm

AI will bridge the gap between managers and their direct reports. In 2024, AI will fill the missing gaps that managers have inadvertently caused. Whether it’s crafting more thoughtful performance reviews or identifying internal growth opportunities for their direct reports, AI will provide much needed support on tasks where managers are either inexperienced or too burnt out to handle. These AI capabilities will help them become stronger managers, in turn allowing them to better empower their direct reports. – David Lloyd, Chief Data Officer, Ceridian

Since 2023 was all about the mainstreaming of AI and the crushing demand for specialized infrastructure (even from the category leaders), in 2024, we will see a reckoning for capacity to support other specialized workloads. The collective crisis standing in the way of business innovation is no longer just big data and the quality, compliance, and privacy concerns that come with it. It’s now big-time processing to unblock the teams, initiatives, and workloads within each particular domain. We have seen the GPU boom. But what comes next? Faced with enormous capacity constraints – including data center space and energy, as well as budget and performance – enterprises will have to strongly consider their future needs to efficiently and strategically do more with less. In the next year, we’ll start to see the shift to dedicated hardware for dedicated workloads to accelerate processing and break the cycle of scaling the footprint of generic compute across every conceivable industry and endeavor. – Jonathan Friedmann, CEO of Speedata

We’ll see organizations go from piloting seemingly small AI projects to complete end-to-end transformation at record speed, thanks to modern data infrastructures. For many companies, huge AI ambitions were the final straw when it came to greenlighting transformation of their legacy data infrastructures. Now, with all data across the organization stored in an accessible format, many are in a position to introduce any AI use cases they can conceive of. These use cases might not feel much different than many single-use technology applications out there at first. We’re likely to see companies pilot applications within a single area of their business, for instance. But once they prove AI’s ability to increase internal efficiencies and/or customer-facing offerings, we’ll see them quickly commit to scaling what they’ve built—and in the process, cement their journeys toward becoming end-to-end, AI and data-driven enterprises. – Marco Santos, CEO Americas of GFT

2024 will be a make-or-break year for data intelligence: Following the booming interest in AI in 2023, enterprises will face increased pressure from their boards to leverage AI to gain a competitive edge. That rush for an AI advantage is surfacing deeper data infrastructure issues that have been mounting for years. Before they can integrate AI effectively, organizations will first have to address how they collect, store, and manage their unstructured data, particularly at the edge. AI doesn’t work in a vacuum and it’s just one part of the broader data intelligence umbrella. Many organizations have already implemented data analytics, machine learning and AI into their sales, customer support, and similar low-hanging initiatives, but struggle to integrate the technology in more sophisticated, high-value applications. Visibility, for example, is a crucial and often-overlooked first step towards data intelligence. A shocking number of companies store massive volumes of data simply because they don’t know what’s in it or whether they need it. Is the data accurate and up-to-date? Is it properly classified and ‘searchable’? Is it compliant? Does it contain personal identifiable information (PII), protected health information (PHI), or other sensitive information? Is it available on-demand or archived? In the coming year, companies across the board will be forced to come to terms with the data quality, governance, access, and storage requirements of AI before they can move forward with digital transformation or improvement programs to give them the desired competitive edge. – Jim Liddle, Chief Innovation Officer at Nasuni

Instead of replacing the work of open source maintainers, AI makes them more powerful than ever. Too much reliance on AI to generate code that automatically goes into production makes it ripe for vulnerabilities. As organizations move beyond this “next big thing” they will realize that human judgment is required to ensure the security and integrity of the code. As a result, the industry will see the rise of the open source super maintainers, who leverage AI to increase their capacity to take on more work while at the same time incorporating their human expertise to prevent unseen risk. – Donald Fischer, CEO and co-founder, Tidelift

AI Will Need to Explain Itself: Users will demand a more transparent understanding of their AI journey with “Explainable AI” and a way to show that all steps meet governance and compliance regulations. The White House’s recent executive order on artificial intelligence will put heightened pressure on organizations to demonstrate they are adhering to new standards on cybersecurity, consumer data privacy, bias and discrimination. – Mark Do Couto, SVP, Data Analytics, Altair

AI will become one with the network, impacting all business operations: If 2023 was the year of flashy AI investments, 2024 will be the year of AI impact—which may not be as visible to the naked eye. AI will move from a “tool you go to” (such as ChatGPT) to being integrated into the applications we are using everyday and empowering network connectivity. As such, we’ll begin to see the benefits of AI being integrated into all applications related to the network, bolstering network predictability, troubleshooting, security and more. Businesses will need to ensure AI transparency and security practices are adequate in order to make the most of AI. – Eric Purcell, Senior Vice President of Global Partner Sales, Cradlepoint   

In 2024, AI will undergo a rebrand. While AI whisperers are predicting ‘AI winter’ – or a slowdown in interest in the technology due in part to its overexposure in some quarters – it’s notable that a lot of the current focus is on negative aspects of AI. Elon Musk recently talked about how AI will “end all jobs”, while the name of the recent ‘AI Safety Summit’ itself at Bletchley Park infers an element of risk mitigation. The facts are that fear sells, and on some level, it makes people more likely to pay attention. But as more and more people become comfortable using AI tools, as things like ChatGPT raise awareness further, the next year should see more people opening their eyes to the ways in which AI should be used – and, in many ways, already is – as a force for good in their lives, in ways they never appreciated. Of course, they’ll need to do this without losing sight of the risks or limitations of the technology, and by looking for pragmatic, practical ways to minimize these risks. As AI becomes more mainstream and ‘trendy’, we could see more consumer-facing brands more clearly articulating the ways in which they use AI, by way of differentiating themselves and appealing to customers. It may be wishful thinking, but perhaps, a year from now, Bletchley Park could even be hosting the ‘AI Opportunities Summit’ instead? – Pega,  Peter van der Putten, Lead Scientist and Director, AI Lab

Architecting For Hyperscale Datasets Will Result In More Valuable AI Investments. The value of AI depends on the quality of its training data, and extracting maximum value from multimillion-dollar AI investments is challenging without systems and architectures designed for massive data processing. Companies that will be leading with the transformative power of AI in 2024 will be those investing in hyperscale architecture as a strategic cornerstone vs a plug-in solution, and I expect we’ll start seeing a significant shift in the AI landscape as more and more businesses adopt this approach. – Chris Gladwin, CEO and co-founder at Ocient

Applied AI will seamlessly integrate, complement existing workflows: In 2024, Applied AI will seamlessly integrate into organizational workflows, enhancing human capabilities and improving operational efficiency. AI technologies will be user-friendly and adaptable, aligning with existing human behaviors and operational processes to facilitate easy adoption and immediate benefit realization. AI will be designed to complement existing workflows, promoting efficiency without causing disruption or necessitating significant changes in work patterns. This approach will ensure smooth transitions, quick adoption, and immediate productivity improvements. By aligning with human behaviors and enhancing current processes, AI will enable organizations to be more responsive and agile, easily adapting to changing conditions and evolving needs. In 2024, the focus of Applied AI will be on practical integration, ensuring that AI technologies work harmoniously within existing organizational structures to drive innovation and success. – Nick King, CEO and Founder, Data Kinetic

I don’t expect AI adoption to slow in 2024, in fact, I expect it to continue to accelerate, particularly for CX use cases. But more business leaders will come around to the idea that generative AI is not a silver bullet – and it is most powerful when used for specific use cases, often along with other AI techniques, to meet specific business needs. I predict that the organizations who ‘get it right’ will be the ones that effectively balance AI velocity and agility with responsibility and security. Those that do this will find themselves in the position to deliver the most value to their customers and improve the bottom line. – CallMiner Chief Marketing Officer, Eric Williamson

AI Will Get More Specialized: In 2024, we’re going to start to see generative AI tech expand beyond the “artificial copywriter,” and be used to generate content in multiple mediums and more highly specialized applications, such as 3-D modeling, animation, architecture, and video creation. We’ll see tremendous early growth, followed by a realization that the costs of generative AI are still very high due to the scarcity of specialized hardware and the energy costs to train large models. – Michael Allen, CTO at Laserfiche

AI will force us to get our data “house in order”: AI is only as good as the data it’s given. As we apply AI to more parts of our lives, it will become increasingly apparent what areas are informed by poor data sources versus quality data sources. Next year, product leaders, data scientists, and chief architects will need to work more closely together to ensure the data powering their offerings is up to date, not siloed, serves as a single source of truth, and is versioned appropriately. – Asana, Alex Hood, Chief Product Officer

Artificial intelligence will continue to take a front seat, but the hype will die down: Now that we’re at the end of 2023, just a short year after the initial release of ChatGPT, it seems like everything has an AI component to it. If you ask me, generative AI is at the top of its “hype cycle,” but it will still remain in the general consciousness in 2024 and will likely start to deliver more business value. However, what we have to realize is that even though AI was the buzz-topic of the year, it has yet to reach its full potential. For adversaries we’ve seen it mainly be used for social engineering purposes so far, and it’s likely that we’ll continue to see that threat surface deepen, but both from a defensive and offensive cyber operations side, we have a long way to go. I think in 2024, we’ll see vendors try to combat AI’s use for malicious purposes outside of just social engineering and ultimately use AI to deliver more tangible value. However, like any “hot new topic,” the hype will inevitably cool down as time goes on and it will settle into part of the landscape. – Jon France, CISO of ISC2

Impact of AI on 2024 Presidential Election: AI promises to shape both the 2024 campaign methods and debates; however, it’s interesting that even candidates with tech backgrounds have avoided AI specifics so far. We’ve seen immense interest in AI and machine learning as they transform the way the world works, does business, and uses data. As a global society we need to be aware of and carefully consider potential shortcomings of AI, such as unintended bias, erroneous baseline data, and/or ethical considerations. Even if the topic isn’t covered in debates, the challenge and opportunity of AI is something that the next administration will have to grapple with. – Ivanti’s Chief Product Officer, Sri Mukkamala 

As the AI explosion continues, an organization’s ability to stay competitive and innovate will come down to their enterprise data strategy. Over the past year and a half, there’s been a significant explosion of ‘ready for prime time’ generative AI, opening opportunities for enterprises to benefit from intelligent automation. There’s no denying that AI will continue to increase efficiency, accuracy, and overall business agility in 2024. With this, we’ll start to see an increased need for a robust foundation of reliable and well-governed enterprise data. Utilizing the power of this data is paramount for training precise machine learning models, deriving insightful analytics, and enabling intelligent decision-making. As AI technologies continue to evolve, the quality and accessibility of enterprise data could significantly impact an organization’s ability to assess large datasets in real-time, stay competitive, eliminate bias, and free up more time for innovation. – Stephen Franchetti, CIO, Samsara 

AI Answers The Call for Help Managing Data Overabundance: Today’s data professionals have an overwhelming amount of information at their fingertips but many may lack the actionable insights they need. And, with the increase in data being categorized across distributed sources—328.77 million terabytes daily—organizations are grappling with the challenges of data management. Data is one of the most valuable assets an enterprise has, yet it’s fundamentally useless unless it can be leveraged, understood, and applied effectively. As we approach 2024, data management is rapidly evolving toward a future dominated by artificial intelligence. AI is the answer for IT teams as they navigate today’s increasingly complex distributed and hybrid digital environments. Because these technologies process more information than any one human ever could, they support resource-constrained IT teams by ensuring applications and services are running properly without the need for human intervention. AI-powered observability and ITSM solutions, in particular, can provide a lift to IT teams by enabling them to automate tasks, detect security threats and performance anomalies, optimize performance, and make better decisions based on data analysis. Yet our path forward in 2024 requires deliberate planning and a keen understanding of how and in what ways AI can help us. While walking the exhibit halls of several large IT conferences this year, I was surprised how almost every vendor’s booth was blazoned with AI captions. These frothy headlines won’t turn a poor or mediocre product into a good one. And organizations that begin their journey to AI by hurrying to implement the latest shiny new technology without analysis, are least likely to see long-term and sustainable success. Instead, carefully plan your AI strategy and you’ll reap the rewards long into the future. – Kevin Kline, Senior Staff Technical Marketing Manager from SolarWinds

Companies will upskill non-technical teams on data and analytics, in preparation for an AI-led future: AI has significant potential to transform the roles of many knowledge workers, but there’s one problem: too few employees understand data and analytics to be able to use it effectively. Generative models are literally designed to generate data. More than ever, we need people to interpret the output and layer in the business context or adjustments of the raw outbound to ensure it’s appropriate. – Megan Dixon – VP of Data Science at Assurance IQ  

AIOps for Network Operations: Network optimization can support better performance of AI, but AI can also support better performance of networks. Although it’s still early days for AIOps (AI for IT operations), it is beginning to show potential. While all areas of IT operations are covered by AIOps, one area which is now emerging as an important component is AIOps for network operations. Network engineers are being faced with increasingly complex network landscapes, combining a distributed workforce, a multitude of devices, and cloud infrastructure, etc. AIOps simplifies the management of network operations through automation, predictive analytics, and root cause analysis on the basis of big data and machine learning. AIOps can speed up troubleshooting and resolving issues for customers, and at the same time reduce costs, as precious NOC employees can work on more critical tasks that AI can’t solve today. In late 2023, one survey found that while only 4% of respondents have already integrated some kind of AIOps organization-wide, a further 15% have implemented AIOps as a proof of concept, and 29% have identified use cases for its future implementation. The market is forecast to triple in size over the next four years, reaching nearly US$ 65 billion in 2028. – Dr. Thomas King, CTO at DE-CIX

Optimizing Use of AI Will Determine Future Supply Chain Winners: AI and predictive analytics will separate the winners and losers over the next decade across manufacturing and retail. Leaders who harness big data to optimize inventory, forecast demand, control costs, and personalized recommendations will dominate their less analytical peers. Companies that fail to adopt will see spiraling costs and plummeting efficiency. – Padhu Raman, co-founder and chief product officer of Osa Commerce

Businesses Use AI to Gain a Competitive Advantage Amid Unpredictable Markets: The most crucial AI trend in 2024 will be the rise in utilizing AI to gather contextual information, which will enable organizations to better understand their environment and gain a leg up amid unpredictable markets, supply chain challenges, and talent management issues. Organizations, drowning in dashboards, will leverage natural language processing to unlock and transform data into actionable insights, empowering leaders to predict risks, manage costs, and enhance operational effectiveness. –  Jeff Moloughney, CMO, Digital.ai 

Expect AI backlash, as organizations waste more time and money trying to ‘get it right’: “As organizations dive deeper into AI, experimentation is bound to be a key theme in the first half of 2024. Those responsible for AI implementation must lead with a mindset of “try fast, fail fast,” but too often, these roles need to understand the variables they are targeting, do not have clear expected outcomes, and struggle to ask the right questions of AI. The most successful organizations will fail fast and quickly rebound from lessons learned. Enterprises should anticipate spending extra time and money on AI experimentation, given that most of these practices are not rooted in a scientific approach. At the end of the year, clear winners of AI will emerge if the right conclusions are drawn. With failure also comes greater questioning around the data fueling AI’s potential. For example, data analysts and C-suite leaders will both raise questions such as: How clean is the data we’re using? What’s our legal right to this data, specifically if used in any new models? What about our customers’ legal rights? With any new technology comes greater questioning, and in turn, more involvement across the entire enterprise.” – Florian Wenzel, Global Head of Solution Engineering, Exasol

Organizations will (finally) Manage the Hype around AI: As the deafening noise around GenAI reaches a crescendo, organizations will be forced to temper the hype and foster a realistic and responsible approach to this disruptive technology. Whether it’s an AI crisis around the shortage of GPUs, climate effects of training large language models (LLMs), or concerns around privacy, ethics, bias, and/or governance, these challenges will worsen before they get better leading many to wonder if it’s worth applying GenAI in the first place. While corporate pressures may prompt organizations to do something with AI, being data driven must come first and remain top priority. After all, ensuring data is organized, shareable, and interconnected is just as critical as asking whether GenAI models are trusted, reliable, deterministic, explainable, ethical, and free from bias. Before deploying GenAI solutions to production, organizations must be sure to protect their intellectual property and plan for potential liability issues. This is because while GenAI can replace people in some cases, there is no professional liability insurance for LLMs. This means that business processes that involve GenAI will still require extensive “humans-in-the-loop” involvement which can offset any efficiency gains. In 2024, expect to see vendors accelerate enhancements to their product offerings by adding new interfaces focused on meeting the GenAI market trend. However, organizations need to be aware that these may be nothing more than bolted-on band aids. Addressing challenges like data quality and ensuring unified, semantically consistent access to accurate, trustworthy data will require setting a clear data strategy, as well as taking a realistic, business driven approach. Without this, organizations will continue to pay the bad data tax as AI/ML models will struggle to get past a proof of concept and ultimately fail to deliver on the hype. – Atanas Kiryakov, founder and CEO of Ontotext

Thoughts on AI: As with any hype cycle, a lot of people are going to jump on this with poor plans or inadequate knowledge or ability and they’re going to produce bad, or even dangerous, code and applications. Organizations that invest heavily in AI and then fail are likely to be in trouble. Other organizations that take on these questionable AI apps and processes may suffer data breaches, bad or misinformed decision making, and suffer from their reliance on poor code. – Grant Fritchey, Product Advocate at Redgate Software 

A Push for Greater AI Explainability: The business community has witnessed significant advances in artificial intelligence over the last two years. Yet a defining characteristic of sophisticated AI systems, including neural networks, is that they do not always behave as we might expect. Indeed, the path an AI system chooses to arrive at a destination may vary significantly from how a human expert would respond to the same challenge. Studying these choices – and building in tools for AI explainability – will become increasingly important as AI systems grow more sophisticated. Organizations must have the ability to analyze the decision-making of AI systems to put adequate safeguards in place. Additionally, the outputs that AI systems provide to explain their thinking will be critical toward making further improvements over time. – Paul Barrett, CTO at NETSCOUT

Balancing act of AI content and bans – Visibility vs. Control: Publishers’ consideration of AI bans stems from a desire to maintain control over their content. However, this approach may result in decreased visibility in search results as search engines increasingly rely on AI to curate content. Integration vs. Exclusion: While some brands may see AI bans as a way to protect their content, they risk missing out on the advantages that AI, especially LLMs, can provide in content matching and query understanding. The reasoning against AI bans is that LLMs can leverage alternative means to access content, making total exclusion challenging. Balancing Act: Brands will need to find a balance between protecting their content and leveraging AI to increase their visibility and relevance in search results. This might involve developing nuanced policies that regulate AI interaction with content without full exclusion. – A.J. Ghergich, VP of Consulting Services, Botify

AI can certainly help clean up “messy data”, but it’s also a bit circular in that AI use should be based on strong data governance, as data protection law requires companies to understand which personal data is used in AI use cases.  As such, in 2024 we will see a bigger focus on data inventory and classification as a necessary foundational piece for companies that want to lean into the power of AI. – Seth Batey, Data Protection Officer and Senior Managing Privacy Counsel at Fivetran

In my opinion, the marketing world is poised for a paradigm shift from broad marketing monologues to interactive, AI-driven customer dialogues. This change will mandate reevaluating marketing technology stacks to prioritize real-time, meaningful interactions. Simultaneously, personalization will transition from perceived intrusiveness to trust-building through responsive dialogues. I believe this will gradually phase out traditional navigation, like drop-down menus, in favor of search and chat interfaces. In this evolving landscape, companies will recognize that their AI strategy is intrinsically linked to their data strategy. Emphasizing lean data becomes essential to leverage new interfaces and tools effectively and compliantly, ensuring that data quality and relevance are at the forefront of these technological advancements. – Christian Ward, Executive Vice President & Chief Data Officer of Yext

Optimizing Use of AI Will Determine Future Supply Chain Winners: AI and predictive analytics will separate the winners and losers over the next decade across manufacturing and retail. Leaders who harness big data to optimize inventory, forecast demand, control costs, and personalized recommendations will dominate their less analytical peers. Companies that fail to adopt will see spiraling costs and plummeting efficiency. – Padhu Raman, co-founder and CEO of Osa Commerce

Why AIOps Fall Is LLMs Triumph: In 2024, I expect to see more companies reach a breaking point with AIOps and shift their focus towards the potential of LLMs. While AIOps was a laudable concept when introduced, in practice it has failed to live up to its promise. The idea that you could train a model on data emitted by apps, that change everyday, is nothing more than a pipe dream. Large Language Models (LLMs) appear to be a far more promising alternative because they attack the problem differently and help users make more intelligent decisions. Companies are waking up to this fact but many more will begin to act on it in the new year. – Jeremy Burton, CEO of Observe

AI is already proving to be an incredibly powerful tool for developers, though many are skeptical about the extent of its capabilities and concerned about the potential it has to disrupt traditional workplace practices, jobs, and processes. From my point of view, AI is set to enhance developers’ every day workflow, rather than replace it. More and more developers will use AI to automate simple tasks such as scanning for performance issues, spotting patterns in workflows, and writing test cases. Instead of “AI-jacking”, it will actually free up developers to spend more time on impactful, innovative work. – Dana Lawson, Senior Vice President of Engineering at Netlify

Artificial intelligence will bring teams closer together as leaders across every industry begin to embrace the technology: Within the next year, AI will become the primary driver of the development life cycle — not just as an IT assistant, but as a collaborative tool. Developer and engineering teams have had their work largely restricted to the backend, but I anticipate IT leaders to become key advisors as AI becomes more ingrained in a business’ overarching goals. Both technical and non-technical staff will need to align on their AI strategy in tandem as organizations seek to utilize AI for automation, prototyping, testing, and quality assurance to drastically reduce the time needed to develop new projects. This will enable technical staff to innovate more frequently, and non-technical staff can have a stake in building solutions, rather than just providing requirements. – Ed Macosky, Chief Product and Technology Officer, Boomi

On adopting/investing in AI: Investing in AI tools can be a lever that helps some developers become more productive. The more training on prompting, the more likely it is that you will get increased productivity from developers. The downside is that often the AIs don’t really know the problem space and might be using code that is subpar. Lots of training code out there on the Internet isn’t suitable for your application. Some of it isn’t suitable for any application, so expecting an AI to make developers better is unlikely to work. AI is a tool or lever, not a substitute for training and skill. – Steve Jones, DevOps Advocate at Redgate Software

The digital capacity race to fuel AI advancements: AI is a data-hungry technology, and the demand for bandwidth to move and process that data will skyrocket in the coming years. AI applications are evolving much faster than infrastructure can be built, leading to the risk of a capacity shortage. Network infrastructure must rapidly develop to meet connectivity demands and avoid the crunch. This will require investment in new technologies and infrastructure and a more collaborative approach between network operators, hyperscale giants and other stakeholders. AI is nothing short of a trillion-dollar opportunity, and it will drive unprecedented demand for bandwidth, making it much different from other hype cycles like 5G and IoT, where monetization is unclear. Industries that rely heavily on data and computing — such as healthcare, finance, and manufacturing — will be the first to reap the benefits of AI. Hyperscale giants will invest heavily in digital infrastructure to prepare for this surge, and as we look ahead, smaller players must follow suit or get left behind. – Bill Long, CPO of Zayo

Companies will prioritize minding the gap between data foundations and AI innovation. There is no AI strategy without a data strategy and companies will need to prioritize closing gaps in their data strategy; specifically, the foundational elements of more efficiently accessing more accurate data securely. – Justin Borgman, Cofounder and CEO, Starburst

AI will positively impact data center operations: In 2023, organizations did a lot of experimenting, applying AI to different data sets. In 2024, companies are going to become much more intentional about how they use AI to solve very specific business problems, and they’ll make investment decisions based on three primary desired outcomes: innovation/new revenue generation, existing revenue protection, or cost savings / efficiency gains. The data center industry is no different – we expect that AI will have a positive impact on the industry in 2024. For example, providers will increasingly use AI solutions to improve operational efficiency and sustainability by monitoring and setting optimal thresholds for factors like temperature, humidity, and overall general energy consumption. AI tools can also improve data center providers’ security posture – and the security posture of their customers – by more proactively alerting customers to security events in the data center or alerting the data center NOC and/or SOC to issues relating to network congestion or a potential breach. There is so much potential, but I predict that early experimentation may fall short of expectations (as is often the case). It will take some time to find the diamonds in the rough, and typically, the more specific the use case, the more positive the outcome will be. – Holland Barry, SVP and Field CTO, Cyxtera

As a whole, the bar for understanding and harnessing the full value of AI is still low but it won’t be for long as market pressures continue to accelerate AI adoption. The future of enterprise AI will be centered on AI being built into the products and services already in use. But as AI innovation evolves, we’ll see enterprises learn to build their own in-house AI data platform and move part of the workflows into their own infrastructure. For enterprises who want to get ahead of the curve, its critical that they start investing in building their in-house expertise now. A central ‘center of excellence’ for AI and Data Sciences will be more beneficial than individual AI projects scattered around the company. – Pure Storage Analytics & AI Global Practice Lead, Miroslav Klivansky

Real-Time AI Monitoring: A Data-Driven Future: 2024 will witness the rise of real-time AI monitoring systems, capable of detecting and resolving data anomalies instantaneously. This transformative technology will ensure data reliability and accessibility, especially for the ever-growing volume of unstructured data. – CEO of Acceldata, Rohit Choudhary

After the boom, there will be an extinction for many AI companies as a direct result of enhanced scrutiny around data privacy, security and safety. As such, 2024 will be the year of the secure, safe harbor AI company, and the explosion in AI investment and innovation will both consolidate and accelerate. Winners will begin to emerge in all fields. AI will go mainstream, no longer serving as a supportive tool for experimental production, but a vital, strategic business asset. It will operate at warp speed and drive major business decisions by the end of 2024. AI models and chips that offer increased compute power while simultaneously reducing energy consumption and lowering total cost of ownership will trend. In other words, ESG (environmental, social and governance) will quickly become the new North Star. – SambaNova’s CEO, Rodrigo Liang

AGI will advance in the year ahead: Artificial general intelligence is very far away from becoming a reality, but it’s closer than ever. What we have today in LLMs is a blurry copy of human intelligence. It’s pretty good, and it can do some amazing things to improve your business. But can an LLM invent a working theory of economics or a vaccine to fight a pandemic? Humans can! I believe we’ve made huge leaps in the past few years, but I don’t believe any of us know how long we’re from true creative genius. – Jason Tatum, Vice President of Product at CallRail

AI will streamline the software development process: “Many of the time consuming tasks that developers currently do will soon become automated, making processes and tasks more streamlined, all while creating a level of speed and efficiency that we’ve never seen before. Additionally, for developers, understanding AI will eventually become a required skill. It’s critical that the industry continues to embrace this technology, and understand its benefits so that the pace of innovation can improve and allow developers to hone in on their specialties with the removal of tedious, repetitive tasks.”– Natwar Maheshwari, PLG Marketing Director at Algolia

Unstructured Data Sets Missing Link to Successful AI Data Pipelines: Organizations will put distributed unstructured data sets to work to fortify their AI strategies and AI data pipelines while simultaneously achieving the performance and scale not found in traditional enterprise solutions. One of the biggest challenges facing organizations is putting distributed unstructured data sets to work in their AI strategies while simultaneously delivering the performance and scale not found in traditional enterprise solutions. It is critical that a data pipeline is designed to use all available compute power and can make data available to the cloud models such as those found in Databricks and Snowflake. In 2024, high-performance local read/write access to data that is orchestrated globally in real time, in a global data environment, will become indispensable and ubiquitous. – Molly Presley, SVP of Marketing, Hammerspace

AI is Here to Stay: 2023 was the year of AI and that isn’t likely to change heading into 2024. AI is too valuable to avoid and remains the number one priority for most IT leaders, but how these tools are used and approached is starting to change. While interest in new AI applications and solutions will remain healthy, organizational focus, and spend, is shifting toward improving the AI services already being used internally, as opposed to collecting and having to successfully implement a host of new tools. But, as organizations continue to workshop their internal AI functionalities, they must balance this pursuit of optimized value with the risk inherent to increased AI use and the resulting need for effective regulation. – Alastair Pooley, Chief Information Officer at Snow Software

Very large AI companies continue to misrepresent the truth by claiming the only way to train models is with the largest datasets, and those large datasets have to be in their control with privacy laced over it as an afterthought. Both are untrue, yet the increasing weight of heavily funded headline-stealing organizations will make it hard to get more rational and expert voices through the fog. The truth is a privacy-first platform design, with small datasets and federation/interoperability standards, can perform faster training of higher quality. Another critical challenge is the “undo” feature necessary for privacy to be effective. In other words, if someone can forget something you tell them, you regain your privacy. AI vendors aren’t working hard enough on this problem, even though there already are solutions available to them such as W3C Solid. – Davi Ottenheimer, VP of Trust and Digital Ethics, Inrupt

AI Meet the Three S’s: Before widespread adoption, AI and LLMs need to be smart, safe and at scale. AI offerings will incorporate computer vision for real-time automated decision-making from video content. With that, companies can use AI for threat detection at public events, video-enabled flight boarding and grab-and-go stores. – Younes Amar, VP of Product at Wallaroo.ai

Big Data

Investment in digital transformation will be a priority on the CIO agenda for 2024, especially with rising inflation, as this will allow for greater risk management, reduction in costs, and improved customer experience. Additionally, following the trend we’ve seen this year, there will also be a continuous investment in generative AI. Equally crucial in assessing our initial business needs and objectives is our commitment to establishing guidelines that prioritize responsible use. Finally, as an industry, I believe we need to embrace data silos. We can’t omit silos, so we need to better enable them and give them the ability to pull the vetted data they need. – Danielle Conklin, CIO at Quility

The innate characteristics of big data – volume, velocity, value, variety, and veracity – remain the same yearly, while evolving technologies that emerge each year helps us to use domain knowledge to contextualize data and gain more insights, accelerating business transformation. – Dr. Ahmed El Adl, Senior Advisor, Sand Technologies

Big data insights won’t be just for data scientists anymore: The ability to extract meaningful business insights from big data has largely been the domain of the highly specialized data scientist. But, as in cybersecurity, these experts are rather few and far between, and more and more teams are placing demands on this finite resource. In the coming year, we’ll see this change exponentially. Data fabric platforms, and data science and machine language (DSML) platforms, are changing the game, unifying and simplifying access to enterprise data. The more user-friendly interfaces of these platforms give more people on more teams the ability to see and act on threats or other challenges to the business. The democratization of data comes none too soon, as advancements in AI are making it easier for bad actors to infiltrate. With more eyes watching and able to take protective action, enterprises have a real shot at staying ahead of threats. – Nicole Bucala, vice president and general manager, Comcast Technology Solutions

Chief Data Officers (or any data leaders for that matter) will need to be change management specialists first and data specialists second to be successful in 2024. Creating a data culture is the exact opposite of the “Build it and they will come” approach from Field of Dreams; CDOs have found themselves too often in a field alone with only their own dreams. You have to bring the “data dream” to all areas of the organization to make a data-driven culture a reality; generative AI is the most tangible and relatable vessel that CDOs have ever had to do just that. – Niamh O’Brien, Senior Manager of Solution Architecture at Fivetran

In the upcoming year, we predict a growing demand for evolved data lakes and how genAI can help make Big Data more accessible for organizations. Business leaders will be seeking more than just an organized storage space; they will be looking for an intelligent and interactive platform that fosters meaningful dialogues with data, translating it into actionable insights. Large Language Models (LLMs) in genAI have introduced new opportunities to bridge the gap between Big Data and decision-making. Powered by LLMs, Intelligent Agents will have the inventive ability to understand and respond to natural language queries, breaking new ground for businesses as it will allow their users to engage with data in a conversational manner. This shift propels organizations toward well-organized data repositories, empowering users to have useful understandings with their data. – Nirav Patel, CEO, Bristlecone

2024 is the year we stop moving data and start working with data in place: Data growth has outpaced connectivity for over two decades, leading to an exponential problem. Exponential problems can suddenly become overwhelming, like a jar filled with grains of sand that are doubled daily. One day it’s half full; the next it’s overflowing. Data transfer rates cannot meet our needs, prompting solutions like Amazon’s AWS Snowmobile, a 45-foot-long shipping container pulled by a truck designed to transport exabyte-scale data. We’ve reached a point where we can’t move all the data to where it needs to be analyzed or used – we’ve shifted from data centers to centers of data. Exabytes of data are generated daily at the edge (e.g., factories, hospitals, autonomous vehicles) to power new AI models. However, our AI ecosystem primarily resides in the cloud, and shifting this immense volume of data from the edge to the cloud is not feasible. In 2024, we foresee the rise of tools that allow us to work with data in place without moving it. These tools will enable cloud applications to access edge data as if it were local or data center apps to access cloud data as if it were local. Welcome to the era of data everywhere. – Kiran Bhageshpur, CTO at Qumulo

Sync or Sink – Companies are using Data to Drive Success: The ability to connect past results and current realities with probability of success for future objectives through effective data utilization will emerge as a defining factor in 2024. This will separate the successful innovators from those struggling to keep pace with evolving market demands. Often, data resides not in one or two or three systems, but in hundreds, sometimes thousands, of different systems. Looking forward, we can anticipate a growing divide between organizations that harness the power of data for strategic alignment and predictive capabilities and those that neglect this critical aspect. Those adept at synchronizing data to align with their goals and OKRs will continue to thrive, leveraging their insights for rapid adaptation and maintaining a competitive advantage. In contrast, organizations failing to prioritize data-driven decision-making risk lagging behind, facing challenges in navigating the dynamic business landscape. – Razat Gaurav CEO at Planview

Cloud

Cloud and OS agnostic high availability becomes an expected requirement for most applications: IT teams will look for application HA solutions that are consistent across operating systems and cloud reducing complexity and improving cost-efficiency. As the need for HA rises, companies running applications in both on-prem and cloud environments as well as those running applications in both Windows and Linux environments will look to streamline their application environments with HA solutions that deliver a consistent user interface across all of their environments and also for matching cloud and OS technical support and services from the HA vendor. – Cassius Rhue, Vice President, Customer Experience, SIOS Technology

The AI cloud wars between hyperscalers will take center stage – With Google’s latest investment in Anthropic, together with Microsoft’s stake in OpenAI as well as Nvidia’s support for GPU-as-a-service players like CoreWeave, we are beginning to see the emerging outlines of a new phase of competition in the public cloud driven by differentiated AI GPU clouds. In 2024, these new competition dynamics will take center stage as big tech seeks to outcompete each other in the race to realize artificial general intelligence. Nvidia will emerge as a giant competing on the same level as the ranks of Google, Microsoft and AWS. With its cutting-edge GPUs, I see Nvidia emerging as a very capable threat to big tech’s dominance in the public cloud space. Early wins could be the foretelling of the big winners. If that’s true, Microsoft is in a good position. – Raul Martynek, CEO at DataBank

Organizations will continue looking for public cloud DBaaS alternatives: What we hear from our users, customers and the market in general is that they want public cloud DBaaS alternatives. There are multiple reasons for this – for example, they may want more independence from their vendor, they may want to optimize costs, or get more flexibility around their database configurations. Right now, the market provides a limited number of alternatives to those willing to make a change. Rather than looking at DBaaS from a specific provider, there is a gap in the market for open source private database platform that gives organizations and IT teams greater control over data access, configuration flexibility, and costs associated with cloud-based databases. The growth of Kubernetes and Kubernetes operators has made it easier to implement this kind of approach, but there are still multiple gaps around this that make it harder to deploy and run in production. Closing those gaps and making fully open source DBaaS options available will be something that comes to fruition in 2024. – Aleksandra Mitroshkina, Senior Manager, Product Marketing, Percona

Building starts with a prompt and hosting with the cloud: In the near future, AI-driven Language Models (LLMs) will keep revolutionizing server-based (virtualized) computing, where fast deployment with automation tools will drive the change. It starts with a simple prompt directing you to create a website.  Adding additional directions to direct what kind of website you are building. Cloud hosting will be top of mind, with the ability to scale, load-balance, secure, and handle large amounts of traffic as online presence grows. For reliability,  security, and flexibility, more and more users may want to switch to a multi-cloud approach, thus avoiding to be locked in by a single provider. Serverless Functions enabling running code on-demand without needing to manage infrastructure, provision servers, or upgrade hardware, will even more become the go-to architecture for developers. It simplifies the deployment process, allows for more efficient resource allocation, and will lead to  substantial savings of effort and time. As quantum computing advances, even while doing it slowly, it will disrupt traditional encryption methods. Cloud hosting providers must adapt by offering quantum-resistant security solutions to protect sensitive data. Rising energy prices will drive the adoption of more sustainable practices in cloud hosting. More providers will commit to using renewable energy, reusing wastewater, reducing carbon footprints, and promoting eco-friendly cloud services. – Mark Neufurth, Lead Strategist at IONOS

The Year of the Private Storage Cloud: In the face of more and more data, its growing value, and unpredictable cloud costs, large organizations across industries, academia, and research strategically choose to deploy their own in-house and co-located storage clouds. At increasingly massive scale ranging from petabytes to exabytes, organizations will look to rein in the long-term cost exposure of hosting all their data in other people’s clouds, dedicating their public cloud storage resources to actively used data sets, ongoing workloads, offsite copies and compliance use cases. Multi-tiered, private storage clouds, complemented by managed and hybrid cloud services from strategically chosen suppliers, will be built with better cost efficiency, accessibility, and performance to become a corporate-wide strategic asset available to the entire enterprise. – Tim Sherbak, Enterprise Products and Solutions Marketing at Quantum

2024: the Rise of the Data Cloud to Advance AI and Analytics: While data clouds are not new, I believe there will be a continued emergence and a clear distinction made between data clouds and compute clouds in 2024. With compute clouds like AWS or Azure, we have had to assemble and stitch together all the components needed to work with AI. So with data clouds, like Snowflake or Microsoft Fabric, users have it all pre-packaged together in a single platform, making it much easier to run analytics on data needed to build AI systems. The rise of the data clouds will offer a better starting point for data analytics and Artificial Intelligence (AI) and Machine Learning (ML). – Molham Aref, founder and CEO of RelationalAI 

Database/Data Warehouse/Data Lake/Data Management

Data Anti-Gravity Will Prevail: The notion of data gravity, which is an analogy of the nature of data and its ability to attract additional applications and services, no longer exists. Every organization with a modern data strategy needs a data warehouse alongside a data lake, if not multiple ones, to fulfill their business needs. In the last two decades, data warehouses and data lakes became popular to solve enterprise data silo problems, yet what they created were even bigger problems. This is because data warehouses and data lakes are comprised of both on-premises and cloud systems, and they are often geographically dispersed. Also, even though every cloud service provider tries to solve many data and analytics problems independently, most organizations run their data and analytics in a multi-cloud environment, cherry-picking products, and services from two or more cloud service providers. This is why data anti-gravity, where data and applications remain distributed across regional and cloud boundaries, will be the new norm in 2024 and beyond. Other factors contributing to data anti-gravity will be the rising costs of data replication, data sovereignty, local data governance laws and regulations, and the requirement for accelerated speed-to-insights. As the data anti-gravity trend continues, data management leaders should invest in technologies that are built on the premise of distributed data management. – Angel Viña, CEO of Denodo

Data models will reach a tectonic shift away from highly structured traditional databases. As more companies integrate AI capabilities to gain a competitive edge and transform the real-time pace of business, the historical approach to data management will fall by the wayside and there will be a need for a new data model to take its place. – Quentin Clark, Managing Director at General Catalyst

A new class of data warehousing will emerge: Snowflake, BigQuery, and Redshift brought enterprise data to the cloud. In 2024 we’ll see a new generation of databases steal workload from these monolithic data warehouses. These real-time data warehouses will do so by offering faster and more efficient handling of real-time data-driven applications that power products in observability and analytics. – ClickHouse’s VP of Product, Tanya Bragin

Shrinking IT budgets force data management reckoning: With IT budgets shrinking, many high-flying companies will find their early data management decisions coming back to haunt them. For years, fast-growing businesses have tended to develop their own custom data infrastructures — sometimes by choice, other times by necessity.  The trouble is, those kinds of bespoke systems are extremely complex. The cost to maintain them is very high. You need specialists in Kafka, Redis, MySQL and 20 other services to make them all work together. The more successful you become, the greater the challenge. If you have unlimited resources, you can just throw more people and servers at the problem. But when the flow of easy money stops, you need to find more sustainable solutions and try to consolidate to reduce cost and simplify management. Already you see companies like Pinterest adopting technologies like distributed SQL to simplify their management load while improving their ability to scale their services at a lower cost. That’s just the tip of the iceberg. In 2024, we’ll see more rocket-ship companies make similar moves as they confront the need to do more with less. There is a clarion call for greater efficiency and simpler operations — for example, to consolidate databases and reduce operational complexity by using more modern multi-tenant-capable distributed SQL databases. OLTP and OLAP will also converge for many use cases, especially in real-time analysis of OLTP data. This solves a lot of the problems around data governance and time-consuming ETL pipelines. – Sunny Bains, Software Architect at PingCAP

SQL is here to stay: Structured Query Language or SQL is proclaimed too old-fashioned every few years and in 2024 proposals to use LLM AI tools to generate database queries will get a lot of attention. But one of the reasons SQL is the only programming language from the 1970s that still gets used so widely today is its power in querying data. You may not like the syntax. You may find its rules somewhat arbitrary. You may have gripes about learning such an old language. But for decades, SQL has proven itself again and again as the premier tool to manipulate data. It won’t be going out of fashion any time soon. – Dave Stokes, Technology Evangelist, Percona

Flexible Global Architectures Are Needed More Than Ever: The demand for global databases will come from increasing compliance requirements for data residency as well as the need to serve data with low latency to a globally distributed user base. With more countries putting data residency regulations in place, global businesses will need to evaluate their databases to ensure they can be deployed in flexible global architectures. The General Data Protection Regulation (GDPR) (enacted on May 25, 2018) is the world’s toughest data protection policy. It places strict requirements on businesses to protect personal data and privacy for EU citizens. If a business is not GDPR compliant, they can be fined up to €10 million, or up to 2% of the entire global turnover of the preceding fiscal year. These harsh penalties (along with reputational loss from media coverage) make it increasingly important for businesses to meet and comply with global regulations, wherever they are based. Having a flexible global architecture helps businesses avoid falling foul of these regulations. The demand for global databases may be a result of increasingly stringent compliance requirements, but having flexible global architectures can also improve organizational privacy hygiene. Having a flexible global architecture gives businesses the ability to adapt to changing market and customer needs, and to serve data with low latency to a globally distributed user database. – Yugabyte‘s VP of Strategy and Marketing, Suda Srinivasan

Rise of the Data Lakes and Fall of Data Lake Vendors: While some companies may choose to collect less data, increasing regulatory requirements mean that most teams have no choice but to do more with less. As they struggle to find cost-effective means to store data of unpredictable value, companies are increasingly reconsidering data lakes. Once considered the final resting place for unstructured data, I see the migration to data lakes accelerating in 2024, driven by increasing storage costs, as well as advancements in query capabilities across data lakes and object storage, and the comparative ease with which data can be routed into them. With the ability to quickly and cost-effectively search large data stores, companies will start using data lakes as a first stop, rather than a final destination for their data. This will cause a shift of data volumes out of analytics platforms and hot storage into data lakes. In contrast to this growth, we anticipate data lake vendors who are not best-of-breed may see slowing growth and consolidation next year, as the market matures from theory and deployment to reality and utilization. For the segments of industries that experienced outsized growth leading into the looming economic downturn, this pain will be more acute, and data lake vendors are definitely on that list. – Nick Heudecker, Senior Director, Market Strategy & Competitive Intelligence, Cribl

English will replace SQL as the lingua-franca of business analysts: We can anticipate a significant mainstream adoption of language-to-SQL technology, following successful efforts to address its accuracy, performance, and security concerns. Moreover, LLMs for language-to-SQL will move in-database to protect sensitive data when utilizing these LLMs, addressing one of the primary concerns surrounding data privacy and security. The maturation of language-to-SQL technology will open doors to a broader audience, democratizing access to data and database management tools, and furthering the integration of natural language processing into everyday data-related tasks.- Nima Negahban, CEO and Cofounder, Kinetica

Open formats are poised to deal the final blow to the data warehouse model. While many anticipate the data lakehouse model supplanting warehouses, the true disruptors are open formats and data stacks. They free companies from vendor lock-in, a constraint that affects both lakehouse and warehouse architectures. – Justin Borgman, Cofounder and CEO, Starburst

Data first architecture means and data management strategies: We’re about to see another explosion in the data people are keeping. By 2025, global data creation is projected to grow to more than 180 zettabytes. Data is becoming more valuable to organizations, even if they don’t know how they’re going to use or need it in the long term. The data explosion will continue to drive the need for highly available and scalable solutions. To take advantage of this burst, organizations will need to democratize data across departments for a data-first approach so all things would truly benefit every aspect of an organization. – Jeff Heller, VP of Technology and Operations, Faction, Inc.

2024 is the year transactional distributed databases will hit mainstream adoption. Until recently, there was a perception that distributed databases were only useful for niche use cases. However, as AI and cloud adoption grows, and businesses expand their operation across multiple time zones and locations, an increasing number of applications will require scalability, resilience, high availability and data geo-distribution. Cloud native, distributed databases proven by industry-leading enterprises will become the obvious choice for many of these organizations. Taxing data residency legislation and the need to be compliant will further push adoption. We expect to see major players such as AWS, Google Cloud and Microsoft Azure announcing more distributed relational database capabilities to capitalize on this trend in the coming year. – Karthik Ranganathan, Founder & CTO at Yugabyte

Disruption in Data Management: There’s more water in this world than land, but drinking water remains rare and precious. It’s the same thing with data. Most organizations are swimming in data and yet most of that data is not camera-ready when it comes to decision-making. In 2024, LLM and semantic-search driven innovations solving the challenge of accessing company-wide knowledge will provide seamless intranet navigation. These comprehensive solutions will disrupt established Document and Content Management System (DCMS) vendors. The all-in-one nature of this capability will redefine how organizations manage and utilize their internal knowledge resources. – Dr. Richard Sonnenblick, Chief Data Scientist at Planview

Rise of the Data Lakes and Fall of Data Lake Vendors: While some companies may choose to collect less data, increasing regulatory requirements mean that most teams have no choice but to do more with less. As they struggle to find cost-effective means to store data of unpredictable value, companies are increasingly reconsidering data lakes. Once considered the final resting place for unstructured data, I see the migration to data lakes accelerating in 2024, driven by increasing storage costs, as well as advancements in query capabilities across data lakes and object storage, and the comparative ease with which data can be routed into them. With the ability to quickly and cost-effectively search large data stores, companies will start using data lakes as a first stop, rather than a final destination for their data. This will cause a shift of data volumes out of analytics platforms and hot storage into data lakes. In contrast to this growth, we anticipate data lake vendors who are not best-of-breed may see slowing growth and consolidation next year, as the market matures from theory and deployment to reality and utilization. For the segments of industries that experienced outsized growth leading into the looming economic downturn, this pain will be more acute, and data lake vendors are definitely on that list. – Nick Heudecker, Senior Director, Market Strategy & Competitive Intelligence, Cribl

Data Engineering

AI Technology Will Not Replace Developers: AI is moving to the forefront of software development, with IT leaders using AI to speed time to market and alleviate the developer shortage. While generative AI–based tools can speed up many common developer tasks, complex tasks remain in the domain of developers for now. AI technology will be used to augment developers rather than replace them as some tasks continue to demand skilled developer expertise. – Jason Beres, Sr. VP of Developer Tools at Infragistics

AI-generated code will create the need for digital immune systems: In 2024, more organizations will experience major digital service outages due to poor quality and insufficiently supervised software code. Developers will increasingly use generative AI-powered autonomous agents to write code for them, exposing their organizations to increased risks of unexpected problems that affect customer and user experiences. This is because the challenge of maintaining autonomous agent-generated code is similar to preserving code created by developers who have left an organization. None of the remaining team members fully understand the code. Therefore, no one can quickly resolve problems in the code when they arise. Also, those who attempt to use generative AI to review and resolve issues in the code created by autonomous agents will find themselves with a recursive problem, as they will still lack the fundamental knowledge and understanding needed to manage it effectively. These challenges will drive organizations to develop digital immune systems, combining practices and technologies for software design, development, operations, and analytics to protect their software from the inside by ensuring code resilience by default. To enable this, organizations will harness predictive AI to automatically forecast problems in code or applications before they emerge and trigger an instant, automated response to safeguard user experience. For example, development teams can design applications with self-healing capabilities. These capabilities enable automatic roll-back to the latest stable version of the codebase if a new release introduces errors or automated provisioning of additional cloud resources to support an increase in demand for compute power. – Bernd Greifeneder, Chief Technology Officer and Founder, Dynatrace   

Agile Development: Co-pilots Become BFFS with Product Teams: Co-pilots will take on an increasingly important role in helping product teams with conceptual problem-solving. Co-pilots will review and inspect user stories, design choices and code written to solve a problem. As it learns, the co-pilot will accumulate an extensive library of solutions shortening the agile lifecycle. It will become a best practice engine for the product and engineering team, helping guide them to a solution without wasting precious time and money exploring a multitude of avenues, as happens currently. This streamlining of the SDLC will accelerate the delivery of high-quality software and applications. – Rupert Colbourne, CTO of Orbus Software

The role of the data engineer has radically expanded over the past decade. We even used to call them ETL developers, then it was big data engineers. The skills required for the modern data engineer range from ETL designer, to data modeller, to SQL developer, Site Reliability Engineer, to security analyst, and now with the advent of AI and Machine learning, to python programmer and data scientist. There are new technologies to learn, from vector databases, large language models to train and tune, whether it’s ChatGPT, Bert, Claude, Lambda or beyond, plus there are new AI tools to use from AWS Bedrock, Azure OpenAI, Anthropic, and Databricks: it is all linguistic soup. The next 12 months will be the year that tech companies make life simpler for data engineers. Tools will come to market, be integrated into existing platforms to enable adding generative AI to existing data pipelines with the ability to deploy these models internally so that users can interact live with these models just like they already do with ChatGPT. Regardless of the tools that come to market, the next year will also see huge demand for data engineers to retrain to master prompt engineering, how to fine tune these models, how to massively increase their productivity. The next year will see data engineers’ lives get so much more interesting. – Ciaran Dynes, Chief Product Officer, Matillion

Data Governance and Regulation

40% of enterprises will proactively invest in AI governance for compliance. With the EU due to pass the new EU AI Act soon, the US rushing regulators to produce AI and generative AI collaterals, and China’s recent genAI regulation, some companies will push even more on AI compliance. Failure to do so means missing compliance deadlines and having to retrofit AI governance which increases complexity, cost, and time. To meet current and future compliance requirements, enterprises will invest in acquiring new technology, filling the talent gap, and securing the third-party support they need. – Forrester

As AI regulation evolves, clarity around liability will be a catalyst for progress and adoption: One of the biggest questions about AI going into 2024 is centered around liability. The EU’s AI Act is proposing restrictions on the purposes for which AI can be employed, placing more scrutiny on high-risk applications, but President Biden’s October 30th executive order concerning AI took a slightly different tune, focusing on the vetting and reviewing of models and imposing restrictions and standards based on that. On both sides, liability and indemnity remain murky. In 2024, as these regulations fall into place, industry leaders will begin to get more clarity around who is liable for what and AI insurance will emerge as regulators and industry leaders look to harden the vetting and reviewal process, both in production and in development. – Joe Regensburger, VP of Research & AI SME at Immuta 

Data governance will evolve into data intelligence: Data loss prevention and protection strategies ruled the roost during the early days of data governance. Although still useful for meeting governmental requirements, these tools may impede the effective exploitation of data. When data is locked away tightly, stewards can’t understand how their data is used, moved or accessed, so they cannot effectively improve their data storage and implementation practices. But I foresee a change coming soon. Yes, data governance will remain vital for maintaining compliance. However, evolved data intelligence capabilities have now emerged, allowing practitioners to not only control data but also understand it — and these capabilities are a must in the modern business world. Mining metadata to comprehend its lifecycle will allow teams to more effectively support their business requirements. These enlightened governance strategies will help organizations achieve mutual goals of data compliance while also uncovering granular data insights. – Brett Hansen, Chief Growth Officer at Semarchy

AI will be dragged through a messy regulatory maze. Regulations will rain down on AI from all corners of the world, creating a complex regulatory maze that will be challenging for companies to navigate. Specifically, within the United States, AI regulation could and likely will vary on a state-by-state or even a city-by-city basis, similar to how tax laws currently vary by jurisdiction. In 2024, as organizations work to address a patchwork of regulatory AI frameworks, they must ask themselves: ‘Should AI be enabled here, and if so, how?’ – David Lloyd, Chief Data Officer, Ceridian

The U.S. is unlikely to enact laws related to AI in 2024:  If history is any indication, it will take a long time for legislators to develop a working knowledge about AI, understand their options, and develop a sufficient consensus to enact a law. Predicting the outcome of any complex political process is difficult, especially with an impending presidential election. However, there is a sense of urgency given how generative AI took hold of the public’s imagination in 2023, which may have been an impetus for President Biden’s Executive Order (EO) on Safe, Secure, and Trustworthy AI. In lieu of federal law to guide the use and development of LLMs and AI, the  EO will help to further AI safety and security by leveraging the power and resources of the Executive branch departments, such as Homeland Security, Defense, Energy, Commerce, etc. The government’s influence on markets via its broad purchasing power will also be leveraged to drive the development and adoption of safety and security controls. – Maurice Uenuma, VP & GM, Americas at Blancco

Trusted data will become the most critical asset in the world: The critical role of trusted data in AI systems is becoming a cornerstone for the future of technology. Ensuring the information and data that come out of the AI system are trustworthy is just as critical. In a world that’s inching closer and closer to artificial general intelligence (AGI), knowing what to trust and who to trust will be critical to everything we learn and everything we think we know. Highlighting this shift, Forrester predicts that domain-specific, Large Language Model (LLM)-infused digital coworkers will soon assist 1 in 10 operational tasks. When tailored to specific business needs, these LLMs promise substantial investment returns. This trend has led organizations to focus more on finding, understanding, and governing high-quality, dependable data, which is vital for training AI models tailored to specific business requirements. The result is that AI governance is going to gain importance quickly. It involves more than just managing data; it’s about understanding the entire lifecycle of information and models. The analogy of data as the new oil now seems insufficient in the era of generative AI and the challenges hallucinations bring. Merely amassing and analyzing large data sets is no longer adequate in today’s business environment. In 2024 and beyond, trusted data – and all the tools associated with building trust in data – will be the number one commodity for organizations. – Satyen Sangani, CEO and co-founder of Alation

Generative AI adoption will slow amid regulatory hurdles, shifting focus to enterprise data usability: After its 2023 limelight, generative AI will face regulatory headwinds in the new year, causing businesses to tread more cautiously into 2024. The looming regulations and mounting security concerns are prompting organizations to hit the brakes on wholesale adoption. While pilot initiatives will be numerous, many may not achieve the desired outcomes, tempering enterprise enthusiasm. As the AI evaluation intensifies, vendors will face heightened scrutiny. Yet, this scrutiny could pave the way for a more data-centric, user-friendly application landscape. – Nick Heinzmann, Zip Head of Research

Robust governance frameworks will drive enterprise-wide generative AI adoption: In 2024, we’ll see enterprises advance their governance frameworks to unlock broad benefits and productivity gains from meaningful application of Generative AI. Executive buyers understand how ungoverned deployment of Generative AI can damage their organization and reputation. Unsurprisingly, the top two reasons for not yet implementing generative AI came down to data privacy concerns and lack of trust in generative AI results. Therefore, any governance framework must give executive buyers confidence that it can effectively manage the risks associated with the AI applications, including their embedded LLMs, the end users of those applications, and the exchanges between the first two. – Suresh Vittal, Chief Product Officer, Alteryx

Regulations and guardrails are needed to build safe, trust-worthy AI-based functionality. However, how requirements are formulated and governed is even more critical to ensure fair opportunity and prevent indirect bias. For example, when training a general-purpose AI data model, a series of data points, such as gender and race, can result in biased outputs. Creating guardrails in this context would be critical to ensure fair and balanced output. On the other hand, the same data points would be critical in training a medical assistant model. There is no one-size-fits-all solution here—guidance needs to be domain-specific and carefully drafted to create opportunities, provide security, and prevent harm. – Atena Reyhani, Chief Product Officer, ContractPodAi

Data Integration, Data Quality, Datz Pipelines, DataOps

Businesses Big and Small Will Prioritize Clean Data Sets: As companies realize the power of AI-driven data analysis, they’ll want to jump on the bandwagon – but won’t get far without consolidated, clean data sets, as the effectiveness of AI algorithms is heavily dependent on the quality and cleanliness of data. Clean data sets will serve as the foundation for successful AI implementation, enabling businesses to derive valuable insights and stay competitive. – Arina Curtis, CEO and co-founder of DataGPT

Data Quality and Integrity: In 2024, the technology landscape will witness a transformative shift as data evolves from being a valuable asset to the lifeblood of thriving enterprises. Organizations that overlook data quality, integrity, and lineage will be challenged to not only make informed decisions but also realize the full potential of generative AI, LLM and ML applications and use cases. As the year unfolds, I predict that organizations neglecting to craft robust data foundations and strategies will find it increasingly challenging to stay afloat in the swiftly evolving tech industry. Those who fail to adapt and prioritize data fundamentals will struggle to outpace their competitors and may even risk survival in this highly competitive environment. – Armon Petrossian, CEO and co-founder at Coalesce

We expect the adoption of DataOps practices we’ve seen with many of our enterprise customers this year to increase in speed and scope in 2024.  This is driven by the urgency, very often seen at the board level, to modernize the business and deliver better data-driven customer outcomes.  Modernizing data-driven outcomes requires DataOps teams to operationalize the benefits of ML, AI, and other complex data pipelines within the fabric of existing and new enterprise applications in a hybrid/multi-cloud environment.  Many customers and analysts agree, operationalizing business modernization requires an enterprise application and data workflow orchestration framework.  Getting to production at scale with agility, data quality, and governance built-in is where DataOps, with the emphasis on “Ops”, is delivering real, bottom-line enterprise value that will accelerate in 2024. – Gur Steif, President of Digital Business Automation, BMC Software

Data Mesh, Data Fabric

Data fabric and data mesh will continue to be hot topics as companies look to share data across distributed environments. Implement a data mesh architecture. Let each business unit design its own data solution and then only connect it to the components of the bigger scale they need. – Manish Patel, Chief Product Officer at CData

Data Products Will Rise in Importance: 2024 will be a pivotal year for the ascent of data mesh, which embraces the inherently distributed nature of data. In contrast with traditional, centralized paradigms in which data is stored and managed by a central data team that delivers data projects to business users, data mesh is organized around multiple data domains, each of which is managed by the primary business consumers of that data. In a data mesh, the role of IT shifts to providing the foundation for data domains to do their work, i.e., the creation and distribution of data products throughout the enterprise. The turning point will be the realization that data products should be treated with the same level of importance as any other product offering. Take, for instance, a Tylenol capsule: its value is not just in the capsule itself but in the comprehensive package that earns consumer trust—from the description and intended use to the ingredient list and safety measures. Similarly, data catalogs act as the crucial “packaging” that turns raw data into reliable, consumable assets. In this data-centric era, it is not enough to merely package data attractively; organizations need to enhance the entire end-user experience. Echoing the best practices of e-commerce giants, contemporary data platforms must offer features like personalized recommendations and popular product highlights, while also building confidence through user endorsements and data lineage visibility. Moreover, these platforms should facilitate real-time queries directly from the data catalog and maintain an interactive feedback loop for user inquiries, data requests, and modifications. Just as timely delivery is essential in e-commerce, quick and dependable access to data is becoming indispensable for organizations. – Angel Viña, CEO of Denodo

Data Observability

Data observability emerges as a critical trend, proactively ensuring data quality and addressing anomalies throughout data pipelines. The 5 key pillars of Data Observability are Lineage, Quality, Freshness, Volume, and Schema Drift. Active monitoring of these pillars in cloud setups can result in significant cost savings, potentially reducing costs by 30-40%. The significance lies in the fact that high-quality data is imperative for informed decision-making. Ensuring proper observability across the landscape enables users to access trustworthy and curated data assets for valuable insights. – Arnab Sen, VP, Data Engineering, Tredence Inc. 

With hype comes promises, but with promises comes, inevitably, disappointment. AI in observability will be driven by time and cost savings. Users will seek more thoughtful solutions that abstract away time-consuming tasks by automating routine processes. We won’t see AI take jobs from SREs, DevOps specialists, or engineers but rather see AI become a trusted tool to understand systems quickly through signal correlation, anomaly detection, root cause analysis, and performance optimization. AI-driven tools will free up valuable human resources, allowing teams to focus on more strategic and creative aspects of system management. However, it’s crucial to approach AI in observability with a realistic perspective, understanding that while it holds the potential to revolutionize the field, it is not a panacea. That’s why 2024 will be the year we learn to strike the right balance between human expertise and AI-driven automation. – Marc Chipouras, Senior Director of Engineering / Office of the CTO at Grafana Labs

Observability is recognized as a Data Problem: Despite pouring $17 billion into observability and monitoring tools each year, enterprises are seeing a negligible impact on mean-time-to-resolution (MTTR) — in fact they are increasing. Why? Modern distributed applications are complex, they change multiple times a day which leads to DevOps teams seeing ‘unknown’ problems in production every day. When troubleshooting an ‘unknown’ problem, DevOps team must triangulate on data points to determine where the problem may be occurring. That’s where the problems start, some data points are in a logging tool,a monitoring tool, or an APM tool.The best practice is often to screenshot what each tool is showing and post in a Slack channel so the final decision maker can correlate. This is not sustainable. For Observability to deliver on its promise, the observability data must be in one place — not in several siloes. If the data is in one place it’s easier to navigate, find relevant context for the incident being investigated, and for the DevOps team to collaborate in one consistent interface (that’s not Slack!) – Jeremy Burton, CEO of Observe

As AI adoption surges, observability will emerge as essential to oversee the increase in data and complexity: Generative AI and code copilots will make us superhuman, right? Rising productivity is going to cause an explosion in the number, scale and complexity of things that we build in the coming years. In the next 5 years, you will have to observe many more things, like environments, applications, microservices, code pushes and clusters. Thanks to AI, observability solutions will have to deal with far more variety and volume of data — open standards will become much more important. – Arijit Mukherji, Distinguished Architect, Splunk.

Deep Learning

Deep fake danger: 2024 will bring forth a slew of deep fake dangers consumers should be wary of – especially in virtual customer service settings. Identity and verification (ID & V) is a standard practice in most industries, where customer identity and right to transact is established. However, if a customer generates a fake image implicating that a company’s product was used to commit a crime, deep fakes have the potential to overcome biometric verification and authentication methods – making identity theft far easier. And this is just the beginning. Deep fake tech is in its infancy, and will only get better and more cunning. Fortunately, more predictive signals can be used to detect that fraud is potentially occurring, given that stolen identity can mean that bad actors can pass ID & V in some circumstances. Technology is evolving to address these issues, and we’ll undoubtedly see major tech innovation in the year on both sides of the coin. – Brett Weigl, SVP & GM – Digital, AI, and Journey Analytics, Genesys

Adversarial AI Will Skyrocket in 2024: Large Language Models (LLMs) hold a lot of promise—but they are nowhere near their maximum potential. In 2024, as public LLMs become more accurate and powerful, we’ll simultaneously see an uptick in Adversarial AI. LLMs can already perform standalone vulnerability research, exploit implementation, and execute attacks, including custom obfuscation and malware builders like we’ve never seen before. Furthermore, existing tools have proven they cannot address zero-day threats, creating the need to fight AI with AI—specifically, a more advanced form of it, Deep Learning (DL). Only DL, the most sophisticated form of AI, is best positioned to combat these zero-day Adversarial AI threats. – Yariv Fishman, Chief Product Officer, Deep Instinct

Generative AI

Generative AI Will Move to Modern Data Management. Historically, data management is a bit of a black box with highly technical skills required to create a strategy and manage data efficiently. With the help of LLMs, modern data management will change its framework, allowing users to participate in the entire data stack in a fully governed and compliant manner. – Vasu Sattenapalli, CEO at RightData

AI will reach “plateau of productivity”: In 2023, with the release of ChatGPT, we witnessed inflated expectations and billions of dollars poured into AI startups. In 2024, we’ll start to see more Generative AI Act 2.0, with companies building not just a foundation model, but a holistic product solution with workflows re-imagined. We’ll see the market transition from the noise of “everyone can do everything” to a few GenAI winning companies delivering real value. – Tim Shi, co-founder and CTO of Cresta

There is going to be a rapid shift from infrastructure-based Gen AI to local Gen AI because right now, that’s not really possible. The average startup doesn’t have thousands of dollars to throw at a cloud provider and it will prove almost impossible to run by yourself but that is changing quickly with the innovation around local generative AI. With it going local, you will have a complete RAG stack under your control with your access controls. That way, you won’t have to expose your proprietary data in any way. When we go from centralized, API-based LLMs to local LLMs, it will happen quickly. The ones that will work will be adopted like wildfire. Just be mindful of the downside as de-centralized LLMs introduce the concept of bad actors in the loop. –  Patrick McFadin, VP of Developer Relations, DataStax.

Generative AI will not only be generative, it will also be perceptive. Generative AI will lean further into its ability to learn and discern what works and what doesn’t to continually optimize. AI will get better at learning at an accelerated pace. The implications are likely inconceivable today. – Vipul Vyas, SVP of Go-To-Market Strategy, Persado

In 2023, we saw generative AI take off and many companies jumped on implementing and using genAI-powered technologies, and they are now realizing the implications of this rapid adoption both internally and externally, namely in regards to trust and security. To account for this, in 2024, the market will need to create and adopt new solutions focused on reestablishing trust within today’s digital world. We can expect to see an uptick in solutions that focus on verifying digital assets online, as well as digital agreements. With digital transactions at the core of every business, we need to prepare ourselves for an upgrade in innovation and bleed confidence into every interaction we have with customers. – Will LaSala, Field CTO at OneSpan

The practical uses of GenAI will crystalize in 2024, but not in the way we think – Data predictability will be high stakes in 2024, where “quality data in quantity” will be in high demand to inform a new era of AI models. However, despite conversations about full automation, GenAI will prove its worth as a valuable decision-support tool, acting as a “back-pocket data scientist” that will increase both access to critical data and speed to insights that would otherwise have been more challenging to obtain, democratizing data access for better decision making. – Geotab CEO Neil Cawse

The Use of Artificial Intelligence and LLMs Will Stretch IT Security‍ – No one argues that AI is here, and here to stay. However, the amount of new data that AI creates will require new ways to manage it. This is also true for the number of new applications including SaaS that are created using AI and associated tools and solutions. The one constant throughout the growth of this new data source is that like many SaaS applications in existence today, there are fewer than a handful of solutions available to protect and recover the varied data sources at enterprise-class scale. The acceleration of AI coupled with the rate of delivery of new SaaS services will also focus IT on regaining control of modern IT environments through proactive management and visualization. Knowing what you need to control and manage starts by understanding what you have in your IT environment. And, you will see innovative uses of AI to do this.  – Subbiah Sundaram, SVP, Products, HYCU, Inc

In 2024, as generative AI becomes more democratized, data storage will drive AI success.  Data-hungry AI will compel data centers and winning enterprises toward high-density hard drive storage to futureproof their data value by saving raw data sets as well as insights produced by AI and LLM processing. With massive amounts of data being available to users—291ZB to be generated in 2027 (IDC)—the speed at which data is growing will intensify this need.  – B.S. Teh, EVP and Chief Commercial Officer at Seagate Technology  

Generative AI hype train will continue to grow exponentially – I think we’re still in a GenAI hype cycle, and I tend to be very practical. Things around GenAI have been very compelling. We hardly talked about GenAI a year ago; now we do, which is excellent. Generative AI will be the future of user interfaces. All applications will embed generative AI to drive user interaction, which guides user productivity. Companies are embedding GenAI to do semantic searching to solve some of those old data problems – discovery becomes easier, creating pipelines becomes more accessible – Dremio CEO Sendur Sellakumar

Companies will require AI/Gen AI training to upskill employees – AI is no longer a question – and the GenAI boom is in full swing. Companies are moving fast, with employees expecting to benefit from GenAI’s capabilities in their day-to-day work. To make AI/GenAI successful at companies will require Mandatory trainings to upskill the workforce will be a key component not only to harness the technology’s benefits, but also in order to use it within their context and ecosystems. – Patrick Martin, Chief Customer Officer and GM of Service at Coveo

AI supermodels will rapidly replace purpose-built models for millions of use cases – LLMs will turn the process of developing AI models inside out. Instead of relying on massive real-world data sets and human intuition, developers will be able to start with knowledge-rich models, fine-tuning them with a handful of samples for precise outputs. As these models become more intelligent, extensive fine-tuning becomes less necessary, making AI development more accessible and commoditized – you’ll have to build fewer and fewer products customized to any particular model. While specialized models might still find niches in scenarios demanding specific performance or latency requirements, the trend is clear: eventually the current explosion of special-purpose models will consolidate into a “supermodel” with general intelligence that will be able to directly solve a wide array of really specific problems in AI. The rise of these supermodels will usher in an age where AI solutions are not just intelligent and deliver superior performance but are also economically viable and easier to build. – John Hayes, CEO and founder, Ghost Autonomy 

By 2025 almost all analytical systems will work with and be powered by generative AI – At its core, AI is about offering new insights, whether through analytics or generative content. Either way, success depends on data breath, data quality and data usage. Otherwise, you’ve just got a bunch of seemingly clever programs that use AI but with no purpose and no substance, and this is more rampant across organizations than many realize. Companies that were early in the machine learning and generative AI revolutions have an advantage in speed and efficiency to develop and activate accurate AI models, the key to broad deployment. Companies will want to bring AI closer to their analytical systems to draw both generative-based and ML-driven results. Other businesses racing to catch up will be drawn to more powerful and automated data systems built for the generative AI era, modernizing their data systems for better quality, flexibility, and outcomes. Those who wait…well, they’ll have a challenging 2025. – Gerrit Kazmaier, VP and GM, Data & Analytics, Google Cloud

Organizations Will Struggle to both Adopt GenAI and Leverage it Successfully: Organizations are encountering multiple challenges as they attempt to implement GenAI and large language models (LLMs), including issues with data quality, governance, ethical compliance, and cost management. Each obstacle has direct or indirect ties to an organization’s overarching data management strategy, affecting the organization’s ability to ensure the integrity of the data fed into AI models, abide by complex regulatory guidelines, or facilitate the model’s integration into existing systems. – Angel Viña, CEO of Denodo

AI Growth and Regulations Will Lead to Data Center Changes – Artificial Intelligence (AI) is on everyone’s radar. One subset of AI that we may see a focus on in 2024 is how data centers will need to grow to support the technology and the supercomputing capabilities required to generate large language models (LLM) for consumer and enterprise-based AI applications. Over the next year, there will be a greater shift in the type of processors used in data centers – away from CPUs to GPUs to support the advancement of LLMs and AI. GPUs operate at a much higher capacity and speed, giving businesses a competitive edge as they race to grow their language models and be the first in the next iteration of AI. However, companies leading the way must also navigate increasing concerns surrounding AI and the regulations that governments may impose to protect end users and secure supply chains as they are being viewed geopolitically as a national security concern. – Vito Savino, data center and wireline segment leader at OmniOn Power

Large language models will commoditize in 2024: There’s a huge race for companies today to build their own unique large language models (LLMs), like OpenAI’s GPT-4 or Meta’s LLaMA. However, I predict that these models will commoditize in 2024. The differentiation will come down to what data is being fed into the LLM and what its purpose is. This is similar to what happened in cable TV and streaming, where one monthly cable bill turned into a number of disparate streaming subscriptions. We’re seeing a similar “unbundling” of AI models, with the formation of many new companies that each have their own differentiated models. In the future, these AI models will likely aggregate back into a single technology, with data as the unique differentiator. – Spencer Thompson, Co-Founder and CEO, Prelude Security

In 2024, an important impact that generative AI will have is empowering people to discuss their financial worries or hardships without fear or embarrassment. For some, it is easier to talk to a chatbot than a live human when seeking advice about financial matters. By providing a confidential and non-judgmental way to get financial advice and support, AI will create a more financially inclusive future where everyone has access to the financial advice and support they need, regardless of their background or circumstances. – David Dowhan, Chief Product Officer of SavvyMoney

The race to build larger, smarter, comprehensive LLM models will continue, resulting in large organizations spending a fortune in the range of $100 million to $0.5B. Staying ahead of the curve in AI-powered language capabilities can lead to market dominance.  The race to build larger language models has prompted big enterprises to commit substantial financial resources, approaching or exceeding billion-dollar budgets. These investments are motivated by the potential for competitive advantage, market expansion, improved customer experiences, data monetization, research and development, infrastructure, and the anticipation of significant ROI. – Pliops

As generative AI becomes more mainstream, the potential productivity gains will significantly benefit these organizations. We will see tech leaders invest more in training, innovation centers setup and the adoption of new development platforms that maximize the value tech teams deliver. Tech leaders will need to take a two-pronged approach, enabling creative playgrounds for data experimentation while applying AI services to accelerate outcomes. All of which will be required to govern innovative creation and mitigate the risks associated with public AI models. – Miguel Lopes, VP Analyst Relations at OutSystems

How generative AI tools like ChatGPT will be applied in 2024 – We’re seeing a growing demand for enterprise solutions leveraging GPT-like technologies as companies seek to enhance employee experience and bolster productivity. The substantial volume of data needed to power LLMs will be a significant hurdle and require domain expertise to annotate and refine data for model training. We also foresee a surge in the availability of GenAI models beyond English, requiring multilingual expert annotators to ensure models achieve the same level of accuracy and quality that we’ve come to expect from Chat GPT. – Olga Megorskaya, CEO, Toloka

The Death of the Traditional Globalization Process – As marketing departments scramble to adopt new GPT/LLM technologies under the direction of the C-Suite, you can bet that an ever-increasing amount of translations will go awry. Digital marketing is about to change irrevocably. All marketing will be global from the start. This will be the death of the traditional globalization process, where you start in one language and then translate into another. Everything will be created in different languages from the get-go.  Why? LanguageAI Platforms powered by LLMs are fluent and produce marketing copy in a brand’s own tone and voice, even to the point that they can replicate the same level of emotional engagement. – Bryan Murphy, CEO of Smartling

A year into the ChatGPT-induced AI revolution, will we soon be surrounded by dramatic GenAI success stories or will we see the fastest collapse into the trough of disillusionment of a technology to date? Both! AI-savvy enterprises are already augmenting their most valuable employees and, occasionally automating them, and the trend will gain momentum as clear, repeatable GenAI use cases mature and investments in MLOps and LLMOps bear fruit. Meanwhile most PoCs — dazzled by the mirage of democratized, outsourced GenAI — crash headfirst into the realities of operationalizing production-grade GenAI applications, leading to widespread disillusionment. It turns out that human intelligence about AI is the most important factor for GenAI success, and ‘Generalized Pre-trained Transformer Models’ are more valuable when they are specialized for specific use cases and verticals. – Dr. Kjell Carlsson, head of AI strategy at Domino Data Lab

LLMs will assist generative AI to reason more and hallucinate less: AI is moving beyond the Large Language Model (LLM) text world of ChatGPT and the landscapes of Midjourney to Large Multimodal Models (LMMs), systems that can reason across different media types.  This is opening up new types of applications and possibilities, such as image-based inventory or virtual product support assistants for small businesses, and may help to ground future AI systems on more real-world examples that mitigate the potential of hallucination. We expect many more applications over the next 12 months, and as generative AI learns with sound, vision, and other senses, the near future may bring with it AI systems that can distinguish between reality and fiction. – Ashok Srivastava, Senior Vice President & Chief Data Officer at Intuit 

In 2024, Natural Language Processing (NLP) will likely see expanded use in interpreting clinical notes, extracting insights from medical literature, and even in direct patient interactions, such as through AI-driven chatbots for initial patient screenings or follow-ups. Automation in drug discovery and patient care will also make headway in the coming year. AI could automate routine tasks in laboratories, aid in the predictive analysis of drug efficacy, and help in forecasting patient outcomes. There will be great improvements in automation downstream, and there are solutions available that use AI to automate the clinical reporting process helping to make medical writers more effective and getting drugs to market faster. – Timothy Martin, Executive Vice President of Product & Development at Yseop

The commoditization of analytics: Natural Language Processing (NLP) has been instrumental in increasing the adoption of analytics among users. Now, the right mix of NLP and Large Language Models (LLMs) will help further commoditize analytics. LLMs have been helpful in assisting users in performing complex computations in analytics software. Analytics vendors will incorporate such features into analytics software without depending on LLMs to fill the gaps and mitigate privacy concerns introduced by LLMs. – Rakesh Jayaprakash, product manager, ManageEngine    

In 2024, the focus will be on transforming models, with increased specialisation for specific market requirements. Large language models like ChatGPT will evolve into new generations, becoming more specialised for particular use cases. There will be a big uptick in AI content usage for visual applications, such as advertising and news articles, generated by improved generative AI models. Also the current racial bias in AI will likely reduce. Right now, if you ask an AI model for a picture of a man, 90% of pictures shown will be of white men. AI models have to become more reflective of the world to continue to keep up. – Steve Harris, CEO at Mindtech

ChatGPT Will No Longer Be the Prevailing Technology for the Enterprise by 2025: Like most first movers in technology, ChatGPT will become less and less relevant as the year progresses. Local LLMs like Llama2 (and whatever comes next) will become the engine of corporate AI. There are many reasons for this, but data security and the ability to influence the results by augmenting a local LLM with industry-specific content are likely to be the two that drive this change. – Jeff Catlin, EVP of AI Products at InMoment

Gen AI Will Evolve the Role of the Sales Rep: As B2B companies focus on increasing the revenue per sales rep, they will look to equip every sales rep with a virtual assistant through AI. And to grow efficiently, they’ll need to lean heavily on GenAI capabilities within their sales and go-to-market tech stack. By automating certain tasks around prospecting, customer research, and engagement channels, the average sales rep will be able to spend 50% more time on creative problem-solving and productive actions, according to Forrester Research. – Henry Schuck, CEO of ZoomInfo

ChatGPT 3-year anniversary – Three components of our lives are now in flux because of the proliferation of generative language models (and Chat-GPT in particular) across education, knowledge management, and individual assistants. In 2024, we’ll see a large portion of the older methods to teach and test knowledge are obsolete with the arrival of LLMs. As always, new opportunities arise simultaneously –but the next wave requires education to become a far more interactive and “project-centered” model, where people build to learn. For years, organizations stored internal data, aware of its value, but unable and limited in their ability to analyze and distribute it. With the introduction of LLMs, bigger organizations are overhauling their datasets to make this critical knowledge accessible to every employee –creating an unprecedented productivity boost we’ve not seen for years. Chat-GPT is becoming more of a mainstream tool, “individual assistant-like” for White-collar workers from lawyers, doctors, and accountants. Generating original content at surprisingly high quality to support their writing, creativity, and email production increases productivity and frees up time for the truly important things in each of those roles. – Ivan Yamshchikov, Head of Ecosystem Strategy and Research Professor, Toloka

In 2024, Businesses Will Make Sense of GenAI in AIOps: As generative AI continues to gain momentum in mainstream business, expect to see a ‘leveling out’ in 2024 as enterprises begin to adopt standards and deploy GenAI in applications that make business sense as they pair it with AIOps. Conversational AI will drive customer-facing elements such as customer support, directing inquiries, managing UX interfaces, and the initial vetting of customers. GenAI will be used to provide personalized support to IT users, such as by answering their questions and troubleshooting problems. Beyond that, GenAI will support improved anomaly detection and prediction, automated remediation, which can free up IT staff to focus on more strategic tasks, and enhanced decision-making, providing insights that help IT leaders make better decisions about resource allocation, capacity planning, and other critical areas. – Ugo Orsi, Chief Customer Officer, Digitate

Another major conversation next year will be about the role generative AI plays in how work gets done. It’s been a year since ChatGPT was introduced to the market, and 2024 will be the year that businesses begin to sharpen their strategies around it. The efficiency gains are clear, and employees across all industries quickly caught on to the fact that services like ChatGPT have value beyond functioning like a sophisticated search engine. I think in the coming year, we’ll see more and more C-suites stepping up and embracing the opportunity to not only help their employees be more productive with AI but harness its power to build more equitable and fair experiences, both in the workplace and for customers. – Doug Dennerline, CEO and Executive Chairman of Betterworks

Normalizing conversations with GenAI: In 2024, the population will get used to chatting with GenAI models. People won’t be surprised when getting responses from LLMs when they expect to speak to a human representative. Those interactions will also normalize GenAI to ease the overall acceptance and adoption of the technology. – Elad Tsur, co-founder and CEO, Planck

Overcoming generative AI supply challenges – While AI adoption in data centers is still in the early stages, the industry must prepare for potential challenges going into next year. For starters, a projected 1.6 to 2 million H100 GPU shipments are being put into production next year, with AMD just announcing 300K-400K of their latest MI100 GPU to deliver in 2024. Neither Nvidia nor AMD can keep up with demand – customers want more. This will only add to the demand pressures that the data center industry is experiencing now, in addition to organic cloud growth. An estimated 80 to 90 percent of these shipments will land domestically in the U.S., adding 2.5 to 3.0 gigawatts of demand pressure on the market. With the recent OpenAI corporate drama highlighting existential concerns about AI’s potential to save or destroy the world, the physical bottlenecks present a nearer term concern as AI innovations may stall if capacity can’t be delivered to enable AI hardware to operate. – Tom Traugott, SVP of Strategy at EdgeCore Digital Infrastructure

AI Cold Shower: 2024 will be the year that Generative AI faces a ‘cold shower’ wake-up call, according to new data from CCS Insight. Companies have been pulled in by the overhype around AI to develop hopeful long-term objectives for productivity and transformation. With these blinders on, many have overlooked the burdens of cost, risk, and complexity involved with adopting and deploying Gen AI. And it’s only getting worse – now we’re being told that by 2027, AI may need as much electricity as an entire country. The promise of AI is huge, but resources are a problem. Not every organization or government can afford it, and not everyone has the resources to embed it into their existing systems and processes. The world is still in the early stages of developing AI regulations, and the absence of set boundaries and safety nets could put many industries at risk. We have already seen a period of fragmentation when it comes to AI. The fact is AI developments are moving faster than many are prepared for, and the technology needs different resources to run. To prevent being caught in the “cold shower” next year, organizations must strategically invest in how they will power the AI of the future (investing in things like photonics and digital twins, to address the underlying problem of inequity in resources). Harnessing the power of cutting-edge technologies can help build a smarter world, where people and society are optimized using all types of accessible, connected and cohesive information. – Tanvir Khan, Chief Digital and Strategy Officer, NTT DATA

Data Poisoning: The Newest Threat to Generative AI:   Perhaps nothing illustrates the rapid mainstreaming of machine learning and artificial intelligence more than ChatGPT. But as algorithms become a staple of everyday life, they also represent a new attack surface. This type of attack, called data poisoning, is becoming more prolific as bad actors gain access to greater computing power and new tools. Looking ahead to 2024, considering the popularity and uptake of new machine learning and AI tools, companies can expect to see an increase in data poisoning attacks which include availability attacks, backdoor attacks, targeted attacks, and subpopulation attacks. The unfortunate reality is that data poisoning is difficult to remedy. The only solution is to retrain the model completely. But that’s hardly simple or cheap. As organizations use artificial intelligence and machine learning for a broader range of use cases, understanding and preventing such vulnerabilities is of the utmost importance. While generative AI has a long list of promising use cases, its full potential can only be realized if we keep adversaries out and models protected. – Audra Simons, Senior Director, Global Products, Forcepoint Global Governments

There will be increased adoption of key-value stores from traditional applications to Generative AI applications for caching in the Prompt and Token phase of LLMs. The increased adoption of key-value stores represents a pivotal shift in the technological landscape, extending from traditional applications to the emerging domain of Generative AI applications, particularly in the context of caching during the Prompt and Token phases of Large Language Models (LLMs). LLMs, often with hundreds of billions of parameters, demand substantial computing resources for both training and inference. To mitigate latency and reduce computational overhead, organizations are turning to key-value stores as a means to efficiently cache frequently used prompts, tokens, and intermediate model states. – Pliops

Large Multimodal Models (LMMs) that support images, text, and video will steal the spotlight from LLMs and will be far more data intensive, exacerbating the infrastructure challenges and compute shortages we’re already experiencing. Methods like model routing, fine-tuning, and other forms of model customization will become essential to optimize compute efficiency and bring costs down. We’re already seeing this trend with early adopters. – Robert Nishihara, Co-founder and CEO of Anyscale

GenAI will change the nature of work for programmers and how future programmers learn. Writing source code will become easier and faster, but programming is less about grinding out lines of code than it is about solving problems. GenAI will allow programmers to spend more time understanding the problems they need to solve, managing complexity, and testing the results, resulting in better software: software that’s more reliable and easier to use. – Mike Loukides, Vice President of Emerging Tech Content at O’Reilly Media

Primary value use cases of adopting LLMs in the enterprise will finally be established. While 2023 was about dreaming up the possibilities of generative AI, 2024 will be the year that the enterprise puts it into action. After a year of speculation, businesses will finally get specific about applying LLMs to streamline their workflows. By the end of the year, there will be a handful of named scenario-based areas of value that people understand, moving us past “what-ifs” and shedding light on clear use cases. – Quentin Clark, Managing Director at General Catalyst 

Generative AI will continue to face organizational scrutiny: With the rapid growth of generative AI tools in 2023, organizations will intensify their scrutiny of the effects of AI tools on their employees and systems in the new year. One challenge is the persistence of misinformation and questions around the legality of AI tools, including the exposed source codes and the ability to determine the legitimacy of the results that employees are receiving. Leaders will need to establish methods to validate and authenticate information, while defining clear parameters determining how employees can use AI tools within their organization. – Bret Settle, Chief Strategy Officer, ThreatX

Enterprises will hesitate to fuel LLMs with their own code, curbing the impact of GenAI on modernization efforts. Though interest in applying GenAI to modernization will surge next year, enterprises will be hesitant to supply their code to train LLM models, due to security concerns and the fact that their software’s code represents their intellectual property. This hesitation will significantly limit the near term impact that GenAI will have on modernization processes, given that any GenAI-enhanced modernization technology would require massive amounts of legacy code that reside within organizations to properly train a model and thus achieve accurate and useful results. – Miten Marfatia, Founder and CEO, EvolveWare 

Extreme Hype Around Generative AI will Diminish, True Generative AI Deployments Will Emerge: In the new year, we will also begin to see the extreme overhype around generative AI start to diminish. I’ve been working in, and around, AI since the early nineties, AI has always been prone to be overhyped. Having said that, I think we are going to see enterprises actually deploying generative AI in more measured and meaningful ways. As with most new technology adoption in the enterprise, it’s going to take longer for these kinds of AI systems to become part of enterprise software in the ERP or HCM sense, but real value will start to be created next year.  We will be able to calibrate our expectations appropriately once we begin to see its true impact. – Molham Aref, founder and CEO of RelationalAI 

Generative AI-focused data compliance regulations will impact adoption: For all its potential use cases, generative AI also carries heavy risks, not the least of which are data privacy concerns. Organizations that fail to put proper guardrails in place to stop employees from potentially breaching existing privacy regulations through the inappropriate use of generative AI tools are playing a dangerous game that is likely to bring significant consequences. Over the past 12 months, the average organization that experienced a data breach resulting in regulatory noncompliance shelled out more than US$336,000 in fines. Right now, most regulatory bodies are focused on how existing data privacy laws apply to generative AI, but as the technology continues to evolve, expect generative AI-specific legislation in 2024 that applies rules directly to these tools and the data used to train them.

Moving GenAI from Pilots to Production: GenAI is influencing organizations’ investment decisions. While early GenAI pilots show promise, most organizations remain cautious about full production deployment due to limited hands-on experience and rapid evolution. In 2023, most organizations are on small and targeted trials to assess benefits and risks carefully. As GenAI technologies mature and become more democratized through pre-trained models, cloud computing, and open-source tools, budget allocations will shift more heavily toward GenAI in 2024. – Haoyuan Li, Founder and CEO, Alluxio

Generative AI-focused data compliance regulations will impact adoption: For all its potential use cases, generative AI also carries heavy risks, not the least of which are data privacy concerns. Organizations that fail to put proper guardrails in place to stop employees from potentially breaching existing privacy regulations through the inappropriate use of generative AI tools are playing a dangerous game that is likely to bring significant consequences. Over the past 12 months, the average organization that experienced a data breach resulting in regulatory noncompliance shelled out more than US$336,000 in fines. Right now, most regulatory bodies are focused on how existing data privacy laws apply to generative AI, but as the technology continues to evolve, expect generative AI-specific legislation in 2024 that applies rules directly to these tools and the data used to train them. – Veritas Technologies’ Matt Waxman, SVP and GM for data protection

Generative AI will unlock the value and risks hidden in unstructured enterprise data: Unstructured data — primarily internal document repositories — will become an urgent focus for enterprise IT and data governance teams. These repositories of content have barely been used in operational systems and traditional predictive models to date, so they’ve been off the radar of data and governance teams. GenAI-based chat bots and fine-tuned foundation models will unlock a host of new applications of this data, but will also make governance critical. Companies who have rushed to develop GenAI use cases without having implemented the necessary processes and platforms for governing the data and GenAI models will find their projects trapped in PoC purgatory, or worse. These new requirements will give rise to specialized tools and technology for governing unstructured data sources. – Nick Elprin, co-founder and CEO, Domino Data Lab

AI will Increase the Velocity of Knowledge… Eventually – Generative AI has the ability to surface valuable and actionable insights for businesses, and that is likely the most impactful and accessible benefit of generative AI that will be realized by businesses in 2024. However, unlike humans, generative AI does not proactively disseminate information to parties across the organization who might benefit from it. It can dig up a wealth of knowledge in seconds, but a user must know the right questions to ask and the right people to share the insights with for the value to be realized. Information sharing between humans is much faster than between computers today – we can instinctively pick up cues from others when there may be a gap in knowledge, we can proactively offer to share more information to help one another succeed and we might share useful details unknowingly simply by conversing with one another. In the future, I anticipate a universal programming language will be developed to allow various computers and Large Language Models (LLMs) to share information instantly with each other. This will eventually allow businesses to more easily access, share and act on the breadth of data at their fingertips, but until then, generative AI will be a powerful resource for humans as they drive the velocity of knowledge within their organizations. – Hubert Palan, founder & CEO of Productboard

OpenAI Drama Will Continue to Fill 2024: The ouster and rehiring of Sam Altman to OpenAI created news cycles jam-packed with gossip and hot takes, and I suspect OpenAI stories will continue to fill headlines all next year. The underlying catalysts – the unique non-profit/for-profit hybrid structure, the massive costs, the risks and promises of AI –  haven’t changed, and with the speed this field has been advancing, there’s ample opportunity for these forces to come to a head again and again next year. – Paul Barba, Chief Scientist at InMoment

The massive adoption of freely available LLMs and the use of ChatGPT will quickly give way to enterprises enforcing their own constraints on the use of such technology, as anything fed to these models as questions or content for further refinement or analysis is information about the user or the consumer that this model is learning. Enterprises will start preventing or even prohibiting unfettered use of this technology to protect corporate IP or the kinds of things that the enterprise is considering. Expect to see things like enterprises blocking domain names like openai.com and implementing other such controls soon. – Lalit Ahuja, chief product and customer officer at GridGain

GenAI experimentation for application modernization will start with code documentation and source code transformation. With the advent of AI-powered tools over the last few years and the promise that GenAI will further streamline modernization efforts, organizations will aim for increasingly complex modernization strategies such as refactoring their monolithic applications and creating microservices in 2024. GenAI models for application modernization will first be developed in areas where significant data is available for modeling, likely starting with documentation of legacy applications where code is translated to a plain English description for use by business personnel, and transformation of source code to modern code. – Miten Marfatia, Founder and CEO, EvolveWare

Generative AI Will Continue to Gain Traction, But Not Without Overcoming a Few Hurdles – Generative AI is gaining increasing traction in the financial services industry, with applications in customer service, fraud prevention, risk management, coding, and data analysis. It has the potential to save the industry billions of dollars annually by automating repetitive tasks and enabling employees to focus on more creative, profitable work. However, challenges such as customization for specific use cases, reliability of generated output, regulation, and a shortage of talent with expertise in generative AI need to be addressed. Generative AI is promising, but its full potential in sensitive tasks like investment decision-making is still a way off. Financial services organizations must carefully assess the safety, security, and regulatory implications of AI before widespread adoption can occur. – M-Files CEO and Founder Antti Nivala

As the “Generative AI Era” enters its second year we will start seeing more purpose and order in AI usage in enterprises: As the “wow” effect regarding what can be done with Generative AI remains prominent for a second year in a row, being fed by consequent innovations delivered by the likes of OpenAI and Google, organizations everywhere will start figuring out how to harness AI capabilities for their purposes, rather than just being astonished by the “art of the possible.” The first generation of AI capabilities in various enterprise products, focused on low-hanging, non-complex scenarios, such as all kinds of co-pilots, will no longer easily astonish and dazzle every person seeing them for the first time. The result will be a requirement that AI-powered capabilities focus on use value and being harnessed to solve real issues. – Leonid Belkind, Co-founder and CTO, Torq

Increased adoption of generative AI will drive need for clean data. The foundation of generative AI is data. That is, to function as desired, data is what provides the basis for this new technology. However, that data also needs to be clean. Regardless of where you’re pulling the data from – whether you’re using something like modeling or a warehouse of your choice – quality data will be essential. Bad data can lead to bad recommendations, inaccuracies, bias, etc. Having a strong data governance strategy will become more important as more organizations seek to leverage the power of generative AI in their organization. Ensuring your data stewards can access and control this data will also be key. – Rex Ahlstrom, CTO and VP of Innovation and Growth, Syniti  

60% of enterprise employees will receive prompt engineering training. With AI at the center of future enterprise workplace productivity for all employees, teams will need to continue to invest in Data / AI literacy programs to close the skills gap in learning how to engineer successful prompts. Don’t leave this important training to L&D — IT needs to develop BYOAI guidelines and enterprise training programs for employees to help them best leverage generative AI consistently and safely. – Forrester

More organizations will jump on the AI operating system bandwagon: Generative AI operating systems will receive more attention and investment in the year ahead. AI operating systems are the interface between artificial intelligence and everything else, from the engineers and designers leveraging generative AI tools, to the robotic systems being trained by generative AI to mimic human behavior and action in the physical world. Because of the well-documented high-stakes of widespread AI adoption, more emphasis will be placed upon the importance for organizations to build operating systems that can act as an intermediary between AI and everything else as more companies and public sector organizations embrace advanced AI technology at scale. – Ashok Srivastava, Senior Vice President & Chief Data Officer at Intuit

From Search Engine to Intelligent Assistant: How Retrieval Augmentation Generation (RAG) is Set to Improve Large Language Model Responses in 2024: As the calendar flips to 2024, one obscure term is set to captivate the tech world’s attention. Though not widely recognized until now, Retrieval Augmentation Generation (RAG) has begun to make waves as a transformative framework for technologists. RAG augments the capabilities of a Large Language Model (LLM) by capturing information from external sources, like an external knowledge base, to enhance the quality and accuracy of search responses by including data that is new to the LLM. Think of RAG as personalizing the LLM for your needs, providing the same LLM intelligent insights but from your data.  It’s like upgrading from a regular internet search to having a personal research assistant who finds exactly what you need. Financial decision makers have seen the boon that generative AI has been for other stakeholders in their organizations. Chief Investment Officers are eager to apply generative AI to reduce the “time-to-insights” gap while filtering in more information to produce more accurate results. Thanks to innovations improving RAG, sophisticated ring-fencing to ensure appropriate access to queries has become a reality. In short order, I believe RAG will continue to overcome knowledge gaps with LLMs, enhance accuracy, and serve as a solution for knowledge-intensive activities across a number of industries, including investment management. In addition, RAG can constrain which data is used for the LLM to process, which ensures that responses are only from the RAG data and not sourced from the general LLM data. RAG can also be enabled to provide citations of where the data came from so the users have confidence in the response. Enhancing security, you can have multiple RAG data sources and lock down access to certain ones. This way, only authorized users for those data sources can use the LLM for questions on that sensitive data. Looking to 2024, highly-regulated industries are expected to drive the adoption of gen AI, with RAG able to capture better information for their stakeholders. – Souvik Das, CTO, Clearwater Analytics

Generative AI will redefine the cybersecurity landscape, simultaneously promising innovation and peril – In 2024, the widespread adoption of Generative AI will be a double-edged sword for cybersecurity. While it promises significant productivity gains across various enterprise functions, the rush to embrace it will open organizations to substantial risks. Simultaneously, more restrictive regulatory frameworks and guidelines will emerge to ensure responsible adoption and compliance with data protection and security standards. In this context, enterprises that do not adopt safeguards in their adoption of Generative AI are likely to inadvertently expose company-sensitive information or customer data, leading to new kinds of data breaches and regulatory non-compliance. Simultaneously, regulatory frameworks and guidelines will emerge to ensure responsible adoption and compliance with data protection and security standards. In 2024, cybersecurity efforts will be critical to reap the benefits of Generative AI while safeguarding sensitive information and maintaining regulatory compliance. – Michael Rinehart, VP of AI at Securiti

Private LLMs Will Take Off: Concerns about data privacy and security will drive organizations in 2024 to invest in private LLMs tailored to their specific needs and datasets. These private LLMs will be fine-tuned to ensure greater compliance with regulatory standards and data protection requirements. This shift toward privacy-centric LLMs will empower businesses with more control over their AI applications, foster trust among users, and open the door to innovative and secure AI solutions in industries ranging from healthcare to finance. – Dr. Jans Aasman, CEO of Franz Inc.

Generative AI initiatives will be driven by Line of Business not IT: Executives traditionally require organizations to adopt new tools to enable new (and     better) business practices and save money, even if the users prefer to stick with what they already know. IT supports the rollout while implementation teams debate change management procedures, conduct extensive training for potentially reluctant users, and stamp out any continued use of the older tools. However, ensuring compliance and achieving the desired benefits quickly is no easy feat. GenAI will be the opposite in 2024. The enthusiasm among users for GenAI-enabled solutions is palpable, as many have already tried these tools in various forms. The user-friendly nature of GenAI, with its natural language interfaces, facilitates seamless adoption for non-technical stakeholders. However, technical teams are left grappling with inherent challenges, including hallucinations, the lack of explainability, domain-specific knowledge limitations, and cost concerns. In some organizations, the use of GenAI is forbidden until their technical teams come up to speed. Detecting ‘shadow’ usage, where individuals become suddenly hyper-productive after a brief period of quiet, adds an additional complication to the implementation challenges. Next year, organizations will work out a process to evaluate the myriad of options available and allow the business to use the few tools that are capable of addressing all of GenAI’s challenges in an enterprise environment. – Ryan Welsh, Founder and CEO of Kyndi

Generative AI (GenAI) Maturity as Table Stakes: The broad democratization of GenAI capabilities has forever reshaped the dynamics of knowledge work and the global labor marketplace, already shaken up by the pandemic and recovery timelines. The broad consensus across the industry, is while embracing GenAI may seem optional today, very soon the choice will be to embrace it or go extinct. Expect business, technology, and security decisions to be augmented by GenAI, leading to an even greater focus on AI governance and ethics requirements. An example of this push is the recently released White House executive order calling on AI vendors to ensure the trust, safety, and security of AI platforms in the context of national security and public safety. The demand for AI skills will continue to grow as innovation in this space redefines our relationship with digital ecosystems. – Igor Volovich, Vice President of Compliance Strategy at Qmulos

Unlocking GenAI’s potential will require data excellence: Data is the currency that will unlock GenAI’s potential. Without accurate, reliable data, organizations won’t be able to deliver critical results. In the year to come, CIOs will need to prioritize data quality in order to pilot and test how GenAI can best serve and advance their organizations at large. – Asana, Saket Srivastava, Chief Information Officer

The next phase of AI from Gen. AI to AGI: There is an apparent shift with Generative AI and its direction. The focus is increasingly centered around artificial general intelligence (AGI) and the rise of intelligent agents. For agents, there are two parts that will be critical in the world of AlOps and MLOps. One is purely around learning control and infrastructure management with agents ensuring automated configuration management and drift protection. The learning agent needs to understand how to make improvements, perform, give feedback and determine how the performance should be modified. This practice applies to AI infrastructure management, ensuring it’s built and test to deploy tasks by the agent. Looking at the near-future agenda, the trends within workplaces, most notably the bigger companies, will be associated with AI and organizations will need to control the agents. Organizations cannot let AI become autonomous without proper infrastructure. For the next phase of AI to reach from Generative AI to AGI, infrastructure needs to be set in place first and foremost and embedding platform engineering will be important to accelerate the delivery of applications. Organizations need configurations to work no matter where learning systems are (hybrid or private cloud). – Kapil Tandon, Vice President of Product Management in the IT Ops Business Unit, on open source, AGI, and IT Ops, Perforce

The Rise of Custom Enterprise Foundation Models (FMs): The debate around open-source vs closed source will only get heated as we move to 2024. The open-source LLMs like Meta’s Llama are catching up to the closed-source LLMs like GPT-4. Both these models come with their trade-offs with regard to performance and privacy. Enterprises would want to deliver on both fronts. The recent updates, such as OpenAI Enterprise, allow enterprises to build custom models to suit their solutions. Similarly, open-source models allow enterprises to build lightweight custom models with privacy in mind. This trend will continue, and we will see custom tiny language models take center stage. – Sreekanth Menon, Genpact’s Global AI/ML Services Leader

Universities will begin to teach prompt engineering: In 2024, universities will teach prompt engineering as a minor field of study and through certificate programs. Prompt engineering for GenAI is a skill already augmenting domain experts, similar to how computing has augmented other domains. The successful use of large language models (LLMs) relies heavily on giving the models the right prompts. When looking to fill the role of a prompt engineer, the task becomes finding a domain expert who can formulate a question with examples in a specific domain, a skill critical for today’s IT professionals to refine to successfully implement LLMs. Given this, universities will introduce new academic focus areas to address the growing demand for professionals with specific skills required to build the next generation of GenAI applications. – Greg Benson, Chief Scientist at SnapLogic and Professor of Computer Science at the University of San Francisco 

“Me Too” AI Vendors Sink as Generative AI hits a trough of disillusionment: Right now, generative AI is at the peak of its hype cycle. Next year, some orgs will begin to be disillusioned when their AI investments don’t provide the complete transformation they’re expecting. Customers will grow wary with vendors that have been late to the AI race, tacking on AI capabilities that provide little business value or compelling functionality. But orgs who weigh their expectations and use generative AI correctly – supporting proven use cases – can avoid this disillusionment and see expected value from AI. – Mike Finley, CTO of AnswerRocket   

2024 will be the year of enterprise grade open-source AI adoption. To date, there are not a lot of examples of meaningful, production-based adoption of LLMs in the enterprise. For instance, not a lot has been built around enterprise grade resilience, security, uptime, or predictability. Over the next year, a handful of companies will turn the tables by taking advantage of open source language models and making them more production-ready. This will result in more serverless, open source language models for enterprise grade scenarios to be built upon, allowing enterprises to adopt this technology in a more turnkey fashion. – Quentin Clark, Managing Director at General Catalyst 

Generative AI will become more factual thanks to retrieval augmented generation (RAG): This technology will allow engineers to feed clean business data into LLMs models to reduce hallucinations and ground outputs in on factual information. This clean business data will be generated by traditional data pipelines that handle data extraction, cleansing, normalization, and enrichment on an organization-wide scale. RAG is starting to emerge now and will see increased adoption next year as businesses seek to ensure more accurate results from generative AI. – Sean Knapp, CEO of Ascend.io

Towards AGI – Memory, Input, and Learning: The pursuit of AGI will focus on three key areas: enhancing LLMs’ long-term memory, enabling continuous input and internal state, and advancing reinforcement learning. Developments like the increased context length in Claude 2 and GPT-4 Turbo, and architectures aimed at better memory and continuous learning, exemplify this trend. Rumors of OpenAI’s Q* algorithm also indicate significant strides in this direction. These predictions for 2024 reflect not just the rapid advancements in AI and big data but also underscore the shifts in the industry landscape, where efficiency, multimodality, and deeper AI capabilities will drive innovation and competition. – Tomer Borenstein, Co-Founder & CTO of BlastPoint, Inc.

GenAI could stifle innovation: When you got your first iPhone, you quickly forgot people’s phone numbers. The same happened with your navigation abilities when you started using Google Maps or Waze. Similarly, in the coming years, we’ll see people lose their innovation skills as they become more dependent on GenAI to help generate code. We’re going to have to start thinking about how to preserve knowledge and encourage innovation in 2024. – Ori Keren, Co-founder and CEO, LinearB

Multimodal LLMs and databases will enable a new frontier of AI apps across industries: One of the most exciting trends for 2024 will be the rise of multimodal LLMs. With this emergence, the need for multimodal databases that can store, manage and allow efficient querying across diverse data types has grown. However, the size and complexity of multimodal datasets pose a challenge for traditional databases, which are typically designed to store and query a single type of data, such as text or images. Multimodal databases, on the other hand, are much more versatile and powerful. They represent a natural progression in the evolution of LLMs to incorporate the different aspects of processing and understanding information using multiple modalities such as text, images, audio and video. There will be a number of use cases and industries that will benefit directly from the multimodal approach including healthcare, robotics, e-commerce, education, retail and gaming. Multimodal databases will see significant growth and investments in 2024 and beyond — so businesses can continue to drive AI-powered applications. – Rahul Pradhan, VP of Product and Strategy at Couchbase

Generative AI will quickly move from the peak of inflated expectations to the trough of disillusionment. There’s a lot of hype right now around generative AI, to put it mildly. However, all of this hype means that for some organizations, adoption of this technology is more of a matter of “keeping up with the Jones” rather than because it is truly the best solution for a specific problem they are trying to solve for. As a result, we’re likely to see a lot of money invested in failed generative AI projects – hence, the failing into the trough of disillusionment. It’s the shiny new object and many CIOs and other senior leaders may feel pressured to be able to say they have a generative AI program in place. The key to limiting these failed projects will lie in really ensuring that your organization understands the specific reason for using generative AI, that it’s tied to a defined business outcome and there’s a method established for measuring the success of the investment. – Rex Ahlstrom, CTO and VP of Innovation and Growth, Syniti 

GenAI and Legacy technology: Why the key to modernization may reside in GenAI tools: After a year of GenAI practice, legacy businesses are starting to understand that GenAI interest is not just driven by ‘hype’, and instead could be truly transformative for their sector. Therefore, in 2024, we can expect even more traditional businesses to deploy the technology to help evolve legacy systems and modernize their technology stack. Typically, traditional companies are not amenable to change or agile enough to adopt the latest in new technology. Many companies are tied to legacy software due to a combination of outdated procurement processes, familiarity, or concerns about data loss or disruption, making modernization inaccessible. The key here is that GenAI can assist with migrating off old code bases and technology stacks to modern programming languages and platforms. However, GenAI could bridge this gap by allowing companies previously locked into legacy systems to access a more modern workforce’s knowledge and work practices. GenAI also makes some modern tools far more user-friendly, and therefore more likely to be deployed across businesses. – Greg Benson, Chief Scientist at SnapLogic and Professor of Computer Science at the University of San Francisco 

Generative AI will cause a clash between executives as they vie for control over its agenda within the enterprise: Nearly half of executives report that that their AI investments will increase next year to jump on the generative AI bandwagon, while 70% are already in generative AI exploration mode. Now that organizations are ramping up AI adoption in the enterprise, every executive wants to be the one to take their company on its AI journey. In 2024, the AI agenda will become more complex as more players enter the chat to gain control, from the CTO to the CIO to data analytics executives. The C-Suite will need to identify where their opportunities for AI lie and what conversation they must have with different departments to decide who should be the one to take the lead. In the meantime, CIOs are facing pressure from CEOs to expand their use of generative AI. In 2024, we will see CIOs continuing to push forward their exploratory AI experiments and projects as the battle continues. – Alon Goren, CEO, AnswerRocket

The ChatGPT hype will die down as enterprises turn toward industry-specific LLMs: In 2024, we will see the initial hype around large foundational AI LLMs fade as companies realize that one size does not fit all. While the introduction of AI tools like ChatGPT was impressive, the enterprise will not benefit from solutions that pull from the entire internet. Instead, businesses are going to move away from leveraging large LLMs, leaning toward more specialized solutions and LLMs that are trained on a more bespoke and curated dataset. Not only will these produce more tailored results, but they are also more secure and cost-efficient. Businesses will embrace AI that is tailored to them and their customers to improve accuracy, avoid hallucination and, ultimately, increase productivity and revenue. – Brian Peterson, Chief Technology Officer and Co-founder, Dialpad 

An army of smaller, specialized Large Language Models will triumph over giant general ones. As we saw during the era of “big data” — bigger is rarely better. Models will “win” based not on how many parameters they have, but based on their effectiveness on domain-specific tasks and their efficiency. Rather than having one or two mega-models to rule them all, companies will have their own portfolio of focused models, each fine-tuned for a specific task and minimally sized to reduce compute costs and boost performance. – Nick Elprin, co-founder and CEO, Domino Data Lab

Generative AI turns its focus towards structured, enterprise data: Businesses will embrace the use of generative AI for extracting insights from structured numeric data, enhancing generative AI’s conventional applications in producing original content from images, video, text and audio. Generative AI will persist in automating data analysis, streamlining the rapid identification of patterns, anomalies, and trends, particularly in sensor and machine data use cases. This automation will bolster predictive analytics, enabling businesses to proactively respond to changing conditions, optimizing operations, and improving customer experiences. – Nima Negahban, CEO and Cofounder, Kinetica

AI-powered Human quality translation will increase productivity by 10X or more: At the beginning of 2023, everyone believed that LLMs alone would produce human-quality translations. Over the year, we identified multiple gaps in LLM translations ranging from hallucinations to subpar performance in languages other than English. Like cloud storage or services, AI-powered Human quality translation is increasingly moving toward a cost at which the ROI of translating nearly all content becomes attractive, creating a competitive advantage for those companies that use it to access the global market. Contrary to the shared belief that the language services industry will shrink in 2024, it will grow as more content gets localized, but it costs less to do. 2024 will be the year the cost of translation plummets. Translators powered by Language AI and AI-powered Language Quality Assurance increase their productivity by 10X or more. – Bryan Murphy, CEO of Smartling

While 2023 saw euphoric hype around the emergence of artificial intelligence (AI) with seemingly boundless potential, in healthcare, we have already begun to see the limitations of prescriptive, large language model (LLM)-based solutions in providing clinical recommendations and insights. In 2024, we anticipate that clinicians, increasingly sophisticated when it comes to AI, will seek ways to mitigate the potential risks of accepting prescriptive recommendations from LLM-based solutions and instead choose responsible AI solutions that provide evidence-based and explainable recommendations. As the focus shifts towards responsible AI, healthcare leaders seeking to incorporate innovative AI technologies into their organizations’ clinical workflows will need to be aware of how these tools work. Solutions relying on licensed LLMs cannot provide tailored recommendations for care for individual patients, as these solutions are based on millions of data points with no specific emphasis on the individual. The lack of personalized focus and ‘explainability’ in the ‘black box’ nature of these solutions will underscore the necessity of clinicians having the final word in their decision-making. As a result, we anticipate a natural split will emerge in 2024: solutions that exist to provide clinical recommendations will increasingly be based on specific data and provide evidence for AI-generated insights. In contrast, solutions that aim to support clinicians in writing documentation and visit summarization, which rely heavily on natural language generation, will benefit from using universal LLMs. – Ronen Lavi, CEO and Co-Founder, Navina

While AI and LLMs continue to increase in popularity, so will the potential danger: With the rapid rise of AI and LLMs in 2023, the business landscape has undergone a profound transformation, marked by innovation and efficiency. But this quick ascent has also given rise to concerns about the utilization and the safeguarding of sensitive data. Unfortunately, early indications reveal that the data security problem will only intensify next year. When prompted effectively, LLMs are adept at extracting valuable insight from training data, but this poses a unique set of challenges that require modern technical solutions. As the use of AI and LLMs continues to grow in 2024, it will be essential to balance the potential benefits with the need to mitigate risks and ensure responsible use. Without stringent data protection over the data that AI has access to, there is a heightened risk of data breaches that can result in financial losses, regulatory fines, and severe damage to the organization’s reputation. There is also a dangerous risk of insider threats within organizations, where trusted personnel can exploit AI and LLM tools for unauthorized data sharing whether it was done maliciously or not, potentially resulting in intellectual property theft, corporate espionage, and damage to an organization’s reputation.  In the coming year, organizations will combat these challenges by implementing comprehensive data governance frameworks, including, data classification, access controls, anonymization, frequent audits and monitoring, regulatory compliance, and consistent employee training. Also, SaaS-based data governance and data security solutions will play a critical role in keeping data protected, as it enables organizations to fit them into their existing framework without roadblocks. –  ALTR CEO, James Beecham

Generative AI and large language model (LLM) hype will start to fade: Without a doubt, GenAI is a major leap forward; however, many people have wildly overestimated what is actually possible. Although generated text, images and voices can seem incredibly authentic and appear as if they were created with all the thoughtfulness and the same desire for accuracy as a human, they are really just statistically relevant collections of words or images that fit together well (but in reality, may be completely inaccurate). The good news is the actual outputs of AI can be incredibly useful if all of their benefits and limitations are fully considered by the end user. – Ryan Welsh, Founder and CEO of Kyndi

As a result, 2024 will usher in reality checks for organizations on the real limitations and benefits GenAI and LLMs can bring to their business, and the outcomes of that assessment will reset the strategies and adoption of those technologies. Vendors will need to make these benefits and limitations apparent to end users who are appropriately skeptical of anything created by AI. Key elements like accuracy, explainability, security, and total cost must be considered. In the coming year, the GenAI space will settle into a new paradigm for enterprises, one in which  they deploy just a handful of GenAI-powered applications in production to solve specific use cases.

One-way Ticket to Vector-town: As new applications get built from the ground up with AI, and as LLMs become integrated into existing applications, vector databases will play an increasingly important role in the tech stack, just as application databases have in the past. Teams will need scalable, easy to use, and operationally simple vector data storage as they seek to create AI-enabled products with new LLM-powered capabilities. – Avthar Sewrathan, GM for AI and Vector at Timescale

Competition Among LLM Providers: The landscape of Large Language Models (LLMs) is heating up. OpenAI, with its GPT-4 Turbo, has been leading the race, but others like Anthropic’s Claude, Google’s Gemini, and Meta’s Llama are close on its heels. The recent management turmoil at OpenAI, notably involving Sam Altman, has opened up opportunities for these competitors to advance and potentially outpace OpenAI in certain areas. – Tomer Borenstein, Co-Founder & CTO of BlastPoint, Inc.

Generative AI will reach the trough of disillusionment as organizations realize there is no magic bullet. There is no doubt that the usage of generative AI will continue to explode in 2024. However, many organizations may be disappointed with the performance of generative AI if their expectations of how quickly its benefits come to fruition are unrealistic, or if they don’t have the expertise to implement and use it effectively. In 2024, we can expect to see a trough of disillusionment for generative AI. This is not to say that generative AI is a failure. It simply means that it will take more time for generative AI solutions to reach the desired results to match the hype. – Cody Cornell, Co-Founder and Chief Strategy Officer of Swimlane

There will be spike in interest in vector databases, but it won’t last: Vector databases will be the hot new area for discussion by many but will eventually be absorbed by relational databases after a few years. Every ten or so years a ‘new’ database technology is proclaimed to be the end of relational databases, and developers jump on that bandwagon only to rediscover that the relational model is extremely flexible and relational database vendors can easily adapt new technologies into their products. Look at PostgreSQL’s pgVector as an example of how a relational database can process vector data today and why you will be able to ignore the hype around specialized vector databases. The community around pgVector and PostgreSQL was able to support this use case around vector data quickly – the project started in 2021, but it has developed quickly this year with all the interest in Generative AI and vector data. For those thinking about this area and looking at implementing open source components in their projects, pgVector makes PostgreSQL an obvious choice. – Dave Stokes, Technology Evangelist, Percona

Companies are accelerating their investment in safeguarding generative AI for employees, alongside their AI investments overall: Investment in technology is increasing, even more than in office spaces. AI brings perhaps the largest growth potential of any category today and also some of the largest risks. Companies will invest in seizing the AI advantage while proactively mitigating and addressing its risk factors. As generative AI finds its role in the workplace, employers are investing in guidelines, risk mitigation technologies and parameters, particularly when it comes to securing company information from ‘unknown unknown’ risk factors. A 2023 report from McKinsey stated that 60% of companies with reported AI adoption are using generative AI. WalkMe believes this number will continue to increase, along a path similar to cloud and internet adoption. The same report found that the two biggest risks with generative AI are inaccuracy and cybersecurity. We anticipate these issues will escalate, and enterprises’ ability to face the risks will improve as technology posture improves. – Uzi Dvir, CTO, WalkMe

More organizations are dipping their toes into generative AI and also increasing their investment in machine learning more broadly. There are so many operational challenges for platform teams that want to facilitate running ML jobs on cloud platforms. MLOps is a hot topic at the moment but still in the early stages of adoption – we’ll see advancements there as more organizations mature their ML infrastructure. – Malavika Balachandran Tadeusz, Senior Product Manager, Tigera

LLMs to transition to smaller models for more accessibility: Though LLMs are impressive in their generality, they require huge amounts of compute and storage to develop, tune, and use, and thus may be cost-prohibitive to the overwhelming majority of organizations. Only companies with vastly deep resources have the means to access them. Since there needs to be a path forward for making them more economically viable, we should expect to see solutions that decentralize and democratize their use. We should anticipate more numerous, more focused, and smaller models that consume less power becoming more readily available to a wider range of users. These focused models should also be less susceptible to the hallucination effects from which LLMs often suffer. – Naren Narendran, Chief Scientist, Aerospike 

The data ownership conversations will heat up: As large language models (LLMs) become more powerful and sophisticated, there will be a growing debate about data ownership. Similar to what we saw with open-source code, there is an ongoing discussion about how large companies are using data that they do not own to train their models, which could lead to a concentration of power in the hands of a few large companies. To address this issue, we will see new licensing frameworks for data. These frameworks should ensure that data owners are fairly compensated for the use of their data and that users can access and use data in a responsible and ethical manner. – Bob Friday, Chief AI Officer at Juniper Networks

To Invest in AI Chatbots, or Not: We know Gen Z typically seeks out digital forms of communication rather than having to speak with someone over the phone, which is especially true for customer service requests. The caveat is that this demographic expects their media and technology to work in a symbiotic relationship that supports connection, engagement and utility; they know good customer experience when they see it and will avoid anything that delivers a subpar experience. Organizations are investing in generative AI capabilities to entice people to stay on their applications longer and drive more activity among Gen Z users. This is the right move and can have a tremendous impact if done correctly. Organizations will not find success simply by creating better chatbots because Gen Z craves authentic connection and utility which is hard to replicate. If the chatbot could provide users with new experiences, recommendations and other helpful services, then it may increase activity on specific applications or a brand’s website. That being said, users will likely be skeptical and cautious of genAI bots, and organizations will need to show incremental wins to reinforce the chatbot’s safety and value. – Robin Gomez, Director of Customer Care Innovation at Radial

While 2023 marked a breakout year for generative AI, the supply chain industry has lagged in adoption due to data barriers – just 3% of organizations reported using generative AI for supply chain management. Manual, paper-based processes still dominate global trade, so many supply chain companies have struggled to unify the vast amount of unstructured data across disparate sources. Yet, companies who have solved this data problem will make 2024 the year of generative AI supply chain breakthroughs. As generative AI models are trained to be supply chain experts, global supply chains will become more autonomous, self-repairing and self-optimizing. For example, generative AI could tell a shipper about an exception (its shipment was delayed due to extreme weather), what to do about it (reroute to a more reliable location) and ultimately even execute the solution. By telling companies where they need to focus their efforts, these AI innovations will enable global brands to deliver a better customer experience and grow their business at the lowest cost and environmental impact. – AJ Wilhoit, Chief Product Officer, project44 

Generative AI dominated the conversation this year, and for good reason— it will significantly mature and scale in 2024. There’s a vast array of applications for generative AI that are currently in experimental stages and are poised to evolve. The real value will lie in its capacity to help people make sense of unstructured information in various internal use cases— parsing through extensive volumes of documents, generating more concise and informative summaries, and facilitating Q&A interactions with these documents, thereby ensuring consistency across multiple domains. On top of this, LLM interfaces and text-based interfaces will become integral components of nearly every software product. These interfaces will be used for everything, from controlling applications to providing answers to user inquiries about the application itself. We are starting to see this emerge in corporate websites that have consumer facing elements. Additionally, in the next year we can expect to see a shift toward smaller, more specialized LLMs, reducing the amount of data required for their training. This transition aligns with the broader push toward open-source solutions, particularly models that can prove a pedigree of information sources. – Michael Curry, President of Data Modernization at Rocket Software

Generative AI and AI coding assistants will move from what some people call “junior developer” level, with a 25-30% code acceptance rate status, to CTO status through embedded context. The ability to add more context, including runtime context, will exponentially increase the value and massively improve the acceptance rate (70% and better) of AI generated code. Going one level deeper….. Currently activities like deep debugging, multi-file changes, using large files as inputs is beyond the scope of most coding assistants. – Elizabeth Lawler, CEO of AppMap

GenAI will transform transformation: In 2024, GenAI will drive transformation in various areas, making it more urgent and transformative. With the help of customized GenAI agents, tasks like reading, organizing, and cleansing unstructured data can be done “AI-first” reducing a lot of manual effort. Data can be accessed from anywhere for GenAI to access, but governance, data pipelines, and processes will still be necessary for managing quality, enabling outcomes, assessing value, determining rights, and achieving compliance. GenAI, in combination with the cloud, can accelerate data-related transformation initiatives. Additionally, GenAI can enable organizations to leapfrog competitors and accelerate transformation, handling complex tasks and processes in finance, tax, legal, IT, compliance, and other departments. Leveraging GenAI as a catalyst for transformation has the potential to create a divide between competitors and organizations that fail to utilize GenAI may struggle to compete against those who do. – Bret Greenstein, Data and AI Leader, PwC US    

How do we use generative AI for our business? Do we build or buy our own AI solution? How do we upskill our employees to keep pace with AI? These are the questions swirling around all industries— not just the tech industry— and point to one common theme: 2024 is the year generative AI will significantly impact the future of work. The introduction of new technology is often accompanied by enormous pressure from business leaders to deploy those new solutions quickly. In 2024, we’ll see organizations realize they can no longer wait and see. They’ll need to find ways to go all in on AI. In the next 6-12 months, we’ll see a major transformation where more organizations invest in an AI strategy and find ways to use the technology to reimagine their workflows and become more efficient. – Glean, Arvind Jain, CEO

In 2023, companies were exploring the basics of AI, but we anticipate a surge in demand for tailored AI models in 2024. Despite the vast knowledge of LLMs like GPT-4, applying them to new domains poses challenges. To address this knowledge gap, we expect a rise in “knowledge injection,” where LLMs integrate with domain-specific data for a more specialized, context-aware AI solution. For example, merging a general-purpose LLM with patient records can enhance the overall patient-provider experience in the healthcare sector. In business, connecting AI with customer interactions can provide the model sales domain expertise and benefit revenue teams. As we approach 2024, trends like knowledge injection offer opportunities for businesses to leverage LLMs with specific databases to foster innovation and growth. – Gong, Omri Allouche, VP of Research 

2024 will probably see the emergence of a new approach to interaction between information systems, thanks to the arrival of a new consumer: the AI-powered assistant. The progress and democratization of generative AI tools will create new uses. And OpenAI, creator of ChatGPT, wants to create a competitor to the iPhone. They want to put ChatGPT in your pocket, making it the ultimate advisor. It’s time to rethink digital interactions. In the 2020s, we understood that we had to move from a vision of technical APIs to business-oriented digital products. The coming year will force us to reassess these products and their marketing to adapt them to the new consumers of services: the AI generatives! – Emmanuel Methivier, Axway Catalyst, and Business Program Director

We are seeing several startups leveraging the cloud and generative AI to help enterprises make sense of their unstructured data—think videos, audio files, images, and more. Instead of labeling millions of pieces of unstructured data, artificial intelligence will be able to provide better context on all of this content in nearly real time-no tags needed. This couldn’t come to fruition at a better time, as according to IDC, 80% of data created worldwide by 2025 will be unstructured. AI will create a data renaissance for enterprises, who will be able to improve strategic decision-making and gain better customer insights like never before. – Howard Wright, Global Head of Startups at AWS

What is disappointing, though, is that the level of hallucinations and factual inaccuracies has improved, but has not changed dramatically when it comes to producing text in foreign languages. Also, this may not be attributable to the model degradation, but what we are noticing is that if a GPT model is used as an automated post-editing layer on top of neural machine translation, it has equal chances of improving the translation output and making it worse, introducing both critical and major errors which were not present in the original translation. GPT may not be getting “dumber,” but it is still struggling with foreign language grammar and cultural phenomena, despite additional training data and more parameters. – Olga Beregovaya, VP of AI and Machine Translation at Smartling  

LLMs will assist generative AI to reason more and hallucinate less: AI is moving beyond the Large Language Model (LLM) text world of ChatGPT and the landscapes of Midjourney to Large Multimodal Models (LMMs), systems that can reason across different media types.  This is opening up new types of applications and possibilities, such as image-based inventory or virtual product support assistants for small businesses, and may help to ground future AI systems on more real-world examples that mitigate the potential of hallucination. We expect many more applications over the next 12 months, and as generative AI learns with sound, vision, and other senses, the near future may bring with it AI systems that can distinguish between reality and fiction. – Ashok Srivastava, Senior Vice President and Chief Data Officer, Intuit  

2023 was the year of LLM training in AI, and NVIDIA saw an incredible surge in demand for their flagship products, selling as many H100 InfiniBand clusters as they could make. 2024 will see continued growth in AI training, but once those models are trained, the demand for compute will shift dramatically to AI model deployment and inference.  In the second half of 2024, we anticipate that inferencing will spawn another wave to compute demand that should rival the training explosion of 2023. – David Driggers, CTO of AI cloud compute provider, Cirrascale Cloud Services

AI as a Service: It’s already possible to use OpenAI’s ChatGPT in your own applications, but being able to model responses based on your own, proprietary datasets will bring much more value to businesses. This leads to issues of data sovereignty and confidentiality, which will see the rise of not just cloud-based AI services, but the ability to run them in siloed cloud-environments. – Ben Dechrai, Developer Advocate, Sonar

What we saw with Artificial Intelligence (AI) and Large Language Models (LLMs) this year was the proverbial tip of the iceberg. Given the amount of investment that has gone into progressing this technology, I expect to see rapid innovation in all aspects of LLM usage in 2024 – specifically at the foundational level, such as scale and efficiency. More importantly, we will see the emergence of very impactful use cases in industry verticals such as healthcare, learning, manufacturing, and automation. We will also see increased adoption of LLMs for the edge – LLMs, and AI will go where the data resides or is generated as against aggregating all the data to a centralized location. This will accelerate exponentially in addressing some of society’s most complex and urgent problems. Furthermore, I expect more solutions and regulations to emerge to grant organizations the confidence and guidance they need to use these powerful tools effectively and in a trustworthy manner. – Vinay Anand, Chief Product Officer, NetSPI

LLM providers will increase data transparency and embrace open-source tools: The term “open source” has evolved over time into something differentiated from its initial meaning. Traditionally, open-source tools gave insight into code used to build something. Now, with the rapid emergence of LLMs, we’re able to accelerate past the code and get right to the end result without needing full observability of how a model was produced. This is where we see fine-tuning of models using domain-specific data for models being critical, and I foresee companies putting a concerted effort around transparency around how open-source models are trained heading into 2024. In my opinion, LLMs are easier to build than one may think given the availability of open source tools. That said, the important element that cannot be overlooked for building generative AI models is understanding how you uplevel the model to be specific to each use case, and giving users visibility into where training data was derived and how it’ll be used in the future. One way to ensure data transparency is to encourage your team to embrace data lineage best practices, which I think we’ll see become more of a norm heading into 2024. Incorporating data lineage into your data pipelines allows for heightened tracking and observability, catalyzing the potential for increased transparency. – Julian LaNeve, CTO at Astronomer

AI/LLM Development: AI/LLM technology advancements in 2024 will enable developers to ditch the rote work but at the cost of introducing leaky, imperfect abstractions. Unless LLMs can also help developers debug their software, it will introduce bugs at the same rate as it increases code production productivity. I’m particularly excited about developments like Stanza’s new LLM for adding OpenTelemetry instrumentation to existing codebases as a form of reducing rote work. Yet, at the end of the day, there will always be a need for manual checking. – Liz Fong-Jones, Field CTO, Honeycomb 

Graph

Goodbye Hallucinations – Hello Amplified Content! In 2024 Generative AI, powered by rapidly advancing language models and grounded by Knowledge Graphs will hallucinate less and produce content that is increasingly contextually relevant and insightful. This will pave the way for groundbreaking developments in natural language understanding, tailored content creation, and complex problem-solving across various domains such as healthcare, drug discovery, and engineering.  – Dr. Jans Aasman, CEO of Franz Inc.

Knowledge Graphs will Help Users Eliminate Data Silos: As enterprises continue to move more data into a data cloud, they are collecting hundreds, thousands, and sometimes even tens of thousands, of data silos in their clouds. Knowledge graphs can easily drive language models to navigate all of the data silos present by leveraging the relationships between various data sources. In the new year, we will see a variety of established and novel AI techniques that support the development of intelligent applications emerge. – Molham Aref, founder and CEO of RelationalAI

Graph databases are poised to continue revolutionizing how data science and engineering teams process large-scale and real-time datasets, enabling them to extract deeper insights and achieve faster time-to-value. As the volume and velocity of data continues to grow exponentially, particularly real-time data like points-of-interest and foot traffic, teams will need to rethink their data management tech stack to keep up. I expect more and more teams to turn to graph databases to navigate complex datasets, boost efficiency, and do it all in a way that protects consumer privacy. – Emma Cramer, Senior Manager of Engineering at Foursquare

Knowledge Graph Adoption Accelerates Due to LLMs and Technology Convergence: A key factor slowing down knowledge graphs (KG) adoption is the extensive (and expensive) process of developing the necessary domain models. LLMs can optimize several tasks ranging from the evolution of taxonomies, classifying entities, and extracting new properties and relationships from unstructured data. Done correctly, LLMs could lower information extraction costs, as the proper tools and methodology can manage the quality of text analysis pipelines and bootstrap/evolve KGs at a fraction of the effort currently required. LLMs will also make it easier to consume KGs by applying natural language querying and summarization. Labeled Property Graphs (LPG) and Resource Description Frameworks (RDF) will also help propel KG adoption, as each are powerful data models with strong synergies when combined. So while RDF and LPG are optimized for different things, data managers and technology vendors are realizing that together they provide a comprehensive and flexible approach to data modeling and integration. The combination of these graph technology stacks will enable enterprises to create better data management practices, where data analytics, reference data and metadata management, data sharing and reuse are handled in an efficient and future proof manner. Once an effective graph foundation is built, it can be reused and repurposed across organizations to deliver enterprise level results, instead of being limited to disconnected KG implementations. As innovative and emerging technologies such as digital twins, IoT, AI, and ML gain further mind-share, managing data will become even more important. Using LPG and RDF’s capabilities together, organizations can represent complex data relationships between AI and ML models, as well as tracking IoT data to support these new use cases. Additionally, with both the scale and diversity of data increasing, this combination will also address the need for better performance. As a result, expect knowledge graph adoption to continue to grow as businesses look to connect, process, analyze, and query the large volume data sets that are currently in use. – Atanas Kiryakov, founder and CEO of Ontotext

LGMs become the next household gen AI tech in the enterprise: Today, nearly every organization is experimenting with LLMs in some way. Next year, another major AI technology will emerge alongside LLMs: Large Graphical Models (LGMs). An LGM is a probabilistic model that uses a graph to represent the conditional dependence structure between a set of random variables. LGMs are probabilistic in nature, aiming to capture the entire joint distribution between all variables of interest. They are particularly suitable for modeling tabular data, such as data found in spreadsheets or tables. LGMs are useful for analyzing time series data. By analyzing time series data through the novel lens of tabular data, LGMs are able to forecast critical business trends, such as sales, inventory levels and supply chain performance. These insights help guide enterprises to make better decisions. This is game changing because existing AI models have not adequately addressed the challenge of analyzing tabular, time-series data (which accounts for the majority of enterprise data). Instead, LLMs and other models were created to analyze text documents. That’s limited the enterprise use cases they’re really capable of supporting: LLMs are great for building chatbots, but they’re not designed to support detailed predictions and forecasting. Those sort of use cases offer organizations the most business value today – and LGMs are the only technology that enables them. Enterprises already have tons of time-series data, so it’ll be easy for them to begin getting value from LGMs. As a result, in 2024, LGM adoption will take off, particularly in retail and healthcare. – Devavrat Shah, co-CEO and co-Founder of Ikigai Labs and MIT AI professor 

Hardware

Limited chip availability drives common sense and tampers down AI expectations. The mad dash for AI has demand for GPUs and related chip production at its limits. With constrained capacity to make more of these chips, AI processing will hit a wall in 2024. This shortage will most acutely affect large buyers like cloud providers, Meta, Tesla, and OpenAI. – Forrester

Access to GPUs is becoming increasingly expensive and competitive, which will usher forth a new chapter in the cloud industry. The traditional providers – AWS, Microsoft Azure and Google Cloud – are unable to meet demand from developers, with smaller companies finding it hard to afford and reserve the compute they need to train large language models. As a result, an increasing number of organizations will turn to distributed and permissionless cloud networks to gain access to GPUs, including less sophisticated chips that in many cases sit idle. Looking ahead to 2024, this newfound attention to “lesser” GPUs will help sustain the AI boom, and mitigate concerns that Microsoft, Alphabet and Meta will dominate the tech transformation. Those seeking alternatives amid the GPU squeeze will make progress by using less intensive data set requirements, deploying more efficient techniques like Low-Rank Adaptation (LoRA) to train language models, and distributing workloads in a parallel manner. This involves deploying clusters of lower-tier chips to accomplish tasks equivalent to a smaller number of A100s and H100s. A new era of cloud computing will emerge, one in which power is decentralized and not in the hands of just a few. – Greg Osuri, founder of Akash Network and CEO of Overclock Labs

Today’s technologies for compute, memory, networking will prove highly limiting for scaled deployment, restricting the economic impact of AI. New technologies will be required on all three fronts, going beyond the ill-validated technology investments driven by hype we have seen in the last few years. Fundamental technological barriers across compute, memory, and networking will drive specialized inference infrastructure for different use-case profiles and models. We will see substantial dedicated investment in inference infrastructure (which generates predictions to make decisions), to address the critical bottleneck to scaled deployment. As we move towards scaled deployment, sustainability issues will emerge as one of the key factors limiting wide-scale AI deployment. These include energy consumption and the impact on our planet. Early value applications of generative AI will focus on internal efficiency improvements for cost-reduction, rather than external/customer-facing revenue growth. Open Source models will enable broad early exploration of generative AI, but ultimately end users will need to invest in specialized internal teams or engage external partners to leverage both open source models and/or custom models for value deployments. – Naveen Verma, PhD, CEO, EnCharge AI

Compute Power is the New Oil: The soaring demand for GPUs has outpaced industry-wide supply, making specialized compute with the right configuration a scarce resource. Compute power has now become the new oil, and organizations are wielding it as a competitive edge. In 2024, we anticipate even greater innovation and adoption of technologies to enhance compute efficiency and scale capacity as AI workloads continue to explode. In addition, specialized AI hardware, like TPUs, ASICs, FPGAs and neuromorphic chips, will become more accessible. – Haoyuan Li, Founder and CEO, Alluxio

While Moore’s Law predicts that computer chips will become faster and smaller over time, the data centers that house them will not necessarily become smaller as a result. As generative AI requires more powerful servers and greater power consumption, power utilities are struggling to keep up with the needs of data centers. New data centers are more likely to be built outside of urban areas, and existing data centers may physically relocate. We will see a mix of new and existing enterprise data centers move away from urban areas to meet the need and availability of power. Land around traditional data center hubs like Silicon Valley or Northern Virginia is scarce and expensive, while energy grids are not keeping pace with demand in many regions. Data centers are increasingly looking to generate their own power on-site, and that means they need larger and more affordable areas of land. Solutions in compute and storage that minimize power utilization will find favor among data centers struggling in this area. – Matt Ninesling, Senior Director Tape Portfolio, Spectra Logic  

IoT and Edge Computing

Edge computing’s influence on tech investment in 2024: In 2024, edge computing will continue to grow in importance. Organizations will invest in edge infrastructure to support applications requiring low latency, such as autonomous vehicles, augmented reality, and industrial automation. – Srinivasa Raghavan, director of product management, ManageEngine 

The success of Edge AI will depend on advancements in lightweight AI models: The innovation surrounding AI is exciting, and edge computing is one way to enable new AI applications. However, in order to make edge AI a viable option, AI models need to be lightweight and capable of running in resource constrained embedded devices and edge servers while continuing to deliver results at acceptable levels of accuracy. Models need to strike the right balance — meaning, models must be small and less computationally intensive so they can run efficiently at the edge while also delivering accurate results. While a lot of progress has been made in model compression, I predict that there will be continued innovation in this space, which when coupled with advancements in edge AI processors will make EdgeAI ubiquitous. – Priya Rajagopal, Director of Product Management at Couchbase

Long-awaited edge computing: As AI applications are developed, companies will look for processing power closer to where the application is being utilized. That means data centers will focus on keeping the heavy compute closer to where the data is actually being used. – Michael Crook, Market Development Manager – Data Centers, Corning Optical Communications

 MLOps (Machine Learning Operations) will significantly evolve to not only provide operational capabilities such as deployment, scaling, monitoring, etc. but will include model optimization. This will encompass everything from hyperparameter tuning to tweak model performance to model size/quantization and performance optimization for specific chipsets and use cases such as for edge computing on wearable devices or cloud computing. – Yeshwant Mummaneni, Chief Engineer, Cloud, Altair

In 2024, IoT devices will become even more integrated and intertwined into our daily lives. Smart homes for example, will not only automate daily tasks but proactively suggest energy-saving measures and security enhancements for their residents. In fields like agriculture, we’ll see IoT and AI leading to more sustainable and efficient practices, with the technology playing a critical role in effectively reducing resource usage and increasing overall crop yields. The healthcare industry will also continue to benefit from IoT devices that continuously monitor patient health, sending real-time data to medical professionals and acting as a critical force multiplier for understaffed hospitals and care facilities. I believe we’ll also see the power of AI and IoT working together to drive better outcomes for cities around the globe. Consider this: AI-powered streetlights that adjust their brightness based on real-time data, or smart waste management systems that optimize collection routes. Enhanced mobility will also be a focus as we head into the new year. Self-driving electric shuttles and autonomous delivery robots will become a common sight on city streets, significantly reducing traffic congestion and pollution. We may even see more augmented reality (AR) and virtual reality (VR) integrating into urban planning and design; in the not-so-distant future, residents might use AR glasses to explore future city developments or visualize historical landmarks, enhancing the overall urban experience. – David Ly, CEO & Founder, Iveda

Low-code/No-code

Low Code Abstraction Frameworks: Abstraction frameworks like DBT Labs facilitate SQL-based code that can seamlessly run on various underlying platforms such as Snowflake and Databricks. This abstraction simplifies technology transitions, offering enhanced flexibility and reducing effort and costs associated with platform changes. The goal is to empower citizen data analysts to operate platforms independently, reducing reliance on experts, considering the scarcity of talent in the field. – Arnab Sen, VP, Data Engineering, Tredence Inc. 

LLMs won’t replace low code – AI will push existing low-code solutions to do even more: Looking ahead to next year, some low-code vendors have proposed putting AI to work generating code as a means of fixing gaps in their platforms. The results will likely be less robust applications, higher technical debt, and greater cost and risk to clients. Rather than having AI generate massive amounts of flawed custom code, and creating apps that will only get worse over time, 2024 is the year we will set our sites on super-powering low-code solutions with AI. We’ll see AI making low-code platforms even more intuitive, lowering the bar for business users to create their own intelligent business processes and pushing citizen development further than ever before. – Anthony Abdulla, Senior Director, Product Marketing, Intelligent Automation at Pega

Alternatives to Low-Code/No-Code Will Continue to Gain Traction: While low-code/no-code tools have gained popularity for their rapid development capabilities, concerns regarding scalability, customization, and the ability to handle complex functionalities will drive enterprises to seek out new, pro-code tools. These tools, including Features-as-a-Service, enable enterprises to continue rapid application development and empower them to focus on the complex features and functionality that will set them apart from the competition. They do so by offering a wide range of common functionalities – such as chat, location, and access – out of the box and granting them full ownership of licensed source code. This provides enterprise engineering teams with the flexibility needed to tailor features and functionalities to meet their requirements. – Shiva Nathan, Founder & CEO of Onymos

How organizations can mitigate technical talent shortages in 2024 through upskilling – Data shows 92% of organizations experienced a year-over-year increase in budget allocation for data science and machine learning initiatives. However, there’s a problem – 27% claim a lack of skilled talent stands in the way of developing and implementing data science projects. Rather than wait around for candidates with preexisting data literacy, organizations should instead find ways to upskill, getting their non-technical talent hands-on with data and cutting down on the repetitive work bogging down technical talent like data science teams. One of the best ways to upskill both technical and non-technical employees is through the use of intuitive low-code/no-code tools. When data scientists do not need to worry about underlying software complexity, they can focus exclusively on the data, work more independently from IT, and better manage the data process from end to end. A LC/NC environment also reduces barriers to entry, allowing non-technical employees to take some of the load off of data experts, enabling greater overall efficiency. To effectively upskill, there are three questions organizations must consider: is the data tool we’re using user-friendly? (I.E. is it intuitive to get started? Do you need pre-existing knowledge or qualifications?), does it allow for effortless knowledge-sharing between employees?, and does it enable easy delivery of analytical solutions across the entire organization? The more accessible data science solution starters, training resources, and best practices are, the more employees can effortlessly learn and create their own solutions. Ultimately, the success of data science initiatives in 2024 will hinge on an organization’s ability to harness the power of and upskill its existing talent. Companies already upskilling existing users have seen success and 92% rate the experience of non-data science professionals being involved in data science initiatives and working with data science teams as positive, or very positive. However, they must remember that for these groups to flourish, and to effectively mitigate technical talent shortages through upskilling, employees must be given a shared language and efficient, easy-to-use mechanisms for the swift exchange and mutual learning from each other’s solutions. – Michael Berthold, CEO of KNIME

Low-Code/No-Code Tools Will Dominate Software Development in 2024: In 2024, low-code/no-code tools will dominate software development as they bring the power of app development to users across the business. The rise of “citizen developers” has proven that as we move toward a no-code future, people without coding experience are changing the working world. As tech companies adopt low-code/no-code tools, they’ll save time and money, rather than falling behind early adopters. – Jason Beres, Sr. VP of Developer Tools at Infragistics

Natural language will pave the way for the next evolution of no-code: Automation is only effective when implemented by teams on the frontline. Five years ago, the best way to place powerful automation in the hands of non-technical teams was via low- or no-code interfaces. Now, with AI chatbots that let people use natural language, every single team member — from sales to security — is technical enough to put automation to work solving their own unique problems. The breakthrough in AI was the new ability to iterate in natural language, simply asking an LLM to do something a bit differently, then slightly differently again. Generative AI and LLMs are obliterating barriers to entry, like no-code tools once did for the need to know how to code, and no-code will be the next barrier to fall. We’ve already moved from programming languages like Python to Microsoft Excel or drag-and-drop interfaces. Next year, we will see more and more AI chat functions replace no-code interfaces. We can expect non-technical teams throughout organizations embracing automation in ways they never thought possible. Natural language is the future on the frontline. – Eoin Hinchy, co-founder and CEO at Tines

As the AI boom continues, low code will become increasingly popular: There are parallels between AI and low code use cases and adoptions. AI is helping organizations and individuals to analyze, interpret and manage massive data sets, as well as create initial drafts of content, find answers to questions and interpret medical images such as x-rays. Across all use cases, AI is skyrocketing. Similarly, low code removes much of the burden of writing actual code: It takes much less time to provide high-level direction, which low-code systems convert to code. This is similar to the ways generative AI systems, such as ChatGPT and DALL-E, save time producing text or images based on high-level direction. As organizations look to expand software development to citizen developers to increase productivity and agility, and to free developers to focus more on system design and architecture and less on coding, low code enables such initiatives. Therefore, we expect the use of low code will also increase in 2024. – Peter Kreslins, CTO of Digibee

Machine Learning

Machine Learning Key to Detecting Security Anomalies in IoT Devices: As more devices are connected, the risk of a cyber attack— and its consequences— continues to escalate. Machine learning will increasingly become pivotal in helping identify threats before they become serious security risks. In 2024, you can expect a slew of new ML-driven solutions to enter the market to help address this growing problem with IoT devices. – Mike Wilson, founder and CTO of Enzoic

The need for reusable data will drive the adoption of data management and unification tools integrated with AI/ML capabilities: We’re on the cusp of a data renaissance where sophisticated data management and unification tools, seamlessly integrated with AI and ML capabilities, will enhance and revolutionize how we automate and deliver data products. This is about crafting certified, effortlessly consumable, and eminently reusable data assets tailored to many business use cases. We’re not just talking about making data work smarter; we’re architecting a future where data becomes the lifeblood of decision-making and operations, driving unprecedented efficiency and innovation across industries. – Manish Sood, Founder, CEO and Chairman of Reltio

Quantum Computing

Quantum Neural Networks Will Make Machines Talk More Like Humans: The development of quantum neural networks is poised to reshape the AI landscape, particularly in the domains of NLP and image recognition. Quantum-enhanced capabilities will bring about more accurate, efficient, and versatile AI models, driving innovation across industries and unlocking new possibilities for AI applications. QNNs will also address the challenges of long-range dependencies and ambiguity in language, resulting in more contextually accurate and human-like responses in conversational AI.  – Dr. Jans Aasman, CEO of Franz Inc.

Quantum computing will get real:  While full commercial utilization of quantum technologies is likely still a few years away, there are an increasing number of solutions on the market that are solving real-world problems. You’re not going to find a quantum chip in your gaming laptop anytime soon, but in 2024, you should expect more quantum-based use cases to become a reality as this technology moves out of the lab environment and into the data center, where it becomes more accessible to enterprises. In addition to greater commercial investment, we’ll continue to see more funding from the U.S., U.K., and other governments to address security hurdles and other evolving technical challenges. – Holland Barry, SVP and Field CTO, Cyxtera   

In 2024, the industry will risk falling behind if they neglect “early quantum adoption”: Like the rise of AI, new and powerful technologies such as quantum computing present a large unknown that looms over the security industry. The ambiguity of not knowing whether quantum will prove to be a greater threat than an asset exposes the sobering reality that even the most technical audiences have difficulty understanding how it works. In order to adequately prepare for the quantum evolution, the security industry must avoid the faulty position of waiting to see how others prepare. Instead, they must be early adopters of defensive protocols against quantum. – Jaya Baloo, CSO at Rapid7

Quantum computing in the future: Quantum computing will leap in scale and bring our expectations for tech into reality. CIOs should lean on the patterns of the past to prepare for the future and the scale of processing quantum computing will bring – 20 days to 20 milliseconds. Examine the underpinning systems that went into your organization’s data gathering, and security and start preparing the infrastructure to be able to handle the increase in load this will bring. We saw this same process with remote working – most of our applications and infrastructure weren’t originally built for remote work and had to be refactored to allow for internet speeds, mobile devices, and new applications. There has been a lot of talk about remote work causing burnout in IT, but the real root cause is that our applications weren’t built to enable remote work. We’ll see the same burnout when quantum computing takes off if our environments aren’t ready for this next evolution of tech. – Ivanti’s CIO, Robert Grazioli 

Democratizing Quantum: The Emergence of Quantum-as-a-Service (QaaS) – Due to the significant cost and resource burden in developing quantum labs, this will give rise to more quantum-as-a-service (QaaS) providers. Remote cloud access to quantum processors, test beds for device characterization, and foundries that offer fabrication services are examples of services that are available which in turn will help attract startups into the quantum ecosystem. QaaS providers, over time, will help standardize device operation, characterization, and fabrication, which will enable benchmarking of quantum processors and qubit-adjacent enabling technologies. – Dr. Philip Krantz, Quantum Engineering Solutions at Keysight Technologies

In 2024, the landscape of computing will continue to experience a transformative shift as quantum computing steadily moves from theoretical promise to practical implementation. While they have amazing capabilities to solve some of our world’s greatest problems, they also pose a massive risk to today’s widely used public key infrastructure (PKI) cryptography. The foundation of practically all cryptographic protection is PKI and, as quantum computers increasingly come online around the 2030 time period, these algorithms will be vulnerable to attacks. As advancements accelerate, quantum computing is anticipated to become more accessible, heralding a new era of computational power. Shifting to post-quantum cryptography (PQC) will be key to defending against quantum computing attacks. As quantum computers threaten current encryption standards, there is an urgent need to fortify our digital security against potential vulnerabilities. U.S. government regulations like the Commercial National Security Algorithm Suite (CSNA) 2.0 and the Quantum Computing Cybersecurity Preparedness Act have taken effect and are mandating a switchover to quantum resilient security algorithms starting as early as 2025 for certain critical infrastructure components. The National Institute of Standards and Technology (NIST) is also expected to release the final versions of PQC algorithms within 2024. Simultaneously, the proliferation of quantum computing demands parallel focus on cyber resilience, as the threat landscape continues to evolve. Strengthening infrastructure to withstand and recover from the growing sophistication of cyberattacks will become paramount, necessitating a proactive approach to safeguard digital assets in the quantum-powered future. Flexible solutions like FPGAs will be essential in ushering in a new wave of innovation in the industry to ensure data protection and system integrity in the face of evolving threats. – Mamta Gupta, Director of Security and Comms Segment Marketing at Lattice Semiconductor

Despite its initial promise, quantum computing has yet to fully materialize as a viable option for businesses to tackle the most pressing business and scientific challenges. While the industry waits for quantum to live up to expectations, we find immense potential in a new way of computing, leveraging lasers. Microsoft’s announcement this year about an all-optical computer underscores a growing recognition of this paradigm shift. We firmly believe that laser-based solutions will emerge as promising and viable infrastructure options in the realm of high-performance computing, proving to be the most scalable, reliable and affordable option for the enterprise, and will provide groundbreaking solutions to the world’s most complex problems. – Chene Tradonsky, CTO and Co-Founder of LightSolver

RPA, Automation, Robotics

Automation, not AI, will have a bigger enterprise impact in 2024: While AI is likely to continue making headlines next year, automation will be the more impactful technology for enterprises from an implementation perspective. The truth is that most of the world is not very automated today. If you look at any technology stack right now, you’re likely to find some poorly implemented automation and a lot of manual processes under the hood. However, as businesses look for ways to improve efficiency in 2024, most will turn to automation, particularly for their engineering and infrastructure functions. This is because automation is highly efficient and requires very few people to manage it. For many use cases, businesses can set up fully automated systems that operate just as well – if not better – than humans or even AI-augmented humans. – David Hunt, Co-Founder and CTO, Prelude Security

Automation tools will make a more visible impact on developer velocity and how developers’ work is measured. This year’s explosion of AI and ML is instigating an unparalleled transformation in business productivity expectations. In 2024, the extended accessibility to AI- and ML-driven automation tools will continue to elevate the benchmarks for code quality, reliability, and security, in response to the escalating demand for expedited software delivery. – Sairaj Uddin, SVP of Technology at The Trade Desk

Automation and AI tooling will come together to make one central “enterprise autopilot.” Infusing process mining and task mining with AI and automation will finally bring digital transformation full circle in 2024. These technologies will no longer operate as separate, but will be combined to power the full potential of automation. Enterprises that bring AI and automation together under one unified system will connect work from dispersed processes and systems to enable the intelligence and agility that businesses desperately need to keep pace with digital transformation. – Anthony Abdulla, Senior Director, Product Marketing, Intelligent Automation at Pega

Security

GenAI will prove preexisting security awareness training antiquated in 2024; organizations will modernize their programs to address these new, more sophisticated threats. With the consumption of GenAI at scale within the bad actor community, the value of traditional security awareness training will decline rapidly. Companies will modernize security awareness programs to include continuous user-focused controls with a greater ability to identify and defend against today’s modern social engineering attacks alongside real-time user guidance to prevent users from accidentally falling victim to such attacks in the blink of an eye. – Curtis Simpson, CISO, Armis

AI risks will largely stem from developers and application security: I predict AI risks will continue to largely stem from developers and application security. When talking about the risks of AI, many think about threat actors using it in nefarious ways. But in actuality, in 2024 we should be most concerned about how our internal teams are using AI — specifically those in application security and software development. While it can be a powerful tool for certain teams like offensive and defensive teams and SOC analysts to enhance and parse through information, without proper parameters and rules in place regarding AI usage by organizations, it can potentially lead to unexpected risks for CISOs and business executives and leave holes in their cyber resilience to leave the door open for exploitation. – Kev Breen, Director of Cyber Threat Research, Immersive Labs

Phishing training will fall out of fashion in 2024, thanks to AI: Teach a man to phish (with AI), and you’ll feed him for a long time. Thanks to AI, modern adversaries are upping their attack game with new innovations. Advanced LLMs will make it easier for attackers to craft the perfect phishing email, leaving no typos or odd formatting behind for their victims to catch. As a result, phishing training as we know it will become obsolete. Businesses will need to look to other forms of cybersecurity training to ensure their critical assets are protected, and adopt more sophisticated defenses. – Eric Skinner, VP of Market Strategy, Trend Micro

A Rise in Disinformation Influenced by Global Events and AI: In 2024, there will be an acceleration in disinformation, exacerbated by ongoing global conflicts and the growing availability of AI tools that will create and/or spread false narratives more rapidly and convincingly. This trend will be viewed against a backdrop of declining public trust in institutions, a phenomenon intensified by the US election year. With email being the primary communication tool used, validating sender authentication will become increasingly more important. – Alexander Garcia-Tobar, CEO and Co-Founder, Valimail

The rise of AI enabled social engineering will destabilize politics and disrupt business: We will see attackers leverage AI tools with greater sophistication and scale to create social engineering content that will target global elections, especially with the presidential election in the U.S. on the horizon, and cause political friction on a global scale. We also expect to see this within organizations as AI-enabled social engineering tactics become harder to detect, therefore increasing our reliance on advanced detection capabilities to bridge the gap. – Wendi Whitmore, SVP and Head of Unit 42 at Palo Alto Networks 

Threat actors will use AI to exploit Zero Days in half the time: Threat actors are harnessing the power of artificial intelligence to work faster. The surge in AI adoption seen in 2023 is just the beginning. In the coming year, we predict an exponential increase in zero-day vulnerabilities. Here’s the kicker: Threat actors will exploit large language models (LLMs) to craft sophisticated exploits, reducing the average time to known exploitation by half. The result? A fierce AI arms race between defenders and attackers, both leveraging technology for protection and infiltration. – Ryan Sherstobitoff, Senior Vice President, Threat Research & Intelligence, SecurityScorecard

AI Blind Spots Open the Door to New Corporate Risks: In 2024, CrowdStrike expects that threat actors will shift their attention to AI systems as the newest threat vector to target organizations, through vulnerabilities in sanctioned AI deployments and blind spots from employees’ unsanctioned use of AI tools. After a year of explosive growth in AI use cases and adoption, security teams are still in the early stages of understanding the threat models around their AI deployments and tracking unsanctioned AI tools that have been introduced to their environments by employees. These blind spots and new technologies open the door to threat actors eager to infiltrate corporate networks or access sensitive data. Critically, as employees use AI tools without oversight from their security team, companies will be forced to grapple with new data protection risks. Corporate data that is inputted into AI tools isn’t just at risk of threat actors targeting vulnerabilities in these tools to extract data, the data is also at risk of being leaked or shared with unauthorized parties as part of the system’s training protocol. 2024 will be the year when organizations will need to look internally to understand where AI has already been introduced into their organizations (through official and unofficial channels), assess their risk posture, and be strategic in creating guidelines to ensure secure and auditable usage that minimizes company risk and spend but maximizes value. – Elia Zaitsev, CTO, CrowdStrike

This year, AI began to emerge in cybersecurity: both for threat actors looking to vastly improve their phishing emails and for organizations looking to automate and improve their cybersecurity solutions. In 2024, AI will disrupt cybersecurity. Threat actors will likely evolve and start using generative AI to write and deploy malware, while companies will see the true benefit of AI in preventing attacks, reducing alert fatigue and supporting overburdened IT teams. In 2024 we must harness the full potential of AI for cybersecurity, with a keen eye on responsible and ethical use. – Sergey Shykevich, Threat Intelligence Group Manager at Check Point Software

Organizations will adopt new strategies for managing data risk. Enforcement of privacy regulations, – which will cover more than 75% of the world’s population in 2024 according to Gartner (up from 10% just 4 years ago) – continued threat of data breach (externally or internally caused), increased costs of litigation and unrelenting growth in both the number of data sources and aggregate amount of data created/stored will drive these strategies. CIO’s will look for data risk management solutions that facilitate the identification, classification, preservation, collection, review, redaction, analysis/reporting, defensible disposition, entity extraction and protection of data. Credible solutions will include automated data discovery, a comprehensive data inventory, orchestrated workflows, broad integration with enterprise data sources and storage applications and advanced AI designed to increase the accuracy and speed when responding to requests for data. – Ajith Samuel, Chief Product Officer, Exterro

AI will dominate the cybersecurity conversation: Both attackers and defenders will step up their use of AI. The bad guys will use it more to generate malware quickly, automate attacks, and strengthen the effectiveness of social engineering campaigns. The good guys will counter by incorporating machine learning algorithms, natural language processing, and other AI-based tools into their cybersecurity strategies. I believe there is almost no scenario where AI-driven deepfakes won’t be part of the pending U.S. Presidential election amongst others.  Even if the technology isn’t applied, its within the realm AI deepfakes will be blamed for gaffes or embarrassing imagery. We’ll also hear more about the role AI can play in solving the persistent cybersecurity talent gap, with AI-powered systems taking over more and more of the routine operations in security operations centers. When it comes to cybersecurity in 2024, AI will be everywhere. – Steve Stone, Head of Rubrik Zero Labs

2024 will bring a serious cyberattack or data breach related to AI. The rush to capitalize on the productivity benefits of AI has led to teams cutting corners on security. We’re seeing an inverse correlation between an open source AI/ML project’s popularity and its security posture. On the other end, AI will help organizations readily address cybersecurity by being able to detect and highlight common bad security patterns in code and configuration. Over the next few years, we will see AI improving to help provide guidance in more complex scenarios. However, AI/ML must be a signal – not a decision maker. – Mike Lieberman, Kusari CTO & Co-Founder

AI will shift from defense to attack, and organizations will need to prepare. In 2023, we heard a lot about utilizing AI for defensive solutions like intrusion detection and prevention systems. But in 2024, the tables will turn, with AI being used far more often for attack surfaces. Attackers will begin using AI capabilities to harvest the landscape, learning about an individual or enterprise to later generate AI-based attacks. With today’s technology, a bad actor could pick up a phone, pull basic data from LinkedIn and other online sources to mimic a manager’s voice, and perform malicious activities like an organizational password reset. The ability to render sites on the fly based on search can be used for legitimate or harmful activities. As AI and generative AI searches continue to mature, websites will grow more susceptible to being taken over by force. Once this technology becomes widespread, organizations could lose control of the information on their websites, but a fake page’s malicious content will look authentic thanks to AI’s ability to write, build and render a page as fast as a search result can be delivered. Just as they’re doing with Post Quantum Computing, leaders will need to create a strategy to combat AI threats and assure trust for public-facing websites and other key assets. – Mike Nelson, Vice President of Digital Trust at DigiCert

Adoption of Generative AI Tools – the incredibly fast adoption of generative AI tools will lead to new data risks, such as privacy violations, fake AI tools, phishing, etc. Organizations will need to put together AI safety standards to keep their customer and employee data safe. Having a SaaS security solution that can identify connected generative AI tools will be critical. – Ofer Klein, Co-founder and CEO of Reco.ai

Using AI to protect the risk of AI: Artificial intelligence (AI) will play a crucial role in enhancing DSPM capabilities, even managing the risk of AI tools themselves. Whether it’s employees using ChatGPT to write stronger emails, or customers speaking to AI rather than customer service agents, AI feeds off data. AI-driven Data Security Posture Management (DSPM) tools will not only identify critical risks but also recognize and mitigate risks originating from AI systems themselves. – Metomic CEO Rich Vibert

Prediction: API security evolves as AI enhances offense-defense strategies: In 2023, AI began transforming cybersecurity,  playing pivotal roles both on the offensive and defensive security fronts. Traditionally, identifying and exploiting complex, one-off API vulnerabilities required human intervention. AI is now changing this landscape, automating the process, enabling cost-effective, large-scale attacks. In 2024, I predict a notable increase in the sophistication and scalability of attacks. We will witness a pivotal shift as AI becomes a powerful tool for both malicious actors and defenders, redefining the dynamics of digital security. – Shay Levi – CTO and co-founder – Noname Security

Adversaries Will Seed Disinformation By Manipulating LLMs: The tools of the AI revolution are fueled by the power of Large Language Models (LLMs). Adversaries will look for opportunities to seed disinformation and distrust by manipulating the data that underpins them. – Chris Scott, Managing Partner, Unit 42 at Palo Alto Networks 

AI-Based Attacks: No, there won’t be a wave of AI based attacks – while AI has been getting a lot of attention ever since the introduction of ChatGPT, we are not even close to seeing a full-fledged AI based attack. You don’t even have to take my word for it – the threat actors on major cybercrime forums are saying it as well. Hallucinations, model restrictions, and the current maturity level of LLMs are just some of the reasons this issue is actually a non-issue at this point in time. But, we should expect to see LLMs being used for expediting and perfecting small portions or tasks of the attacks, be it email creation, help with social engineering by creating profiles or documents and more. AI is not going to replace AI, but people who know how to use AI will replace those who don’t… – Etay Maor, Senior Director of Security Strategy at Cato Networks

AI will take election hacking to another level in 2024: With major elections in the US, UK, and India coinciding with the mass adoption of Generative AI, we are likely to see AI supercharging election interference in 2024. From the creation of convincing deepfakes, to an increase of targeted misinformation, the concept of trust, identity and democracy itself will be under the microscope. This will put even greater onus on individuals to scrutinize and make informed decisions and on media platforms to root out false content. – Shivajee Samdarshi, Chief Product Officer at Venafi

With the rapid advancement of generative AI, phishing attacks will become more sophisticated and more challenging to detect – the persuasive power of AI-driven content will be particularly alarming. By tailoring messages that resonate strongly with individual targets based on their interests, behaviors, and even fears, AI can manipulate decision-making processes and induce actions that compromise security. In response, there will be widespread adoption of phishing-resistant multi-factor authentication (MFA) methods. These methods, which are more secure than traditional MFA, will become a standard practice, especially in industries handling sensitive data. – Husnain Bajwa, VP of Product Strategy at Beyond Identity

More security vendors ban Generative AI platforms, pushing organizations to embrace contextual Large Language Models : Vendors are increasingly wary of generative AI platforms like ChatGPT, with 23% of vendors banning the use of these platforms internally this year due to security concerns. While generative AI offers the potential for greater efficiencies, its effectiveness hinges on contextual understanding. In 2024, we will see organizations build contextual LLMs based on smaller datasets (small language model?) to enhance the capabilities and security of AI platforms. – Jason Kent, Hacker in Residence at Cequence Security

Deceptive AI Driven Techniques Will Become Prominent in 2024: The level of sophistication in cybersecurity has evolved exponentially over time, but 2023 saw some of the quickest innovations as generative AI became more prominent. Because these tools are often generally available and easily accessible, we must assess the risk it poses to the current cyber landscape. Generative AI is a double edged sword for the cybersecurity industry – it’s used to make defenders faster and more capable, but it’s also doing the same thing for adversaries. Attackers have become more deceptive in their techniques and harder to detect as generative AI gets better and better at impersonating humans, making traditional signs of social engineering harder to identify from the first point of contact. These trends will continue into 2024, and become even more dangerous. It’s important that the industry’s capabilities continue to keep pace with attackers’ use of emerging technologies like generative AI and 5G in the coming year. – Siroui MushegianBarracuda CIO  

AI will play a key and growing role in how organizations analyze and act on security data: We will begin to see quantifiable benefits from the use of AI as it relates to analytics and operational playbooks. These benefits will help bridge some of the heavy lifting that Security Operations Center (SOC) analysts do today. AI will also benefit how response and mitigation capabilities translate into operational ones.” – Mike Spanbauer, Field CTO, Security at Juniper Networks

From Productivity to Peril: AI’s Impact on Identity Security: In the broader context of digital transformation, AI has supercharged productivity like never before and has opened up remarkable possibilities, such as creating realistic images, videos, and text virtually indistinguishable from human-created content. But in the upcoming year, organizations must brace for AI’s double-edged sword. This capacity for hyper-realistic content generation has profound implications, and the rise of Generative AI will turbocharge identity-based attacks. The development of AI is intertwined with a broader landscape of identity-based risks and vulnerabilities, including the growing threat of phishing and spear phishing campaigns, techniques where attackers target a specific person or group and often will include information known to be of interest to the target, which has taken on a new dimension due to the capabilities of AI. As we head into 2024, organizations must stay vigilant, understand the technology’s risks, invest in advanced security measures, and develop a complete picture of their identity infrastructure to stand a chance against threat actors. – Silverfort’s Co-founder and CTO, Yaron Kassner 

AI will drive the adoption of proactive security models. There will be a greater focus on proactive approaches and tools including firewalls, zero trust, malware, and hardening. The top GenAI threat issues are growing privacy concerns, undetectable phishing attacks, and an increase in the volume/velocity of attacks. Addressing the complex security challenges AI poses requires strategic planning and proactive measures. Security is on the map in a way that it hasn’t been in many recent years. – Mike Loukides, Vice President of Emerging Tech Content at O’Reilly Media

Rapid AI adoption will require a new reckoning for security professionals: It’s been a year since ChatGPT hit the scene, and since its debut, we’ve seen a massive proliferation in AI tools. To say it’s shaken up how organizations approach work would be an understatement. However, as organizations rush to adopt AI, many lack a fundamental understanding of how to implement the right security controls for it. In 2024, security teams biggest challenge will be properly securing the AI tools and technologies their organizations have already onboarded. We’ve already seen attacks against GenAI models such as model inversion, data poisoning, and prompt injection; and as the industry adopts more AI tools, AI attack surfaces across these novel applications will expand. This will pose a couple challenges: refining the ways AI is used to help improve efficiency and threat detection while grappling with the new vulnerabilities these tools introduce. Add in the fact that bad actors are also using these tools to help automate development and execution of new threats, and you’ve created an environment ripe for new security incidents. Just like any new technology, companies will need to balance security, convenience, and innovation as they adopt AI and ensure they understand the potential repercussions of it. – Dr. Chaz Lever, senior director, security research at Devo

Consumers will become more concerned about AI: A recent survey found that over half (51%) of consumers have been a victim of identity theft or know someone who has, and 81% are worried about AI-based fraud while shopping online. Consumers and businesses have a societal duty to protect themselves and one another, calling for more security tools to protect personal information. There also must be greater education from business-to-consumer levels to avoid compromised data, such as when shopping online and for shoppers saving their payment information on retailer websites. The importance of authentication tools, such as mobile authentication, will only go up from here. – Tim Brown, global identity officer at Prove Identity

Beware the Weaponization of Generative Artificial Intelligence in 2024: The top threat this year and going forward involves the weaponization of generative AI to drive more sophisticated phishing attacks, and how we will address that concern from a security standpoint. We know that human training is not enough to prevent business email compromise (BEC) attacks from succeeding. According to the FBI’s Internet Crime Report, BEC alone accounted for approximately $2.7B in losses in 2022, and another $52M in losses from other types of phishing. With rewards like this, cybercriminals are increasingly doubling down on phishing and BEC attempts – and generative AI is only further greasing the wheels. In 2024 we will see more, not less, of such human compromise attacks that are a lot more sophisticated and targeted due to the use of gen AI. We will need to rethink our roadmaps as to how we can counter this problem. We should expect an acceleration of gen AI-based attacks becoming more prevalent and targeted, and unfortunately more successful. The attackers are moving from a spray-and-pray approach that relied on high-volume phishing emails, to now instead targeting people with specific information about someone’s identity or bank account or personal details, which makes the scams much more convincing. We will see a significant increase in both the targeted nature of these social engineering attacks and their sophistication, and ultimately their success. Email will continue to be the top threat vector, but we are seeing these attacks anywhere now, including text messages, voice messages, work collaboration tools like Slack and social media. Anywhere you can get messaged on both the personal and business side, you can get attacked. – Patrick Harr, CEO of SlashNext 

Secure data sharing becomes the linchpin in robust and resilient Generative AI-driven cyber defenses: Generative AI is a dual-use technology with the potential to usher humanity forward or, if mismanaged, regress our advancements or even push us toward potential extinction. APIs, which drive the integrations between systems, software, and data points, are pivotal in realizing the potential of AI in a secure, protected manner. This is also true when it comes to AI’s application in cyber defenses. In 2024, organizations will recognize that secure data sharing is essential to building a strong, resilient AI-powered future. While AI is undoubtedly a testament to human ingenuity and potential, its safe and ethical application is imperative. It’s not merely about acquiring AI tools; it’s the responsibility and accountability of secure integration, primarily when facilitated through APIs. – Ameya Talwalkar, CEO and Founder of Cequence Security

AI-Driven Attacks and Defenses: Cybercriminals will increasingly use artificial intelligence (AI) to automate and enhance their attacks. In response, cybersecurity defenses will rely more on AI and machine learning for threat detection and automated incident response, creating a continuous battle of algorithms. – Joseph Carson, Chief Security Scientist and Advisory CISO at Delinea

Threat actors will win the AI battle in 2024: The rise of generative AI has ignited a critical debate. Will organizations harness generative AI in time, or will threat actors exploit faster to gain an advantage? Unfortunately, the scales will tip in favor of the dark side as threat actors outpace organizations in adopting generative AI. Brace for a relentless onslaught of deepfakes, sophisticated phishing campaigns, and stealthy payloads that evade endpoint security defenses. These challenges will test the mettle of cybersecurity defenders like never before. – Dr. Aleksandr Yampolskiy, Co-Founder and CEO of SecurityScorecard

Hackers will start to leverage AI, creating a crisis: While this is an unintended consequence of great technology innovation, we will start to see evidence of the damage that AI in the wrong hands can cause. AI tools can help hackers increase their productivity by an order of magnitude. This will enable them to plant malware, find vulnerabilities, and exploit weak posture much faster. – Ratan Tipirneni, President & CEO at Tigera

AI is already providing a tremendous advantage for our cyber defenders, enabling them to improve capabilities, reduce toil and better protect against threats. We expect these capabilities and benefits to surge in 2024 as the defenders own the technology and thus direct its development with specific use cases in mind. In essence, we have the home-field advantage and intend to fully utilize it. On the other hand, while our frontline investigators saw very limited use of attackers using AI in 2023, in 2024 we expect attackers to use generative AI and LLMs to personalize and slowly scale their campaigns. They will use anything they can to blur the line between benign and malicious AI applications, so defenders must act quicker and more efficiently in response. – Phil Venables, CISO, Google Cloud 

With the risk of cybersecurity attacks on the rise, in 2024, it is crucial for governments to take a proactive approach to their security, making sure that their official channels of communication to their residents are not exploited or affected, and to be very deliberate with any sensitive information that they obtain from their residents in the first place. The major piece that everyone should be looking for is the ability to set up Multifactor Authentication to make it as hard as possible for a threat actor to get into a potential communication system like Facebook or X (formerly Twitter). – Ben Sebree, Senior VP of R&D at CivicPlus

New Malicious Uses of Generative AI Will Emerge: AI fears are warranted, but not in the way we expect. Content creation, while a risk to cybersecurity, is one that our modern solutions can address. The real threat is generative AI developing the ability to plan and orchestrate attacks. If that were to happen, it would mean that AI could design and execute attacks on the fly—and do so using information on the Internet. Generative AI promises to erase the greatest advantage we have over our adversaries: the time and resources required for sophisticated attacks. If generative AI can orchestrate attacks, it would shift the balance of power dramatically. Today, it takes hackers weeks to discover our vulnerabilities. Tomorrow, AI could do the same in a matter of seconds or minutes. And rather than requiring a team of hackers with diverse skill sets, it could only take just one person working with AI. – Adrien Gendre, Chief Product and Technology Officer at Vade

It’s easy to look at the cybersecurity implications of bad actors fine-tuning LLMs for nefarious purposes through an extremely negative lens. And while it is true that AI will enable hackers to scale the work that they do, the same holds true for security professionals. The good news is national governments aren’t sitting still. Building custom LLMs represents a viable path forward for other security-focused government agencies and business organizations. While only the largest well-funded big tech companies have the resources to build an LLM from scratch, many have the expertise and the resources to fine-tune open source LLMs in the fight to mitigate the threats bad actors—from tech-savvy teenagers to sophisticated nation-state operations—are in the process of building. It’s incumbent upon us to ensure that whatever is created for malicious purposes, an equal and opposite force is applied to create the equivalent toolsets for good. – Aaron Mulgrew, Solutions Architect, Forcepoint

Since 2022, we’ve witnessed a notable transformation in the data security and regulatory compliance technology landscape. This shift has marked the onset of technology consolidation that is expected to persist in the coming years. Niche and single-solution products and vendors are increasingly sought after for acquisitions and partnerships as consumers seek solutions to meet data security and regulatory requirements while minimizing required expertise, costs, and effort. What will make 2024 particularly interesting are the recent developments and acquisitions that currently suggest vendors will take some diverging paths in the new year – some will prioritize enhancing their cloud capabilities, while others will merge existing technologies for a consolidated, all-in-one offering. Others are simply looking to answer questions of data risk. There will be overlaps across these developments, but the ultimate winner will be consumers, who will see substantial growth in enterprise data asset coverage, reduced skill requirements, and improved synergy among technologies that were traditionally segmented. – Terry Ray, SVP Data Security GTM, Field CTO of Imperva

Trust in data and AI will be paramount to making smart decisions, but that trust can only come through understanding. CEOs will need to understand how their company collects and structures data, where their data infrastructure could improve, and its limitations to be able to effectively use AI in 2024. That means data infrastructure, quality, security and integrity can’t simply be delegated to the CTO, CIO or CDO. CEOs must be intimately familiar with what they are putting into AI, in order to act on what comes out of it with the appropriate context. – Allison Arzeno – CEO, Assurance IQ

The surging investments in AI will trigger a momentous shift in AI security, reshaping the landscape of technological safeguarding: In 2024, as the investment in AI continues to surge, a pivotal shift will unfold in the realm of AI security. With AI models, particularly large language models and generative AI, being integrated into every facet of the software chain across diverse industries, the demand for safeguarding these technologies against evolving threats like prompt injection and other malicious attacks will reach unprecedented levels. Despite the relative novelty of these advancements, the imperative for stringent security measures will gain traction, marking a watershed moment in the journey of AI technology. As we continue to grapple with the uncharted territory of immense data and new challenges, we will witness a concerted effort to fortify the boundaries and ensure the responsible growth of this transformative technology. – JP Perez-Etchegoyen, CTO, Onapsis 

In 2024, there will be a transition to AI-generated tailored malware and full-scale automation of cyberattacks: Cybersecurity teams face a significant threat from the rapid automation of malware creation and execution using generative AI and other advanced tools. In 2023, AI systems capable of generating highly customized malware emerged, giving threat actors a new and powerful weapon. In the coming year, the focus will shift from merely generating tailored malware to automating the entire attack process. This will make it much easier for even unskilled threat actors to launch successful attacks. – Adi Dubin, Vice President of Product Management, Skybox Security

Generative AI will become the largest proliferator of shadow IT: Traditional concerns around shadow IT revolved primarily around cost-control, but this year, with the unsanctioned use of generative AI services rapidly growing within the enterprise, that risk has now expanded to exposure of intellectual property and customer data being exposed outside of your organization. In 2024, we can expect to see businesses without strong AI compliance policies and visibility into the tools being used by their employees experience higher rates of PII exposure. I also expect to see at least a couple of incidents of proprietary source code being inadvertently used to train AI models not under IT’s control. My hope is that this will be a significant wake-up call for tech teams and business leaders at large about the urgent need for a proactive, enforced plan around responsible generative AI use. – Heath Thompson, President & GM, Quest Software

AI-Centric Surveillance Systems: Safety and Security: In the case of a security incident, traditional video surveillance systems require someone to review many hours of footage to find key incidents, a time-consuming process which can delay response. The video surveillance industry is poised to transform to AI-driven security systems. Traditional video surveillance systems are evolving into comprehensive AI security solutions. These systems will record video footage, but will also do a lot more to enhance safety and security. This shift reflects the fact that customers are less interested in video and more concerned about preventing and addressing security issues. Leveraging machine learning, algorithms, and computer vision, AI safety and security systems will efficiently process and interpret video content, enabling real-time threat detection. These AI-driven security systems are set to become the norm, delivering intelligent, proactive solutions that minimize problems and enhance overall security across various types of environments, including homes, businesses and government agencies. – Dean Drako, CEO of Eagle Eye Networks

The emergence of “poly-crisis” due to pervasive AI-based cyber-attacks: We saw the emergence of AI in 2022, and we saw the emergence of misuse of AI as an attack vector, helping make phishing attempts sharper and more effective. In 2024, I expect cyberattacks to become pervasive as enterprises transform. It is possible today to entice AI enthusiasts to fall prey to AI prompt injection. Come 2024, perpetrators will find it easier to use AI to attack not only traditional IT but also cloud containers and, increasingly, ICS and OT environments, leading to the emergence of a “poly-crisis” that threatens not only financial impact but also impacts human life simultaneously at the same time in cascading effects. Critical Computing Infrastructure will be under increased threat due to increasing geo-political threat. Cyber defense will be automated, leveraging AI to adapt to newer attack models. – Agnidipta Sarkar, VP CISO Advisory, ColorTokens

Security programs for generative AI: As companies begin to move generative AI projects from experimental pilot to production, concerns about data security become paramount. LLMs that are trained on sensitive data can be manipulated to expose that data through prompt injections attacks. LLMs with access to sensitive data pose compliance, security, and governance risks. The effort around securing LLMs in production will require more organizational focus on data discovery and classification – in order to create transparency into the data that ‘feeds’ the language model. – Dan Benjamin, CEO and Co-Founder of Dig Security

Generative AI will augment, not replace, SOC analysts in cybersecurity: As the cybersecurity landscape evolves, generative AI’s role within Security Operations Centers (SOCs) will be characterized by augmentation rather than replacement of human analysts due to its maturity limitations. Gen AI will primarily assist and enhance the capabilities of SOC staff with the necessary expertise to interpret its output, proving especially valuable for mid-level analysts. Organizations will need to discern genuine gen AI contributions amid marketing hype, and the debate between investing in more technology like gen AI or hiring additional SOC analysts will persist, with the human factor remaining crucial. Success will depend on aligning these tools with analyst workflows rather than relying on superficial intelligence. –  Andrew Hollister, CISO & VP Labs R&D, LogRhythm

By the end of 2024, 95% of consumers in the U.S. will have fallen victim to a deepfake: Every company and consumer is jumping on the AI bandwagon, and fraudsters are no exception. Cybercriminals have previously found ways to cheat the system. Earlier in 2023, they were found bypassing ChatGPT’s anti-abuse restrictions to generate and review malicious code. Now, ChatGPT is fully connected to the internet and has the ability to generate images — a recipe for the perfect deepfake. In 2023, 52% of consumers believed they could detect a deepfake video, reflecting an over-confidence in consumers. Deepfakes have become highly sophisticated and practically impossible to detect by the naked eye, and now generative AI makes their creation easier than ever. Misinformation is already spreading like wildfire, and deepfakes will only get more complicated with the upcoming elections. By the end of 2024, the vast majority of U.S. consumers will have been exposed to a deepfake, whether they knew it to be synthetic media or not. – Stuart Wells, CTO, Jumio

Despite what headlines might suggest, based on what we’ve seen first-hand in our security operations center, the greatest risks to company networks in 2024 won’t come from attacks shaped by AI. AI promises the power to enable both defenders and attackers alike, but we see cyberattacks actually progress the furthest when visibility or security controls are lacking. Why is this noteworthy? While the hype and subsequent fear around AI as of late will only continue, the biggest threats will result from existing vulnerabilities and insecure configurations—and attackers don’t need AI to exploit those. – Aaron Walton, Threat Intel Analyst, Expel

AI will only make it easier for bad actors to carry out targeted attacks in 2024. Today’s attacks are simple, and they’re continuing to target the shortcomings that we’ve failed to remedy from the beginning (too many connections with not enough visibility or insight into what’s communicating with what). More widespread adoption of Zero Trust and an “assume breach” approach is really the only way we’ll make progress when it comes to building cyber resilience. Otherwise, we’ll continue to see massive losses in cyberspace. – Paul Dant, Senior Director, Cybersecurity Strategy & Research at Illumio

Storage

AI will accelerate storage and security requirements: By nature, generative AI models produce a vast amount of data. Because of this, in the upcoming year organizations can expect to see a surge in their data storage and security needs, leading to investments in scalable storage solutions, whether on-premises, cloud-based, or hybrid. The dynamic and continuous production of data generated by AI will necessitate more frequent backup cycles, and enterprises will need to implement more robust data lifecycle management solutions to determine data retention, archival, or deletion policies, ensuring that only valuable data is stored long-term. Ensuring the integrity of backups will also be paramount given the business-critical nature of AI-generated insights. Given that AI-generated data can be sensitive and critical, heightened security measures will be the last piece to the accelerated storage puzzle, meaning data security will need to be weaved into the fabric of all generative AI projects, including prevention, detection, and data recoverability. – Tony Liau, VP of Product at Object First

Data and data storage strategies will be reimagined – Data is everywhere, so enterprises need to radically rethink the underlying infrastructure supporting it. Datasets are too large, transfer costs are too great, and the potential misuse of data continues to expand. As a result, enterprises will take a new approach to sourcing, securing, transferring, and ensuring governance and compliance of the large-scale datasets needed to power future AI/ML applications. – Kevin Cochrane, Chief Marketing Officer at Vultr

We Finally Overcome the Data Silo Problem: In 2024, organizations will increasingly adopt parallel global file systems to truly realize digital transformation. File systems are traditionally buried into a proprietary storage layer, which typically locks them and an organization’s data into a storage vendor platform. Moving the data from one vendor’s storage type to another, or to a different location or cloud, involves creating a new copy of both the file system metadata and the actual file essence. This proliferation of file copies and the complexity needed to initiate copy management across silos interrupts user access, and is a key problem that inhibits IT modernization and consolidation. The traditional paradigm of the file system trapped in vendor storage platforms is inconvenient within silos of a single data center. But the increasing migration to the cloud has dramatically compounded the problem, since it is typically difficult for enterprises with large volumes of unstructured data to move all of their files entirely to the cloud. Unlike solutions that try to manage storage silos and distributed environments by shuffling file copies from one place to another, a high performance parallel global file system that can span all storage types, from any vendor, and across one or more locations and clouds is more effective. – Molly Presley, SVP of Marketing, Hammerspace 

From Specialized Storage to Optimized Commodity Storage for AI Platform: The growth of AI workloads has driven the adoption of specialized high-performance computing (HPC) storage optimized for speed and throughput. But in 2024, we expect a shift towards commoditized storage. Cloud object stores, NVMe flash, and other storage solutions will be optimized for cost-efficient scalability. The high cost and complexity of specialized storage will give way to flexible, cheaper, easy-to-manage commodity storage tailored for AI needs, allowing more organizations to store and process data-intensive workloads using cost-effective solutions. – Haoyuan Li, Founder and CEO, Alluxio

Freeing up money from storage for AI and other critical IT projects: Dramatically reducing the costs of enterprise storage frees up IT budget to fund other significant new projects, such as AI projects, cybersecurity-related projects, or other strategic activities. This trend for 2024 will play a pivotal role in enterprises where there will be pressure to accelerate AI projects for the next stage of digital transformation, as well as to improve cybersecurity against more sophisticated AI-driven cyberattacks. With IT budgets projected by Gartner to grow by 8% in 2024, funding for these new projects will need to come from somewhere. A smart approach to shifting IT spending internally that is taking hold is to remove costs from storage, while simultaneously improving storage. It sounds like a paradox at first sight, but the trend in 2024 will be to take advantage of three key factors that make this “paradox” an actual reality: (1) storage consolidation onto a single, scalable high availability and high performance platform, (2) autonomous automation, and (3) pay-as-you-go, flexible consumption models for hybrid cloud (private cloud and public cloud) implementations of storage. Storage consolidation, for example, replaces 20 storage arrays with one or two storage arrays that can scale into the multi-petabyte range with 100% availability guaranteed. Having fewer arrays immediately lowers costs in terms of IT resource and storage management, power, cooling, and space. This cost savings can be used for critical IT projects. Autonomous automation simplifies storage, intelligently automating processes and how to handle applications and workloads. Virtually no human intervention is needed. The storage systems run themselves, enabling a set-it-and-forget-it mode of operations. IT staff can focus on more value-adding activities and building AI capabilities into the infrastructure and across the enterprise. Leveraging flexible consumption models to pay for storage, as needed and as efficiently as possible, lowers CAPEX and OPEX, freeing up money for these other IT projects. An extension of this trend is also to invest in enterprise storage solutions that deliver an ROI in one year or less, optimizing the budget. – Eric Herzog, CMO of Infinidat

AI has accelerated the realization that every enterprise is a data-centric enterprise. The implications will manifest themselves in 2024. A core one will be that a new, object storage-centric architecture will be required to power the AI datalake. This is yet another blow for the SAN/NAS appliance vendors who have struggled to adapt to the cloud and now face scale requirements that they simply cannot meet. This will also serve as the final nail in the coffin for Hadoop/HDFS which will go the way of SwiftStack in 2024.
Ultimately, this will be a positive development as these changes are overdue but they will have the effect of shifting the power in the enterprise from IT to the developer/data science teams. – Ugur Tigli, CTO, MinIO

Synthetic Data

AI-Generated Data: Data has been viewed as a trustworthy and unbiased way to make smart decisions. As we tackle the rise of AI generated data, organizations will need to spend time and oversight validating the data or risk hallucinated results. Another large implication with these data sets is the risk of data being modified in cyberattacks – the results of which would be catastrophic. We rely on correct data to vote, receive government services, login to our work devices and applications and make informed, data-driven decisions. If an organization or governments data has been modified by threat actors or we place too much trust in AI generated data without validation, there will be widespread consequences. – Ivanti’s Chief Product Officer, Sri Mukkamala 

Verticals/Applications

Strong data engines will make financial data movement possible: Financial organizations are just starting to realize the potential their data holds, using it for guidance in financial planning and analysis, budgetary planning, and more. However, much of this data is still siloed, and we have reached the point where these organizations have so much of this data, that they need to start thinking about how it can bring value to the company or risk losing their competitive advantage. In 2024, we will see finance organizations seek to classify and harmonize their data across repositories to enable new solutions. In response, data engines, data platforms, and data lakes will be just a few tools that will become crucial to understanding and utilizing such data effectively. As a result, we can expect to see the growth of fintech applications to enable this aggregated data analysis, reporting, and visualization to take place. – Bernie Emsley, CTO, insightsoftware

In 2024, we will see the true impact of AI on the role of attorneys—the good & the bad – In 2023, the Stanford Center for Legal Informatics successfully trained GPT-4 to pass the Bar exam. The AI model passed not only the multiple choice and written portions, but it scored in the 90th percentile, surpassing the average scores of human bar exam takers. There’s no doubt generative AI is becoming an asset to the workforce as it continues to improve. Within the next year, attorneys’ roles will transform as many previously manual, tedious tasks can be done with AI, so legal teams can focus on the strategic, proactive, risk-mitigating work that only humans with deep expertise can do. On the flip side, ChatGPT and other Large Language Models (LLMs) will increasingly raise legal and ethical issues such as copyright violations, intellectual property (IP) lawsuits and misinformation from AI hallucinations. We will see a rising number of lawsuits in 2024 stemming from these issues, keeping attorneys busy. – Sirion’s Evangelist, Ceschino Brooks de Vita 

The quality and reliability of data in life sciences is paramount. In 2024, AI might significantly improve data cleansing, validation, and enrichment processes. This will ensure that data for critical decision-making is accurate, reliable and reflects the latest scientific knowledge. The integration of disparate data sources will also become more sophisticated. AI will play a key role in harmonizing data from various formats and sources, including electronic health records (EHRs), genomic data, and wearables. This seamless integration will support more comprehensive and personalized healthcare insights. In the coming year, I also expect AI in life sciences to prioritize data governance. AI technologies will likely become more adept at ensuring data privacy and security, managing consent, and automating compliance processes. This will be crucial for maintaining trust and adhering to strict industry standards. – Timothy Martin, Executive Vice President of Product & Development at Yseop

The future of the AI biomedical worker: As the fields of AI and bioscience grow more deeply intertwined, there will be a greater influx of hybrid engineer-scientists – workers with a dual Ph.D. in biology and computer science. This will bridge the gap between the pharma and tech industries, supplying the workforce with an army of specialized workers trained to perform in both specialties simultaneously. – Amaro Taylor-Weiner, Chief AI Officer, Absci

AI will be the driving force behind the cultivation of a continuous learning culture within contact centers in the coming year, enhancing agents’ critical thinking abilities. Recognizing the role of adaptability, contact center managers will allocate funds to training initiatives that empower agents to adjust to evolving challenges, and recognize these skills as essential for future productivity. More than 60% of managers feel that critical thinking is a top skill needed by the agents of the future. Recruitment strategies will pivot towards individuals exhibiting robust critical thinking skills and a proactive willingness to continuously acquire new skills. – Dave Hoekstra, Product Evangelist, Calabrio 

The expectations for AI are not clear-cut because AI is not “one thing,” and there is more to come. In some industries, like healthcare, AI will play a crucial role in augmenting the care we receive, not replacing it. The healthcare industry as a whole is only just getting started with AI and, in 2024, both generative AI and computer vision continue to have a tremendous impact. The focus will be on creating cohesive, engaging experiences that go beyond simple transactions and that enable experienced clinicians to provide a higher quality of care. I expect to see more thoughtfully integrated experiences that leverage AI but are more than just chatting to a bot. AI will be adapted and shapeshifted to more deeply target industry needs and enhance the richness of human interactions. Each new development will offer new solutions with its own shockwaves. – Gabriel Mecklenburg, Co-Founder & Executive Chairman at Hinge Health

To secure the future of LLMs in healthcare, we must establish structured training frameworks that prioritize clinical relevance, accountability, and transparency. LLMs should be trained to align EHR concepts with validated medical ontologies and terminologies, and their natural language capabilities should be coupled with references to authoritative coded data sources. – David Lareau, CEO, Medicomp Systems

In the coming year, incredibly large datasets like those used to train most headline-driving AI models will simply not be the norm due to high costs and time to acquire clinical data. High quality and holistic data acquired through deeply sampling individual patients and study participants – especially if that data is multimodal and can be multiplexed or reused for multiple concurrent applications – is far more common and, honestly, more valuable when it can be acquired within normal clinical workflows. – Sean Tobyne, Vice President, Data Science and Analytics at Linus Health

I believe 2024 will see the continued swirling of exuberance about Generative AI in health care applications along with a greater focus on governance of their use. The FDA is going to ramp up scrutiny of these applications. The cloud companies will continue to get a boost from the broader adoption of AI as we use lots of computing power to process big data for its use. Look for some breakout uses of AI to help health care organizations address their workforce challenges. – Nancy Pratt, Senior Vice President of Clinical Product Development at CliniComp

I predict generative AI will continue to penetrate all industries and help automate processes that used to be manual. This will increase the demand for engineers as it will result in more automatable work being unearthed and have a significant impact on productivity. The race to hire more engineers is on. In HR specifically, AI will take an active role in how companies attract, hire, manage, and retain top talent at a global scale. I also think generative AI will be an effective tool for providing continuous knowledge and risk mitigation when it comes to global workforce management. If you are not leveraging generative AI, you will get run over by the competition. – Siddharth Ram, CTO, Velocity Global 

Generative AI (GenAI) proliferation will continue, and corporations will demand that their service providers demonstrate savings through the use of technology, specifically by using AI. There will be increasing investment in GenAI tools, affecting all areas of the legal functions. GenAI tools will support both legal operations (e.g., simplifying timekeeping), as well as enhance substantive functions (e.g., drafting motions based on past work products). We’ll also see investments in tactical tools, such as AI taking on the persona of an opposing counsel to prepare for cross-examination or a judge to predict a ruling on a motion. Technology adoption will no longer be an option, and strategic technology deployment will greatly impact the practice of law. – James Park, AI Consulting Director at DISCO 

Sign up for the free insideAI News newsletter.

Join us on Twitter: https://twitter.com/InsideBigData1

Join us on LinkedIn: https://www.linkedin.com/company/insidebigdata/

Join us on Facebook: https://www.facebook.com/insideAI NewsNOW

Comments

  1. A lot of info. Thank you very musch!

  2. Informative predictions for the big data industry in 2024! Valuable insights for staying ahead in the ever-evolving tech landscape.