2022 Trends in Artificial Intelligence and Machine Learning: Reasoning Meets Learning

For most organizations, the bifurcation of Artificial Intelligence has been as stark as it’s been simplistic. AI was either machine learning or rules-based approaches (with the former far outnumbering the latter), either supervised or unsupervised learning, either computer vision or natural language technologies.

Due to a number of developments in the past year around ModelOps, composite AI, and neuro-symbolic AI, there’s a growing awareness throughout the enterprise that AI—and its ROI—not only involves each of these dimensions, but performs best when they operate in conjunction with each other to pare the costs, difficulty, and time they otherwise require.

2022 will usher in a wealth of use cases in which converging AI’s respective connectionist and reasoning approaches, as well as the array of learning methodologies spanning supervised and unsupervised learning, renders the efficiency and scope of these technologies transformational for everyday business needs.

According to expert.ai CTO Marco Varone, “There are situations where you can get better results combining the different approaches; there are situations where you can use both and it’s not too different, and there are situations where it’s better with one approach.”

By incorporating the full AI spectrum into their toolkits, organizations can not only deploy the most appropriate method for their cognitive computing tasks, but also exploit surrounding areas of opportunity like intellectual property for machine learning models, cloud or Internet of Things use cases, and explainable AI.

“The future is what we call a hybrid or composite approach where you use all the techniques that are available and you put them together in a way that the end user or data scientist trying to solve a specific problem can take different techniques and decide to use the ones giving the best results,” Varone predicted.

Composite AI

As Varone implied, composite AI is based on employing the most suitable AI methodology for a specific use case. Such options include techniques for semantic inferencing and knowledge graphs alongside “text analytics, supervised learning, the traditional machine learning like predictive modeling, forecasting, the optimization piece, and [neural] networks,” commented Wayne Thompson, SAS Chief Data Scientist. Neuro-symbolic AI, however, synthesizes AI’s statistical and reasoning capabilities in the same deployment. These possibilities represent a consummation of AI’s multifaceted utility, with significant enterprise repercussions, including:

  • Natural Language Technologies: Amalgamating AI’s connectionist and symbolic reasoning approaches supports a broad array of natural language applications, including conversational search, natural language generation, and ad-hoc question answering. Symbolic reasoning can abbreviate the labeling process for constructing machine learning models; machine learning can populate the enterprise knowledge that reasoning systems draw on; and that knowledge can inform the feature generation process for machine learning. “Starting knowledge in a practical way so you can reuse it is a way to have a more efficient, scalable, maintainable approach to solve language understanding problems,” Varone observed.
  • Feature Engineering: Rules devised from enterprise knowledge accelerate the feature engineering process for high-value use cases on transactional systems, for example. “You can take the top 100 rules for a set of transactions, then you can transpose those and use them as features for what customers are buying and use that for predictions,” Thompson mentioned. A minimal sketch of this rules-to-features pattern appears after this list.
  • Routing and Network Optimization: By creating a set of constraints based on rules, organizations can optimize supply chain deliveries, routes, and physical networks by employing machine learning as what Franz CEO Jans Aasman termed a “feedback loop mechanism,” for better, timelier results with each pass.
  • Security: Combining computer vision’s neural network approaches with text analytics encompassing taxonomies and machine learning can protect organizations from deepfakes and fake news, precluding the success of costly phishing attacks.
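
To make the rules-to-features idea in the feature engineering bullet concrete, here is a minimal sketch in Python. The rule names, thresholds, scikit-learn classifier, and toy transactions are illustrative assumptions rather than details from the article; in practice the rules would be drawn from enterprise knowledge such as a rules engine or knowledge graph.

```python
# Hypothetical sketch: transpose business rules into binary features for a classifier.
from sklearn.linear_model import LogisticRegression

# Illustrative rules over a transaction record; in a real deployment these would
# come from enterprise knowledge (a rules engine, taxonomy, or knowledge graph).
RULES = {
    "high_value": lambda t: t["amount"] > 500,
    "weekend_purchase": lambda t: t["day_of_week"] in ("Sat", "Sun"),
    "repeat_category": lambda t: t["category"] == t["last_category"],
}

def rules_to_features(transaction):
    """Evaluate every rule against one transaction and return a 0/1 feature vector."""
    return [int(rule(transaction)) for rule in RULES.values()]

# Toy training data: each transaction is "transposed" into a row of rule outcomes.
transactions = [
    {"amount": 800, "day_of_week": "Sat", "category": "electronics", "last_category": "electronics"},
    {"amount": 40, "day_of_week": "Tue", "category": "grocery", "last_category": "electronics"},
]
will_buy_again = [1, 0]  # illustrative labels for a "what customers are buying" prediction

X = [rules_to_features(t) for t in transactions]
model = LogisticRegression().fit(X, will_buy_again)
print(model.predict(X))
```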

Machine Vision

The dissolution of the perceived boundaries between reasoning systems and learning systems, text and images, and supervised and unsupervised learning includes processes contiguous to computer vision—which is becoming more pervasive throughout the enterprise. Numerous medical facilities use computer vision to analyze images for diagnosis or treatment options. Thompson described a use case in which practitioners analyzing radiology images employed a taxonomy for named entity recognition on the patient notes, which word embeddings and contrastive learning can supplement for enhanced results. “You should always use rules with machine learning,” Thompson advised.
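
A minimal sketch of the taxonomy-driven side of that workflow follows, assuming a hand-built dictionary of clinical terms; the terms, entity types, and note text below are invented for illustration, and in the use case Thompson describes, word embeddings and contrastive learning would supplement matches like these.

```python
# Hypothetical sketch: taxonomy-based named entity recognition over a patient note.
import re

# Invented taxonomy entries mapping clinical terms to entity types.
TAXONOMY = {
    "pulmonary nodule": "Finding",
    "pneumothorax": "Finding",
    "chest x-ray": "Procedure",
}

def tag_entities(note):
    """Return (matched text, entity type, character offset) for each taxonomy hit."""
    hits = []
    for term, entity_type in TAXONOMY.items():
        for match in re.finditer(re.escape(term), note, flags=re.IGNORECASE):
            hits.append((match.group(0), entity_type, match.start()))
    return sorted(hits, key=lambda hit: hit[2])

note = "Chest X-ray shows a small pulmonary nodule; no pneumothorax."
print(tag_entities(note))
```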

In other instances, organizations can avail themselves of transfer learning approaches that “are equally applicable to image and textual content,” revealed Indico Data CEO Tom Wilde. Such platforms contain a deep learning foundation composed of hundreds of models pre-trained for text or image deployments. Wilde referenced a national waste hauling company using this method to remotely evaluate dumpsters “to determine the state of the container, i.e. was the lid closed, was it halfway open, was it spilling over. They use this to adjust pickup routes, invoicing, and other things with this huge stream of intelligence from these two billion pickups a year they do.”
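
The pattern Wilde describes can be sketched generically in PyTorch: keep a backbone pre-trained on ordinary images frozen and train only a new classification head for the domain task. The three lid-state classes, the ResNet-18 backbone, and the dummy batch below are assumptions for illustration, not Indico’s actual implementation.

```python
# Hypothetical transfer learning sketch: frozen pre-trained backbone, new trainable head.
import torch
import torch.nn as nn
from torchvision import models

NUM_STATES = 3  # illustrative classes: closed, halfway open, spilling over

backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
for param in backbone.parameters():
    param.requires_grad = False                     # freeze the pre-trained features

backbone.fc = nn.Linear(backbone.fc.in_features, NUM_STATES)  # new head for lid states

optimizer = torch.optim.Adam(backbone.fc.parameters(), lr=1e-3)
loss_fn = nn.CrossEntropyLoss()

# One illustrative training step on a dummy batch of images.
images = torch.randn(4, 3, 224, 224)
labels = torch.tensor([0, 2, 1, 0])
optimizer.zero_grad()
loss = loss_fn(backbone(images), labels)
loss.backward()
optimizer.step()
```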

Explainable AI

Explainable AI is a precursor to responsible AI, a tenet that assuages a bevy of compliance and legal concerns. Merging AI’s knowledge foundation with its statistical one offers peerless explainability, whereas “with pure machine learning that is something that is very difficult to do,” Varone cautioned. Reasoning systems are based on words, which are more comprehensible to most people than numbers are. With rules, there are clear explanations linking outputs to inputs. Such explainability “has two key benefits: for the end user and the developer,” specified Kyndi CEO Ryan Welsh. “When you return search results you can link back to the underlying data sources where you got the answers from so users can see this data in context. For developers, they can see which cognitive strategies are working to answer questions, so they can easily edit ones getting wrong answers to optimize them.”
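
A minimal sketch of why rule-based systems are straightforward to explain: each answer can carry the rule that produced it and the source it links back to, serving both the end user and the developer Welsh mentions. The rules, sources, and query below are hypothetical, not drawn from Kyndi’s product.

```python
# Hypothetical sketch: a rule-based answer that carries its own explanation.
RULES = [
    {"name": "refund_window", "pattern": "refund",
     "answer": "Refunds are available within 30 days.",
     "source": "policies/billing.pdf, page 2"},
    {"name": "contract_renewal", "pattern": "renewal",
     "answer": "Contracts auto-renew after 12 months.",
     "source": "policies/contracts.pdf, page 4"},
]

def answer(query):
    """Return the answer along with the rule and source document that produced it."""
    for rule in RULES:
        if rule["pattern"] in query.lower():
            return {"answer": rule["answer"], "rule": rule["name"], "source": rule["source"]}
    return {"answer": None, "rule": None, "source": None}

print(answer("What is the refund policy?"))
# {'answer': 'Refunds are available within 30 days.', 'rule': 'refund_window',
#  'source': 'policies/billing.pdf, page 2'}
```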

Several model management tools facilitate explainable AI for learning systems. The top ones “give them a complete set of metrics around a model’s performance and expose what actually was used when that model was generated,” Wilde remarked. There are even mechanisms for explaining the progression of models over time, providing visibility into the underlying “architecture, the framework, and the model life so, as you create new versions of that model, it keeps track of that,” Wilde summarized.
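
A rough sketch of the bookkeeping such tools perform, built only on the Python standard library: each new version of a model records its framework, architecture, and performance metrics so the lineage Wilde mentions stays visible. The registry class, model name, and metric values are invented for illustration.

```python
# Hypothetical sketch of model-version bookkeeping for explainability and auditing.
from dataclasses import dataclass, field
from datetime import datetime

@dataclass
class ModelVersion:
    version: int
    framework: str
    architecture: str
    metrics: dict
    created_at: datetime = field(default_factory=datetime.utcnow)

class ModelRegistry:
    """Keeps track of every version of a named model and what produced it."""
    def __init__(self, name):
        self.name = name
        self.versions = []

    def register(self, framework, architecture, metrics):
        entry = ModelVersion(len(self.versions) + 1, framework, architecture, metrics)
        self.versions.append(entry)
        return entry

registry = ModelRegistry("claims-classifier")
registry.register("pytorch", "resnet18", {"accuracy": 0.91, "auc": 0.95})
registry.register("pytorch", "resnet50", {"accuracy": 0.93, "auc": 0.96})
for v in registry.versions:
    print(v.version, v.architecture, v.metrics)
```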

ModelOps

Accounting for the performance of models in real-time in relation to bias, compliance, and data governance is one of the hallmarks of ModelOps (which also acknowledges the necessity of rules, knowledge graphs, and inference techniques for AI). This capability is epitomized by remote deployments via the cloud for Internet of Things and edge computing applications. Model management mechanisms can be inserted into cloud deployments of AI in these settings, not only to oversee the models operating there, but also to influence their results.

“We can put those predictions and actual values back in [a] model manager and look at how that model’s performing in real-time,” Thompson noted. Moreover, organizations can also adjust how those models are performing to conform to norms for governance, compliance, and specific use cases—like monitoring patient activity in the Internet of Medical Things. Businesses can “update the model links on the fly, that is, the parameters associated with the model,” Thompson explained. “So, everything can be done at the edge.”
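
A minimal sketch of that feedback loop, with the window size, accuracy threshold, and toy outcomes as illustrative assumptions: predictions and actual values flow back into a monitor, and the model is flagged when its rolling accuracy drops below a governance threshold.

```python
# Hypothetical sketch: compare predictions with actual values and flag drift in real time.
from collections import deque

class ModelMonitor:
    def __init__(self, window=500, min_accuracy=0.9):
        self.outcomes = deque(maxlen=window)   # rolling window of recent hit/miss results
        self.min_accuracy = min_accuracy

    def record(self, prediction, actual):
        self.outcomes.append(prediction == actual)

    @property
    def rolling_accuracy(self):
        return sum(self.outcomes) / len(self.outcomes) if self.outcomes else 1.0

    def needs_attention(self):
        """True once the window is full and accuracy falls below the threshold."""
        return len(self.outcomes) == self.outcomes.maxlen and self.rolling_accuracy < self.min_accuracy

monitor = ModelMonitor(window=3, min_accuracy=0.9)
for prediction, actual in [(1, 1), (0, 1), (1, 1)]:
    monitor.record(prediction, actual)
print(round(monitor.rolling_accuracy, 2), monitor.needs_attention())  # 0.67 True
```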

Model Ownership

The emerging trend in AI and machine learning most likely to affect prudent enterprises in the coming year is model ownership, or the intellectual property involved in crafting machine learning models. If properly implemented, organizations can potentially monetize this capability. If not, they’ll fall victim to the reality that, in terms of vendors, “Anybody doing machine learning is trying to collect your data to give themselves an advantage,” Welsh cautioned. Alone, this fact seems innocuous. Its consequences, however, could impair organizations’ competitive advantage since nearly all machine learning vendors have an “incentive to train on your proprietary data that’s unique and no one else can train on, then transfer their learning from your data to your competitors because it makes them look like software rather than service businesses,” Welsh maintained.

Machine learning vendors covet not only organizations’ proprietary data on which to train their models, but also those organizations’ subject matter expertise in the form of labels that refine those models while defining certain weights and parameters. According to Wilde, “You should be building intellectual property with your machine learning. When you’re working with a vendor you should care about who owns the intellectual property around this. This is my training data, my expertise is providing examples for labeling it; I should own that.”

Full Circle

A brief look at the history of AI as a scientific discipline illustrates that it has long utilized both reasoning and learning, knowledge and statistics. Today, the above use cases are enabling the enterprise to follow suit by joining these approaches and other purported distinctions (image analysis alongside text, for example) for a holistic proficiency that’s not otherwise possible. “You need to be able to merge different techniques to get the best of each and also combine them in a rich pipeline,” Varone concluded.

About the Author

Jelani Harper is an editorial consultant servicing the information technology market. He specializes in data-driven applications focused on semantic technologies, data governance and analytics.

