The Amazing Applications of Graph Neural Networks

The predictive prowess of machine learning is widely hailed as the summit of statistical Artificial Intelligence. Vaunted for its ability to enhance everything from customer service to operations, its neural networks, models, and deep learning deployments are considered a sure means for enterprises to profit from data.

But according to Franz CEO Jans Aasman, there’s just one tiny problem with this lofty esteem that’s otherwise accurate: for the most part, it “only works for what they call Euclidean datasets where you can just look at the situation, extract a number of salient points from that, turn it into a number in a vector, and then you have supervised learning and unsupervised learning and all of that.”

Granted, a generous portion of enterprise data is Euclidean and readily vectorized. However, there’s a wealth of non-Euclidean, multidimensional data serving as the catalyst for astounding machine learning use cases, such as:

  • Network Forecasting: Analysis of all the varying relationships between entities or events in complex social networks of friends and enemies yields staggeringly accurate predictions about how any event (such as a specific customer buying a certain product) will influence network participants. This intelligence can revamp everything from marketing and sales approaches to regulatory mandates (Know Your Customer, Anti-Money Laundering, etc.), healthcare treatment, law enforcement, and more.
  • Entity Classification: The potential to classify entities according to events (such as part or system failure in connected vehicles) is critical for predictive maintenance. This capability has obvious implications for fleet management, equipment asset monitoring, and other Internet of Things applications.
  • Computer Vision, Natural Language Processing: Understanding the multidimensional relationships between words, or between objects in a scene, transforms typical neural network deployments for NLP or computer vision. The latter supports scene generation: instead of a machine looking at a scene of a car passing a fire hydrant with a dog sleeping near it, those objects can be described so that the machine generates the picture.

Each of these use cases revolves around high-dimensionality data with multifaceted relationships between entities or nodes, at a scale at which “regular machine learning fails,” Aasman noted. However, they’re ideal for graph neural networks, which specialize in these and other high-dimensionality data deployments.

High-Dimensionality Data

Graph neural networks achieve these feats because graph approaches focus on discerning relationships within data. Relationships in Euclidean datasets aren’t as complicated as those in high-dimensionality data, since “everything in a straight line or a two-dimensional flat surface can be turned into a vector,” Aasman observed. These numbers or vectors form the basis for generating features for typical machine learning use cases.

Examples of non-Euclidean datasets include things like the numerous relationships of over 100 aircraft systems to one another, the links between one group of customers and four additional ones, and the myriad interdependencies among the links between those additional groups. This information isn’t easily vectorized and eludes the capacity of machine learning sans graph neural networks. “Each number in the vector would actually be dependent on other parts of the graph, so it’s too complicated,” Aasman commented. “Once things get into sparse graphs and you have networks of things, networks of drugs, and genes, and drug molecules, it becomes really hard to predict if a particular drug is missing a link to something else.”
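
To make the distinction concrete, here is a minimal Python sketch (purely illustrative; the data and library choices are assumptions, not anything Franz described) contrasting a readily vectorized Euclidean table with a sparse, interdependent graph:

```python
import numpy as np
import networkx as nx

# Euclidean case: each record stands alone, so it vectorizes directly.
# A customer might simply become [age, income, purchase_count].
euclidean_features = np.array([
    [34, 72_000, 12],
    [51, 95_000, 4],
])

# Non-Euclidean case: a toy drug/gene network. Any single node's meaning
# depends on its neighborhood, so no fixed-length per-row vector captures
# it in isolation; the structure itself carries the information.
G = nx.Graph()
G.add_edges_from([
    ("drug_A", "gene_1"), ("drug_A", "gene_2"),
    ("drug_B", "gene_2"), ("drug_B", "gene_3"),
    ("gene_1", "gene_3"),
])
print(nx.to_numpy_array(G))  # adjacency structure, not a feature table
```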

Relationship Predictions

When the context between nodes, entities, or events is really important (as in the pharmaceutical use case Aasman referenced, or any other complex network application), graph neural networks provide predictive accuracy by understanding the data’s relationships. This quality manifests in three chief ways:

  • Predicting Links: Graph neural networks are adept at predicting links between nodes to readily comprehend if entities are related, how so, and what effect that relationship will have on business objectives. This insight is key for answering questions like “do certain events happen more often for a patient, for an aircraft, or in a text document, and can I actually predict the next event,” Aasman disclosed.
  • Classifying Entities: It’s simple to classify entities based on attributes. Graph neural networks do this while also considering the links between entities, resulting in new classifications that are difficult to achieve without graphs (see the sketch after this list). This application involves supervised learning; predicting relationships entails unsupervised learning.
  • Graph Clusters: This capability indicates how many subgraphs, or clusters, a specific graph contains and how they relate to each other. This topological information is based on unsupervised learning.
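
As a minimal sketch of the entity classification case, the following uses PyTorch Geometric (a library choice assumed here; the article names no specific tooling) to classify nodes in a toy graph. The two GCNConv layers propagate information along links, so each entity’s predicted class reflects its neighborhood rather than its attributes alone:

```python
import torch
import torch.nn.functional as F
from torch_geometric.data import Data
from torch_geometric.nn import GCNConv

# Toy graph: 4 entities with 3 attributes each; undirected edges are
# listed in both directions, as PyTorch Geometric expects.
x = torch.randn(4, 3)
edge_index = torch.tensor([[0, 1, 1, 2, 2, 3],
                           [1, 0, 2, 1, 3, 2]])
y = torch.tensor([0, 0, 1, 1])  # known class labels (supervised learning)
data = Data(x=x, edge_index=edge_index, y=y)

class EntityClassifier(torch.nn.Module):
    def __init__(self):
        super().__init__()
        self.conv1 = GCNConv(3, 16)   # attributes -> hidden representation
        self.conv2 = GCNConv(16, 2)   # hidden representation -> 2 classes

    def forward(self, data):
        h = F.relu(self.conv1(data.x, data.edge_index))
        return self.conv2(h, data.edge_index)

model = EntityClassifier()
optimizer = torch.optim.Adam(model.parameters(), lr=0.01)
for _ in range(100):
    optimizer.zero_grad()
    loss = F.cross_entropy(model(data), data.y)
    loss.backward()
    optimizer.step()

print(model(data).argmax(dim=1))  # predicted class for each entity
```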

Combining these qualities with data models rich in temporal information (such as the times of events, e.g., when customers made purchases) yields cogent machine learning applications. This approach can illustrate a patient’s medical future based on his or her past and all the relevant events of which it’s composed. “You can say given this patient, give me the next disease and the next chance that you get that disease in order of descending chance,” Aasman remarked. Organizations can do the same thing for customer churn, loan failure, certain types of fraud, or other use cases.
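
As a toy illustration of ranking next events by descending chance (invented data and a deliberately simple frequency model, not Franz’s method), the sketch below derives next-event probabilities from past event sequences:

```python
from collections import Counter, defaultdict

# Hypothetical event histories: each inner list is one patient's
# time-ordered sequence of diagnoses.
histories = [
    ["hypertension", "diabetes", "kidney_disease"],
    ["hypertension", "diabetes", "retinopathy"],
    ["hypertension", "stroke"],
]

# Count observed transitions between consecutive events.
transitions = defaultdict(Counter)
for events in histories:
    for current, nxt in zip(events, events[1:]):
        transitions[current][nxt] += 1

def next_events(event):
    """Rank possible next events by descending empirical probability."""
    counts = transitions[event]
    total = sum(counts.values())
    return [(nxt, count / total) for nxt, count in counts.most_common()]

print(next_events("hypertension"))  # diabetes ~0.67, then stroke ~0.33
```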

Topological Text Classification, Picture Understanding

Graph neural networks render transformational outcomes when their unparalleled relationship discernment concentrates on aspects of NLP and computer vision. For the former, they support topological text classification, which is foundational for swifter, more granular comprehension of written language. Conventional entity extraction can pinpoint key terms in text. “But in a sentence, things can refer back to a previous word, to a later word,” Aasman explained. “Entity extraction doesn’t look at this at all, but a graph neural network will look at the structure of the sentence, then you can do way more in terms of understanding.”
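
A minimal sketch of the idea (the sentence and relation labels are invented for illustration): represent a sentence as a graph whose nodes are words and whose edges capture grammatical structure, including a pronoun referring back to an earlier word, which is precisely what flat entity extraction ignores:

```python
import networkx as nx

# Words are nodes; labeled edges capture grammatical relationships,
# including a backward reference from a pronoun to its antecedent.
sentence = nx.DiGraph()
sentence.add_edges_from([
    ("dog", "barked", {"rel": "subject_of"}),
    ("barked", "loudly", {"rel": "modified_by"}),
    ("it", "dog", {"rel": "refers_to"}),  # "it" points back to "dog"
])

# A graph neural network can classify text from this topology;
# a bag-of-words vector would lose the refers_to edge entirely.
print(list(sentence.edges(data=True)))
```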

This approach also underpins picture understanding, in which graph neural networks understand the way the different objects in a single picture relate. Without them, machine learning can just identify various objects in a scene. With them, it can glean how those objects are interacting or relate to each other. “[Non-graph neural network] machine learning doesn’t do that,” Aasman specified. “Not how all the things in the scene fit together.” Coupling graph neural networks with conventional neural networks can richly describe the objects in scenes and, conversely, generate detailed scenes from descriptions.
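
The article’s earlier example renders naturally as a scene graph; the sketch below (illustrative labels only) shows what object detection alone provides, the nodes, versus what graph approaches add, the labeled edges:

```python
import networkx as nx

# Object detection alone yields only the node set; the labeled edges
# describing how the objects relate are what graph approaches contribute.
scene = nx.DiGraph()
scene.add_edge("car", "fire_hydrant", rel="passing")
scene.add_edge("dog", "fire_hydrant", rel="sleeping_near")

for subject, obj, attrs in scene.edges(data=True):
    print(f"{subject} --{attrs['rel']}--> {obj}")
```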

Graph Approaches

Graph neural networks are based on the neural networks initially devised in the 20th century. However, graph approaches enable them to overcome the limits of vectorization and operate on high-dimensionality, non-Euclidean datasets. Specific graph techniques (and techniques amenable to graphs) aiding in this endeavor include:

  • Jaccard Index: When trying to establish whether a missing link should exist between one pair of nodes or another, for example, the Jaccard index can inform this decision by revealing “to what extent two nodes are similar in a graph,” Aasman said.
  • Preferential Attachment: This statistical concept is a “technique they call the winner takes all where you can predict if someone is going to get everything or you won’t get anything,” Aasman mentioned. Preferential attachment scores a potential link by how well connected its endpoints already are, so heavily linked nodes tend to attract still more links.
  • Centrality: Centrality indicates how important nodes are in a network; betweenness centrality, for instance, measures how often a node lies on the paths between other nodes. (All three measures are sketched in code below.)
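
A minimal sketch of the three measures using NetworkX (a library choice assumed here), on a small invented graph:

```python
import networkx as nx

# A small social graph: a triangle (a, b, c) plus a tail (c-d-e).
G = nx.Graph([("a", "b"), ("a", "c"), ("b", "c"), ("c", "d"), ("d", "e")])

# Jaccard index: |N(u) & N(v)| / |N(u) | N(v)|, i.e. how similar two
# nodes' neighborhoods are, a signal that a missing link should exist.
print(list(nx.jaccard_coefficient(G, [("a", "d")])))

# Preferential attachment: |N(u)| * |N(v)|, so well-connected nodes
# tend to attract even more links ("winner takes all").
print(list(nx.preferential_attachment(G, [("a", "d")])))

# Betweenness centrality: how often a node sits on shortest paths
# between other nodes; here "c" bridges the triangle and the tail.
print(nx.betweenness_centrality(G))
```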

These and other graph approaches enable graph neural networks to work on high-dimensionality data without vectorizing it, thereby expanding the overall utility of enterprise machine learning applications.

Poly-Dimensionality Machine Learning Scale

The critical distinction between applying graph neural networks to the foregoing use cases and applying typical machine learning approaches is the complexity of the relationships analyzed, and the scale of that complexity. Aasman described a use case in which graph neural networks made accurate predictions about the actions of world leaders based on inputs spanning the better part of a year, over 20,000 entities, and nearly half a million events. Such foresight is far from academic when shifted to customer behavior, healthcare treatment, or other mission-critical deployments. Consequently, graph neural networks may impact cognitive computing deployments sooner than organizations realize.

About the Author

Jelani Harper is an editorial consultant serving the information technology market. He specializes in data-driven applications focused on semantic technologies, data governance, and analytics.
