Welcome to insideAI News’s “Heard on the Street” round-up column! In this regular feature, we highlight thought-leadership commentaries from members of the big data ecosystem. Each edition covers the trends of the day with compelling perspectives that can provide important insights to give you a competitive advantage in the marketplace. We invite submissions with a focus on our favored technology topic areas: big data, data science, machine learning, AI and deep learning. Enjoy!
AI Bill of Rights. Commentary by Krishna Gade, co-founder and CEO of Fiddler
As an Explainable AI vendor, Fiddler welcomes regulations around consumer protections that drive the case for transparency, so that organizations see the need to explain what their algorithms do and how results are produced. Without human oversight, AI could produce a dystopian world in which unfair decisions are made by unseen algorithms operating unchecked. Companies today use AI and ML for countless use cases that drive their business, yet if asked to explain why an ML model produces a certain outcome, most organizations would be hard-pressed to provide an answer. Frequently, data goes into a model, results come out, and what happens in between is best described as a “black box.” Companies often claim that algorithms are proprietary in order to keep all manner of AI sins under wraps. We see this happen repeatedly, whether with the Equifax credit-score glitch or Zillow’s pricing models, where a lack of visibility into AI hurts businesses and costs them customer trust.
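To make the idea of “explaining what an algorithm does” a bit more concrete, here is a minimal sketch using scikit-learn’s permutation importance on an invented credit-style dataset. The feature names, data, and model are assumptions for illustration only; they are not tied to Fiddler’s product or any specific vendor workflow.

```python
# Minimal sketch: surface which inputs drive a "black box" model's decisions.
# The dataset and feature names are hypothetical; any tabular model would do.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 3))  # columns: income, debt_ratio, tenure (synthetic)
y = (X[:, 0] - 2 * X[:, 1] + rng.normal(scale=0.5, size=1000) > 0).astype(int)

model = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)

# Permutation importance: how much does shuffling each feature hurt accuracy?
result = permutation_importance(model, X, y, n_repeats=10, random_state=0)
for name, score in zip(["income", "debt_ratio", "tenure"], result.importances_mean):
    print(f"{name}: {score:.3f}")
```

Even a simple report like this gives reviewers something auditable to discuss, which is the kind of visibility the commentary argues regulation will push organizations toward.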
Responsible AI blossoms between the marriage of synthetic data and HITL. Commentary by Jen Cole, Senior Vice President & GM, Enterprise at Appen
When it comes to AI, organizations can’t use tech to build tech. Despite the push for more automation, humans will continue to play a crucial role in developing reliable, responsible AI. While nine out of ten leading businesses have investments in AI technologies, the human-in-the-loop (HITL) aspect is an essential part of getting those AI models to deployment. There’s a common misconception that developers will eventually reach a point where they no longer need the HITL element of AI. But, contrary to what many believe, humans are key to avoiding bias. AI models must constantly be tuned against actual human input to ensure they’re truly accurate to how humans think and that they’re evolving with changes in our environment. And people need to be transparent about what they’re putting into their models to ensure they’re inclusive. Responsible AI is a key ingredient of business innovation, but it requires the combination of technology, such as synthetic data, and those vital human elements, with models that are constantly assessed and retrained as needed.
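As a rough illustration of the human-in-the-loop pattern described above, the sketch below routes low-confidence predictions to human reviewers and folds their labels back into the training set before retraining. The model choice, confidence threshold, and review function are illustrative assumptions, not Appen’s pipeline.

```python
# Minimal HITL sketch: send uncertain predictions to humans, retrain on their labels.
# Model, threshold, and the review function are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

def human_review(batch):
    """Placeholder for a real annotation workflow (e.g., a labeling UI or vendor)."""
    return np.asarray([int(x.sum() > 0) for x in batch])  # stand-in "human" labels

rng = np.random.default_rng(1)
X_train = rng.normal(size=(200, 4))
y_train = (X_train.sum(axis=1) > 0).astype(int)
model = LogisticRegression().fit(X_train, y_train)

X_new = rng.normal(size=(50, 4))
confidence = model.predict_proba(X_new).max(axis=1)
uncertain = confidence < 0.7                 # low-confidence cases go to people

if uncertain.any():
    labels = human_review(X_new[uncertain])
    X_train = np.vstack([X_train, X_new[uncertain]])
    y_train = np.concatenate([y_train, labels])
    model = LogisticRegression().fit(X_train, y_train)  # retrain with human input
```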
Advice, resources, and growth opportunities for young techies. Commentary by Brian Otten, Digital Transformation Catalyst at Axway
We have been talking about digital transformation and the need for businesses to adapt or die for a few years now. I would say to the next generation of technologists: keep in mind that digital strategy is a BUSINESS strategy. It amazes me how much separation there still is, especially in non-natively digital businesses, between the people responsible for digital business initiatives and those who do the technical implementation and operation of the solutions that support them. The best advice I can give is to concentrate on building relationships with your business colleagues in a bi-directional way: they can help you learn the business and what gives your organization an advantage, and you can help them understand what impact transformational technologies like API Management or Event-Driven Architecture can truly have. There has never been a better time to work in tech. Many frameworks and tools that did not exist when I started now allow techies to concentrate on the digital experience rather than the scaffolding and the plumbing. Cloud computing means that connectivity and flexibility in building ecosystems have exploded in an exciting way. The best thing about working in tech is making people’s lives easier through technology and through the consumption and adoption of what you’ve built. It means techies can be more than just people in a back room clicking away at their keyboards; they can come together with business partners to be part of a bigger story that adds value for everyone.
How Digitalization Supports Critical Carbon Capture Initiatives. Commentary by Ron Beck, Senior Director, Industry Marketing at AspenTech
As urgency increases among carbon-intensive industries to make progress toward net-zero carbon emissions, many organizations are turning to carbon capture, utilization and storage (CCUS) to help offset their carbon footprints. However, these technologies still face significant economic and operational obstacles that hinder wider adoption. Digital solutions are helping companies across these industrial sectors, from oil and gas to manufacturing, mining, and chemicals, scale CCUS initiatives and make them economically viable so they can make tangible progress toward carbon reduction goals. A number of digital solutions are already in place, helping optimize the design and operation of capture systems, and their adoption is increasing. In the initial phase of a CCUS program rollout, companies can leverage digital simulation and economic models to create workflows for selecting and permitting ideal reservoir locations, improve production efficiency, achieve higher carbon yields, and keep captured carbon within target formations. Next, at the implementation phase, digital models leverage reduced-order modeling and AI to create reliable and effective execution plans that are transparent and auditable, ensuring CCUS asset performance. Any carbon capture project requires considerable energy and cost to be effective at scale. Digital tools are therefore mission-critical to making these efforts economical and scalable as industrial organizations grapple with regulatory and market demands for action on climate goals.
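To give a flavor of the “economic models” and what-if analysis mentioned above, here is a deliberately simplified levelized cost-of-capture calculation. Every number and parameter is a hypothetical assumption for illustration; real CCUS screening models, including AspenTech’s, are far more detailed.

```python
# Toy what-if economic screen for a capture project; all figures below are
# hypothetical assumptions for illustration, not real project estimates.
def cost_per_tonne(capex, annual_opex, tonnes_per_year, lifetime_years, discount=0.08):
    """Levelized cost of capture: discounted costs divided by discounted tonnes."""
    costs = capex + sum(annual_opex / (1 + discount) ** t for t in range(1, lifetime_years + 1))
    tonnes = sum(tonnes_per_year / (1 + discount) ** t for t in range(1, lifetime_years + 1))
    return costs / tonnes

base = cost_per_tonne(capex=400e6, annual_opex=30e6, tonnes_per_year=1e6, lifetime_years=20)
lower_energy = cost_per_tonne(capex=400e6, annual_opex=24e6, tonnes_per_year=1e6, lifetime_years=20)
print(f"base case: ${base:.0f}/t, lower-energy case: ${lower_energy:.0f}/t")
```

Running scenarios like these side by side is the basic mechanism by which digital models help decide whether a capture configuration is economically viable.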
Unobtrusive Observation to Glean Operational Insights. Commentary by Manish Garg, Founder & CPO of Skan AI
In most legacy corporations, the strategic imperative of digital transformation and automation has revealed deep operational inefficiencies and process complexity. Companies are trying various fixes as they search for an answer to these foundational problems. Unfortunately, many of these solutions turn out to be nothing but expensive band-aids, further increasing an organization’s process complexity. Thankfully, there is now a better way: process intelligence. Advances in computer vision, machine learning, and real-time data processing allow companies to unobtrusively observe a large group of associates involved in a digital process and then map the underlying process and its variations. A small computer-vision-based probe on an agent’s desktop can capture essential digital-system interactions. By analyzing a critical mass of cases, both semantically and syntactically, machine learning algorithms and data science techniques can unearth a treasure trove of process data, allowing leaders to glean operational insights. These insights are not just a one-time understanding of a process; they also create a digital twin of the process that can be simulated and interrogated for what-if analyses. Of course, for this process intelligence paradigm to succeed, it is imperative to heed the twin mandates of personal privacy for agents and information security for critical data. Modern process intelligence platforms incorporate techniques such as selective redaction, inclusion/exclusion lists of applications, associate privacy controls, and metadata-only transmission. Today, corporations can achieve a data-driven, evidence-based, holistic process picture, allowing for appropriate interventions rather than one-size-fits-all automation efforts. Remediation measures can include eliminating, restructuring, re-platforming, automating, outsourcing, or insourcing processes, as well as precision support for associates.
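Here is a minimal sketch of the capture-side safeguards described above, assuming a hypothetical desktop probe: events from excluded applications are dropped, sensitive text is redacted, and only metadata leaves the machine. The event schema, field names, and rules are illustrative assumptions, not Skan AI’s implementation.

```python
# Illustrative probe-side filtering: exclusion lists, redaction, metadata-only events.
# The event schema and rules are hypothetical assumptions for this sketch.
import hashlib
import re
import time

EXCLUDED_APPS = {"personal_email", "banking_portal"}   # never observed
SSN_PATTERN = re.compile(r"\b\d{3}-\d{2}-\d{4}\b")

def to_metadata_event(raw_event):
    """Return a privacy-preserving event, or None if the application is excluded."""
    if raw_event["app"] in EXCLUDED_APPS:
        return None
    return {
        "app": raw_event["app"],
        "action": raw_event["action"],                  # e.g. "click", "paste"
        "screen": SSN_PATTERN.sub("[REDACTED]", raw_event["screen_title"]),
        "user": hashlib.sha256(raw_event["user"].encode()).hexdigest()[:12],
        "ts": time.time(),
        # note: no keystroke contents or screenshots are transmitted
    }

event = {"app": "claims_system", "action": "click",
         "screen_title": "Case 123-45-6789 review", "user": "agent42"}
print(to_metadata_event(event))
```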
Surprise billing arbitration rules have been finalized. What does that mean for healthcare data? Commentary by Neel Butala, MD, MBA, co-founder and chief medical officer at HiLabs
The highly anticipated final rule on surprise billing was recently released, outlining how payers and out-of-network providers will settle payment disputes using arbitration. From a healthcare provider perspective, this is not favorable since it will likely lead to lower payments from insurers. From a health plan perspective, it shows that plans will be on the hook to pay in ways they were not required to prior to this law. The ruling makes the implementation of the law more concrete and brings it closer to payers’ pocketbooks. Bigger picture, this ruling underscores the importance of good, clean data in healthcare. Surprisingly, this is a huge issue: nearly 45% of all provider directory entries are inaccurate, according to a CMS study, and these inaccuracies are a big contributor to surprise billing. It is very challenging to disentangle who is in network versus out of network, which leads to confusion for all parties (patients, providers, and insurers). To be clear, there does not appear to be any bad intent here from any party; surprise billing is certainly not a large part of the business model for healthcare organizations, since there is so much cost, patient abrasion, and poor optics associated with it. Surprise billing is largely a function of an information disconnect. To address it, healthcare providers and plans alike need to clean their data and close this disconnect. Getting rid of dirty data can help eliminate surprise billing and unlock the potential for broader healthcare transformation.
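As a simple illustration of what “cleaning” provider directory data can involve, the pandas sketch below normalizes fields and flags providers whose records disagree on network status. The columns, sample rows, and rules are hypothetical, not HiLabs’ methodology.

```python
# Hypothetical provider-directory cleanup: normalize fields, flag conflicts.
import pandas as pd

directory = pd.DataFrame({
    "npi": ["1234567890", "1234567890", "9876543210"],
    "name": ["Dr. Jane Smith ", "JANE SMITH", "Raj Patel"],
    "phone": ["(555) 123-4567", "555.123.4567", "5559990000"],
    "in_network": [True, False, True],
})

# Normalize formatting so the same provider is comparable across sources.
directory["name"] = (directory["name"].str.strip().str.title()
                     .str.replace("Dr. ", "", regex=False))
directory["phone"] = directory["phone"].str.replace(r"\D", "", regex=True)

# Flag providers whose records disagree on network status.
conflicts = directory.groupby("npi")["in_network"].nunique()
flagged = conflicts[conflicts > 1].index.tolist()
print("providers needing manual verification:", flagged)
```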
How Intelligent Automation is Enabling Businesses to Navigate the Labor Shortage. Commentary by Michael Spataro, Chief Customer Officer of Legion Technologies
The industries most affected by the labor shortage are those driven by hourly workers, who make up nearly two-thirds of the workforce. So it’s no surprise that businesses operating in the retail, hospitality, and food services spaces are struggling to fill the more than 11 million open positions in the U.S. ahead of the holiday season. To avoid operational disruptions caused by these labor gaps, employers in these sectors need to look to workforce management (WFM) solutions that offer intelligent, automated demand forecasting. Today’s businesses have access to more data than ever before. Utilizing WFM platforms that can automatically synthesize thousands of data points, including past and future events that will impact demand (e.g., historical customer purchasing behaviors, changing weather patterns, etc.), enables them to easily create optimized labor plans that predict demand across all channels and ensure they have the appropriate number of employees in place at any given time. With upcoming holiday demand still unclear, AI-powered demand forecasting can help companies be more agile and optimize their operations. In addition, ‘smart’ WFM platforms can continuously analyze and identify subtle patterns in operational data, adapting to the business without any human intervention. This enables employers to focus on what matters most: spending time with customers and employees and growing the business.
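Here is a minimal sketch of demand forecasting of the kind described above: a gradient-boosted model trained on invented signals such as past sales, weather, and an event flag, with the forecast translated into a rough staffing number. The features, data, and staffing rule are assumptions for illustration, not Legion’s platform.

```python
# Toy demand forecast from several signals; all data here is synthetic.
import numpy as np
from sklearn.ensemble import GradientBoostingRegressor

rng = np.random.default_rng(7)
n = 500
last_week_sales = rng.uniform(50, 200, n)
temperature = rng.uniform(30, 95, n)
local_event = rng.integers(0, 2, n)          # 1 if a nearby event is scheduled

demand = last_week_sales * 1.05 + 0.5 * temperature + 40 * local_event + rng.normal(0, 10, n)
X = np.column_stack([last_week_sales, temperature, local_event])

model = GradientBoostingRegressor(random_state=0).fit(X, demand)

# Forecast tomorrow, then translate demand into a rough staffing need
# (assuming, hypothetically, one employee per 30 transactions).
tomorrow = np.array([[180.0, 72.0, 1]])
forecast = model.predict(tomorrow)[0]
print(f"forecast: {forecast:.0f} transactions -> staff needed: {int(np.ceil(forecast / 30))}")
```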
Algorithmic Bias Reveals the Limits of AI. Commentary by Dr. Karen Panetta, National Academy of Inventors 2021 Fellow
Most artificial intelligence (AI) training programs use limited scenarios that do not represent the real-world images captured by security and/or thermal cameras. For example, a database may contain only perfect, straight-on poses captured under optimal lighting conditions, but how often does a camera capture someone’s face straight on? This rarely occurs in real-life situations. Furthermore, databases used to develop AI systems have very little diversity in them. For instance, criminal database images we investigated had few to no women and mostly men of color. This means that, when trained on such data, the algorithm will automatically consider a person of color more likely to be a suspect or criminal than a white woman or white man. In the UK, we have witnessed a public outcry over the number of young Black males that have been interrogated without cause simply because a public AI recognition system identified them as a person of interest. Other examples include AI-driven resume screening to identify the most promising candidates. These systems were “trained” using examples of previously successful candidates, predominantly white males who had earned their degrees from the same “brand name” institutions. This creates bias on a number of levels. One is institutional: if you come from a community college or lesser-known school, your candidacy is weighted lower. Another is gender: if your resume indicates that you’re a woman, it is less likely to secure an interview. In higher education institutions that used AI hiring systems, women faculty members lost lab space, received lower pay than their male counterparts, and/or had their promotions “tabled” or delayed because of AI bias. While institutions proudly claim to be data-driven, which implies agnosticism, data requires expert context; leaving decisions entirely up to an algorithm is irresponsible. If bias exists in the data BEFORE we use AI, the trained AI will most certainly exhibit those biases.
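One concrete way to surface the training-data problems described above is to audit group representation and historical label rates before any model is trained. The sketch below does this on a made-up resume-screening table; the columns and values are hypothetical and chosen only to show the mechanics.

```python
# Simple pre-training audit of a hypothetical resume-screening dataset:
# how well is each group represented, and how do historical labels differ?
import pandas as pd

resumes = pd.DataFrame({
    "gender": ["M", "M", "M", "M", "F", "M", "F", "M"],
    "school_tier": ["brand", "brand", "brand", "other", "other", "brand", "brand", "brand"],
    "hired": [1, 1, 0, 0, 0, 1, 0, 1],   # historical outcomes the model would learn from
})

representation = resumes["gender"].value_counts(normalize=True)
hire_rate_by_group = resumes.groupby("gender")["hired"].mean()

print("share of training examples by gender:\n", representation)
print("historical hire rate by gender:\n", hire_rate_by_group)
# A model trained on this data inherits these skews unless they are corrected.
```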
Building an effective data management strategy. Commentary by JP Romero, Technical Manager, Kalypso
Data has become a commodity, one that continues to bring increasing value to business decisions, but it can also pose incredible risks if mismanaged. Modern organizations need to consider a few key elements for an effective, comprehensive data management strategy. Start with data quality. It is not enough to recognize that high-quality data is critical for advanced analytics and ML initiatives; to truly understand what “quality” means, each organization must validate whether its data is fit for the purposes and the context in which it is used. Data governance is next. While many initiatives focus on ensuring data accessibility and trust, they often neglect clarity. Successful data governance programs are led by the business and supported by IT to ensure that employees can find the data they need, but also understand and trust it. Another critical aspect is data architecture. The architectures of the future must embrace the universality of data, so organizations must consider the current state of their data platforms and plan for the adoption of advanced capabilities, like supporting self-service analytics or domain-oriented data ownership. Lastly, data culture and literacy. It is important to create a supportive culture that promotes the adoption of data initiatives. Those directly involved with data input should track data quality metrics as performance goals to foster more accountability and understanding. Organizations need to evolve how they create and roll out their data strategies, placing special emphasis on data quality, governance, and architecture, and fostering a data culture to truly take advantage of the value behind all those collected insights.
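As one small example of tracking data quality as a metric, the sketch below computes completeness, validity, and uniqueness for a hypothetical customer table. The columns, sample rows, and the choice of these three dimensions are assumptions for illustration.

```python
# Hypothetical data-quality scorecard: completeness, validity, uniqueness.
import pandas as pd

customers = pd.DataFrame({
    "customer_id": [1, 2, 2, 4],
    "email": ["a@example.com", None, "b@example", "c@example.com"],
})

completeness = customers["email"].notna().mean()
validity = customers["email"].str.contains(r"^[^@]+@[^@]+\.[^@]+$", na=False).mean()
uniqueness = 1 - customers["customer_id"].duplicated().mean()

scorecard = {"completeness": completeness, "validity": validity, "uniqueness": uniqueness}
print({k: round(v, 2) for k, v in scorecard.items()})
# These numbers can be tracked over time as performance goals for data owners.
```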
How technology such as robots and AI can help the recycling industry. Commentary by Jagadeesh “JD” Ambati, Founder & CEO, EverestLabs
Currently, the U.S. materials recovery facility (MRF) industry is losing billions from recyclables that end up in landfills or are sold as materials to other countries. Although facilities operate as best they can without integrated data, the resulting gaps in knowledge leave MRFs with a poor understanding of how much recyclable material is not being sorted properly. This ultimately leads to operational inefficiencies that cause several million tons of quality recyclables to be lost to landfills. For example, in 2018 alone, the EPA estimated over 2.6 million tons of aluminum went to landfills. The result of the wasted recyclables is lost profits for the MRFs and increased greenhouse gasses (GHGs) for our climate. Additionally, relying on human workers alone leads to inefficient and ineffective picking during sorting, as workers’ speed and capabilities only extend so far. Utilizing technologies like AI to track and identify recyclable materials, and robots to sort the materials quickly and effectively, can remedy MRFs’ lost profits and decrease the overall GHGs from landfills. These new systems also benefit human employees, allowing them to move to more value-added positions. MRFs can use a solution in which materials are monitored through a computer vision system and software as recyclables go through the sorting process at their facilities. Having a software-backed system allows for adjustments to the recycling process, including building a library of both recyclable and non-recyclable objects to inform the robots on the conveyor belt how to sort objects properly based on their material. This system can also be integrated into MRF equipment operating systems to make proactive adjustments to the process for both safety and efficiency. Overall, AI-powered robots and equipment help reduce labor costs and the dangerous tasks that pose safety risks for employees, and improve efficacy in the overall picking process compared to their human counterparts. Combining innovations in the waste management industry with technologies such as AI and robots means less waste going to landfills, which has a potentially large impact on both the MRFs adopting the systems and the overall environment.
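A minimal sketch of how a vision model’s output might drive a sorting decision on the line is shown below. The classifier stub, labels, and confidence threshold are hypothetical stand-ins, not EverestLabs’ system; a real deployment would run a trained detector on camera frames and coordinate timing with the robot.

```python
# Illustrative decision layer between a vision model and a sorting robot.
# classify_frame is a stub; a real system would run a trained detector here.
RECYCLABLE = {"aluminum_can", "pet_bottle", "cardboard"}

def classify_frame(frame):
    """Stub for an object detector; returns (label, confidence)."""
    return "aluminum_can", 0.93

def sorting_decision(frame, min_confidence=0.8):
    label, confidence = classify_frame(frame)
    if confidence < min_confidence:
        return "leave"                      # uncertain items pass through untouched
    return "pick" if label in RECYCLABLE else "leave"

print(sorting_decision(frame=None))         # -> "pick"
```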