Low Code/No Code

AI is making it easier than ever for business users to get closer to technology in all its forms, including using copilots to aggregate data, automate processes, and even build apps with natural language. This signals a shift toward a more inclusive approach to software development, allowing a wider array of people to participate regardless of their coding expertise or technical skills.

These technological advancements can also introduce new security risks that the enterprise must address now; shadow software development simply can’t be overlooked. The reality is that at many organizations, employees and third-party vendors are already using these types of tools, whether the business knows it or not. Failure to account for these risks may result in unauthorized access and the compromise of sensitive data, as the misuse of Microsoft 365 accounts with PowerApps demonstrates.

Fortunately, security doesn’t have to be sacrificed for productivity. Application security measures can be applied to this new world of how business gets done, even though traditional code scanning is rendered obsolete for this type of software development.

Using low-code/no-code with help from AI

ChatGPT set records as the fastest-growing consumer application ever, so it’s likely that you and your organization’s business users have already tried it in your personal, and even your work, lives. While ChatGPT has made many processes extremely simple for consumers, on the enterprise side, copilots like Microsoft Copilot, Salesforce Einstein, and OpenAI Enterprise have brought similar generative AI functionality to the business world. That same generative AI technology, along with enterprise copilots, is now having a major impact on low- and no-code development.

In traditional low-code/no-code development, business users can drag and drop individual components into a workflow using a wizard-based setup. Now, with AI copilots, they can type, “Build me an application that gathers data from a SharePoint site and sends me an email alert when new information is added, with a summary of what’s new,” and voilà, they’ve got it. This happens outside the purview of IT, and these apps are built straight into production environments without the checks and balances that a traditional SDLC or CI/CD tooling would provide.

Microsoft Power Automate is one example of a citizen development platform designed to optimize and automate workflows and business processes, letting anyone build powerful apps and automations. Now, with Microsoft Copilot embedded in the platform, you can simply type a prompt such as: “When an item is added to SharePoint, update Google Sheets and send a Gmail.” In the past, this would have entailed a multi-step process of dragging and dropping components and connecting all the work applications; now you can just prompt the system to build the flow.
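
To make that concrete, here is a rough, illustrative sketch in plain Python of the logic such a prompt produces. The function names are hypothetical stand-ins for the connectors a copilot would wire together, not actual Power Automate or Microsoft APIs.

```python
# Illustrative only: a plain-Python sketch of the flow a Copilot prompt might
# generate. These functions are hypothetical stand-ins, not real connectors.

def on_sharepoint_item_added(item: dict) -> None:
    """Hypothetical trigger handler for a new SharePoint list item."""
    append_row_to_google_sheet(item)   # step 1: update the tracking spreadsheet
    send_gmail_summary(item)           # step 2: email a summary of what's new

def append_row_to_google_sheet(item: dict) -> None:
    # Stand-in for the Google Sheets connector action.
    print(f"Appending row for item {item['id']} to the tracking sheet")

def send_gmail_summary(item: dict) -> None:
    # Stand-in for the Gmail connector action.
    print(f"Emailing summary of new item: {item['title']}")

if __name__ == "__main__":
    on_sharepoint_item_added({"id": 42, "title": "Q3 budget draft"})
```

The point is not the code itself but that a single sentence now produces an always-on automation touching corporate data and external services, with no review step in between.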

All these use cases are doing wonders for productivity, but they don’t typically come with a game plan for security. And there’s plenty that can go wrong, especially given how easily these apps can be over-shared across the enterprise.

Just as you’d carefully review that ChatGPT-written blog post and customize it for your unique point of view, it’s important to enhance your AI-generated workflows and applications with security controls like access rights, sharing, and data sensitivity tags. But this isn’t usually happening, primarily because most people creating these workflows and automations aren’t technically skilled enough to do it, or even aware that they need to. Because the promise of an AI copilot is that it does the work of building the app for you, many people don’t realize that the security controls aren’t baked in or fine-tuned.

The problem of data leakage

The primary security risk that stems from AI-aided development is data leakage. As you build applications or copilots, you can publish them for broader use, both across the company and within the app and copilot marketplaces. For an enterprise copilot to interact with data in real time and with systems outside its own platform (e.g., if you want Microsoft Copilot to interact with Salesforce), you need a plugin. So, let’s say the copilot you’ve built for your company creates greater efficiency and productivity, and you want to share it with your team. Well, the default setting for many of these tools is to not require authentication before others interact with your copilot.

That means if you build the copilot and publish it so Employees A and B can use it, all other employees can use it too, without even needing to authenticate. In fact, anyone in the tenant can use it, including less-trusted and less-monitored guest users like third-party contractors. Not only does this arm a far broader audience with the ability to play around with the copilot, it also makes it easier for bad actors to reach the app or bot and then perform a prompt injection attack. Think of a prompt injection attack as short-circuiting the bot to get it to override its programming and give you information it shouldn’t. So poor authentication leads to oversharing of a copilot that has access to data, which in turn leads to the over-exposure of potentially sensitive data.
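
As a rough illustration of the kind of check a security team could run, the sketch below flags copilots that are published without an authentication requirement or shared tenant-wide. The inventory format and field names are hypothetical; in practice the data would come from your platform’s admin or audit tooling.

```python
# Illustrative only: flag copilots published without authentication or shared
# with the entire tenant. The record shape below is hypothetical.
from dataclasses import dataclass

@dataclass
class CopilotRecord:
    name: str
    requires_authentication: bool
    shared_with: str              # e.g. "team", "tenant", "anyone"
    touches_sensitive_data: bool

def flag_overshared(copilots: list[CopilotRecord]) -> list[str]:
    findings = []
    for bot in copilots:
        if not bot.requires_authentication or bot.shared_with in ("tenant", "anyone"):
            severity = "HIGH" if bot.touches_sensitive_data else "MEDIUM"
            findings.append(f"{severity}: '{bot.name}' is reachable beyond its intended audience")
    return findings

if __name__ == "__main__":
    inventory = [
        CopilotRecord("sales-helper", requires_authentication=False,
                      shared_with="tenant", touches_sensitive_data=True),
        CopilotRecord("hr-faq", requires_authentication=True,
                      shared_with="team", touches_sensitive_data=False),
    ]
    for finding in flag_overshared(inventory):
        print(finding)
```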

When you are building your application, it is also very easy to misconfigure a step because the AI misunderstands the prompt, resulting, say, in the app connecting a data set to your personal Gmail account. At a big enterprise, that amounts to non-compliance, because data has escaped the corporate boundary. There’s also a supply chain risk: any time you insert a component or an app, there is a real risk that it is infected, unpatched, or otherwise insecure, which means your app is now infected, too. These plugins can be “sideloaded” by end users directly into their apps, and the marketplaces where they are stored are a total black box for security. That means the security fallout can be wide-ranging and catastrophic if the scale is large enough (think SolarWinds).

Another security risk that’s common in this new world of software development is what’s known as credential sharing. When you build an application or a bot, it’s very common to embed your own identity into it, so any time someone logs in or uses that bot, it looks like it’s you. The result is a lack of visibility for security teams. It’s fine for members of an account team to access information about their customer, but with an embedded identity that information also becomes accessible to other employees, and even third parties, who have no need for it. That can amount to a GDPR violation, and if you’re dealing with sensitive data, it opens a whole new can of worms for highly regulated industries like banking.
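
A toy sketch of why this hurts visibility: when the maker’s identity is embedded in the bot’s connection, every run is attributed to the maker, so the audit trail never records who actually triggered the access. The log shape and accounts below are purely illustrative.

```python
# Illustrative only: embedded maker credentials erase attribution.
# The log format and account names are hypothetical.

def run_flow(actual_user: str, connection_owner: str, record_id: str) -> dict:
    # The flow executes with the maker's embedded connection, so the audit
    # trail records the connection owner, not the person who triggered it.
    return {
        "accessed_record": record_id,
        "logged_identity": connection_owner,
        "actual_requester": actual_user,  # exactly the field that's missing in practice
    }

if __name__ == "__main__":
    for user in ["account-exec@corp.com", "contractor@vendor.com"]:
        event = run_flow(user, connection_owner="maker@corp.com", record_id="CUST-1001")
        print(f"Audit log sees: {event['logged_identity']} accessed {event['accessed_record']}"
              f" (really: {event['actual_requester']})")
```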

How to overcome security risks

Enterprises can and should be reaping the benefits of AI, but security teams need to put certain guardrails in place to ensure employees and third parties can do so safely. 

Application security teams need a firm understanding of exactly what is happening within their organization, and they need it quickly. To avoid having AI-enabled low- and no-code development turn into a security nightmare, teams need:

  • Full visibility into what exists across these different platforms. You want to understand, across the AI landscape, what’s being built, why, by whom, and what data it’s interacting with. What you’re really after, from a security standpoint, is the business context behind what’s being built, why it was built in the first place, and how business users are interacting with it.
  • An understanding of the different components in each of these applications. In low-code and generative AI development, each application is a series of components that make it do what it needs to do. Oftentimes, these components are housed in what is essentially an app store that anyone can download from and insert into corporate apps and copilots. Those marketplaces are ripe for a supply chain attack in which an attacker loads a component with ransomware or malware; every application that then incorporates that component is compromised. So you also want to deeply understand the components in each of these applications across the enterprise so you can identify risks. This is accomplished with Software Composition Analysis (SCA) and/or a software bill of materials (SBOM) for generative AI and low-code.
  • Insight into the errors and pitfalls. The third step is to identify everything that has gone wrong since an application was built and be able to fix it quickly: which apps have hard-coded credentials, which have access to and are leaking sensitive data, and more. Given the speed and volume at which these apps are being built (remember, there’s no SDLC and no oversight from IT), there are likely not just a couple dozen apps to reckon with; security teams may be left managing tens or hundreds of thousands of individual apps (or more). That can be a massive challenge. To keep up, security teams should implement guardrails to ensure that whenever risky apps or copilots are introduced, they are dealt with swiftly, whether via alerts to the security team, quarantining those apps, deleting the connections, or otherwise. A minimal sketch of such a guardrail loop follows this list.
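
Here is a minimal sketch of that guardrail loop in Python. The inventory shape, the risk checks, and the quarantine response are all hypothetical placeholders for whatever your low-code platform’s admin and audit APIs actually expose.

```python
# Illustrative only: a minimal guardrail loop over a low-code app inventory.
# The inventory fields and the responses are hypothetical placeholders.
import re

# Naive heuristic for hard-coded secrets in app configuration values.
SECRET_PATTERN = re.compile(r"(api[_-]?key|password|secret)\s*[:=]", re.IGNORECASE)

def assess_app(app: dict) -> list[str]:
    """Return a list of risk findings for a single app record."""
    risks = []
    if any(SECRET_PATTERN.search(v) for v in app.get("config_values", [])):
        risks.append("hard-coded credential")
    if any(c not in app.get("vetted_components", []) for c in app.get("components", [])):
        risks.append("unvetted third-party component")
    if app.get("shared_with") == "tenant" and app.get("handles_sensitive_data"):
        risks.append("sensitive data shared tenant-wide")
    return risks

def enforce_guardrails(apps: list[dict]) -> None:
    for app in apps:
        risks = assess_app(app)
        if risks:
            # Placeholder responses: alert the security team and quarantine.
            print(f"ALERT: {app['name']} -> {', '.join(risks)} (quarantining)")

if __name__ == "__main__":
    enforce_guardrails([{
        "name": "expense-sync",
        "config_values": ["api_key = sk-123456"],
        "components": ["pdf-export-widget"],
        "vetted_components": [],
        "shared_with": "tenant",
        "handles_sensitive_data": True,
    }])
```

The value of a loop like this is less in any single check than in running it continuously, so that risky apps are caught at the speed they are created rather than in an annual review.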

Master evolving technology

AI is democratizing the use of low-code/no-code platforms and enabling business users across enterprises to benefit from increased productivity and efficiency. But the flipside is that these new workflows and automations aren’t being created with security in mind, which can quickly lead to problems like data leakage and exfiltration. The generative AI genie isn’t going back in the bottle, which means application security teams must ensure they have the full picture of the low-code/no-code development happening within their organizations and put the right guardrails in place. The good news is you don’t have to sacrifice productivity for security if you follow the tips outlined above.

About the Author

Ben Kliger, CEO and co-founder, Zenity.
