GenAI’s Role in Testing


Much has been said about AI’s ability to generate code. But what is often overlooked is its ability to generate test scripts as well.

Just as code generation requires a good User Story, so does generating a test script. It is important to capture all of the changes needed and to clearly define the resulting process. GenAI can help with that, but you can't overlook this step. Well-written acceptance criteria are the key to a good test.

Test scripts are susceptible to hallucinations just like code. So while GenAI can easily create the script, it is your responsibility to review the results. In fact, one good way to do this is to start with manual test steps. Ask the bot to generate a list of the manual steps required to test the new feature. It will come out as a list of actions in the UI that you can easily review. These manual steps become part of the documentation and can be shared with users to explain how the process works.
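As a minimal sketch of this step, a prompt for generating reviewable manual steps can be assembled directly from the story's acceptance criteria. The function name and prompt wording below are illustrative assumptions, not a vetted template:

```python
def build_manual_steps_prompt(story_title, acceptance_criteria):
    """Assemble a prompt asking the model for manual UI test steps.

    The exact wording is an illustrative assumption; tune it for your
    own model and tooling.
    """
    criteria = "\n".join(f"- {c}" for c in acceptance_criteria)
    return (
        f"User Story: {story_title}\n"
        f"Acceptance criteria:\n{criteria}\n\n"
        "List the manual UI steps a tester would follow to verify this "
        "feature. Number each step and state the expected result."
    )

# Hypothetical story used only to show the shape of the output.
prompt = build_manual_steps_prompt(
    "Password reset via email",
    ["A reset link is emailed within one minute",
     "The link expires after 24 hours"],
)
print(prompt)
```

Because the prompt quotes the acceptance criteria verbatim, a reviewer can immediately see whether a wrong step traces back to a gap in the story itself.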

Check to ensure the steps are correct. If not, you might need to update the User Story requirements to make them clearer and then try the prompt again. It's better to update the story rather than edit the manual steps, since the story documents the intent, and that intent feeds into other steps in the process. In fact, if the AI is not generating the correct steps, you have likely left a few assumptions unstated in your story.

Once the manual steps are correct, feed them into a prompt and ask for a test script that automates them in the language of your choice. While writing a test script from scratch is often daunting, even non-coders can follow one once it is written. Check the flow to make sure it is correct and make any necessary changes.

The happy-path script is then ready the moment the feature is available to test. The test can be run by the developer before committing her code and even triggered automatically from your CI/CD orchestration engine as the story progresses through the stages. Many testing tools can record videos of the test execution along with a pass/fail determination. These videos not only let you visually check that the results are correct; they are useful as part of the release notes as well.
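As one hedged sketch of that automation, a CI job can run the generated happy-path test on every push and keep the recorded video as a build artifact. This GitHub Actions fragment assumes a pytest-playwright setup; the job name, file paths, and test layout are placeholders for your own pipeline:

```yaml
# Illustrative CI job (names and paths are assumptions): run the
# generated happy-path test and retain the recorded video as an artifact.
name: happy-path-test
on: [push]
jobs:
  test:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - run: pip install pytest pytest-playwright && playwright install chromium
      - run: pytest tests/test_happy_path.py --video=retain-on-failure
      - uses: actions/upload-artifact@v4
        if: always()
        with:
          name: test-videos
          path: test-results/
```

The retained video is what later gives the release notes a link to a visible test run rather than a bare pass/fail flag.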

Release notes are often treated as a necessary evil of the development process. In some cases, proper documentation takes as much effort as writing the code. Then there is the challenge that Agile imposes on the release process: instead of releasing a fixed set of features, Agile teams release the features that are ready and hold the rest back for the next release train. That's great for the development team, but what about the writers? They must have all of the features documented and ready, then wait until the release to learn which features made it and which did not.

GenAI can help in a couple of ways here. 

First, AI can generate the release notes for a feature directly from a well-written User Story. Second, it can compile a document in minutes from the User Stories that made it into the release, and the CI/CD system can trigger that on successful deployment. This documentation can even include the manual steps generated for the happy-path test and a link to the video of the test run.
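The compilation step can be sketched as a small function that filters the stories that shipped and assembles their summaries, manual steps, and test-video links into one document. The story fields (`title`, `summary`, `manual_steps`, `video_url`, `released`) are assumed names, not a real tracker schema:

```python
def compile_release_notes(stories, release_tag):
    """Assemble release notes from the User Stories that made the release.

    Field names are illustrative assumptions; map them to your own
    story tracker's schema.
    """
    shipped = [s for s in stories if s.get("released")]
    parts = [f"# Release {release_tag}", ""]
    for s in shipped:
        parts.append(f"## {s['title']}")
        parts.append(s["summary"])
        parts.append("**How to use it:**")
        parts.extend(f"{i}. {step}"
                     for i, step in enumerate(s["manual_steps"], 1))
        parts.append(f"[Watch the test run]({s['video_url']})")
        parts.append("")
    return "\n".join(parts)

# Hypothetical stories: one shipped, one held back for the next train.
stories = [
    {"title": "Password reset", "released": True,
     "summary": "Users can reset a forgotten password from the login page.",
     "manual_steps": ["Open the login page", "Click 'Forgot password?'"],
     "video_url": "https://ci.example.test/runs/42.mp4"},
    {"title": "Dark mode", "released": False,
     "summary": "Held back.", "manual_steps": [], "video_url": ""},
]
notes = compile_release_notes(stories, "2024.06")
print(notes)
```

Filtering on the release flag at compile time is what lets writers prepare every feature's notes in advance and still publish only what actually shipped.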

Better yet, instead of sending a multi-page release note to all the users, send a short summary of the release highlights and provide a link to a chatbot that can answer questions about the details. Why create an FAQ document when you can let users ask whatever questions they like?

Generative AI has shown itself to be a powerful tool, but it is still early in its development. It can act as an accelerator, but in the near term, everything it generates should be reviewed and corrected. There are also security and privacy issues to take into account. Still, with the arrival of Generative AI there really is no more excuse: every feature can be properly tested and documented with very little effort. And that's good news for the users of our software.

About the Author

David Brooks is the SVP of Evangelism at Copado. He is a serial entrepreneur who has worked at six startups, with three successful exits, over the past 34 years in the Valley. He joined Salesforce.com just after its IPO, in 2005, and spent the next eight and a half years building AppExchange, running a third of the Force.com teams during his tenure.
