In this contributed article, David Brooks, SVP of Evangelism at Copado, notes that while much has been said about AI’s ability to generate code, its ability to generate test scripts is often overlooked. Test scripts are susceptible to hallucinations just like code, so while GenAI can easily create a script, it is your responsibility to review the results.
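The point about hallucinated test scripts can be sketched with a hypothetical example (the `parse_price` function and both tests below are illustrative inventions, not from the article): a generated test may assert plausible-sounding behavior the code never had, which only a human review catches.

```python
# Hypothetical code under test (illustration only).
def parse_price(text):
    """Parse a price string like '$1,299.99' into a float."""
    return float(text.replace("$", "").replace(",", ""))

# A GenAI-generated test might include a hallucinated expectation:
def test_parse_price_generated():
    assert parse_price("$1,299.99") == 1299.99   # correct
    # Hallucinated: assumes invalid input returns None,
    # but the function actually raises ValueError.
    # assert parse_price("free") is None

# A human-reviewed test asserts the code's real behavior instead:
def test_parse_price_reviewed():
    assert parse_price("$1,299.99") == 1299.99
    try:
        parse_price("free")
    except ValueError:
        pass  # the actual behavior on bad input
    else:
        raise AssertionError("expected ValueError for non-numeric input")

test_parse_price_generated()
test_parse_price_reviewed()
```

The generated test looks reasonable at a glance; only running it against the real code, or reviewing it line by line, exposes the invented contract.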
NetSPI Debuts ML/AI Penetration Testing, a Holistic Approach to Securing Machine Learning Models and LLM Implementations
NetSPI, the global leader in offensive security, today debuted its ML/AI Pentesting solution to bring a more holistic and proactive approach to safeguarding machine learning model implementations. The first-of-its-kind solution focuses on two core components: identifying, analyzing, and remediating vulnerabilities in machine learning systems such as Large Language Models (LLMs), and providing grounded advice and real-world guidance to ensure security is considered from ideation to implementation.
Why Executives Need to Embrace Fake Data in Software Testing
In this contributed article, Alexey Sapozhnikov, CTO and Co-Founder of prooV, explores how ideas move from concept to product through various forms of testing. Just as scientists use laboratories, enterprises (in theory) set up test environments to evaluate the potential and compatibility of new technologies before implementing them. Executives understand the importance of using test environments to minimize security risks, but are understandably wary of inaccurate results based on their past experiences with fake data. With the introduction of Deep Mirroring and Predictive Analytics technologies for testing, fake data should no longer be a concern—it should simply be embraced as a tool in the process of innovation.