Researchers from MIT and the MIT-IBM Watson AI Lab have developed a technique to estimate the reliability of foundation models before they are deployed to a specific task. The approach considers an ensemble of foundation models that are slightly different from one another, then uses an algorithm to assess how consistent the representations each model learns for the same test data point are. The more consistent those representations, the more reliable the model is expected to be on that point (a rough sketch of this idea follows the item below).
MIT News: How to Assess a General-purpose AI Model’s Reliability Before It’s Deployed
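The snippet below is a minimal, hedged sketch of the consistency idea described in the item above, not the authors' published algorithm. It assumes an ensemble of models exposed as callables that map an input to a representation vector of the same dimension, and it uses average pairwise cosine similarity as the consistency score; in practice, representations from different models live in different spaces and would need alignment or a neighborhood-based comparison, which this sketch glosses over.

```python
# Toy illustration of representation-consistency checking.
# Assumptions (not from the original article): each "model" is a callable
# returning a 1-D NumPy embedding of a fixed, shared dimension, and
# consistency is scored with average pairwise cosine similarity.
from itertools import combinations
import numpy as np


def consistency_score(models, x: np.ndarray) -> float:
    """Average pairwise cosine similarity of the representations that
    slightly different models produce for the same test point x."""
    reps = [m(x) for m in models]
    sims = []
    for a, b in combinations(reps, 2):
        sims.append(float(a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)))
    return float(np.mean(sims))


def is_reliable(models, x: np.ndarray, threshold: float = 0.9) -> bool:
    """Flag x as a point the ensemble represents consistently, and
    hence, per the article's framing, as one the model is more
    likely to handle reliably. The threshold is an arbitrary choice."""
    return consistency_score(models, x) >= threshold
```

In this sketch, a higher score simply means the ensemble members "agree" about the test point; the published method is more involved, but the intuition of comparing per-model representations of the same input is the same.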
2024 Trends in Data Technologies: Foundation Models and Confidential Computing
In this contributed article, editorial consultant Jelani Harper suggests that perhaps the single greatest force shaping—if not reshaping—the contemporary data sphere is the pervasive presence of foundation models. Manifest most acutely in deployments of generative Artificial Intelligence, these models are impacting everything from external customer interactions to internal employee interfaces with data systems.