At the recent 2024 AI Hardware & Edge AI Summit in San Jose, Calif., I caught up with Elio Van Puyvelde, CIO of Nscale, the hyperscaler engineered for AI, where customers can access thousands of GPUs tailored to their requirements through the Nscale AI cloud platform.
Podcast: Intel Unveils Next-gen Solutions with Xeon 6 processors and Gaudi 3 to Tackle Enterprise Needs
Enterprises increasingly need AI infrastructure that is both cost-effective and available for rapid development and deployment. To meet this demand head-on, Intel today launched Xeon 6 processors with Performance-cores (P-cores) and Gaudi 3 AI accelerators, bolstering the company’s commitment to delivering powerful AI systems with optimal performance per watt and lower total cost of ownership.
Podcast: Agentic AI – The Dawn of Autonomous Intelligence
This insideAI News “Power to the Data” podcast discusses how AI has been transforming industries and redefining the boundaries of technology for decades. From simple machine learning algorithms that sort emails to complex neural networks that predict market trends, AI has become an integral part of modern life. Among the various branches of AI, one […]
At 2024 AI Hardware & Edge AI Summit: Vasudev Lal, Principal AI Research Scientist, Cognitive AI, Intel Labs
At the recent 2024 AI Hardware & Edge AI Summit in San Jose, Calif., I caught up with Vasudev Lal, Principal AI Research Scientist, Cognitive AI, Intel Labs, who took us on a tour of the happenings at Intel Labs, specifically around the field of Cognitive AI. He discussed some key projects that the Intel Cognitive AI team is running at this time, and how his team is advancing with Intel Gaudi AI accelerators.
HPC News Bytes 20240715: AI Maturity ROI, OpenAI’s 5 Levels of AI, SoftBank Acquires Graphcore
Much has happened of late in the world of HPC-AI. Here’s a quick (5:55) run through of the news, including: a survey commissioned by Vultr points to AI maturity ROI, OpenAI proposes five levels of AI based on capability, and SoftBank acquires Graphcore.
Webinar: Getting Started with Llama 3 on AMD Radeon and Instinct GPUs
[Sponsored Post] This webinar: “Getting Started with Llama 3 on AMD Radeon and Instinct GPUs” provides a guide to installing Hugging Face transformers, Meta’s Llama 3 weights, and the necessary dependencies for running Llama locally on AMD systems with ROCm™ 6.0.
Video Highlights: Vicuña, Gorilla, Chatbot Arena and Socially Beneficial LLMs — with Prof. Joey Gonzalez
Vicuña, Chatbot Arena, and the race to increase LLM context windows: in this video presentation, guest Joey Gonzalez joins our good friend Jon Krohn, Co-Founder and Chief Data Scientist at the machine learning company Nebula, to talk about developing models and platforms that leverage and improve LLMs, as well as the future of AI development and access.
insideAI News Podcast: Can A.I. Take a Joke?
Artificial intelligence, we’ve been told, will destroy humankind. No, wait — it will usher in a new age of human flourishing! Freakonomics Radio guest host Adam Davidson (co-founder of Planet Money) sorts through the big claims about A.I.’s future by exploring its past and present — and whether it has a sense of humor. Enjoy the podcast!
insideAI News AI News Briefs – 7/27/2023
Welcome to insideAI News AI News Briefs, our podcast channel bringing you the latest industry insights and perspectives surrounding the field of AI, including deep learning, large language models, generative AI, and transformers. We’re working tirelessly to dig up the most timely and curious tidbits underlying the day’s most popular technologies. We know this field is advancing rapidly, and we want to bring you a regular resource to keep you informed and up to date.
Power to the Data Report Podcast: Large Language Models for Executives
Hello, and welcome to the “Power to the Data Report” podcast, where we cover timely topics from throughout the Big Data ecosystem. I am your host Daniel Gutierrez from insideAI News, where I serve as Editor-in-Chief & Resident Data Scientist. Today’s topic is “Large Language Models for Executives.” LLMs represent an important inflection point in the history of computing. After many “AI winters,” we’re finally seeing techniques like generative AI and transformers that are realizing some of the dreams of AI researchers from decades past. This episode presents a high-level view of LLMs for executives, project stakeholders, and enterprise decision makers.