Building an Open Future: Core Components for the AI-Driven Enterprise


OVERVIEW

AI isn’t one-size-fits-all, and it shouldn’t be. The most resilient AI strategies are open, collaborative, and built to scale with your mission.

In this quick-hit webinar series, you’ll learn how to lower barriers to adoption, improve efficiency, manage AI lifecycles, and extend AI to the edge, with each session under 15 minutes.

Join Red Hat experts for practical, public sector-focused insights you can put to work right away.

Any questions? Please email kmccabe@redhat.com

SESSION DETAILS

 
Session 1: Lowering Barriers with InstructLab
  • Tuning generative AI models is cost-prohibitive, demanding scarce, highly skilled talent to tune models on mission-centric data at scale across hybrid cloud and edge environments.
  • InstructLab offers an open-source methodology for tuning the LLM of your choice, using synthetic data generation to ease the burden of data creation with far fewer computing resources and at lower cost. For a taste of the workflow, see the sketch after this session’s details.
     
    Speaker:
     
    Dan Domkowski | AI Specialist, Red Hat
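 
    A taste of what this session covers: after tuning with the InstructLab command line (roughly ilab data generate, then ilab model train, then ilab model serve; exact subcommands vary by version), the served model exposes an OpenAI-compatible endpoint. A minimal Python sketch, assuming the default local port and an illustrative model name:
 
        # Query a locally served, InstructLab-tuned model through its
        # OpenAI-compatible endpoint (assumes "ilab model serve" is running).
        from openai import OpenAI

        client = OpenAI(base_url="http://localhost:8000/v1", api_key="none")
        resp = client.chat.completions.create(
            model="granite-7b-lab",  # illustrative model name
            messages=[{"role": "user",
                       "content": "Summarize our records-retention policy."}],
        )
        print(resp.choices[0].message.content)
 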
Session 2: Driving Efficiency with vLLM
  • Serving LLMs is resource-intensive, demanding expensive, high-end hardware to meet the scale and speed that agencies require.
  • vLLM lets agencies “do more with less,” offering LLM inferencing and serving with greater efficiency and scale, and up to 24x higher throughput. For the basic API, see the sketch after this session’s details.
     
    Speaker: 
     
    Michael Hardee | Chief Architect, Law Enforcement and Justice, Red Hat
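 
    To make the efficiency claim concrete, here is a minimal offline-inference sketch using vLLM’s Python API; the model name is illustrative:
 
        # vLLM batches prompts and schedules them with PagedAttention,
        # which is where the throughput gains come from.
        from vllm import LLM, SamplingParams

        llm = LLM(model="mistralai/Mistral-7B-Instruct-v0.2")  # illustrative
        params = SamplingParams(temperature=0.2, max_tokens=128)

        prompts = [
            "Draft a one-paragraph status update for a grants dashboard.",
            "List three checks to run before releasing a public dataset.",
        ]
        for output in llm.generate(prompts, params):
            print(output.outputs[0].text)
 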
Session 3: Managing Model Lifecycles with OpenShift AI
  • Cross-functional AI teams want self-service access to workspaces on available or GPU-accelerated computing resources, integrated with their choice of tools, so they can collaborate and get to production quickly without the toil and friction of current approaches.
  • OpenShift AI streamlines and automates the MLOps lifecycle with pipelines for generative and predictive AI models, from data acquisition and preparation, through model training and fine-tuning, to model serving and monitoring, consistently from the edge across hybrid clouds. For what a pipeline looks like in code, see the sketch after this session’s details.
     
    Speaker:
     
    Dan Domkowski | AI Specialist, Red Hat
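 
    OpenShift AI’s data science pipelines are based on Kubeflow Pipelines, so a pipeline can be written in Python and compiled to YAML for upload. A minimal sketch with placeholder component bodies; names and storage paths are illustrative:
 
        from kfp import compiler, dsl

        @dsl.component
        def prepare_data() -> str:
            # Placeholder: acquire and prepare training data here.
            return "s3://bucket/prepared"  # illustrative location

        @dsl.component
        def train_model(data_path: str) -> str:
            # Placeholder: train or fine-tune against the prepared data.
            return "s3://bucket/model"  # illustrative location

        @dsl.pipeline(name="train-and-serve")
        def pipeline():
            data = prepare_data()
            train_model(data_path=data.output)

        if __name__ == "__main__":
            # Produces a YAML file you can import into an OpenShift AI
            # pipeline server and run or schedule from there.
            compiler.Compiler().compile(pipeline, "pipeline.yaml")
 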
Session 4: Inferencing at the Edge
  • Agencies demand actionable intelligence where the mission occurs, but managing a distributed fleet of AI-enabled edge devices in constrained or disconnected environments brings significant operational complexity.
  • Red Hat enables AI inferencing and serving in disconnected, resource-constrained environments through lightweight, flexible platforms: containerized deployments, efficient resource usage, and secure, automated updates let models run locally without relying on cloud connectivity. For a minimal local inference endpoint, see the sketch after this session’s details.
     
    Speaker:
     
    Sompop Noiwan | Application Strategist, Department of Homeland Security, Red Hat
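 
    To ground the idea, here is a minimal sketch of a self-contained inference service of the kind that could run in a container on a disconnected edge device. It assumes the model weights are already staged on local disk; the path is illustrative:
 
        # A tiny local inference service: it loads entirely from local disk
        # and makes no calls out to a model hub or cloud API.
        from fastapi import FastAPI
        from pydantic import BaseModel
        from transformers import pipeline

        app = FastAPI()
        classifier = pipeline("text-classification",
                              model="/models/distilbert")  # illustrative path

        class InferRequest(BaseModel):
            text: str

        @app.post("/infer")
        def infer(req: InferRequest):
            # Returns, e.g., {"label": "POSITIVE", "score": 0.97}
            return classifier(req.text)[0]

        # Run inside the container with, e.g.:
        #   uvicorn main:app --host 0.0.0.0 --port 8080
 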