Data Science at NVIDIA GTC 2020

October 9, 2020 | 9:00 AM–8:00 PM EDT | Virtual Gathering at NVIDIA's GTC

Where users, partners, customers, contributors, and upstream project leads come together to collaborate across the OpenShift cloud native ecosystem.

The event is over

Why deploy AI/ML (Artificial Intelligence & Machine Learning) workloads on OpenShift?

While organizations are turning to Artificial Intelligence and Machine Learning (AI/ML) to better serve customers, reduce costs, and gain other competitive advantages, executing these programs poses significant challenges. Data scientists need a self-service experience that allows them to build, scale, and share their machine learning (ML) modeling results across the hybrid cloud. With Red Hat OpenShift, data scientists can easily build and deploy their ML models without depending on IT to provision infrastructure. Learn more at openshift.com/ai-ml

Fraud Detection Using Open Data Hub on OpenShift

Demonstration of an end-to-end AI/ML fraud detection use case using Open Data Hub on OpenShift. For more information on Open Data Hub, visit http://opendatahub.io/

Installing Open Data Hub on OpenShift 4.1

Tutorial demonstrating how to deploy Open Data Hub on an OpenShift 4.1 cluster using the OperatorHub catalog. At the end of this tutorial, you will be able to deploy a JupyterHub server and a Spark cluster and create your own Jupyter notebook. The OpenShift 4.1 cluster was created using CodeReady Containers (https://code-ready.github.io/crc/). For more information on Open Data Hub, visit http://opendatahub.io/
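For orientation, a minimal sketch of the local cluster setup described in the tutorial is shown below; the project name and login details are placeholders, and the Open Data Hub operator itself is installed from the OperatorHub page in the OpenShift web console.

```bash
# Minimal sketch, assuming CodeReady Containers (crc) is already downloaded;
# exact versions and credentials will differ per cluster.
crc setup      # prepare the host (hypervisor, networking, DNS)
crc start      # create the single-node OpenShift 4.1 cluster

# Log in with the kubeadmin credentials printed by `crc start`
oc login -u kubeadmin -p <kubeadmin-password> https://api.crc.testing:6443

# Create a project for Open Data Hub, then install the Open Data Hub operator
# from the web console (Operators -> OperatorHub) and create its default
# custom resource to deploy JupyterHub and Spark.
oc new-project odh
```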

Uploading data to Ceph via command line

In this tutorial, we show you how to store data in an S3 data lake from the command line using the s3cmd tool.
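As a rough sketch of that workflow (the Ceph RADOS Gateway endpoint, bucket name, and credentials below are placeholders, not values from the video):

```bash
# Point s3cmd at a Ceph RADOS Gateway (RGW) S3 endpoint; all values are placeholders
cat > ~/.s3cfg <<'EOF'
[default]
access_key = REPLACE_WITH_ACCESS_KEY
secret_key = REPLACE_WITH_SECRET_KEY
host_base = ceph-rgw.example.com:8080
host_bucket = ceph-rgw.example.com:8080
use_https = False
EOF

s3cmd mb s3://my-data-lake                          # create a bucket
s3cmd put data.csv s3://my-data-lake/raw/data.csv   # upload a file
s3cmd ls s3://my-data-lake/raw/                     # confirm the upload
```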

Harness the power of AI/ML with Red Hat OpenShift

In this webinar, Abhinav Joshi shares how you can harness the power of AI/ML with the Red Hat portfolio. He covers the challenges and potential of AI/ML, and how containers and Kubernetes support AI/ML workloads. Abhinav is the Product Marketing Lead for Artificial Intelligence / Machine Learning on OpenShift. For more information, please visit: openshift.com/ai-ml

ML Workflows on Red Hat OpenShift

Red Hat believes that machine learning (ML) workflows are just like traditional software development workflows. In this video we demonstrate how Red Hat OpenShift Container Platform can enable data scientists to leverage traditional DevOps methodologies to accelerate their ML workflows.

Open Data Hub Introduction

Introduction to Open Data Hub, an open source project that provides an end-to-end AI/ML platform on OpenShift. For more information, please visit opendatahub.io

OpenShift Commons Briefing: Continuous Development and Deployment of AI/ML Models with Kubernetes

OpenShift Commons Briefing: Continuous Development and Deployment of AI/ML Models with Containers and Kubernetes. Guest speakers: Will Benton (Red Hat), Parag Dave (Red Hat), and Peter Brey (Red Hat), hosted by Diane Mueller (Red Hat). June 4, 2020.

Top Considerations for Accelerating AI/ML Lifecycle in the Cloud-Native Era

Red Hat's Abhinav Joshi presents "Top Considerations for Accelerating AI/ML Lifecycle in the Cloud-Native Era" at Cloud Architecture Summit: News: Integration Developer News. To learn more about OpenShift: visit: OpenShift.com To download the slides, visit: https://www.idevnews.com/registration?event_id=506&code=ws_sidebar

Using AI to Solve Problems

Michael Clifford, a data scientist at Red Hat, discusses how Red Hat uses Artificial Intelligence to solve operational problems and make the company's products better. Learn more: openshift.com/ai-ml

Ask Me Anything on Open Data Hub with Landon LaSmith (Red Hat) and ODH team members

AMA on Open Data Hub with Landon LaSmith (Red Hat), July 20, 2020.

Intelligent Data Summit: Fast Track AI from Pilot to Production with a Kubernetes-powered platform

Red Hat's Abhinav Joshi presents "Fast track AI from Pilot to Production with a Kubernetes-powered platform" From: Intelligent Data Summit: Integration Developer News.

Red Hat Research: University Collaboration

Hugh Brock, Research Director at Red Hat, discusses how Red Hat connects university research with Red Hat engineers, ultimately driving innovation upstream. Learn more: https://research.redhat.com/

AI/ML at the edge with Red Hat OpenShift

Red Hat OpenShift simplifies the deployment and life-cycle management of AI-powered intelligent applications at the edge, just like it does in the cloud. Learn more at: openshift.com/edge

AIOps: Anomaly Detection (Marcel Hild)

Red Hat uses techniques such as anomaly detection to identify issues in infrastructure and proactively address them, making its products even better. Learn more: openshift.com/ai-ml and openshift.com/storage

AIOps vs MLOps vs DevOps | Zak Berrie (Red Hat) | NVIDIA GTC OSCG

AIOps vs MLOps vs DevOps, presented by Zak Berrie (Red Hat) at the NVIDIA GTC OpenShift Commons Gathering, October 9, 2020.

Applying AIOps to Kubernetes Telemetry Data with Open Data Hub on OpenShift | NVIDIA GTC OSCG

Applying AIOps to Kubernetes Telemetry Data with Open Data Hub, presented by Alex Corvin and Ivan Necas (Red Hat) at the NVIDIA GTC OpenShift Commons Gathering, October 9, 2020.

The Enterprise Neurosystem Initiative | Bill Wright (Red Hat) | NVIDIA GTC OSCG

The Enterprise Neurosystem Initiative, presented by Bill Wright (Red Hat) at the NVIDIA GTC OpenShift Commons Gathering, October 9, 2020.

NVIDIA GTC | Buck Woody (Microsoft)

NVIDIA GTC | Michael Bennett (Dell) and Diane Feddema (Red Hat)

NVIDIA GTC | Cory Latschkowski (ExxonMobil)

Using MPI Operator for GPU-Accelerated Workloads with Lustre FS | David Gray (Red Hat) | NVIDIA GTC OSCG

David Gray (Red Hat), NVIDIA GTC OpenShift Commons Gathering, October 9, 2020. High-performance computing workloads increasingly rely on containers, which make applications easier to manage, preserve their dependencies, and add portability across different environments. Red Hat OpenShift Container Platform is an enterprise-ready Kubernetes-based platform for deploying containerized applications on shared compute resources. An Operator is a method of packaging, deploying, and managing a Kubernetes-native application that can make it easier to run complex workloads. We'll demonstrate how GPU-accelerated scientific applications can be deployed on OpenShift using the Message Passing Interface (MPI) and backed by the Lustre file system for data storage.
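The session is a live demo rather than a written how-to, but for readers unfamiliar with the MPI Operator, a hedged sketch of an MPIJob resource follows; the API version, container image, command, and PVC name are assumptions, not the exact resources used in the talk.

```bash
# Sketch of an MPIJob as managed by the Kubeflow MPI Operator (details assumed)
oc apply -f - <<'EOF'
apiVersion: kubeflow.org/v1
kind: MPIJob
metadata:
  name: gpu-mpi-demo
spec:
  slotsPerWorker: 1
  mpiReplicaSpecs:
    Launcher:
      replicas: 1
      template:
        spec:
          containers:
          - name: launcher
            image: example.com/mpi-gpu-app:latest             # hypothetical image
            command: ["mpirun", "-np", "2", "/opt/app/solver"] # hypothetical entry point
    Worker:
      replicas: 2
      template:
        spec:
          containers:
          - name: worker
            image: example.com/mpi-gpu-app:latest             # hypothetical image
            resources:
              limits:
                nvidia.com/gpu: 1                             # one GPU per worker
            volumeMounts:
            - name: scratch
              mountPath: /data
          volumes:
          - name: scratch
            persistentVolumeClaim:
              claimName: lustre-pvc                           # hypothetical Lustre-backed PVC
EOF
```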

A Day in the Life of a Red Hat Data Scientist

Don Chesworth describes what it's like to be a data scientist at Red Hat, including using the latest and greatest open source tooling. Don also shares an experience working with the Open Data Hub team to submit improvements for changing Red Hat OpenShift shared memory size across multiple GPUs. Learn more at openshift.com/ai-ml

Using Open Data Hub as a Red Hat Data Scientist

Isabel Zimmerman demonstrates how to build, deploy, and monitor machine learning models using the Open Data Hub project for simplified end-to-end machine learning workflows. Isabel shows an ML workflow that includes Jupyter notebooks, Seldon for model hosting, and Prometheus and Grafana for monitoring and visualization. Learn more at openshift.com/ai-ml

A Customer’s Story with Red Hat AI Solutions

Red Hat Consulting Services describes how they helped a customer consolidate multiple siloed data sets, accelerate and scale data science workflows across thousands of users, and implement open source AI/ML technologies leveraging the Open Data Hub project. Results include faster ML deployments (faster time to solution) and increased collaboration between developers and operations. Learn more at https://www.openshift.com/learn/topics/ai-ml

Red Hat's Artificial Intelligence (AI) Vision (part 1)

Steven Huels describes Red Hat’s AI/ML business focus and vision, including our investments in Kubernetes and DevOps as well as the technology partner ecosystem. Through the Open Data Hub project, Red Hat provides a reference for implementing open source tooling on OpenShift. Learn more at openshift.com/learn/topics/ai-ml

Red Hat's AI vision and Open Data Hub (part 2)

As a proof point of Red Hat’s AI investments and vision, Steven highlights the origin of the Open Data Hub project, its use by Red Hat Consulting services to help customers put models into production, and the ecosystem of AI/ML technology partners on Red Hat OpenShift. Learn more at: openshift.com/learn/topics/ai-ml

Preview demo of Red Hat OpenShift Data Science

See a preview demo of the Red Hat OpenShift Data Science managed cloud service offering. Red Hat OpenShift Data Science combines common open source tooling, partner software, and other Red Hat portfolio software to provide a fully supported sandbox in which to rapidly develop, train, and test ML models in the public cloud. Chris Chase runs through a tutorial showing how easy it is to launch Jupyter notebooks, build a model with the TensorFlow framework, and deploy the model in a container-ready format.

Red Hat’s AI/ML Technology Partnerships

Red Hat has taken a partner ecosystem approach to AI/ML use cases in a hybrid cloud environment, not only for the Open Data Hub project but also for its Red Hat OpenShift Data Science managed cloud service offering. In this short video, Ryan describes the approach with these key data science ISV partnerships.

Red Hat Innovators in the Open | HCA Healthcare

HCA Healthcare uses an innovative data platform to accelerate the detection of sepsis and save lives using a containerized ML application. Learn more at https://www.OpenShift.com/ai-ml

NVIDIA and Red Hat enabling faster delivery of AI-powered intelligent applications

Together, Red Hat’s OpenShift hybrid cloud platform and NVIDIA’s innovative GPUs, CUDA-X libraries, GPU Operator, and AI software from the NGC catalog help businesses quickly, consistently, and securely develop, deploy, and scale AI applications across the hybrid cloud. Learn more at https://www.OpenShift.com/nvidia

Delivering MLOps with OpenShift GitOps and Pipelines

As AI/ML modeling becomes more important, the growing expectation is to adopt and implement an MLOps strategy that makes it easier to productize models and keep them up to date. Explore the challenges this poses for data scientists and what OpenShift is doing to make productizing AI/ML models simpler with features like OpenShift GitOps, OpenShift Pipelines, Quay, and Red Hat OpenShift Data Science.
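As an illustration of the idea (not the exact pipeline shown in the session), an OpenShift Pipelines (Tekton) definition chaining hypothetical training and deployment Tasks could look like this:

```bash
# Illustrative Tekton pipeline; the referenced Tasks are hypothetical placeholders
oc apply -f - <<'EOF'
apiVersion: tekton.dev/v1beta1
kind: Pipeline
metadata:
  name: mlops-pipeline
spec:
  tasks:
  - name: train-model
    taskRef:
      name: train-model          # hypothetical Task: train and export the model
  - name: build-model-image
    runAfter: ["train-model"]
    taskRef:
      name: build-model-image    # hypothetical Task: bake the model into an image and push it to Quay
  - name: deploy-model
    runAfter: ["build-model-image"]
    taskRef:
      name: deploy-model         # hypothetical Task: update the GitOps repo so the new image is rolled out
EOF
```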

Red Hat OpenShift - Moonshot ideas

Innovation without limitation. Learn more at OpenShift.com

How NTT and Red Hat have Built an Edge Offer to Deliver New Cloud-Native AI-Platform Services

Learn how cloud-native AI applications can be enabled for a new digital service edge platform from NTT East. The Multi-Access Edge Computing (MEC) platform is based on a Red Hat OpenShift architecture leveraging GPUs and the NVIDIA GPU Operator to deliver new AI application services across multi-cloud edge environments, including private 5G, fiber, telco edge, and customers’ enterprise edge. The cloud-native edge solution targets multiple industries, and a next-generation intelligent video analytics (IVA) example is highlighted for a retailer.

Using Open Data Hub for MLOps Demo

Juana Nakfour describes MLOps and demonstrates how to use Open Data Hub, a community project integrating open source AI/ML tools into an end-to-end AI platform on Red Hat OpenShift. First, Juana shows how to use the Elyra tool in JupyterHub to auto-generate and run an MLOps Kubeflow pipeline. After showcasing Red Hat Ceph Storage and bucket notifications to trigger and automate the MLOps pipeline, Juana shows how to deploy the fraud detection neural network machine learning model using KFServing and how to use a canary rollout to introduce a new version of the model.
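For context, a minimal sketch of a KFServing canary rollout, assuming the v1beta1 API and a hypothetical model location rather than the exact resources from the demo, might look like:

```bash
# Minimal KFServing InferenceService with a canary split (names and URI are placeholders)
oc apply -f - <<'EOF'
apiVersion: serving.kubeflow.org/v1beta1
kind: InferenceService
metadata:
  name: fraud-detection
spec:
  predictor:
    canaryTrafficPercent: 10                      # send 10% of traffic to the newest revision
    tensorflow:
      storageUri: s3://models/fraud-detection/v2  # hypothetical model artifact location
EOF
```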

What is Kubeflow?

In this short 45-second video, Juana Nakfour describes Kubeflow and its advantages.

Top 5 Considerations for an AI/ML Platform

Will McGrath, Product Marketing Manager in Red Hat’s Data Services business unit, discusses the top 5 considerations for building out an AI platform. Whether it’s developing a data strategy or building a collaborative environment, these guidelines will help your organization achieve success as your machine learning projects move from experimentation to production.

Starburst partners with Red Hat on OpenShift Data Science for AI/ML workloads

Justin Borgman, CEO of Starburst, describes how Starburst partners with Red Hat, integrating with Red Hat OpenShift Data Science to provide insights into an organization’s data for analytics and AI/ML workloads. Justin describes how the Starburst data mesh capabilities can help data engineers to access decentralized data and eliminate the need for data wrangling.

SWIFT and Red Hat OpenShift deliver an AI platform for financial transaction intelligence at scale

SWIFT is shaping the future of payments and securities to be faster, smarter and better. As the financial industry’s neutral and trusted provider, we help our community of over 11,000 financial institutions move value around the world reliably and more securely at scale. SWIFT is now leveraging our pivotal role in the financial industry to develop transformative AI solutions, enabled by a high-performance AI platform that is future ready for hybrid cloud. Join this keynote to learn how SWIFT, along with Red Hat, C3.ai, Kove and partner financial institutions, is embarking on a journey to enhance the effectiveness and efficiency of shared services like payment screening and anomaly detection, leveraging unique global transaction data – without compromising the integrity of transaction data or the privacy of its users. Speakers: Marius Bogoevici (Red Hat) and Chalapathy Neti (SWIFT)

Event Overview

This OpenShift Commons Gathering on AI and Machine Learning is co-located with NVIDIA's GTC virtual event on October 5–9, 2020!

The OpenShift Commons Gatherings bring together experts from all over the world to discuss container technologies, best practices for cloud native application developers, and the open source software projects that underpin the OpenShift ecosystem. This event will gather developers, data scientists, DevOps professionals, and sysadmins together to explore the next steps in making container technologies successful and secure for your ML and AI workloads.

Where

Virtual Gathering at NVIDIA's GTC

When

Friday, October 9, 2020

Price

Included in your GTC Registration

Please note: Pre-registration is required. Your GTC registration grants you access to the virtual on-demand OpenShift Commons Gathering talks on October 5–9 from 9:00 a.m. to 1:00 p.m. ET. This forum will feature a discussion of best practices, lessons learned, and the open source projects that support OpenShift and Kubernetes, all from project leads with production enterprise deployments.

Schedule

Code of Conduct: We follow the Code of Conduct of other events such as KubeCon. Similarly, we are dedicated to providing a harassment-free experience for participants at all of our events, whether they are held in person or virtually. All event participants are expected to behave in accordance with professional standards, with this Code of Conduct, with their respective employer's policies governing appropriate workplace behavior, and with applicable laws.

COVID-19 Health + Safety Information: CNCF is committed to our attendees' health and safety; this remains our top priority as we continue to monitor COVID-19 and look to venue, local, state, CDC, and WHO guidelines to make the best and most informed decisions around onsite safety and requirements.

Health and safety information for KubeCon + CloudNativeCon North America

See sessions from previous gatherings

9:00 AM
The Enterprise Neurosystem Initiative | William Wright (Red Hat)
9:30 AM
AIOps vs MLOps vs DevOps | Zak Berrie (Red Hat)
10:00 AM
GPU-Accelerated Machine Learning with OpenShift Container Platform | Diane Feddema (Red Hat) and Michael Bennett (Dell)
10:30 AM
Using MPI operator to run GPU-accelerated scientific workloads on Red Hat OpenShift with Lustre FS | David Gray (Red Hat)
11:00 AM
Using GPUs for Data Science & Optimization Containers in OpenShift | Cory Latschkowski (ExxonMobil)
11:30 AM
Applying AIOps to Kubernetes Telemetry Data with Open Data Hub on OpenShift | Alex Corvin (Red Hat) and Ivan Necas (Red Hat)
12:00 PM
Accelerating AI on the Edge | Nick Barcet (Red Hat) and Kevin Jones (NVIDIA)
12:31 PM
Data driven insights with SQL Server Big Data Clusters and OpenShift | Buck Woody (Microsoft)

Speakers

Alex Corvin, Manager, Software Engineering
Red Hat

Buck Woody, Applied Data Scientist
Microsoft

Cory Latschkowski, OpenShift Architect / Senior Linux Engineer
ExxonMobil

David Gray, Software Engineer
Red Hat

Diane Feddema, Principal Software Engineer, AI and Machine Learning CoE
Red Hat

Ivan Necas, Software Architect
Red Hat

Kevin Jones, Principal Product Manager
NVIDIA

Michael Bennett, Engineer, Server CTO Office
Dell

Nick Barcet, Senior Director, Technology Strategy
Red Hat

William Wright, Head of AI/ML and Intelligent Edge for Global Verticals
Red Hat

Zak Berrie, Hybrid Cloud Machine Learning Solution Sales Specialist
Red Hat

Venue

October 9, 2020 | 9:00 AM–8:00 PM EDT

Virtual Gathering at NVIDIA's GTC