Learn how NVIDIA FLARE facilitates research, from simulation tools to real-world case studies.
Research Tools
NVIDIA FLARE is an excellent research tool, offering robust simulation capabilities and extensive support for experimentation in federated learning.
Many researchers need to simulate federated learning scenarios without setting up an actual federated learning system.
NVIDIA FLARE allows for repeated experimentation with different parameters, facilitating quick evaluations and monitoring of results.
NVIDIA FLARE Simulation Tools
Simulator:
The Simulator is a multi-process, multi-threaded simulation tool that offers both a Command Line Interface (CLI) and a Python API.
It enables the simulation of different numbers of clients and the execution of various federated learning jobs.
Once a simulation is complete, users can deploy the same code in production without any changes.
Additionally, users can utilize an Integrated Development Environment (IDE) debugger to step through the code for easier debugging.
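As an illustration, a simulation run driven from Python might look like the sketch below. It assumes the SimulatorRunner API and a job folder named jobs/hello-numpy-sag (both placeholders here); import paths and argument names can vary between FLARE releases, so treat this as a sketch rather than a definitive recipe.

    from nvflare.private.fed.app.simulator.simulator_runner import SimulatorRunner

    # Minimal sketch: simulate a job with two clients on two threads.
    # The job folder and workspace paths below are placeholders.
    simulator = SimulatorRunner(
        job_folder="jobs/hello-numpy-sag",   # FL job definition to run
        workspace="/tmp/nvflare/simulator",  # where simulation output and logs go
        n_clients=2,                         # number of simulated clients
        threads=2,                           # number of parallel client threads
    )
    run_status = simulator.run()             # returns 0 on success
    print("Simulator finished with status:", run_status)

The same kind of run can also be started from the CLI using the nvflare simulator command.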
Proof of Concept (POC) Mode:
POC mode simulates real-world deployment on a local host.
Clients and servers can be deployed in different directories and launched from separate terminals, with each terminal representing a different client or server startup.
This mode allows for job submissions to the server as would occur in a real production environment.
Interaction Methods with FL Server
Admin Console:
Issue interactive commands such as submitting jobs, listing jobs, and aborting jobs.
Job CLI:
A command-line interface used for job submission.
FLARE API:
Allows submission and listing of jobs through Python code.
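For instance, submitting and listing jobs through the FLARE API might look roughly like the following sketch. The admin user name and startup-kit location are placeholders from a provisioned project, and the available session methods depend on the FLARE version.

    from nvflare.fuel.flare_api.flare_api import new_secure_session

    # Minimal sketch: open an admin session with a provisioned startup kit
    # (the user name and paths below are placeholders).
    sess = new_secure_session(
        "admin@nvidia.com",
        "/workspace/example_project/prod_00/admin@nvidia.com",
    )
    try:
        job_id = sess.submit_job("/workspace/jobs/hello-numpy-sag")  # submit a job folder
        print("Submitted job:", job_id)
        for job in sess.list_jobs():  # query jobs known to the server
            print(job)
    finally:
        sess.close()  # always release the admin session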
Experiment Tracking Tools Integration
Experiment tracking:
FLARE supports metric logging through its metrics-tracking log writers; users can choose TensorBoard, MLflow, or Weights & Biases (W&B) syntax.
Metrics can be streamed to either the FL server or the FL client, and switching the metrics backend does not require any code changes.
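As a rough sketch, metric logging from client training code with the TensorBoard-style writer could look like the example below. It assumes the Client API's tracking writers available in recent FLARE releases and is meant to run inside a FLARE job; the loss value here is a stand-in for a real training loss.

    import nvflare.client as flare
    from nvflare.client.tracking import SummaryWriter  # TensorBoard-style syntax

    flare.init()              # initialize the FLARE Client API inside a job
    writer = SummaryWriter()  # metrics are streamed through FLARE events

    for step in range(100):
        loss = 1.0 / (step + 1)                      # stand-in for a real training loss
        writer.add_scalar("train_loss", loss, step)  # same call shape as TensorBoard

Which tracking system ultimately receives the streamed metrics is determined by the receiver configured for the job, so training code like the above does not need to change when the backend changes.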
Research Works
NVIDIA FLARE offers a wealth of state-of-the-art research work; here is a quick view of recent work.
Learn more in the research directory of our GitHub repository and view our list of Publications.
Case Studies
NVIDIA has worked with several institutions to test and validate the utility of federated learning. We hope these examples inspire or relate to your own research.
Here are five real-life implementations in healthcare, pushing the envelope for training robust, generalizable AI models:
- EXAM AI Model for Predicting Oxygen Requirements in COVID Patients
- ADOPS (ACR DASA OSU Partners HealthCare Stanford)
- University of Minnesota and Fairview X-Ray COVID AI Model
- SUN Initiative Prostate Cancer AI Model
- CT Pancreas Segmentation AI Model
EXAM AI Model for Predicting Oxygen Requirements in COVID Patients
AI model to predict oxygen requirements
NVIDIA researchers, Massachusetts General Brigham Hospital
During COVID-19, it was challenging to determine which patients would need a higher level of care in the near future, even when they initially presented with minimal symptoms. The goal of this study was therefore to train a previously developed AI model that determines whether a person with COVID-19 symptoms will need supplemental oxygen hours or even days after an initial exam. The approach included a separate server hosted on AWS, which held the global deep neural network. Each client site received a copy of the model to train on its own dataset, and FLARE was used to aggregate the local results into a global model.
Training was completed in two weeks and produced a global model with 0.94 Area Under the Curve (AUC), resulting in excellent prediction of the level of oxygen required by incoming patients.
Publication: Federated learning for predicting clinical outcomes in patients with COVID-19
ADOPS (ACR DASA OSU Partners HealthCare Stanford)
Breast mammography AI model for early detection: breast density classification improvement
The American College of Radiology (ACR), Diagnosticos da America (DASA), Ohio State University (OSU), Partners HealthCare (PHS), and Stanford University
Early detection through mammography is critical when it comes to reducing breast cancer deaths, but breast density can make it harder to detect the disease. The team used a 2D mammography classification model provided by PHS, which was trained using NVIDIA Clara Train on NVIDIA GPUs. The model was then retrained using NVIDIA Clara Federated Learning at PHS as well as at the client sites, without any data being transferred.
Each institution obtained a better performing model that had an overall superior predictive power on their own local dataset. In doing so, Federated Learning enabled improved breast density classification from mammograms, which could lead to better breast cancer risk assessment.
Publication: Federated Learning for Breast Density Classification: A Real-World Implementation
University of Minnesota and Fairview X-Ray COVID AI Model
Improve AI models for COVID-19 diagnosis based on chest X-rays
University of Minnesota and M Health Fairview, Indiana University (Indiana, USA), and Emory University (Georgia, USA)
The goal of this study was to improve real-world AI models for COVID-19 diagnosis based on chest X-rays. The study leveraged a three-phase pipeline composed of U-Net lung segmentation, a conditional Generative Adversarial Network (cGAN) for outlier detection, and a DenseNet121 COVID-19 classification model. The aggregate multi-institutional dataset consisted of approximately 80,000 labeled images with a 30%/70% positive/negative COVID classification split. The classification model was trained with a federation consisting of a Federated Learning server and clients at the University of Minnesota and Fairview (Minnesota, USA), with additional participating clients at Indiana University (Indiana, USA) and Emory University (Georgia, USA), using a mix of cloud (AWS/Azure) and local servers.
Initial results showed that, on the UMN local dataset, the global model improved performance by 5% AUROC and 8% AUPRC compared to the UMN local model.
SUN Initiative Prostate Cancer AI Model
Federated segmentation model
SUNY, UCLA, NIH
Prostate cancer is a common cancer of the prostate gland in men. Accurate segmentation of the prostate gland is useful for developing AI models that help detect prostate cancer. In this initiative, we tested the hypothesis that Federated Learning can be used to train a segmentation model comparable to one trained on a pooled data (PD) set.
The results showed equivalent performance from both the experimental Federated Learning model and the benchmark PD model, demonstrating the feasibility of training an AI model with a Federated Learning approach.
CT Pancreas Segmentation AI Model
Automated segmentation model of the pancreas and pancreatic tumors in abdominal CT
National Taiwan University, Taiwan, and Nagoya University, Japan
The aim of this experiment was to build models for the automated segmentation of the pancreas and pancreatic tumors in abdominal CT. A 3D segmentation model based on neural architecture search developed by NVIDIA's Applied Research team was collaboratively trained using Federated Learning.
The global Federated Learning model achieved an average segmentation performance of 82.3% Dice score on healthy pancreas cases.