Trusted Artificial Intelligence Online

OVERVIEW

Artificial intelligence systems, enabled by advances in sensor and control technologies, data science, and machine learning, promise to deliver new and exciting applications to a broad range of industries. However, fundamental trust in their application and execution must be established for them to succeed. People, by and large, do not trust a new entity or system in their environment without some evidence of trustworthiness. To trust an artificial intelligence system, we need to know which factors affect its behavior, how those factors can be assessed and effectively applied for a given mission, and the risks we assume by trusting it.

This course aims to provide a foundation for building trust in artificial intelligence systems. It defines a framework for evaluating trust from three perspectives: data, AI algorithms, and cybersecurity. It then reviews the state of the art in research, methods, and technologies for achieving trust in AI, along with current applications.

LEARNING OBJECTIVES

  • Establishing Trust in Data
  • Understanding Data Management
  • Understanding AI Interpretability and Explainability
  • Understanding Adversarial Robustness, including the intersection of AI and cybersecurity
  • Understanding Monitoring & Control

WHO SHOULD ATTEND
This course is intended for program managers, researchers, engineers, academics, and graduate students who want a deeper understanding of AI, both as a field of study and from an aerospace application perspective.
 
Course Information:
Type of Course: Instructor-Led Short Course
Course Length: 1 day
AIAA CEUs available: Yes

 
Outline
  • Introduction
    • Motivation for establishing trust in AI systems
    • Defining Trust
    • Establishing a Trusted AI Framework
  • Data
    • Overview of Data Management
    • Defining Data Governance
    • Ensuring Fairness in AI Systems
      • Assessing bias
      • Techniques for removing bias
      • Metrics for fairness (fairness-metric sketch below)
    • Data poisoning
      • Introduction to poisoning attacks
      • Data poisoning defenses (defense sketch below)
    • Building trust through reproducibility
      • Handling domain shift
      • Treating models as data (fingerprinting sketch below)
      • Tools and best practices
  • Interpretability & Explainability
    • Need for interpretability
    • Tools and techniques (importance sketch below)
  • Adversarial Robustness
    • Intersection of AI and cybersecurity
    • Adversarial attacks (attack example below)
      • Decision boundary attacks
      • Adversarial patch attacks
    • Defenses against adversarial attacks
      • Adversarial training
      • Exact methods
      • Lower bound estimation
      • Randomized smoothing (smoothing sketch below)
    • Open areas of research
  • Monitoring & Control
    • Model acceptance testing
    • Covariate, target, and domain shifts (drift-detection sketch below)
    • Mechanisms for control of AI systems
    • Dealing with confidence and uncertainty
  • Conclusion
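
The technique names flagged in the outline above are illustrated with short Python sketches below. Each is a minimal sketch built on toy data and hypothetical names, not course material or a production implementation.

First, two common group-fairness metrics, demographic parity difference and disparate impact, computed from binary model decisions and a binary protected attribute (both arrays are hypothetical):

```python
import numpy as np

def fairness_metrics(y_pred, group):
    """Selection rates per group, plus two common group-fairness metrics.

    y_pred : array of 0/1 model decisions
    group  : array of 0/1 protected-attribute labels (hypothetical)
    """
    rate_a = y_pred[group == 0].mean()  # selection rate for group 0
    rate_b = y_pred[group == 1].mean()  # selection rate for group 1
    parity_diff = abs(rate_a - rate_b)  # 0.0 indicates demographic parity
    disparate_impact = min(rate_a, rate_b) / max(rate_a, rate_b)  # 1.0 is parity
    return parity_diff, disparate_impact

# Toy example: a model that selects group 0 more often than group 1.
rng = np.random.default_rng(0)
group = rng.integers(0, 2, size=1000)
y_pred = (rng.random(1000) < np.where(group == 0, 0.6, 0.4)).astype(int)
print(fairness_metrics(y_pred, group))
```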
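
Next, one simple idea for a data poisoning defense: sanitize the training set by dropping per-class feature outliers before training. The z-score threshold is arbitrary, and real defenses are considerably more sophisticated:

```python
import numpy as np

def sanitize_training_set(X, y, z_thresh=3.0):
    """Crude data-poisoning defense: drop per-class feature outliers.

    Points whose distance to their own class centroid is a statistical
    outlier (z-score above z_thresh) are removed before training, on the
    theory that injected poison often looks atypical for its label.
    """
    keep = np.ones(len(y), dtype=bool)
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        dist = np.linalg.norm(X[idx] - X[idx].mean(axis=0), axis=1)
        z = (dist - dist.mean()) / (dist.std() + 1e-12)
        keep[idx[z > z_thresh]] = False   # flag suspicious points
    return X[keep], y[keep]
```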
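
For treating models as data, a sketch of content-based fingerprinting: hash the serialized model together with the training data and configuration so a result can be traced to the exact artifacts that produced it. Function and parameter names are hypothetical:

```python
import hashlib
import json

def artifact_fingerprint(model_bytes, dataset_bytes, params):
    """Reproducibility aid: fingerprint a model the way you would data.

    Hashing the serialized weights, the raw training data, and the
    training configuration together yields one identifier for the full
    provenance of a trained model.
    """
    h = hashlib.sha256()
    h.update(model_bytes)                                  # serialized weights
    h.update(dataset_bytes)                                # raw training data
    h.update(json.dumps(params, sort_keys=True).encode())  # hyperparameters
    return h.hexdigest()
```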
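
For interpretability tools, a sketch of permutation feature importance, a model-agnostic technique that measures how much accuracy drops when one feature's values are shuffled; predict_fn is a placeholder for any trained classifier:

```python
import numpy as np

def permutation_importance(predict_fn, X, y, n_repeats=10, seed=0):
    """Model-agnostic explanation: accuracy drop when a feature is shuffled.

    predict_fn : callable mapping an (n, d) array to 0/1 predictions
    A large drop for feature j suggests the model leans heavily on it.
    """
    rng = np.random.default_rng(seed)
    baseline = (predict_fn(X) == y).mean()
    drops = np.zeros(X.shape[1])
    for j in range(X.shape[1]):
        scores = []
        for _ in range(n_repeats):
            Xp = X.copy()
            Xp[:, j] = rng.permutation(Xp[:, j])  # sever feature j from y
            scores.append((predict_fn(Xp) == y).mean())
        drops[j] = baseline - np.mean(scores)
    return drops
```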
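
For adversarial attacks, a sketch of the fast gradient sign method (FGSM), written against plain logistic regression so the input gradient has a closed form; eps, w, and b are placeholders:

```python
import numpy as np

def fgsm_attack(x, y, w, b, eps=0.1):
    """Fast gradient sign method against logistic regression.

    For p = sigmoid(w.x + b) with cross-entropy loss, the input gradient
    is dL/dx = (p - y) * w, so the attack nudges x by eps in the sign of
    that gradient, the direction that most increases the loss.
    """
    p = 1.0 / (1.0 + np.exp(-(x @ w + b)))  # model's predicted probability
    grad_x = (p - y) * w                    # closed-form input gradient
    return x + eps * np.sign(grad_x)        # adversarial example
```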
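
For randomized smoothing, the prediction step only: a majority vote over Gaussian-noised copies of the input. The full method (Cohen et al., 2019) also derives a certified robustness radius from the vote counts, which this sketch omits:

```python
import numpy as np

def smoothed_predict(classify_fn, x, sigma=0.25, n=1000, seed=0):
    """Randomized smoothing, prediction step only.

    classify_fn : callable mapping one input vector to a class label
    Returns the majority-vote label over n Gaussian-noised copies of x.
    """
    rng = np.random.default_rng(seed)
    votes = {}
    for _ in range(n):
        label = classify_fn(x + rng.normal(0.0, sigma, size=x.shape))
        votes[label] = votes.get(label, 0) + 1
    return max(votes, key=votes.get)  # most common label under noise
```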
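
Finally, for monitoring covariate shift, a per-feature two-sample Kolmogorov-Smirnov check of live data against a reference window, with a Bonferroni-corrected significance threshold; alpha and the window choice are application-specific:

```python
import numpy as np
from scipy.stats import ks_2samp

def detect_covariate_shift(X_ref, X_live, alpha=0.01):
    """Monitoring check: which features drifted from the reference data?

    Runs a two-sample Kolmogorov-Smirnov test per feature and reports
    those whose p-value falls below a Bonferroni-corrected threshold,
    a common first-pass signal of covariate shift in deployed models.
    """
    n_features = X_ref.shape[1]
    drifted = []
    for j in range(n_features):
        _, p_value = ks_2samp(X_ref[:, j], X_live[:, j])
        if p_value < alpha / n_features:  # Bonferroni correction
            drifted.append(j)
    return drifted
```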
Instructors
Mr. Andrew Brethorst is the Associate Department Director for the Data Science and AI Department at The Aerospace Corporation. He completed his undergraduate degree in cybernetics at UCLA and his master's degree in computer science, with a concentration in machine learning, at UCI. Much of his work involves applying machine learning techniques to image exploitation, telemetry anomaly detection, and intelligent artificial agents using reinforcement learning, as well as collaborative projects within the research labs.
 
Dr. Erik Linstead is a professor of AI at Chapman University. He completed his undergraduate degree in computer science at Stanford University and went on to complete his PhD in artificial intelligence and machine learning at UC Irvine. He currently operates a research lab focused on using AI technology to enhance learning and to study the effects of new treatments for autism.

 
