Trusted Artificial Intelligence Online
OVERVIEW
Artificial intelligence (AI) systems – enabled by advances in sensor and control technologies, data science, and machine learning – promise to deliver new and exciting applications across a broad range of industries. However, a fundamental trust in their application and execution must be established for them to succeed. People, by and large, do not trust a new entity or system in their environment without some evidence of trustworthiness. To trust an AI system, we need to know which factors affect its behavior, how those factors can be assessed and effectively applied for a given mission, and the risks assumed by trusting it.
This course aims to provide a foundation for building trust in artificial intelligence systems. It defines a framework for evaluating trust from three perspectives – data, AI algorithms, and cybersecurity – and reviews the state of the art in research, methods, and technologies for achieving trust in AI, along with current applications.
LEARNING OBJECTIVES
- Establishing Trust in Data
- Understanding Data Management
- Understanding AI Interpretability and Explainability
- Understanding Adversarial Robustness, including the intersection of AI and Cybersecurity
- Understanding Monitoring & Control
WHO SHOULD ATTEND
This course is intended for program managers, researchers, engineers, academics, and graduate students who want a deeper understanding of AI, both as a field of study and from an aerospace application perspective.
Course Information:
Type of Course: Instructor-Led Short Course
Course Length: 1 day
AIAA CEUs available: Yes
COURSE OUTLINE
- Introduction
  - Motivation for establishing trust in AI systems
  - Defining Trust
  - Establishing a Trusted AI Framework
- Data
  - Overview of Data Management
  - Defining Data Governance
  - Ensuring Fairness in AI Systems
    - Assessing bias
    - Techniques for removing bias
    - Metrics for fairness
  - Data Poisoning
    - Introduction to poisoning attacks
    - Data poisoning defenses
  - Building Trust through Reproducibility
    - Handling domain shift
    - Treating models as data
    - Tools and best practices
- Interpretability & Explainability
  - Need for interpretability
  - Tools and techniques
- Adversarial Robustness
  - Intersection of AI and cybersecurity
  - Adversarial attacks
    - Decision boundary attacks
    - Adversarial patch attacks
  - Defenses against adversarial attacks
    - Adversarial training
    - Exact methods
    - Lower bound estimation
    - Randomized smoothing
  - Open areas of research
- Monitoring & Control
  - Model acceptance testing
  - Covariate, target, and domain shifts
  - Mechanisms for control of AI systems
  - Dealing with confidence and uncertainty
- Conclusion
For information, group discounts, and private course pricing, contact:
Lisa Le, Education Specialist (lisal@aiaa.org)