Responsible Artificial Intelligence
Program Overview
Introduction to the Responsible Artificial Intelligence Program
The Responsible Artificial Intelligence program is designed to equip leaders, product managers, developers, buyers, and users of AI products and services with the knowledge and tools to implement AI that benefits society responsibly. This course begins by introducing the core principles of Responsible AI, setting the stage for a deeper exploration of ethics, explainability, robustness, privacy, fairness, bias, and responsible generative AI.
Course Overview
The course is thoughtfully curated for working professionals aspiring to design, implement, and oversee Responsible AI practices in their respective functions. This non-technical course requires no prior knowledge of Artificial Intelligence or technical expertise.
Course Modules
- Module 1: Introduction to Responsible AI
- This module introduces artificial intelligence and responsible AI, addressing misconceptions, highlighting the importance of stakeholder involvement, and ensuring accountability throughout the development and procurement stages.
- You will learn how to:
- Define the concepts of AI and Responsible AI (RAI)
- Debunk common AI myths
- Outline RAI characteristics
- Identify stakeholder responsibilities in RAI
- Consider RAI at each development stage
- Build accountability in RAI by assigning tasks
- Module 2: Explainability
- This module explores the concept of making AI systems understandable and interpretable, focusing on building trust, ensuring transparency, and evaluating both internal and external systems for clarity and accountability.
- You will learn how to:
- Emphasize explainable and interpretable AI
- Guide development of explainable systems
- Evaluate and critique AI systems on explainability and interpretability
- Develop a testing plan to measure explainability
- Avoid common misconceptions about explainable AI systems
- Module 3: Robustness
- This module covers the principles of ensuring reliability in AI systems, addressing their resilience, evaluating their performance under different conditions, and identifying critical flaws while proposing effective solutions.
- You will learn how to:
- Define robustness in the context of AI
- Guide development of robust AI systems
- Evaluate and critique AI systems on robustness
- Create organization-level policies for robustness
- Module 4: Privacy
- This module explores privacy concepts, their application in AI/ML systems, and design modifications to ensure compliance with privacy regulations, emphasizing privacy by design and privacy-enhancing technologies.
- You will learn how to:
- Define privacy in the context of AI systems
- Evaluate AI/ML system designs through a privacy lens
- Consider Fair Information Practice Principles and regulatory frameworks like GDPR
- Apply privacy by design methodologies
- Implement data minimization principles
- Module 5: Fairness and Bias
- This module addresses the challenges of bias in AI, examining its origins, impacts, and methodologies to ensure fairness in AI systems, fostering equity and inclusivity for all users.
- You will learn how to:
- Define and explain the importance of fairness and bias in AI
- Discuss different fairness levels and identify potential sources of bias in AI
- Guide creation of fair AI systems
- Evaluate and critique AI systems for fairness and bias
- Discuss policies for fairness training
- Identify common misconceptions and red flags related to fairness and bias in AI
- Module 6: Connecting the Concepts
- This module consolidates the previous concepts, offering practical guidance on implementing AI system development principles, with a focus on real-life applications and actionable insights for participants' use cases.
- You will learn how to:
- Apply the concepts of robustness, explainability, privacy, fairness, and bias in combination
- Module 7: Responsible Generative AI
- This module delves into the world of generative AI, focusing on its capabilities, applications, and ethical challenges, helping participants navigate the fine line between human and machine-generated creativity.
- You will learn how to:
- Define Generative AI and LLMs
- Explain uses and capabilities of Generative AI
- Assess risks and ethical concerns associated with the use of LLMs
- Identify AI-generated text with different techniques
- Interpret copyright implications of using LLMs
- Design guidelines for ethical LLM use
Course Faculty
The course is led by a team of distinguished faculty members from Carnegie Mellon University, including:
- Alex John London, K&L Gates Professor of Ethics and Computational Technologies
- Pradeep Ravikumar, Professor, Machine Learning Department
- Norman Sadeh, Professor, Software and Societal Systems Department
- Rayid Ghani, Distinguished Career Professor, Machine Learning Department & Heinz College
- Lei Li, Assistant Professor, Language Technologies Institute
- Claire Le Goues, Associate Professor, CMU
- Travis Breaux, Director of MSE Programs, Assoc. Professor, Software and Societal Systems
- Hasan Yasar, Technical Director of Continuous Deployment of Capability at Software Engineering Institute (SEI), CMU
- Praveen Garimella, Adjunct Research Fellow, S3D, CMU, Dean, MSIT Program, IIIT-Hyderabad
- Asokan Pichai, Chief Learning Officer, TalentSprint
- Indira Priyadarshini, Academics Lead, TalentSprint
- Nicholas Groesch, Lead Instructor, TalentSprint
- David Kosbie, Director and Co-Founder, CMU CS Academy, Teaching Professor, School of Computer Science CMU
Course Outcomes
As AI continues to transform industries, understanding how to design and deploy systems that are ethical, transparent, fair, and inclusive is essential. This course will equip you with the knowledge and tools to implement AI that benefits society responsibly.
Tuition and Scholarship
- Tuition: $2,500
- Scholarship: 10%
- Fee after scholarship: $2,250
Ideal Candidates
The course is ideal for working professionals aspiring to design, implement, and oversee Responsible AI practices in their respective functions. No prior knowledge of Artificial Intelligence or technical expertise is required.
