ATGO AI | Accountability, Trust, Governance and Oversight of Artificial Intelligence |

By ForHumanity Center

ATGO AI is a podcast channel from ForHumanity. This podcast brings multiple series of insights on topics of pressing importance, specifically in the space of Ethics and Accountability of emerging technology. You will hear from game changers in this field who have spearheaded accountability, transparency, governance and oversight in developing and deploying emerging technology (including Artificial Intelligence).
#OpenBox The Data Brokers & Emerging Governance with Heidi Part 2

OPENBOX aims to make open problems easier to understand, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in a variety of areas of Artificial Intelligence and Machine Learning to gather a simplified understanding of these problems. These conversations are published as a podcast series.

I spoke with Heidi Saas.

Heidi is a Data Privacy and Technology Attorney. She regularly advises SMEs and start-ups working in a wide variety of industries on data privacy and ethical AI strategies. She is also a ForHumanity Contributor and algorithmic auditor.

This is part 2. She speaks about how enterprises can manage these challenges through good governance practices.


Dec 20, 2023 · 20:17
#OpenBox The Data Brokers & Emerging Governance with Heidi

OPENBOX aims to make open problems easier to understand, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in a variety of areas of Artificial Intelligence and Machine Learning to gather a simplified understanding of these problems. These conversations are published as a podcast series.

I spoke with Heidi Saas. Heidi is a Data Privacy and Technology Attorney. She regularly advises SMEs and start-ups working in a wide variety of industries on data privacy and ethical AI strategies. She is also a ForHumanity Contributor and algorithmic auditor.

This is part 1. She speaks about how regulations are emerging in the context of data brokers and how enterprises need to adapt to the changing compliance environment when managing data.

Dec 14, 2023 · 21:40
#openbox - Open issues and problems in dealing with dark patterns

OPENBOX aims to make open problems easier to understand, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in a variety of areas of Artificial Intelligence and Machine Learning to gather a simplified understanding of these problems. These conversations are published as a podcast series.

Today, we have with us Marie. Marie Potel-Saville is the founder and CEO of amurabi, a legal innovation by design agency. She was a lawyer for over 10 years at Magic Circle law firms such as Freshfields and Allen & Overy in London, Brussels, and Paris. She is also the founder of Fair-Patterns, a SaaS platform to fight dark patterns. She is spearheading efforts to address the challenging problem of deceptive designs in applications using innovative technology. We discuss some of the nuances of this with her.

In this episode, Marie speaks about enterprise approaches to working on fair patterns and the emerging regulatory interest in addressing the gap.


Dec 06, 2023 · 13:37
#OpenBox - Open issues in dealing with dark patterns and/ or deceptive designs

OPENBOX aims to make open problems easier to understand, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in a variety of areas of Artificial Intelligence and Machine Learning to gather a simplified understanding of these problems. These conversations are published as a podcast series.

Today, we have with us Marie. Marie Potel-Saville is the founder and CEO of amurabi, a legal innovation by design agency. She was a lawyer for over 10 years at Magic Circle law firms such as Freshfields and Allen & Overy in London, Brussels, and Paris. She is also the founder of Fair-Patterns, a SaaS platform to fight dark patterns. She is spearheading efforts to address the challenging problem of deceptive designs in applications using innovative technology. We discuss some of the nuances of this with her.

In this episode, Marie speaks about the key considerations in dealing with deceptive designs and how fair patterns enable a better business proposition.

Nov 23, 2023 · 20:29
#openbox Bias identification and mitigation with Patrick Hall - Part 2


OPENBOX aims to make open problems easier to understand, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in a variety of areas of Artificial Intelligence and Machine Learning to gather a simplified understanding of these problems. These conversations are published as a podcast series.

Today, we have with us Patrick Hall. Patrick is an Assistant Professor at George Washington University. He is conducting research in support of the NIST AI Risk Management Framework and is a contributor to NIST’s work on building a Standard for Identifying and Managing Bias in Artificial Intelligence. He is also a collaborator on the open-source initiative “Awesome Machine Learning Interpretability”, which maintains and curates a list of practical and awesome responsible machine learning resources. He is one of the authors of Machine Learning for High-Risk Applications, released by O’Reilly, and he also manages the AI Incident Database.

This is part 2 of the episode.

He speaks about key approaches to bias mitigation and their limitations. He also discusses the open problems in this area.



Nov 02, 2023 · 22:46
#Openbox - bias discussion with Patrick Hall part 1

OPENBOX aims to make open problems easier to understand, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in a variety of areas of Artificial Intelligence and Machine Learning to gather a simplified understanding of these problems. These conversations are published as a podcast series.

Today, we have with us Patrick Hall. Patrick is an Assistant Professor at George Washington University. He is conducting research in support of the NIST AI Risk Management Framework and is a contributor to NIST’s work on building a Standard for Identifying and Managing Bias in Artificial Intelligence. He is also a collaborator on the open-source initiative “Awesome Machine Learning Interpretability”, which maintains and curates a list of practical and awesome responsible machine learning resources. He is one of the authors of Machine Learning for High-Risk Applications, released by O’Reilly, and he also manages the AI Incident Database.

He speaks about key considerations for bias metrics across varied types of data. He also discusses the open problems in this area.


Oct 18, 2023 · 22:39
#OPENBOX Navigating Causality with Aleksander Molak - Part 2

OPENBOX aims to make open problems easier to understand, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in a variety of areas of Artificial Intelligence and Machine Learning to gather a simplified understanding of these problems. These conversations are published as a podcast series.

This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization dedicated to minimizing the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more visit https://forhumanity.center/

Today, we have with us Aleksander. Aleksander Molak is a Machine Learning Researcher, Educator, Consultant, and Author who gained experience working with Fortune 100, Fortune 500, and Inc. 5000 companies across Europe, the USA, and Israel, designing and building large-scale machine learning systems. On a mission to democratize causality for businesses and machine learning practitioners, Aleksander is a prolific writer, creator, and international speaker. He is the author of the book Causal Inference and Discovery in Python.

This is Part 2. He discusses some critical considerations regarding causality, including honest reflections on how to leverage causality for humanity.



Sep 28, 2023 · 23:18
#OPENBOX - Navigating Causal Discovery with Aleksander Molak

OPENBOX aims to make open problems easier to understand, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in a variety of areas of Artificial Intelligence and Machine Learning to gather a simplified understanding of these problems. These conversations are published as a podcast series.

This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization dedicated to minimizing the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more visit https://forhumanity.center/

Today, we have with us Aleksander. Aleksander Molak is a Machine Learning Researcher, Educator, Consultant, and Author who gained experience working with Fortune 100, Fortune 500, and Inc. 5000 companies across Europe, the USA, and Israel, designing and building large-scale machine learning systems. On a mission to democratize causality for businesses and machine learning practitioners, Aleksander is a prolific writer, creator, and international speaker. He is the author of the book Causal Inference and Discovery in Python.

This is Part 1. He discusses open issues and considerations in causal discovery, directed acyclic graphs, and causal effect estimators.


Sep 28, 2023 · 25:27
#OpenBox - Charting the Sociotechnical Gap in Explainable AI - Part 2

OPENBOX aims to make open problems easier to understand, which helps in finding solutions to them. Today, we have with us Upol Ehsan. Upol is a Researcher and Doctoral Candidate in the School of Interactive Computing at Georgia Tech and an affiliate at the Data & Society Research Institute. Combining AI, HCI, and philosophy, his work in Explainable AI (XAI) and Responsible AI aims to foster a future where anyone, regardless of their background, can use AI-powered technology with dignity. His work has pioneered the area of Human-centered Explainable AI (a sub-field of XAI), has received multiple awards at ACM CHI, FAccT, and HCII, and has been covered in major media outlets. By promoting equity and ethics in AI, he wants to ensure stakeholders who aren’t at the table do not end up on the menu. Outside research, he is a founder and advisor for Aalor Asha, an educational institute he started for underprivileged children subjected to child labor. He is also a social entrepreneur and has co-founded DeshLabs, a social innovation lab focused on fostering grassroots innovations in emerging markets.

We discuss the paper titled “Charting the Sociotechnical Gap in Explainable AI: A Framework to Address the Gap in XAI”, which will be presented at CSCW 2023 and was co-authored with Koustuv Saha, Munmun de Choudhury, and Mark Riedl.

Upol explains specific nuances of why explainability cannot be considered independently of the model development and deployment environment.


This is part 2 of the discussion.


Sep 05, 2023 · 22:44
#OpenBox - Charting the Sociotechnical Gap in Explainable AI - Part 1

OPENBOX aims to make open problems easier to understand, which helps in finding solutions to them. Today, we have with us Upol Ehsan. Upol is a Researcher and Doctoral Candidate in the School of Interactive Computing at Georgia Tech and an affiliate at the Data & Society Research Institute. Combining AI, HCI, and philosophy, his work in Explainable AI (XAI) and Responsible AI aims to foster a future where anyone, regardless of their background, can use AI-powered technology with dignity. His work has pioneered the area of Human-centered Explainable AI (a sub-field of XAI), has received multiple awards at ACM CHI, FAccT, and HCII, and has been covered in major media outlets. By promoting equity and ethics in AI, he wants to ensure stakeholders who aren’t at the table do not end up on the menu. Outside research, he is a founder and advisor for Aalor Asha, an educational institute he started for underprivileged children subjected to child labor. He is also a social entrepreneur and has co-founded DeshLabs, a social innovation lab focused on fostering grassroots innovations in emerging markets.

We discuss the paper titled “Charting the Sociotechnical Gap in Explainable AI: A Framework to Address the Gap in XAI”, which will be presented at CSCW 2023 and was co-authored with Koustuv Saha, Munmun de Choudhury, and Mark Riedl.

Upol explains specific nuances of why explainability cannot be considered independently of the model development and deployment environment.


This is part 1 of the discussion.

Sep 05, 2023 · 23:01
#OPENBOX - Open issues in Data Poisoning defence with Antonio Part 2

OPENBOX aims to make open problems easier to understand, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in a variety of areas of Artificial Intelligence and Machine Learning to gather a simplified understanding of these problems. These conversations are published as a podcast series.

My name is Sundar. I am an Ethics and Risk professional and an AI Ethics researcher. I am the host of this podcast.

Ideas emerge when curiosity meets clarity. Here we go with OPENBOX to bring clarity to those curious minds looking to solve real-world problems.

This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization dedicated to minimizing the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more visit https://forhumanity.center/.

Today, we have with us Antonio. He is a PhD student at University Ca' Foscari of Venice working in the fields of adversarial machine learning and computer vision. He is expected to join CISPA labs Saarbrücken, Germany. He is passionate about machine learning security and closely follows cutting-edge research in this space. He also authored a paper with Kathrin (one of our earlier podcast guests). We are discussing the “Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning” paper that he co-authored.


In this part 2 of the podcast, he speaks about emerging types of attacks wherein the attack approaches are less sophisticated, but impactful. 

Mar 07, 2023 · 22:27
#OPENBOX - Data Poisoning and associated Open issues with Antonio Part 1

OPENBOX aims to make open problems easier to understand, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in a variety of areas of Artificial Intelligence and Machine Learning to gather a simplified understanding of these problems. These conversations are published as a podcast series.

Ideas emerge when curiosity meets clarity. Here we go with OPENBOX to bring clarity to those curious minds looking to solve real-world problems.

This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization dedicated to minimizing the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more visit https://forhumanity.center/.

Today, we have with us Antonio. He is a PhD student at University Ca' Foscari of Venice working in the fields of adversarial machine learning and computer vision. He is expected to join CISPA labs Saarbrücken, Germany. He is passionate about machine learning security and closely follows cutting-edge research in this space. He also authored a paper with Kathrin (one of our earlier podcast guests). We are discussing the “Wild Patterns Reloaded: A Survey of Machine Learning Security against Training Data Poisoning” paper that he co-authored.

In this part, he speaks about the varied attack vectors and specific open issues in this space.

Mar 07, 2023 · 16:05
#OPENBOX - Eric Smith - Human Evaluation of Open-domain Conversations - Part 2

OPENBOX aims to make open problems easier to understand, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in various areas of Artificial Intelligence and Machine Learning to gather a simplified understanding of these open problems. These conversations are published as a podcast series.

Ideas emerge when curiosity meets clarity. Here we go with OPENBOX to bring clarity to those curious minds looking to solve real-world problems.

This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization that minimizes the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/.

Today, we have Eric with us. Eric is a Research Engineer at Facebook AI Research (FAIR). He is interested in questions around (a) conversational AI, how to make it better and how to evaluate it, and (b) bias in language models. He is interested in understanding languages and their underlying constructs. We will cover a paper titled “Human Evaluation of Conversations is an Open Problem: comparing the sensitivity of various methods for evaluating dialogue agents,” published in May 2022, which he co-authored.

This is part 2 of the podcast. Eric discusses the known limitations of per-turn, per-dialogue, and pairwise evaluation methods.

Oct 21, 2022 · 16:50
#OPENBOX - Eric Smith - Human Evaluation of Open-domain Conversations - Part 1

OPENBOX aims to make open problems easier to understand, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in various areas of Artificial Intelligence and Machine Learning to gather a simplified understanding of these open problems. These conversations are published as a podcast series.

Ideas emerge when curiosity meets clarity. Here we go with OPENBOX to bring clarity to those curious minds looking to solve real-world problems.

This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization that minimizes the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/.

Today, we have Eric with us. Eric is a Research Engineer at Facebook AI Research (FAIR). He is interested in questions around (a) conversational AI, how to make it better and how to evaluate it, and (b) bias in language models. He is interested in understanding languages and their underlying constructs. We will cover a paper titled “Human Evaluation of Conversations is an Open Problem: comparing the sensitivity of various methods for evaluating dialogue agents,” published in May 2022, which he co-authored.

This is part 1 of the podcast. In this podcast, Eric discusses human evaluation in open-domain conversational contexts, Likert scales, and subjective outcomes. 

Oct 21, 2022 · 18:58
#OPENBOX - Carol Anderson - Zero-Shot Learning Part 2

OPENBOX aims to make open problems easier to understand, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in various areas of Artificial Intelligence and Machine Learning to gather a simplified understanding of these open problems. These conversations are published as a podcast series.

Ideas emerge when curiosity meets clarity. Here we go with OPENBOX to bring clarity to those curious minds looking to solve real-world problems.

This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization that minimizes the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/.

Today, we have Carol with us. Carol Anderson is a machine learning practitioner who has worked on NLP for the past five years, most recently with Nvidia. Before this, she worked in the field of molecular biology. She is currently focusing on AI ethics, given the potential harm that AI can cause to people, and she wants to use her skills to prevent those harms. I am glad to be having this conversation with her. We will cover a paper titled “Issues with Entailment-based Zero-shot Text Classification,” published in 2021, and also discuss specific practical issues associated with zero-shot learning.

This is part 2 of the conversation. She covers the difficulty of classifying among the available options, labor-intensive label design, and the underlying bias encoded in models. 

Oct 21, 2022 · 09:24
#OPENBOX - Carol Anderson - Zero Shot Learning Part 1

OPENBOX aims to make open problems easier to understand, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in various areas of Artificial Intelligence and Machine Learning to gather a simplified understanding of these open problems. These conversations are published as a podcast series.

Ideas emerge when curiosity meets clarity. Here we go with OPENBOX to bring clarity to those curious minds looking to solve real-world problems.

This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization that minimizes the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/.

Today, we have Carol with us. Carol Anderson is a machine learning practitioner who has worked on NLP for the past five years, most recently with Nvidia. Before this, she worked in the field of molecular biology. She is currently focusing on AI ethics, given the potential harm that AI can cause to people, and she wants to use her skills to prevent those harms. I am glad to be having this conversation with her. We will cover a paper titled “Issues with Entailment-based Zero-shot Text Classification,” published in 2021, and also discuss specific practical issues associated with zero-shot learning.

This is part 1 of the conversation. 

Oct 21, 2022 · 19:56
#OPENBOX - Maura Pintor - Improving Optimization of Adversarial Examples - Part 1

OPENBOX aims to make open problems easier to understand, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in various areas of Artificial Intelligence and Machine Learning to gather a simplified understanding of these open problems. These conversations are published as a podcast series.

Ideas emerge when curiosity meets clarity. Here we go with OPENBOX to bring clarity to those curious minds looking to solve real-world problems.

This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization that minimizes the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more visit https://forhumanity.center/.

Today, we have with us Maura. Maura Pintor is a Postdoctoral Researcher at the PRA Lab at the University of Cagliari, Italy. She received her MSc degree in Telecommunications Engineering with honors in 2018 and her Ph.D. degree in Electronic and Computer Engineering from the University of Cagliari in 2022. Her Ph.D. thesis, "Towards Debugging and Improving Adversarial Robustness Evaluations", provides a framework for optimizing and debugging adversarial attacks. Maura co-authored a paper titled “Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples,” which is the focus of this podcast.

This is part 1 of the podcast. In this podcast, Maura covers why evaluating defenses is complex, the nature of mitigation failures, and why robustness is overestimated in evasion attacks, in the context of open issues.

Oct 21, 2022 · 23:38
#OPENBOX - Maura Pintor - Open issues in improving optimization of adversarial examples - Part 2

OPENBOX aims to make open problems easier to understand, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in various areas of Artificial Intelligence and Machine Learning to gather a simplified understanding of these open problems. These conversations are published as a podcast series.

Ideas emerge when curiosity meets clarity. Here we go with OPENBOX to bring clarity to those curious minds looking to solve real-world problems.

This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization that minimizes the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more visit https://forhumanity.center/.

Today, we have with us Maura. Maura Pintor is a Postdoctoral Researcher at the PRA Lab at the University of Cagliari, Italy. She received her MSc degree in Telecommunications Engineering with honors in 2018 and her Ph.D. degree in Electronic and Computer Engineering from the University of Cagliari in 2022. Her Ph.D. thesis, "Towards Debugging and Improving Adversarial Robustness Evaluations", provides a framework for optimizing and debugging adversarial attacks. Maura co-authored a paper titled “Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples,” which is the focus of this podcast.

This is part 2 of the podcast. In this podcast, Maura covers transferability analysis, testing ML models for robustness, and challenges associated with repurposing models, in the context of open issues.

Oct 21, 2022 · 12:38
#OPENBOX - Machine Learning Security Against Data Poisoning - Kathrin Grosse Part 2

OPENBOX aims to make open problems easier to understand, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in various areas of Artificial Intelligence and Machine Learning to gather a simplified understanding of these open problems. These conversations are published as a podcast series.

In this podcast, we have Kathrin Grosse. Kathrin Grosse is a postdoctoral researcher working with Battista Biggio at the University of Cagliari on adversarial learning.

This podcast covers a paper titled “Machine Learning Security against Data Poisoning: Are We There Yet?”, published in April 2022, which she co-authored.

This is part 2 of the podcast. In this episode, she shares thoughts on gaining a better understanding of how defenses work and of adaptive attacks, and on why our knowledge about the limits of existing defenses is rather narrow.

Sep 27, 2022 · 09:50
#OPENBOX - Machine Learning Security Against Data Poisoning - Kathrin Grosse - Part 1

OPENBOX aims to make open problems easier to understand, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in various areas of Artificial Intelligence and Machine Learning to gather a simplified understanding of these open problems. These conversations are published as a podcast series.

In this podcast, we have Kathrin Grosse. Kathrin Grosse is a postdoctoral researcher working with Battista Biggio at the University of Cagliari on adversarial learning.

In this podcast, we cover a paper titled “Machine Learning Security against Data Poisoning: Are We There Yet?”, published in April 2022, which she co-authored.

This is part 1 of the podcast. In this episode, she shares thoughts on the impracticality of some threat models considered for poisoning attacks in real-world applications and on the scalability of poisoning attacks against large-scale models.


Sep 27, 2022 · 13:29
#OPENBOX AUTORL - OPEN PROBLEMS & ETHICAL PERSPECTIVES DISCUSSION WITH RAGHU RAJAN Part3

OPENBOX aims to make open problems easier to understand, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in various areas of Artificial Intelligence and Machine Learning to gather a simplified understanding of these open problems. These conversations are published as a podcast series.

Ideas emerge when curiosity meets clarity. Here we go with OPENBOX to bring clarity to those curious minds looking to solve real-world problems. This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization dedicated to minimizing the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/. 

Today, we have Raghu with us. Raghu is a Ph.D. student at the Machine Learning Group at the University of Freiburg, under the supervision of Frank Hutter. He is working on automating hyperparameter optimization for RL (AutoRL). His master's thesis was on reinforcement learning. Artificial General Intelligence is an area of interest to him in the long term. He is also exploring Dynamic Algorithm Configuration (controlling hyperparameters dynamically). We will cover a paper titled “Automated Reinforcement Learning (AutoRL): A Survey and Open Problems”, published in June 2022, which he co-authored.

This is part 3 of the discussion. In this part, he covers open issues in hyperparameter optimization relating to environment design, hybrid approaches, and benchmarks.

Aug 03, 2022 · 14:25
#OPENBOX AUTORL - OPEN PROBLEMS & ETHICAL PERSPECTIVES DISCUSSION WITH RAGHU RAJAN Part2

OPENBOX aims to make open problems easier to understand, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in various areas of Artificial Intelligence and Machine Learning to gather a simplified understanding of these open problems. These conversations are published as a podcast series.

Ideas emerge when curiosity meets clarity. Here we go with OPENBOX to bring clarity to those curious minds looking to solve real-world problems. 

This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization dedicated to minimizing the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/. 

Today, we have Raghu with us. Raghu is a Ph.D. student at the Machine Learning Group at the University of Freiburg, under the supervision of Frank Hutter. He is working on automating hyperparameter optimization for RL (AutoRL). His master's thesis was on reinforcement learning. Artificial General Intelligence is an area of interest to him in the long term. He is also exploring Dynamic Algorithm Configuration (controlling hyperparameters dynamically). We will cover a paper titled “Automated Reinforcement Learning (AutoRL): A Survey and Open Problems”, published in June 2022, which he co-authored.

This is part 2 of the discussion. In this part, he covers open issues in evolutionary approaches, meta-gradients for online tuning, and black-box online tuning.

Aug 03, 2022 · 17:35
#OPENBOX - AUTORL - OPEN ISSUES AND ETHICS PERSPECTIVES DISCUSSION WITH RAGHU RAJAN Part1

OPENBOX aims to make open problems easier to understand, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in various areas of Artificial Intelligence and Machine Learning to gather a simplified understanding of these open problems. These conversations are published as a podcast series.

Ideas emerge when curiosity meets clarity. Here we go with OPENBOX to bring clarity to those curious minds looking to solve real-world problems.

This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization dedicated to minimizing the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/.

Today, we have Raghu with us. Raghu is a Ph.D. student at the Machine Learning Group at the University of Freiburg, under the supervision of Frank Hutter. He is working on automating hyperparameter optimization for RL (AutoRL).

His master's thesis was on reinforcement learning. Artificial General Intelligence is an area of interest to him in the long term. He is also exploring Dynamic Algorithm Configuration (controlling hyperparameters dynamically).

We will cover a paper titled “Automated Reinforcement Learning (AutoRL): A Survey and Open Problems”, published in June 2022, which he co-authored.

This is part 1 of the discussion. In this part, he covers open issues in hyperparameter optimization using random and grid search approaches and Bayesian optimization.

Aug 03, 2022 · 16:12
#OPENBOX - OPEN ISSUES IN APPLYING DEEP REINFORCEMENT LEARNING IN COMMUNICATION NETWORKS - 2/2

OPENBOX aims to make open problems easier to understand, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in various areas of Artificial Intelligence and Machine Learning to gather a simplified understanding of these open problems.

Today, we have with us Paul. Paul is a PhD student at the Barcelona Neural Networking Center, Technical University of Catalunya, working on the use of ML to solve problems in communication networks. We are going to cover a paper titled “Towards Real-Time Routing Optimization with Deep Reinforcement Learning: Open Challenges”, published recently, which he co-authored. In this podcast, he covers (a) the training time and cost associated with Deep Reinforcement Learning and (b) the lack of performance bounds. This is part 2 of the podcast.

This project is in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization with a mission to minimize the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/.

Jul 18, 2022 · 13:43
#OPENBOX - OPEN ISSUES IN APPLYING DEEP REINFORCEMENT LEARNING IN COMMUNICATION NETWORKS - 1/2

OPENBOX aims to make open problems easier to understand, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in various areas of Artificial Intelligence and Machine Learning to gather a simplified understanding of these open problems.

Today, we have with us Paul. Paul is a PhD student at the Barcelona Neural Networking Center, Technical University of Catalunya, working on the use of ML to solve problems in communication networks. We are going to cover a paper titled “Towards Real-Time Routing Optimization with Deep Reinforcement Learning: Open Challenges”, published recently, which he co-authored. In this podcast, he covers (a) generalization in Deep Reinforcement Learning and (b) defining an appropriate action space. This is part 1 of the podcast.

This project is in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization with a mission to minimize the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/.




Jul 17, 2022 · 17:37
#OPENBOX - OPEN ISSUES IN OFFLINE REINFORCEMENT LEARNING - 2/2

OPENBOX aims to make open problems easier to understand, which helps in finding solutions to them. To that end, I interview researchers and practitioners who have published work on open problems in various areas of Artificial Intelligence and Machine Learning to gather a simplified understanding of these open problems.

In this episode, Rafael Figueiredo Prudencio discusses open issues in Offline Reinforcement Learning. He covers aspects relating to (a) function approximation and generalization and (b) leveraging unlabelled data. The conversation with Rafael is a two-part podcast series, and this episode is part 2. Listen to the podcast to understand specific ethical issues arising from the open issues.

This project is in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization with a mission to minimize the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/.

Jun 30, 2022 · 15:19
#OPENBOX - OPEN ISSUES IN OFFLINE REINFORCEMENT LEARNING - 1/2
Jun 30, 2022 · 15:52
E11: #EUAIRegs Regulations have Gaps and having an Industry Standard for Ethics can Mitigate Risks

ATGO AI is a podcast channel from ForHumanity. This podcast will bring multiple series of insights on topics of pressing importance specifically in the space of Ethics and Accountability of emerging technology. You will hear from game changers in this field who have spearheaded accountability, transparency, governance and oversight in developing and deploying emerging technology (including Artificial Intelligence).

Joshua Bucheli is an international polyglot. He is based in Switzerland and has a background in event and project management as well as applied ethical research and critical analysis. He is an independent researcher, writer, and editor with an MA in Political, Legal, and Economic Philosophy, and has experience ranging from humanitarian fieldwork for the UNHCR in Malaysia to researching and editing for academic journal articles and Swiss think tanks on the subject of ethical AI and robotics. He is a former Head of Community Management at Let’s Phi and a freelance writer for cyberunity on the subject of careers in cybersecurity.

Joshua shares that he is working on a code of ethics in order to build an industry standard for ethics and AI. The draft EU AI regulations will help set a course: will there be a more human-centered focus with regard to technological innovation? Taking a careful approach towards the ethical grey area is impactful. A pause is needed to evaluate and mitigate potential harm.

Visit us at https://forhumanity.center/ to learn more

Jul 13, 2021 · 12:24
E10: #EUAIRegs Rules are needed to develop an #InfrastructureOfTrust for #AI

ATGO AI is a podcast channel from ForHumanity. This podcast will bring multiple series of insights on topics of pressing importance specifically in the space of Ethics and Accountability of emerging technology. You will hear from game changers in this field who have spearheaded accountability, transparency, governance and oversight in developing and deploying emerging technology (including Artificial Intelligence).

Enrico Panai is a lecturer and an information and data ethicist, and has worked for years in the AI space. He is also the Chief Information Officer of ForHumanity. He is a founder of Be Ethical and helps lead AI ethics in France.

Enrico shares an analogy about games and rules, arguing that there should be rules for #AI in order to develop an infrastructure of trust. AI is challenging how we organize and categorize data. The EU is achieving two things with these regulations: it helps humanity step towards a better world, and it also secures the geopolitical value and impact of the EU for AI. Enrico proposes that Chief Ethics Data Officer become a position at organizations. Third-party auditors will lead to transparency.

Visit us at https://forhumanity.center/ to learn more

Jul 13, 2021 · 13:49
E9: #EUAIRegs Explore Compliance from a Human-Centered Focus

ATGO AI is a podcast channel from ForHumanity. This podcast will bring multiple series of insights on topics of pressing importance specifically in the space of Ethics and Accountability of emerging technology. You will hear from game changers in this field who have spearheaded accountability, transparency, governance and oversight in developing and deploying emerging technology (including Artificial Intelligence).

Tristi Tanaka has worked for over 20 years in and with technologists to deliver organisational and functional change. She is always learning and using design thinking techniques to create, adapt, and improve the way services are designed, delivered, and evolved to meet the needs of users and teams in the workplace. She believes technology doesn’t deliver success, people do. She is a current ForHumanity fellow.

Tristi states that the EU stands out in distinguishing between high-risk and low-risk cases for AI and autonomous technology. She discusses issues of scale with regard to bias. She explores how one of the largest outcomes will relate to skills, compliance, and auditing of AI and autonomous technology.

Visit us at https://forhumanity.center/ to learn more

Jul 09, 2021 · 13:30
E8: #EUAIReg important advances to navigate high risks

ATGO AI is a podcast channel from ForHumanity. This podcast will bring multiple series of insights on topics of pressing importance specifically in the space of Ethics and Accountability of emerging technology. You will hear from game changers in this field who have spearheaded accountability, transparency, governance and oversight in developing and deploying emerging technology (including Artificial Intelligence).

Ryan serves as ForHumanity’s Executive Director and Chairman of the Board of Directors; in these roles he is responsible for the day-to-day function of ForHumanity and the overall process of Independent Audit. He started this non-profit in 2016 with a mission of Managing AI Risks for Humanity. Ryan has conducted business in over 55 countries and is a frequent speaker at industry conferences around the world on the topic of the audit of AI systems. The ForHumanity mission is one of the largest and most notable crowdsourced efforts on the audit of AI systems.

In this episode, Ryan shares ideas about how the regulations will be managed from a global perspective. He also discusses what the conformity assessments will look like and the questions around definitions of transparency and governance. AI audits involve certified practitioners, third-party sets of rules, and regulations that govern a target organization, the auditors, and the public.

Visit us at https://forhumanity.center/ to learn more

Jun 28, 2021 · 11:28
E7: #EUAIReg What is Conformity Assessments from a ‘Risk’ lens?

ATGO AI is a podcast channel from ForHumanity. This podcast will bring multiple series of insights on topics of pressing importance specifically in the space of Ethics and Accountability of emerging technology. You will hear from game changers in this field who have spearheaded accountability, transparency, governance and oversight in developing and deploying emerging technology (including Artificial Intelligence).

Sarah Clarke has two decades of experience in IT, information security, GDPR, and data protection. She is also a guest lecturer in Vendor Security Governance at the University of Manchester and an increasingly frequent speaker. She is also a ForHumanity fellow.

In this episode, Sarah talks about the five stages of grief in relation to auditing. She then discusses operationalizing the regulations by working outwards through layers of uncertainty to get closer and closer to a place of certainty. She believes in helping people create space to have those rational conversations and improve the auditing of AI.

Jun 18, 2021 · 13:11
E6: #EUAIRegs Conformity Assessments: What does “Good” look like?

About the podcast

ATGO AI is a podcast channel from ForHumanity. This podcast will bring multiple series of insights on topics of pressing importance specifically in the space of Ethics and Accountability of emerging technology. You will hear from game changers in this field who have spearheaded accountability, transparency, governance and oversight in developing and deploying emerging technology (including Artificial Intelligence).

Rohan Light is a fellow with RSA and ForHumanity. He works on data governance at DataWonks and focuses on strategy and risk regarding the decision management value chain.

In this episode, Rohan shares a rights-based approach to the draft EU AI regulation. Regulations are much needed because AI and autonomous technology can be as dangerous as cars. He is glad to see that care is being taken with regulation and looks forward to more tools being added for auditing. Overall, disruption can be associated with quality in the audit of AI.

Visit us at https://forhumanity.center/ to learn more

Jun 16, 2021 · 13:43
E5: The Swiss View: #EUAIRegs make ethics affordable

ATGO AI is a short-form podcast from ForHumanity. This series is about the recent draft EU regulations for AI. The ForHumanity fellows, leading international experts on AI, will be interviewed by international hosts, and the fellows will share their thoughts about the regulations.

The draft #EUAIRegs mandate the classification of high-risk AI and also require specific approaches to ensure that such AI systems do not harm people. This regulation proposes a penalty of 6% of global revenues or EUR 30 million for violations.

Dorothea Baur is an independent consultant, speaker (incl. TEDx), and lecturer on ethics, responsibility, and sustainability. She has a specific focus on tech and finance. At the intersection of technology, society, and the environment, companies are often confronted with ethical questions, and she helps in aligning value with values. She is joining us from Switzerland.

In this episode, Dorothea talks about her perspectives on the EU AI regulations. Dorothea shares that the EU regulations make ethics affordable: they refute claims by companies who say they are forced to make these choices. We are democratizing AI. The EU regulation is brave, and Dorothea hopes it enhances trust and that certain negative manipulations are no longer possible when working with AI systems.

Visit us at https://forhumanity.center/ to learn more

Jun 10, 2021 · 11:24
E4: #EUAIRegs Focus on standardization to mitigate bias

ATGO AI is a short-form podcast from ForHumanity. This series is about the recent draft EU regulations for AI. The ForHumanity fellows, leading international experts on AI, will be interviewed by international hosts, and the fellows will share their thoughts about the regulations.

The draft #EUAIRegs mandate the classification of high-risk AI and also require specific approaches to ensure that such AI systems do not harm people. This regulation proposes a penalty of 6% of global revenues or EUR 30 million for violations.

Dr. Shea Brown is a researcher, lecturer, speaker, and consultant in AI Ethics, Machine Learning, and Astrophysics. He earned his Ph.D. in Astrophysics from the University of Minnesota. He is the founder and CEO of BABL AI. He is a current ForHumanity fellow focusing on algorithmic auditing and AI governance.

In this episode, Shea discusses his perspectives on EU AI regulations. He shares that a focus on standardization in order to mitigate bias is crucial. Regulations will also help other countries, like the USA, to increase their own AI regulations. These regulations will push the rest of the world to continue developing and evolving their AI regulations.

Visit us at https://forhumanity.center/ to learn more

Jun 10, 2021 · 10:38
E1: Our Goal: Managing AI Risks for Humanity

ATGO AI is a short-form podcast from ForHumanity.

In this episode, Ryan Carrier, founder of ForHumanity, talks about the mission.

Ryan serves as ForHumanity’s Executive Director and Chairman of the Board of Directors; in these roles he is responsible for the day-to-day function of ForHumanity and the overall process of Independent Audit. He started this non-profit in 2016 with a mission of Managing AI Risks for Humanity. Ryan has conducted business in over 55 countries and is a frequent speaker at industry conferences around the world on the topic of the audit of AI systems. The ForHumanity mission is one of the largest and most notable crowdsourced efforts on the audit of AI systems.

Listen to this insightful podcast series.

Visit us at https://forhumanity.center/ to know more

May 25, 2021 · 03:47
E2: #EUAIRegs: AI Regulations can help companies to stay accountable

ATGO AI is a short-form podcast from ForHumanity. This series is about the recent draft EU regulations for AI. The ForHumanity fellows, leading international experts on AI, will be interviewed by international hosts, and the fellows will share their thoughts about the regulations.

The draft #EUAIRegs mandate the classification of high-risk AI and also require specific approaches to ensure that such AI systems do not harm people. This regulation proposes a penalty of 6% of global revenues or EUR 30 million for violations.

Merve Hickok, a ForHumanity fellow, is a founder of Lighthouse Career Consulting, an advisory consultancy working in the space of AI ethics and Responsible AI. She is an independent consultant, lecturer, and speaker on AI ethics and bias and its implications.

Merve states that we need to pay attention to the whole supply chain for AI and really ask the ethical questions in order to think about bias mitigation. These regulations will be a tool to help companies stay accountable. Other countries will see the regulations as a sign that it is okay: we are starting now, and we can feel more comfortable about actually regulating AI.

Visit us at https://forhumanity.center/ to know more

May 25, 2021 · 11:34
E3: #EUAIRegs: Starting gun fired. A kickoff for Artificial Intelligence standards

ATGO AI is a short-form podcast from ForHumanity. This series is about the recent draft EU regulations for AI. The ForHumanity fellows, leading international experts on AI, will be interviewed by international hosts, and the fellows will share their thoughts about the regulations.

The draft #EUAIRegs mandate the classification of high-risk AI and also require specific approaches to ensure that such AI systems do not harm people. This regulation proposes a penalty of 6% of global revenues or EUR 30 million for violations.

Adam Lyon Smith is a specialist in software quality, continuous integration, and AI. He has held senior technology roles at several multinationals, delivering large, complex projects. He chairs a specialist group in software testing and was also an editor for standards, including those on bias in AI systems and the quality model. He is a ForHumanity Fellow and a Board member of ForHumanity.

In this episode, Adam talks about his perspectives on the EU AI regulations. Adam shares that “standards need to be applied” instead of just looking at regulation. He welcomes the regulation and believes that it will support a robust and data-driven approach. Read the regulation: this is the start, not the end. “The starting gun has been fired for the standards committee.”

Visit us at https://forhumanity.center/ to know more

May 25, 2021 · 10:02