ATGO AI
| Accountability, Trust, Governance and Oversight of Artificial Intelligence |


By ForHumanity Center

ATGO AI is a podcast channel from ForHumanity. It brings multiple series of insights on topics of pressing importance, specifically in the space of ethics and accountability of emerging technology. You will hear from game changers in this field who have spearheaded accountability, transparency, governance and oversight in developing and deploying emerging technology (including Artificial Intelligence).
Where to listen
Apple Podcasts

Google Podcasts

Pocket Casts

RadioPublic

Spotify

#OPENBOX - Eric Smith - Human Evaluation of Open-domain Conversations - Part 2
OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions for them. For that purpose, I interview researchers and practitioners who have published work on open problems in various areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as a podcast series. Ideas emerge when curiosity meets clarity. Here we go with OPENBOX, bringing clarity to those curious minds looking to solve real-world problems. This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization that works to minimize the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/. Today, we have Eric with us. Eric is a Research Engineer at Facebook AI Research (FAIR). He is interested in questions around (a) conversational AI, how to make it better and how to evaluate it, and (b) bias in language models. He is interested in understanding languages and their underlying constructs. We will cover a paper titled “Human Evaluation of Conversations is an Open Problem: comparing the sensitivity of various methods for evaluating dialogue agents,” published in May 2022, which he co-authored. This is part 2 of the podcast. Eric discusses known limitations of per-turn, per-dialogue, and pairwise evaluation methods.
16:50
October 21, 2022
#OPENBOX - Eric Smith - Human Evaluation of Open-domain Conversations - Part 1
OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions for them. For that purpose, I interview researchers and practitioners who have published work on open problems in various areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as a podcast series. Ideas emerge when curiosity meets clarity. Here we go with OPENBOX, bringing clarity to those curious minds looking to solve real-world problems. This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization that works to minimize the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/. Today, we have Eric with us. Eric is a Research Engineer at Facebook AI Research (FAIR). He is interested in questions around (a) conversational AI, how to make it better and how to evaluate it, and (b) bias in language models. He is interested in understanding languages and their underlying constructs. We will cover a paper titled “Human Evaluation of Conversations is an Open Problem: comparing the sensitivity of various methods for evaluating dialogue agents,” published in May 2022, which he co-authored. This is part 1 of the podcast. In this part, Eric discusses human evaluation in open-domain conversational contexts, Likert scales, and subjective outcomes.
18:58
October 21, 2022
#OPENBOX - Carol Anderson - Zero-Shot Learning Part 2
OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions for them. For that purpose, I interview researchers and practitioners who have published work on open problems in various areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as a podcast series. Ideas emerge when curiosity meets clarity. Here we go with OPENBOX, bringing clarity to those curious minds looking to solve real-world problems. This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization that works to minimize the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/. Today, we have Carol with us. Carol Anderson is a machine learning practitioner who has worked on NLP for the past five years, most recently with Nvidia. Before that, she worked in the field of molecular biology. She is currently focusing on AI ethics, given the potential harms that AI can cause to people, and wants to use her skills to prevent those harms. I am glad to be having this conversation with her. We will cover a paper titled “Issues with Entailment-based Zero-shot Text Classification,” published in 2021, and also discuss specific practical issues associated with zero-shot learning. This is part 2 of the conversation. She covers the difficulty of classifying among the available options, labor-intensive label design, and the underlying bias encoded in models.
09:24
October 21, 2022
#OPENBOX - Carol Anderson - Zero Shot Learning Part 1
OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions for them. For that purpose, I interview researchers and practitioners who have published work on open problems in various areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as a podcast series. Ideas emerge when curiosity meets clarity. Here we go with OPENBOX, bringing clarity to those curious minds looking to solve real-world problems. This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization that works to minimize the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/. Today, we have Carol with us. Carol Anderson is a machine learning practitioner who has worked on NLP for the past five years, most recently with Nvidia. Before that, she worked in the field of molecular biology. She is currently focusing on AI ethics, given the potential harms that AI can cause to people, and wants to use her skills to prevent those harms. I am glad to be having this conversation with her. We will cover a paper titled “Issues with Entailment-based Zero-shot Text Classification,” published in 2021, and also discuss specific practical issues associated with zero-shot learning. This is part 1 of the conversation.
19:56
October 21, 2022
#OPENBOX - Maura Pintor - Improving Optimization of Adversarial Examples - Part 1
OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions for them. For that purpose, I interview researchers and practitioners who have published work on open problems in various areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as a podcast series. Ideas emerge when curiosity meets clarity. Here we go with OPENBOX, bringing clarity to those curious minds looking to solve real-world problems. This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization that works to minimize the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/. Today, we have Maura with us. Maura Pintor is a Postdoctoral Researcher at the PRA Lab at the University of Cagliari, Italy. She received an MSc degree in Telecommunications Engineering with honors in 2018 and a Ph.D. in Electronic and Computer Engineering from the University of Cagliari in 2022. Her Ph.D. thesis, "Towards Debugging and Improving Adversarial Robustness Evaluations", provides a framework for optimizing and debugging adversarial attacks. Maura co-authored a paper titled “Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples,” which is the focus of this podcast. This is part 1 of the podcast. In this part, Maura covers why evaluating defenses is complex, the nature of mitigation failures, and why robustness is overestimated in evasion attacks, in the context of open issues.
23:38
October 21, 2022
#OPENBOX - Maura Pintor - Open issues in improving optimization of adversarial examples - Part 2
OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions for them. For that purpose, I interview researchers and practitioners who have published work on open problems in various areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as a podcast series. Ideas emerge when curiosity meets clarity. Here we go with OPENBOX, bringing clarity to those curious minds looking to solve real-world problems. This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization that works to minimize the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/. Today, we have Maura with us. Maura Pintor is a Postdoctoral Researcher at the PRA Lab at the University of Cagliari, Italy. She received an MSc degree in Telecommunications Engineering with honors in 2018 and a Ph.D. in Electronic and Computer Engineering from the University of Cagliari in 2022. Her Ph.D. thesis, "Towards Debugging and Improving Adversarial Robustness Evaluations", provides a framework for optimizing and debugging adversarial attacks. Maura co-authored a paper titled “Indicators of Attack Failure: Debugging and Improving Optimization of Adversarial Examples,” which is the focus of this podcast. This is part 2 of the podcast. In this part, Maura covers transferability analysis, testing ML models for robustness, and the challenges of repurposing models, in the context of open issues.
12:38
October 21, 2022
#OPENBOX - Machine Learning Security Against Data Poisoning - Kathrin Grosse Part 2
OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions for them. For that purpose, I interview researchers and practitioners who have published work on open problems in various areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as a podcast series. In this podcast, we have Kathrin Grosse. Kathrin Grosse is a postdoctoral researcher working with Battista Biggio at the University of Cagliari on adversarial learning. This podcast covers a paper titled “Machine Learning Security against Data Poisoning: Are We There Yet?”, published in April 2022, which she co-authored. This is part 2 of the podcast. In this part, she shares thoughts on gaining a better understanding of how defenses work and on adaptive attacks, and on why our knowledge of the limits of existing defenses remains rather narrow.
09:50
September 27, 2022
#OPENBOX - Machine Learning Security Against Data Poisoning - Kathrin Grosse - Part 1
OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions for them. For that purpose, I interview researchers and practitioners who have published work on open problems in various areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as a podcast series. In this podcast, we have Kathrin Grosse. Kathrin Grosse is a postdoctoral researcher working with Battista Biggio at the University of Cagliari on adversarial learning. This podcast covers a paper titled “Machine Learning Security against Data Poisoning: Are We There Yet?”, published in April 2022, which she co-authored. This is part 1 of the podcast. In this part, she shares thoughts on the impracticality of some threat models considered for poisoning attacks in real-world applications, and on the scalability of poisoning attacks against large-scale models.
13:29
September 27, 2022
#OPENBOX AUTORL - OPEN PROBLEMS & ETHICAL PERSPECTIVES DISCUSSION WITH RAGHU RAJAN Part3
OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions for them. For that purpose, I interview researchers and practitioners who have published work on open problems in various areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as a podcast series. Ideas emerge when curiosity meets clarity. Here we go with OPENBOX, bringing clarity to those curious minds looking to solve real-world problems. This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization dedicated to minimizing the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/. Today, we have Raghu with us. Raghu is a Ph.D. student in the Machine Learning Group at the University of Freiburg, under the supervision of Frank Hutter. He works on automating hyperparameter optimization for RL (AutoRL). His master's thesis was on reinforcement learning, and Artificial General Intelligence is a long-term interest of his. He is also exploring dynamic algorithm configuration (controlling hyperparameters dynamically). We will cover a paper titled “Automated Reinforcement Learning (AutoRL): A Survey and Open Problems,” published in June 2022, which he co-authored. This is part 3 of the discussion. In this part, he covers open issues in hyperparameter optimization involving environment design, hybrid approaches, and benchmarks.
14:25
August 03, 2022
#OPENBOX AUTORL - OPEN PROBLEMS & ETHICAL PERSPECTIVES DISCUSSION WITH RAGHU RAJAN Part2
OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions for them. For that purpose, I interview researchers and practitioners who have published work on open problems in various areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as a podcast series. Ideas emerge when curiosity meets clarity. Here we go with OPENBOX, bringing clarity to those curious minds looking to solve real-world problems. This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization dedicated to minimizing the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/. Today, we have Raghu with us. Raghu is a Ph.D. student in the Machine Learning Group at the University of Freiburg, under the supervision of Frank Hutter. He works on automating hyperparameter optimization for RL (AutoRL). His master's thesis was on reinforcement learning, and Artificial General Intelligence is a long-term interest of his. He is also exploring dynamic algorithm configuration (controlling hyperparameters dynamically). We will cover a paper titled “Automated Reinforcement Learning (AutoRL): A Survey and Open Problems,” published in June 2022, which he co-authored. This is part 2 of the discussion. In this part, he covers open issues in evolutionary approaches, meta-gradients for online tuning, and black-box online tuning.
17:35
August 03, 2022
#OPENBOX - AUTORL - OPEN ISSUES AND ETHICS PERSPECTIVES DISCUSSION WITH RAGHU RAJAN Part1
OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions for them. For that purpose, I interview researchers and practitioners who have published work on open problems in various areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. These are published as a podcast series. Ideas emerge when curiosity meets clarity. Here we go with OPENBOX, bringing clarity to those curious minds looking to solve real-world problems. This project is done in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization dedicated to minimizing the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/. Today, we have Raghu with us. Raghu is a Ph.D. student in the Machine Learning Group at the University of Freiburg, under the supervision of Frank Hutter. He works on automating hyperparameter optimization for RL (AutoRL). His master's thesis was on reinforcement learning, and Artificial General Intelligence is a long-term interest of his. He is also exploring dynamic algorithm configuration (controlling hyperparameters dynamically). We will cover a paper titled “Automated Reinforcement Learning (AutoRL): A Survey and Open Problems,” published in June 2022, which he co-authored. This is part 1 of the discussion. In this part, he covers open issues in hyperparameter optimization using random and grid search approaches and Bayesian optimization.
16:12
August 03, 2022
#OPENBOX - OPEN ISSUES IN APPLYING DEEP REINFORCEMENT LEARNING IN COMMUNICATION NETWORKS - 2/2
OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions for them. For that purpose, I interview researchers and practitioners who have published work on open problems in various areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. Today, we have Paul with us. Paul is a Ph.D. student at the Barcelona Neural Networking Center, Technical University of Catalonia, working on the use of ML to solve problems in communication networks. We cover a recently published paper titled “Towards Real-Time Routing Optimization with Deep Reinforcement Learning: Open Challenges,” which he co-authored. In this podcast, he covers (a) the training time and cost associated with deep reinforcement learning and (b) the lack of performance bounds. This is part 2 of the podcast. This project is in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization with a mission to minimize the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/.
13:43
July 18, 2022
#OPENBOX - OPEN ISSUES IN APPLYING DEEP REINFORCEMENT LEARNING IN COMMUNICATION NETWORKS - 1/2
OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions for them. For that purpose, I interview researchers and practitioners who have published work on open problems in various areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. Today, we have Paul with us. Paul is a Ph.D. student at the Barcelona Neural Networking Center, Technical University of Catalonia, working on the use of ML to solve problems in communication networks. We cover a recently published paper titled “Towards Real-Time Routing Optimization with Deep Reinforcement Learning: Open Challenges,” which he co-authored. In this podcast, he covers (a) generalization in deep reinforcement learning and (b) defining an appropriate action space. This is part 1 of the podcast. This project is in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization with a mission to minimize the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/.
17:37
July 17, 2022
#OPENBOX - OPEN ISSUES IN OFFLINE REINFORCEMENT LEARNING - 2/2
OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions for them. For that purpose, I interview researchers and practitioners who have published work on open problems in various areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. In this episode, Rafael Figueiredo Prudencio discusses open issues in offline reinforcement learning. He covers (a) function approximation and generalization and (b) leveraging unlabeled data. The conversation with Rafael is a two-part series; this is part 2. Listen to the podcast to understand the specific ethical issues arising from these open problems. This project is in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization with a mission to minimize the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/.
15:19
June 30, 2022
#OPENBOX - OPEN ISSUES IN OFFLINE REINFORCEMENT LEARNING - 1/2
OPENBOX aims to bring an easier understanding of open problems, which helps in finding solutions for them. For that purpose, I interview researchers and practitioners who have published work on open problems in various areas of Artificial Intelligence and Machine Learning to collect a simplified understanding of these open problems. In this episode, Rafael Figueiredo Prudencio discusses open issues in offline reinforcement learning. He covers (a) off-policy evaluation methods and (b) the lack of adequate real-world benchmarks. The conversation with Rafael is a two-part series; this is part 1. Listen to the podcast to understand the specific ethical issues arising from these open problems. This project is in collaboration with ForHumanity. ForHumanity is a 501(c)(3) nonprofit organization with a mission to minimize the downside risks of AI and autonomous systems. ForHumanity develops criteria for an independent audit of AI systems. To know more, visit https://forhumanity.center/.
15:52
June 30, 2022
E11: #EUAIRegs Regulations have Gaps and having an Industry Standard for Ethics can Mitigate Risks
ATGO AI is a podcast channel from ForHumanity. It brings multiple series of insights on topics of pressing importance, specifically in the space of ethics and accountability of emerging technology. You will hear from game changers in this field who have spearheaded accountability, transparency, governance and oversight in developing and deploying emerging technology (including Artificial Intelligence). Joshua Bucheli is an international polyglot based in Switzerland with a background in event and project management as well as applied ethical research and critical analysis. He is an independent researcher, writer, and editor with an MA in Political, Legal, and Economic Philosophy, and his experience ranges from humanitarian fieldwork for the UNHCR in Malaysia to researching and editing academic journal articles and working with Swiss think tanks on the subject of ethical AI and robotics. He is a former Head of Community Management at Let’s Phi and a freelance writer for cyberunity on the subject of careers in cybersecurity. Joshua shares that he is working on a code of ethics in order to build an industry standard for ethics and AI. The draft EU AI regulations will help set a course: will there be a more human-centered focus with regard to technological innovation? Taking a careful approach to the ethical grey area is impactful, and a pause is needed to evaluate and mitigate potential harm. Visit us at https://forhumanity.center/ to learn more.
12:24
July 13, 2021
E10: #EUAIRegs Rules are needed to develop an #InfrastructureOfTrust for #AI
ATGO AI is a podcast channel from ForHumanity. It brings multiple series of insights on topics of pressing importance, specifically in the space of ethics and accountability of emerging technology. You will hear from game changers in this field who have spearheaded accountability, transparency, governance and oversight in developing and deploying emerging technology (including Artificial Intelligence). Enrico Panai is a lecturer and an information and data ethicist who has worked in the AI space for years. He is also the Chief Information Officer of ForHumanity, a founder of Be Ethical, and a leader in AI ethics in France. Enrico shares an analogy between games and rules: just as games need rules, there should be rules for #AI in order to develop an infrastructure of trust. AI is challenging how we organize and categorize data. The EU is achieving two things with these regulations: helping humanity step towards a better world, and ensuring the geopolitical value and impact of the EU in AI. Enrico proposes that a Chief Ethics Data Officer become a position at organizations. Third-party auditors will lead to transparency. Visit us at https://forhumanity.center/ to learn more.
13:49
July 13, 2021
E9: #EUAIRegs Explore Compliance from a Human-Centered Focus
ATGO AI is a podcast channel from ForHumanity. It brings multiple series of insights on topics of pressing importance, specifically in the space of ethics and accountability of emerging technology. You will hear from game changers in this field who have spearheaded accountability, transparency, governance and oversight in developing and deploying emerging technology (including Artificial Intelligence). Tristi Tanaka has worked for over 20 years in and with technologists to deliver organisational and functional change. She is always learning and uses design-thinking techniques to create, adapt and improve the way services are designed, delivered and evolved to meet the needs of users and teams in the workplace. She believes technology doesn’t deliver success; people do. She is a current ForHumanity fellow. Tristi states that the EU is standing out by distinguishing between high-risk and low-risk cases for AI and autonomous technology. She discusses issues of scale with regard to bias, and explores how some of the largest outcomes will relate to skills, compliance, and auditing of AI and autonomous technology. Visit us at https://forhumanity.center/ to learn more.
13:30
July 09, 2021
E8: #EUAIReg important advances to navigate high risks
ATGO AI is a podcast channel from ForHumanity. It brings multiple series of insights on topics of pressing importance, specifically in the space of ethics and accountability of emerging technology. You will hear from game changers in this field who have spearheaded accountability, transparency, governance and oversight in developing and deploying emerging technology (including Artificial Intelligence). Ryan serves as ForHumanity’s Executive Director and Chairman of the Board of Directors; in these roles he is responsible for the day-to-day functioning of ForHumanity and the overall process of Independent Audit. He started this nonprofit in 2016 with a mission of managing AI risks for humanity. Ryan has conducted business in over 55 countries and is a frequent speaker at industry conferences around the world on the topic of the audit of AI systems. The ForHumanity mission is one of the largest and most notable crowdsourced efforts on the audit of AI systems. In this episode, Ryan shares ideas about how the regulations will be managed from a global perspective. He also discusses what conformity assessments will look like and the questions around definitions of transparency and governance. AI audits involve certified practitioners, third-party sets of rules, and regulations that govern the targeted organization, the auditors, and the public. Visit us at https://forhumanity.center/ to learn more.
11:28
June 28, 2021
E7: #EUAIReg What is Conformity Assessments from a ‘Risk’ lens?
ATGO AI is a podcast channel from ForHumanity. It brings multiple series of insights on topics of pressing importance, specifically in the space of ethics and accountability of emerging technology. You will hear from game changers in this field who have spearheaded accountability, transparency, governance and oversight in developing and deploying emerging technology (including Artificial Intelligence). Sarah Clarke has two decades of experience in IT, information security, GDPR, and data protection. She is also a guest lecturer in Vendor Security Governance at the University of Manchester, an increasingly frequent speaker, and a ForHumanity fellow. In this episode, Sarah talks about the five stages of grief in relation to auditing. She then discusses operationalizing work outwards through layers of uncertainty, getting closer and closer to a place of certainty for the regulations. She believes in helping people create the space to have rational conversations and improve the auditing of AI.
13:11
June 18, 2021
E6: #EUAIRegs Conformity Assessments: What does “Good” look like?
ATGO AI is a podcast channel from ForHumanity. It brings multiple series of insights on topics of pressing importance, specifically in the space of ethics and accountability of emerging technology. You will hear from game changers in this field who have spearheaded accountability, transparency, governance and oversight in developing and deploying emerging technology (including Artificial Intelligence). Rohan Light is a fellow with the RSA and ForHumanity; he works on data governance at DataWonks and focuses on strategy and risk in the decision-management value chain. In this episode, Rohan shares a rights-based approach to the draft EU AI regulation. Regulations are much needed because AI and autonomous technology can be as dangerous as cars. He is glad to see that care is being taken over regulation and looks forward to more tools being added for auditing. Overall, disruption can be associated with the quality needed to audit AI. Visit us at https://forhumanity.center/ to learn more.
13:43
June 16, 2021
E5: The Swiss View: #EUAIRegs make ethics affordable
ATGO AI is a short-form podcast from ForHumanity. This series is about the recent draft EU regulations for AI. ForHumanity fellows, leading international experts on AI, are interviewed by international hosts and share their thoughts about the regulations. The draft #EUAIRegs mandate classification of high-risk AI and require specific approaches to ensure that such AI systems do not harm people. The regulation proposes a penalty of 6% of global revenues or €30 million for violations. Dorothea Baur is an independent consultant, speaker (including TEDx), and lecturer on ethics, responsibility, and sustainability, with a specific focus on tech and finance. At the intersection of technology, society, and the environment, companies are often confronted with ethical questions, and she helps them align value with values. She joins us from Switzerland. In this episode, Dorothea shares her perspectives on the EU AI regulations. As she puts it, the EU regulations make ethics affordable: they refute the claims of companies that say they are forced into these choices. We are democratizing AI. The EU regulation is brave, and Dorothea hopes it enhances trust and makes certain negative manipulations no longer possible when working with AI systems. Visit us at https://forhumanity.center/ to learn more.
11:24
June 10, 2021
E4: #EUAIRegs Focus on standardization to mitigate bias
ATGO AI is a short-form podcast from ForHumanity. This series is about the recent draft EU regulations for AI. ForHumanity fellows, leading international experts on AI, are interviewed by international hosts and share their thoughts about the regulations. The draft #EUAIRegs mandate classification of high-risk AI and require specific approaches to ensure that such AI systems do not harm people. The regulation proposes a penalty of 6% of global revenues or €30 million for violations. Dr. Shea Brown is a researcher, lecturer, speaker, and consultant in AI ethics, machine learning, and astrophysics. He earned his Ph.D. in astrophysics from the University of Minnesota and is the founder and CEO of BABL AI. He is a current ForHumanity fellow focusing on algorithmic auditing and AI governance. In this episode, Shea discusses his perspectives on the EU AI regulations. He argues that a focus on standardization is crucial to mitigating bias. The regulations will also help other countries, like the USA, strengthen their own AI regulations, pushing the rest of the world to continue developing and evolving theirs. Visit us at https://forhumanity.center/ to learn more.
10:38
June 10, 2021
E1: Our Goal: Managing AI Risks for Humanity
ATGO AI is a short-form podcast from ForHumanity. In this episode, Ryan Carrier, founder of ForHumanity, talks about its mission. Ryan serves as ForHumanity's Executive Director and Chairman of the Board of Directors; in these roles he is responsible for the day-to-day functioning of ForHumanity and the overall process of Independent Audit. He started the non-profit in 2016 with the mission of managing AI risks for humanity. Ryan has conducted business in over 55 countries and is a frequent speaker at industry conferences around the world on the topic of auditing AI systems. The ForHumanity mission is one of the largest and most notable crowdsourced efforts on the audit of AI systems. Listen to this insightful podcast series, and visit us at https://forhumanity.center/ to know more.
03:47
May 25, 2021
E2: #EUAIRegs: AI Regulations can help companies to stay accountable
ATGO AI is a short-form podcast from ForHumanity. This series is about the recent draft EU regulations for AI. ForHumanity fellows, leading international experts on AI, are interviewed by international hosts and share their thoughts about the regulations. The draft #EUAIRegs mandate classification of high-risk AI and require specific approaches to ensure that such AI systems do not harm people. The regulation proposes a penalty of 6% of global revenues or €30 million for violations. Merve Hickok, a ForHumanity fellow, is the founder of Lighthouse Career Consulting, an advisory consultancy working in the space of AI ethics and responsible AI. She is an independent consultant, lecturer, and speaker on AI ethics and bias and their implications. Merve argues that we need to pay attention to the whole AI supply chain and really ask the ethical questions needed to think about bias mitigation. These regulations will be a tool to help companies stay accountable. Other countries will read them as a signal that it is okay to start now and to feel more comfortable about actually regulating AI. Visit us at https://forhumanity.center/ to know more.
11:34
May 25, 2021
E3: #EUAIRegs: Starting gun fired. A kickoff for Artificial Intelligence standards
ATGO AI is a short-form podcast from ForHumanity. This series is about the recent draft EU regulations for AI. ForHumanity fellows, leading international experts on AI, are interviewed by international hosts and share their thoughts about the regulations. The draft #EUAIRegs mandate classification of high-risk AI and require specific approaches to ensure that such AI systems do not harm people. The regulation proposes a penalty of 6% of global revenues or €30 million for violations. Adam Lyon Smith is a specialist in software quality, continuous integration, and AI. He has held senior technology roles at several multinationals, delivering large, complex projects. He chairs a specialist group in software testing and has served as editor for standards covering bias in AI systems and AI quality models. He is a ForHumanity fellow and a board member of ForHumanity. In this episode, Adam shares his perspectives on the EU AI regulations. He stresses that "standards need to be applied" rather than just looking at the regulation. He welcomes the regulation and believes it will support a robust, data-driven approach. Read the regulation: this is the start, not the end. "The starting gun has been fired for the standards committee." Visit us at https://forhumanity.center/ to know more.
10:02
May 25, 2021