AI Asia Pacific Institute Podcast

By AI Asia Pacific Institute

The rise of AI presents important legal and ethical challenges for society. In this podcast, we invite leaders from different industries and creators of new AI to debate the big questions. From the future of work to the future of our humanity, stay tuned to learn more.

Got questions or want to be involved? Email us at contact@aiasiapacific.org
Available on Apple Podcasts, Google Podcasts, Overcast, Pocket Casts, RadioPublic, and Spotify.
#47: Examining Regulation for ChatGPT: Dr. Luciano Floridi

The AI Asia Pacific Institute (AIAPI) is hosting a series of conversations with leading artificial intelligence (AI) experts to study ChatGPT and its risks, looking to arrive at tangible recommendations for regulators and policymakers. These experts include Dr. Toby Walsh, Dr. Stuart Russell, Dr. Pedro Domingos, and Dr. Luciano Floridi, as well as our internal advisory board and research affiliates. We have published a briefing note outlining some of the critical risks of generative AI and highlighting potential concerns.

The following is a conversation with Dr. Luciano Floridi.

Dr. Luciano Floridi holds a double appointment as Professor of Philosophy and Ethics of Information at the Oxford Internet Institute, University of Oxford, where he is also a Governing Body Fellow of Exeter College, and as Professor of Sociology of Culture and Communication in the Department of Legal Studies at the University of Bologna, where he is the director of the Centre for Digital Ethics. He is also an adjunct professor ("distinguished scholar in residence") in the Department of Economics at American University, Washington, D.C.
Dr. Floridi is best known for his work in two areas of philosophical research: the philosophy of information and information ethics (also known as digital ethics or computer ethics), for which he has received many awards, including being made a Knight of the Grand Cross of the Order of Merit, Italy's most prestigious honour. According to Scopus, Floridi was the most cited living philosopher in the world in 2020.
Between 2008 and 2013, he held the research chair in philosophy of information and the UNESCO Chair in Information and Computer Ethics at the University of Hertfordshire. He was the founder and director of the IEG, an interdepartmental research group on the philosophy of information at the University of Oxford, and of the GPI, the research group in philosophy of information at the University of Hertfordshire. He was the founder and director of the SWIF, the Italian e-journal of philosophy (1995–2008). He is a former Governing Body Fellow of St Cross College, Oxford.


***

For show notes and past guests, please visit https://aiasiapacific.org/podcast/

For questions, please contact us at contact@aiasiapacific.org or follow us on Twitter or Instagram to stay in touch.




May 31, 2023 · 54:23
#46: Examining Regulation for ChatGPT: Dr. Pedro Domingos

The AI Asia Pacific Institute (AIAPI) is hosting a series of conversations with leading artificial intelligence (AI) experts to study ChatGPT and its risks, looking to arrive at tangible recommendations for regulators and policymakers. These experts include Dr. Toby Walsh, Dr. Stuart Russell, Dr. Pedro Domingos, and Dr. Luciano Floridi, as well as our internal advisory board and research affiliates. We have published a briefing note outlining some of the critical risks of generative AI and highlighting potential concerns.

The following is a conversation with Dr. Pedro Domingos. 

Dr. Pedro Domingos is a professor emeritus of computer science and engineering at the University of Washington and the author of The Master Algorithm. He is a winner of the SIGKDD Innovation Award and the IJCAI John McCarthy Award, two of the highest honors in data science and AI. He is a Fellow of the AAAS and AAAI, and has received an NSF CAREER Award, a Sloan Fellowship, a Fulbright Scholarship, an IBM Faculty Award, several best paper awards, and other distinctions. Dr. Domingos received an undergraduate degree (1988) and M.S. in Electrical Engineering and Computer Science (1992) from IST, in Lisbon, and an M.S. (1994) and Ph.D. (1997) in Information and Computer Science from the University of California at Irvine. He is the author or co-author of over 200 technical publications in machine learning, data mining, and other areas. He is a member of the editorial board of the Machine Learning journal, co-founder of the International Machine Learning Society, and past associate editor of JAIR. Dr. Domingos was program co-chair of KDD-2003 and SRL-2009, and served on the program committees of AAAI, ICML, IJCAI, KDD, NIPS, SIGMOD, UAI, WWW, and others. He has written for the Wall Street Journal, Spectator, Scientific American, Wired, and others. He helped start the fields of statistical relational AI, data stream mining, adversarial learning, machine learning for information integration, and influence maximization in social networks.


***

For show notes and past guests, please visit https://aiasiapacific.org/podcast/

For questions, please contact us at contact@aiasiapacific.org or follow us on Twitter or Instagram to stay in touch.



May 23, 2023 · 01:12:41
#45: Examining Regulation for ChatGPT: Dr. Toby Walsh & Dr. Stuart Russell

The AI Asia Pacific Institute (AIAPI) has hosted a series of conversations with leading artificial intelligence (AI) experts to study ChatGPT and its risks, looking to arrive at tangible recommendations for regulators and policymakers. These experts include Dr. Toby Walsh, Dr. Stuart Russell, Dr. Pedro Domingos, and Dr. Luciano Floridi, as well as our internal advisory board and research affiliates. The following is a conversation with Dr. Toby Walsh and Dr. Stuart Russell. 

Dr. Toby Walsh is Chief Scientist at UNSW.ai, UNSW's new AI Institute. He is a Laureate Fellow and Scientia Professor of Artificial Intelligence in the School of Computer Science and Engineering at UNSW Sydney, and he is also an adjunct fellow at CSIRO Data61. He was named by The Australian newspaper as a "rock star" of Australia's digital revolution. He has been elected a fellow of the Australian Academy of Science, the ACM, the Association for the Advancement of Artificial Intelligence (AAAI), and the European Association for Artificial Intelligence. He has won the prestigious Humboldt Prize as well as the NSW Premier's Prize for Excellence in Engineering and ICT, and the ACP Research Excellence Award. He has previously held research positions in England, Scotland, France, Germany, Italy, Ireland and Sweden. He has played a leading role at the UN and elsewhere in the campaign to ban lethal autonomous weapons (aka "killer robots"). His advocacy in this area has led to him being "banned indefinitely" from Russia.

Dr. Stuart Russell is a Professor of Computer Science at the University of California at Berkeley, holder of the Smith-Zadeh Chair in Engineering, and Director of the Center for Human-Compatible AI and the Kavli Center for Ethics, Science, and the Public. He is a recipient of the IJCAI Computers and Thought Award and Research Excellence Award and held the Chaire Blaise Pascal in Paris. In 2021 he received the OBE from Her Majesty Queen Elizabeth and gave the Reith Lectures. He is an Honorary Fellow of Wadham College, Oxford, an Andrew Carnegie Fellow, and a Fellow of the American Association for Artificial Intelligence, the Association for Computing Machinery, and the American Association for the Advancement of Science. His book "Artificial Intelligence: A Modern Approach" (with Peter Norvig) is the standard text in AI, used in 1500 universities in 135 countries. His research covers a wide range of topics in artificial intelligence, with a current emphasis on the long-term future of artificial intelligence and its relation to humanity. He has developed a new global seismic monitoring system for the nuclear-test-ban treaty and is currently working to ban lethal autonomous weapons.


***

For show notes and past guests, please visit https://aiasiapacific.org/podcast/

For questions, please contact us at contact@aiasiapacific.org or follow us on Twitter or Instagram to stay in touch.


May 16, 2023 · 01:15:15
Season 5: Examining Regulation for ChatGPT

Generative Artificial Intelligence systems have significantly advanced in recent years, enabling machines to generate highly realistic content such as text, images, and audio. While these advancements offer numerous benefits, it is critical that we are aware of the associated risks. The AI Asia Pacific Institute has hosted a series of conversations with leading AI experts to study ChatGPT and its risks, looking to arrive at tangible recommendations for regulators and policymakers. These experts include Dr. Toby Walsh, Dr. Stuart Russell, Dr. Pedro Domingos, and Dr. Luciano Floridi. 
Join us for season 5 of this Podcast.
Subscribe now, wherever you are listening, to join these conversations.

Apr 30, 2023 · 00:58
#44: Professor Seongwook Heo on the AI Landscape & Governance in South Korea

This podcast series details our most recent publication '2022 Trustworthy Artificial Intelligence in the Asia-Pacific Region'. The report provides continuity to the work the AI Asia Pacific Institute (AIAPI) published in 2021, but adopts a more in-depth focus on Singapore, Japan, South Korea, Australia, and India.

The report assesses each country based on the following indicators: best practices; opportunities and challenges; and prospects for collaboration, with the end goal of assessing each country's unique approach to Trustworthy AI and identifying common ground as an impetus for regional and international collaboration.

In this podcast, Dr Heo shares the recent developments in South Korea to advance trustworthy AI.

Dr. Heo is an associate professor at Seoul National University Law School in Korea. He holds a Ph.D. in law from Seoul National University. Before joining the faculty of SNU, he served as a judge at the Seoul Central District Court in Korea.

This conversation covers the recent report published by the AI Asia Pacific Institute '2022 Trustworthy Artificial Intelligence in the Asia-Pacific Report'.

***

For show notes and past guests, please visit https://aiasiapacific.org/podcast/

For questions, please contact us at contact@aiasiapacific.org or follow us on Twitter or Instagram to stay in touch.

Oct 17, 2022 · 54:34
#43: Wan Sie LEE on the AI Landscape & Governance in Singapore

This podcast series details our most recent publication '2022 Trustworthy Artificial Intelligence in the Asia-Pacific Region'. The report provides continuity to the work the AI Asia Pacific Institute (AIAPI) published in 2021, but adopts a more in-depth focus on Singapore, Japan, South Korea, Australia, and India.

The report assesses each country based on the following indicators: best practices; opportunities and challenges; and prospects for collaboration, with the end goal of assessing each country's unique approach to Trustworthy AI and identifying common ground as an impetus for regional and international collaboration.

In this podcast, Wan Sie LEE shares the recent developments in Singapore to advance trustworthy AI.

This conversation covers the recent report published by the AI Asia Pacific Institute '2022 Trustworthy Artificial Intelligence in the Asia-Pacific Report'.

***

For show notes and past guests, please visit https://aiasiapacific.org/podcast/

For questions, please contact us at contact@aiasiapacific.org or follow us on Twitter or Instagram to stay in touch.

Links to some of the initiatives that have been covered in this conversation: Veritas; AI Singapore; NovA!

Sep 27, 2022 · 30:17
#42: Arunima Sarkar on the AI Landscape & Governance in India

This podcast series details our most recent publication '2022 Trustworthy Artificial Intelligence in the Asia-Pacific Region'. The report provides continuity to the work the AI Asia Pacific Institute (AIAPI) published in 2021, but adopts a more in-depth focus on Singapore, Japan, South Korea, Australia, and India.

The report assesses each country based on the following indicators: best practices; opportunities and challenges; and prospects for collaboration, with the end goal of assessing each country's unique approach to Trustworthy AI and identifying common ground as an impetus for regional and international collaboration.

In this podcast, Arunima Sarkar shares the recent developments in India to advance trustworthy AI.

This conversation covers the recent report published by the AI Asia Pacific Institute '2022 Trustworthy Artificial Intelligence in the Asia-Pacific Report'.

***

For show notes and past guests, please visit https://aiasiapacific.org/podcast/

For questions, please contact us at contact@aiasiapacific.org or follow us on Twitter or Instagram to stay in touch.

Aug 17, 2022 · 35:43
#41: The AI Landscape & Governance in Australia and India

This podcast series details our most recent publication '2022 Trustworthy Artificial Intelligence in the Asia-Pacific Region'. The report provides continuity to the work the AI Asia Pacific Institute (AIAPI) published in 2021, but adopts a more in-depth focus on Singapore, Japan, South Korea, Australia, and India.

The report assesses each country based on the following indicators: best practices; opportunities and challenges; and prospects for collaboration, with the end goal of assessing each country's unique approach to Trustworthy AI and identifying common ground as an impetus for regional and international collaboration.

In this podcast, we dissect the salient points and key findings from our study, looking closely at Australia and India to locate areas of convergence for greater collaboration and coordination amid increasing pressure for regulation and international cooperation to advance trustworthy AI in the Asia-Pacific region.

This conversation covers the recent report published by the AI Asia Pacific Institute '2022 Trustworthy Artificial Intelligence in the Asia-Pacific Report'.

***

For show notes and past guests, please visit https://aiasiapacific.org/podcast/

For questions, please contact us at contact@aiasiapacific.org or follow us on Twitter or Instagram to stay in touch.

Aug 01, 2022 · 55:33
#40: The AI Landscape & Governance in Singapore, South Korea and Japan

This podcast series details our most recent publication '2022 Trustworthy Artificial Intelligence in the Asia-Pacific Region'. The report provides continuity to the work the AI Asia Pacific Institute (AIAPI) published in 2021, but adopts a more in-depth focus on Singapore, Japan, South Korea, Australia, and India.  

The report assesses each country based on the following indicators: best practices; opportunities and challenges; and prospects for collaboration, with the end goal of assessing each country's unique approach to Trustworthy AI and identifying common ground as an impetus for regional and international collaboration.

In this podcast, we dissect the salient points and key findings from our study, looking closely at Singapore, Japan and South Korea to locate areas of convergence for greater collaboration and coordination amid increasing pressure for regulation and international cooperation to advance trustworthy AI in the Asia-Pacific region.

This conversation covers the recent report published by the AI Asia Pacific Institute '2022 Trustworthy Artificial Intelligence in the Asia-Pacific Report'.

***

For show notes and past guests, please visit https://aiasiapacific.org/podcast/

For questions, please contact us at contact@aiasiapacific.org or follow us on Twitter or Instagram to stay in touch.


Jul 19, 2022 · 59:33
Season 4: Asia-Pacific Collaboration & AI Governance

AI is transnational and borderless. As AI developments unfold at such unprecedented scale and pace, the call for building trustworthy AI has been resounding. To achieve it, international cooperation is crucial.

Join us for a whole new season of this podcast, where we deepen our research, aiming to promote and amplify collaboration in the Asia-Pacific region.

We will examine Singapore, Japan, South Korea, Australia and India to find areas of convergence for greater collaboration and coordination amid increasing pressure for regulation and international cooperation to advance trustworthy AI in the Asia-Pacific.

Subscribe now, wherever you are listening, to join these conversations.


Jun 29, 2022 · 01:16
#39: Algorithmic Decisions for Security, Standardisation and Trustworthy AI

"Regulation is coming" — David Berend

David Berend is leading the standardisation of AI security in Singapore, where he and his team are about to publish the world's first version of the standard in the next two months. He also developed research tools for AI quality assurance as part of his Ph.D., which he is now commercialising as a spin-off from Nanyang Technological University, Singapore. Finally, he is a member of the German Standards Commission and ISO, working to integrate Singapore's standards achievements into a global context.

***

For show notes and past guests, please visit https://aiasiapacific.org/podcast/

If you have questions, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.



Dec 08, 2021 · 45:30
#38: Algorithmic Decisions & Power and Sustainability

"What always needs to be at the forefront: what physical and regulatory constraints is your system contending with at any given time and how do you design a suite of methods that actually satisfy those constraints" — Priya L. Donti

Priya L. Donti is a Ph.D. student in the Computer Science Department and the Department of Engineering & Public Policy at Carnegie Mellon University, co-advised by Zico Kolter and Inês Azevedo. She is also co-founder and chair of Climate Change AI, an initiative to catalyze impactful work at the intersection of climate change and machine learning.

Her work focuses on machine learning for forecasting, optimization, and control in high-renewables power grids. Specifically, Priya's research explores methods to incorporate the physics and hard constraints associated with electric power systems into deep learning models. Please see here for a list of her recent publications.

Priya is a member of the MIT Technology Review 2021 list of 35 Innovators Under 35, and a 2022 Siebel Scholar. She was previously a U.S. Department of Energy Computational Science Graduate Fellow, an NSF Graduate Research Fellow, and a Thomas J. Watson Fellow. Priya received her undergraduate degree at Harvey Mudd College in computer science and math with an emphasis in environmental analysis.

***

For show notes and past guests, please visit https://aiasiapacific.org/podcast/

If you have questions, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

Nov 01, 2021 · 42:56
#37: Algorithmic Decisions & the Financial Industry

In this conversation, we covered many practical recommendations on operationalising the ethics of AI, discussed FEAT, Singapore's framework for the financial industry, and arrived at some predictions about what's next for the industry.

David Hardoon is the Senior Advisor for Data and Artificial Intelligence at UnionBank Philippines, Chair of the Data Committee at Aboitiz Group, and acting in the capacity of Managing Director for Aboitiz Data Innovation. He is also an external advisor to Singapore's Corrupt Practices Investigation Bureau (CPIB) and to Singapore's Central Provident Fund Board (CPF).

Prior to his current roles, David was the Monetary Authority of Singapore's (MAS) first appointed Chief Data Officer and Head of the Data Analytics Group. In these roles he led the development of the AI strategy for both MAS and Singapore's financial sector, as well as driving efforts to promote open cross-border data flows. David pioneered the regulator and central bank's adoption of data science, the establishment of the Fairness, Ethics, Accountability and Transparency (FEAT) principles, first-of-their-kind guidelines for adopting artificial intelligence in the financial industry, and the establishment of the MAS-backed Veritas consortium.

…..

Hardeep Arora has around 21 years of experience in analytics and data science technology in financial services. Based in Singapore, he heads AI Research and Engineering at a social media startup called Aaqua, where he is building its AI infrastructure and engineering team. Hardeep was previously the head of AI in Financial Services at Element AI, where he worked on numerous client engagements, including MAS Veritas Phase 1. He also did a brief stint at Accenture Singapore, where he set up an AI Lab focusing on financial services use cases, and spent more than a decade working in AI and analytics teams at banks including Standard Chartered, JP Morgan, and Barclays.

Hardeep has a graduate degree in Computer Science and an MBA in Finance. He is an AI advisor to a few startups in the region and a regular speaker at events in Singapore.

***

For show notes and past guests, please visit https://aiasiapacific.org/podcast/

If you have questions, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

Oct 01, 2021 · 01:01:43
Season 3: Algorithmic Decisions & Impact on Humans

How are algorithmic decisions impacting our humanity? It's the question of our times.

Join us for a whole new season of this podcast, where we deepen our research into the impact of algorithmic decisions on humans.

To do this, we will go on a journey and look at the application of algorithmic decisions and their implications in different industries. We will explore the Financial Industry, Power & Sustainability, and Social Media.

With fascinating guests, as part of this series we are also joined by different guest interviewers who will guide us as we explore this world of algorithmic decisions.

Starting on 1st October. Subscribe now, wherever you are listening, to join these conversations.

Sep 07, 2021 · 00:59
#36: Irakli Beridze on a Global Governance of Artificial Intelligence

"I would argue that the growing digital divide could be as dangerous as the climate crises" — Irakli Beridze

Irakli Beridze is the Head of the Centre for Artificial Intelligence and Robotics at UNICRI, United Nations. He has more than 20 years of experience leading multilateral negotiations and developing stakeholder engagement programmes with governments, UN agencies, international organisations, private industry and corporations, think tanks, civil society, foundations, academia, and other partners at an international level. Mr Beridze advises governments and international organizations on numerous issues related to international security, scientific and technological developments, emerging technologies, and the innovative and disruptive potential of new technologies, particularly on issues of crime prevention, criminal justice and security.

Mr Beridze supports governments worldwide on AI strategies, action plans, roadmaps and policy papers. Since 2014, he has initiated and managed one of the first United Nations programmes on AI, and has initiated and organized a number of high-level events at the United Nations General Assembly and other international organizations, finding synergies with traditional threats and risks and identifying solutions through which AI can contribute to the achievement of the United Nations Sustainable Development Goals. He is a member of various international task forces, including the World Economic Forum's Global Artificial Intelligence Council, the UN High-level Panel for Digital Cooperation, and the High-Level Expert Group on Artificial Intelligence of the European Commission. He frequently lectures and speaks on subjects related to technological development, exponential technologies, artificial intelligence and robotics, and international security. He has numerous publications in international journals and magazines and is frequently quoted in the media on issues related to AI. Irakli Beridze is an International Gender Champion supporting the IGC Panel Parity Pledge. He is also a recipient of recognition on the awarding of the Nobel Peace Prize to the OPCW in 2013.

***

For show notes and past guests, please visit https://aiasiapacific.org/podcasts/

If you have questions, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.



May 07, 2021 · 49:39
#35: Jake Taylor and Alan Ho on Social Media and AI

"We believe that working from the perspective of harms, rather than risks, and developing pathways where humans grapple with the challenges of technology as they deploy have been and will be a path for enabling good from these new technologies" — Jake Taylor and Alan Ho

Jake Taylor has been doing research in quantum information science and quantum computing for the past two decades, most recently at the National Institute of Standards and Technology. In addition to his research, he spent the last three years as the first Assistant Director for Quantum Information Science at the White House Office of Science and Technology Policy, where he led the creation and implementation of the National Quantum Initiative (quantum.gov) and the COVID-19 High Performance Computing Consortium (covid19-hpc-consortium.org). Now taking a year as a TAPP Fellow at Harvard's Belfer Center for Science and International Affairs, Jake is looking at how lessons learned in implementing science and tech policy for an emerging field can enable public purpose in other areas. He is the author of more than 150 peer reviewed scientific articles, a Fellow of the American Physical Society and the Optical Society of America, and recipient of the Silver and Gold medals from the Department of Commerce. He can be found on twitter @quantum_jake and at https://www.quantumjake.org

Alan Ho is a life-long engineer and entrepreneur. He has worked at a number of large and small technology companies that deployed artificial intelligence in their products. He is currently the product management lead at Google’s Quantum AI team. His responsibilities include the identification of applications of quantum computing that can benefit society.

You can find the article mentioned in the conversation 'Identifying and Reducing Harms: a Look at Artificial Intelligence' here

***

For show notes and past guests, please visit https://aiasiapacific.org/podcasts/

If you have questions, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

Apr 27, 2021 · 01:04:52
#34: Vincent Vuillard on the Future of Work

"People need to embrace technology rather than fear it" — Vincent Vuillard

Vincent is the Co-Founder of FutureWork Studio, a tech and consulting company helping organisations navigate the rapidly changing work-scape by equipping them with the tools and thinking they need to thrive now and in the future. 

Vincent has deep expertise in large scale transformations across multiple industries and sectors globally, having been with McKinsey & Company for a number of years before moving into the Corporate sector, where he led the Strategic Capabilities and Future of Work function for Fonterra, a $20bn organisation with more than 22,000 employees globally.  As an officer in the French Armed Forces, Vincent also spent more than a decade leading teams in challenging situations across many parts of the globe. 

Vincent has a Masters Degree in Statistics and Data Analytics, an MBA from the University of Melbourne, and speaks fluent English and Portuguese in addition to his native French. Vincent was one of two inaugural TEDx Auckland Salon ‘in the Dark’ speakers in 2019, a world first event for TEDx, and is also a member of the SingularityU global expert faculty, speaking regularly on the future of work topic.

***

For show notes and past guests, please visit https://aiasiapacific.org/podcasts/

If you have questions, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

Apr 13, 2021 · 45:59
#33: Hilary Sutcliffe on Trust and Tech Governance

"If we don't see soft law actually working, then societal trust is not going to follow" — Hilary Sutcliffe

Hilary runs the London-based not-for-profit SocietyInside. The name is a riff on the famous 'Intel Inside' brand, and its focus is the desire that innovation should have the needs and values of people and planet at its heart, not simply the making of money.

She explores the issues of trust, ethics, values and governance of technology (AI, nanotech, biotech and gene editing in particular) through collaborative research, exploring trustworthy process design, public speaking, coaching, mentoring and acting as a critical friend to organisations of all types.

She is director of the TIGTech initiative which explores trustworthiness and trust in the governance of technology, was previously co-chair of the World Economic Forum Global Future Council on Values, Ethics & Innovation and member of its Agile Governance Council. She was recently named one of the 100 Brilliant Women in AI Ethics for 2021. 

***

For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/.

If you have questions, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

Mar 31, 2021 · 46:18
#32: Amba Kak and Shazeda Ahmed on the Implications of Biometrics & Emotion Recognition

Today, we are welcoming Amba Kak and Shazeda Ahmed from the AI Now Institute, a research institute examining the social implications of artificial intelligence. Amba is currently Director of Global Policy & Programs at the AI Now Institute at New York University, where she develops and leads the Institute's global policy engagement and partnerships, and is also a fellow at the NYU School of Law. Amba has over a decade of experience in the field of technology-related policy across multiple jurisdictions and has provided her expertise to government regulators, civil society organizations, and philanthropies. She is currently part of the Strategy Advisory Board of the Mozilla Foundation.

Shazeda is a doctoral candidate at the University of California at Berkeley’s School of Information. She is a 2020-21 fellow in the Transatlantic Digital Debates at the Global Public Policy Institute. From 2019-20 she was a pre-doctoral fellow at two Stanford University research centers, the Institute for Human-Centered Artificial Intelligence (HAI) and the Center for International Security and Cooperation (CISAC). Shazeda has worked as a researcher for Upturn, the Mercator Institute for China Studies, Ranking Digital Rights, and the Citizen Lab. From 2018–19, she was a Fulbright fellow at Peking University's Law School in Beijing, where she conducted field research on how tech firms and the Chinese government are collaborating on the country's social credit system. Shazeda's work on the social inequalities that arise from state-firm tech partnerships in China has been featured in outlets including the Financial Times, WIRED, the South China Morning Post, Logic magazine, TechNode, The Verge, CNBC, Voice of America, and Tech in Asia.

This conversation covers the recent reports published by the AI Now Institute 'Regulating Biometrics: Global Approaches and Urgent Questions' and by Article 19 'Emotional Entanglement: China’s emotion recognition market and its implications for human rights'.

***

For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/.

If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

Mar 17, 2021 · 57:41
#31: Cathy O’Neil on Weapons of Math Destruction

"We have to acknowledge that it doesn't benefit everyone and understand the extent to which it harms people" — Cathy O'Neil

Cathy O’Neil earned a Ph.D. in math from Harvard, was a postdoc at the MIT math department, and a professor at Barnard College where she published a number of research papers in arithmetic algebraic geometry. She then switched over to the private sector, working as a quant for the hedge fund D.E. Shaw in the middle of the credit crisis, and then for RiskMetrics, a risk software company that assesses risk for the holdings of hedge funds and banks. She left finance in 2011 and started working as a data scientist in the New York start-up scene, building models that predicted people’s purchases and clicks. She wrote Doing Data Science in 2013 and launched the Lede Program in Data Journalism at Columbia in 2014. She is a regular contributor to Bloomberg View and wrote the book Weapons of Math Destruction: how big data increases inequality and threatens democracy. She recently founded ORCAA, an algorithmic auditing company.

***

For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/.

If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.
Mar 03, 2021 · 51:49
#30: John C. Havens on Prioritising Ethics

"I sync, therefore I am" — John C. Havens

John C. Havens is Executive Director of The IEEE Global Initiative on Ethics of Autonomous and Intelligent Systems, which has two primary outputs: the creation and iteration of a body of work known as Ethically Aligned Design: A Vision for Prioritizing Human Well-being with Autonomous and Intelligent Systems, and the recommendation of ideas for Standards Projects focused on prioritizing ethical considerations in A/IS. Currently there are fifteen approved Standards Working Groups in the IEEE P7000™ series.

He is also Executive Director of The Council on Extended Intelligence (CXI), which was created to proliferate the ideals of responsible participant design, data agency, and metrics of economic prosperity that prioritize people and the planet over profit and productivity. CXI is a program founded by The IEEE Standards Association and MIT whose members include representatives from the EU Parliament, the UK House of Lords, and dozens of global policy, academic, and business leaders.

Previously, John was an EVP of Social Media at the PR firm Porter Novelli and a professional actor for over 15 years. John has written for Mashable and The Guardian and is the author of the books Heartificial Intelligence: Embracing Our Humanity To Maximize Machines and Hacking Happiness: Why Your Personal Data Counts and How Tracking it Can Change the World.

***

For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/.

If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.
Feb 16, 2021 · 37:12
#29: Renée Cummings on Diversity, Equity and Inclusion in AI

"For AI to be successful there must be trust. We must have AI systems we can trust." — Renée Cummings

Renée Cummings is a criminologist, criminal psychologist, therapeutic jurisprudence specialist, AI ethicist and the historic first Data Activist in Residence at the School of Data Science, University of Virginia.

Renée is also a community scholar at Columbia University. Advocating for AI we can trust, and for more diverse, equitable, and inclusive AI, she is on the frontline of ethical AI, generating real-time responses to many of the consequences of AI. Renée also specializes in AI risk management, justice-oriented AI, social justice AI, AI policy and governance, and using AI to save lives. She is committed to using AI to empower and transform communities by helping governments and organizations navigate the AI landscape and develop future AI leaders.

Renée works at the intersection of AI, criminal justice, racial justice, social justice, design justice, epidemiological and urban criminology, and public health. She has extensive experience in trauma-informed justice interventions, homicide reduction, gun and gang violence prevention, juvenile justice, evidence based policing and law enforcement leadership. Her work extends to rehabilitation, reentry and reducing recidivism.

Renée is committed to fusing AI with criminal justice for ethical real time solutions to improve law enforcement accountability and transparency, reduce violence, enhance public safety, public health, and quality of life.

A thought-leader, motivational speaker, and mentor, Renée is an articulate, dynamic, and passionate speaker who has mastered the art of creative storytelling and deconstructing complex topics into critical everyday conversations that inform and inspire.

***

For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/.

If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.
Feb 03, 2021 · 35:36
Season 2 Trailer: AI Asia Pacific Institute

Join Kelly Forbes for Season 2 of the AI Asia Pacific Institute Podcast. With fascinating guests, we will explore the legal, ethical and social implications of artificial intelligence. What's its greatest potential? And what could possibly go wrong? 

Subscribe now, wherever you listen and join the AI Asia Pacific Institute newsletter at https://aiasiapacific.org. 

Jan 29, 2021 · 00:50
#28: Beyond Ethical Principles in AI with Matthew Newman

"Even if we don't get to that stage of AGI, it is absolutely probable that we will get to the point where there is enough complexity in some of these AI systems which can have an ongoing and profound effect on people's lives" — Matthew Newman

Matthew is a global leader in the operationalisation of AI ethics and the responsible use of frontier technology. He is founder and CEO of TechInnocens, a consultancy that provides practical advice to C-suite, board and AI leadership on embracing trusted use, as well as a member of The Cantellus Group, an innovative boutique consulting group helping business and policy leaders harness the opportunities, new risks and trade-offs of AI and other frontier technologies.

Matthew has over 20 years of experience advising leadership teams on technology-driven transformation at some of the world's most respected enterprises, as well as start-ups, SMEs and government organisations. He engages with the World Economic Forum's Artificial Intelligence & Machine Learning Platform, co-develops standards for the ethical use of AI at the IEEE, and is a member of the Global Governance of AI Roundtable. Matthew also provides expertise to the Australian Human Rights Commission, the Australian Federal Government and the European AI Alliance on issues of AI policy and its intersection with the social license to operate.

This episode is brought to you by Audible! Audible has the largest selection of audiobooks on the planet. Audible is kindly offering AI Asia Pacific listeners two free audiobooks with a 30-day trial membership. Click here and browse the unmatched selection of audio programs. If you are listening on Apple Podcasts, make sure you subscribe to see the link. Then, download your free title and start listening! It’s that easy. Looking for book recommendations? We highly recommend Weapons of Math Destruction by Cathy O'Neil.

***

Season 2 starts in January 2021, with all-new episodes. Subscribe now, wherever you are listening.

For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/.

If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.
Dec 08, 2020 · 44:14
#27: A Practical Guide to Building Ethical AI with Oliver Smith

"Trust is the canvas that we operate on without really recognising it" — Oliver Smith

Ollie is responsible for overall strategy and is Head of Ethics at Koa Health, alongside establishing and maintaining strong partnerships and business model development. He has extensive experience in strategy and innovation across a range of sectors. Before joining Koa, he was Director of Strategy and Innovation at Guy's and St Thomas' Charity, responsible for investing £100m over five years in innovations across acute, primary, and integrated care, biomedical research and digital health. He was a Senior Civil Servant in the UK Department of Health, responsible for UK tobacco control policy, and wrote the government's first comprehensive childhood obesity strategy. Oliver was also a Policy Adviser in the Prime Minister's Strategy Unit under Tony Blair. He has an MA in Politics, Philosophy, and Economics from Oxford University.

This episode is brought to you by Audible! Audible has the largest selection of audiobooks on the planet. Audible is kindly offering AI Asia Pacific listeners two free audiobooks with a 30-day trial membership. Click here and browse the unmatched selection of audio programs. If you are listening on Apple Podcasts, make sure you subscribe to see the link. Then, download your free title and start listening! It’s that easy. Looking for book recommendations? We highly recommend Weapons of Math Destruction by Cathy O'Neil.

***

For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/.

If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

Nov 24, 2020 · 59:13
#26: A Centre of Excellence to Champion the Ethical Use of AI with Kate MacDonald

"We need to start thinking about how we can use AI to break down some of these borders, to break down some of these jurisdictions and to make sure that we work together as humans, not just as different countries" — Kate MacDonald

Kate MacDonald is the New Zealand Government Fellow to the World Economic Forum and the Government representative to the OECD Network of AI Experts, working in the areas of artificial intelligence and regulation. Kate is an experienced civil servant, with a background in policy, international relations and futures thinking, and has spent the last ten years working in the areas of cybersecurity and digital policy. She has held various roles for the New Zealand government, including setting up the new Digital Minister’s office and working closely with New Zealand’s international partners in the Digital Nations group. While home in New Zealand, she is currently based at the Ministry of Business Innovation and Employment, where she is working on digital and AI policy.

This episode is brought to you by Audible! Audible has the largest selection of audiobooks on the planet. Audible is kindly offering AI Asia Pacific listeners two free audiobooks with a 30-day trial membership. Click here and browse the unmatched selection of audio programs. If you are listening on Apple Podcasts, make sure you subscribe to see the link. Then, download your free title and start listening! It's that easy. Looking for book recommendations? We highly recommend AI Superpowers: China, Silicon Valley, and the New World Order, a 2018 non-fiction book by Kai-Fu Lee.

***

You can access the World Economic Forum article "AI is here. This is how it can benefit everyone" mentioned in the conversation here.

For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/.

If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.



Nov 17, 2020 · 36:38
#25: Implementing Ethics in AI with Merve Hickok

"The field of AI ethics is calling for implementation" — Merve Hickok

Merve Hickok is the founder of the www.aiEthicist.org platform and Lighthouse Career Consulting. She is an independent AI ethics consultant, lecturer and speaker, focusing on capacity building and awareness raising around the ethical and responsible development, use and governance of AI. She has over 15 years of senior-level experience in Fortune 100 companies. Merve is part of IEEE workgroups 7008 and P2863, which work to set global standards and frameworks on the ethics of autonomous and intelligent systems; an instructor at RMDS Lab providing training on AI and ethics; and a founding editorial board member of the Springer Nature AI & Ethics journal. She is a ForHumanity Fellow working to draft rules for the independent audit of AI systems, a technical/policy expert for the AI Policy Exchange, and a member of the leadership team at the Women in AI Ethics™ Collective, working to empower women in the field.

This episode is brought to you by Audible! Audible has the largest selection of audiobooks on the planet. Audible is kindly offering AI Asia Pacific listeners two free audiobooks with a 30-day trial membership. Click here and browse the unmatched selection of audio programs. If you are listening on Apple Podcasts, make sure you subscribe to see the link. Then, download your free title and start listening! It's that easy. Looking for book recommendations? We highly recommend AI Superpowers: China, Silicon Valley, and the New World Order, a 2018 non-fiction book by Kai-Fu Lee.

***

For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/.

If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.



Nov 10, 2020 · 49:39
#24: Building a digital consciousness with Dr. Mark Sagar

"If human cooperation is the most powerful force in history, then human cooperation with intelligent machines will actually define the next era of history" — Dr. Mark Sagar

Double Academy Award winner Dr. Mark Sagar is the CEO and co-founder of Soul Machines and Director of the Laboratory for Animate Technologies at the Auckland Bioengineering Institute.

Mark has a Ph.D. in Engineering from the University of Auckland, and was a post-doctoral fellow at M.I.T. He has previously worked as the Special Projects Supervisor at Weta Digital and Sony Pictures Imageworks and developed technology for the digital characters in blockbusters such as Avatar, King Kong, and Spiderman 2. His pioneering work in computer-generated faces was recognised with two consecutive Scientific and Engineering Oscars in 2010 and 2011, and Mark was elected as a Fellow of the Royal Society in 2019 in recognition of his world-leading research.

Mark is responsible for driving the technology vision of Soul Machines and sits on the Board of Directors.

***

For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/.

If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

Nov 02, 2020 · 01:01:06
#23: Revolutionising the Regulatory Landscape with Mona Zoet

"It does not really mean that you should be afraid of using the technology, but you need to understand what you are doing" — Mona Zoet

Mona Zoet, founder and CEO of RegPac Revolution, has over 18 years of experience in the Financial Services Industry within the Legal, Risk and Compliance areas, previously specializing in AML and KYC within some of the world’s biggest banks.

During her time in the banking industry, she became acutely aware of the regulatory, operational and risk management pain points faced by banks and other financial institutions alike. She is an Executive Board Member, Southeast Asia Lead and Singapore Chapter President of the International RegTech Association (IRTA), which exists to ease and accelerate the evolution of the RegTech industry by facilitating the integration, collaboration and innovation of all stakeholders within the financial services sector.

Mona recently contributed to "The Legal Aspects of Blockchain", a book published by UNOPS focusing on the legal implications that blockchain has not only in humanitarian and development work, but also for existing regulatory frameworks, data and identity. Mona is also a co-author of the book #RegTech Blackbook, which highlights the latest developments in RegTech, FinTech and WealthTech from different angles. Finally, Mona has been named one of the top 100 RegTech influencers in a report created by Analytica One.

This episode is brought to you by Audible! Audible has the largest selection of audiobooks on the planet. Audible is kindly offering AI Asia Pacific listeners two free audiobooks with a 30-day trial membership. Click here and browse the unmatched selection of audio programs. Then, download your free title and start listening! It's that easy. Looking for book recommendations? We highly recommend AI Superpowers: China, Silicon Valley, and the New World Order, a 2018 non-fiction book by Kai-Fu Lee, an artificial intelligence pioneer, China expert and venture capitalist.

***

For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/.

If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

Oct 27, 2020 · 40:10
#22: The Role of AI in Climate Change with Sherif Elsayed-Ali

"There are no human rights without a liveable planet." — Sherif Elsayed-Ali

Sherif Elsayed-Ali is a leading expert in the tech for good space and has unique experience at the intersection of technology and social issues. He co-founded Amnesty Tech, which leads Amnesty International’s work on the impact of technology on human rights and the potential uses of new technologies to advance human rights protection. Sherif also previously co-chaired the World Economic Forum's Global Future Council on human rights and technology. He is the former Director, AI for Climate, at Element AI. 

Sherif has been at the forefront of technology and human rights, instigating, among others, the development of the Toronto Declaration on equality and non-discrimination in machine learning and Amnesty International’s groundbreaking research on surveillance and online abuse. Over the past few years, he co-authored various reports on the theme of technology and human rights, including the World Economic Forum’s report on preventing discriminatory outcomes in machine learning.

Sherif's previous speaking engagements include the World Economic Forum at Davos, Chatham House, Web Summit, CogX and RightsCon. His opinion pieces have been published by The Guardian, Reuters, Al Jazeera and Open Democracy, among others.

Sherif studied engineering and international law at the American University in Cairo and has a master's in public administration from Harvard Kennedy School. He is now setting up a new climate tech venture, which will be a deep tech company focused on developing and deploying new solutions to enable a net-zero future.

***

For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/.

If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

Oct 19, 2020 · 41:10
#21: Investors’ Expectations on Responsible Artificial Intelligence and Data Governance with Janet Wong

"If you don't gain trust from customers and regulators, things will just backlash." — Janet Wong

Janet Wong is part of the EOS at Federated Hermes Asia and global emerging markets stewardship team, engaging with listed companies on material ESG topics. The team was shortlisted for the Principles for Responsible Investment's Stewardship Project of the Year 2020. Janet is also the global lead on responsible artificial intelligence and governance in financial services. She joined the team after completing her two-year Master of Public Administration degree in Social Impact at the London School of Economics and Political Science. Previously, she worked for HSBC Global Banking in Hong Kong, overseeing global banking relationships with Hong Kong blue-chip firms. Janet holds a Bachelor of Business Administration degree in Global Business and Management from the Hong Kong University of Science and Technology. She is fluent in Cantonese, Mandarin and English, and is a CFA charterholder.

This episode is brought to you by Audible! Audible has the largest selection of audiobooks on the planet. Audible is kindly offering AI Asia Pacific listeners two free audiobooks with a 30-day trial membership. Click here and browse the unmatched selection of audio programs. Then, download your free title and start listening! It's that easy. Looking for book recommendations? We highly recommend AI Superpowers: China, Silicon Valley, and the New World Order, a 2018 non-fiction book by Kai-Fu Lee, an artificial intelligence pioneer, China expert and venture capitalist.

***

For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/.

If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.



Oct 13, 2020 · 52:46
#20: UNESCO's developments on Artificial Intelligence and Gender Equality with Saniye Gülser Corat

"We need to make gender equality more explicit and we need to position gender equality principles in a way that provides for greater accountability." — Saniye Gülser Corat

Saniye Gülser Corat served as Director for Gender Equality at UNESCO from September 2004 to August 2020. She is the lead author of the landmark 2019 study "I'd Blush if I Could: Closing Gender Divides in Digital Skills in Education", which found widespread inadvertent gender bias in the most popular artificial intelligence tools for consumers and business. This report sparked a global conversation with the technology sector, culminating in a keynote address at the 2019 Web Summit in Lisbon, the largest annual global technology conference. As a result of her report and address, Gülser was interviewed by more than 600 media outlets around the world, including the BBC, CNN, CBS, ABC, the NYT, The Guardian, Forbes and Time. She published UNESCO's follow-up research in August 2020. This report, Artificial Intelligence and Gender Equality, is based on a dialogue with experts from the private sector and civil society and sets forth proposed elements for a framework on gender equality and AI for further consideration, discussion and elaboration amongst various stakeholders. The Digital Future Society named Gülser one of the top ten women leaders in technology for 2020.

During her tenure at UNESCO, Gülser launched special campaigns and programs for girls’ education in STEM and digital skills, the safety of women journalists, and the advancement of women in science. She successfully led change at UNESCO, convincing the 195 member states to recognize gender equity as a global priority for the organization, and achieving gender parity among senior leadership that stood at a mere 9% when she joined UNESCO.

Gülser has deep and broad cultural fluency across OECD and emerging markets, having run projects in her native Turkey, Europe, Canada, Southeast Asia, and sub-Saharan Africa. She holds graduate degrees from Carleton University (Canada) and the College of Europe (Belgium), and Executive Education certificates from Harvard Business School and Harvard Kennedy School.

Gülser is a TED and international keynote speaker. She serves on the boards of Women’s Leadership Academy (China), International Advisory Committee for Diversity Promotion, Kobe University (Japan), UPenn Law School Global Women’s Leadership Project (USA), and Exponent, a global gender equality incubator. Based in Paris, she is a strategic advisor to Coopersmith Law + Strategy for technology, education, multi-laterals, and gender equality. She speaks Turkish, English and French.

This episode is brought to you by Audible! Audible has the largest selection of audiobooks on the planet and is kindly offering AI Asia Pacific listeners two free audiobooks with a 30-day trial membership. Click here to browse the unmatched selection of audio programs, then download your free title and start listening! It's that easy. Looking for a book recommendation? We highly recommend AI Superpowers: China, Silicon Valley, and the New World Order, a 2018 non-fiction book by Kai-Fu Lee, an artificial intelligence pioneer, China expert and venture capitalist.

***

For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/.

If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

Oct 08, 2020 · 01:11:12
#19: Ethical by Design: Principles for Good Technology with Dr Matthew Beard

"If ethics frames and guides our collective decision-making, we can ensure we reap the benefits of technology without falling foul of avoidable, manageable shortcomings." — Ethical by Design: Principles for Good Technology

Dr Matt Beard is a moral philosopher with an academic background in applied and military ethics. He has taught philosophy and ethics at university for several years, during which time he has published widely in academic journals and book chapters and spoken at national and international conferences. Matt has advised the Australian Army on military ethics, including technology design. In 2016, he won the Australasian Association of Philosophy prize for media engagement, recognising his "prolific contribution to public philosophy". He regularly appears on television, radio, online and in print.

How do we ensure that the technology we create is a force for good? How do we protect the most vulnerable? How do we avoid the risks inherent in a belief in unlimited progress for progress' own sake? What are the moral costs of restraint - and who will bear the costs of slower development? 

This conversation covers the recent paper published by The Ethics Centre, which addresses the above questions by proposing a universal ethical framework for technology: Ethical by Design: Principles for Good Technology.

***

For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/.

If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch.

Oct 01, 2020 · 53:37
#18: Neuralink: Potential Legal and Ethical Implications with Dr Allan McCay

"If a person were to commit a crime by way of brain-computer interface, what would the ‘criminal act’ be?" — Dr Allan McCay

Dr Allan McCay teaches criminal law at the University of Sydney. He is a member of the Management Committee of the Julius Stone Institute of Jurisprudence at the University of Sydney Law School and an Affiliate Member of the Centre for Agency, Values, and Ethics at Macquarie University. He has previously taught at the Law School at the University of New South Wales and the Business School at the University of Sydney.

Allan trained as a solicitor in Scotland and has also practiced in Hong Kong with the global law firm Baker McKenzie. 

His first book, Free Will and the Law: New Perspectives, is published by Routledge. His second book (with Nicole Vincent and Thomas Nadelhoffer), Neurointerventions and the Law: Regulating Human Mental Capacity, is published by Oxford University Press.

He holds a PhD from the University of Sydney Law School and is interested in behavioural genetics, neuroscience, neurotechnology, and the criminal law. His philosophical interests relate to free will and punishment, and ethical issues emerging from artificial intelligence. In relation to legal practice, he is interested in behavioural legal ethics and the future of legal work.

His work has appeared in The Sydney Morning Herald, The Age and The Australian, and on Radio National, as well as in overseas and global media including The Independent (UK), The Statesman (India), The Huffington Post and The Conversation.

***

For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/.

If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch. 







Sep 23, 2020 · 52:45
#17: AI Human Impact: Toward a Model for Ethical Investing in AI-Intensive Companies

"What most matters today is the question about individuals and their own data: Is gathered personal information processed to invigorate self-determination and expand opportunities, or does it narrow possible human experiences?." — James Brusseau, AI Human Impact: Toward a Model for Ethical Investing in AI-Intensive Companies 

Does AI conform to humans, or will we conform to AI? In this conversation, James proposes an ethical evaluation of AI-intensive companies that allows investors to knowledgeably participate in that decision. The evaluation is built from nine performance indicators that can be analyzed and scored to reflect a technology's human-centering. When summed, the scores convert into objective investment guidance. The larger goal is a model for humanitarian investing in AI-intensive companies that is intellectually robust, manageable for analysts, useful for portfolio managers, and credible for investors.
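
To make the scoring-and-summing idea concrete, here is a minimal, hypothetical sketch; the indicator names, 0-10 scale and guidance thresholds below are illustrative assumptions, not the paper's actual model, which is set out in the full paper linked below.

```python
# Hypothetical sketch only: the indicator names, scales and thresholds below
# are illustrative assumptions, not the paper's actual model.
from dataclasses import dataclass
from typing import Dict

# Placeholder names for the nine human-centering performance indicators.
INDICATORS = [
    "self_determination", "transparency", "privacy", "fairness", "accountability",
    "safety", "sustainability", "governance", "wellbeing",
]

@dataclass
class CompanyAssessment:
    scores: Dict[str, float]  # analyst-assigned score per indicator, 0-10

    def total(self) -> float:
        # Sum the nine indicator scores into a single number (max 90 here).
        return sum(self.scores[name] for name in INDICATORS)

    def guidance(self) -> str:
        # Convert the summed score into coarse guidance; thresholds are made up.
        t = self.total()
        if t >= 70:
            return "strongly human-centering"
        if t >= 45:
            return "mixed"
        return "weakly human-centering"

# Example usage with invented scores:
assessment = CompanyAssessment(scores={name: 6.0 for name in INDICATORS})
print(assessment.total(), "->", assessment.guidance())  # 54.0 -> mixed
```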

For the full paper: AI Human Impact: Toward a Model for Ethical Investing in AI-Intensive Companies 

***

For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/.

If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch. 



Sep 14, 2020 · 43:14
#16: The State of AI Ethics

"It has never been more important that we keep a sharp eye out on the development of this field and how it is shaping our society and interactions with each other." — Abhishek Gupta, Montreal AI Ethics Institute 

The Montreal AI Ethics Institute is an international, non-profit research institute dedicated to defining humanity’s place in a world increasingly characterized and driven by algorithms. They do this by creating tangible and applied technical and policy research in the ethical, safe, and inclusive development of AI.

The Institute's goal is to build public competence and understanding of the societal impacts of AI and to equip and empower diverse stakeholders to actively engage in the shaping of technical and policy measures in the development and deployment of AI systems.

For the full report: The State of AI Ethics

***

For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/.

If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch. 

Sep 08, 2020 · 29:47
#15: We Need to Talk about A.I.

"My key takeaway would be that there is an urgency around the conversation. It doesn't matter how far away AGI is. AI is having an impact now. It's going to continue to have an impact. It's going to affect our lives. It's going to affect the lives of our children. We need to have the conversation now because we don't actually know how much time we have before it's too late to have the conversation." — Leanne Pooley 

Leanne Pooley has been a documentary filmmaker for over 25 years and has directed films all over the world.  In 2011 Leanne’s work was recognised by the New Zealand Arts Foundation and she was made a New Zealand Arts Laureate. Leanne was named an “Officer of the New Zealand Order of Merit”  for Services to Documentary Filmmaking in the 2017 New Year’s Honours List and she is a member of  The Academy of Motion Picture Arts and Sciences (The Oscars). 

Leanne is the director of the recently released documentary WE NEED TO TALK ABOUT A.I. for Universal Pictures and GFC Films. The documentary explores the existential risk and exponential benefits of Artificial General Intelligence.

Leanne has served as a judge for the International Emmy Awards,  is a voting member of the Documentary Branch of the Academy of Motion Picture Arts and Sciences (The Oscars),  has extensive teaching experience and has published several articles on documentary filmmaking.

***

For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/.

If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch. 

Sep 03, 2020 · 34:15
#14: Emotion AI: A Scientist’s Quest to Reclaim our Humanity

"Perhaps if ethics had been a mandatory part of the core curriculum of computer scientists, these companies wouldn't have lost the public trust in the way they have today. " — Rana el Kaliouby 

A pioneer in Emotion AI, Rana el Kaliouby, Ph.D. (@Kaliouby), is Co-Founder and CEO of Affectiva, and author of the newly released book Girl Decoded: A Scientist’s Quest to Reclaim Our Humanity by Bringing Emotional Intelligence to Technology

A passionate advocate for humanizing technology, ethics in AI and diversity, Rana has been recognized on Fortune’s 40 Under 40 list and as one of Forbes' Top 50 Women in Tech. Rana is a World Economic Forum Young Global Leader and a newly minted Young Presidents' Organization member, and co-hosted a PBS NOVA series on AI. 

Rana holds a Ph.D. from the University of Cambridge and a Post Doctorate from MIT.   

In this podcast, Rana shares her journey as she follows her calling – to humanize our technology and how we connect with one another. According to Rana, if the point of AI was to design smarter computers that could emulate human thought and decision making, our machines would need more than pure logic. Like human beings, they would need a way to interpret and process emotion.  

***

For show notes and past guests, please visit https://aiasiapacific.org/index.php/podcasts/.

If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch. 

Aug 31, 2020 · 39:48
#13: Humans and AI: Why Upskilling is Key

"My humble recommendation is to empower ourselves through upskilling and having an open mindset to become a life-long learner" — Carolyn Chin-Parry

Carolyn Chin-Parry is the current Woman of the Year from the Women in IT Asia Awards. She is a Managing Director and Digital Innovation Leader at PwC Singapore and also leads PwC's Asia Pacific Digital Upskilling Initiative for 84,000 employees in the region. Carolyn is an active contributor to PwC Singapore's Diversity & Inclusion Committee and provides pro bono digital upskilling for charities, NGOs and social enterprises. She is a Board Director for a charity, the Digital Industry Vice Chair for the Australian Chamber of Commerce in Singapore, and sits on the advisory boards of the Australian Institute of Company Directors, She Loves Data (a non-profit) and EGN. Carolyn is a former Chief Digital Officer and has led some of the largest transformation projects in Asia Pacific across multiple industries. She has previously been featured by The Economist, CIO Magazine, Standard Chartered Bank, Microsoft, IBM, Nomura, GovTech, SGInnovate and many more. In her free time, Carolyn enjoys time with her young family and actively researches technology to help under-represented communities.

In this podcast, we discussed pressing topics relating to how the current challenging times have impacted:

- the Future of Work

- the Future of the Workplace

- the casual workforce, which is disproportionately female (and what we can all do to help under-represented communities)

- how humans and AI have their own roles to play in the future and why upskilling is key


You can get in touch with Carolyn on LinkedIn

If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch. 


Aug 24, 2020 · 20:29
#12: How to build trust in AI

"Trust needs to be built" — Dr Antonio Feraco

In this conversation, we focused on what processes are available to encourage trust within AI systems. Antonio has a lot to share in this area, having been active in the space as TÜV SÜD explores the development of a quality framework. This is a significant contribution to the development of a future certification or accreditation process for AI systems.

Dr Antonio Feraco is Managing Consultant for Industry 4.0 at TÜV SÜD. He is responsible for supporting the process manufacturing sector in adopting technologies in the Industrial Internet of Things and Industry 4.0 space to enable end-to-end integration, improve HSE and optimise efficiency.

With a PhD in Artificial Intelligence, an MSc in Industrial Engineering and a PMP®, Antonio has run successful IoT and Industry 4.0 projects for oil and gas, mining and pharma, and has joined several large international initiatives in the EU and ASEAN as a digitisation and Industry 4.0 expert.

He worked with R&D, consultancy and technology advisory sector before and has comprehensive experience in AR and VR, AI, business process optimisation, project management, energy efficiency and robotics.

Antonio has also been an Adjunct Professor of Innovation Process Management at the University of Vitez since 2013. He delivers talks for industry and academia on both technical topics, such as predictive maintenance, and non-technical ones, such as management of innovation processes and technology transfer strategies.

You can get in touch with Antonio on LinkedIn or by email: Antonio.FERACO@tuv-sud.sg. 

If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch. 





Aug 04, 2020 · 48:06
#11: The Future of the Financial Industry in Light of AI

"We are going to see an accelerated shift from the west to the east as we come out of this pandemic" — Scott Bales

In this conversation, we covered a wide range of topics: from how technology startups need to manage their initial purpose to the future of the financial industry in a post-pandemic world. Scott discussed the latest developments in Singapore as it works towards becoming an AI hub and encouraging innovation. We discussed the potential for AI to bring a positive impact to the world. We also covered the challenges that arise from its development and how the collaboration between human and machine might unfold in the next few years, specifically the importance of education in shaping these changes.

Scott is a technology enthusiast, leading senior executive and global leader in the cutting-edge arena known as ‘The Digital Shift’, encompassing innovation, culture, design, and technology in a digital world. As a trusted strategic advisor, Scott thrives at the intersection of cultural and behavioural changes in the face of technology innovations and how those reshape industries. Scott enjoys helping leaders navigate accelerated change and complexity in the digital economy, and as a 'Digital Warrior', he has found a way to mesh a fascination with people and what motivates them with a raw enthusiasm for technology.

In a world where technology reigns, you must practice what you preach, and Scott does exactly that. He's a founding member of Next Money, a mentor to entrepreneurs across the world, sits on multiple boards and holds advisory positions at several startups. Scott previously worked as Chief Mobile Officer for Moven, the world's first-ever digital everyday bank, led Amazon Web Services' enterprise growth, built MetLife's innovation lab LumenLab, and built multiple fintech ventures that have surpassed US$100 million valuations.

As a multiple-time best-selling author, Scott has appeared at TEDx, Social Media Week, Google Think, Fund Forum, Asian Banker, Next Money and a long list of private events. His thought leadership has appeared in WIRED, Australian Financial Review and E27. 

You can get in touch with Scott on Twitter or LinkedIn, or find his latest book here.

If you have questions or are interested in sponsoring the podcast, please email us at contact@aiasiapacific.org or follow us on Twitter to stay in touch. 


Jun 16, 2020 · 51:26
#10: Webinar Series: The Next Wave of Innovation — Led by COVID-19

In this episode, we are sharing our latest webinar, the second in a series of dialogues on COVID-19. This was an enlightening discussion as our panellists, Vincent Vuillard from FutureWork Studio and Michelle Hancic from Pymetrics, shared their views on how COVID-19 might fuel the next wave of innovation and its impact on the future of work. Here are some of the important topics highlighted during the webinar:

1. Working from Home vs Working Anywhere - the changing landscape of where and when work is done, what it means for the future of the office.

2. Who owns the talent - COVID-19 has seen unprecedented collaboration between organisations which is challenging the traditional mindset that employees belong to the organisation.

3. Diversity & Inclusion - We are seeing women and minority groups losing their jobs at a faster rate than men. How can we address these challenges?

4. Mindset - Linear vs exponential shift, challenging the mindset around how work is done. Will COVID-19 result in a long-lasting change or will we fall back into old patterns?

For the full recording of the panel, head over here: https://lnkd.in/gTASBrA

Stay connected by signing up for our mailing list, following us on Twitter or sending us an email at contact@aiasiapacific.org.

Jun 02, 2020 · 01:23:03
#9: Ethics, Privacy and Trust by Design

"Do what is preferable, not acceptable. Preference is greater than acceptance" — Nathan Kinch

Nathan is the CEO of Greater Than X. He's the creator of Data Trust by Design and dabbles in startup investments when he can. Nathan has spent the bulk of his career grappling with the complexity and nuance of the rapidly evolving personal information economy. He's led work for governments, big tech, banks, telcos and startups, as well as research and policy institutes. He writes often and speaks at events all around the world.

In this conversation, we discussed the challenges in navigating ethical principles and how organisations can increase their trustworthiness by designing appropriate frameworks and placing social preferability as the goal. Along the way, we covered definitions of trust and ethics and some of the current challenges arising at the intersection of COVID-19 and technology, specifically the challenge of using AI to fight and manage the virus while respecting privacy and other digital rights.

Nathan offered suggestions on how we can encourage trust within organisations, and discussed the possibility of regulation and whether ethical frameworks can effectively have an impact.


May 26, 2020 · 49:52
#8: Determining our Digital Future in the Age of AI and in the Midst of COVID-19

"We have a real opportunity at this turning point of the digital revolution to make sure that we remember alternatives are possible" — Lizzie O'Shea

In today's episode, we discussed how different technologies are impacting us and how we can navigate these challenges in the age of AI. Lizzie discussed some of the technologies deployed to fight and manage COVID-19 and how we can use this challenging time to determine our digital future. 

Lizzie is a lawyer, writer, and broadcaster. Her commentary is featured regularly on national television programs and radio, where she talks about law, digital technology, corporate responsibility, and human rights. In print, her writing has appeared in the New York Times, Guardian, and Sydney Morning Herald, among others. 

Lizzie is a founder and board member of Digital Rights Watch, which advocates for human rights online. She also sits on the board of the National Justice Project, Blueprint for Free Speech and the Alliance for Gambling Reform. At the National Justice Project, Lizzie worked with lawyers, journalists and activists to establish a Copwatch program, for which she was a recipient of the Davis Projects for Peace Prize. In June 2019, she was named a Human Rights Hero by Access Now. 

As a lawyer, Lizzie has spent many years working in public interest litigation, on cases brought on behalf of refugees and activists, among others. She was proud to represent the Fertility Control Clinic in their battle to stop harassment of their staff and patients, as well as the Traditional Owners of Muckaty Station in their successful attempt to stop a nuclear waste dump being built on their land.

Lizzie’s book, Future Histories, looks at radical social movements and theories from history and applies them to debates we have about digital technology today. It has been shortlisted for the Premier’s Literary Award. When we talk about technology we always talk about the future, which makes it hard to figure out how to get there. In Future Histories, Lizzie O’Shea argues that we need to stop looking forward and start looking backwards. Weaving together histories of computing and social movements with modern theories of the mind, society, and self, O’Shea constructs a “usable past” that helps us determine our digital future.


May 11, 2020 · 41:07
#7: Ethicability: How to Decide What's Right and Find the Courage to Do it

"I don't believe that AI is ever going to be capable of resolving ethical dilemmas in a way that we can all agree about." — Roger Steare

Professor Roger Steare is internationally recognized as one of the leading experts advising boards and executive teams on building high-performing, ethical organizations. His work with BP after the Gulf of Mexico disaster was crucial to the company’s recovery plan, with Roger’s decision-making framework and leadership training endorsed within the US Department of Justice Consent Agreement of 2016. He has advised Barclays, HSBC, Lloyds Bank and RBS after the credit crisis, PPI mis-selling and Libor manipulation scandals, with his work publicly endorsed by the Financial Conduct Authority. He is the author of "ethicability" and "Thinking outside the inbox", and co-designer of MoralDNA, a psychometric profile that measures moral values and decision-making preferences, with a database of over 70,000 people from more than 200 countries. He has been described as a "disruptive", "provocative" and "world-class" keynote speaker on leadership, culture and ethics. His work has also been profiled in The Times, the Financial Times, The Wall Street Journal, Les Echos and The Guardian.

This conversation covered a wide range of topics, from the ethical challenges of AI and other technologies to Superintelligence. 

Got questions or interested in sponsoring the podcast? Please email us at contact@aiasiapacific.org

Mar 25, 2020 · 43:56
#6: AI to fight hiring bias
Mar 02, 2020 · 34:30
AI Asia Pacific Institute Podcast Trailer

Welcome to the AI Asia Pacific Institute Podcast. The rise of AI presents important legal and ethical challenges for society. In this podcast, we invite leaders from different industries and creators of new AI to debate the big questions.

Feb 20, 2020 · 00:40
#5: Innovation Tools

"Innovation connects the novelty of the invention with the purpose and value of entrepreneurship." — Evan Shellshear

In an exponentially changing world where cost is no longer such an inhibiting factor for innovating, companies can embrace techniques that not only drive innovation but do so in a low-risk and powerful way. In this conversation, Evan Shellshear shares some of these low-risk innovation techniques, along with examples and case studies.

Evan has been an entrepreneur for more than a decade and is the author of the best-selling book Innovation Tools. He has a passion for innovation, not just from a managerial perspective but also from a hands-on perspective. Evan has a PhD in Game Theory and is currently the Head of Analytics at Biarri, a world-leading consulting company building SaaS apps powered by advanced analytics.

Reach out to Evan on LinkedIn to ask him any questions around innovation and the podcast.

Sep 16, 2019 · 35:30
#4: What are knowledge-intensive organisations doing differently? The case of the Australian Centres of Research Excellence.

We stand on the brink of a technological revolution, the Fourth Industrial Revolution. What are knowledge-intensive organisations doing differently? In this conversation, we investigate the case of the Australian Centres of Research Excellence and what we can learn from them in the innovation space.

"This is one of the very promising alternatives that governments are looking at: how new technologies, such as artificial intelligence, can be used and further developed to meet greater goals in society." — Fabiana Barros

Fabiana was recently awarded her PhD from the University of Melbourne, where she investigated the organisational capacity of Centres of Research Excellence in Australia. In recent years she has worked at the LH Martin Institute (at the same university), focusing on research and innovation policy and management. Prior to that, she worked for many years in Europe as a project manager and fundraiser for international scientific projects involving European universities and research centres. She was a trainee at the European Commission in Luxembourg. She has a bachelor's degree in Computer Science and a master's degree in the field of Higher Education Policy and Management.

Aug 16, 2019 · 49:13
#3: Smart Cities: The Ethical Challenges and the Opportunities

By 2050, 6.5 billion people will choose to live in cities. These individuals will require employment and access to better healthcare from an infrastructure that is already extremely vulnerable. How can we use technology to create sustainable cities for the future? 

Aug 01, 2019 · 50:28