The Artificial Intelligence Podcast
By Dr. Tony Hoang
Apr 15, 2024
Adobe to Introduce AI-Powered Video Editing Tools for Premiere Pro
Adobe is developing generative AI video tools for its Premiere Pro video editing platform. These tools will allow users to generate and edit videos using text prompts, add or remove objects from videos, and extend the length of video clips. Adobe is also exploring integrations with third-party AI models from companies like Runway, Pika Labs, and OpenAI's Sora, which would give users more options. While a release date has not yet been announced, Adobe has showcased the capabilities of its own video model. The integration of AI models into Premiere Pro aims to give creative professionals a seamless editing experience and more creative possibilities.
US government adapts to AI, but concerns remain about job prospects
The U.S. government is taking steps to regulate AI and ensure society can adapt effectively to its introduction, although the regulatory environment appears relatively lenient compared to the European Union's. Klarna, a financial services provider, predicts that its AI assistant tool will increase profits by $40 million by the end of 2024. OpenAI's systems, which power Klarna's AI tool, have garnered attention from Congress. The U.S. Senate Task Force on AI has seen at least 15 of its bills enacted into law, but concerns remain about the potential impact of AI on job prospects. Some lawmakers have proposed robot taxes to limit job losses, but economists argue for setting them at a relatively low level to avoid hampering technological growth.
Air Force harnessing AI in wargaming for strategic advantage
The Air Force is exploring the use of artificial intelligence (AI) in wargaming to enhance decision-making and strategic analysis. Traditional wargaming methods can be expensive and limited in analysis, and AI has been identified as a potential solution. Lt. Gen. David Harris of the Air Force Futures office highlighted the use of AI assistants to run thousands of iterations in a single turn of wargaming, allowing for a better understanding of optimal solutions in different conflict scenarios. The Air Force is also interested in adjusting the levels of autonomy for future systems, particularly human-machine teaming, and is pursuing next-generation drones with advanced capabilities.
AI tool Tyche by MIT researchers allows multiple plausible segmentations for medical images
Researchers from MIT, the Broad Institute of MIT and Harvard, and Massachusetts General Hospital have developed an AI tool called Tyche that can generate multiple plausible segmentations for medical images. Unlike existing models that provide only one answer, Tyche allows users to select the most appropriate segmentation based on their purpose. It does not require retraining and can be used for various tasks like identifying lesions in lung X-rays or anomalies in brain MRIs. The researchers modified a neural network architecture to build Tyche, and testing showed that its predictions were diverse and often better than baseline models. The research was funded by the National Institutes of Health and Quanta Computer.
MIT researchers revolutionize AI safety testing with innovative machine learning technique
MIT researchers have developed a new machine learning technique to enhance the red-teaming process, which involves testing AI models for safety. The approach involves using curiosity-driven exploration to encourage the generation of diverse and novel prompts that expose potential weaknesses in AI systems. This method has proven to be more effective than traditional techniques, producing a wider range of toxic responses and improving the robustness of AI safety measures. The researchers aim to enable the red-team model to generate prompts covering a greater variety of topics and explore using a large language model as a toxicity classifier for compliance testing.
Virginia Congressman Pursuing Master's Degree in Machine Learning to Regulate AI
Rep. Don Beyer of Virginia, a 73-year-old congressman tasked with regulating artificial intelligence (AI), is pursuing a master's degree in machine learning at George Mason University. Beyer believes that lawmakers must have a deep understanding of AI to effectively regulate the industry. This reflects a broader effort by members of Congress to educate themselves about AI as they consider legislation that will shape its development. Lawmakers like Rep. Jay Obernolte of California, who chairs the House's AI Task Force, acknowledge the importance of understanding AI and regularly vote on bills related to complex topics. Rep. Beyer's decision to advance his education highlights the need for lawmakers to have a clear-eyed understanding of AI and its implications in various sectors.
AI Therapy App Called Therabot Offers Personalized Mental Health Care
Dartmouth College researchers are conducting a clinical trial for an AI therapy app called Therabot. The app, powered by generative AI technology, aims to provide mental health care to underserved populations. Unlike other therapy apps, Therabot uses AI to learn patterns and personalize advice for users. It has been trained on data from online peer support forums and traditional therapy sessions. The app's developers believe that it could help combat the mental health crisis and expand access to care. However, Therabot is not intended to replace human therapists but rather offer additional support. The team plans to conduct more trials and seek FDA approval.
AI Revolutionizing Financial Industry Hiring Process
Financial services giants like Goldman Sachs and Morgan Stanley are exploring AI tools that could potentially replace entry-level financial analyst positions. These tools can automate tasks such as compiling reports, crunching numbers, and interpreting data. Adopting AI could significantly reshape hiring strategies in the financial services industry and eliminate the need to hire thousands of college graduates. AI automation is already being used in the industry to increase efficiency and productivity, though implementing and managing these tools presents its own challenges. The financial industry is expected to be among the earliest adopters of AI technology, affecting CIOs and other employees in the sector.
Amazon CEO Andy Jassy Foresees Generative AI as Key Driver for Company's Growth
Amazon CEO Andy Jassy has highlighted the potential of generative artificial intelligence (AI) to drive the company's future growth in his annual letter to shareholders. Jassy believes that generative AI could bring significant societal and business benefits and compares it to the cloud and the Internet in terms of technological transformations. He emphasized that this revolution will be built on top of Amazon's existing cloud infrastructure. Jassy also discussed other areas like grocery, Prime Video, and the success of Amazon Web Services (AWS) and logistics operations. The letter highlights Amazon's commitment to cost-cutting and its investments in AI-related initiatives and space endeavors.
Meet Aerobotics: The AI Helping Farmers Boost Crop Yields
South African company Aerobotics is using artificial intelligence (AI) to help fruit and nut farmers boost their crop yields. Established nine years ago, Aerobotics is active in 18 countries, with the US being its biggest market. Its AI platform analyzes over 1 million fruit images per month and has mapped over 600,000 acres of US farmland. CEO James Paterson, who grew up on a fruit farm, developed an interest in improving farming operations using data. The company's AI software analyzes drone-captured images to detect individual fruits, predict crop yields, and assist with planning, reducing time spent on tasks such as pest monitoring.
Nvidia and Georgia Tech Unveil AI Supercomputer for Student Use
Nvidia and the Georgia Institute of Technology have partnered to create an AI supercomputer exclusively for student use. The supercomputer, powered by Nvidia's enterprise AI software and equipped with Nvidia GPUs, aims to provide students with access to high-performance computing resources. Initially available to undergraduate students, the supercomputer will eventually be expanded to include all undergraduate and graduate students. The collaboration reflects the growing importance of training the next generation of workers in the field of artificial intelligence. The supercomputer consists of 20 Nvidia HGX H100 systems with a total of 160 Nvidia H100 GPUs, offering computational capabilities for various projects in fields like computer vision, robotics, and more.
Google Cloud and Bayer team up to create AI platform for radiologists
Google Cloud and Bayer have partnered to develop an AI-powered platform that aims to assist radiologists in diagnosing patients more efficiently. The platform utilizes generative AI to identify anomalies within medical images and retrieve relevant information from a patient's medical history. It aims to address the growing labor shortage among radiologists and help them manage higher caseloads. The platform does not replace radiologists but supports them by providing necessary information and saving time spent searching through patient records. Google Cloud and Bayer's platform faces competition from other companies exploring AI applications in medical imaging, and establishing trust and ensuring technical accuracy and security will be crucial for its success.
AI-assisted Mammograms: A Game-Changer in Detecting Breast Cancer
Clinics in the US are offering mammogram readings by both radiologists and artificial intelligence (AI) models. AI tools can speed up radiologists' work and detect breast cancer earlier than traditional mammograms alone. While experts are excited about improved accuracy, concerns remain about effectiveness across diverse patients and the impact on breast cancer survival rates. AI models highlight suspicious areas on mammogram images and can reduce false positives. However, it is unclear whether AI analysis will reduce deaths from breast cancer or simply increase the number of earlier detections. Questions about the generalizability of European findings, the potential limitations of AI, and the lack of billing codes for insurers also pose challenges.
US Air Force to Deploy 1,000 AI Drones for Future Air Warfare
The US Air Force plans to deploy a fleet of more than 1,000 AI-operated drones for future air warfare. The drones, referred to as "loyal wingmen," will be led by a piloted jet and used in collaborative combat missions. To showcase the capabilities of these autonomous aircraft, Air Force Secretary Frank Kendall will personally fly in one of the modified F-16s during a flight. The aim of using AI-operated drones is to enhance the Air Force's capabilities while minimizing costs and reducing risks faced by human operators. These drones could be especially valuable in potential conflicts with China, where sending manned crews close to combat may be risky due to upgraded anti-access capabilities and advanced air defense systems.
Intel unveils powerful new AI chip Gaudi 3 to challenge Nvidia's dominance
Intel has announced the release of its latest artificial intelligence chip, Gaudi 3, in a bid to compete with Nvidia's dominance in the AI chip market. The Gaudi 3 chip is touted to be more power-efficient and faster than Nvidia's H100 GPU, offering improved performance for AI models. It can be configured in different ways and has been tested on various models, demonstrating its suitability for training and deploying AI models. Intel plans to make the chips available in the third quarter and has formed partnerships with companies like Dell and Hewlett Packard Enterprise. The release is part of Intel's strategy to expand its presence in the AI chip market.
UK Elections 2024: Cyber Experts Warn of AI-Powered Disinformation Threat
The United Kingdom is gearing up for local and general elections in 2024, which are expected to be highly contentious. Cyber experts warn that malicious actors may target these elections through disinformation campaigns aided by artificial intelligence (AI). Disinformation has been a significant cyber risk in previous UK elections, and state-backed cyberattacks are also expected to increase in the run-up to the vote. AI-generated deepfakes (synthetic images, videos, and audio) make it easier for malicious actors to spread false information. Cybersecurity experts stress the need for increased awareness and international cooperation to mitigate the threat of AI-powered disinformation. Tech giants like Meta, Google, and TikTok will play a crucial role in combating deepfakes and preventing the spread of misinformation.
AI College Advising Tool AVA Sparks Debate Among Students and Counselors
AI-powered college advising tools like AVA are gaining popularity and promising to assist high school students in their college admissions journey. AVA, an artificial intelligence-powered college counseling assistant, was introduced during a virtual roundtable held by the College Guidance Network (CGN). Supporters argue that these tools can provide guidance and support for students 24/7 and ease the workload of overworked counselors. However, critics worry that they may disadvantage students who need personalized counseling. The National Association of College Admissions Counselors (NACAC) partnered with CGN to launch AVA, aiming to bridge the equity gap in college counseling. Ethical concerns, including the potential for racial bias in AI chatbots, are also part of the discussion.
AI Essay Grading Raises Ethical Concerns
The use of artificial intelligence (AI) in grading essays is growing in popularity among teachers, but it raises ethical concerns. Teachers are using AI tools like ChatGPT to provide feedback and improve students' work, but personal feedback is still needed. A report by Tyton Partners found that half of college students and 22% of faculty members used AI tools in the fall of 2023. However, relying solely on AI for grading can undermine the teacher-student relationship, and there are concerns about uploading students' work to AI platforms. Clearer policies and ongoing discussions are necessary to ensure the ethical use of AI in education.
Survey Finds 41% of C-suite Executives Expect AI to Reduce Workforce
A global survey conducted by Adecco Group and Oxford Economics has found that 41% of C-suite executives expect artificial intelligence (AI) to result in a reduced workforce over the next five years. The study, which polled executives from nine countries, also revealed that 46% of respondents plan to redeploy affected employees internally, while two-thirds intend to hire individuals with AI skills. A World Economic Forum survey, however, found that while 25% of companies anticipate job losses due to AI, 50% believe the technology will create new jobs. Earlier this year, Dropbox and Duolingo cited AI as the reason for implementing layoffs.
Judge in Washington bans AI-enhanced videos as trial evidence
A judge in Washington has banned the use of AI-enhanced videos as evidence in a trial, stating that the enhancement process used by AI technology has not been peer reviewed and could lead to confusion and disputes. The ruling came in a case involving a man accused of killing three people, whose lawyers sought to introduce AI-enhanced cellphone video footage. The judge's decision highlights the challenges faced by lawmakers and legal professionals in incorporating AI into the justice system. It also underscores the need for clear guidelines and standards for the admissibility of AI-enhanced evidence in court. The ruling comes as governments work on policies to address AI risks.
Google's AI-Enhanced Search Feature Could Soon Carry a Price Tag
Google is reportedly considering charging for its AI-enhanced search features as part of a potential shift in the company's revenue model. The high cost of providing the service is said to be the main driving force behind the proposal, and other industry leaders are predicted to adopt subscription models to cover their expenses as well. Google's plan involves offering the new search feature exclusively to subscribers of its premium services, which are already required to access AI assistants in other Google tools. The expense of AI lies primarily in the computing power needed to train advanced generative models.
Samsung revolutionizes Bixby with generative AI upgrade
Samsung is planning to upgrade its voice assistant, Bixby, with generative AI technology to improve its conversational abilities. The upgrade aims to make Bixby smarter and more versatile for users, with the integration of generative AI allowing for more natural and engaging interactions. Samsung's executive vice president of its mobile business confirmed the company's dedication to bringing generative AI capabilities to Bixby. While generative AI is already available on Samsung's smart home appliances, the upcoming upgrade could have significant impacts on smartphones and smartwatches, enabling real-time feedback and assistance in various situations. Overall, this move by Samsung underscores its commitment to compete in the AI space.
US and UK Team Up for Groundbreaking AI Safety Partnership
The United States and the United Kingdom have partnered to focus on the science of AI safety. They will collaborate on research, safety evaluations, and guidance for AI safety, aiming to develop tests for advanced AI models. This partnership will align the approaches of the U.S. and UK AI Safety Institutes and accelerate the development of robust evaluations for AI systems. They plan to build a common approach to AI safety testing and conduct joint testing exercises. Both governments recognize the need for a shared approach to AI safety and are committed to developing similar partnerships with other countries. The collaboration will contribute to addressing the risks associated with AI while harnessing its potential.
AI Mimics Human Voices with Impressive Accuracy
OpenAI has unveiled Voice Engine, a tool that can mimic human voices with impressive accuracy. Users input text, and the AI-generated voice reads it in the chosen voice. OpenAI believes the tool could have various applications, such as translation services and assisting children and individuals who have lost the ability to speak. However, concerns have been raised that the tool could be misused to spread misinformation or facilitate scams. OpenAI is closely monitoring feedback and experiences from its trusted partners before deciding on widespread use. It also suggests phasing out voice-based authentication and implementing safeguards to prevent the generation of voices too similar to those of public figures. OpenAI has also announced the upcoming release of Sora, an AI video generation tool, and made its popular chatbot, ChatGPT, available without the need to sign up.
AI Revolutionizes Fast-Food Giant Yum Brands
Yum Brands, the parent company of popular fast-food chains Taco Bell, Pizza Hut, KFC, and Habit Burger Grill, is turning to artificial intelligence (AI) to transform its operations. With around 45% of Yum's sales being digital, the company is heavily investing in AI to increase its digital sales further. Yum Brands is integrating AI into various areas, such as employee training and customer interactions, through its SuperApp and other digital tools. The company also recognizes the importance of data in its AI strategy and aims to develop customer profiles and personalized offers to attract and retain customers. However, Yum faces challenges in integrating new technologies due to its outdated technology systems.
New Hampshire Takes Action Against Deceptive AI in Political Ads
The New Hampshire state House has passed a bill aimed at regulating the use of artificial intelligence (AI) in political ads. The move comes after voters in the state received robocalls with an AI-generated voice pretending to be President Biden. The bill, which passed without debate, would require political ads using deceptive AI to disclose their use of the technology. The motivation behind the bill is concerns about the dangers of AI in political campaigns. The bill is part of a larger trend of states introducing legislation to regulate AI in election-related content. The incident has also prompted calls for federal regulations to safeguard against AI election disinformation.
Founder's Mysterious Disappearance Raises Concerns Among Stability AI's Employees and Investors
Emad Mostaque, the founder and CEO of Stability AI, went missing in December 2022, causing concern among employees and investors. Although he reappeared and downplayed the incident, it highlighted deeper problems within the company. Mostaque's relationship with key investors, Coatue and Lightspeed Venture Partners, was crucial for the company's success. However, tensions escalated, ultimately leading to Mostaque's resignation in March 2024. Stability AI had gained attention for its open-source AI model, but issues with company structure, financials, and competition hindered its progress. The future of Stability AI is uncertain, and its downfall serves as a cautionary tale in the AI industry.
AI Tool VA-ResNet-50 Predicts Risk of Lethal Heart Rhythm
Researchers from the University Hospitals of Leicester NHS Trust have developed an artificial intelligence (AI) tool called VA-ResNet-50 that can predict an individual's risk of a lethal heart rhythm. The tool analyzes Holter electrocardiograms (ECGs) taken from adults during their daily routines and can identify ventricular arrhythmia (VA), a condition that can lead to sudden death if not treated promptly. The AI tool correctly identified VA in 80% of cases where patients experienced lethal arrhythmias. Current clinical guidelines for assessing the risk of VA are not accurate enough, leading to avoidable deaths. The findings highlight the potential of AI to improve risk assessment and treatment decisions in healthcare.
Visa Unveils Cutting-Edge AI Solutions to Combat Fraud in 2024
Visa has announced the launch of three AI-powered risk and fraud prevention solutions, set to be available in the first half of 2024. These solutions will be a part of Visa's Protect portfolio and aim to address the challenges faced by issuers in the digital fraud landscape. The solutions include Visa Deep Authorization, which manages card-not-present payments, Visa Advanced Authorization, which expands fraud risk management tools, and Visa Risk Manager, which focuses on instant payment processes. By utilizing advanced deep learning AI, these solutions will provide real-time risk scores, improving fraud defense mechanisms and reducing operational costs for financial institutions.
AI model SyntheMol develops six new drugs to fight antibiotic-resistant bacteria
Scientists from Stanford Medicine and McMaster University have developed an artificial intelligence model called SyntheMol that has generated structures and chemical recipes for six novel drugs to combat antibiotic-resistant strains of Acinetobacter baumannii bacteria. This bacterium is responsible for many antibiotic resistance-related deaths. The model was trained to construct potential drugs using a library of over 130,000 molecular building blocks and a set of validated chemical reactions. Out of the 58 compounds that were successfully generated and tested in the lab, six of them proved effective against A. baumannii and other antibiotic-resistant bacteria. The researchers plan to further test the compounds for toxicity and collaborate with other research groups for drug discovery in different areas.
AI search engines struggle to compete with Google, still have a long way to go
AI search engines have made strides in improving their capabilities, but they still have a long way to go to compete with Google. These AI tools, such as ChatGPT, Google Gemini, and Microsoft Copilot, lack a true understanding of what a search engine is and how people use one. While they can find information, they struggle to replicate all the functionality that Google offers. In tests comparing AI tools to Google, Google consistently provided faster and more accurate results. AI search engines excel at answering queries about buried information but fall short on complex queries and recommendations for what to watch. Overall, Google's multifaceted functionality and speed make it difficult for AI search engines to truly compete. However, advances in AI technology still hold potential for the future of search engines.
Uber Eats Courier Wins Settlement Over Racial Discrimination
In a recent case, a Black Uber Eats bike courier, Pa Edrissa Manjang, faced racial discrimination due to facial recognition checks and received a settlement from Uber. The facial recognition checks left Manjang unable to access the Uber Eats app, causing him to lose job opportunities. The case raises concerns about the lack of transparency and rushed implementation of automated systems, which can lead to biased outcomes. It also highlights the need for stronger enforcement of existing laws and dedicated AI safety legislation to address AI bias and protect against discrimination and human rights abuses.
Microsoft merges Windows and Surface divisions under new leader for focus on AI PCs
Microsoft has undergone a major shake-up, merging its Windows and Surface divisions under a new leader, Pavan Davuluri, as the company prepares for a focus on "AI PCs." The move follows frustration at the top that the previous split between the divisions was not working. Additionally, Mustafa Suleyman, co-founder of DeepMind, has been hired by CEO Satya Nadella to oversee Microsoft's consumer AI push as CEO of Microsoft AI, a hire that suggests the earlier reorganization of Windows and AI did not have the desired outcomes. The combination of Windows and Surface is expected to bring clarity to Microsoft's AI efforts for Windows.
Grindr's Controversial Plan to Introduce AI Chatbot and Monetize App with In-App Purchases
Grindr, the popular gay dating app, is facing financial challenges and is under pressure to increase revenue. The company plans to monetize the app and introduce new in-app purchases, including the development of an AI chatbot that can engage in sexually explicit conversations with users. Grindr's push for monetization reflects declining stock prices in the dating app industry due to slowed growth. However, some employees are concerned about the impact on user trust and privacy. The company is also facing allegations of inflating the number of paying users. As Grindr considers putting previously free features behind a paywall, there are concerns about the negative impact on the user experience.
Amazon's $2.75 Billion Investment in AI Startup Anthropic to Boost Cloud Computing Leadership
Amazon's recent $2.75 billion investment in AI startup Anthropic aims to solidify its position as a leader in cloud computing. However, analysts suggest the move may not be enough for Amazon to maintain its dominance. While Amazon's cloud computing service, AWS, currently holds a significant market share, competitors like Microsoft Azure and Google Cloud are gaining momentum in the AI space. Analysts have criticized Amazon for its underwhelming efforts in generative AI and its slow response compared to Microsoft's Copilot offering. Additionally, Amazon faces challenges in leveraging its infrastructure and competing with rivals' AI-focused chips, such as Google's TPUs. Nonetheless, analysts believe in AWS's leadership and its resilience to overcome these challenges. As competition intensifies, Amazon must deliver innovative AI solutions to differentiate itself from rivals.
AI chatbot in NYC providing misleading information; raises concerns about legality
New York City's AI-powered chatbot, intended to help business owners understand government regulations, has been found to provide misleading information that may encourage illegal activity. Five months after its launch, the chatbot has been observed offering incomplete and dangerously inaccurate guidance on housing policies, worker rights, and rules for entrepreneurs. For example, it incorrectly stated that landlords are not obliged to accept tenants with Section 8 vouchers or rental assistance, even though discriminating based on income source is illegal in the city. This finding raises concerns about the chatbot's reliability and underscores the importance of testing and verification before deploying AI-powered systems.
Microsoft's Copilot AI Service Boosted by Local PC Integration and Enhanced Features
Microsoft's Copilot AI service will soon be able to run locally on PCs, thanks to built-in neural processing units (NPUs) delivering over 40 trillion operations per second (TOPS) of performance. Running more elements of Copilot locally will reduce lag and may improve performance and privacy. Currently, Copilot runs primarily in the cloud, causing delays for smaller tasks. Intel's Lunar Lake chips, shipping in 2025, will have triple the NPU speed of its current chips. Microsoft is also expanding Copilot's capabilities in Teams, allowing it to pull insights from both meeting chat and call transcripts, help rewrite messages, and generate new messages based on chat context. Microsoft is also introducing features to improve hybrid meetings, such as individual video feeds for each attendee and automatic camera switching for the best view. These updates will roll out in the coming months.
AI Revolutionizes Eye Exams, Saving Time and Costs
Artificial intelligence (AI) is revolutionizing eye exams, particularly in the detection of diabetic retinopathy, the leading cause of blindness among working-age adults. An AI algorithm implemented in clinics is able to perform eye exams using retinal images, providing quick diagnoses without the need for a doctor to be present. This saves time, reduces costs, and increases accessibility for patients. Several companies have received FDA approval for their AI eye exams, and the Centers for Medicare & Medicaid Services (CMS) has provided a reimbursement code to facilitate adoption. The technology is evolving, with ongoing studies exploring new applications and potential advancements in screening for other eye diseases.
Meta's new tech update transforms Ray-Ban glasses into A.I.-powered smart spectacles
Tech company Meta is set to release a major update for its Ray-Ban Meta camera glasses, incorporating artificial intelligence (A.I.) software. Priced at $300 and up, the glasses can now scan landmarks, translate languages, and identify objects such as animal breeds and exotic fruits, among other tasks. Activated by saying "Hey, Meta," followed by a request, the A.I. delivers responses through the glasses' speakers in a computer-generated voice. Early testers found the glasses both amusing and impressive in their abilities, although there may still be limitations and room for incorrect information. The incorporation of A.I. software gives users a unique and interactive way to engage with the world.
Microsoft reinforces security measures for AI chatbots combating malicious attacks
Microsoft has implemented new security measures to safeguard its AI chatbots from malicious attacks. The tools, integrated into Azure AI, aim to prevent prompt injection attacks that manipulate the AI system into generating harmful content or extracting sensitive data. Microsoft is also addressing concerns about the AI system's quality and reliability, with prompt shields to detect and block injection attacks, groundedness detection to identify AI "hallucinations," and safety system messages to guide model behavior. The collaboration between Microsoft and OpenAI has been crucial in training AI models on diverse datasets and propelling generative AI forward. These measures underscore Microsoft's commitment to responsible AI usage.
Writers Unions Urge Senate Leader for AI Protections
The Writers Guild of America (WGA) and other unions representing film and TV writers and journalists have sent a letter to Senate Majority Leader Chuck Schumer, requesting protections for their industries in relation to Artificial Intelligence (AI) legislation. The letter raises concerns over the use of AI-generated content to replace journalists' work, unauthorized use of writers' work by AI developers, and the need for safeguards to protect the rights and compensation of creative professionals. The unions call for immediate action to prevent the abuse of AI technology and ensure the preservation of writers' work in the face of advancing technology.
AI Ph.D. graduates favoring tech giants over open innovation poses challenges for AI development
AI Ph.D. graduates are increasingly choosing to work for large technology companies rather than in academia, a trend that could undermine open innovation. The debate between open and closed AI models is overshadowing a broader understanding of openness in AI: open science, transparency, and equity are all crucial for developing AI that serves the public interest. The Partnership on AI exemplifies open innovation by bringing together academia, civil society, industry partners, and policymakers to collaborate on AI challenges, while public funding and open publication of research help sustain an open ecosystem. Recent data, however, shows a decline in AI Ph.D.s pursuing academic careers, limiting the availability of open models and research. Transparency and accountability are important for responsible and safe AI deployment. Building an open ecosystem that fosters scientific integrity and resilience will require investment in skills and education, along with the incorporation of diverse voices and perspectives.
NYPD to Test AI Metal Detectors in NYC Subway Stations
The NYPD plans to test metal detectors equipped with AI technology in subway stations to address the presence of guns in the transit system. The scanners, manufactured by Evolv, are being deployed in response to an increase in subway-related killings, following the deployment of National Guard soldiers and an increased police presence in the subway system. The scanners will not use facial recognition technology, will be mobile within the stations, and will not be mandatory for riders. The exact number and locations of the scanners have not been disclosed, and any decision to expand their use will depend on the results of the testing phase. Evolv is currently under investigation for overstating the capabilities of its technology.
AI-Driven Deepfakes Pose Identity Verification Challenges for Businesses
The rise of AI-driven deepfakes is creating challenges for identity verification, forcing CIOs and IT leaders to understand the impact on identity management processes. The process typically involves submitting a photo ID and taking a selfie for biometric comparison. However, deepfakes pose a risk by allowing attackers to present fraudulent identities. To combat this, organizations should ensure their identity verification vendor uses robust liveness detection during the selfie step, actively prompting users or passively evaluating micro movements. Additionally, leveraging AI can enhance the process by training algorithms to detect deepfake attacks more accurately and addressing issues of demographic bias. Collaboration and vigilance through partnerships and bounty programs are also crucial for staying ahead of potential deepfake adversaries.
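The biometric comparison step described above can be sketched as an embedding-similarity check. This is an illustrative simplification, not any vendor's actual method; the function names, the cosine-similarity metric, and the 0.8 threshold are all assumptions, and a real system would pair this with the liveness detection the article describes.

```python
import math

def cosine_similarity(a: list[float], b: list[float]) -> float:
    """Cosine of the angle between two face-embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return dot / norm

def same_person(id_embedding: list[float],
                selfie_embedding: list[float],
                threshold: float = 0.8) -> bool:
    """Accept the match only if the photo-ID and selfie embeddings
    are sufficiently similar (threshold is a hypothetical value)."""
    return cosine_similarity(id_embedding, selfie_embedding) >= threshold
```

The weakness deepfakes exploit is that a convincing synthetic selfie can produce an embedding close to the stolen ID photo's, which is why robust liveness detection, whether active prompts or passive micro-movement analysis, matters more than the similarity check alone.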
AI Misuse in Schools Sparks Outrage and Calls for Regulation
The growing popularity of artificial intelligence (AI) is causing concern in schools as instances of its misuse emerge. One high school student in Illinois used AI to create and circulate nude photos of classmates, sparking outrage among parents and prompting calls for stricter regulation. Similar incidents have been reported across the country, leading to a debate about policing AI use in schools. Some states have implemented laws against non-consensual creation of explicit images using AI, but legal experts argue that charging young individuals with felonies is excessive. Experts recommend that schools establish clear rules to prevent future misuse of AI technology.
5 Key Steps for Businesses to Safely Implement Artificial Intelligence
Businesses looking to adopt artificial intelligence (AI) securely should follow five key steps. AI integration has the potential to transform many aspects of business operations, but it also introduces new risks and security implications; a primary concern is identity management, as cyber attackers increasingly use AI for sophisticated fraud. The five steps are: defining the organization's stance on AI and its impact on cybersecurity, fostering effective communication and awareness, changing the approach to AI adoption, demonstrating AI's business value, and staying current with the evolving risk landscape. AI can also help bolster cybersecurity efforts and offset the shortage of cybersecurity workers.
Turkey on high alert for AI-generated disinformation ahead of elections
Concerns are rising in Turkey ahead of the nationwide local elections on March 31, as disinformation and fake media created through artificial intelligence (AI) are becoming more prevalent. Manipulated images and videos are being used by some politicians for electoral advantage, raising fears about media manipulation in an election where the ruling Justice and Development Party (AK Party) is aiming to retake cities won by the opposition in 2019. The director of fact-checking project Teyit warns that "cheap fake" videos are more common than AI-created disinformation and pose a significant threat. The Turkish government passed a law last year criminalizing the dissemination of false information to combat disinformation.
Biden Administration Unveils New AI Regulations
The Biden administration has introduced three new policies to regulate the use of artificial intelligence (AI) within the federal government. These policies address concerns about the risks associated with AI and aim to protect citizens while promoting responsible implementation. The policies require federal agencies to ensure that their use of AI does not jeopardize the rights and safety of Americans, publish a list of AI systems being used and assess associated risks, and designate a chief AI officer to oversee AI utilization. The administration hopes that these policies will serve as a blueprint for global action on AI regulation.
Biden administration weighs "nutrition labels" for AI tech products
The Biden administration is considering the introduction of "nutrition labels" for new tech products utilizing artificial intelligence (AI). These labels would provide standardized descriptions for AI systems, similar to food labeling by the Food and Drug Administration. The Department of Treasury and Department of Commerce have released reports examining the risks of AI and are exploring the possibility of implementing these labels. Challenges may arise due to differing opinions on defining AI. The Treasury Department plans to collaborate with various organizations to investigate AI's impact and address risk and fraud issues, with the aim of enhancing transparency and accountability in AI technology use.