The Responsible AI Blog
Get the latest news and insights about Responsible AI and its transformative impact on today’s business landscape.
Featured
April 25, 2024
What You Should Know: Canada’s Artificial Intelligence and Data Act
Dive into our blog post to learn more about Canada's AI and Data Act, including key actors, targeted technologies and core objectives.
April 23, 2024
Beware the Pitfalls: Why Skimping on AI-Specific Risk Management Spells Trouble
Did you know that neglecting AI-specific risk management could spell trouble for your organization? Learn why in our latest blog post.
April 19, 2024
Decoding the EU AI Act: Influence on Market Dynamics
Learn more about the EU AI Act and how its provisions are set to generate widespread effects throughout the EU AI market in our latest blog post.
April 12, 2024
Important Factors to Consider When Choosing a Responsible AI Platform
Responsible AI platforms offer comprehensive assessments and resilient strategies to align AI technology with legal standards and ethical principles.
April 11, 2024
What You Should Know: California Generative AI Procurement Guidelines
Dive into our latest blog post to find out everything you need to know about California's new guidelines on public sector procurement of Generative AI systems.
April 10, 2024
Lumenova AI: Now Featured on GOV.UK for AI Assurance Techniques
We're thrilled to announce a significant milestone in our journey – Lumenova AI has been included in the GOV.UK Portfolio of AI Assurance Techniques.
April 4, 2024
Decoding the EU AI Act: Laying the Groundwork for Standardized AI Regulation
Learn more about how the EU AI Act is laying the groundwork for standardized AI regulation in our latest blog post.
April 2, 2024
2024: The Year of Responsible Generative AI
In 2024 we expect new kinds of Generative AI to emerge. However, it’s also a pivotal year in the evolution of AI regulation. Find out more in our latest blog post.
March 29, 2024
What You Should Know About ISO 42001
Did you know? Early attempts at AI risk management standardization are now being made, with ISO 42001 being the first global AI risk management standard. Read on to learn more.
March 26, 2024
4 Types of AI Cyberattacks Identified by NIST
NIST identifies four major types of AI cyberattacks and offers mitigation strategies to protect, detect, respond, and recover.
March 14, 2024
Decoding the EU AI Act: Transparency Obligations, Governance, and Post-Market Monitoring
Learn more about the transparency obligations, governance roles and structures, as well as post-market monitoring procedures proposed by the EU AI Act.
March 11, 2024
From AI to Trustworthy AI: Connecting the Dots Between Innovation and Integrity
Our Responsible AI platform offers comprehensive assessments and resilient strategies to achieve Trustworthy AI through innovation and integrity.
March 7, 2024
NIST's Cybersecurity Framework 2.0 Was Officially Released. Here's What You Should Know
Learn more about NIST's Cybersecurity Framework 2.0 and how it can help your organization manage risk efficiently.
March 4, 2024
Decoding the EU AI Act: Regulatory Sandboxes and GPAI Systems
Learn more about the EU AI Act’s provisions concerning the implementation of regulatory sandboxes as well as the development and deployment of general purpose AI (GPAI) systems.
February 26, 2024
How to Get Started with Generative AI Governance
Read our latest blog post to learn how to get started with Generative AI governance in 2024.
February 22, 2024
Understand Your AI: Compliance & Regulatory Insights for AI in The Insurance Industry
Dive into our latest blog post to find out more about the latest regulatory insights for AI in the insurance industry.
February 20, 2024
Understand Your AI: Exploring the Opportunities and Risks of AI in Insurance
Read our latest blog post to find out more about how AI is shaping the future of insurance.
February 16, 2024
Decoding the EU AI Act: Regulatory Scope, Purpose and Impact
Read our latest blog post to find out more about the regulatory scope, purpose, and impact of the EU AI Act.
February 9, 2024
Lumenova AI Joins NIST's AI Safety Institute Consortium (AISIC)
Lumenova AI joins NIST's AI Safety Institute Consortium (AISIC), a new public-private partnership led by the National Institute of Standards and Technology (NIST) to support the development and deployment of trustworthy and safe AI.
February 8, 2024
The Progress of AI Policy in 2023 and Predictions for 2024
2023 was an exciting and ambitious year for AI policymaking. Find out more about the progress of AI policy in 2023 and our predictions for 2024.
January 29, 2024
What you need to know about NIST's AI Risk Management Framework
NIST released a framework to guide and manage the use of AI products, services, and systems.
January 23, 2024
Demystifying Responsible AI Governance: 4 Myths to Let Go Of in 2024
Read our blog post to uncover the truth behind Responsible AI Governance. Explore common myths, gain insights, and shape an ethical AI strategy for 2024.
January 19, 2024
AI in 2023: A Year in Review
As we gear up to face the challenges and opportunities of 2024, let's reflect on the most noteworthy AI advancements of 2023.
January 17, 2024
Key Takeaways: Regulatory Initiatives Concerning Automated Decision Making Technologies and Generative AI in California
Dive into our latest blog post to explore the key changes in the AI landscape driven by recent California state regulations.
January 16, 2024
Managing the Risk of Large Language Models (LLMs) in Financial Services
Learn more about mitigating risk and following regulation when using LLMs in financial services.
January 12, 2024
Case Study: How a Retail Bank Can Safely Leverage Generative AI with Lumenova AI
This case study examines how a retail bank can safely scale generative AI initiatives by implementing AI governance platform Lumenova AI.
January 10, 2024
Navigating the New Normal: Decoding Colorado's Trailblazing AI Regulation (SB 21-169)
Read more about Colorado's new AI regulation in our latest blog post. Stay informed on Responsible AI practices in the evolving landscape of AI Governance.
December 20, 2023
Navigating the AI safety landscape in Generative AI
As we continue to make strides in the realm of Artificial Intelligence (AI), the safety and ethical use of these systems remain paramount. This is especially true for generative AI models.
December 11, 2023
European Parliament Reaches Landmark Agreement on Artificial Intelligence Act
The European Parliament has achieved a significant milestone in the regulation of Artificial Intelligence (AI) with a provisional agreement on the much-anticipated Artificial Intelligence Act.
December 1, 2023
Responsible Generative AI: Principles for building ethical and trustworthy solutions
In this article, we explore key principles inspired by various industry-leading approaches to building responsible generative AI solutions.
November 28, 2023
President Biden's Bold Step: Inside the White House's New Executive Order Transforming AI
Dive into President Biden's bold AI executive order – read our blog post to uncover strategic steps, transformative impact, and the roadmap to a secure future.
November 17, 2023
New Stanford study reveals the lack of transparency from major AI companies
A new Stanford study reveals the lack of transparency from major AI companies, highlighting the urgent need for improved accountability.
November 10, 2023
The future of healthcare: Navigating the applications and challenges of AI
Explore the future of healthcare as we delve into the applications and challenges of integrating AI. Understand potential benefits, risks, and the roadmap for a responsible AI approach.
October 18, 2023
Lumenova AI Joins the Ranks of Responsible AI Innovators in EAIDB
Lumenova AI is now part of the Ethical AI Database (EAIDB), the only publicly available, vetted database of AI startups that offer ethical AI services.
October 3, 2023
Lumenova AI: A Proud Addition to the OECD.AI Database
We're thrilled to announce a significant milestone in our journey – Lumenova AI has been included in OECD.AI's Trustworthy AI Tools.
September 15, 2023
Navigating the Ethics of Generative AI in Finance: A Responsible Approach
This article discusses key insights from Accenture's report "Responsible AI In Finance: Navigating The Ethics Of Generative AI" on how the finance industry can responsibly leverage the transformative potential of Generative AI while navigating complex ethical challenges.
August 15, 2023
Lumenova AI Is Now SOC 2 Type II Compliant
We are excited to announce that Lumenova AI has achieved its SOC 2 Type II attestation, setting new benchmarks in our unwavering commitment to Responsible AI.
July 6, 2023
How Businesses Can Prepare for the EU AI Act
Understanding the EU AI Act: Read our blog post for insights and best practices regarding Europe's AI Act.
June 29, 2023
Europe's AI Act: Key Takeaways
Discover the essential takeaways from the European Union's AI Act. Read our blog post to find out more.
June 10, 2023
What Businesses Need to Know About the EU AI Act
Unveiling the EU AI Act: A must-read blog post for businesses seeking insights on AI regulations in the European Union. Read more.
May 10, 2023
Putting Generative AI to Work Responsibly
Learn more about integrating generative AI into your risk management process and why it is important to use this technology responsibly.
May 2, 2023
Machine Learning Model Building Made Easy with ChatGPT: Myth or Reality?
Is building an ML model with ChatGPT easy or just a myth? Our latest blog post explores this question and reveals the reality. Read on to find out more!
April 12, 2023
The Risks & Rewards of Generative AI
Learn more about the risks and rewards of Generative AI, and what to be on the lookout for in our latest blog post.
March 2, 2023
Adversarial Attacks vs Counterfactual Explanations
Adversarial examples are closely related to counterfactual explanations, yet their goal is fundamentally different. Find out more in our latest blog post.
February 27, 2023
Introduction to Counterfactual Explanations in Machine Learning
Discover what counterfactual explanations are and why they are a great tool for explainability in our latest blog post.
February 22, 2023
Types of Adversarial Attacks and How To Overcome Them
Machine Learning-powered algorithms are susceptible to a variety of adversarial attacks that aim to degrade their performance. Here's what you need to know.
February 3, 2023
NIST Releases New AI Risk Management Framework
The National Institute of Standards and Technology released the first version of its AI Risk Management Framework. Find out what it means for your organization.
January 26, 2023
Understanding Adversarial Attacks in Machine Learning
Adversarial attacks and adversarial learning have become key focus points for data scientists and machine learning engineers worldwide. Find out why.
September 21, 2022
Why Business Leaders Should Care About AI Transparency
Transparent AI propels organizations forward by means of enhanced trust, performance, security, and compliance. Find out why business leaders should care.
September 14, 2022
Why AI Transparency Is Essential to Building Trust
AI transparency is becoming an important advantage for businesses that have it and a major roadblock for those that don’t. Read more on our blog.
September 7, 2022
Bias and Unfairness in Machine Learning Models
Algorithmic bias and unfairness raise ethical concerns regarding the use of AI in real-life situations. Find out more.
August 31, 2022
Group Fairness vs. Individual Fairness in Machine Learning
In fair machine learning research, group and individual fairness measures are placed at distinct levels. Learn more about the nuances of algorithmic fairness.
July 29, 2022
The Nuances of Fairness in Machine Learning
As the problem of AI bias is becoming a global concern, companies should prioritize the implementation of a fair and equitable ML strategy. Find out more.
July 15, 2022
The Benefits of Responsible AI for Today's Businesses
Find out why you should care about Responsible AI and how it can help your business meet new and upcoming legal requirements.
July 1, 2022
Why Explainable AI Is Key to the Future of Machine Learning
Find out why explainable AI is key to the future of machine learning and how companies can benefit from untangling the intricate logic of black box models.