The Responsible AI Blog

Get the latest news and insights about Responsible AI and its transformative impact on today’s business landscape.

Featured

Putting Generative AI to Work Responsibly

May 10, 2023

Learn more about integrating generative AI into your risk management process and why it is important to use this technology responsibly.


Machine Learning Model Building Made Easy with ChatGPT: Myth or Reality?

May 2, 2023

Is building an ML model with ChatGPT really as easy as it sounds, or is that a myth? Our latest blog post explores the question and separates hype from reality.

Lumenova AI Is Now SOC 2 Compliant

April 25, 2023

We are excited to announce that Lumenova AI has achieved SOC 2 attestation, another milestone in our unwavering commitment to Responsible AI.

The Risks & Rewards of Generative AI

April 12, 2023

Learn more about the risks and rewards of Generative AI, and what to be on the lookout for in our latest blog post.

Adversarial Attacks vs Counterfactual Explanations

March 2, 2023

Adversarial examples are closely related to counterfactual explanations, yet their goals are fundamentally different. Find out more in our latest blog post.

Introduction to Counterfactual Explanations in Machine Learning

February 27, 2023

Discover what counterfactual explanations are and why they are a great tool for explainability in our latest blog post.

Types of Adversarial Attacks and How To Overcome Them

February 22, 2023

Machine learning-powered algorithms are susceptible to a variety of adversarial attacks that aim to degrade their performance. Here’s what you need to know.

NIST Releases New AI Risk Management Framework

February 3, 2023

The National Institute of Standards and Technology released the first version of its AI Risk Management Framework. Find out what it means for your organization.

Understanding Adversarial Attacks in Machine Learning

January 26, 2023

Adversarial attacks and adversarial learning have become key focus points for data scientists and machine learning engineers worldwide. Find out why.

Why Business Leaders Should Care About AI Transparency

September 21, 2022

Transparent AI propels organizations forward through enhanced trust, performance, security, and compliance. Find out why business leaders should care.

Why AI Transparency Is Essential to Building Trust

September 14, 2022

AI transparency is becoming an important advantage for businesses that have it and a major roadblock for those that don’t. Read more on our blog.

Bias and Unfairness in Machine Learning Models

September 7, 2022

Algorithmic bias and unfairness raise ethical concerns regarding the use of AI in real-life situations. Find out more.

Group Fairness vs. Individual Fairness in Machine Learning

August 31, 2022

In fair machine learning research, group fairness and individual fairness measures operate at distinct levels. Learn more about the nuances of algorithmic fairness.

The Nuances of Fairness in Machine Learning

July 29, 2022

As the problem of AI bias is becoming a global concern, companies should prioritize the implementation of a fair and equitable ML strategy. Find out more.

The Benefits of Responsible AI for Today's Businesses

July 15, 2022

Find out why you should care about Responsible AI and how it can help your business meet new and upcoming legal requirements.

Why Explainable AI Is Key to the Future of Machine Learning

July 1, 2022

Find out why explainable AI is key to the future of machine learning and how companies can benefit from untangling the intricate logic of black box models.

Accelerate your path to Responsible AI with Lumenova AI

Request demo