The Responsible AI Blog
Get the latest news and insights about Responsible AI and its transformative impact on today’s business landscape.
November 17, 2023
A new Stanford study reveals the lack of transparency from major AI companies, highlighting the urgent need for improved accountability.
November 10, 2023
Explore the future of healthcare as we delve into the applications and challenges of integrating AI. Understand potential benefits, risks, and the roadmap for a responsible AI approach.
October 18, 2023
Lumenova AI is now part of the Ethical AI Database (EAIDB), the only publicly available, vetted database of AI startups offering ethical AI services.
October 3, 2023
We're thrilled to announce a significant milestone in our journey: Lumenova AI has been included in OECD.AI's Trustworthy AI Tools.
September 15, 2023
This article discusses key insights from Accenture's report "Responsible AI In Finance: Navigating The Ethics Of Generative AI" on how the finance industry can responsibly leverage the transformative potential of Generative AI while navigating complex ethical challenges.
August 15, 2023
We are excited to announce that Lumenova AI has achieved its SOC 2 Type II attestation, setting new benchmarks in our unwavering commitment to Responsible AI.
July 6, 2023
Understanding the EU AI Act: Read our blog post for insights and best practices regarding Europe's AI Act.
June 29, 2023
Discover the essential takeaways from the European Union's AI Act. Read our blog post to find out more.
June 10, 2023
Unveiling the EU AI Act: A must-read blog post for businesses seeking insights on AI regulations in the European Union. Read more.
May 10, 2023
Learn more about integrating generative AI into your risk management process and why it is important to use this technology responsibly.
May 2, 2023
Is building an ML model with ChatGPT easy or just a myth? Our latest blog post explores this question and reveals the reality. Read on to find out more!
April 12, 2023
Learn more about the risks and rewards of Generative AI, and what to be on the lookout for in our latest blog post.
March 2, 2023
Adversarial examples are closely related to counterfactual explanations, yet their goal is fundamentally different. Find out more in our latest blog post.
February 27, 2023
Discover what counterfactual explanations are and why they are a great tool for explainability in our latest blog post.
February 22, 2023
Machine learning-powered algorithms are susceptible to a variety of adversarial attacks that aim to degrade their performance. Here's what you need to know.
February 3, 2023
The National Institute of Standards and Technology released the first version of its AI Risk Management Framework. Find out what it means for your organization.
January 26, 2023
Adversarial attacks and adversarial learning have become key focus points for data scientists and machine learning engineers worldwide. Find out why.
September 21, 2022
Transparent AI propels organizations forward by means of enhanced trust, performance, security, and compliance. Find out why business leaders should care.
September 14, 2022
AI transparency is becoming an important advantage for businesses that have it and a major roadblock for those that don’t. Read more on our blog.
September 7, 2022
Algorithmic bias and unfairness raise ethical concerns regarding the use of AI in real-life situations. Find out more.
August 31, 2022
In fair machine learning research, group and individual fairness measures are placed at distinct levels. Learn more about the nuances of algorithmic fairness.
July 29, 2022
As the problem of AI bias is becoming a global concern, companies should prioritize the implementation of a fair and equitable ML strategy. Find out more.
July 15, 2022
Find out why you should care about Responsible AI and how it can help your business meet new and upcoming legal requirements.
July 1, 2022
Find out why explainable AI is key to the future of machine learning and how companies can benefit from untangling the intricate logic of black box models.