April 2, 2024

2024: The Year of Responsible Generative AI

There’s no question that Generative AI took center stage in 2023, rapidly transitioning from a technology reserved for innovators and early adopters to a tool readily available for curious minds everywhere.

OpenAI’s release of ChatGPT in November 2022 merely set the initial benchmark, only to be consistently surpassed throughout the course of 2023. Soon enough, Google, Microsoft, Anthropic, alongside numerous other companies, both large and small, entered the fray with a flurry of new releases, captivating updates, and substantial funding announcements.

If you’re interested in a comprehensive overview of the dynamic AI landscape of 2023, feel free to check out our previous blog post dedicated to the topic.

Looking ahead, while 2023 was undoubtedly the year when Generative AI captured everyone’s imagination, 2024 will be the year of Responsible AI. Read on to find out why.

Generative AI Has Raised the Stakes, Amplifying the Significance of Responsible AI

AI certainly isn’t new, and neither is the idea of designing a responsible system. Before we could log into ChatGPT and ask the chatbot to help us write a better email, AI already played its part in our lives, albeit in a more discreet way.

Before Generative AI seized the spotlight, AI quietly powered our Netflix movie suggestions, populated our social media timelines, and fueled our digital assistants. It underpinned every Google query, orchestrating and refining our search results.

So what’s different now?

Generative AI Crossed the Chasm

It’s safe to say that Generative AI achieved a feat that other AI systems didn’t: it successfully crossed the chasm.

In his book Crossing the Chasm, Geoffrey Moore describes how high-tech products and solutions usually follow a specific trajectory post-release, transitioning from early adopters to mainstream acceptance by bridging the innovation-adoption gap.

Crossing the Chasm

Up until the release of ChatGPT, AI tools primarily operated within the remit of technical professionals, such as data scientists, machine learning engineers, and AI developers. However, the broader mainstream market, comprising the early and late majority, remained largely disconnected from direct engagement with these tools.

Generative AI disrupted this pattern by democratizing AI, making it accessible to individuals with little or no technical background.

Unlike other AI-based technologies—such as autonomous vehicles, for example—that have yet to bridge the AI chasm, Generative AI has become a tool for a broader audience.

With Great Power Comes Great Responsibility

The widespread adoption of Generative AI also brought with it a growing number of risks and instances of misuse. In 2023, we witnessed a spike in lawsuits pertaining to copyright infringement, sophisticated deepfake scams, and various other forms of manipulation.

October 18, 2023: Universal Music Publishing Group, Concord Music Group, and ABKCO filed a lawsuit against AI company Anthropic, claiming that Anthropic unlawfully used copyrighted song lyrics as training data for its models.

December 27, 2023: The New York Times initiated legal action against OpenAI and Microsoft for purported copyright infringement. The lawsuit alleges that millions of articles published by The Times were used to train automated chatbots, which are now competing with the news outlet as a source of reliable information.

March 11, 2024: Authors Brian Keene, Abdi Nazemian, and Stewart O’Nan sued Nvidia, alleging unauthorized use of their copyrighted books to train its AI model NeMo.

And the list goes on.

Hence, 2023 also marked the year when the necessity for AI regulation became apparent to all stakeholders, including OpenAI’s CEO Sam Altman, who called for AI regulation at the Senate hearing held on May 16, 2023.

In a similar vein, the sentiments expressed by Steven Mills, Global Chief AI Ethics Officer at Boston Consulting Group, on the Regulating AI: Innovate Responsibly podcast hold considerable weight:

‘Clear guardrails can actually be accelerators of innovation. The biggest thing that slows it is uncertainty. […] If you ask a race car driver why race cars go so fast, they will tell you: it’s because I have really good brakes that I trust.’

After all, following Tim Berners-Lee’s launch of the first website on August 6, 1991, the world soon saw the advent of SSL, the first widely adopted layer of web security. It was only through this additional layer of security that users could fully harness the benefits of the World Wide Web.

In 2023, Generative AI demonstrated the potential to revolutionize business operations. Yet the same year also underscored the importance of navigating a complex labyrinth of ethical challenges to fully realize its transformative power. Therefore, 2024 is the year when prioritizing the adoption of enterprise-wide Responsible AI practices becomes crucial.

AI Regulation is On the Rise

While AI governance has long been a significant concern, the rapid integration of Generative AI into the fabric of daily operations has created a sense of urgency for AI regulators, policymakers, and safety researchers, who are now working fervently to craft robust policies.

These policies aim to harness the benefits of AI while effectively managing and minimizing its inherent risks.

Overall, 2023 proved to be a dynamic and forward-looking year for AI policymaking. From the introduction of the EU Data Act and the UK’s AI Safety Summit to the White House’s Executive Order on Safe, Secure, and Trustworthy AI and the EU AI Act, significant strides were made in laying the foundation for regulating AI development and deployment.

These developments will continue in 2024. As the AI policy landscape is rapidly evolving, it’s imperative for businesses to initiate the development of internal AI governance and Responsible AI protocols.

On our blog, we’ve delved extensively into topics concerning AI regulation, and we will continue to do so on a weekly basis.


What’s next for Generative AI in 2024?

From LLM to Multimodal Generalist Agent

Since its release in 2022, we’ve witnessed ChatGPT progress from a relatively basic—by current standards—Large Language Model (LLM) to a sophisticated, albeit still flawed, generalist agent capable of performing an increasingly wide variety of tasks across different domains. In other words, ChatGPT isn’t just useful for text and code-based tasks anymore, but also for other functions like data analytics, intelligent search, text-to-image generation, voice-to-text transcription, and customization.

While other frontier AI developers such as Anthropic and Google have been a bit slower to develop a similarly wide variety of multimodal features, we expect that mounting competitive pressures will push Generative AI developers to rapidly enhance the capabilities repertoire of their models—2024 will be the year of multimodal Generative AI, especially as integration with major tech platforms like Office 365 and Azure continues.

Moreover, advancements in Large Audio, Video, and “X” Models (LAMs, LVMs, LXMs), which we’ll discuss later on, also lay the foundation for potentially groundbreaking AI integration efforts, whereby 2024 could yield the first truly generalist AI systems capable of interpreting and generating content across language, audio and video domains, and executing some real-world actions.

The open-source world deserves some attention too—LLMs like Falcon 180B and LLaMa 2, which are impressive in their own right, can be fine-tuned to accommodate specific kinds of functions or tasks. For organizations that adopt an open-source approach to AI integration, multiple fine-tuned open-source models could be combined to create AI systems with far broader capabilities, while post-training enhancements could improve models’ adaptability to certain tasks and environments without compromising model architecture or training data integrity.
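To make the "multiple fine-tuned models combined into one system" idea concrete, here is a minimal, hypothetical sketch of a router that dispatches each prompt to one of several specialist models. The model names, the keyword-based routing rule, and the stub handlers are all illustrative assumptions—a production system would route with a learned classifier and call real model endpoints.

```python
# Toy sketch: route prompts across several (stubbed) fine-tuned specialists.
# Checkpoint names and routing keywords are hypothetical, for illustration only.

from dataclasses import dataclass
from typing import Callable, Dict


@dataclass
class Specialist:
    name: str                      # e.g. a hypothetical fine-tuned checkpoint
    handle: Callable[[str], str]   # stand-in for a real inference call


def build_router(specialists: Dict[str, Specialist], default: str) -> Callable[[str], str]:
    """Return a function that routes a prompt via simple keyword matching."""
    def route(prompt: str) -> str:
        lowered = prompt.lower()
        for keyword, spec in specialists.items():
            if keyword in lowered:
                return spec.handle(prompt)
        return specialists[default].handle(prompt)  # fall back to a generalist
    return route


# Stub "models" standing in for fine-tuned open-source checkpoints.
specialists = {
    "code": Specialist("falcon-180b-code-ft", lambda p: f"[code model] {p}"),
    "summarize": Specialist("llama-2-summarize-ft", lambda p: f"[summary model] {p}"),
    "chat": Specialist("llama-2-chat", lambda p: f"[chat model] {p}"),
}
router = build_router(specialists, default="chat")

print(router("Please summarize this report"))
```

The design point is that each specialist stays independently fine-tuned and swappable; only the thin routing layer needs to change as new checkpoints are added.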

Integration at the Edge

To leverage Generative AI features today, you typically need access to a Generative AI model, regardless of whether it’s proprietary, private, or public. But, in 2024, this will change dramatically as producers of edge devices, such as smartphones and wearables, begin implementing many more native Generative AI features within their products.

Samsung is one such example. The latest Galaxy S24 smartphone series now possesses several built-in AI features, ranging from real-time translation and transcription to generative photo editing, chat assist, and circle to search, which allows you to circle any part of an image, video, or text and search it instantly. Similarly, Google, as the first company to integrate AI features into smartphones via its Pixel 8, has now introduced Gemini Nano—a version of Google’s Gemini model designed specifically for the Pixel 8—that powers AI features like conversation summarization and smart reply, which enables auto-generation of contextually appropriate text messages.

By contrast, Apple has taken a slower approach to Generative AI product integration, but this isn’t to say its AI objectives are any less ambitious than those of its competitors. The company is currently considering potential partnerships with Google or OpenAI regarding the integration of their models—Gemini and ChatGPT, respectively—into iPhones. Moreover, Tim Cook, Apple’s CEO, recently hinted that iOS 18 (the newest version of Apple’s operating system) will support several new and powerful AI features.

On the side of wearables, Garmin, a leading producer of smart-watches and fitness trackers, is now working with NeuroBrave, a Generative AI company, to develop integrated recommender systems that leverage wearable data to provide personalized lifestyle recommendations, targeting aspects like a user’s chronic stress or anxiety. Other popular wearables companies, such as FitBit and Zepp Health, have also indicated plans to integrate advanced AI features into their products.

In 2024, Generative AI integration with edge devices will scale substantially, especially as it concerns smartphones, wearables, and digital assistants like Amazon’s Alexa. However, by the end of 2024, we’re also likely to begin seeing sophisticated Generative AI features emerge in other domains of edge technology, such as electric vehicles, smart homes, autonomous weapons, facial recognition systems, and factory robotics, to name a few.

New Kinds of Generative AI

LLMs have taken the world by storm, democratizing advanced AI through ease of use and accessibility. LLMs may not be perfect yet, and state-of-the-art systems will surely undergo numerous improvements in 2024, but one thing is clear: LLMs have reawakened the world to the immense power and potential of AI.

Right now, we’re in an AI summer—lots of AI projects are receiving funding and support, and most organizations are extremely excited about the possibilities associated with AI integration. On the other hand, rapidly emerging AI regulations are also pressuring some AI developers and researchers to accelerate AI innovation efforts due to uncertainties about what kinds of AI developments will be allowed in the near future.

Regardless of what’s motivating AI innovation, the following point still stands: 2024 will witness the widespread emergence and commercialization of new kinds of Generative AI, namely LAMs, LVMs, and LXMs. Simply put, 2024 will be the year in which we’re introduced to the LLM’s extended family.

Large Audio Models

LAMs like OpenAI’s Whisper—an open-source multilingual speech recognition model—and Meta’s AudioCraft—a platform containing several open-source audio models, most notably AudioGen and MusicGen, which can perform text-to-sound and text-to-music operations—will see a steep rise in popularity in 2024 for a few important reasons.

For one, speech recognition already plays a prominent role in how users operate frontier Generative AI models like ChatGPT and Gemini, which let users both speak their prompts and listen to model outputs. As these models are adopted globally or in targeted contexts such as special education or medicine, the need for multilingual and versatile speech recognition will increase accordingly. Moreover, tech giants like Google, OpenAI, Meta, and Amazon operate within international markets, so they have strong incentives to ensure that any AI products they ship are accessible and useful to non-native English speakers.

Second, the competitive pressure to integrate Generative AI features into edge devices, especially those that leverage virtual assistants like Siri or Alexa, will only continue to increase. Imagine being able to operate virtually any function on your smartphone or smartwatch, from taking a picture to summarizing your fitness data, using only your voice—Siri and Alexa can already do a lot, but people always want more.

Third, AI features like real-time translation and transcription are highly desirable in a remote work setting, especially considering that work-related video calls in the US have increased by 50% since 2020, while Forbes estimates that by 2025, approximately 22% of the American workforce will be remote. Real-time translation and transcription features aren’t only excellent for summarizing and interpreting recorded meetings and conversations, but also for organizations that want to expand their reach by streamlining and enabling collaboration between international teams. In this context, it’s no surprise that collaborative work platforms like Zoom and Slack now offer built-in features and integration options for third-party real-time translation and transcription apps.

Large Video Models

Those who’ve been keeping up with the latest AI innovations may be aware of Sora, OpenAI’s recently unveiled LVM, capable of generating hyperrealistic and animated videos up to one minute in length from simple text-based prompts. Similarly, in November 2023, Stability AI announced its Stable Video Diffusion model, which allows users to leverage text and image inputs to produce vivid, high-quality videos between two and five seconds long. Google and Meta are also developing their own LVMs, Lumiere and Emu Video, respectively. However, none of these models—except for Emu Video—are commercially available yet, though they have been made accessible to select researchers and artists for performance and safety evaluations.

State-of-the-art LVMs are likely to become available in the latter half of 2024, once the risks associated with their deployment—primarily the ability to create deepfakes or inherently persuasive and manipulative audio-visual content, especially during election cycles—are addressed. Looking slightly further ahead, it’s not hard to understand why LVMs could take off this year.

According to Goldman Sachs, the global content creator economy is expected to grow at a compound annual rate of 10-20% over the next five years and could reach a total value of over $500 billion by 2027. Moreover, the continued popularity of platforms like TikTok, coupled with “reels” and “threads” features on legacy sites like YouTube and Instagram, underscores the incentive to maintain a vibrant and prolific digital creator economy. Content creators have to put out an extremely high volume of content to remain relevant, much of which takes the form of sub-60-second videos, and social media platforms are continually updating their feature suites. In this environment, LVMs can and will emerge as an essential tool for future content creators.

Similarly, LVMs are poised to have a profound impact on the entertainment industry—in particular, on streaming service providers like Netflix and Hulu, which, having experienced significant declines in their 2023 market share, mainly due to competitors like Paramount Plus, are scrambling to find new and effective ways to cut costs while remaining competitive. LVMs won’t be able to produce full TV series, or even short films, by the end of 2024, but they could help streaming providers envision the scope and nature of future content projects much more quickly and efficiently, fast-tracking the content-development process.

Finally, the virtual and augmented reality sector, as well as the gaming industry, could benefit enormously from LVMs, especially when considering how they might be integrated with devices like the Apple Vision Pro, Meta’s Metaverse platform, or world-building systems like Unreal Engine 5. Seeing as this kind of integration would be very complex and inspire a variety of novel risks, we may begin witnessing early integration efforts in late 2024, though 2025 is admittedly much more likely.

Large “X” Models

What if you could, through simple text or voice-based commands, instruct an AI system to perform real-world tasks for you, like brewing your morning coffee or doing your laundry? This is what LXMs strive to do, by combining the capabilities of Generative AI with robotics. While there’s still plenty of work to be done in this domain, we’re no longer operating in the realm of science fiction—within the next decade, over 40% of household activities are likely to be automated, while most manufacturers indicate plans to automate at least 80% of their work-related tasks. For tasks requiring manual labor, whether paid or unpaid, the potential utility of LXMs is undeniable.
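The core loop an LXM would perform—turning a plain-language command into an ordered sequence of physical steps—can be sketched in a few lines. This is a deliberately toy illustration: the task vocabulary, the action names, and the keyword matching are all hypothetical stand-ins for what, in a real system, would be a learned model driving actual actuators.

```python
# Toy sketch of the LXM idea: map a natural-language command to an ordered
# plan of primitive robot actions. Task names and action primitives are
# hypothetical, for illustration only.

ACTION_PLANS = {
    "coffee": ["locate_mug", "fill_water_reservoir", "brew", "pour_into_mug"],
    "laundry": ["collect_clothes", "load_washer", "add_detergent", "start_cycle"],
}


def plan_from_command(command: str) -> list:
    """Return an ordered list of primitive actions for a recognized task."""
    lowered = command.lower()
    for task, steps in ACTION_PLANS.items():
        if task in lowered:
            return steps
    return []  # unrecognized commands yield an empty plan rather than guessing


print(plan_from_command("Could you make me a cup of coffee?"))
```

The safety-relevant design choice here is the empty-plan fallback: a physical system should decline unrecognized commands rather than improvise, which is exactly the kind of guardrail Responsible AI practices call for.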

Just last week, Nvidia announced Project GR00T, a humanoid robotics initiative that leverages Nvidia’s Isaac platform to build Generative AI-powered robots. To support this initiative, Nvidia has also established partnerships with leading robotics companies including Boston Dynamics (known for “Spot,” the robot dog) and Unitree (also known for its quadruped robots), among several others. But Nvidia isn’t the only company pursuing this course of action—Tesla has been prototyping Optimus, its version of a humanoid robot, since 2022, and expects it to be ready for commercial release sometime between 2025 and 2027.

Nvidia, Tesla, Boston Dynamics—all of these companies have become household names to the tech enthusiast, synonymous with the idea of state-of-the-art AI, computation, and robotics technologies. In terms of LXMs, however, there’s one lesser-known British company, by the name of Engineered Arts, whose humanoid robot, Ameca, is particularly impressive and commercially available.

According to Engineered Arts, Ameca’s purpose is simple: to be an embodied, physical platform for AI development and experimentation. To robotics experts, Ameca may not appear that impressive at first, especially since it can’t yet walk or manipulate objects very well—tasks that come easily to humans. However, where Ameca stands out is in its ability to interact with humans in a human-like fashion—Ameca mimics and responds to human micro facial expressions, appropriately adjusts the tone and cadence of its speech, recognizes and reacts to changes in syntax and semantics, and speaks in a way that demonstrates a contextual understanding of the conversation being had. Similarly notable results have been achieved with Hanson Robotics’ Sophia; however, Ameca’s interactions tend to be more versatile, natural, and consistent.

There’s still a way to go before LXMs evolve into their sci-fi counterparts (if they ever do), but we predict that by the end of 2024, the first fully functional LXMs will hit the market—by fully functional, we mean AI robots that don’t just interact with you verbally and visually, but can also execute at least some tasks requiring fine motor skills on your behalf, like making you a cup of coffee.

Final Words

The rise of Generative AI has marked a transformative milestone in AI accessibility and given way to the current AI boom, with funding pouring into projects, organizations eagerly embracing AI integration, and end-users enthusiastically awaiting new Generative AI advancements.

Yet, evolving regulations also prompt caution in AI innovation. Alongside rapid AI developments, 2024 also promises the development and extensive spread of new regulations impacting AI.

To navigate this dynamic landscape responsibly, we invite you to explore Lumenova AI - an AI Governance platform that empowers you to automate, simplify, and streamline your end-to-end AI risk management and compliance process.

Make your AI ethical, transparent, and compliant - with Lumenova AI

Book your demo