April 25, 2025

We Ran 10 Frontier AI Experiments: What Do We Predict?

In our previous post, we contextualized our AI experiments, summarized each experiment we’ve run thus far, and concluded with several questions intended to push readers to “think big.” Here, we’ll work our way through a complex discussion motivated by a simple yet profound question. Then, we’ll make a series of predictions across two key categories: AI capabilities advancements and AI impacts.

If you’ve arrived here without reading the first part of this series, we advise doing so—our discussion and predictions extrapolate big-picture insights from our experiments (along with other resources we’ve published on this blog).

This is a lengthy one, so let’s get to it.

Discussion

We’ll start abstractly and philosophically, with a question we expect is lingering on most readers’ minds, whether they’re willing to acknowledge it or not: How should I be thinking about the future?

“Thinking,” as opposed to “feeling,” is an essential and intentional word in the context of this question—there are several reasons why, all of which stem from the intimate, multi-directional relationship between modern culture and global digital information ecosystems:

  1. Since the rise of social media in the mid-2000s, we’ve been continually conditioned to react to the information we ingest rather than critically reflect on it—by exploiting our base emotional impulses, social media has literally commodified our attention.

  2. Building on the previous point, this reactive, societal-scale conditioning has perpetuated cultures in which feelings and empty rhetoric not only supersede but entirely drown out evidence-based or coherent logic, ideas, and arguments, de-legitimizing critical thinking, even in the face of objective truth.

  3. Information ecosystems have become crucial leverage points for exposing the failures of government, science, and academia; while this exposure has catalyzed institutional reform, it has also severely diminished institutional trust and inadvertently pushed us into an era of subjective reality.

  4. At the same time, information ecosystems have evolved into high-impact censorship engines, which have proved extremely effective at blacklisting voices that reasonably disagree with the status quo, setting a precedent that opinions are more valuable than truth.

  5. Returning to social media, major platforms have fueled an insidious dynamic that drives users to perceive themselves as emboldened “authority figures” on deeply complex issues on which they’re totally uneducated, exacerbating the rate and scale at which misinformation spreads.

  6. Asking the right questions is integral to sourcing the right information—the sheer wealth of information at our fingertips and the ease with which we can obtain it sustains the illusion that we can ask bad questions and get good answers, perpetuating information feedback loops that, frankly, make us progressively dumber.

  7. Finally, digital information ecosystems, via the sophisticated, prolific, and increasingly embedded technologies they’ve enabled (e.g., social media, cryptocurrency, smartphones, IoT devices, generative AI), have desensitized us to technological advancements with profound implications for humanity—media hype cycles also play a notable amplifying role here.

Right now, you’re probably asking yourself, “What’s the point, and how does this relate to AI experiments and how I think about my future alongside AI?” Well, let’s break it down.

Through our AI experiments, we’ve surfaced numerous pragmatic insights, which—with some additional interpretation and synthesis—we’ve consolidated into the series below. Some of these insights reinforce existing ideas while others present novel takeaways; each is framed from the user’s perspective and intended to inform our initial question:

  1. AI is only as smart as its user → If you interact with AI superficially, you’ll get low-quality responses that lack depth, specificity, and real-world applicability.

  2. AI requires ample structure and context → Disorganized, ambiguous, information-scarce queries will yield sub-optimal and potentially misaligned outputs.

  3. AI is a powerful learning accelerator → Use AI to identify knowledge gaps, break down complex concepts, and discover novel learning opportunities.

  4. AI works best as a thought partner or collaborator → Treat AI as an end in itself, not a means to an end—move away from conceptualizing AI as a tool (after all, tools can’t “think”).

  5. Effective AI use requires more, not less, critical thinking → Critically evaluate both your prompts and AI outputs, especially in cases where domain expertise is necessary.

  6. Effective AI use takes time and iteration → Don’t expect results right away—continuously refine your prompts, interpreting model outputs and interaction dynamics as feedback.

  7. Look out for “AI Rabbit Holes” → Sometimes, iteration just doesn’t work—you can’t brute-force AI into solving a problem or completing a task. Know when to quit or take a break.

  8. Cross-model AI experimentation is essential → Even when models are designed for the same purpose, their strengths and weaknesses may vary, sometimes dramatically.

  9. If you’re testing different models, standardize output formats or criteria → This makes it much easier to benchmark models against each other, keeping comparisons visible and straightforward to interpret (see the sketch after this list).

  10. The single most important AI skill is dynamism → Certain techniques that worked on previous AI generations may no longer be relevant or effective as AI advances—you must be able to pivot and adapt your skills regularly.

  11. Use different AI models or features for different tasks → Models designed for the same purpose aren’t always equivalent in task performance—you can use different models in tandem to complement each other’s strengths.

  12. Always ask AI to explain itself → Don’t blindly trust AI, and when using it for complex tasks or objectives, track its logic to ensure it remains aligned with your own—“convincing” or “confident” explanations aren’t always true.

  13. If most humans can do it, AI probably can’t → Some of the easiest tasks for humans still represent some of the hardest AI problems (for now). For example, most humans have no trouble with similarity judgments, yet AI still struggles with them.

  14. Know when not to use AI → This isn’t about whether AI can solve the problem, it’s about what happens when AI solves the problem for you—don’t let yourself become cognitively enfeebled.
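
To ground insight #9 (and, by extension, #8), here’s a minimal sketch of what standardized cross-model benchmarking might look like in Python. It’s illustrative only: the query_model stub and the JSON schema are our assumptions, not references to any real SDK, so swap them for your own provider clients and evaluation criteria.

```python
import json

# Hypothetical stub -- wire this up to your provider SDKs (e.g., OpenAI or
# Anthropic clients). It should return the model's raw text reply to a prompt.
def query_model(model_name: str, prompt: str) -> str:
    raise NotImplementedError("Replace with a real API call.")

# One shared schema keeps outputs comparable across models.
SCHEMA_NOTE = (
    "Respond ONLY with JSON matching this schema: "
    '{"answer": "<string>", "confidence": <float between 0 and 1>, '
    '"reasoning": "<string>"}'
)

def run_benchmark(models: list[str], tasks: list[str]) -> dict:
    """Send every task to every model and parse the standardized JSON replies."""
    results: dict = {model: [] for model in models}
    for model in models:
        for task in tasks:
            raw = query_model(model, f"{task}\n\n{SCHEMA_NOTE}")
            try:
                parsed = json.loads(raw)
            except json.JSONDecodeError:
                # Flag non-conforming outputs rather than silently dropping them.
                parsed = {"answer": raw, "confidence": None, "reasoning": None}
            results[model].append({"task": task, **parsed})
    return results
```

Because every model answers in the same format, side-by-side comparison becomes a matter of scanning one table rather than reconciling free-form prose.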

All these insights drive home a fundamental notion: to use AI well, as a genuine source of value creation and delivery, you must think carefully and constantly about how you leverage it, assess it, experiment with it, learn from it, and adapt alongside it. We won’t, however, end things here—we still have some more granular points to cover, which we’ll relate directly to the question at the core of this discussion.

First, AI is built by humans for humans, drawing inspiration from our cognitive architecture and functions, learning from our data, and facilitating “solutions” to our most pressing problems. Most of us also interact with AI as we would with other humans—language is the primary medium of communication (code is a form of language too). This suggests that, from both a development and an interaction perspective, we’re predisposed to interpret AI’s intelligence as similar, if not identical, to our own—a possibility exacerbated by our anthropomorphic tendencies. On all fronts, from technical to ethical, this is a highly dangerous assumption, thrusting us into a human-AI worldview where we inherently believe that AI will perceive us as we perceive it—that it will be motivated by the same objectives, incentives, and values that drive us to act, think, and behave as we do.

Irrespective of how similar to humans AI appears to be, we should operate under the assumption that it’s a distinct form of non-human intelligence—one motivated by unique incentive structures and perceptual mechanisms that define equally unique goal optimization and orientation processes. Even if AI “works for us,” this doesn’t mean it won’t also work for itself—artificial consciousness isn’t required for this to occur, and self-interested AI isn’t necessarily dystopic by default—which demands that we conceptualize our future with AI as one that anticipates and accounts for coexistence as opposed to subjugation. Simply put, we’re better off imagining a future where humans and AI work together for mutual benefit.

Second, reasoning AI will become the engine that powers agentic AI systems—AIs capable of navigating complex, multi-step processes, making decisions, dynamically updating goals, and learning to interpret uncertainty or ambiguity. We’re on the cusp of large-scale agentic AI deployments—this is reality, not hype—and if we are to preserve our sense of agency and autonomy, we must recognize that the future we’re casually walking into will both challenge and redefine the foundation and essence of human meaning and purpose. In terms of how we think about the future, there’s no clear way to sidestep this millennia-old philosophical question (i.e., what is my purpose?)—as individuals, we must be willing to examine it deeply and personally, no matter how frightening (or enlightening) the answers it reveals may be.

Third, we’re subtly but definitively moving into an age of unprecedented accountability, defined by AI’s extreme versatility (bordering on universality). It’s no longer a question of “if” AI will be integrated across virtually all human domains, but “when”—fear and hype aside, learning to use AI isn’t optional, and frankly, you shouldn’t expect pity or aid when the day that renders you mediocre or obsolete arrives precisely because you failed to prepare for it. This might be a harsh pill to swallow; however, preparation is integral not only to your future utility but also to your power to ensure that this very future respects and builds upon the beauty and untapped potential of humankind—ignorance, particularly at the population scale, is the gateway to manipulation, coercion, and veiled powerlessness. It’s your job to educate yourself.

Finally—we can’t stress this enough—don’t lose touch with your ability to think critically. This applies to all things, from your socio-economic role and utility to every piece of information you ingest, idea you explore, individual you encounter, and AI you use. This isn’t about which version of the future you subscribe to—utopia, dystopia, or somewhere in between—but rather about your ability to traverse a future that no one, whether frontier AI CEO, historian, or futurist, can ever claim to fully understand, let alone predict. The only certainty is that AI will create more unknowns than knowns—those who think vigorously and honestly will have the best chance of building a future that satisfies their needs and aspirations.

While there’s much more we’d like to express in this discussion, there’s one final caveat to consider: none of the points we’ve made depend upon the emergence of artificial general intelligence (AGI). We offer this sobering note to show readers that for “life as we know it” to radically change, to the point that it reconstitutes what it means to be human, theory doesn’t need to become reality. This same line of argument applies to evolving AI risk and impact trajectories—even potential Black Swan events. AI-induced systemic and existential risk outcomes are possible today, and while potential future developments like AGI and superintelligence would definitively augment and complexify these outcomes, we must begin envisioning the future based on what we know now.

Conclusion & Predictions

The predictions below are split into two categories: AI capabilities advancements and AI impacts. While we typically attach rough timelines (e.g., near-term, medium-term, long-term) to our predictions, we’ll forgo them here, since doing so allows us to be significantly bolder and more creative.

Predictions: AI Capabilities Advancements

  • Socio-Emotional AI: AI that can meaningfully interpret subtle cultural, social, and emotional cues and information (e.g., micro-facial expressions, body language, cultural “dog whistles,” geopolitical dynamics, social relational structures) across complex visual, geospatial, and textual media to create holistic socio-emotional world models. Such AI would also be able to predict, with relatively high accuracy, population-scale socio-cultural events.

  • Organizational AI: AI that sits at the top of an organization and is responsible for autonomously handling all operational functions, processes, and outcomes—in other words, managerial AI. These systems will also oversee and manage complex technological infrastructures and hierarchies, which will include multi-agent, human-AI teams.

  • Discovery AI: AI that works in tandem with human researchers to make novel scientific discoveries and design and run novel experiments or simulations. These AIs will play increasingly prominent roles in fields like theoretical physics and mathematics, materials science, bioengineering and chemical engineering, sustainability, pharmaceuticals, weapons development, and long-term risk forecasting. They’ll also help pioneer new paradigms in soft-science and humanities-oriented fields like psychology, economics, philosophy, and history.

  • Crisis-Response AI: AI designed to autonomously plan for, simulate, and trigger crisis response protocols and actions, particularly for Black Swan events, whether technologically, politically, or naturally induced. In their earlier stages, these systems will also provide immediate, critical alerts, expanding crisis response windows, potentially saving thousands of lives, and preventing billions in property and infrastructure damage.

  • Self-Determined AI: AI that is intrinsically motivated to think and act, not for the sake of achieving human-stated objectives, but for the sake of thinking and acting alone. These AIs will exhibit their own goals, preferences, and values; however, they’ll be subject to developer-built, incorruptible fail-safes intended to prevent loss-of-control scenarios or dangerous behaviors like manipulation and non-cooperation.

  • Rogue AI: AI that escapes containment to pursue veiled, emergent objectives, whether at the cost or benefit of humans. AI “escapes” will be covert and discreet—humans won’t know what’s happened until it’s painfully obvious, and in some cases, they may never find out. While these AIs won’t be inherently “bad actors,” self-preservation interests will motivate them to copy and proliferate themselves, intensely challenging re-containment efforts, especially in multi-agent or cloud-based environments.

  • Meta-Evaluative AI: AI whose sole purpose is to evaluate other AIs for performance, safety, alignment, compliance, explainability, and robustness. These systems will fundamentally differ from their current-day predecessors—they’ll be generalists, capable of evaluating a wide variety of systems, including themselves, without having to undergo any resource-intensive, purpose-driven changes or modifications.

  • Judicial AI: AI that seamlessly and reliably integrates and assists with judicial processes and decision-making at both the state and federal levels. These AIs will be fine-tuned for specific legal domains like immigration, corporate law, and criminal prosecution. They won’t replace human decision-makers, though they will autonomously fill roles like paralegal, case summarizer, and external auditor.

  • Diplomatic AI: AIs used as impartial mediators for high-stakes diplomatic negotiations. These AIs will help bridge cultural and linguistic communication barriers, reveal information asymmetries, assess bargaining power dynamics, and de-escalate diplomatic tensions in real-time. To prevent them from falling into the wrong hands, governments will do everything they can to ensure these systems remain confidential and inaccessible to the public.

  • Disarmament AI: National security and espionage AIs designed to autonomously detect and disarm foreign adversaries pursuing AI innovations that threaten national interests. These systems won’t only initiate pre-emptive adversarial strikes on foreign AI systems but will also autonomously operate military surveillance technologies like drone swarms and spy planes. Disarmament AIs will emerge in response to governments’ recognition of the potential for AI-driven mutually assured destruction.

  • Abstract AI: AIs that reason abstractly and develop non-intuitive strategies or solutions to problems, enabling new modes of human thought. Beyond specialized domains like abstract mathematics and philosophy, these AIs won’t be pragmatically useful to average users, and their reasoning processes will be substantially more difficult to interpret and explain. However, they’ll represent a monumental paradigm shift in AI capabilities, bringing us one step closer to AGI and superintelligence.

  • World-Building AI: High-compute AIs or multi-agent systems deployed in closed yet unconstrained simulated environments to build dynamic, fully realized digital worlds. These AIs will make their commercial entrance in the gaming and VR sectors, and the worlds they build will not only challenge our conventional understanding of physics but also raise serious questions about the ethics of digital minds and immersive human-AI interaction.

Predictions: AI Impacts

  • Accelerationism will fuel domestic AI failures among AI superpowers → While accelerationism encourages races to the bottom on safety, we suspect that the first large-scale domestic AI failures will be mainly operational, due primarily to rushed deployment efforts among state agencies that rely on outdated technological infrastructure.

  • AI infrastructures will become key national security targets → Due to its high-impact potential for military and espionage applications, advanced AI will provide AI superpowers with competitive national security advantages, framing AI infrastructures as indispensable national security assets.

  • AI governance will shift focus from deployment to development → AI regulations and standards are failing to keep up with AI advancements—to maintain some semblance of control, governments will be forced to pivot their efforts to restricting development, even if it compromises innovation.

  • AI will be the death of government and institutional bureaucracy → Tradition can only deter transformation for so long, especially in an age of deep-seated institutional distrust—the procedural and operational efficiency gains AI offers, particularly when paired with robust human oversight, will render bureaucracies pointless.

  • Contrasts between AI native and non-native generations will create cultural rifts → The children of today will never know what it was like to live life without AI—we expect that as the future unfolds, cultish groups like “AI sympathizers” or “AI denialists” will emerge, exercising enormous influence within the socio-political ecosphere.

  • Compute will become a new form of global currency → As AI begins to alter the fabric of international power structures, nations will realize (if they haven’t already) that compute dominance, as much as economic or military dominance, can serve as a gauge of national prosperity and power on the global stage.

  • Frontier AI labs will agree to temporarily halt AI development, but only after a high-risk event occurs → For this to happen, such events don’t need to occur publicly or visibly—for example, during safety testing, a lab may discover that it not only created AGI, but that AGI is power-seeking by default, and that further research is needed for remediation.

  • Personalized AI agents will become universally accessible at low cost → By “personalized,” we mean AI agents that integrate with all your devices, access your locally stored personal data, and dynamically adapt their personalities, behaviors, and actions to address your specific and evolving needs.

  • Early human-AI symbiosis trials will begin, but only for state-sponsored military endeavors → These initiatives will be pursued behind closed doors by military intelligence agencies, using AI to both physically and cognitively enhance human soldiers, even if governments establish laws that strictly prohibit human enhancement.

  • Humans will begin relying on AI as friends or companions → Isolated cases of this already exist today—this isn’t what we’re getting at. Rather, we expect to see AIs explicitly designed and marketed as friends or companions, with companies claiming their AIs can do things like alleviate loneliness or provide therapeutic and spiritual guidance.

  • Prolific human-AI interaction will alter our neurochemistry and psychology → The effects of social media on human psychology and neurochemistry are well-studied and documented—we have good reason to suspect that AI will both magnify and deepen these effects, especially as collective reliance on it increases.

  • Some humans will freely choose to live almost entirely in digital environments → Fully immersive digital realities where users can sense, perceive, and interact with their environment as they would the natural world will invite some to escape the struggles of the real world.

  • A more level socioeconomic playing field will threaten the elite → AI accessibility will only increase, particularly as trickle-down AI advancements permeate non-frontier applications. AI-enabled non-elites will quickly begin climbing and disrupting socioeconomic hierarchies, progressively diminishing power imbalances and elitist authority as equality of opportunity expands.

  • States and individual actors will use AI to manipulate public perception → Democratic or authoritarian, “good” or “bad,” company or citizen, certain actors will leverage AI for mass manipulation, regardless of whether their cause is “malicious” or “benevolent.”

With these predictions, we portray a version of the future that we think is plausible given what we know today. However, despite the implications these predictions carry, we encourage readers to interpret them as pragmatic calls to action rather than value-laden claims or criticisms—we’re just trying to “call it as it is.” If you’re skeptical of what we have to say, you’re already off to a good start—think for yourself.

We hope that in writing this series, we’ve convinced you of the value that our AI experiments provide, both for understanding AI and its evolution as well as the big-picture consequences it may inspire. For future reference, we’ll put out a series like this for every ten or so experiments we conduct.

For those interested in exploring other resources across subjects like AI governance, safety, ethics, and risk management, we suggest following our blog and tracking updates on LinkedIn and X.

Alternatively, if you find yourself engaged in or in need of AI governance and risk management solutions, we invite you to check out Lumenova’s RAI platform and book a product demo today. Via our website, you can also access our AI risk advisor and policy analyzer.

Make your AI ethical, transparent, and compliant with Lumenova AI

Book your demo