December 11, 2025

How AI is Actually Being Used (Part IV)


Across this series, we’ve examined AI usage data, identified convergent patterns, predicted future trajectories, and envisioned the strategic implications of current and evolving AI usage trends for businesses, society, and individuals. In doing so, we’ve illustrated how usage patterns bifurcate, the contexts in which automation vs. augmentation concentrate, early but potent signs of a global digital divide, and growing concerns about skill and cognitive atrophy. All our discussions have drawn from three core evidence sources, comprehensively broken down in the first post of this series, which functions as our empirical foundation.

In this final part, we’ll move from the micro to the macro scale, boldly investigating a transformation so fundamental that it could force us to reconceptualize our most basic assumptions about human capability, economic value, and the nature of intelligence itself. The transition in question is this: AI isn’t just a powerful tool anymore; it’s becoming infrastructure. To be clear, this isn’t a novel idea (and might even be viewed as the originally intended goal), though it is one that we think warrants serious inquiry because it represents one of the most significant technological changes to human civilization we’ve ever encountered.

In our view, it isn’t unreasonable to interpret this transition as one that parallels, if not exceeds, the magnitude of historical innovations like the creation of the printing press, the development of assembly lines, or the advent of the internet, and later, social media, which have all transformed what it means to be human in one way or another. The construction and commercial mass-deployment of advanced AI was just the first step; the impacts we see today represent a mere fraction of those we might expect to encounter in the future.

Some of what follows may border on speculation or provocation. Nonetheless, we assure readers that we’re remaining grounded in previous discussions, and that our extrapolations are motivated by intellectual honesty and curiosity about the new world we might be collectively walking into. We’re not asking or expecting you to agree with our view, though we do hope that you meaningfully consider it. Most importantly, we stress that this view articulates only a possible version of the future, and that it remains within our power as individuals, businesses, and regulators to establish the scaffolding required for AI prosperity at scale. Lumenova AI’s mission is about more than governance and compliance. We believe in people and AI, and most crucially, in what we can accomplish together.

From Tool to Infrastructure: What It Means 

A technology with a high usage intensity and scale doesn’t become infrastructure by default. From our perspective, a technology morphs into infrastructure when it becomes environmental, exercising tangible influence on the very context in which thought and action materialize. 

This is an abstract concept, so let’s clarify with an analogy. During the early 20th century, the first automobiles were owned primarily by wealthy individuals who made deliberate choices to use them for specific trips. Cities still catered to pedestrians and other transportation means (e.g., horses, carriages, rail systems) while economic organization (i.e., where people chose to live and work) remained mainly centralized in urban areas. 

In the mid-20th century, cars became more affordable and widely adopted, inspiring infrastructural adaptations in urban hubs like better roads, traffic lights, and mixed-use layouts (i.e., walking and driving). Economic organization also began a corresponding shift, with suburbs starting to emerge despite cities continuing to function as primary commercial and employment hubs. 

In the latter half of the 20th century, everything changed. Modern-day cities are designed around automotive access, and physical elements (that were untenable in the early 1900s) like suburbs, strip malls, and highway systems are commonplace. Suburban living and commuting have come to represent the new age standard, and it’s difficult to see how this could’ve happened without automotive infrastructure. Today, most people don’t consciously think “I’m using the highway.” Instead, they simply think, “I’m heading to work/school/store.” Car access is implicitly assumed. 

This is an approximate analogy, and we’re not suggesting that it wholly explains the infrastructure of the world we live in now. That being said, it does highlight a key transformation: 

  • Cars haven’t just changed travel. They’ve redefined where we live and work, and how society organizes space. If you’re unconvinced, imagine how difficult it is for someone who doesn’t live in an urban center or can’t afford a car to participate in rich economic and social opportunities in today’s America. For most Americans, cars are a prerequisite for modern-day existence and prosperity.

If we apply the logic of this analogy to AI, it reveals the following parallels: 

  • Roughly one billion people use AI today, gaining access to capabilities that range far beyond their individual cognition. AI is enabling a global cognitive sprawl. 
  • As more people outsource their cognition to AI, AI-dependency will increase, perhaps to a point where functioning without it is almost impossible. 
  • Benefits are unlikely to be equally distributed. AI will provide compounding advantages to those who understand the distinction between augmentative and automation-centric use. 
  • Once cognitive infrastructure is developed around AI availability, reversing it could come to constitute an intractable problem. 
  • AI can augment human intelligence, but it also risks replacing cognitive capabilities that will atrophy if they remain untrained. 

We note some additional points as well, unique to AI infrastructure: 

  • AI represents the first recorded instance of non-biological intelligence (we’re not claiming AGI or anything like that, just that current systems can perform complex tasks and solve problems that, in the past, have been reserved for intelligent humans). 
  • AI might not be infrastructure yet, but when this transition is complete, intelligence will become environmental, existing throughout the spaces inhabited by humans, independently of biological substrates.
  • The transition from tool to infrastructure will redefine what kinds of problems are conceivable, the nature of capability access and needs, and the breadth of the future we can imagine or expect. 
  • Most radically, it will force humans to confront and make sense of the experience of being an intelligent agent in a world where intelligence scarcity no longer exists. 

Shortly, we’ll lay out a rough timeline illustrating our expectations for how AI as infrastructure will transform the human cognitive environment. But before we do, we feel it’s important to communicate why we believe this particular technological transition matters more than others humanity has previously encountered. We’ll consider two concrete examples, but the core idea is this: AI alters the nature of human thought. 

Example 1 – The Printing Press: Before the printing press was invented, knowledge accessibility was limited and concentrated. After its invention, knowledge became much easier to distribute and access. The change → who can think about what.

Example 2 – Computers: Before computers, complex calculations and data manipulations proceeded slowly and remained prone to human error. Once computers entered the scene, information could be processed at high volumes, scales, speeds, and with reliability. The change → how information is processed, calculated, and analyzed. 

With AI, the expected change is much more drastic. Before advanced infrastructural AI, intelligence is confined to biological substrates, inherently limited by human cognitive capacity. Afterward, it becomes environmental, essentially unlimited in supply and unbound by biological constraints. If external intelligence becomes available everywhere and always, then internal intelligence must evolve to serve different purposes and functions; AI as infrastructure will alter what thinking is. To be clear, however, we aren’t arguing that this transformation has already occurred, only that it’s begun. Our proposed timeline should help illustrate this point. 

Cognitive Environment Transformation: The Timeline

We identify three core phases: Phase 1: Tool (2020 – 2025), Phase 2: Infrastructure (2026 – 2030), and Phase 3: Environment (2030 and beyond). We describe each phase below:

Phase 1: Tool (2020 – 2025) 

  • AI usage focuses on specific tasks and problems. 
  • AI usage is defined by a subject-object relationship (e.g., I’m using AI for X). 
  • Users consciously decide when to use and not to use AI. 
  • Most present-day users are defined by these characteristics, though some, particularly early adopters, have begun moving beyond them. 

Phase 2: Infrastructure (2026 – 2030)

  • AI usage transcends individual workflows, applications, and environments, whether physical or operational.
  • AI usage becomes non-explicit (e.g., I’m working on X). 
  • Users default to working with AI. Not working with AI becomes a deliberate choice. 
  • Current usage patterns suggest we’re heading in this direction. 

Phase 3: Environment (2030 and beyond) 

  • AI is fully integrated with the human cognitive environment. Questioning its use becomes absurd.
  • The concept of AI usage dissolves entirely. All environments provide intelligence accessibility. 
  • Human thinking occurs in an environment where accessibility to external intelligence (i.e., AI) is equivalent to accessibility to internal intelligence (i.e., human cognition). 
  • AI is no longer perceived as an extension of human thought and agency. Humans are forced to confront their newfound existence as non-unique, highly intelligent agents. 

The Bifurcation of Human Capability 

In the previous post, we began by highlighting two human capability trajectories: AI-augmented expertise and AI-dependent operation. Here, we’ll dive into precisely what this bifurcation might signify, and more broadly, why it warrants existential considerations. 

If these two trajectories continue at their current rate and scale, this dynamic will result in a deep cognitive divergence. However, it won’t be as straightforward as “you’re AI-dependent and I’m AI-augmented.” What we expect is the emergence of two distinct modes of human intelligence, or more simply, two types of “minds”: 

  1. The Augmented Mind
  2. The Dependent Mind

The augmented mind will approach AI with experimentalism, skepticism, pragmatism, and an overall awareness of which human cognitive capabilities deserve preservation, training, and enhancement. This mind will view AI as a source of external intelligence that can supercharge internal intelligence while maintaining a clear replacement boundary. People who fall in this category will implicitly strengthen their metacognitive skills, developing the ability to delineate between the need for independent vs. AI-assisted thought, reliably identify AI limits and capabilities, and maintain the knowledge required for critical assessment of AI outputs, behaviors, and actions. 

The dependent mind will not hesitate to outsource cognitive capabilities to AI, conceptualizing it as a tool for decreasing cognitive load in virtually all cases. This mind interprets external intelligence as a surrogate for internal intelligence, dissolving the replacement boundary; all cognitive tasks are “fit” for AI. Individuals with this profile will exhibit high proficiency when using AI for task completion and/or problem-solving, but they won’t understand when usage is warranted vs. unwarranted. As their cognitive capacities atrophy, they’ll lose the ability to provide value without AI. Whereas the augmented mind, deprived of AI, would likely suffer only some degraded performance, the dependent mind risks near-total cognitive incapacity.

What’s happening here extends beyond usage patterns. We may cultivate a world in which humans no longer share the same underlying cognitive architecture. What’s more, if these architectures become entrenched, they’ll be extremely difficult to reverse. Sure, the augmented mind can choose to become dependent (but why would it?), but the dependent mind is fundamentally precluded from choosing to become augmented; once core cognitive capabilities have atrophied, they can’t just be “recovered” easily.

The Socioeconomic Dimension

Previously, we also pointed out that, according to the data, augmentation-centric use correlates with higher education and income, whereas automation-centric use correlates with comparatively lower education and income. Although we briefly entertained what this might mean, we need to consider how this phenomenon plays out on a generational timescale.

To briefly reiterate/reconceptualize two points: 

  • Wealthy nations and individuals lean in favor of the augmented mind paradigm. AI extends cognitive capability, but does not replace it. 
  • Less wealthy nations and individuals lean in favor of the dependent mind paradigm. AI replaces cognitive capability, risking atrophy. 

When looking at these points through a generational lens, the implications are nothing short of monumental. For instance: 

At N+0 (present-day) → Wealthy actors consolidate even more wealth, outsourcing mundane/procedural tasks to AI to accelerate innovation, adaptation, and discovery, while preserving (and enhancing) their strategic, creative, and critical judgment skills. Meanwhile, less wealthy actors rely on AI heavily, outsourcing their cognition and rendering themselves obsolete in a socioeconomic environment where value is concentrated in high-order cognitive capabilities. The rich become richer and more capable, while the poor become poorer and less capable.

At N+1 (subsequent generations) → The wealthy inherit the wealth and cognitive architecture of their predecessors, cementing their positions at the top of the socioeconomic ladder. Conversely, the poor inherit the poverty and atrophied cognitive architecture of their predecessors, entrenching their place at the bottom of the socioeconomic ladder.

We also have a real-world precedent for believing AI-induced changes to cognitive architecture are possible (and imminent): although neuroscience is still investigating the full effects of sustained social media use on neurological and cognitive architecture, tangible findings have been made, particularly in relation to neurochemical pathways and functional brain development. Seeing as AI can be just as immersive as social media, and orders of magnitude more interactive, we find it highly likely that it will affect not only the way we think, but the very neural architecture that sustains our thinking.

Moreover, we’d argue that in this particular context, it might be somewhat naive to assume that humans, especially those who are AI-augmented, will not seek to gain more and more control over their evolutionary trajectory and pioneer genuinely unprecedented augmentation through means like human-AI symbiosis, genetic engineering, and pharmaceutical enhancement (the foundations for these technologies already exist today). Even if none of these accelerants materialize, the existential risk of widespread cognitive enfeeblement persists. Of course, natural evolution is a notoriously slow process, though it can be accelerated when pressures are sufficiently strong and consistent; we simply don’t know what pressures will emerge in the future. 

Even if we move away from this speculative inquiry and return to our evidence-based foundation, we must still confront a painful truth: we’re in the process of creating cognitive classes. The usage patterns we’ve talked about are more than economic trends. They illustrate the early stages of a trajectory where humanity is bifurcating into cognitive classes differentiated not only by resource access, but also by fundamental capabilities.

Fortunately, we still have some time to address this possibility by ensuring that humans remain at the center of all AI initiatives. This doesn’t mean avoiding automation or forgoing possible AI gains, but it does mean applying responsible AI standards and principles rigorously, at every step of AI deployment and operation. This is precisely what Lumenova AI is dedicated to; we’re not only here to simplify your governance and risk management needs. We’re here to provide the guidance and step-by-step input required to build a future where human capacity can blossom beyond what we can imagine today.

The Work Transformation

We’ve already covered workforce restructuring, articulating how entry-level roles face obsolescence, mid-level roles require adaptation, and senior-level roles are poised for augmentation. These are not certain claims by any means, but they do allude to a more profound and long-horizon question, namely, “what does work become in a post-AI world?” Although we’ll share our thoughts on this subject, we encourage readers to interpret them with a high degree of skepticism and critical thinking. At the end of the day, no one knows what will happen. 

Throughout human history, value has originated with humans, but AI now disrupts this paradigm. In the past, economic value was derived from the synthesis of labor, capital, and resources. But capital and resources would remain virtually useless without human labor to activate them; human labor is, in a way, the “special” ingredient. Even so, not all labor is value-equivalent, which explains why skilled labor (in a landscape of scarcity) is considered more valuable than unskilled labor. Fundamentally, economic systems operate on the assumption that most humans can create value by working and that this value is differentiable with respect to the kind of work being done. 

In the future, this equation could change substantially. Human labor might no longer be the special ingredient; economic value could be derived from the synthesis of AI, capital, and resources, with a “sprinkle” of human judgment on top. If an abundance of capable AI manages to somehow eliminate (or sufficiently diminish) labor scarcity, then the primary constraints that make humans economically valuable collapse. In simple terms, AI becomes the special ingredient. This raises a critical question: If humans can no longer create value through their work (because AI can do it better and cheaper), how do humans meaningfully participate in the economy? 

For the sake of the argument, let’s assume that AI automation will eventually eliminate all mundane and complex procedural human work, and, to go a step further, that AI will play an increasingly prominent (but mainly assistive) role in domains that require creativity, strategic analysis, decision-making, and relationship management. In this case, we might say that high-order cognitive functions are reserved for humans, but is that really true? What persists as “uniquely” human isn’t necessarily labor in the traditional sense. Let’s imagine what this might look like: 

  • Judgment: Humans won’t be the ones making decisions in contexts where correct answers are known/clear; if training data is sufficient, AI will excel at this. Instead, human judgment will be reserved for situations in which multiple reasonable perspectives, opportunities for value-conflict, and a lack of precedent exist. Human decision-making value will concentrate on ambiguity and be directed toward solving the hardest problems we face, the ones for which solutions are truly unknown and underexplored. 
  • Relationship & Trust Building: AI will play an increasingly salient role in communication and negotiation, given its ability to generate compelling arguments. However, the creation and maintenance of relationships grounded in mutual recognition, vulnerability, and shared experience is a uniquely human process. We envision a future where AI helps humans connect, resonate, and collaborate with others by bridging communication/cooperation gaps, while humans continue to define the foundation, structure, and scope of human relationships. 
  • Creating Meaning/Purpose: While present-day AI can perform impressive feats of generative creativity and novel conceptual synthesis, much of this ability still hinges on the quality and ingenuity of initial human input. We think the capacity to make genuine discoveries and innovations, specifically the ability to decide what matters, why it matters, and what is worth doing, will rest with humans who can find significance beyond instrumental utility. Human creativity will fundamentally prioritize the creation of meaning, and ideally, AI will help us execute it. 

This is an undeniably optimistic view, though it still gets at the dynamics we wish to illustrate. In the AI economy, work no longer centers on the ability to transform data or experience into valuable knowledge, as it did in the information economy. Instead, work becomes about transforming intelligence into valuable wisdom, wisdom that requires forms of human judgment, relationships, and meaning-making that AI may never be able to replicate. 

Unfortunately, this seemingly optimistic perspective doesn’t quell our systemic and existential concerns and might even be interpreted as an argument in their favor. The nature of the work we’ve outlined assumes the augmented mind paradigm, which, if current usage trends persist, will materialize predominantly among wealthy actors and nations. In the absence of aggressive intervention, we see one highly plausible trajectory: beyond radical restructuring, the workforce will shrink dramatically. 

We’ll provide a conceptualization of this at the most basic level with some simple math (using the sources originally covered in the first post of this series). 

Note: According to the U.S. Bureau of Labor Statistics, Occupational Employment and Wage Statistics, the average knowledge worker works approximately 2,080 hours per year, and total annual knowledge work hours are roughly 106 billion. 

Calculation

Sources: How People Use ChatGPT & Anthropic Economic Index Report

  • 45% of work activities involve information work (e.g., retrieval, interpretation, documentation), which is well-suited for AI automation.  
  • 77% of enterprise API usage is automation-focused (and increasing based on current usage trajectories).
  • Calculation: 45% x 77% = ~35% of total work hours automated.  
  • Impact: ~37 billion work hours eliminated. At 2,080 hours per worker, this means that ~17.8 million positions could be eliminated. 
  • Assumption: one automated hour equals one eliminated hour. Real impacts depend on how much human oversight automated tasks necessitate. 
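
For readers who want to inspect or adjust these figures, here is a minimal sketch in Python that reproduces the arithmetic above. The inputs come from the sources cited; the variable names and the one-automated-hour-equals-one-eliminated-hour assumption are ours, for illustration only, and small differences from the rounded figures above are due to rounding.

```python
# Back-of-envelope estimate of knowledge-work hours exposed to AI automation.
# Inputs are drawn from the sources cited above; the 1:1 hour-elimination
# assumption is illustrative, not a forecast.

TOTAL_KNOWLEDGE_WORK_HOURS = 106e9  # annual U.S. knowledge-work hours (BLS-derived)
HOURS_PER_WORKER = 2_080            # average annual hours per knowledge worker
INFO_WORK_SHARE = 0.45              # share of work activities that are information work
AUTOMATION_SHARE = 0.77             # share of enterprise API usage that is automation-focused

automatable_share = INFO_WORK_SHARE * AUTOMATION_SHARE             # ~0.35
hours_eliminated = TOTAL_KNOWLEDGE_WORK_HOURS * automatable_share  # ~37 billion
positions_eliminated = hours_eliminated / HOURS_PER_WORKER         # ~17.8 million

print(f"Automatable share of work hours: {automatable_share:.1%}")
print(f"Work hours eliminated: {hours_eliminated / 1e9:.1f} billion")
print(f"Positions eliminated: {positions_eliminated / 1e6:.1f} million")
```

Adjusting any single input (for example, lowering the automation share to reflect heavier human oversight) shows how sensitive the headline figure is to the underlying assumptions.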

We don’t deny that this is a speculative and assumption-laden calculation, though we still think it points to a meaningful pragmatic insight: AI-induced unemployment won’t typically manifest as temporary displacement resolvable via retraining, but as persistent structural redundancy in which humans are no longer necessary for many economic functions. This means that the remaining workforce will be much smaller, more elite, and hyperspecialized around decisively human capabilities. The existential concerns are serious:

  • Human worth has historically been tied to economic productivity. What happens if most humans become economically obsolete? 
  • Social stability can be maintained through the promise that hard work yields prosperity. What happens if hard work isn’t enough anymore, because AI can do it better? 
  • Professional and occupational achievement play a major role in the construction and shaping of individual identity. What happens if the majority of traditional occupations disappear? 

From the existential viewpoint, we see a series of paths forward. 

Path 1: Universal Basic Income (UBI) → The provision of an unconditional baseline income that is enough to satisfy basic human needs independent of employment. There are multiple ways this could be achieved, most likely via taxation on AI productivity gains and wealth taxes on those who might capture such gains, or alternatively, sovereign wealth fund models. Ultimately, UBI would reframe economic participation around citizenship as opposed to contribution. UBI is far from a novel idea, though we expect it would encounter severe political and cultural resistance as a radical paradigm shift.

Path 2: Job Guarantee Programs → Governments establish federal and state programs to guarantee that anyone who wants to work can work. Realistically, this would require paying living wages for work that matters from a human standpoint but doesn’t necessarily generate market value. This could result in a genuine socio-philosophical reconceptualization of what work means beyond market-productive labor. To be clear, these employment opportunities would be exclusive to the public sector, meaning that governments would need to drastically expand professional offerings to an unprecedented scale. 

Path 3: Stakeholder Capitalism → Mandate that AI profits are redistributed among all stakeholders, not only those who continue to work or own capital. Companies deploying AI are also held to enforceable standards, where they must preserve certain employment levels, reduce working hours without lowering pay, and/or make generous contributions to programs targeting retraining and professional transition. To account for job displacement impacts, companies might also be required to compensate workers with a share of their gains for previously completed work. Overall, this would necessitate a deep transformation of corporate governance that applies the same weight to stakeholder value as it does to shareholder value. Frankly, in a country as influenced by corporations and lobbyists as the US, we see this as an extremely low probability outcome. 

Path 4: Accelerated Retraining → Companies and governments make colossal but strategic investments to retrain displaced workers for occupations poised to remain distinctly human, jobs that target judgment-intensive, relationship-centered, and creative synthesis work. This approach assumes that AI-resistant jobs will emerge at a scale sufficient to absorb millions of displaced workers, while further assuming that we can confidently predict which kinds of professions will continuously yield value as AI innovates. We don’t think this is an impossible outcome, but we do believe it would require enormous work to identify what skills are critical to train (in light of rapid AI evolution), and to accurately anticipate how labor markets will adapt in response to retraining, AI-induced job displacement, and new job creation. 

Path 5: Technological Deceleration → Deliberately slow or halt AI innovation entirely. Companies will not do this unless they are required by law. Regulations could create permitting requirements for large-scale automation, employment impact assessments, and mandatory transition periods while strategically limiting AI capability expansion and innovation across domains where displacement risks are highest. In all honesty, although we’d characterize this as the least radical and most reasonable path, we think it’s almost impossible. AI race dynamics, at both national and international scales, have become so potent that we fear we’re already locked into a trajectory that vehemently rejects any attempts at technological deceleration. That being said, this outcome might become much more likely if a catastrophic AI failure occurs, though we believe it would need to be of a sufficiently high magnitude. 

Path 6: Passive Acceptance (The Default Trajectory) → No one takes any action. We all relinquish ourselves to market forces and existing institutions and simply hope that things will turn out okay. We trust that adaptation will occur organically; socio-cultural structures will adjust, those suffering from displacement will find new opportunities, and the systems (e.g., welfare, unemployment) that have historically supported us through periods of hardship will continue to prove resilient. This is the path we’re on now, and we don’t see how it could possibly end well, given everything we’ve discussed until now. If we don’t veer off it soon, we may face mass unemployment, inequality amplification, social safety net collapse, political instability, and even socio-cultural violence. 

In illustrating these paths, we’re not implying that their emergence is guaranteed; no one can know, with certainty, what will happen. We present these possible outcomes because we believe, as a company that strives for responsible AI development, deployment, and operation, that it is our responsibility to educate audiences about what might happen in the absence of tangible action. This is precisely why we do what we do. We’re giving you the tools and capabilities to take action today, before it’s too late.

Conclusion

We recognize that this piece illustrates an uncomfortably bleak vision of what our future might someday evolve into. To be clear, this is a possible future, not a guaranteed one. We’d also argue that the vast majority of the concerns we’ve articulated are not criticisms of the technology itself, but rather observations about how people use AI at scale in ways that may lead to collectively detrimental outcomes. We believe the trajectory we’ve outlined can not only be counteracted but avoided entirely if we recognize the importance of strong governance, ethics, and safety as the core pillars of AI innovation.

This is where Lumenova AI comes in. We’re committed to making sure that your AI deployments are responsible, resilient, compliant, and safe in the long-term. We want to help you extract as much value as you can from this powerful technology while ensuring that its risks and impacts are holistically addressed, that human wellbeing and fundamental rights are preserved, that safety, ethics, and governance sit at the core of your AI mission, that compliance with relevant standards and regulations is proactively driven, and that you’re prepared to navigate an uncertain future with confidence in both your people and your technology. 

We believe AI can usher in a new age of prosperity, where we are more connected with others, ourselves, and society than ever before, but whether this happens depends on the choices we make at the individual, business, and regulatory levels. We want to help you build a robust foundation for a future where AI can operate at scale without introducing risks at scale, where crises don’t need to be confronted reactively, where complex compliance requirements are simplified, and where trust, quality, security, speed, ROI, and scaling are mutually supported and compatible. 

For those who’ve found this series interesting, we invite you to dive deeper into Lumenova AI’s blog and explore all our other content on AI governance, safety, ethics, innovation, and more. For those of you who are experimentally inclined, we suggest checking out our AI experiments, where we regularly assess frontier AI models’ capabilities and limitations.

If you’re already familiar with AI governance and risk management, or you’d like to expand your efforts, we recommend that you take a look at Lumenova AI’s responsible AI platform and book a product demo today. 


Related topics: AI Safety, Artificial Intelligence, Large Language Models
