
In part I of this series, we presented AI usage trends from three key sources: the Anthropic Economic Index Report, OpenAI’s How People Use ChatGPT, and our Reddit-focused deep-research findings, obtained using a variety of AI tools, primarily Perplexity AI. Given how informationally dense that post was, we chose to simply lay out the data before discussing it.
Consequently, this post will examine the trends revealed by our key sources and deep-research findings. We’ll begin by exploring major convergent usage trends across these sources, then transition to the underlying forces that may be driving them. Next, we’ll offer a series of future AI usage predictions, divided into two categories: near-term (1-5 years from now) and long-term (5+ years from now). Finally, we’ll conclude by addressing the critical uncertainties revealed by this inquiry. This will set the stage for part III of this series, which will focus on the strategic implications of this trajectory and possible adaptation pathways.
For readers who have yet to review our previous post, we advise doing so. We won’t cover the data again here; we’ll extrapolate from it.
Major Convergent Trends
- Automation-Augmentation Evolution
↳What the data shows:
- Anthropic: Automation tasks grew significantly, eclipsing augmentation tasks for the first time.
- OpenAI: Non-work usage has increased substantially, and Asking queries tend to edge out Doing queries in frequency.
- Reddit: Many users characterize their AI interactions as a collaborative dance, emphasizing AI as a thought partner rather than an autopilot.
The convergence → Personal use appears to favor AI as an advisor or research assistant, whereas enterprise use accelerates toward full task delegation. Importantly, the most sophisticated users seem to recognize how critical it is to maintain strategic human oversight.
- Capability Expansion Paradox
↳What the data shows:
- Anthropic: New capabilities clearly influence the emergence of new usage patterns.
- OpenAI: Early adopter usage rates continue to accelerate while capability improvements enable novel applications.
- Reddit: Users express high satisfaction when using AI for research despite articulating persistent hallucination concerns.
The convergence → Adding new capabilities may create cascading usage opportunities that extend beyond what the capability itself directly offers. Interestingly, as models become more capable, their limitations also become more apparent.
- Coding’s Dominance Continues, But Differently
↳What the data shows:
- Anthropic: Coding is the #1 use case, but novel code generation is now more common than debugging tasks.
- OpenAI: Queries requesting technical help declined overall and within work-related messages specifically.
- Reddit: Programmers exhibit some of the highest adoption and satisfaction rates, with tools like Cursor rapidly gaining popularity.
The convergence → Coding persists as a core use case, but the dynamics have shifted visibly from problem-solving assistance to generative creation. Anthropic’s API insights suggest that technical help could be migrating to enterprise contexts.
- Geographic & Socioeconomic Divide
↳What the data shows:
- Anthropic: Wealthy Western nations unequivocally dominate per-capita usage, and GDP increases appear to correlate reliably with usage increases.
- OpenAI: Low-income nations exhibit high growth, and while the gender gap has closed, an education gap persists.
- Reddit: Reddit is a predominantly Western and English-language platform; this is an inherent research limitation.
The convergence → AI accessibility is democratizing in absolute terms but concentrating in relative terms. Usage intensity strongly correlates with economic development, which could perpetuate a widening capability gap even as global adoption deepens.
- Quality-Intent Relationship
↳What the data shows:
- Anthropic: Capability-task alignment heavily influences usage patterns. Nations with high per-capita usage gravitate toward augmentation over automation.
- OpenAI: Asking queries are outperforming Doing queries, according to OpenAI’s good-to-bad ratio scoring system. Self-expression queries have the best good-to-bad ratio, by a significant margin.
- Reddit: Though the two are close, research satisfaction edges out coding satisfaction. Satisfaction levels for other core usage domains, like writing and customer service, are comparatively lower.
The convergence → Tasks that require information retrieval and understanding tend to outperform tasks focused on content generation. Unsurprisingly, output quality also appears to improve as usage matures.
- Writing’s Ubiquity & Anxiety
↳What the data shows:
- Anthropic: Although writing and education tasks have declined, they continue to represent a significant proportion of AI usage.
- OpenAI: OpenAI echoes Anthropic’s findings here, while also revealing that almost half of work-related messages are writing-centric, two-thirds of which center on editing/critique, argument generation, and translation.
- Reddit: AI adoption for writing is high, but satisfaction is comparatively lower. Users have signaled concerns about authenticity and the recognizability of AI-generated writing. Some users also highlight that AI works best in writing when approached as a starting point.
The convergence → Although writing remains a highly popular use case, most users continue to express quality concerns and authenticity anxiety. Most commonly, users treat AI outputs as drafts necessitating human refinement.
- Dependency & Skill Atrophy
↳What the data shows:
- Anthropic: As previously mentioned, nations with high per-capita usage favor augmentation over automation. This implies an expanding view of AI as collaborative instead of delegation-focused.
- OpenAI: Although not directly addressed, correlations between education levels and work-related usage patterns suggest an awareness of potential skill requirements.
- Reddit: Users across domains express concerns about prolonged AI work leading to cognitive fatigue and skill atrophy.
The convergence → Skill atrophy represents a consistent concern across user communities, transcending domain, experience level, and use case.
- Enterprise Adoption: High-Value, Automation-Focused, Infrastructure-Constrained
↳What the data shows:
- Anthropic: Businesses concentrate on expensive, high-value tasks, with the vast majority of API usage focused on automation. One of the key adoption barriers is data/IT infrastructure preparedness.
- OpenAI: Work usage predictably concentrates on the tasks central to a given profession. Over half of business/management messages target writing, while decision-making and problem-solving are also popular.
- Reddit: Users, particularly developers, are beginning to integrate AI into their professional workflows, but some users also note that implementation quality can significantly affect outcomes.
The convergence → Enterprise AI adoption is progressing relatively slowly, but can accelerate rapidly when organizations have established the infrastructure required to support it. Businesses also appear to prioritize high-cost task automation over cost minimization.
Underlying Forces Driving These Trends
- Capability expansion as a usage driver: New features immediately generate new usage patterns while early adopters continue to discover novel applications, acting as the driving force behind accelerating usage curves. Meanwhile, capability-task alignment determines adoption intensity within specific application domains.
- The context-complexity relationship: Complex tasks require extensive context but tend to result in diminishing returns, inspiring users to adapt via fragmentation or elaborate workarounds. For enterprises, infrastructure limitations inherently constrain context-intensive applications.
- The quality-confidence feedback loop: As output quality increases, so does user trust, enabling users to pursue more automation-centric patterns. Nonetheless, continued quality issues, particularly hallucinations, underscore ongoing verification and human-oversight requirements.
- The value-cost calculus: Even though high-value automations are cost-intensive, enterprises prioritize them; cost reductions don’t appear to drive large usage increases. This implies businesses are optimizing for capability access over cost minimization.
- The democratization-divide dynamic: While GDP per capita is strongly correlated with usage rates, low-income nations are exhibiting rapid growth, but starting from a low base. Essentially, this allows wealthy nations to create compounding advantages for themselves.
- Labor market transformation: As many have predicted, entry-level, routine work is especially susceptible to AI-induced displacement; “good enough” might no longer be sufficient as AI raises the bar for excellence. Fortunately, strategic and creative work is “safe” in this context, maintaining a premium value.
- The trust-verification paradox: Trust in AI automation is building, but skilled users continue to recognize how critical verification processes remain. This can perpetuate a dualistic dynamic in which casual users over-trust while skilled users remain more appropriately skeptical (though over-skepticism shouldn’t be discounted either).
- Authenticity-efficiency tradeoff: How willing users are to trade authenticity for efficiency appears to be highly domain- and task-dependent. Writing is a perfect example: it remains a leading use case, yet most users maintain awareness of authenticity and quality concerns.
- Skill development dilemma: The productivity gains offered by AI are clear, and so are the risks of overreliance. What is unclear is whether most users have meaningfully figured out how to balance convenience against the potential for skill atrophy. How an individual uses AI shapes the cognitive consequences they can expect to face.
- Platform specialization tension: General-purpose models like ChatGPT and Claude lead in overall usage, but specialized tools like Cursor and Perplexity aren’t to be overlooked for their domain-specific utility.
AI Usage Predictions
Near-Term
- AI usage patterns will bifurcate sharply: skilled users will become progressively more augmentation-focused while casual users gravitate toward automation-centric workflows. This will also create a perceptual rift at the societal scale, with some user cohorts seeing AI as more than a tool and others viewing it as a mere task executor.
- The specialization wave will accelerate. Purpose-built tools could end up capturing nearly a third of the market share within specific application domains, though this won’t affect general-purpose dominance for cross-domain application and exploratory engagement.
- Enterprise adoption will stall on infrastructure, not capability. This will lead companies to heavily increase spending on data consolidation, contextualization, and access infrastructure. We also expect that API usage will increase in accordance with enterprise infrastructure advancements.
- Education and scientific tasks will see even more rapid growth, driven by academic integrity adaptation, AI-native pedagogy, and research acceleration. This will align with a wider recognition of the benefits AI can offer for the democratization of knowledge.
- Skill atrophy concerns will begin manifesting in measurable outcomes. We’ll see the first cohorts of AI-native professionals entering the workforce, and this will reveal systematic differences in foundational skills, especially for domains like coding and writing. This could force organizations to begin distinguishing between AI-augmented expertise and AI-dependent operationalization.
- As infrastructure constraints dissolve, enterprises will favor API automation even more than they do today, perhaps even to the point that it comprises roughly 90% of all enterprise AI use. This will result in catastrophic accountability and governance failures that will set back enterprise AI trust significantly.
- Quality-task alignment will heavily influence market segmentation dynamics. Some models will specialize in quality sensitivity (e.g., research tools optimizing for accuracy and depth) while others will specialize in volume sensitivity (content generation tools optimizing for speed and variation). This could inspire a profound shift in hiring, with employers asking not “What AI skills do you have?” but instead, “Which domain-specific AI tools are you proficient in?”
- The “tools getting worse over time” pattern will initially intensify but eventually correct itself. The intensification will stem from factors including compute cost optimization, user base expansion, and profit prioritization, but later correction will occur as quality-sensitive users choose to migrate to better platforms (or subscription tiers).
- New and/or emergent capabilities will fuel cascading usage patterns. In other words, one new capability won’t just unlock a single new usage category but likely several, depending on the capability’s versatility. We don’t expect that today’s most prominent use cases will still represent the majority of use cases by 2030.
Long-Term
- The augmentation-automation divide will transform into socioeconomic stratification, resulting in the creation of “AI capability classes.” Wealthy nations and individuals will preserve and intensify augmentation-centric usage, maintaining collaborative mindsets and skill preservation, while the less wealthy will be locked into skill-replacement and task delegation. This anticipated digital divide must be addressed now to prevent an even greater consolidation of power among wealthy and influential actors.
- Knowledge work will enter a new paradigm. Instead of revolving around gathering, interpreting, and documenting information, it will shift toward strategic decision-making, novel problem framing, stakeholder relationship management, and creative synthesis. Simply put, routine information work will no longer be an integral component of full-time human roles.
- The information ecosystem will reach a critical inflection point. Misinformation will continue to amplify and proliferate as AI-generated content exceeds, by a massive margin, human-generated content across most, if not all, digital platforms. To prevent what seems like an imminent information ecosystem collapse accompanied by the collective destruction of trust in information, governments will implement enforceable provenance and authenticity verification standards, with severe penalties for violation.
- Labor market restructuring will complete the first wave; entry-level professionals and knowledge workers who are “good enough” will either have to reskill or radically pivot their career trajectories to avoid obsolescence. We’ll also see the rise of a new professional category, known as “AI orchestrators,” who are tasked with designing and managing AI-augmented workflows. Those whose work is concentrated within strategic, creative, and behavioral domains will remain just as valuable as, if not more valuable than, they are today.
- Academic integrity will transform via structural adaptation. Current assessment crises (e.g., using AI for homework) will resolve as educators recognize that student AI use is inevitable and widespread, while also being extremely difficult to detect reliably. Not only will educational institutions redesign assessments to prioritize knowledge application and synthesis, but they’ll also embed AI as a core component of their curricula.
Critical Uncertainties
Here, we’ll take a close look at the uncertainties revealed by this discussion through the lens of four core categories: technical, socio-behavioral, economic-structural, and regulatory and policy.
Technical Uncertainties
The Plateau Question
Users and researchers alike have raised the possibility that current state-of-the-art systems have already reached their architectural limits, prompting the question of whether fundamental breakthroughs are the only way to achieve the next leap forward. If this is true and no such breakthroughs are made, we expect developers to accelerate their specialization strategies, optimizing existing capabilities to offset the backlash against increasingly slow incremental improvements. While we don’t think large user cohorts will suddenly abandon AI, we do think usage could decelerate substantially as limitations become painfully obvious. We’re not confident either way, and we hope evidence pointing clearly toward either a plateau or continued improvement will emerge within the next year.
The Context Problem
Anthropic’s finding that a 1% increase in input length yields a measly 0.38% increase in output length, especially when combined with Reddit’s persistent sentiments about context loss, forces us to consider whether the context problem is a fundamental limitation or a solvable technical challenge. If solved, we envision an intense acceleration in enterprise adoption and automation, particularly for complex, context-intensive tasks. If unsolved, we don’t see how fragmented workflows could be successfully consolidated, nor how infrastructure constraints would become any less binding than they are today. Currently, we lean toward the view that this is a foundational flaw requiring genuine innovation, not just a “fix.”
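To make that elasticity figure concrete, here is a rough back-of-the-envelope reading. It assumes the relationship behaves like a constant elasticity (a log-log linear fit); that functional form is our assumption, not something Anthropic’s report confirms:

```latex
% Assumed constant-elasticity (log-log) model. The 0.38 elasticity is
% Anthropic's figure; the functional form is our assumption.
%   \log L_{\text{out}} = \alpha + 0.38 \,\log L_{\text{in}}
\[
\frac{\Delta L_{\text{out}}}{L_{\text{out}}} \approx 0.38 \cdot \frac{\Delta L_{\text{in}}}{L_{\text{in}}}
\qquad\Longrightarrow\qquad
L_{\text{in}} \to 2L_{\text{in}} \;\;\text{gives}\;\; L_{\text{out}} \to 2^{0.38}\,L_{\text{out}} \approx 1.3\,L_{\text{out}}
\]
```

Under this reading, doubling the input (a 100% increase) lengthens the output by only about 30%, which helps illustrate why context-heavy tasks run into diminishing returns so quickly.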
Socio-Behavioral Uncertainties
The Skill Atrophy Realization
We’ve predicted that skill atrophy will manifest in measurable outcomes in the near term. Though we stand by this prediction, it’s important to consider that precisely when this happens matters, since an AI-native generation is on the cusp of entering the workforce. We hope this occurs early, within the next 1-2 years, because that would enable a correction toward augmentation-focused usage as degradation becomes tangible and before AI-native cohorts saturate the workforce. If, however, it happens within the next 3 to 5 years or later, we may witness an entire generation enter the workforce with systematically impoverished capabilities: capabilities not anchored in fundamental skills and competencies. Since we have not yet seen education systems meaningfully adapt to this possibility at scale, nor sufficient evidence of professional environments evolving their requirements, we find the latter outcome more likely.
The Trust-Verification Equilibrium
The data shows that automation-focused interactions have grown significantly, which strongly suggests an increase in user trust. However, this directly contradicts widespread and sustained hallucination concerns, which persist almost universally. In this context, we have to ask: will trust continue building as quality metrics improve, or will high-profile failures in consequential domains trigger a correction? At the very least, we think catastrophic failures in high-stakes environments, notably medical, legal, financial, and safety-critical systems, will occur, but we don’t know whether these failures will be (a) plentiful enough and (b) public enough to cause a deep shift in the trust equilibrium. In essence, trust sustainability hinges on whether quality improvements continue to outpace expansion into higher-stakes domains, and on how common and publicized catastrophic failures become.
Economic-Structural Uncertainties
The Geographic Convergence Question
Wealthy nations hold a prominent lead in per-capita usage, and compound advantages indicate that early adopters will continue to discover novel applications while late adopters lag behind, focusing on the basics. The current data suggests this gap will only widen, even as developing nations exhibit steady growth. Can we confidently rely on the correlation between GDP growth and increased usage as a meaningful indicator that this gap might close? This remains to be seen, and we expect that other key factors, specifically declines in compute cost, increased efficiency, and localized model deployments, could play an equally important role in determining whether convergence is feasible, let alone reached. Nonetheless, much more data is necessary before any concrete claims can be made, though we believe we’re in a critical period now, and we’ll soon be able to see whether the current stratification becomes locked in.
The Specialization-Platform Dynamic
General-purpose models decisively dominate overall usage, but specialized tools can provide high utility in domains requiring deep integration and precise capability-task alignment. We’ve already alluded to the possibility that specialized tools could capture significant domain-specific market shares, but it’s also worth entertaining another possibility: major general-purpose platforms could consolidate specialized functions within their offerings instead of providing distinct tool suites. This is already happening with platforms like ChatGPT and Gemini, which have integrated functions like deep research and image generation directly into their commercial models. Even if this dynamic continues, we still think specialized tools are likely to capture a substantial domain-specific market share, mainly because the advantages these precisely tuned tools offer tend to be userbase-centric and tricky to replicate.
The Value Capture Question
If enterprises decide to retain the majority of their AI-induced gains via mechanisms like wage suppression and headcount reduction, this will likely result in more inequality and a decline in consumer purchasing power. Of course, AI could profoundly influence shared prosperity, but only if enterprises leverage their gains to provide higher wages, reduce working hours, and make tangible policy interventions that preserve and augment professional well-being. Unfortunately, we view this latter outcome as desirable but naive, particularly within highly competitive markets underpinned by capitalist incentives. We don’t see this optimistic trajectory realistically unfolding unless a few especially high-profile companies decide to take a chance and radically reshape their economic and social objectives.
Regulatory & Policy Uncertainties
Labor Market Support
Can policy interventions like retraining programs, income support, career counseling, or education subsidies provide meaningful transition support, enabling a labor market that recognizes and addresses the risks of displacement (e.g., extended unemployment, social tension, etc.) before they become widespread? Sadly, we don’t think so, and here are a few reasons why: (1) such policies would require a standardized AI literacy framework with domain-specific AI skills assessments, and no frameworks of this kind currently exist; (2) governments alone can’t guarantee that policy interventions would succeed, and the policy design and implementation process would necessitate cooperation with non-governmental stakeholders, which could lead to collaboration bottlenecks and conflicts of interest; and (3) policymaking is notoriously slow-moving and reactive; by the time certain policies are implemented, they may no longer capture real-world displacement dynamics. There are many more critiques one could offer, but as of now, we find the inadequate-support scenario to be the most likely outcome.
Information Ecosystem Governance
AI-generated information will continue to proliferate across digital ecosystems, and we believe it will comprise the majority of digital content within the next 1 to 2 years. This raises serious concerns not only about a potential information ecosystem collapse and a collective destruction of institutional and individual trust, but also about the data on which future AI systems are trained, most importantly, the impact synthetic data has on AI performance and reliability. Moreover, policy interventions would implicitly operate on the assumption that we can reliably identify and authenticate AI-generated content and somehow measure the proportion of AI-generated to human-generated content with confidence; we find both assumptions unrealistic. While we don’t necessarily think information ecosystem collapse is imminent, we do believe that, at minimum, widespread information degradation will occur, and that some kind of crisis (collective distrust, or something else) will need to materialize to trigger meaningful policy interventions, by which point it may already be too late.
Conclusion
To those readers who enjoyed this piece, stay tuned for our next post in this series. We also recommend exploring Lumenova AI’s blog, where you can find a wealth of resources across numerous topics, including AI governance, safety, ethics, risk management, literacy, and much more. The more technically/experimentally inclined may be interested in our AI experiments, which focus exclusively on testing frontier AI vulnerabilities and capabilities.
For those who have initiated their AI governance journey, we invite you to check out Lumenova AI’s responsible AI platform and book a product demo today, along with our AI risk advisor and policy analyzer.