Updated: August 30, 2025
LLMs produce the statistically most probable output, reflecting the average of the aggregated text for a given prompt; they efficiently summarize existing patterns
This averaging toward the mean does not provide a sustainable competitive advantage; that remains the prerogative of human agency and the ability to generate original ideas from one's unique skillset
Gen AI tools such as LLMs can be useful for leveling everyone up at the beginning of the semester; educational progression should then focus on training divergent thinking and critical assessment of possible outlier cases that are not represented in LLM outputs
In a time when discussions of generative artificial intelligence (AI) are saturated with promises of productivity, we often risk overestimating what this productivity truly entails. The narrative that AI can dramatically accelerate learning, creativity, or even competitive advantage sometimes creates a distorted perception of its actual capacities. What we are left to ask, then, is a more fundamental question: What do we really get when we query a large language model (LLM) and receive an output? At its core, the response we receive is not an expert opinion based on human-like reasoning or discovery, but the statistical aggregation of patterns in language, producing what is most probable rather than what is most original.
Figure 1. Screenshot from the YouTube show 'WIRED Autocomplete Interviews'
Note: LLMs operate on a broadly similar autocomplete principle
At first glance, it may seem as though the system is delivering an authoritative answer or even exercising a form of intelligence comparable to human reasoning. In reality, however, what we receive is the product of a vast aggregation of writing patterns drawn from countless texts available on the web. In this sense, the output reflects not originality or insight, but the convergence of innumerable prior voices into a coherent, averaged response.
Large language models (LLMs) operate by analyzing vast corpora of text to detect patterns of word usage. For instance, if the model has encountered the phrase “peanut butter and …” millions of times, it learns that “jelly” is the most statistically probable continuation, while alternatives like “honey” or “banana” are less common but still possible. Hence, when prompted with a question, the model generates its response token by token, selecting at each step the statistically most probable continuation given the context. The model does not construct its response from an “understanding” of the topic; rather, it selects the combination of words most likely to appear in the digital corpora of written text available to it.
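The token-by-token selection described above can be illustrated with a toy bigram model. This is only a minimal sketch with an invented four-sentence corpus; real LLMs learn distributions over subword tokens with neural networks rather than raw word counts.

```python
from collections import defaultdict

# Invented toy corpus for illustration only.
corpus = ("peanut butter and jelly . peanut butter and jelly . "
          "peanut butter and honey").split()

# Count how often each word follows each preceding word.
bigram_counts = defaultdict(lambda: defaultdict(int))
for prev, nxt in zip(corpus, corpus[1:]):
    bigram_counts[prev][nxt] += 1

def most_probable_next(word):
    """Return the statistically most frequent continuation of `word`."""
    followers = bigram_counts[word]
    return max(followers, key=followers.get)

print(most_probable_next("and"))  # 'jelly' (seen 2 times vs. 1 for 'honey')
```

Chaining such selections word by word is, in highly simplified form, how a generated response emerges from frequency patterns rather than from understanding.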
Note that this essay will not dive into the computational nuances of the machine learning models underlying modern AI. If you are looking for that, we recommend the courses MIS 431, GBUS 738, CS 480, or CS 747, or the Machine Learning and Deep Learning specializations on MOOC platforms.
Because of these underlying statistical mechanisms, the responses of LLMs tend to converge toward what is most probable, or average. This property explains why the outputs often appear coherent and accurate: they represent the central tendency of how a topic is typically discussed. However, this same feature also defines the model’s limitation. By design, it does not prioritize originality or deviation from the mean; it is optimized to reproduce the most likely form of an answer rather than to generate novel insights.
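This convergence toward the most probable answer can be made concrete with a toy continuation distribution (the probabilities below are invented for illustration): greedy selection always returns the mode, and even random sampling remains dominated by it.

```python
import random
from collections import Counter

# Invented continuation probabilities for a single prompt.
continuations = {"jelly": 0.70, "honey": 0.15, "banana": 0.10, "sriracha": 0.05}

def greedy(dist):
    # Deterministic: always pick the single most probable option.
    return max(dist, key=dist.get)

def sample(dist, rng):
    # Stochastic: less common options appear, but only occasionally.
    words, probs = zip(*dist.items())
    return rng.choices(words, weights=probs, k=1)[0]

rng = random.Random(0)
print(greedy(continuations))  # always 'jelly'

counts = Counter(sample(continuations, rng) for _ in range(1000))
print(counts.most_common(1)[0][0])  # the modal continuation dominates
```

The point of the sketch is the asymmetry: nothing in either selection rule rewards the rare or the novel; the machinery is optimized to reproduce the central tendency.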
For students, this has significant implications for how such tools influence learning and competitive positioning. LLMs are particularly effective for enriching one’s understanding of a subject to the level of average competence, providing clarifications and filling knowledge gaps in ways that can be immediately useful. Yet, because these outputs reflect common patterns of discourse, they cannot provide an inherent competitive advantage. If all learners have access to the same tool, and if that tool reliably reproduces “average” ideas, then relying solely on its output will place one at the same level as others rather than offering distinctiveness.
Understanding the fundamental properties of the mechanisms underlying generative AI can help alleviate the pressure on both learners and educators, providing clarity on how best to approach the use of gen AI tools in the curriculum.
Implications for learners
As a student, you may use generative AI as a resource to bring your understanding up to the level of the informed average. Minding your own knowledge gap is the key step in this process. By recognizing where your understanding is incomplete and deliberately seeking clarification, you can direct the model to provide structured, aggregated responses that address those gaps. Because of the ubiquity and ease of use of these tools – where a plain-English question can yield an organized explanation without the need for extensive searching or processing – the task of filling in missing knowledge is now easier than ever. However, it is easy to be trapped in the illusion of information accessibility: do not expect that reproducing the answers provided by an LLM, however polished they may sound, will enable you to differentiate yourself. Remember: a tool that reliably reproduces “average” ideas will, on its own, place you at the same level as everyone else.
A recent study illustrates this equalizing effect in the context of an art creativity contest. Researchers found that AI tools significantly improved the performance of less-skilled artists, helping them close gaps in technique and presentation. By contrast, highly skilled artists benefited little or not at all, since the model’s outputs could not extend their abilities beyond the average level. The net result was that AI raised the floor but did not lift the ceiling, leveling participants toward a common baseline rather than creating new opportunities for distinction.
Figure 3. Illustrative Example of Extraordinary Achievements as Outliers Rather than Good Average Performance
Implications for educators
As educators, we should remain attentive to the capacity of generative AI to reduce heterogeneity in students’ knowledge backgrounds. When used as a foundation-building tool, it can help level the playing field and, given the constraints of the academic calendar, serve as a valuable time saver at the beginning of a semester. At the same time, we must also recognize its tendency to homogenize responses and thereby weaken the signaling power of generalized questions traditionally used to assess proficiency. The encouraging point is that genuine expertise and critical thinking emerge through engagement with a variety of cases – including outlier cases – that LLMs are unlikely to reproduce. By redesigning assignments to highlight divergent rather than convergent thinking, educators can establish new signaling mechanisms that differentiate student outcomes and elevate the overall quality of learning.
Scientific evidence suggests that much of the perceived productivity of generative AI may be illusory. Grounded in the core statistical principles of pattern recognition and predictive analytics, this article has highlighted the genuine benefits of generative AI in making information acquisition faster, cheaper, and more accessible. At the same time, we have emphasized the risk of mediocrity – of converging to the mean – if one relies solely on AI tools in career or skill development. In contexts such as job applications, where differentiation matters, competitive advantage arises from the ability to generate unique ideas, insights, and perspectives. Once such distinct contributions are in place, AI can serve as a powerful assistant, helping to articulate thoughts with greater clarity, structure, and precision. Ultimately, human agency remains the decisive factor in determining whether technology elevates one’s work beyond the average.
Acemoglu, D. (2024, May 21). Don’t believe the AI hype. Project Syndicate. https://www.project-syndicate.org/commentary/ai-productivity-boom-forecasts-countered-by-theory-and-data-by-daron-acemoglu-2024-05
Spence, M. (1973). Job market signaling. The Quarterly Journal of Economics, 87(3), 355–374. https://doi.org/10.2307/1882010
Zhou, E., & Lee, D. (2023). Generative artificial intelligence, human creativity, and art. PNAS Nexus, 3(3), pgae052. https://doi.org/10.1093/pnasnexus/pgae052. Available at SSRN: https://ssrn.com/abstract=4594824
DeepLearning.AI. (n.d.). Machine learning specialization. https://www.deeplearning.ai/courses/machine-learning-specialization/
DeepLearning.AI. (n.d.). Deep learning specialization. https://www.deeplearning.ai/courses/deep-learning-specialization/
Please cite this article as:
Petryk, M. (2025, August 30). Learning With, Not Competing Through, Large Language Models. MariiaPetryk.com. https://www.mariiapetryk.com/blog/post-21