Ethics

AI Is Rewriting How Organizations Know

Large language models act as 'epistemic monsters' that challenge traditional knowledge theories, forcing companies to rethink inquiry, validation, and accountability in the age of intelligent technology.

AI Research
March 26, 2026
3 min read

Large language models (LLMs) are not just tools for automating tasks; they are fundamentally reshaping how organizations create and use knowledge. In a forthcoming paper in Strategic Organization, researchers Samer Faraj, Joel Perez Torrents, Saku Mantere, and Anand Bhardwaj argue that LLMs disrupt long-held assumptions about knowledge in business, challenging both the idea that knowledge can be stored as an object and the view that it emerges solely from human practice. This shift forces companies to confront new risks and opportunities, as these AI systems generate insights through statistical patterns rather than human understanding, transforming everything from strategic decision-making to daily workflows.

The researchers found that LLMs act as 'analogy engines,' generating connections between concepts by analyzing vast amounts of text data. Unlike humans, who rely on embodied experience and causal reasoning, LLMs use vector operations in high-dimensional spaces to identify similarities, such as mapping 'king' to 'queen' based on statistical co-occurrence. This allows them to produce analogies that vary along two dimensions, surface versus deep mappings and near versus far domains, yielding four distinct types. For example, surface-near analogies, like comparing a KPI dashboard to a car's instrument panel, aid quick comprehension but may reinforce clichés, while deep-far analogies, such as linking IT security to immunology, offer innovative reframings but risk hallucinations or misleading mappings. This capability expands organizational knowing by revealing hidden patterns, but it also introduces epistemic uncertainties that require careful management.
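To make the 'analogy engine' mechanics concrete, the sketch below runs the classic king-to-queen example as vector arithmetic. It is a minimal illustration, not the paper's method: the four-dimensional embedding values are invented for demonstration, whereas real models learn embeddings with hundreds or thousands of dimensions from text co-occurrence.

```python
import numpy as np

# Toy 4-dimensional embeddings with invented values (illustrative only;
# real models learn high-dimensional vectors from statistical co-occurrence).
vectors = {
    "king":  np.array([0.9, 0.8, 0.1, 0.3]),
    "queen": np.array([0.9, 0.1, 0.8, 0.3]),
    "man":   np.array([0.1, 0.9, 0.1, 0.2]),
    "woman": np.array([0.1, 0.1, 0.9, 0.2]),
}

def cosine(a, b):
    """Cosine similarity: the standard measure of closeness in embedding space."""
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# The classic analogy as vector arithmetic: king - man + woman
# should land nearest to queen in the embedding space.
target = vectors["king"] - vectors["man"] + vectors["woman"]
best = max(vectors, key=lambda word: cosine(vectors[word], target))
print(best)  # queen
```

Run as-is, the script prints queen. Swapping in embeddings from an actual model is what lets the same arithmetic surface far less obvious, and far less reliable, mappings.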

To understand this transformation, the paper draws on philosopher Donna Haraway's concept of the 'monster': a boundary-crossing entity that destabilizes categories like human/machine and tacit/explicit knowledge. The researchers position LLMs as such monsters because they blur traditional distinctions in organizational epistemology. They analyzed how LLMs challenge two dominant perspectives: representationalism, which treats knowledge as a codifiable resource stored in technologies, and practice-based knowing, which views knowledge as emerging from embodied, situated human activities. LLMs undermine both by generating knowledge without human intention or participation in practice, dissolving the line between storage and inference and producing outputs that mimic expertise without grounding in lived experience.

The implications of this are profound, as highlighted in the paper's analysis of three key shifts. First, LLMs transform inquiry by widening the scope of inference beyond human expertise, but they also risk producing plausible yet unverifiable outputs that can lead to knowledge drift or information overload. Second, the need for dialogical vetting becomes critical: a recursive process in which humans must continually question, interpret, and test AI-generated insights against contextual knowledge to ensure relevance and accuracy. Third, agency is redistributed, blurring the line between tool and collaborator and raising questions about authorship, accountability, and responsibility when organizational actions are guided by machine-generated analysis. These shifts require organizations to develop new practices for validation and decision-making.

Despite their potential, LLMs come with significant limitations. The paper notes that their analogical reasoning is based purely on statistical patterns, lacking the intentionality, causal understanding, or embodied grounding of human cognition. This can lead to outputs that are superficially convincing but fundamentally ungrounded, especially in deep-far analogies where hallucinations are a risk. Additionally, the researchers caution that LLMs may erode tacit knowledge by bypassing the slow, experiential learning that underpins human expertise. Organizations must therefore balance the generative possibilities of these systems with rigorous vetting and a recognition of their epistemic risks, as blindly adopting AI insights could undermine long-term knowledge development and strategic coherence.

Original Source

Read the complete research paper

View on arXiv

About the Author

Guilherme A.

Former dentist (MD) from Brazil, 41 years old, husband, and AI enthusiast. In 2020, he transitioned from a decade-long career in dentistry to pursue his passion for technology, entrepreneurship, and helping others grow.

Connect on LinkedIn