One-Line Summary
This paper introduces the concept of "epistemic boundedness" to analyze how generative AI fundamentally transforms — rather than merely extends — the knowledge limitations of public administrators, arguing that AI-augmented decision-making requires entirely new institutional safeguards beyond those designed for traditional bounded rationality.
Background & Motivation
Public administrators increasingly turn to generative AI tools such as large language models (LLMs) to support policy analysis, document drafting, and decision-making. While Herbert Simon's classic notion of "bounded rationality" (1947) has long explained the cognitive limits of human decision-makers — limited information, finite processing capacity, time pressure — the integration of AI introduces a qualitatively distinct set of epistemic challenges that existing frameworks do not capture.
Simon's foundational insight was that human decision-makers lack the cognitive resources to identify all alternatives, evaluate all consequences, and select the optimal course of action. Instead, they employ heuristics and satisfice — choosing the first option that meets a minimum threshold of acceptability. Subsequent scholarship by March and Simon (1958), Lindblom (1959), and others extended this into theories of incrementalism and organizational decision-making. For decades, information technology was understood as a tool that could push back the boundaries of rationality: databases provided more information, spreadsheets enabled faster calculation, and decision-support systems structured choices more clearly. Generative AI, however, breaks this paradigm.
The Core Problem: From Bounded Rationality to Epistemic Boundedness
- Bounded rationality (Simon, 1947): Decision-makers have limited cognitive capacity to search for, process, and evaluate information — they "satisfice" rather than optimize. The bottleneck is the decision-maker's processing capacity.
- AI promise: Generative AI can process vast corpora, draft policy documents, summarize regulations, and generate analyses at superhuman speed, seemingly overcoming bounded rationality by offloading cognitive labor to machines.
- The hidden trap: AI outputs are fluent, confident, and authoritative-sounding — yet may be factually incorrect (hallucination), biased by training data distributions, or lacking contextual understanding of specific administrative and legal settings. Unlike a calculator or database, the AI generates knowledge rather than merely retrieving it, and the generation process is opaque.
- Epistemic boundedness: Decision-makers cannot assess the reliability and validity of AI-generated knowledge itself. Unlike traditional bounded rationality, the limitation is not about processing more information but about knowing whether the information is trustworthy. The bottleneck shifts from the decision-maker's cognitive capacity to the epistemic opacity of the AI system.
This gap is especially dangerous in public administration, where decisions carry legal authority, affect citizens' rights, and demand democratic accountability. Consider concrete scenarios: an LLM-drafted policy memo may contain fabricated legal precedents or mischaracterized regulatory requirements that an administrator without legal expertise cannot detect — yet the memo's professional tone provides false confidence in its accuracy. A chatbot deployed for citizen services may provide confidently incorrect information about benefit eligibility, leading to real harm. A risk-assessment summary generated by AI may systematically underweight factors relevant to minority communities because of biases in training data. In each case, the administrator faces not a shortage of information but an inability to evaluate the epistemic status of the information they have received.
The paper situates this challenge within broader debates about algorithmic governance and the automation of discretion in public institutions. Unlike earlier forms of IT that augmented human capacity while leaving the locus of judgment intact, generative AI threatens to shift the locus of knowledge production from human experts to opaque statistical models — a shift with profound implications for the legitimacy and accountability of administrative action.
Proposed Method: Theoretical Framework Development
The paper constructs a multi-layered theoretical framework that bridges public administration theory, epistemology, and AI scholarship to systematically characterize the new knowledge challenges posed by generative AI in governance. The contribution is conceptual and normative rather than computational or empirical: the paper develops new analytical vocabulary and institutional design principles through rigorous theoretical analysis grounded in the public administration, philosophy of science, and AI safety literatures.
1. Conceptual Analysis of Epistemic Boundedness
The paper formally distinguishes epistemic boundedness from bounded rationality along three dimensions: (a) source of limitation — cognitive capacity vs. inability to verify AI-generated knowledge; (b) nature of the problem — information scarcity vs. information opacity; and (c) remediation strategy — heuristics and satisficing vs. institutional verification and epistemic governance. A critical insight here is the paradox of augmentation: while bounded rationality can be partially addressed by providing more information, epistemic boundedness is amplified by more AI-generated information because each additional output carries unassessable reliability. The paper draws on philosophical epistemology — particularly the distinction between justified true belief and mere true belief — to argue that AI-generated outputs may be true but cannot be epistemically justified by the administrator who uses them, undermining the knowledge basis of administrative action.
2. Taxonomy of AI Failure Modes in Administrative Contexts
The paper systematically maps how generative AI's known failure modes manifest specifically in public administration scenarios, constructing a detailed taxonomy with five categories:
(i) Hallucination — fabrication of plausible but nonexistent facts (e.g., citing judicial rulings that do not exist in drafted legal opinions, inventing statistical data in policy analyses);
(ii) Training data bias — systematic skewing toward overrepresented viewpoints (e.g., welfare policy recommendations reflecting majority-group perspectives while marginalizing minority needs);
(iii) Temporal knowledge gaps — inability to reflect recent legislative changes, court rulings, or policy updates beyond the training data cutoff;
(iv) Lack of situational awareness — failure to account for local institutional norms, political dynamics, and jurisdiction-specific regulations that shape real administrative practice;
(v) Sycophantic tendencies — generating outputs that confirm the user's apparent preferences rather than providing critical or balanced analysis.
For each failure mode, the analysis traces the causal chain from technical limitation to administrative risk to democratic harm: e.g., hallucination → fabricated legal citation → unlawful policy recommendation → citizen rights violation → erosion of public trust.
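The taxonomy is conceptual, but it maps naturally onto a machine-readable form that an agency could use to drive reviewer checklists. The sketch below is our illustration rather than anything proposed in the paper; the class, field names, and review prompts are all hypothetical:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class FailureMode:
    """One category from the paper's taxonomy of AI failure modes."""
    name: str
    technical_limitation: str   # root cause in the model
    administrative_risk: str    # how it surfaces in practice
    review_check: str           # question a human reviewer must answer

# Hypothetical encoding of the five categories; the review_check
# prompts are illustrative, not prescribed by the paper.
TAXONOMY = [
    FailureMode("hallucination",
                "fabrication of plausible but nonexistent facts",
                "fabricated legal citations or invented statistics",
                "Has every citation and figure been traced to a primary source?"),
    FailureMode("training_data_bias",
                "skew toward overrepresented viewpoints",
                "recommendations that marginalize minority needs",
                "Have equity implications been reviewed for affected groups?"),
    FailureMode("temporal_knowledge_gap",
                "no knowledge past the training cutoff",
                "stale references to superseded law or policy",
                "Has currency been checked against current legal sources?"),
    FailureMode("situational_blindness",
                "no awareness of local norms and rules",
                "advice that ignores jurisdiction-specific regulation",
                "Has a local domain expert confirmed contextual fit?"),
    FailureMode("sycophancy",
                "outputs that mirror the user's apparent preferences",
                "one-sided analysis that confirms prior positions",
                "Has a counter-argument or dissenting view been solicited?"),
]

def review_checklist() -> list[str]:
    """Derive a reviewer checklist directly from the taxonomy."""
    return [f"[{m.name}] {m.review_check}" for m in TAXONOMY]
```

Encoding the taxonomy this way ties each review question to a named failure mode, so a checklist stays synchronized with the risk categories it is meant to catch.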
3. Analysis of Epistemic Asymmetry
The paper examines the structural power imbalance created when administrators depend on AI systems whose reasoning they cannot inspect or evaluate. This "epistemic asymmetry" is analyzed through three interconnected lenses:
(a) Technical opacity: Transformer-based LLMs with billions of parameters operate as black boxes for non-technical users. Even "explainable AI" techniques provide post-hoc rationalizations rather than genuine insight into the generative process, leaving administrators unable to distinguish reliable outputs from unreliable ones.
(b) Authority effect: The professional, well-structured, and confidently assertive tone of AI-generated text creates a cognitive bias that the paper terms the "epistemic authority illusion" — administrators are less likely to question AI outputs than equivalent human-drafted text, even when the AI output has a higher error rate. This mirrors findings in automation bias research (Parasuraman & Manzey, 2010) but is amplified by the natural-language fluency of modern LLMs.
(c) Expertise erosion: When AI replaces rather than supplements human analysis, institutional knowledge atrophies over time. Administrators who routinely defer to AI lose the domain expertise needed to detect AI errors, creating a vicious cycle of increasing dependence and decreasing oversight capacity.
4. Institutional Design Principles for Epistemic Governance
Drawing on institutional theory (North, 1990; Ostrom, 2005) and democratic accountability frameworks, the paper proposes four governance principles for responsible AI integration in public administration:
(a) Structured verification protocols: Mandatory domain-expert review of AI-generated outputs before they inform decisions, with checklists targeting the most common failure modes (factual accuracy, legal validity, equity implications); one possible encoding of this principle together with (b) is sketched after the list.
(b) Epistemic accountability mechanisms: Clear assignment of responsibility for AI-assisted decisions to identifiable human decision-makers, preventing the diffusion of accountability across human-AI systems. This includes documentation requirements specifying which outputs were AI-generated and which were human-verified.
(c) Transparency requirements: Making AI's role in administrative decision-making visible to both internal stakeholders and affected citizens, including disclosure when public-facing communications or policy analyses have been AI-generated or AI-assisted.
(d) Capacity-building programs: Ongoing training and institutional investment to maintain human analytical skills alongside AI augmentation, ensuring that administrators retain the expertise needed to evaluate and override AI outputs when necessary. This includes periodic "AI-free" exercises to prevent skill atrophy.
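As a minimal sketch (again our illustration, with hypothetical names throughout, not the paper's specification), principles (a) and (b) could be operationalized as a provenance record that blocks any AI-generated artifact from informing a decision until an identifiable human reviewer has completed the required checks:

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone

@dataclass
class AIOutputRecord:
    """Provenance record for one AI-generated artifact.

    Illustrates two of the paper's principles: (a) structured
    verification and (b) epistemic accountability, i.e. a named
    human is responsible for each verified output.
    """
    artifact_id: str
    model_name: str                       # which system produced the text
    generated_at: datetime
    ai_generated: bool = True             # transparency: flag AI provenance
    checks_passed: dict[str, bool] = field(default_factory=dict)
    verified_by: str | None = None        # identifiable human decision-maker

    REQUIRED_CHECKS = ("factual_accuracy", "legal_validity", "equity_implications")

    def verify(self, reviewer: str, results: dict[str, bool]) -> None:
        """Record a domain-expert review; accountability attaches to `reviewer`."""
        missing = set(self.REQUIRED_CHECKS) - results.keys()
        if missing:
            raise ValueError(f"review incomplete, missing checks: {missing}")
        self.checks_passed = dict(results)
        self.verified_by = reviewer

    @property
    def usable_for_decision(self) -> bool:
        """True only after a named human has passed every required check."""
        return (self.verified_by is not None
                and all(self.checks_passed.get(c, False) for c in self.REQUIRED_CHECKS))

# Hypothetical usage: an LLM-drafted memo cannot inform a decision
# until a named reviewer records all three checks.
record = AIOutputRecord("memo-041", "some-llm", datetime.now(timezone.utc))
record.verify("j.doe@agency.gov", {
    "factual_accuracy": True,
    "legal_validity": True,
    "equity_implications": True,
})
assert record.usable_for_decision
```

The gate in `usable_for_decision` mirrors the paper's mandatory human-in-the-loop checkpoint: nothing AI-generated informs a decision until a specific, named reviewer has signed off on each failure-mode check, which also prevents the diffusion of accountability the paper warns about.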
Key Arguments & Findings
Comparison: Bounded Rationality vs. Epistemic Boundedness
| Dimension | Bounded Rationality (Simon) | Epistemic Boundedness (This Paper) |
|---|---|---|
| Source | Human cognitive limits | Opacity of AI-generated knowledge |
| Core Problem | Cannot process all available information | Cannot verify reliability of AI outputs |
| Information Effect | More information helps (up to a point) | More AI output amplifies the problem |
| Decision Strategy | Satisficing with heuristics | Deferred judgment to opaque systems |
| Remediation | Better decision aids, training | Institutional safeguards, epistemic governance |
| Accountability | Clear human responsibility | Diffused across human-AI interaction |
- AI does not merely reduce bounded rationality: While AI can process vast amounts of information, it simultaneously introduces new epistemic limitations that administrators are poorly equipped to detect or correct. The net effect on decision quality is ambiguous, not uniformly positive. The paper argues this represents a qualitative shift in the nature of decision-making constraints, not simply a quantitative change.
- Hallucination as systemic epistemic risk: Generative AI's tendency to produce plausible but unfounded outputs — fabricated legal precedents, nonexistent regulations, invented statistics — poses a unique threat in administrative settings where decisions carry binding legal and social consequences. Unlike errors of omission (missing relevant information), hallucinations are errors of commission that actively introduce false knowledge into decision processes.
- Epistemic asymmetry creates dependency: Administrators typically lack the technical expertise to evaluate when AI outputs are reliable versus misleading, creating an asymmetric dependence on opaque systems. Over time, this dependency can erode the institutional knowledge base that would otherwise serve as a check on AI errors — a phenomenon the paper describes as a "deskilling spiral."
- Authority effect of AI-generated text: The professional, confident tone of LLM outputs creates an "epistemic authority illusion" that makes administrators less likely to question or verify AI-generated content compared to human-drafted alternatives, even when error rates are comparable or higher. This effect is compounded by the speed advantage of AI: time-pressured administrators are incentivized to accept AI outputs uncritically.
- The paradox of augmentation: More AI-generated information does not resolve epistemic boundedness but amplifies it, because each additional AI output carries unassessable reliability. Unlike bounded rationality, where more data generally helps, epistemic boundedness worsens as AI is used more extensively.
- Institutional safeguards, not just technical fixes: Effective AI integration requires governance mechanisms — structured verification protocols, domain-expert review stages, epistemic accountability frameworks, and mandatory human-in-the-loop checkpoints — rather than relying solely on improving model accuracy. The paper emphasizes that technical solutions (e.g., better retrieval-augmented generation, improved calibration) are necessary but insufficient.
- Democratic accountability at stake: When AI-assisted decisions go wrong, the diffusion of responsibility between human administrators and AI systems threatens the democratic principle that public officials must be accountable for their decisions. The paper warns of an emerging "accountability gap" where neither the AI developer, the AI system, nor the administrator bears clear responsibility for erroneous outcomes.
- Implications for administrative law: Existing legal frameworks governing administrative discretion, due process, and reasoned decision-making were designed for human decision-makers. The paper argues that AI-augmented decision-making may require new doctrinal developments to ensure that administrative actions remain legally defensible and procedurally fair when substantially informed by AI outputs.
Why It Matters
As governments worldwide accelerate AI adoption in public services — from the U.S. federal government's executive orders on AI to the EU AI Act's regulation of high-risk AI systems in public administration — this paper provides a critical conceptual lens for understanding why "more AI" does not automatically mean "better decisions." The epistemic boundedness framework makes several key contributions:
- New theoretical vocabulary: By distinguishing epistemic boundedness from bounded rationality, the paper gives public administration scholars and practitioners precise language for identifying and analyzing the distinctive risks of AI-augmented governance — risks that are invisible through the lens of traditional decision theory alone. This is the first systematic theoretical treatment of how generative AI transforms, rather than merely extends, the epistemological foundations of administrative decision-making.
- Actionable governance framework: The proposed institutional design principles — verification protocols, epistemic accountability, transparency requirements, and capacity building — offer concrete, implementable guidance for agencies seeking to integrate AI responsibly rather than reactively. Each principle is tied to specific failure modes and risk pathways, making it practical for organizational implementation.
- Interdisciplinary bridge: Connecting public administration theory (Simon, March, Lindblom) with AI epistemology, philosophy of science, and computer science, the paper opens a productive dialogue between fields that have largely developed their analyses of AI governance in isolation, advancing our understanding of how technical and institutional dimensions interact.
- Reframing the AI-government relationship: The paper challenges the prevailing "efficiency narrative" that treats AI as a neutral tool for reducing costs and processing times. By centering epistemic risk rather than operational efficiency, it redirects attention to the knowledge-quality and accountability dimensions that are ultimately more consequential for democratic governance.
Rather than treating AI as a neutral efficiency tool, the paper calls for deliberate institutional design that preserves human judgment and democratic accountability in the face of AI's epistemic opacity. The framework has immediate practical relevance: it can inform the design of AI governance policies at the agency level, guide procurement standards for AI tools in government, and shape training programs for public servants who will increasingly work alongside generative AI. As LLM capabilities continue to expand and AI adoption in government accelerates, the need for principled epistemic governance will only grow more urgent.
Domain: LLM, Reasoning