What happens to human agency when intelligence is no longer exclusively human?
We are entering an era in which artificial intelligence does more than assist: it recommends, generates, evaluates, and increasingly shapes the informational environments in which we think and decide. The question is no longer whether AI will influence human life, but how that influence will transform responsibility, autonomy, and judgment itself.
This book introduces a rigorous and urgently needed framework for understanding human-AI interaction: coalescent agency. Moving beyond simplistic debates about "tool vs. agent," it offers a precise philosophical and operational model for assessing when AI systems preserve, enhance, or erode meaningful human control.
Drawing from philosophy of mind, cognitive science, ontology, and biosemiotics, the author develops measurable criteria (domain understanding, critical evaluation capacity, override authority, and responsibility attribution) that allow designers, policymakers, and researchers to evaluate real-world human-AI configurations. Rather than speculative claims about machine consciousness, this work delivers analytical clarity and practical relevance.
At a moment when AI systems generate text, images, code, and strategic advice with unprecedented fluency, the stakes could not be higher. Will these systems expand human capability while safeguarding accountability, or will they subtly displace the very conditions of human agency?
This book provides the conceptual tools to answer that question.