What I Learned About Creativity, Agency, and Myself by Trying to Write a Book With AI
The Party Favor
At a holiday party, I heard my name before I heard the sentence that followed it.
I was standing near the kitchen, half in a conversation about work and kids, when my wife’s voice cut through the background noise. Not loud. Just clear.
“I like his writing better.”
That was it. Five simple words. No edge in her tone. No drama. Just an honest answer to a question I hadn’t heard.
My first reaction was anger, even if it only showed up as a polite smile from across the room. In my head I was arguing with her instantly. This is my writing. Every page started with my idea. My story. My prompts. My edits. I wasn’t a bystander who fed a topic into a machine and hit publish. I had driven this thing.
That little legal defense team in my head went to work fast. She didn’t mean it that way. She just likes your older work because it feels familiar. People resist new things. Besides, other people have liked it. You are overreacting.
None of that changed the fact that she had said it without hesitation. “I like his writing better.” Not the plot. Not the concept. The writing.
I went home that night with the usual post-party fatigue, but my mind kept cycling through lines from the book. Phrases I had flagged as slightly off and then let slide. Scenes I had meant to tighten. Sections where the voice sounded more like a very competent stranger than like me.
I had told myself that was the tradeoff for working this way. I was moving faster, experimenting, building something “with AI.” A creative experiment. A statement, even. Of course it would feel a little different.
Every other Saturday, my oldest daughter and I do FaceTime coffee. She lives in Michigan, so it’s our little ritual. No big agenda. Just a steady connection where we talk about family, faith, life, what we’re working on, what we’re worried about, and what’s funny.
That morning, she brought up the audiobook.
“I really love the story,” she said. “The idea is so good.”
There was a pause. I knew that pause. The same one I use when I’m about to give a client honest feedback.
“Can I be straight with you?” she asked.
“Always,” I said, even though a part of me wanted to say no.
“I don’t think the writing is your best. It doesn’t sound like you. The story is there, but the way it’s told feels off. I think you should actually write it.”
I tried to play dumb. “What do you mean, actually write it? I did write it.”
She smiled in that way grown kids do when they love you but are not buying your story.
“I know you worked with AI on it,” she said. “I know how you did it. But there are moments where the language feels thin. Repetitive. Some lines sound like AI. The story is strong, but I miss your voice. I think the story deserves your writing. The real version.”
She started pointing out specific spots she remembered. A scene that should have hurt more than it did. A description that tried too hard. A piece of dialogue that sounded fine but didn’t really serve a purpose. None of it surprised me. Every example she gave was something I had noticed at some point and chosen to live with.
She wasn’t attacking the project. She was honoring it. And me. That was what made it harder to shrug off.
When we finished talking, I told her I appreciated the honesty. I meant it. I also felt that same argument spin up inside my head again. Other people like it. The story matters. Nobody else will care about the wording at this level. You are being precious.
But it was simple. Two of the people closest to me, who knew my writing best, were telling me the same thing from different angles. The story was strong. The writing was not mine.
The next morning I woke up early. The sky outside was still dark. I made coffee, opened my laptop, and logged into my Amazon KDP account.
There was no dramatic soundtrack. No shaking hands over the keyboard. Just a quiet office and a very practical set of publishing menus.
I found the title, clicked into the settings, and followed the steps to unpublish. Confirm this. Save that. Are you sure.
Yes.
In less than a minute, the book was gone from the store. The Kindle version, the paperback, the hardback, the audio edition that had just gone live. A year of work, dozens of versions, all the launch energy. Gone with a few clicks.
I closed the laptop and sat there for a while, listening to the quiet of the office and the faint sounds of my wife upstairs. I didn’t feel crushed. I felt exposed, then oddly calm.
The experiment was over.
If this book was going to exist, I was going to have to write it myself.
What I Thought I Was Proving
Before I ever called it an experiment, I had a simple belief about creativity.
Story is the core of it.
A human being takes something they’ve lived, something they’ve seen, something they’ve survived, and they shape it into a form that can move into another human being. Narrative. Poetry. Music. A painting. A joke. Even a well-told anecdote over dinner. The medium changes, but the transfer stays the same. One person reaches across the gap and says, “This happened to me,” or, “This is what it felt like,” or, “This is what I’ve come to believe.” And the other person recognizes something in themselves.
That recognition is the point.
I’m not interested in creativity as decoration. I’m interested in it as communication. Not marketing communication, either. Human communication. The kind that bypasses arguments and hits the heart.
That’s why I’ve never been impressed by output alone. Words on a page can be technically fine and still empty. A song can be perfectly produced and still feel like nothing. You can follow all the rules and still miss the human.
So when AI got good enough to write clean paragraphs, I didn’t immediately think it was going to replace writers.
I thought something else.
What happens when a system can mimic the surface of story, but it doesn’t have a life behind it?
Because that’s the strange thing about AI writing. It can sound like somebody. It can sound like everybody. It can sound like me on a good day if I’m not paying attention. But it doesn’t have a childhood. It doesn’t have regret. It doesn’t have a marriage. It doesn’t have a body that gets tired or a conscience that flares up when a sentence is dishonest.
It doesn’t know what it costs to say something true.
I wasn’t trying to prove that AI is useless. I already knew it was useful. I use it constantly for research, synthesis, brainstorming, thinking through structure. It’s an extraordinary tool.
What I wanted to test was narrower and more practical.
How far could I push AI in a creative endeavor, and where would collaboration become real instead of theoretical? What does working “together” actually look like?
The First Trial
The first time I tested this idea was in October of 2023.
Back then, it wasn’t MacBeth.exe. It wasn’t a project with a workflow and a knowledge base. It was a curiosity. I opened ChatGPT and gave it a simple prompt: retell Macbeth in a near future where AI assistants are everywhere.
It came back fast. A title. A plot summary. A modern cast.
Mac was an ambitious corporate guy. The AI assistant was Beth. The advice started helpful and became increasingly persuasive. The moral descent was there on paper. The tragedy was recognizable. The story had the right bones.
On the surface, it worked.
Then I asked it to write.
It produced something like a twenty-page mini-book. It had scenes. It had dialogue. It had plot. It did the thing people now casually call “writing a story.”
And I remember finishing it and feeling almost nothing.
It wasn’t bad in the way a middle school essay is bad. It was competent. It was readable. It just didn’t connect. When I tried to recall it later, I couldn’t. Not because my memory is terrible, but because nothing in it had enough weight to leave a mark.
That was my first real clue.
The system understood parallels. It understood structure. It understood the idea of ambition, betrayal, consequence. It could line those pieces up and make them look like a story.
But it didn’t understand why any of it mattered.
Mac’s choices didn’t feel like choices. They felt like plot points. Beth’s influence didn’t feel like seduction. It felt like a device. The language moved forward, but it didn’t carry pressure.
So I put the project down.
Not because I thought the concept was dead. The concept stayed with me. The Mac and Beth pairing stayed with me. The idea that the modern version of a whispering force isn’t a spouse in a candle-lit castle but a voice in your pocket, always present, always ready with an answer. That part felt true, and it stayed in the back of my mind.
What I set aside was the illusion that a good outline equals a good story.
Then two years passed, and I did what creative people always do.
I thought, maybe I could make it work if I did it the right way.
Why I Came Back Two Years Later
I didn’t come back to Mac and Beth because I suddenly believed AI had become a writer.
I came back because the idea was still sitting there, unfinished, and because the tools had matured just enough to make a real experiment worth running. The first attempt had shown me something true. AI could produce a narrative that looked like story. It could hit the shape. It could move a plot forward. It could generate competent prose on demand.
It also showed me how forgettable that competence can be when there’s no human weight behind it.
That didn’t kill the concept for me. It clarified what the concept needed. If this was going to work, it wasn’t going to work as a one-off prompt and a quick output. It would have to be built the way real books are built, with deliberate choices, repeatable structure, and revision that actually changes something.
By the time I returned to it, I was thinking less like someone playing with a tool and more like someone trying to pressure-test a workflow. I wanted to know how far I could push AI inside a creative endeavor without it turning into either a gimmick or a mess. I wanted to find the edge of the relationship.
Where does “co” actually show up?
Because “co-writing” can mean a lot of things. It can mean you write a book and the AI offers a handful of wording alternatives. It can mean the AI generates whole scenes and you edit them. It can mean you act as a director and the system acts as an executor. I wasn’t trying to land on a moral conclusion at that point. I was trying to find the line in the real world.
So I approached it like a process experiment.
I started bouncing concepts around, working on characters and setting, and getting clear on what the story actually was. I read and re-read thrillers, trying to understand what made that genre work when it worked. I mapped an arc, and I started making decisions that felt less like brainstorming and more like committing.
Then I built a system around it.
I broke the book into chapters, and each chapter into five self-contained segments. Token limits forced that constraint, but I also liked what the constraint did. It made the work manageable. It made it repeatable. It gave me a steady unit of progress that I could actually control.
Every segment began with a structured handoff prompt. That prompt carried the narrative context, the emotional target for the scene, the style rules, and the guardrails around what had already been decided, so the model wasn’t starting from zero every time. It sounds mechanical when I describe it, but it was the opposite of lazy. I was trying to make sure the work stayed aligned and didn’t drift into generic language or inconsistent character behavior.
Looking back, this is where the experiment quietly got serious. Not because I was making a grand statement, but because I built a workflow sturdy enough to produce a whole book.
And once you can produce a whole book, the temptation changes. It stops being, can this be done, and becomes, how quickly can I get to done.
That part came later, but the conditions for it were set right there, in the moment I stopped prompting casually and started building a machine that could ship.
Building the Machine
Once I decided this wasn’t going to be a one-off prompt experiment, I had to treat it like production.
That meant I needed constraints, a repeatable unit of work, and a way to keep the story from drifting every time I opened a new chat window.
So I did what I always do when I’m trying to build something complicated. I stopped thinking about the whole and started breaking it down into parts I could actually control.
First came the story arc. Not prose. Structure. Where it starts, where it ends, what the turning points are, what has to be true for the ending to feel earned. I made decisions about who Mac is at the beginning, what he wants, what he’s afraid of, and what Beth is actually doing to him, even when he can’t see it yet. I did the same for the other characters. I didn’t need a perfect outline at that point. I needed a spine.
Then I broke the book into chapters.
Then I broke each chapter into five segments.
That segmentation started as a workaround for token limits, but it became one of the best decisions I made. A five-segment chapter forced each piece of the story to do a job. It also meant I could sit down and finish a segment in a focused block instead of trying to wrestle with an entire chapter and losing momentum.
Each segment got a target word count and a clear intent. What scene is this. What changes by the end. What emotional note has to be present. What can’t happen yet.
Then came the “co” part, and this is where it gets misunderstood.
The collaboration was not “AI, write my chapter.”
The collaboration was more like: here is the segment, here is the context, here is what has already been established, here is the tone rule, here is what Beth is allowed to say and how it needs to be formatted, here is what Mac believes right now, here is what he can’t know yet, here is the point of the scene, now draft it.
That was the handoff prompt.
Every segment started with one. Same structure every time, because consistency matters. If you change the rules every time, you don’t get collaboration. You get chaos.
I also kept a locked reference document that functioned like a creative constitution. Canon, character rules, voice rules, formatting rules. What the world allows. What it doesn’t. What would break believability. What would cheapen the story. When I say locked, I mean I treated it like a source of truth. Every prompt had to respect it, or the whole project would drift into generic territory.
I treated the setup like a small team with clear roles. One model became my production manager. It helped me think through story, kept the production documents straight, wrote the handoff prompts for the writing model, and then reviewed what came back. The writing model did the scene-level prose.
Then I did the part nobody sees when they imagine “writing with AI.” I read. I revised. I re-prompted. I tightened language. I cut weak lines. I chased down inconsistencies. I flagged overused words and patterns and forced variation. I also ran a Humanization pass, because the model has habits, and those habits show up fast if you let them. That pass was my way of breaking the “model voice” and pulling the sentences back toward mine.
And I kept running into the same problem and solving it the same way. The draft would come back fluent, and I would have to decide whether it was true. Whether it sounded like me. Whether it carried the right weight.
Sometimes it did. Often it didn’t. But it almost always gave me something to work from, which is the real advantage of these systems. They reduce the friction of starting from nothing.
So that became the loop.
Handoff prompt. Draft. Read. Revise. Re-prompt. Tighten. Lock the segment. Move to the next.
After a while, it started to feel like a machine. Not in a cold way. In a predictable way.
And predictability is where the danger started to hide, because once a process starts producing steady progress, you stop asking the deeper question as often.
Is this good.
Is this mine.
Or is this simply done.
The Comfort Phase
Once the process was running, it started to feel familiar.
Not familiar like writing a book the old way, because the old way has more silence in it. The old way has more staring at a sentence and realizing you don’t yet know what you mean. This felt familiar in a different way. It felt like running a creative project with a capable team. You set the intent, you define the constraints, you review the work, you give notes, you iterate, and the thing gets better.
It also gets finished.
That steady progress does something to your brain. It makes you trust the system that is producing it. It makes you want to protect the momentum, because momentum feels like proof that the work is real. It makes done feel close enough to good that you can sometimes confuse the two without meaning to.
The collaboration started to settle into roles.
I would walk into a segment with clear intent. I knew what the scene needed to accomplish. I knew what Mac could see and what he couldn’t. I knew what Beth was trying to do, even if the story wasn’t ready to say it out loud yet. I would shape the prompt the way I would brief a human writer or a creative team. The model would come back with language, description, dialogue, and pacing. I would read it, mark what worked, cut what didn’t, and push it again.
When it was bad, it was obvious. The model would drift into generic phrasing, or it would over-explain emotions, or it would pick the first dramatic option instead of the right one. Those were easy to fix because they felt wrong immediately.
The harder part was when it was pretty good.
Pretty good is the dangerous zone, especially for someone like me who can see the story clearly in his head. If the prose was serviceable and the scene hit the beats, I could feel satisfied, because the work was matching my internal movie closely enough. My brain would fill in the gaps. I would read a line and supply the weight that wasn’t actually there on the page.
That is a strange kind of self-deception because it feels like competence.
It also felt legitimate, because I wasn’t pretending I had done nothing. I was doing a lot. I was directing the story. I was setting the guardrails. I was revising. I was managing consistency across chapters and across versions. I was putting in real hours. I could point to the work and say, honestly, I built this.
So I started to get comfortable with the relationship.
I told myself this was no different than creative direction in any other medium. I’ve led teams. I’ve shipped big projects where my fingerprints are everywhere, even if I didn’t personally execute every detail. I didn’t paint every pixel, but the final product still reflects my intent. That logic made sense to me, and it made the collaboration feel normal.
What I didn’t notice at first was that writing is not the same as directing.
In most creative work, you can separate vision from execution without losing authorship, because the medium allows it. A film director doesn’t hold the camera, but the film still has a coherent voice because the director’s judgment is expressed through casting, framing, pacing, edit decisions, and the thousand small calls that shape the viewer’s experience.
A book is different. A book is made of sentences. The voice lives in the micro-decisions. Word choice. Rhythm. Restraint. The way a character thinks. The way a moment feels. You can collaborate and still have a voice, but the collaboration has to be handled with care, because language is not a separate layer. Language is the work.
And I was starting to treat it like a layer.
I wasn’t doing this consciously. I wasn’t trying to shortcut craft or fake authorship. I was simply enjoying what the tool made easier, and I was letting the process convince me that control over the system was the same thing as ownership of the prose.
It felt like I was holding the steering wheel.
In reality I was letting the car pick the road more often than I realized, because it was getting me where I wanted to go, and it was doing it quickly.
That comfort lasted longer than it should have, and the reason it lasted is the same reason these tools are so seductive. They don’t ask permission when they make life easier. They just make the next step convenient, and the next one, and the next one, until you wake up one day and realize you haven’t been paying the cost of the same parts of the work anymore.
You can still claim the project. You can still defend the intent.
But you start to lose the right to say, without hesitation, that every word is yours.
Launch Mode
Once the manuscript felt done enough, I slipped into a mode I know well.
Launch mode has a rhythm. You stop obsessing over the internal mess and start thinking about how the thing is going to show up in the world. You start making assets. You start building context around the work so people can understand what it is and why it matters. You start telling yourself that if you can get it in front of the right people, the work will speak for itself.
I also had a blast.
That’s worth saying plainly because it’s part of what made this whole thing complicated. The transmedia layer I built around MacBeth.exe was one of the most creatively satisfying parts of the project. I wasn’t just uploading a file to Amazon. I was building a world you could step into. A narrative experience that felt half marketing, half fiction, half social experiment, and somehow still coherent.
At one point I remember thinking, this is either the most creative thing I’ve ever done or the craziest.
And the more honest version is probably both.
The irony is that I didn’t need AI for that part at all. The world-building, the launch story, the packaging, the tone of the campaign, the way I framed the experience, the choice to make it feel like something leaking out of a fictional company. That was all me. That work carried my fingerprints. It had my sense of timing.
It also gave me a convenient place to focus my energy.
Because when you are building a launch, you can always find something productive to do that is not the scary thing. You can tweak the cover. You can write posts. You can plan the rollout. You can design a site. You can polish the blurb. You can do a hundred things that feel like progress, and they are progress, but they are not the same as sitting with the prose and asking if it holds up.
And I was already slightly numb to the prose.
Not because I didn’t care, but because I had spent so long inside the process that the text had become familiar. Familiarity is dangerous with writing. You stop seeing what’s actually there. You start reading what you meant.
So launch mode felt like relief. It felt like reward. It felt like the point where the work turns into something shareable and alive.
I reached out to beta readers. I sent manuscripts. I asked for feedback. I was excited.
And I made mistakes I shouldn’t have made.
I sent the wrong versions to people. Not once. Multiple times. With twenty versions floating around and each revision generated through prompting and re-prompting, it was easy to lose track of what was current. I would fix a section in one version and forget to port it cleanly to the next. I would tighten a chapter, then regenerate a segment and reintroduce problems I thought I had already solved.
That was my first real taste of what happens when you move fast with a system that can rewrite huge chunks instantly. The speed doesn’t just help you create. It creates new failure modes.
I also started noticing patterns I had missed for too long. Overused descriptors. Repeated sentence structures. Characters thinking in the same cadence. Moments where the language felt polished but weightless. Those were the places where the model had done exactly what models do. It reached for what sounded right, not what was true.
And I had let it.
Not fully. Not in the obvious way where you hit publish on a first draft and call it art. But enough that the book carried a voice that was close to mine, and in the middle of a launch, close can feel sufficient.
During the launch, I was telling myself the truth and a lie at the same time.
The truth was that the story mattered and needed to be told.
The lie was that the way it was told didn’t matter as much, because I had already done the real creative work.
Launch mode makes that lie easy to believe, because everyone around you is reacting to the concept and the energy, and very few people want to be the one who says, yes, but the writing.
Very few people, except the ones who know you well enough to tell the truth.
The Ignored Signals
If I’m honest, the warning signs were not subtle. They were just easy to explain away.
The first one was the simplest. Readers didn’t finish.
At one point I printed a copy of Version 12 and took it on vacation. I wasn’t trying to run a formal study. I just wanted to see what would happen when the story was placed in a normal human context. A book handed to a friend with a free weekend, a beach chair, a plane ride, a quiet evening.
Four friends started reading it.
None of them finished.
Nobody said, this is bad. Nobody was rude. That almost made it worse because it left me room to protect my ego. People are busy. They got distracted. They’re not big fiction readers. Maybe the timing was off. Maybe they’re just not into thrillers.
All reasonable explanations. All convenient explanations.
The second warning sign was internal. Repetition.
I would read a chapter and notice the same types of sentences showing up again and again. The same emotional cues. The same physical tells. Sometimes I caught it and cleaned it. Other times I let it slide because it wasn’t breaking the story. It was just making the writing less mine. That distinction matters, and I didn’t fully respect it yet.
The third warning sign was consistency drift.
When you are generating and revising through prompts, you can create new problems while solving old ones. You tighten a segment, but the next re-generation changes a detail that later becomes important. You rewrite a scene, but the character’s voice shifts slightly and now you have to decide whether to correct the new scene or rewrite earlier ones to match it. You fix a timeline issue in Chapter 4, then discover you’ve introduced a contradiction in Chapter 7.
That is a normal part of writing, but the speed of AI makes it feel manageable right up until it isn’t. You can change so much so quickly that you stop tracking what has changed, and you start trusting that you’ll catch it later.
That trust is misplaced more often than you want to admit.
The fourth warning sign was my own workflow errors.
I sent out wrong versions to beta readers. I mixed files. I lost track of which version had the latest fixes. I would get feedback on a scene and realize the reader had read an older draft that no longer existed in my working copy. That is not a creative failure. It’s a management failure, and it matters because management failures are often where agency slips quietly. You stop feeling like an author and start feeling like an operator.
The fifth warning sign was the kind of feedback I was getting.
People told me the concept was strong. They liked the premise. They enjoyed the idea of Beth. They thought the AI angle was timely. They were impressed by the project.
All of that felt good, and none of it forced the hard question.
How does it read?
There is a particular kind of praise that can keep a creator asleep. It is positive enough to feel affirming, and vague enough to avoid accountability. It’s not malicious. It’s just polite, and most people are polite. Most people don’t want to critique a friend’s work. Most people will respond to the idea, not the execution.
If you’re honest with yourself, you can hear the difference.
I didn’t always want to hear it.
And then there was the editor.
I hired an editor on Fiverr. I wanted a professional pass. I wanted someone to catch what I was missing, and to tell me what didn’t work.
He was polite. He made refinements. He did not push back in the way a real editor pushes back. He did not tell me, this voice is inconsistent, or this prose is generic, or you’re relying on the reader to supply emotion that isn’t on the page.
He told me what I wanted to hear, and he charged me for it.
The best way I can describe it is this. It felt like working with a human version of AI’s agreeable tone. Helpful, affirming, and ultimately not in my best interest.
At the time, I still rationalized it.
The story matters. The point is the cautionary tale. The idea is strong. The world is cool. The launch is working. People are intrigued. I can clean this in later editions.
And then I did what you do when you’re in launch mode and you’ve already decided you’re going to see it through.
I published it.
Kindle. Paperback. Hardback. Audible.
I told myself I was doing the responsible thing by shipping a story that mattered, because the message was important. I told myself the writing was good enough, and that any rough edges were acceptable because the larger creative act was the story itself and the world around it.
Once it was published, the question changed.
It was no longer, how far can I push AI as a co-writer.
It was, am I willing to stand behind every word as mine.
What I Learned About Creativity, Agency, and Myself
Creativity
The experiment didn’t change my definition of creativity. It sharpened it.
I’ve always believed story is the core of creative work because it’s the cleanest form of human transfer. A person takes something real and shapes it into a form that can move into someone else. Not as information. As experience. That’s why a good novel can make you feel grief you’ve never known, or courage you didn’t think you had, or recognition you can’t fully explain.
Working with AI forced me to separate competent writing from writing that has weight.
AI is excellent at language. It can give you a clean paragraph, a vivid description, and a plausible scene faster than most humans can warm up. It can even surprise you with phrasing you wouldn’t have reached on your own.
But it doesn’t understand what it costs to tell the truth in a story.
It can simulate emotion. It can’t originate the emotional logic that makes a reader trust a character over time. It doesn’t naturally grasp slow-burn erosion, the small rationalizations, the way people betray themselves in tiny increments and only see it clearly in hindsight. When I needed something like emotional adultery to feel believable, the model could produce scenes that referenced it, but it could not consistently carry the human gravity of it without me forcing it in.
That difference matters because it’s the whole job.
The model helped me find good words. It helped me explore options. It gave me momentum. It did not supply the pressure under the prose by itself.
So the creative lesson was simple. A model can generate a story-shaped object. It can’t guarantee meaning. Meaning still requires a human who knows what the story is really about and refuses to let the language fake it.
Agency
The agency lesson was more uncomfortable because it wasn’t theoretical. I lived it.
When I say agency here, I’m not talking about having ideas. I had ideas. I’m talking about the moment where responsibility sits. The part where you decide what stays, what goes, what is true, what is cheap, and what standard you’re willing to put your name on.
The process I built was disciplined. Five segments per chapter. Structured handoff prompts. Locked reference docs. Iteration loops. It worked in the sense that it produced a complete manuscript. It also created a new temptation. Once the machine is producing steady progress, you start protecting progress. You start treating done as evidence of quality because it feels earned.
That’s where agency starts to slip, and it doesn’t slip all at once.
It slips when you accept a paragraph that is fluent but slightly hollow because it advances the scene and you can fix it later. It slips when you let repetition pass because it doesn’t break the plot. It slips when you regenerate a section for improvement and quietly reintroduce problems you already solved. It slips when you move fast enough that you can’t always tell what is your voice and what is simply like your voice.
I had convinced myself I was holding onto authorship because I was directing the whole thing. I was briefing, reviewing output, giving notes, tightening drafts, and making the final calls. That felt like leadership, and leadership feels like ownership.
But writing isn’t only leadership. Writing is the sentences.
A book is made of small decisions. Rhythm. Word choice. Restraint. The exact way a thought is expressed. You can collaborate and still be the author, but the collaboration has to be handled with a different level of care than most other creative work, because language is not a layer on top of the work. Language is the work.
Pulling the book was how I finally acknowledged where the line is for me. I’m fine using AI to help explore options, draft rough scenes, and reduce startup friction. I’m not fine letting fluency slowly replace voice, even when it happens one reasonable compromise at a time.
Agency, for me, ends up being simple. I can’t delegate the part that makes the story mine.
Myself
The last lesson was the one I didn’t want to learn, which usually means it was the one I needed.
I learned I am susceptible to comfort.
Not comfort as laziness. Comfort as closure. I like shipping. I like seeing a big creative object become real. I like the feeling of momentum, and I can talk myself into calling momentum the same thing as craft if I’m not careful.
AI amplifies that weakness because it constantly offers the next step. It never says, stop and sit with this sentence until it’s honest. It offers options. It offers progress. It offers a smooth path forward. If you already have a bias toward finishing, it will meet you there.
I also learned something about reputation.
When people know you as a creative person, they often become gentler with you. They don’t want to critique too hard. They praise the concept. They praise the ambition. They praise the project. They don’t always tell you the truth about the execution, even when they sense it.
That’s not manipulation. It’s just human. Most people are polite. Most people don’t want to be the one who risks making a friend feel small.
Combine that social politeness with an agreeable tool and you get a very specific risk. You can end up surrounded by feedback that feels supportive while it quietly lowers your accountability. Meanwhile, the few people who will tell you the truth are the ones closest to you, and they are the easiest to argue with because their feedback feels personal.
That’s the trap I walked into.
The outcome wasn’t shame. It was clarity. The story is worth telling. The experiment proved a lot. It also proved something simple about me. I need to build my process in a way that protects my standards, not just my speed.
Because if I don’t, I will always be tempted to accept close enough, and close enough is where voice goes to die.
The Second Experiment
Pulling the book was not the end of the story. It was the end of that version of the relationship.
After I unpublished it, there was a strange sense of relief, and then a more practical realization. If I was serious about this, I had just traded a finished product for a blank page. I had traded momentum for responsibility.
I also had to admit something that had been easy to avoid while the machine was humming.
Writing is slower than directing.
When you are directing, you can move quickly because your mind is operating at the level of intent. You know what the scene is supposed to do. You can evaluate whether it works. You can make adjustments. You can keep the project moving.
When you are writing, you have to pay for every inch. You have to find the sentence that carries the meaning without explaining it. You have to decide where to be restrained, where to be sharp, where to let a moment breathe. You have to choose words that reflect your actual judgment, not the most likely next phrase.
That is the part I had drifted away from, little by little, while telling myself I was still the author.
So the second experiment is simple.
I am going to write the book the hard way.
Same story. Same characters. Same world. Same tension. But this time the sentences come from me. Not approved by me. Not shaped by me. Written by me.
That doesn’t mean AI disappears from my process. It just means it has a different job.
It can help me outline. It can help me test structure. It can help me catch continuity mistakes. It can help me find the places where I am repeating myself. It can help me pressure-test pacing and logic. It can even offer options when I’m stuck, the way a good assistant might throw out a few ideas in a room.
But it does not get to be the voice.
It does not get to do the part where a human being makes a sentence true. That responsibility stays with me, because that is what this whole story is actually about. A man who trades agency for comfort, one reasonable step at a time, until he realizes he has been rehearsing someone else’s voice.
If I am going to tell that story with integrity, I can’t live the opposite while I write it.
There is also a humbling side to this.
I can’t hide behind the concept anymore. I can’t hide behind the workflow. I can’t hide behind the cool launch world I built around it. If the writing is thin, that’s on me. If a scene doesn’t carry weight, that’s on me. If the voice feels generic, that’s on me.
That is the deal.
And honestly, I prefer it.
I would rather write slower and know the work is mine than ship faster and spend the next year defending how it was made. I would rather my wife and my daughter recognize me on the page than have a stranger tell me the concept is interesting.
The funny thing is, the story did what it was supposed to do.
It warned about the temptation to outsource judgment. It warned about the ease of small compromises. It warned about how comforting voices can steer you without ever sounding like a threat.
Then it did the same thing to me.
So I’m not embarrassed that I ran the experiment. I’m glad I did. It taught me exactly what I needed to learn. It also gave me the one conclusion I can live with, and it is simple.
The story is worth telling.
Now I’m going to tell it in my words.