Lately, I have found myself circling around the same question: what does it really mean for humans and AI to “dance” together in creative work, especially writing? We are well past the days when AI was considered a glorified spellchecker, or at most a patient thesaurus. Now we can invite AI to be a co-author, or at least a sparring partner, in the story-making process. But what does that partnership actually look like once you step past the buzzwords and the unending Twitter marketing hype and into the messy reality? Honestly, I am still collecting observations, just like so many others navigating this “fast-moving landscape.”

Whenever I kick off a new project with an AI tool, it feels a bit like managing a super-enthusiastic (but often hilariously literal) apprentice. Keen to help, but apt to take instructions a little too much at face value. The big question, then, is how to split up the work so we’re not constantly tripping over each other’s feet. That’s where I found myself grateful for a framework from Haotian Li et al. at CHI ’24. Think of it like sketching out the stations in a creative kitchen: who is prepping the ingredients, who’s doing the actual cooking, who handles plating, and who gets to taste-test at the end? This lens helped me start mapping out where AI fits (and where it sometimes gets in its own way) in the collaborative storytelling process.

They break down data storytelling (and honestly, this applies to most collaborative narrative work) into four main stages:

  1. Analysis: Digging through data to unearth those juicy, unexpected insights.

  2. Planning: Shaping those findings into a storyline, a bit like outlining a novel’s plot.

  3. Implementation: Actually creating the stuff: words, visuals, maybe even multimedia.

  4. Communication: Sharing the finished product with the world.

And both humans and AI can play a variety of roles at each stage:

  1. Creator: Doing the heavy lifting, building things from scratch.

  2. Assistant: Supporting the creator; think of AI proposing ideas, fetching facts, or automating the boring bits.

  3. Optimizer: Taking what’s already there and making it better, smoother, clearer.

  4. Reviewer: Offering feedback and catching what everyone else missed.

What is striking is that most tools sit in either a “human-creator, AI-assistant” pattern (we lead, AI helps) or occasionally flip that to “AI-creator, human-optimizer” (AI drafts, we polish). It makes total sense: we haven’t yet figured out how to hand over the reins to AI for the most complex creative leaps, at least not without risking our own voice getting drowned out. As Li and team point out, balancing human direction with AI autonomy is a real tightrope act. (And if you dig into their paper, you will see this isn’t just a gut feeling; it’s mapped out across dozens of tools.)
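To make that division of labour concrete, here is a minimal Python sketch of the stages and roles as I read them. The per-stage assignments in the second pattern are my own illustrative guesses about where each role might land, not mappings taken from Li et al.’s paper.

```python
from enum import Enum

class Stage(Enum):
    ANALYSIS = "analysis"
    PLANNING = "planning"
    IMPLEMENTATION = "implementation"
    COMMUNICATION = "communication"

class Role(Enum):
    CREATOR = "creator"
    ASSISTANT = "assistant"
    OPTIMIZER = "optimizer"
    REVIEWER = "reviewer"

# Pattern 1: "human-creator, AI-assistant" at every stage (we lead, AI helps).
human_led = {stage: {"human": Role.CREATOR, "ai": Role.ASSISTANT} for stage in Stage}

# Pattern 2: "AI-creator, human-optimizer" at the drafting stage (AI drafts, we polish).
# The assignments outside implementation are my own guesses, purely for illustration.
ai_drafts = {
    Stage.ANALYSIS: {"human": Role.CREATOR, "ai": Role.ASSISTANT},
    Stage.PLANNING: {"human": Role.CREATOR, "ai": Role.ASSISTANT},
    Stage.IMPLEMENTATION: {"human": Role.OPTIMIZER, "ai": Role.CREATOR},
    Stage.COMMUNICATION: {"human": Role.REVIEWER, "ai": Role.ASSISTANT},
}

for stage, roles in ai_drafts.items():
    print(f"{stage.value:>14}: human={roles['human'].value}, ai={roles['ai'].value}")
```

Nothing deep is happening here; it just forces me to say, stage by stage, who is actually holding the pen.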

Is This Just Task Automation, or Real Co-Creativity?

So, is collaborating with AI just about offloading the boring stuff, or is there a shot at something genuinely creative here? That’s a question Anca Serbanescu wrestles with in her work on “co-creativity.” Her take: true co-creativity isn’t just AI automating tasks; it’s a back-and-forth exchange, a kind of creative jam session. She describes it as “...the dialogue between human and AI...leads to creative results that emerge from the interaction, not merely from automating sub-tasks.” Think of it like the difference between using a calculator for maths versus riffing with another musician. One is efficient; the other is alive.

This is a question I keep wrestling with in my own work. When I get stuck, I will sometimes throw rough ideas at a generic LLM and see what comes back. Does it nail my intention? Not exactly; it often reacts more like an advanced autocomplete with quirks. But I am also often surprised by the unexpected connections or odd suggestions that shake me loose from creative gridlock. As Serbanescu points out, most current AI tools are still tuned for efficiency or story assembly, not for that richer, back-and-forth kind of creative sparring I am after. Closing that gap, making creative partnership with AI feel like a genuine dialogue and, dare I say, a true jam session, is something I keep trying to crack.

Making Our Worldviews Mesh with AI’s Logic

If we really want this human-AI narrative dance to work, we have to understand how our worldview-laden perspectives interact with the AI’s straight-up, data-driven approach. Enter Kaira Sekiguchi and Yukio Ohsawa’s “Hierarchical Narrative Representation (HieNaR)” framework. Imagine building a story like stacking blocks in a tower; there are twelve levels, from the tiniest particles (characters, strokes) up to words, sentences, and the whole narrative structure. They also break information into three states: raw data, descriptive explanations, and finally, patterned narratives.
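Here is a toy Python sketch of how I picture that hierarchy. The level names below are a paraphrase of the few the framework explicitly calls out (it defines twelve in total), and the example story elements are mine, purely for illustration, not anything from the paper.

```python
from dataclasses import dataclass
from enum import Enum

class State(Enum):
    # The three information states: raw data, descriptive explanations, patterned narratives.
    DATA = "raw data"
    DESCRIPTION = "descriptive explanation"
    NARRATIVE = "patterned narrative"

# A handful of HieNaR-style levels, smallest unit first. The full framework has
# twelve; these labels are my shorthand, not the authors' exact terms.
LEVELS = ["character", "word", "sentence", "narrative structure"]

@dataclass
class StoryElement:
    level: str      # where this element sits in the hierarchy
    state: State    # how far it has been shaped into narrative
    content: str

# My worldview enters at the top of the tower; the AI mostly supplies material lower down.
draft = [
    StoryElement("narrative structure", State.NARRATIVE, "a rags-to-riches arc"),
    StoryElement("sentence", State.DESCRIPTION, "sales doubled after the relaunch"),
    StoryElement("word", State.DATA, "2.1x"),
]

for element in sorted(draft, key=lambda e: LEVELS.index(e.level)):
    print(f"{element.level:>20} [{element.state.value}]: {element.content}")
```

Again, just a sketch; the point is that my high-level blocks and the AI’s low-level ones have to stack into the same tower.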

What really grabbed me is their focus on the cyclical process: our high-level intentions (“worldviews,” creative hunches) keep bumping up against the AI’s low-level data representations. We might have a grand architectural vision, but it’s the AI that helps us find, shape, and stack the bricks. The catch: our vision has to keep steering the ship, or else the AI’s logic can bulldoze right over what makes a story feel human (I once made an incoherent Twitter thread about this). Sekiguchi and Ohsawa highlight just how crucial it is to preserve human autonomy, especially as LLMs get more powerful and more persuasive. Creative conflict isn’t a bug; it is where the magic happens. (For what it’s worth, this reminds me of the “Jagged Frontier” I have read about before: AI has wild peaks of brilliance and surprising valleys of oddness, so we need to stay somewhere between pilot and co-pilot.)

Where Do We Go from Here?

I would love to say I have clear answers, but the truth is we are all writing this “user manual” as we go. The frameworks from Li, Serbanescu, Sekiguchi, and Ohsawa offer valuable perspectives on how to approach creative collaboration. They act as signposts, showing us how to slice up the work (creator, assistant, optimizer, reviewer) and why genuine co-creative dialogue matters more than ever.

What I do know is that the most interesting, and honestly the most fun, results come from those willing to experiment and to treat AI tools not as mere instruments but as partners in creativity. There is an excitement here, a possibility that is new and experimental, waiting to be seen and experienced. That excitement is not about replacement, but about crafting worlds and insights that neither human nor AI could dream up alone.

So, here is my advice, to myself and to anyone else venturing into these creative waters: dive in, play, break things, and see what happens. Bubble or not, the story of AI is still very much unfinished, and the invitation is out to help write the next chapter.