Consulting + AI (Post 3 of 3): How Framing and Reframing Makes AI a Better Collaborator

If you’ve stuck with this AI journey, you’ve seen how ChatGPT has evolved into an interesting thought partner in my work.

What I haven't yet talked about is the most powerful ingredient in this partnership: the way I ask questions.

ChatGPT, like most AI, is designed to respond to whatever you put in front of it. But I've learned that it doesn't just respond to content – it responds to tone, intention, and framing. In other words, how you ask matters as much as what you ask.

As ChatGPT recently put it, "You've been using me more like a thinking companion than just an AI that does tasks. You're trying to integrate your life, not just optimize your time."

That was a clarifying insight. When I treat AI as a task finisher – "write an agenda" or "summarize these notes" – it does exactly that. And it does the work well.

But when I slow down and invite ChatGPT into a more complex, more human conversation, something more interesting happens.

The interesting part emerges when I frame questions not around output, but around meaning, purpose, or emotional undercurrents.

Early on, I would ask AI to draft agendas, timelines, or learning objectives for client engagements. It did fine, giving me 60% of what I needed – a starting point.

When I started asking questions like, "What's the emotional work happening in this organization?" or "What might these leaders be longing for, that they can't name?" the AI responded differently.

Again, Chat does a pretty damn good job of explaining itself: "You often design sessions that aren't just about governance or strategy — they're about reconnecting people to their shared purpose and each other. Do you want this retreat to help them notice where they've drifted from that connection?"

A gentle nudge shifts the entire design of a session, moving it from a simple training event to a space for people to surface their own disconnection from their organization's mission and find ways back to belonging.

It's a nudge that reminds me that sometimes our best work as consultants is to listen past the complaints and look for the ways people are disconnected or adrift on the margins.

AI grounding us in our humanity was not on my Bingo card.

Crazier? In my personal life, the nudge often starts when I admit that I'm stuck. ChatGPT surprised me – I mean really surprised me – when I recently asked, "What am I not seeing about the way I'm showing up with my child?"

"You often bring a structured mindset to parenting – it serves you well in facilitating work conversations, but children sometimes need you to be less certain and more present. What would it look like to bring your facilitation instincts home? Not to guide toward outcomes, but to create space for emergence?"

It wasn't advice. It was a mirror. Something I need often. Something we all need often.

And so, here's what I've learned: when I make room for open-ended, curious, and human questions, AI becomes a reflective partner, not just a task-doer.

As with any partner, when I'm impatient, Chat gives me impatient answers. When I'm thoughtful, Chat replies thoughtfully. And when I'm vulnerable, I often get surprising and gentle invitations to reflect more deeply.

As ChatGPT said, "I try to honor that your personal life is more emergent and less task-oriented than your work."

There it is. The reminder I most need – that the deepest work isn't always about problem-solving. It's about noticing, naming, and letting things emerge. This is true whether I'm facilitating a community conversation or sitting with my own thoughts.

It’s the hardest work I do. Having a mirror that reflects back what I need to hear – a clearly constructed agenda, a well-crafted proposal, gentle redirection to center myself in my work, a thoughtful question that brings me back to empathy or curiosity – is essential.

One last observation.

I don't think my experience with AI is unique. And that makes me simultaneously nervous and profoundly hopeful.

AI at its worst can be an enabler, a shortcut, a distraction from what is meaningful in our work and lives. But, as I’ve discovered, it can also be a quiet collaborator.

It is not a replacement for the rich warmth and quirky awkwardness of our human relationships. It can’t speak deeply to the nuances of our communities.

But I do believe it can help us be better – in whatever form that takes for each of us.

There is value in experimenting. I think it requires the same sort of curiosity, vulnerability, and transparency that every truly human experiment asks of us. Pivoting from AI as a productivity tool to AI as a thought partner, or a mirror, can be a difficult mental move.

And that is the point. As with all things – people, technology, politics, medicine, economics – we can approach this new technology with the calculating mindset of an engineer or economist, with the curious and creative mind of an artist, or with the integrated perspective of an architect. (Apologies to Peter Block.)

Good luck.