
🔍 Designing with Possibilities: The Human Advantage

The AI conversation often splits into extremes: techno-optimism on one side, existential panic on the other. But what if the most interesting space is in between?

This is where I found myself in conversation with Ellen Muench, Desklight’s Director of Learning Innovation & Strategic Projects. She doesn’t approach AI with blind enthusiasm or resistance, but with thoughtful curiosity. We explored what it means to design learning in a world where AI is not just present, but evolving rapidly.

At Desklight, we often ask:

What do we want learning to feel like? To lead to? To build?

This article is an invitation into a conversation about what AI makes possible in learning, and how to keep the human at the center of it all.

Possibility Over Panic

For Ellen, a central concern is not that AI will take our jobs, but that it might erode our ability to think for ourselves.

“I have a real paranoia about the risks of cognitive offloading,” she shared. “I really want AI to augment and accelerate thinking and creativity, not displace it.”

While many worry about AI replacing the human element, Ellen reframes the conversation:

What possibilities does AI enable?

Yes, AI offers efficiency, but more importantly, it can free up time and energy for what truly matters:
Human connection. Creative inquiry. Critical thought.

This reframes the conversation. The goal isn’t simply to do the same work faster. It’s about being deliberate in how we use these tools, using them to offload low-value tasks and create space for the work only humans can do: meaningful learning, deep connection, and activating our powers of judgment. When we do that, we can reclaim some of the joy of thinking.

So the question becomes: How do we use AI to augment our thinking, not automate it?

From Meta-Cognition to Joy

Ellen uses generative AI in her day-to-day work, but always as an active partner. She prompts it to ask her metacognitive questions that stretch her thinking, such as:

  • How might I approach this differently?
  • What would someone with an opposing view say?
  • Where might my assumptions be getting in the way?

For Ellen, AI isn’t just a tool to speed up output.
It’s a prompt for deeper reflection.

“In a way, AI is like Isaiah Berlin’s fox. It ‘knows’ many things, and I can use it to quickly surface the starter information I need. But it’s important to remind myself that it is a starting point, not a destination.”

Ellen Muench

She often uses AI like a sparring partner in brainstorming and research, quickly surfacing context, ideas, or patterns. When designing learning, she might describe a challenge and ask AI to generate variations, like riffing on a pattern in design thinking. But choosing the right idea still requires deeply human judgment.

This reminded me of something we explored in our last article with Chris Huizenga; check it out:
Designing for Human Growth in the Age of AI

“Instead of powering another content-delivery platform, AI can be something like a metacognitive mirror,” she explained. “An AI coach that analyzes your work and asks probing questions, making your implicit process explicit. This kind of interaction fosters agency, not just passive information consumption or the dreaded ‘copy/paste syndrome’.”

Ellen brings that idea to life, using AI not to shortcut her thinking, but to challenge it.

In this way, when used with intention, AI becomes a tool to train the muscles of curiosity and reflection.

Designing the Learner, Not Just the Tool

“As Anne-Marie Willis says: ‘We design our world, and our world designs us back.’ The way tools for learning are designed has impacts beyond their core functionality. To design a platform is to design the learner.” — Ellen Muench

Ellen referenced the idea of ontological design to highlight that EdTech is never neutral. It shapes not just outcomes, but learners themselves.

When we build a tool, we’re also encoding a vision of what a learner is and what success looks like.

“Are we thinking enough about what kinds of learners our tools are designing?” she asked.

“It seems important to ask: Whose vision of ‘good’ is being encoded?”

This is why she is concerned about creating an algorithmic monoculture. Relying on a single type of AI model to assess or guide learners can magnify biases and create invisible bottlenecks. An AI optimized for one style of writing, for instance, can become a barrier that limits opportunity pluralism.

When systems standardize too much, they narrow possibilities.

This shaping defines a learner’s ‘way of being’ (e.g., collaborator vs. data point) and ‘way of desiring’ (e.g., intrinsic curiosity vs. extrinsic badges). This is where ethical design comes in: creating environments that invite exploration and choice, not just compliance.

Be Clear on Your Goals

This isn’t to ignore other real concerns.
Ellen named several: data privacy, algorithmic bias, and academic integrity.

“AI isn’t neutral,” she said.
“What assumptions is it making when it helps us design?”

This is why clarity of goals is so essential.

Before hitting “generate,” Ellen recommends pausing to reflect:

  • What are we optimizing for?
  • What do we care about most?
  • What is the human need behind this work?

“Be clear with your team about what your goals are,” Ellen said. “And then experiment from there.”

That clarity is strategic, not technical.
It starts with values, outcomes, and empathy for learners, and builds forward from there.

Guiding Thoughts for a Changing Landscape

Learning isn’t just cognitive. It’s emotional.

And as AI evolves, the human dynamics of learning (trust, vulnerability, and permission to fail) become even more essential.

“We have to talk with our teams,” Ellen said.
“Where are they on this? What are they afraid of? What are they excited about?”

These conversations matter, not just to reduce fear, but to shape how teams will show up, create, and learn with AI.

That act of pausing to ask, to listen, to co-design, is itself a way to push back against the erosion of critical thinking.

These are questions learning leaders must ask not just of their teams, but of themselves.

To navigate this moment with clarity, Ellen shared some of the principles she is exploring in her own work:

  1. Start with the Right Questions

Before using AI, clarify your goals. What are you optimizing for: efficiency, reflection, insight?
Ellen often returns to Neil Postman’s question:

“What is the problem to which this technology is the solution?”
And the follow-up: “What new problems might it create?”

  2. Aim to Augment, Not Automate

Use AI to automate low-value tasks so you can focus on deeply human ones: mentoring, exercising judgment, building trust.
Michelle Weise calls these Human+ skills, the work that can’t be outsourced.
This shift means evolving from content creator to system architect: designing prompts, feedback loops, and pathways, not just lessons.

  3. Design for Pluralism, Not Just Efficiency

Resist the allure of a single solution. Creating multiple pathways for success matters. This means seeking a diversity of tools to avoid creating algorithmic bottlenecks.

“What kind of learning ecosystem are we building?
Does it foster curiosity or compliance?”

  4. Make Space for Human Emotion

Learning isn’t just cognitive; it’s emotional. Ellen stressed the need to talk with teams about their fears and excitement. Creating psychological safety for people to explore, fail, and ask questions is essential for genuine innovation. Learning happens when emotional permission is present.

  5. Foster Agentic Engagement

Design systems where AI becomes a tool for self-directed inquiry. The goal is to help learners ask better questions of themselves and become the active drivers of their own growth.

The Human Endeavor

There’s one idea I keep circling back to after talking with Ellen:

What if AI is here to help us delegate what doesn’t need to be human, so we can return to what does?

Things like:

  • Wonder
  • Leisure
  • Critical thinking
  • Spiritual evolution
  • Creativity for its own sake

What if AI is not here to replace us, but to liberate us? To challenge us to be more intentional about what is uniquely human?

To question.
To imagine.
To create.
To be.

Maybe the next revolution isn’t industrial or digital; it’s human.
A revolution of being, led by creativity, joy, reflection, and connection.

If so, it’s one worth designing for.

“The use and application of AI should remain human-centered.”
— Ellen Muench

And maybe that’s the heart of it.
Not what AI can do, but what it frees us to become.

A Small Experiment

Let’s close with a small prompt, a little spark:

What’s one low-value task you could delegate to AI this week to create space for something that brings you joy, meaning, or connection?

This is how it starts:
Small experiments.
Tiny reclaimings of time.
Micro-moves toward a more human future.

The tools are here.
The questions are waiting.

Let’s design from possibility.

Big thanks to Ellen Muench for a conversation that sparked this reflection.

Let’s keep asking better questions and designing for better answers.
