
The World We Mean: Why Creative People Need to Engage AI Thoughtfully

  • Writer: Scott Simpson
  • 3 days ago
  • 3 min read

Updated: 22 hours ago

For the last three years, I’ve been quietly experimenting with artificial intelligence.

Not as a tech evangelist. Not as someone interested in replacing human creativity. And certainly not as someone convinced that machines are becoming people.

I came to these tools the same way I’ve come to most creative practices in my life: curiously. As a writer. A poet. A songwriter. A producer. Someone interested in process, meaning, rhythm, image, and the strange ways ideas emerge when we stay with them long enough.

A great deal of this exploration has also grown out of ongoing conversations with my lifelong friend and thought-partner, Mark Baldridge, through our podcast Faithfulcrum. Over the course of the show’s second season, we found ourselves returning again and again to questions surrounding creativity, spirituality, meaning-making, artificial intelligence, and human identity. Those conversations became less about technology itself and more about what technology reveals about us.

We talked about how meaning is actually made. About the benefits and dangers embedded within artificial intelligence systems. About the ways human beings have always collaborated with process, accident, juxtaposition, and unpredictability in creative work.

Long before generative AI appeared, Mark and I were already experimenting with random language fragments, collaborative writing exercises, image association, cut-up techniques, storytelling prompts, improvisational structures, and game-like creative processes designed to produce unexpected connections and ideas. Looking back now, it feels as though those years of experimentation uniquely prepared us for engaging AI creatively without immediately mistaking it for consciousness or magic. We were already interested in what happens when structure and surprise collide. AI simply intensified the conversation.

At first, my own experiments with generative AI were fairly simple — working with transcripts, unfinished lyrics, fragments of journals, songs, images, and reflections. But very quickly the experimentation became something deeper and more personal. The questions stopped being primarily technological and became philosophical, spiritual, and creative.

What actually makes something meaningful?

What is uniquely human in the creative act?

What happens when a machine can imitate patterns of language, style, image, or emotional tone?

And perhaps most importantly: how do we continue creating in ways that preserve human integrity, attentiveness, responsibility, and soul?

Those questions became the foundation for my upcoming book, The World We Mean: Creativity, Spirituality, Artificial Intelligence, and Human Identity.

The book grows directly out of my own wrestling with these tools. I’ve felt both wonder and concern. I’ve experienced moments where AI opened surprising creative pathways, helping me notice patterns, connections, and possibilities I might otherwise have missed. I’ve also felt the tension of trying to use these technologies ethically and honestly without surrendering the deeply human parts of creativity that matter most to me.

As an artist, I’ve never believed fear is a particularly good creative posture.

Every generation of artists encounters new tools. Cameras. Synthesizers. Recording software. Film editing. Sampling. Digital painting. Word processors. Photography itself was once viewed as a threat to “real” artistry. So was electronic music. So was every tool that changed the relationship between imagination and craft.

What concerns me now is not that artists are experimenting with AI, but that so many thoughtful artists, writers, musicians, and educators are avoiding the conversation altogether out of fear. It seems to me that creative people should be among the ones most actively engaging these tools — testing them, questioning them, resisting their worst tendencies, and discovering ways they might serve genuinely human ends rather than flatten them.

If these systems are going to shape culture — and clearly they already are — then artists need to remain part of shaping the posture with which they are used.

For me, that posture has less to do with efficiency and more to do with attention.

The book explores that idea through essays, images, reflections, and poetry. It moves through questions of creativity, spirituality, story, identity, meaning-making, simulation, trust, and collaboration. But it’s also intentionally practical. Throughout the book, readers are invited into experiments of their own: creative prompts, reflective practices, collaborative exercises, and opportunities to engage AI tools thoughtfully while remaining grounded in human awareness and intentionality.

I’m not interested in offering simplistic answers about whether AI is “good” or “bad.” Those conversations usually collapse into caricature pretty quickly. I’m more interested in the deeper human processes already underneath all of this: how we make meaning, how we shape identity, how tools shape us in return, and how we remain awake to what is happening within us while we create.

In many ways, this book is less about machines than it is about people.

It’s about the stories we tell. The worlds we construct together. The patterns we participate in. The attention we give. The meanings we make.

The World We Mean will be available in late May, and I’m looking forward to finally sharing it with readers after living inside these questions for quite a while now.



Copyright 2026, Scott Simpson
