Learning to See Differently: My First Steps with Midjourney
Michele Nagle

May 22
I didn’t come to AI art with a plan. I came with curiosity, confusion, and—if we’re being honest—some wildly chaotic image results that made no sense and definitely had too many fingers.
When I first opened Midjourney, I didn’t know what a prompt was supposed to look like. I didn’t know the difference between “v5” and “stylize 1000.” I just typed what I felt—sometimes a phrase, sometimes an emotion, sometimes a sentence that sounded like poetry and looked like a visual disaster when rendered.
It was messy. And it was magic.
Where Photography Met Prompting
What I didn’t expect was how much my photography background would shape the way I learned to prompt. I didn’t know the technical jargon of AI generation—but I knew how to see. I knew how light behaves. I understood mood. Composition. Timing. Texture. What to leave in the frame, and what to cut.
So even when the tools were new, the instincts weren’t.
I found myself writing prompts that weren’t just descriptive—they were emotional compositions:
soft side light through fogged glass
distant figure framed by shadows, gold light pooling on the floor
a face half-remembered, bathed in quiet blue
And the more I leaned into feeling instead of formula, the closer the images got to what I was chasing.
Learning the Hard Way (Also Known as: Prompting Like a Newborn Deer)
Here’s one of the first images I ever generated. And here’s the prompt that led to it:
“a woman standing in a dream, soft shadows, gold and blue emotion, like memory”
What I got back was… not quite that.

I wasn’t sure if she was melting, blowing up, or possibly dissolving into a carbon curtain. But it was strangely beautiful. And totally wrong. And kind of perfect.
I saved it anyway.
Letting the Machine Misunderstand You (At First)
In the beginning, the results were strange—sometimes laughable, sometimes uncanny. There were heads where there shouldn’t be heads. Doors that went nowhere. Hands that didn’t quite belong.
But every now and then, something came through. Something that felt like it was almost true. A half-formed dream. A memory that hadn’t quite happened yet.
I started saving those images. I started chasing that feeling. And slowly, the noise started to make sense.
Making Images from the Inside Out
This wasn’t about creating “cool AI art.” It was about learning to speak a visual language that could handle what I needed to say—things that were too internal, too layered, too soft or surreal for the camera alone.
I didn’t stop being a photographer when I started using AI. I just started photographing things I couldn’t point a lens at: Longing. Silence. Memory. Interior weather.
Next Post Teaser (Blog #3)
In the next post, I’ll share how I started pairing words with those images—and why I didn’t realize I was writing a book until it was already halfway done.
Until then, if you’re fumbling through something new and beautiful and strange—keep fumbling. That’s where the good stuff lives.
—Michele