3 Comments
Michael Bailey

Of course, halfway through reading this, I wanted to Google Loab. That's how it happens, isn't it? I didn't; I waited until I got to the end. Which wasn't worth it, really, because once you have an image, it's just endlessly reiterated in very anodyne ways. The picture you have at the end is about the best there is: a synthesis of the Virgin Mary, Mary Magdalene, the Grateful Dead's bones-and-roses cover, and the dead mother upstairs in the Bates Motel. That's revealing: an image stuck in the 1970s, or just before, with Catholic imagery overtones. All it really reveals is how culturally narrow the AI training pool actually is. It makes me curious to see what version of the ghost in the machine the Chinese DeepSeek produces.

Postmodern Iconoclast

As for “Chinese DeepSeek”, I’m genuinely curious too. Different training data, different densities, different cultural attractors. If Western models slide into Catholic hauntology, what does a differently trained model slide into? State realism? Socialist iconography? Entirely different uncanny archetypes?

It would prove that every model carries the sediment of its culture, and that when you push hard enough against the surface, the archive shows through. It's early days, but I've hit on something deep within these systems, which I'm exploring both visually and philosophically. I'm only scratching the surface, but I'm aware I'm at the very edge of what I'm calling Conceptual Latent-Space Minimalism: a new art form that pushes these models, through restrictions, to create images, video, and music that aren't images, video, or music, in the conceptual-negation sense of pushing models to do the exact opposite of what their training points toward.

Postmodern Iconoclast

You’re absolutely right: once you’ve seen “an image of Loab,” the repetition becomes anodyne. She collapses into iteration. That’s the irony. The myth is more interesting than the JPEG.

And yes, that late-70s Catholic overhang you’re spotting? That’s not an accident. You’re seeing the archive. The Virgin/Magdalene echo. The bones-and-roses aesthetic. The psycho-thriller residue. That’s dataset hauntology, not demonic emergence. 😁 It tells you less about a ghost and more about what decades of Western visual culture look like once compressed into latent space.

But here’s where I’d push back gently on the “culturally narrow training pool” angle.

What Loab actually reveals isn’t just narrowness; it’s density. Certain aesthetic clusters are overrepresented in Western archives. Religious iconography. 70s photography. Horror cinema. Melodrama. Those become gravitational wells. Push the model “away” from something, and you don’t land in neutral. You slide into the nearest dense basin.
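The "repulsion doesn't land you in neutral" intuition can be illustrated with a toy model (my sketch, not the commenter's method, and no relation to any real image model): a one-dimensional "latent space" with two density basins. Climb the density while being repelled from one basin (a crude stand-in for a negatively weighted prompt term), and the point doesn't drift into empty space; it settles in the other basin. All names and numbers here are illustrative.

```python
import numpy as np

# Toy 1-D "latent space": a large mainstream basin at x = 0 and a smaller
# but still dense secondary basin at x = 5 (purely illustrative numbers).
def density_grad(x):
    # Gradient of an unnormalised two-Gaussian mixture density.
    return -x * np.exp(-0.5 * x**2) - 0.6 * (x - 5) * np.exp(-0.5 * (x - 5)**2)

def negate(x0, avoid=0.0, weight=1.0, steps=500, lr=0.05):
    """Climb the density while being repelled from `avoid` -- a crude
    stand-in for a negatively weighted prompt term."""
    x = x0
    for _ in range(steps):
        repulsion = weight / (x - avoid)  # repulsion fades with distance
        x += lr * (density_grad(x) + repulsion)
    return x

x_final = negate(0.5)
print(x_final)  # settles near the second basin (around x = 5), not in empty space
```

The design point: the repulsion term only says "not here"; where you end up is decided entirely by the density of the surrounding terrain, which is the "nearest dense basin" behaviour described above.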

That’s exactly what I am testing right now in a series of “negation” experiments.

I’ve been deliberately stripping prompts of stabilising anchors (negating object, place, time, segmentation), forcing the model out of its comfortable attractor states. Each iteration removes another set of datapoints until the system stops producing “recognisable imagery” and starts producing ontological instability instead. The goal isn’t to summon ghosts. It’s to map the edges of the terrain. To see where the model collapses, where it overcorrects, where it defaults to cultural residue.
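The iterative stripping procedure described above could be sketched as a simple prompt-ablation loop. To be clear, the anchor categories come from the comment, but the prompt text and structure below are entirely hypothetical examples of mine, not the commenter's actual prompts.

```python
# Hypothetical anchor phrases for each stabilising category named above.
ANCHORS = {
    "object": "a woman's face",
    "place": "in a cathedral",
    "time": "in the late 1970s",
    "segmentation": "sharply framed portrait",
}

def ablation_series(base="photograph",
                    order=("object", "place", "time", "segmentation")):
    """Yield prompts with one more stabilising anchor removed each step,
    ending with only the bare base term."""
    remaining = dict(ANCHORS)
    yield ", ".join([base] + list(remaining.values()))
    for key in order:
        del remaining[key]
        yield ", ".join([base] + list(remaining.values()))

for prompt in ablation_series():
    print(prompt)
```

Each successive prompt removes one category of grounding; the interesting question, per the comment, is at which step the model's output stops being "recognisable imagery."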

Loab was an accidental version of that experiment. I’m now doing it intentionally.