Fidelity to Fracture: AI in Image-Making Practice
Generative AI has become increasingly proficient at tasks that demand fidelity. It can stretch a background, fill in missing areas, or conform assets to technical specs with mechanical precision. These interventions are fast, clean, and structurally predictable. The image extends. The frame holds. The system follows orders.
But shift the task from correction to invention, and the logic disintegrates. Ask for something open-ended, and the results begin to fray. The tool no longer completes a pattern; it disrupts it. Perspective folds. Surfaces bleed. Familiar visual language is bent until it carries an unintended charge. The image doesn't just resolve: it hallucinates.
This shift from fidelity to fracture reveals a core instability at the heart of image-making with AI. These systems are not neutral tools. They do not "understand" visual context or spatial logic in the way humans do. What they produce is grounded not in perception but in statistical plausibility: trained interpolations of what images tend to be rather than what this image is or should become.
In this context, the artist's role shifts. You are no longer composing in the traditional sense. You are setting conditions, prompting iterations, and then sifting through outcomes—most of which fail to meet the original intent. What matters is not the generation but the curation. The work happens after the image.
This is not a deficiency. It is a medium-specific condition. And like all conditions, it demands practice.
Working with generative tools involves building a feedback system between intent and excess. You do not author so much as you accumulate, interrogate, and refine. You become the editor of a field of outputs rather than the originator of a single image. Meaning emerges in what you preserve, what you discard, and what you allow to persist in tension with the brief.
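To make that feedback loop concrete, here is a minimal, purely illustrative sketch in Python. The names are placeholders, not any real API: generate_variant() stands in for whatever model a project actually calls, and keep() stands in for the curatorial criteria, which in practice are a human judgment rather than a threshold. The point is only the shape of the process: generate a field of outputs, then do the work afterwards by deciding what to preserve.

```python
import random

# Hypothetical stand-ins for a real generative pipeline; generate_variant()
# and keep() are placeholders for illustration, not a specific tool or API.

def generate_variant(prompt: str, seed: int) -> dict:
    """Stub for one generation call: returns a record describing one output."""
    return {"prompt": prompt, "seed": seed, "artifact_level": random.random()}

def keep(image: dict) -> bool:
    """Curatorial filter: here, arbitrarily, keep outputs that stay visibly 'broken'."""
    return image["artifact_level"] > 0.7

prompt = "a wave offering, half-formed and overconfident"

# Accumulate a field of outputs rather than a single image.
field_of_outputs = [generate_variant(prompt, seed) for seed in range(64)]

# The work happens after the image: most outputs are discarded,
# and meaning emerges from what is preserved.
selected = [img for img in field_of_outputs if keep(img)]
print(f"kept {len(selected)} of {len(field_of_outputs)} outputs")
```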
My video Wave Offering is a good example of this dynamic—a curated composition built from deliberately unstable material, leaning into the visual language of early generative systems. Rather than chasing photorealism or coherence, it celebrates the distortions and failures that newer models try to suppress: warped anatomy, echoing artifacts, and the unmistakable awkwardness of an image half-formed and overconfident.
This practice is not about relinquishing control, nor is it about asserting dominance over the machine. It's about navigating a space where authorship is porous, and results are contingent. It's about working inside a fracture, neither resolved nor fully chaotic—and finding clarity through repetition, framing, and refusal.
AI does not make art. However, it does generate surfaces that artists can explore and interrogate.
And in that interrogation—between fidelity and fracture—a new kind of practice begins.