With Father’s Day having just passed, I was reminded of an old project I did for my dad almost 20 years ago. Here’s how I recently revisited it using AI.
Almost 20 years ago, my dad found a photo of his father, my grandfather, and asked me to fix it up. He knew I was “good with computers,” even though he’d never touched one himself.
I wasn’t a Photoshop expert, so my friend Adam helped me restore it, turning the original (left) into a restored image (right).
The result wasn’t perfect, but it felt whole again. My dad and brother loved it.
Fast forward to last week. I saw yet another new AI tool pop up (can’t even remember the name, since there are 18 new ones a day). Figured I’d give it a shot on that same photo.
The result? It didn’t feel like our family. It looked made-up: a new pocket square, random added details. It was just off.
Then I tried Gemini (I use it since it’s included in my Google One plan). Gave it this prompt:
“Can you do your best to restore the attached photo? I have a very old beat up photo from the 1940’s. As you can see it has tape on it and everything. My goal is to have it look as well restored as possible, as photo realistic as possible.”
The output? Loud colors, a funky suit, and none of the soul. Even when I tried refining the prompt, it got worse.
Next, I gave the same prompt to ChatGPT (free account). Twenty minutes later, I had a version that… wasn’t perfect either, but my brother and I could at least see the family resemblance.
So finally, I took a stab at unlocking an alternate reality with Figma 🙂
This was a small personal test, but it shows where AI tools are right now: fast, sometimes impressive, but not always accurate or emotionally tuned. I’m always digging into more of these use cases, curious where they work, where they fail, and how they fit into my creative workflows.
Seemed like the right time to dig this up again. Father’s Day brings up memories, and now, some experiments with the future too.