It was November 2003, and I had just finished three months of Navy boot camp in Great Lakes, Illinois. I arrived with my clean shave and haircut to the next phase of my military training at Fort Meade, Maryland, at the Defense Information School (DINFOS). I was going to be a Photographer’s Mate in the Navy, continuing a tradition and following in the footsteps of many brave individuals who had documented military history with photography since World War II.
Since the early 1990s, digital photography had been gaining popularity over film. I didn’t have much experience with photography beyond a high school class I’d taken. By 2003, digital cameras were emerging as a professional tool for serious photographers, but many veterans in the field saw them as a threat to their artistic craft. Digital photography, they argued, would allow any schmuck to become a photographer with little creativity or talent. At least, those were some of the discussions happening at the time. To a degree, they were right: digital cameras transformed many industries. But they didn’t reduce demand; quite the opposite. In the 20-plus years since, I’ve watched a new generation of creative people produce amazing works of art across different fields, benefiting from cheaper cameras, shorter production cycles, and a much lower barrier to entry into the profession or hobby.
Today’s discussions about AI and software development remind me quite a bit of the film-versus-digital debate back then. There were those who didn’t consider people using digital cameras to be real photographers unless they knew how to develop film and prints in a darkroom. Today, the conversation is around “vibe” coding versus “raw dogging” it with happy fingers and pure keyboard magic. Many are screaming into the void of social media feeds, warning others of the dangers of letting AI code for you. “You’re hurting your skills by coding with AI,” they say. Others, mainly AI companies who want you to subscribe to their services, warn that your skills will be irrelevant in six months. Subscribe now! It’s futile, they claim; there’s no point in learning to code anymore. Start finding yourself a job in the trades; it’s over. Yet optimists preach about the amazing world to come, where we’ll all be able to build multimillion-dollar companies with an army of AI agents that do everything for us. I can see it now: my agent haggling with your agent for the cheapest price on an AI-powered hammer so I can change careers and move into the trades.
The argument about using AI and diminishing your coding skills may have some merit, but only time will tell. In 2003, when I went through DINFOS’ photography school, there were very similar concerns about newcomers not truly learning the foundations of photography if they didn’t understand film. The format of the training had us first learn light theory and composition using film, then move to digital cameras in later parts of the course. We learned to develop our own film in darkrooms and studied the chemical compositions needed to properly develop a photograph, none of which was actually used in the field anymore. I suspect those worries were unwarranted because, within a few years, DINFOS stopped training new photographers on film altogether. I think we’ll see a similar transition for writing code. There will be hesitation about allowing new Computer Science majors to use AI, but over time, it will become like a writer using a pencil, typewriter, or keyboard. Pick one.
I’m not sure which camp I fall into. I’m a self-taught developer, a decent one, though I’m no computer scientist and likely won’t be writing any frameworks or open-source projects that everyone uses in their products anytime soon. I have little interest in that. I’ve worked mainly on the frontend but can also do full stack. I enjoy creating products, and AI has been very helpful in both learning faster and writing more complex code to solve bigger problems. At the same time, I see how the current state of AI can feel intimidating. The advances in the past three years have been very cool, and if the pace keeps up, who knows what will be possible in the next three? For now, I’m enjoying being able to create cool things faster. Just the other day, I built a Tetris clone in JavaScript in a few hours. I’d never tried building a game before; it was a lot of fun, and I learned a lot.
I see the evolution of AI following a path similar to photography’s transition from film to digital. Today, everyone uses digital, both professionals and hobbyists. While there’s still some use of film, it’s primarily in very specific creative processes, like the film industry, or as a nostalgic hobby, or for some fancy artist who sells prints of cracks in concrete for $50,000. Probably money laundering. I guess I’m in the wrong industry.
Let’s be clear, though: digital cameras did shake things up, but not because photographers were no longer needed. Digital was simply a more efficient and cheaper way of doing photography. The low barrier to entry has been responsible for many creative expressions and creators who might never have touched photography if they’d had to use film. It’s also changed how movies and TV shows are made, with many big-budget productions using Digital Single-Lens Reflex (DSLR) cameras to shoot scenes. One that comes to mind is the 2012 film The Avengers, which used DSLRs for many of its action scenes.
I see a very similar path in how we create software with AI. Unless there’s a significant leap in capability or AI gains consciousness, I see it enriching how we create products and helping us resolve tougher problems. It’s already doing that for many of us, especially those who aren’t 10x engineers and are too “stupid” to get a job at a FAANG company because we don’t have the neurons to solve LeetCode problems. I can finally crack binary search and call myself a real software developer.
All kidding aside, there are huge benefits, in my opinion, for those creating products today, not just developers. From an engineering perspective, one person can do way more, faster. Of course, this raises the question of whether we’ll need as many developers in the future. I think so. There will always be problems that need solving, and some people don’t want to solve them themselves; they’re willing to pay someone else to do it so they can focus on whatever they want at the time. Even if AI ends up doing all the coding, design, and marketing, I see it as an abstraction layer for problem-solving. Regardless of what AI becomes in the future, one thing is clear: humans are creators and problem-solvers.
I do hope coding doesn’t go away in my lifetime. I love writing code and the feeling when I finally understand a concept after struggling with it for a while. My conclusion is that AI will create a much bigger demand for those who can write and understand code, because it will make problem-solving with computers more accessible and let smaller groups of people do more work. Like most technologies humans have created, it will make some things obsolete, but we’ll figure out new ways to apply it to other fields. Like photography, it will help us make better, more expressive products.
If I’m wrong, I see three possibilities: Skynet becomes a reality and we’re screwed; humans actually colonize Mars and AI helps us get there; or AI is the Antichrist and Jesus’ return is right around the corner. Whichever it is, I want to keep writing code for as long as I can.