Disclaimer
These are my thoughts TODAY. Everything is moving so fast that these opinions can change based on industry changes during any given week. I do not yet know how often I’ll update this page, or what form those updates will take.
Will AI take all of our jobs?
Yes, eventually.
As it turns out, LLMs are great at writing code. So it’s coming for the software developers first. The most recent models were all largely built by the models themselves, or rather by the agentic coding tools wrapped around them. Most AI companies are focusing on coding as their first major application, and most are going deep. Anthropic is going wide, though, with Claude Cowork and its skill system.
Speaking of Anthropic, we recently saw markets panic after they launched a legal plugin for Claude Cowork. Thomson Reuters dropped 16%, RELX fell 14%, and LegalZoom sank nearly 20%, wiping roughly $285 billion from the legal tech sector in a single day. Similarly, KPMG told its auditor, Grant Thornton UK, that it should pass on the cost savings from its rollout of AI, and threatened to find a new accountant if it did not agree to a significant fee reduction. These stories talk about dollars, but there are actual people, people with jobs, behind those dollars.
And this is greatly upsetting to people.
Don’t be mad at AI, be mad at late stage capitalism.
Didn’t we use to dream of living a life of leisure while robots did our jobs and menial labor? Why are so many people upset that that dream is now becoming a reality? Obviously we thought robots were going to be washing the dishes, doing the laundry, cleaning the house, walking the dogs, etc. We didn’t think they’d show up and start doing the things we enjoy!
So, here they are doing our creative writing, drawing and painting, making music, and just about any office job that you can think of. And if they’re not doing your job, they’re certainly making you more productive. Well, we don’t love that.
What strikes me is that people aren’t demanding four-hour work weeks and universal basic income, they’re demanding that we outlaw AI so they can continue to work. We’re so brainwashed by capitalism that when the first technology shows up that promises to make us hyper productive as a society, we rail against it as though we’ve developed Stockholm syndrome for the very system that exploits us.
I wasn’t around during the industrial revolution, but I have to imagine we faced a similar existential crisis then, too. What need have we of a weaver when a machine can produce patterned cloth at 10 to 100 times the rate? I can imagine that today’s situation is very similar. I can even imagine that a lot of the exact same arguments happened then that are happening now.
I’m not sure if the concept of an economic bubble existed back then, but I can picture some of the people whose jobs the machines eventually took thinking that the machines could NEVER produce the same quality or creativity that they themselves poured into their work. As it turns out, society at large cared more about getting MORE clothes cheaper than about having each garment hand-tailored.
Is this a bubble?
We know that there are billions and billions of dollars invested in AI, and we know that the companies being invested in have yet to turn a profit (except for Nvidia). Even the most expensive ChatGPT and Claude plans, running $200/month, aren’t profitable. All AI use today is subsidized by investors trying to win the race for eyeballs.
However, big companies are pouring even more money into infrastructure in 2026 than they did in 2025, oftentimes by a factor of 10. AI spending, while seemingly astronomically high (perhaps bubble high), is INCREASING. How much infrastructure investment will it take for current consumer prices to reach break-even? Is this round enough? When will the spending be enough to push the price of running models into utility territory?
Once investment and spending slow, what will the price of running AI be? And will consumers be willing to pay that price?
What about the legal cost?
All AI models are trained on vast amounts of data and information, and we’re told that it’s impossible to know how much of that data was copyrighted. However, we’ve already seen image models produce images with the Getty watermark. And we’ve seen researchers reproduce the Harry Potter books with 96% accuracy from commercial LLMs.
If any one of us had created a tool that reproduced copyrighted material to this degree, we’d be sued into oblivion. At some point there must be a reckoning. Mustn’t there? How can we continue to allow AI to produce text, images, audio, and video when it was trained on material that it doesn’t own and wasn’t given permission to use?
What about the environmental cost?
Admittedly, I’ve not spent much time educating myself about the environmental impact of running AI. I’ve heard the same rumblings about AI that we heard about blockchain, and I took those very seriously. I promise to do better here, but right now I don’t feel at all equipped to comment.
Isn’t AI just a flash in the pan like blockchains and NFTs?
I worked at Block.one for several years, which oversaw the EOS ICO, the largest Initial Coin Offering during the short-lived blockchain era, coming in at over $4 billion. I’m sorry blockchain folks, but the reality is that there was never a practical application for blockchains beyond an asset store…er, I’m sorry…digital currency. There was no shortage of amazing ideas for ways we could use blockchains, but very few of those ideas were better than how we were already solving those problems.
The blockchain era was characterized by a technology in search of a problem. We were trying to find a use for blockchains beyond Bitcoin and Ethereum (and the other “currencies”). There were some really creative and neat ideas, but ultimately none of them solved a problem that didn’t already have a simpler and cheaper solution. In other words, there were simply no useful applications of blockchains.
There are already tons of useful applications of AI technologies. Now, maybe they’re not priced right yet, and maybe once they are, nobody will want to pay for them. But unlike blockchains, the useful applications are already here.
Will we still need programmers?
We talked earlier about whether or not AI will take programmers’ jobs. Yes, AI can write code, but, at least for now, we still need people to tell the AI what to write. Using these coding tools is a skill, and quite a complex one at that. So the code is written by the AI now, but the people orchestrating the coding agents are what I would call programmers.
When OpenAI released their coding agent Codex 5.3, they claimed that it was 100% coded with agentic coding tools. Anthropic announced Opus 4.6 that same week, also claiming that almost all of its code is now written with agentic tools. When asked why they still had so many software developer jobs posted on their website, they said they still need developers to run the agents.
So, yes, we still need programmers. But the job is changing. What we’ll call a programmer tomorrow isn’t what we think of when we talk about programmers today.
What about junior developers just now entering the field?
Running agentic coding tools effectively is currently an advanced skill, one that changes and updates on a near-daily basis. First of all, simply getting a safe environment up and running with all the tools and permissions needed is a non-trivial task. Verifying the “non-functional” requirements of an application can also be non-trivial, and junior developers are not necessarily well equipped to tackle either.
For instance, I’ve noticed that AI tools almost always miss adding CSRF protection to web forms, even when I get detailed about security requirements. I only know to check for that because I’ve had apps hacked due to exactly that kind of oversight. I’ve launched hundreds of websites and dealt with all their (read: my) flaws once they hit the big bad web.
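To make the CSRF thing concrete, here’s a minimal sketch of the kind of protection these tools tend to omit. This is illustrative Flask code, not anything from a real app of mine; the route and field names are hypothetical. The idea: issue a per-session token, embed it in every form, and reject any state-changing request that doesn’t echo it back.

```python
# Minimal CSRF protection sketch (Flask). Hypothetical names, for illustration.
import secrets
from flask import Flask, abort, render_template_string, request, session

app = Flask(__name__)
app.secret_key = "change-me"  # in a real app, load this from config/env

def csrf_token():
    # One token per session, reused across that session's forms.
    if "_csrf_token" not in session:
        session["_csrf_token"] = secrets.token_hex(32)
    return session["_csrf_token"]

@app.before_request
def check_csrf():
    # Reject any state-changing request whose token doesn't match the session's.
    if request.method in ("POST", "PUT", "PATCH", "DELETE"):
        expected = session.get("_csrf_token")
        submitted = request.form.get("_csrf_token", "")
        if not expected or not secrets.compare_digest(submitted, expected):
            abort(400)

@app.route("/profile", methods=["GET", "POST"])
def profile():
    if request.method == "POST":
        return "saved"  # only reachable with a valid token
    # Every form has to embed the token as a hidden field.
    return render_template_string(
        '<form method="post">'
        '<input type="hidden" name="_csrf_token" value="{{ token }}">'
        '<input name="display_name"><button>Save</button>'
        "</form>",
        token=csrf_token(),
    )
```

In practice you’d reach for something like Flask-WTF, which handles this for you. But “handles this for you” only helps if you know to ask for it, and that’s exactly the knowledge junior devs haven’t had the chance to build yet.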
But then what value do agentic AI tools bring in this situation? Possibly that the experimentation loop shortens dramatically. Maybe we can ship agents specifically designed to package up all the security knowledge the industry has built over the last couple of decades, and make every developer run their vibe-coded apps through them?
Right now, it honestly doesn’t feel much different to me than how the industry has always worked. There will always be a need for developers of various skill levels. The junior devs tend to take on the greatest volume of code writing while the senior devs work on specifications, code reviews, and mentoring. Perhaps, right now, while we’re all figuring out how these advances in AI technology are going to change things, companies are pausing junior hiring. But I think that will rebound fairly quickly.
The biggest users of AI code editors are currently senior developers.
Most of the data so far points towards senior developers being the most frequent users of agentic coding tools. We’re seeing a pretty large uptick in senior devs submitting PRs and committing code. I think they’re simply better positioned to take advantage of the tools as they exist today. As mentioned previously, getting these tools set up and running effectively and safely is still a difficult task. I’m sure that will change quickly, but it’s where we are now.