September 20, 2025 · updated March 15, 2026

My background is in rail vehicle engineering… we help transit agencies buy and oversee new trains, streetcars, coaches, locomotives, etc. Design review, construction management, procurement support. Not the kind of thing that shows up in AI demos.

I’ve spent a lot of time thinking about AI and what it means for my industry. This post is my best guess at what will happen over the next five years. A broader set of global AI predictions will come in a later post.

I’m not an AI researcher. I’m an engineer who uses these tools daily and thinks about how they’ll reshape engineering consulting. Take the predictions with appropriate uncertainty.

Where we are right now

The AEC industry is behind. Over half of firms still use paper during the design phase. This is not surprising. But the 27% who are using AI are going harder: 94% of them plan to increase usage this year, and 68% report saving at least $50,000. There is a chasm forming between firms that are experimenting and firms that are not.

Meanwhile, an MIT study found that 95% of generative AI pilots fail to show measurable financial returns. Gartner put generative AI in the Trough of Disillusionment on their 2025 hype cycle. Companies that sprayed AI across the org chart without a strategy are writing off their experiments.

This is good news, in my mind. The hype tourists are leaving, and the people who remain are the ones integrating AI into real workflows.

Five predictions

I want these to be specific enough that I can come back in 2031 and check whether I was right.

1. AI agents will handle most routine project documentation by 2029

Not generate drafts for humans to edit… actually handle it. Daily construction logs, progress reports, RFI routing, meeting minutes, transmittal letters. The kind of work that currently eats 15–20% of a project manager’s week and adds almost no engineering value.

The shift to agentic AI is the biggest development since I started paying attention. These are systems that don’t just respond to prompts but plan, use tools, and execute multi-step tasks. I’ve built a couple over the past two years, and they are surprisingly capable. Firms have already built public AI agents that evaluate scope changes by cross-referencing project execution status, materials purchased, and schedule impacts.

The pieces exist today. The integration is what’s missing, and that’s an engineering problem, not an AI problem.
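To make the idea concrete, here is a minimal sketch of one plan-act step in a document-routing agent. Everything here is hypothetical illustration: the route names and the keyword classifier are stand-ins for what would, in a real system, be an LLM call wired into the firm’s document control tooling.

```python
# Hypothetical destination queues for routed project documents.
ROUTES = {
    "rfi": "rfi_log",
    "transmittal": "document_control",
    "daily_log": "construction_records",
}

def classify_document(text: str) -> str:
    """Keyword stand-in for an LLM classification step."""
    lowered = text.lower()
    if "request for information" in lowered or "rfi" in lowered:
        return "rfi"
    if "transmittal" in lowered:
        return "transmittal"
    return "daily_log"

def route(text: str) -> str:
    """One plan-act step: classify the document, then pick its queue."""
    kind = classify_document(text)
    return ROUTES[kind]

print(route("RFI 042: clarify clearance envelope at platform edge"))
# → rfi_log
```

The integration work the post refers to is everything around this loop: authentication, the project management system’s API, and a human checkpoint before anything is filed.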

2. Design review will become AI-assisted with human approval, not human-performed with AI assistance

This is a subtle but important distinction. Right now, an engineer reviews drawings and specifications line by line, and may occasionally ask an LLM to help check something. By 2031, I expect the default workflow to flip: AI performs the initial review against codes, standards, and prior project feedback, and the engineer reviews the AI’s findings.

An estimated 43% of design insights and feedback get lost in traditional review processes. A model trained on ten years of a firm’s review comments, keyed to specific spec sections and drawing types, would catch things that a junior reviewer simply doesn’t know to look for.

This doesn’t replace engineering judgment. It replaces the tedious, error-prone read-every-page part of the job. The engineer’s role shifts toward evaluating edge cases, exercising judgment on ambiguous findings, and making decisions the AI can’t. That’s a better use of time.
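The core of the “keyed to spec sections” idea can be sketched in a few lines. The section numbers and comments below are invented examples; a real system would retrieve over ten years of review history with embeddings rather than exact-match keys.

```python
# Invented prior review comments, keyed by spec section number.
PRIOR_COMMENTS = {
    "05 12 00": ["Verify weld procedure references AWS D1.5, not D1.1."],
    "34 11 13": ["Check rail fastener spacing against the vehicle load case."],
}

def review_findings(spec_sections: list[str]) -> dict[str, list[str]]:
    """Surface past comments relevant to the sections under review."""
    return {s: PRIOR_COMMENTS.get(s, []) for s in spec_sections}

findings = review_findings(["34 11 13", "09 91 00"])
# Sections with no history come back empty — those are where the
# engineer's judgment, not the corpus, has to carry the review.
```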

3. Domain-specific agents will outperform general-purpose LLMs for engineering tasks

GPT-5 and Claude Opus are impressive, but they know a little about everything and a lot about nothing in particular. A small agent wrapper fine-tuned on the task will be more useful than a naked frontier model.
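The simplest version of such a wrapper is just domain context injected before every call. This is a sketch, not a real API: `call_model` is a stand-in for any frontier-model client, and the context string is a hypothetical example of the rail-procurement specifics a naked model would lack.

```python
# Hypothetical domain context a generic model would not have by default.
DOMAIN_CONTEXT = (
    "You review rail vehicle procurement documents. "
    "Flag deviations from APTA and agency-specific standards."
)

def call_model(prompt: str) -> str:
    """Stand-in for a frontier-model API call."""
    return f"[model response to {len(prompt)} chars of prompt]"

def domain_review(document: str) -> str:
    """Wrap the generic model with domain context before calling it."""
    prompt = f"{DOMAIN_CONTEXT}\n\nDocument:\n{document}"
    return call_model(prompt)
```

A fine-tuned wrapper goes further, but even this context-injection layer is where the domain knowledge lives, and it belongs to the firm, not the model vendor.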

4. AI won’t shrink engineering teams, but it will change who you need to hire

The skill floor is going to rise. A junior engineer with AI tools can do work that previously required five years of experience, but someone still needs to know what questions to ask and whether the output makes sense. The bottleneck shifts from “can this person draft a spec section” to “can this person evaluate the spec section.”

The role I think will matter most is something like an AI integration lead: someone who understands both the engineering domain and the tooling well enough to connect them. Not a data scientist. An engineer who codes, or a coder who understands engineering.

5. The competitive gap between AI-adopting and non-adopting firms will be visible in win rates by 2028

This one is the hardest to verify, but I believe it most strongly. If firm A can turn around a proposal in two weeks that takes firm B four weeks, and the technical approach is comparable, firm A wins. If firm A can include an AI-driven simulation or lifecycle cost model in their proposal at no extra cost because they’ve built the tooling, that’s a differentiator.

More importantly, clients will start expecting it. Clients are under pressure to do more with constrained budgets. A consultant who can deliver a design alternatives analysis in days instead of weeks is enabling the agency to make better decisions within their timeline. Once one firm demonstrates this, the expectation resets for everyone.
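The lifecycle cost model mentioned above is, at its core, a net-present-value calculation: discount each year’s operating cost back to today and add the capital cost. The figures and the 3% discount rate below are illustrative, not from any real procurement.

```python
def lifecycle_cost(capital: float, annual_om: float,
                   years: int, rate: float = 0.03) -> float:
    """NPV of capital plus discounted annual O&M costs."""
    npv = capital
    for year in range(1, years + 1):
        npv += annual_om / (1 + rate) ** year
    return npv

# Illustrative vehicle: $2.5M capital, $120k/yr O&M, 30-year life.
cost = lifecycle_cost(capital=2_500_000, annual_om=120_000, years=30)
```

The tooling investment is in feeding this with real fleet data automatically; the math itself is trivial, which is exactly why including it should cost a prepared firm nothing.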

How I think about the confidence level on these

I apply roughly the same framework I use for any prediction:

What’s the base rate? Most technology predictions are wrong on timing, even when they’re right on direction. I’d give myself 60% odds on the timing of these predictions and 85% on the direction.

Where’s my motivated reasoning? I’ve invested time and money in AI integration. I want it to work. That makes me a bad judge of whether my specific approach is correct, though the macro trend is supported by enough independent evidence that I trust the direction.

What would change my mind? If regulatory action (FRA, FTA) restricts AI use in safety-critical design review. If the cost of running capable models doesn’t continue to fall. Either of these would make me revise.

The meta-point

95% of AI pilots fail. Be specific about what you’re trying to improve, pick interventions with actual evidence, implement them properly, and measure the results. The firms that do this will pull ahead. The firms that buy a ChatGPT Enterprise license and call it a strategy will not.

I’ll revisit these predictions annually. If I’m wrong, I’d rather know early.