Image of a glowing circuit board with a central microchip labeled AI
Engineers are more frequently turning to artificial intelligence, but must be vigilant about minimizing the risks that accompany it.

This article is the second in a series looking at artificial intelligence’s impact on civil engineering and related fields. Read the first part here.

More Americans are integrating artificial intelligence agents into their internet searches, shopping, and travel planning – and exploring the ways these technologies can improve information management tasks.

Right now, when AI systems make mistakes, the consequences often feel innocuous – a humorous hallucination on a basic ChatGPT query or a silly animation.

But in the industries that design and build the spaces we live in, work in, and move through, being wrong at all can be catastrophic. As architecture, engineering, and construction firms increasingly turn to AI tools to help them unlock massive potential, they’re also more aware than ever of the risks.

“From the engineering standpoint, we’re concerned with reliability and risk,” said Dan Reynolds, AI leader for engineering firm Walter P Moore. “There’s obviously a lot of potential efficiency and productivity gains, but at the same time, if we don’t have the right guardrails, then there is significant risk to hallucinations or other errors.”

Reynolds suggests that as AI systems improve, it will become more, not less, difficult to spot their mistakes. Walter P Moore tested an AI model against the Principles and Practice of Engineering (P.E.) Exam. A few months ago, the model was not able to pass the P.E. exam. Now it is up to about 70% accurate – about what a graduate engineer might score.

“As AI models continue to improve, let’s assume they will eventually be correct 98% of the time,” he continued. “That doesn’t give us any more comfort to allow it to autonomously control all aspects of the design process. It just makes it that much harder to identify and correct the 2% of the time that it’s going to be wrong.”
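Reynolds’ point can be made concrete with back-of-the-envelope arithmetic. The 98% figure is his hypothetical, and the count of design decisions below is an assumption chosen for illustration, not a number from the article:

```python
# Illustrative arithmetic only: the 98% accuracy is Reynolds' hypothetical,
# and the decision count is an assumed figure, not data from the article.
def expected_errors(num_decisions: int, accuracy: float) -> float:
    """Expected number of wrong outputs at a given accuracy rate."""
    return num_decisions * (1.0 - accuracy)

# On a project with 500 AI-assisted design decisions at 98% accuracy,
# roughly 10 errors are hidden among 490 correct outputs -- and nothing
# about the output flags which 10 they are.
print(f"{expected_errors(500, 0.98):.0f}")
```

The arithmetic is trivial, but it shows why higher accuracy does not remove the need for review: the reviewer’s search space stays the same while the error signal gets fainter.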

Those guardrails Reynolds referenced may include AI systems checking one another’s output, but architects and engineers, not surprisingly, are adamant about keeping human oversight at the center of design and engineering work.

Managing and securing data

As the Source’s initial story in this series explored, AI has the potential to revolutionize how firms access and leverage their own data – but this requires firms to have more robust and consistent data management practices.

“I think almost every single company across the globe is telling their people, ‘Do not start putting your documents into free versions of an AI tool you found online,’” said Alastair MacGregor, who leads WSP’s property and buildings business line, “because once it’s in, you ain’t getting it back.”

“People need to structure their data better to take advantage of AI,” added Gideon D’Arcangelo, Arup’s Americas digital services leader. “There need to be protocols around how files need to be stored and accessed with a well-structured data model, so that data is tagged and can be found again.”

For instance, D’Arcangelo notes that the metadata that gets attached to files must be properly architected. “That means that there’s a huge change management aspect to that, too, because now Frank can’t store his file on his favorite local hard drive or even network drive,” he said. “It has to be done in a way such that there are no anomalies.”
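The protocols D’Arcangelo describes – required metadata, canonical storage locations, no files squirreled away on local drives – can be sketched in a few lines. The field names and validation rule below are hypothetical illustrations, not a real standard or any firm’s actual schema:

```python
# Hypothetical sketch of structured file metadata of the kind D'Arcangelo
# describes; the required tags and fields are illustrative, not a standard.
from dataclasses import dataclass, field

REQUIRED_TAGS = {"project_id", "discipline", "phase"}

@dataclass
class FileRecord:
    path: str                 # canonical location in the shared repository
    tags: dict = field(default_factory=dict)

    def is_well_formed(self) -> bool:
        """A file is findable only if every required tag is present and non-empty."""
        return all(self.tags.get(t) for t in REQUIRED_TAGS)

good = FileRecord("projects/bridge-042/structural/plan.rvt",
                  {"project_id": "042", "discipline": "structural", "phase": "design"})
bad = FileRecord("frank_local/plan_final_v2.rvt", {"project_id": "042"})
print(good.is_well_formed(), bad.is_well_formed())  # True False
```

The change-management point follows directly: a rule like `is_well_formed` only works if it is enforced at the moment files enter the system, which is exactly why Frank can no longer save to his favorite local drive.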

But there are other questions firms are considering.

“Data security and data privacy are major concerns,” Reynolds said. “Who owns the data, and what rights does a given party have to view, edit, or train models on that data? Autodesk updated their user license agreement recently to prohibit users from training their own AI models from the content those users are authoring.

“Third-party services generally require data going to a remote server. Who owns that data, and what rights does someone give up by using that service?”

The perils of saving time

Greater efficiency may help architecture and engineering firms accomplish more sophisticated work in the same amount of time, but it could also end up costing firms whose business models are built on selling hours.

“If you look historically back at when we’ve had major shifts, when we first went into AutoCAD, when we went into Revit – all of those efficiency benefits – the value was not captured by the person doing the job,” MacGregor said. “It was just dissolved into the supply chain. Even though they were getting a higher-quality output, if they were doing it faster, the expectation is you’re doing it faster. Then that’s great, I pay less because you’re spending less time doing it.

“How do we get to a place where the client sees a reduction in overall cost, while the AEC firm enhances their project-level margin to support the underlying training, ongoing enhancements, and forward-looking innovations that strive to further enhance the value that can be provided to the client?” MacGregor added.

Eric J. Cesal, a special program instructor at Harvard University, says the rapid progress of AI tools has thrown cold water on the more optimistic views many architects and architecture firms held just a few years ago.

“You go back three years, and the consensus was really that it was all upside,” he said. “It was going to clear away reviewing submittals and (requests for information) and all the drudge work of architecture and just leave us free to design all the time. Now we’re sort of reckoning with the fact that architecture is traditionally a profession that charges by the hour. So a technology that makes all of your work faster isn’t necessarily a great thing.”

Clients are becoming more aware of the efficiency AI tools can bring and are demanding faster delivery.

“They want the contracts to be less expensive because of these tools,” said Niknaz Aftahi, LEED AP BD+C, CEO and founder of aec+tech, an online platform based in the San Francisco Bay Area that connects AEC firms to software and technology tools, including AI-driven applications. (Aftahi is also a co-chair of the American Institute of Architects’ AI Task Force.) “So they encourage the architect to use this tool so they pay less. And it is becoming more competitive for companies to compete over a certain project and the timeline.”

Aftahi says architects spend only about 30% of a project’s total time on design. The majority – about 70% – goes to construction documentation; once a project gets underway, architects become coordinators more than designers. As AI begins to automate that documentation work, the threat of job displacement is going to become serious, she says, “because 10 architects will probably be replaced with 2-3 architects.”
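The mechanism behind Aftahi’s numbers can be sketched with simple arithmetic. The 30/70 split is her estimate from the article; the automation fraction below is an assumption chosen to show the mechanism, not a forecast:

```python
# Illustrative only: the 30/70 design/documentation split is Aftahi's
# estimate; the automation fraction is an assumed input, not a prediction.
DESIGN_SHARE = 0.30   # share of project hours spent on design
DOCS_SHARE = 0.70     # share spent on construction documentation

def remaining_hours(total_hours: float, docs_automated: float) -> float:
    """Human hours left if AI automates a fraction of documentation work."""
    return total_hours * (DESIGN_SHARE + DOCS_SHARE * (1.0 - docs_automated))

# If tools automate 80% of documentation on a 1,000-hour project,
# total human hours fall from 1,000 to 440 -- more than half the work gone
# before design time shrinks at all.
print(f"{remaining_hours(1000, 0.80):.0f}")
```

Because documentation dominates the hours, even partial automation of that one phase cuts total human effort by more than half, which is the staffing pressure Aftahi describes.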

This may work out fine if there are plenty of other building projects for those displaced architects (or engineers). But, as many point out, the market for new buildings is inelastic.

“Are you going to 10X the number of projects as a whole that the entire market generates?” Reynolds asked. “Maybe. We might need that to address the housing crisis. But given current demand, if every firm operates with a mindset of staff reduction, at some point the entire industry would collapse.”

Between utopia and dystopia

The allure of AI – its ability to process mounds of complex information and reveal new insights – is inescapable, but the human beings who work alongside these systems don’t want to be subsumed by the machine.

“As you get to AI, and it’s ‘Write a quote,’ or ‘Write an ask,’ you know, ‘Put something in there as a prompt,’ and it gets something out. It may look fantastic,” MacGregor said. “But are you vested in it? Is it yours? If you think about what we do as engineers, you’ve got to stamp the drawings and say that you’re in responsible charge of the design.

“How do we create people who can use those tools and still achieve that level of engagement, feeling a vested kind of ownership over the solution?”

MacGregor added that engineers take time to process complex problems – new AI workflows need to allow them to “process the complexities” of the problems they are solving. “If our goal is to supercharge our people, we need to think about both how AI tools can support that goal and how we integrate the time needed for them to digest the solutions created,” he said.

The days of people not taking the technology seriously – assuming their unique skills can’t be replicated – are over.

“I don’t consider anything sacred,” Harvard’s Cesal said. “I just sort of assume that AI will find a way to replicate just about everything eventually. And I come from a background in humanitarian architecture, with a specialization in disaster reconstruction and resilience, and that informs how I look at any major disruption: Be prepared, assume the worst, and build your strategy around that.”

There is probably some middle ground between the utopian promise of AI and the dystopian fears – the same middle ground humans traditionally navigate any time a new technology becomes widespread.

“We’ve had to continually reinvent what it really means to be human,” Cesal said. “And for the longest time, it meant being able to think, being able to reason, being able to compose a sonnet, all this other stuff. I’m not sure it means that in the future, but it’ll mean something else. And that’s our task today – to author what exactly that’s going to be.”