AI hallucinations can put construction projects at great risk

AI-generated outputs should not be relied on for final sign-offs, according to Georgia Tech.
March 27, 2026

The construction industry stands to benefit from AI, but project teams should not “treat AI outputs as gospel or rely on them for final sign off,” writes a Georgia Tech lecturer who teaches machine learning and natural language processing.

Construction firms are rapidly adopting generative AI to search for and summarize project documents, emails, and schedules. Using AI for these functions speeds up administrative processes which can help boost profit margins.

But there is a danger that project teams will “conflate well written answers with ground truth.” Large language models (LLMs) can draw on incorrect, incomplete, or outdated data when compiling authoritative-sounding documents. Project team members may be unaware that an LLM used bad data, which can lead to catastrophic results.

“The danger is most significant in work that becomes invisible once covered: foundations, reinforcing steel, post-tensioning, fireproofing and critical mechanical, electrical and plumbing routing.” The big takeaway: AI speeds many routine tasks, but humans must still verify that important facts and conclusions are based on reliable data.
