Ashley Forkey, S.M.ASCE, is a senior from South Florida studying civil engineering at Tufts University. She is focused on geotechnical engineering and is also pursuing a minor in engineering management. She has gained research and internship experience in geotechnical engineering along with experience in structural engineering and computer-aided design.

Forkey has been involved in ASCE throughout her undergraduate years. She is passionate about using engineering to support resilient communities and bridging the gap between student experience and professional practice. 

In her column, An Undergraduate's View, Forkey documents her experience as an ASCE student member navigating her civil engineering studies.

We have begun outsourcing our own thoughts. 

From summing up articles (because no one has any attention span anymore) to solving problems from start to finish (oftentimes incorrectly), artificial intelligence has become a staple in most students’ lives. 

But what can be seen as the greatest academic resource of all time can also be viewed as a hindrance to the quality of learning that students are receiving. Many students cannot write an email without the security of ChatGPT rewording it to “sound more direct and professional.” 

If we cannot communicate with confidence, how are we expected to trust our own calculations? How are we supposed to conduct meaningful research if we lack the attention span to read academic articles without shortening and simplifying them? I believe that students who consume information in fragmented, paraphrased forms are less likely to retain it, engage with it deeply, or draw independent conclusions. 

Regardless of the warnings that professors try to incorporate into their syllabi, most students know that, however knowledgeable our professors may be in other areas, this is one area where the students know more.

Kailey Takaoka, an incoming Tufts medical student starting this July, believes AI is a great resource in medicine for processing data and delivering faster results. It has significantly improved the interpretation of medical images such as X-rays, MRIs, and slides. However, AI has also become a factor in deciding which career path she will pursue. I knew AI posed a significant threat to a lot of careers, but medicine surprised me. She told me about radiology, pathology, and robotic surgery and how the negative impact of AI on these career paths will weigh heavily on her residency preferences. I had seen cashiers being replaced by self-checkout but now… doctors? 

What I immediately thought was, What does this mean for civil engineers? For me? As selfish as it may sound, I did not work this long and hard for my job to be replaced, even partially, by AI. So, what did I do? I turned to the culprit himself. I asked ChatGPT.

“How can AI replace civil engineers?”

ChatGPT quickly reassured me that AI will change civil engineering but not replace it. That’s reassuring. I then asked how civil engineering could change. Who knows ChatGPT better than ChatGPT? In my opinion, no one. ChatGPT gave me some engineering tasks that AI can replace or reduce, but I thought they were too broad to be of any true relevance to my conclusions. I altered my question and asked which concentrations within civil engineering were likely to be impacted by AI and why. 

According to ChatGPT, transportation engineering (specifically planning and operations) “is highly vulnerable because it relies on large datasets, predictive modeling, and optimization problems that AI excels at solving.”

Construction engineering and management is “susceptible to AI takeover due to its heavy dependence on scheduling, cost estimating, and performance prediction, all of which can be automated.”

ChatGPT also noted that urban/transportation planning (technical focus) “is at risk because land-use modeling, GIS analysis, and scenario simulations follow standardized, data-driven workflows.”

Geotechnical engineering and construction field engineering are the least likely to face major impacts, said the chatbot.

It’s a good day to like geotechnical engineering. Yippee! For everyone who is feeling a little defensive and sad at the thought of AI replacing their rigorous education, I’m sorry. What’s interesting about AI is that you can ask it the same question and it might give you a different output that better suits what you want to hear. 

The deeper concern lies in what this dependence on AI is doing to how students think in the first place. With AI becoming more of a crutch than a resource, how will students develop critical thinking skills when AI can generate a list of ideas for them to choose from? Not only is the reliability of AI rarely questioned, but the sources it pulls from are seldom scrutinized either. Enormous power is handed to whichever sources AI surfaces first. This creates an echo chamber that worsens as students lose the ability to write independently and adopt the first ideas they are given. The articles that AI favors in its conclusions grow only more influential as people depend on AI to save time and energy.

I spoke with Tre Williams, an environmental engineering major at Tufts (putting him at risk, according to ChatGPT). When I asked him how he'd feel about AI putting him out of a job, he reacted as most engineers would: impassively. How else would someone react to the thought of years of hard work and late nights being seen as replaceable? This reaction was not only expected but also a pattern across everyone I spoke with. It would be unfortunate for our hard-earned talents to become less useful (not necessarily useless) because of AI. But it is also out of our control and not worth feeling any emotion toward at all. 

An Instagram reel about the godfather of AI recently popped up on my feed. Geoffrey Hinton received the Nobel Prize in Physics in 2024. Don’t worry, I fact-checked the reel’s authenticity before writing about it. 

Hinton noted that AI is now being programmed to model human intuition in addition to human reasoning. He discussed how AI is currently used by authoritarian governments for surveillance and to create echo chambers that make people indignant. 

Alarmingly, he talked about how AI might be used in the near future to create “horrendous lethal weapons” and terrible new viruses. It was like listening to a speech from a dystopian movie — the kind of speech where no one listens to the warning even though the newfound technology is the speaker's own creation. 

It reminded me of how no one listened to the engineers before the Challenger launched in 1986. The Space Shuttle Challenger disaster occurred when an O-ring seal failed in cold temperatures, leading to the shuttle's breakup shortly after liftoff and the deaths of all seven crew members. Despite repeated warnings from engineers to delay the launch over safety concerns, no one listened enough to keep the shuttle on the ground. 

Much like Hinton who “designed” AI, these engineers designed the Challenger. Who is more qualified to determine how and when a product should be used than those who created it? Although my tone is humorous, the impacts of AI surpass the job market and monetary gain. If the godfather of AI used his brief Nobel Prize speech to warn us of the dangerous implications of this new technology, I think we should heed his words. 

Future leaders, inventors, and creators are just current students. We are living in a time when we are shortening our own attention spans because we cannot keep up with the demands of a racing society. Our learning is increasingly shallow even as our essays are more concise and articulate than ever. If we cannot put our own thoughts on paper for a lab report, how are we supposed to innovate, connect with others, or connect to our work? This goes beyond the classroom and back to the neural networks of our own brains, which AI replicates. 

AI uses artificial neural networks to create a mathematical model of the human brain. This is the first generation in which students do not have to use their own brains all the time. We have access to a model of the human brain at any given moment, and, in an effort to save time, energy, or both, we can turn to AI's neural networks instead of using our own. It is remarkable but weird to think of ChatGPT as a network of neurons and nodes bouncing information back and forth. 
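For readers curious what such a "network of neurons and nodes" actually looks like, here is a toy sketch in Python. It is not how ChatGPT works — a real model has billions of learned weights — but it shows the basic unit: each artificial neuron takes a weighted sum of its inputs, adds a bias, and squashes the result through an activation function. The weights and inputs below are arbitrary numbers chosen purely for illustration.

```python
import math

def sigmoid(x):
    # Squash any real number into the range (0, 1),
    # loosely mimicking how strongly a neuron "fires."
    return 1.0 / (1.0 + math.exp(-x))

def neuron(inputs, weights, bias):
    # Weighted sum of inputs plus a bias, passed through the
    # activation function -- the basic unit of a neural network.
    total = sum(i * w for i, w in zip(inputs, weights)) + bias
    return sigmoid(total)

# A toy "layer": two neurons reading the same three inputs.
# All numbers here are made up for illustration; in a trained
# network the weights are learned from data.
inputs = [0.5, -1.2, 3.0]
layer = [
    neuron(inputs, weights=[0.4, 0.1, -0.2], bias=0.1),
    neuron(inputs, weights=[-0.3, 0.8, 0.05], bias=0.0),
]
print(layer)  # two activations, each between 0 and 1
```

Stacking many such layers, with weights tuned on enormous amounts of text, is what turns this simple arithmetic into something that can draft an email or summarize an article.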

If students continue to rely on AI as a shortcut rather than a tool, we risk trading independent thought and critical reasoning for convenience, leaving a generation that knows how to consume fragmented information but not how to engage with it.