(Photo by Nick Fewings on Unsplash)


By Tara Hoke

This hypothetical situation is based on a pair of cases recently reported in the news media. It presents one of a growing number of circumstances in which engineers today must grapple with the ethical use of artificial intelligence in their professional practice.

Situation

An engineering professor is invited to serve as an anonymous peer reviewer on a paper submitted to an ASCE journal. When reading through the paper, the reviewer discovers an extraneous phrase at the end of the paper’s discussion section: “Regenerate Response.” The phrase has no discernible connection to the content of the paper, but the reviewer recognizes it as the label of a button in the interface of the artificial intelligence system ChatGPT.

One of the world’s most well-known generative AI systems, ChatGPT can create human-sounding natural language text in response to a user’s query, text that can range in complexity from a simple conversational exchange to a detailed analysis of a scholarly topic. Below each text response generated by the system, ChatGPT provides a series of buttons by which users can rate their opinion of the response, including one that invites users who are dissatisfied with the text to try again by selecting the option to “Regenerate Response.”

Based on the unexplained appearance of this phrase in the paper, the reviewer suspects that the author used ChatGPT to produce at least some portion of the content for his paper. He believes that, when copying text from the ChatGPT interface into his paper, the author inadvertently copied the button labels below the text as well, and then he failed to review the copied content closely enough to catch the stray language before submitting the paper for review.

The reviewer shares his suspicions with the journal editor, who in turn contacts the author for an explanation. The author admits to using ChatGPT, but he claims that he merely used it to improve the clarity, tone, and organization of his own writing. When pressed by the editor to substantiate this claim, however, the author fails to produce an earlier draft or other evidence of his written contributions. As the author’s response casts serious doubt on the origins of the paper, the editor advises the author that his submission has been rejected.

Question

If this had been a real case involving an ASCE member, is it likely that ASCE's Committee on Professional Conduct would consider his actions to have violated the ASCE Code of Ethics?

Discussion

One of the most exciting aspects of generative AI is its capacity to automate content creation in the engineering workplace. Whether used in drafting business correspondence, summarizing financial data, streamlining information searches, or in a multitude of other applications, generative AI offers tremendous potential to increase the productivity and efficiency of engineering practice. Unfortunately, this same potential is also the source of its greatest threat to professional ethics, because it presents engineers with the temptation to offload not just business activities or preliminary work but also tasks that fall within the engineer’s professional responsibilities or that require the exercise of engineering judgment.

Generative AI is an exceptional tool for content delivery, but as with all tools, a user must understand its appropriate uses and limitations. First and foremost, for reasons that include flaws in the data used to train the AI and ambiguities created by user requests themselves, AI systems such as ChatGPT are susceptible to producing incorrect, misleading, or even completely fabricated information. In one much-reported case, an attorney was sanctioned for filing an AI-created legal brief that cited fake case law; in another, a biologist’s use of AI in a paper was exposed when a reader found himself listed as the author of several nonexistent references.

Section 1c of the ASCE Code of Ethics requires that engineers “express professional opinions truthfully and only when founded on adequate knowledge and honest conviction,” while Section 4g instructs that engineers must “approve, sign, or seal only work products that have been prepared or reviewed by them or under their responsible charge.” In the case described above, if a member submitted an AI-generated paper without first making a thorough review to confirm the accuracy and applicability of its content, then the member would certainly not have met his ethical requirement of adequate knowledge and responsible approval of an engineering work product.

Second, because generative AI creates content that is based on, or similar to, existing content, the output it creates could represent plagiarism or a copyright infringement of the original material. Indeed, any claim of authorship to AI-generated text could be considered plagiarism, as it involves taking content from another source and identifying it as one’s own work. This misattribution is particularly concerning from an ethical standpoint when the false claim of authorship confers a personal benefit on the “author,” as with a student receiving credit for assignments written by AI or a researcher whose resume is boosted by an AI-created paper.

Sections 5a and b of the code state that engineers “only take credit for professional work they have personally completed” and “provide attribution for the work of others.” If the false claim of credit gives the offender an advantage over peers for grades or in competition for tenure or other professional recognition, then such an act might also run afoul of Section 3d’s mandate to “reject practices of unfair competition.”

Applying this to the scenario described in this column, the member undoubtedly submitted the AI-generated paper in hopes that its acceptance would confer some personal benefit on him; thus, the false claim represents at least a failure to give proper credit and potentially an unfair advantage over colleagues working to publish research entirely through their own efforts.

Finally, even if the member had placed appropriate checks and limits on the use of generative AI in his paper, his failure to inform the journal of this fact would still be cause for ethical concern. Given the ethical concerns raised by generative AI, the journal would likely have had additional questions about the author’s use or might have chosen to subject the paper to closer scrutiny. By omitting important information about his submission and instead forcing the journal’s reviewers to “catch” his use of generative AI, it could be said that the member violated his ethical obligation to “uphold the honor, integrity, and dignity of the profession” (as per Section 3a of the code).

Recognizing both the harms and benefits offered by generative AI, many publishers have adopted policies regarding its appropriate use in scholarly works. ASCE’s own policy states that it “will not review or accept manuscripts written by nonhuman authors.” Individuals who make any other use of generative AI in connection with their work must disclose that fact upon submission and be prepared to offer additional details about the nature of such use.

As the range of applications for generative AI continues to grow, other ethical considerations will likely join the ones discussed here. Indeed, the rapid pace of development of AI and other technologies is the reason for the addition of Section 1h to the ASCE Code of Ethics: “Engineers consider the capabilities, limitations, and implications of current and emerging technologies when part of their work.” While no one can predict exactly how new technology may affect the future practice of engineering, members who remain mindful of basic moral principles such as integrity, accountability, and transparency are nevertheless likely to remain in good standing with the profession’s ethical standards.

Tara Hoke is ASCE’s general counsel and a contributing editor to Civil Engineering.

This article first appeared in the March/April 2024 issue of Civil Engineering as “Relying on Generative AI Has Its Pitfalls.”