
I did engineering at a university, and one of the mandatory courses was technical communication. The prof understood that the type of person who went into engineering was not necessarily going to appreciate the subtleties of great literature, so their coursework was extremely rote. It was like "Write about a technical subject, doesn't matter what, 1500 words, here's the exact score card". And the score card was like "Uses a sentence to introduce the topic of the paragraph". The result was that you wrote extremely formulaic prose. Now, I'm not sure that was going to teach people to ever be great communicators, but I think it worked extremely well to bring someone who communicated very badly up to some basic minimum standard. It could be extremely effective when applied to the (few) other courseworks that required prose too - partly because, by being so formulaic, you appealed to the overworked PhD student who was likely marking it.

It seems likely that a suitably disciplined student could produce prose that looks a lot like ChatGPT's, and the cost of a false accusation is extremely high.



Extremely disciplined students always feed papers into AI detectors before submitting and then revise their work until it passes.

Dodging the detector happens regardless of whether or not AI was actually used to write the paper.


This is my exact issue. ChatGPT seems formulaic in part because so much of the work it's trained on is also formulaic, or at least predictable.



