
Programmers are usually in the minority. The introduction mentions that ChatGPT reached 100 million users faster than any other consumer technology in history. There aren't even that many programmers worldwide. In their table 3 of non-use scenarios, programming isn't an explicit one, while "creating poetry" is. (This is despite mentioning Copilot use as one of their pre-screen options. Perhaps, of the 24 situation codes they came up with, one of the 4 they removed from table 3 for having the greatest reported AI usage was programming, since this study is more about non-usage.) To put yourself in the mindset of a study participant, go through each of those scenarios and ask yourself whether you've used the AI for that (and would use it again) or not, and why.

They also surveyed only a few hundred people, via Prolific.

The product's success (millions of users) implies that for most people, concerns over "ease of use" (which is how I'd code your reason of "flow") aren't common, because it's quite easy to use for many scenarios. But I'd still expect the concern to come up among those talking about using it for artwork, because even with tools like inpainting in a graphics editor it's still not easy to get exactly what you want... The study mentions they consolidated 29 codes into the 8 in table 2 (you missed the two general concerns, Societal and Barrier). Perhaps "ease of use" slides into "Barrier", as they highlight "lack of skill" as fitting there, and that's similar. It would be nice to see a sample of the actual survey questions and answers and coding decisions, but hey, what is open data, am I right.

Anyway, the table headings are "General Concerns" and "Specific Concerns". I wouldn't get too hung up on the use of the term "fear", as the authors seem to use it interchangeably with "concern". I'd also read something like "Output Quality: fears that GenAI output is inaccurate..." synonymously as "has low confidence in the output quality of GenAI". (I'd code your "value" issue as one about output quality.) All of these fears/concerns/confidence questions can be justifiable, too; the term is independent of that.



My actual problem isn't with quality, it's speed. I either have to put in very specific prompts and wait for a good model to think, or iterate many times with a lesser one through incremental prompt refinement. Neither of these is an effective use of my time. Programmers may be a minority, but others with specific, high-bar use cases must also exist. As far as I can tell from reading HN, programmers are a high-value target, so it would be odd for them not to be well represented.

Given the number of bad studies out there, a better default might be to consider all such posts not well-founded until independent replications appear.

The small sample size is probably the major factor, along with the subjective summarization.



