> "CS research will be guided by industry interests". This is good to most extent, save a few very remote situations.
You start out with a pretty strong statement there...
The point (for me) is not so much that a single company might try to push through some evil villainous plan. It's that all the companies that tend to sponsor such conferences (or more generally "guide" the research) have specific incentives.
Take as the most glaring example the way machine learning and statistics have been developing over the last few years. The industry has an interest in collecting and knowing as much about its customers as possible. Most prominently, Facebook and Google are both pretty openly based on surveilling every detail of their users' (and everyone else's) lives.
ML research has been co-developing with this. The big money (grants, hardware support, PhD funding, conferences, ...) has been overwhelmingly in domains that directly benefit these players. A lot of "cutting edge" research at the moment is of little benefit to anyone who is not a surveillance capitalist megacorp, simply because of the compute & datasets needed to power these methods.
"Causality" has been a big topic over the last years. And yes, it will benefit a lot of things. But where does the actual research start? With the question "why did the user click that search page ad, and what ad should we show them next?"
Sure, there is a little research into privacy-preserving ML, into "small data" ML, into federated learning (i.e. user-centric ML, not "distributed training" as in spreading computation over a big corp's cluster, see the sketch below), and you can always argue "yeah but in a few years this will be commodity."
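To make that parenthetical concrete, here is a minimal sketch of federated averaging on a toy least-squares problem. The function names (`local_sgd_step`, `federated_round`) are illustrative, not any particular library's API; the point is only that the raw user data never leaves the "user", and the server aggregates nothing but model parameters.

```python
import numpy as np

rng = np.random.default_rng(0)

def local_sgd_step(weights, X, y, lr=0.1):
    """One gradient step of least-squares regression on one user's private data."""
    grad = 2 * X.T @ (X @ weights - y) / len(y)
    return weights - lr * grad

def federated_round(global_weights, user_datasets, lr=0.1):
    """Each user updates the model locally; only the weights travel back."""
    local_models = [local_sgd_step(global_weights.copy(), X, y, lr)
                    for X, y in user_datasets]
    # FedAvg-style aggregation: the server only ever sees model parameters,
    # never the raw (X, y) pairs sitting on user devices.
    return np.mean(local_models, axis=0)

# Three "users", each holding a small private dataset drawn from y = 2x + noise.
users = []
for _ in range(3):
    X = rng.normal(size=(20, 1))
    y = 2.0 * X[:, 0] + rng.normal(scale=0.1, size=20)
    users.append((X, y))

w = np.zeros(1)
for _ in range(200):
    w = federated_round(w, users)
print("learned weight:", w)  # converges towards 2.0 without pooling the raw data
```

Contrast that with "distributed training" in the big-corp sense, where one central dataset is simply sharded across the company's own cluster.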
That sounds like trickle down ML research to me. I'm not convinced. But you'd kinda have to make that case, because otherwise "this is good to most extent" doesn't seem so believable.
One big part of what industry-guided research has given people is all the burnout, anxiety, loss of agency, UI dark patterns, polarization, and dumbing down of the internet. Alongside some huge upsides, yes, but I wouldn't call those "a few very remote situations".
You make several fair points. The overall direction does of course get some of its incentives from industry. But there are government sponsorships & private fellowships too. ELLIS, DARPA, NSF, and NIH invest several billion dollars each year in R1, CAREER, MRI, and SURF programs, which take care of fledgling topics until they see wider adoption. The Simons Foundation, for example, similarly hosts several hundred researchers working on CS theory.
Also, Google and AWS in particular have put a lot of money into ML/RL-based solutions: reducing electricity grid loads, AlphaFold protein structure prediction & drug discovery, neuroscience, precision agriculture, personalized education, & even planetary science/astronomy. You could argue these are glamorized CSR programs. But in net effect, they are advancing our understanding in several disciplines which do not directly feed their bottom lines.
(Full disclosure again: I am not affiliated with any FAGMA company, nor have I benefited from any of these grants.)