We need a Blender-like design tool specifically for product design. It should use HTML/CSS for rendering so it covers most web needs, which usually more than encompasses native app-layout emulation as well. Open source, technical, and not expected to be picked up in a day or fully understood top to bottom by everyone.
The reason Figma puts us into a design box is that it doesn't have all the CSS features that actually let you create incredible experiences.
Figma on desktop is an Electron app, I believe. Figma chose to build a custom WebGL rendering engine for their design canvas, so the core issue is that early technical decision (it probably allowed for better performance and multiplayer back then). Figma wants to control its own rendering and support non-product stuff like FigJam or the new Draw tools, but that will inherently hold it back from providing a really good design/dev handoff, and it will always hold designers back, because it doesn't use CSS web rendering.
What about variables that don't use pixel units? Values often appear hardcoded in dev-mode when they are actually meant to be a % unit or something else Figma doesn't support, because Figma doesn't actually use CSS for rendering.
When my devs just copy what's in Figma dev-mode they get so much stuff wrong.
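To make that concrete, here's a rough sketch (the selector and values are made up, not from any real project) of the kind of rule I actually intend versus the frozen pixel values a dev sees when they copy out of dev-mode:

```css
/* Intent: the sidebar scales with its container, clamped to sane bounds. */
.sidebar {
  width: clamp(240px, 25%, 400px);
  padding: 1.5rem;
}

/* What dev-mode effectively hands over after I mock it up on a 1440px frame:
   the relative units and the clamp are gone, only a snapshot remains. */
.sidebar {
  width: 360px;
  padding: 24px;
}
```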
I actually have the opposite problem with Figma. It is way too basic and simple, targeting every kind of design and the average designer skill level.
I work in complex SaaS product design. Basic things I can do in CSS I can't do in Figma. Things like a table? Yeah, tables are entirely faked and awful in Figma. Don't even get me started on anything more complicated than flex rows and columns.
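To illustrate (the table contents and class name are invented for the example), here is the kind of table behavior that is trivial in HTML/CSS but has to be faked frame-by-frame in Figma:

```html
<!-- A real table: columns size to their content, headers stay aligned,
     and adding a row doesn't mean re-laying anything out by hand. -->
<table class="orders">
  <thead>
    <tr><th>Order</th><th>Customer</th><th>Total</th></tr>
  </thead>
  <tbody>
    <tr><td>#1042</td><td>Acme Co.</td><td>$1,280.00</td></tr>
    <tr><td>#1043</td><td>Globex</td><td>$310.50</td></tr>
  </tbody>
</table>

<style>
  .orders { width: 100%; border-collapse: collapse; }
  .orders th, .orders td { padding: 0.5rem 1rem; text-align: left; }
  .orders td:last-child { text-align: right; font-variant-numeric: tabular-nums; }
  .orders tbody tr:nth-child(even) { background: #f6f6f6; }
</style>
```

None of that (content-driven column sizing, zebra striping, numeric alignment) survives a round trip through a rectangle-based canvas.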
Half the debate over designer/dev handoff in the industry right now is simply because of Figma's limitations and the refusal of designers and front-end devs alike to learn HTML and CSS.
We need a Blender-like tool for web and app product design. Highly capable and advanced: you aren't expected to know all of it, and it can do anything you want it to.
I need a tool that is more than just a fancy rectangle drawer.
I agree with many of your statements but draw the opposite conclusion.
HTML and CSS are expressive, have a vast selection of libraries and tools, and can actually result in shippable code. Designers and front-end devs should learn and use it.
But I don't see the point in creating a design tool unless it's meaningfully simpler than HTML/CSS. I reach for Figma when I need to quickly mock up a dozen iterations using our design system and fancy rectangles. It's fast enough that I can make mockups in realtime during discussions with developers and subject matter experts. But if I'm actually going to take the time to set constraints to make things flex properly or make a real table then why not use HTML and CSS directly?
Because I can do way more meaningful design exploration and iteration if I am not constantly running into a tool's limitations. I work at a fast-paced startup where my prototyping rapidly iterates into production, and the vast majority of developers I have ever worked with don't really know CSS. If I want to implement something actually complex in layout it would be SO MUCH FASTER if I could show the devs how to do it in CSS correctly in the design tool. AND it would let me better test and explore how the complex layout interacts with real data and real users. Figma prototypes are terrible.
Figma is a great tool for 90% of basic and boring design. A lot of product design is not just basic and boring, and a lot of stuff I need simply cannot be reproduced in Figma. So yes I do just write the code directly, but that doesn't let me explore those complicated layouts and iterate on them visually the same way I could if it was HTML/CSS in a Figma-like design canvas.
There are also Adobe Illustrator and Photoshop, both of which were used for UI design until Sketch/Figma appeared. Both harder to learn and more expensive.
The problem then was that the designs could be too cumbersome to implement (and also that you couldn't share files with developers as easily, but Sketch has the same problem). You can really do just about anything in PS/AI, whereas with Figma and Sketch it's almost like they limit you to what an average developer can implement with CSS.
That being said, we're in the age where you can do pretty much anything with CSS, and I totally agree with you that Figma's controls are very basic (especially for typography, there's just not enough options).
I read Starfish close to 20 years ago. He had a uniquely dark vision of the future compared to the zeitgeist in 2007 or so. It's been interesting living through reality since then. I fear the day will come when I reread his earlier works and they start sounding optimistic.
I found it a letdown, lacking conviction and thus unconvincing. Oh, the story had to end some kind of way, but by then both its author and I had mostly lost interest. I don't hold that against what came prior.
Here is an incomplete list of things that it's impossible to research in industry:
1. Astronomy.
2. Physics.
3. Geophysics concerning the parts of the Earth deeper than the crust.
4. Biology aside from medicine.
5. Chemistry aside from industrial chemistry.
6. Theoretical computer science.
7. Mathematics.
I'm not blaming you for not knowing this, but I am holding my head in my hands - how can people not know about astronomers? They've been a part of our culture and the prestige of civilization for thousands of years.
"Bell Laboratories has been the recipient of 11 Nobel Prizes in Physics, with notable laureates including John Bardeen, William Shockley, Walter Brattain, and Arthur Ashkin. Other notable achievements include the invention of the transistor, the discovery of the cosmic microwave background, and the development of optical tweezers."
Microsoft Research has a ton of people working on theoretical CS.
Biology - there is a ton of research in agriculture too - e.g. Monsanto and GMO seeds.
Bell Labs was at its peak from the 1960s to the 1970s. Since the 80s, corporate governance has completely changed due to Jack Welch's short-term shareholder-maximization ideology taking over the corporate world.
I don’t think there are current private organizations doing research similar to what Bell Labs did as the current corporate-governance systems wouldn’t allow for it.
Currently, industry research is more about profit maximization at the expense of greater human prosperity/economic growth: take the Monsanto patented seeds you mention, which increase profits by preventing farmers from regrowing crops more cheaply, savings that otherwise could've been passed on to consumers/wider society.
Things aren't impossible to research in industry. Managers with an R&D budget can fund whatever research they want, and it's easy to find examples of companies doing research in most of the fields you name. The fields where that's not the case are the ones you've defined in a circular fashion: e.g. "chemistry aside from industrial chemistry" is not an argument for "there are things you can't do in industry". And space research is primarily driven by the private sector now, but I guess you are defining astronomy to include only the study of things too far away to matter to anyone except astronomers.
Apple definitely had an internal group doing mathematics research, which I know firsthand. But yes, to your point, there are topics in science probably only done in academia etc.; but, to my point, several are seriously funded in industry.
I couldn't say more without knowing the details, but industrial labs don't usually research mathematics, so much as ways to apply mathematics to their industry. These are called "mathematics research departments," because they hire mathematicians.
Light pollution. Lots of people now live in places where the stars are hard to spot.
And now we also all have infinite distraction cuboids; many who could still see the stars if they looked won't look.
And if you don't know what's out there — or mistakenly categorise all of space as scifi — then why would you be curious about it? Why even ask who might research it?
What benefits do you expect to see from the kinds of astronomy that require this sort of funding? Sure, knowing things can be nice, but this ignores opportunity costs, e.g. would practical knowledge like fusion research be further along if talent weren't focused on impractical knowledge?
> Physics.
Not strictly true; see, for instance, quantum computing, lasers, semiconductors and so on. There are some types of physics that aren't viable in this sense, but why does that automatically translate into some need to support them? For instance, consider the decades spent on supersymmetry, which ultimately produced bupkis. In a world in which we weren't so focused on ideas so divorced from empirical data, what other types of knowledge or engineering would we have pursued?
> Geophysics concerning the parts of the Earth deeper than the crust.
What benefits do you expect to see?
> Biology aside from medicine.
Such as? What benefits do you expect to see?
> Chemistry aside from industrial chemistry.
Such as? What benefits do you expect to see?
> Theoretical computer science.
Untrue; Google and Facebook have advanced distributed computing considerably, for instance.
> Mathematics.
Unclear; there's a lot of math involved in predictions of all sorts, like weather forecasting and stock market prediction. If your argument here is that math will be more application-focused, this strikes me much like the physics objection, where it's unclear that we'd really be worse off.
There seems to be this automatic assumption among some people that pure research with no direction or constraints is an unmitigated good and that we can't do better. I used to think so too, but I just don't see it anymore.
Each of these has a long answer, so I'll pick this one:
>[Chemistry]? Such as? What benefits do you expect to see?
Everything around us is made up of "molecules," assemblages of parts called atoms. Since it's not possible to manipulate the molecules directly in sufficient numbers (one pound of plastic is made of 2000000000000000000000 individual molecules), we have to assemble molecules en masse by subjecting them to processes that cause each step to happen to all of them at once. How does that work?
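(As a rough back-of-the-envelope check on that figure, assuming a typical polymer chain mass on the order of 10^5 g/mol; the exact figure varies a lot by plastic:)

```latex
\frac{454\ \mathrm{g}}{10^{5}\ \mathrm{g\,mol^{-1}}} \times 6.022\times10^{23}\ \mathrm{mol^{-1}} \approx 2.7\times10^{21}\ \text{molecules}
```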
Let's say you have a molecule. Its structure will have exposed parts, and some bonds will be weaker than others. If you want to replace a part with another part (one step in the assembly of the final product), you might go about it by letting another molecule come along that has a greater affinity to bond with the location of the part you want to replace, and also has a tendency to be in turn itself replaced with the part you want to add. How can you know which molecule to use for this? You could run a computer simulation, apply a rule of thumb, or look it up in a book. In order to write the simulations, deduce the correct rules of thumb and write the books, scientists need to try a lot of combinations of molecules to see what parts swap with what other parts when they're mixed, and then think very hard about what's happening and why it is happening. This practice is known as, "chemistry."
Once a lot of the rules for a certain molecule are mapped out, engineers with an application in mind can go to the library and ask, "what sequence of steps will take me from available molecules to a molecule I can sell in a way that succeeds very often?" This is called, "industrial chemistry." If there was no library and no knowledge in it, industrial chemistry would be impossible. That is the relationship between science and engineering.
All well and good, but not an argument that the science here has to be publicly funded, and that a commercial research enterprise providing this library is not viable. The question was about what benefits come from the public funding, in particular, benefits that simply cannot be provided by a commercial enterprise or alternative approaches. You merely stated this was the case and I'm asking for the reasons.
Even supposing the costs associated with basic research can't be recouped commercially using existing technology, that does not suggest that alternatives are not possible. For instance, in a world in which this library was not created, perhaps the set of talented industrial chemists in our timeline would have studied physics or computer science instead and advanced quantum chemistry simulations that capture some, most or even all of the utility of this empirical approach.
I can look at what you describe and acknowledge that it's useful without automatically accepting that it's a) not commercializable, and b) impossible to work around, which is what you were implying.
This is exactly the problem: you're describing a for-profit mindset, and it's exactly why research in the private sector is floundering.
If you preface everything with "what value will this bring" then you've already lost. Research is about finding things out for the sake of it. You don't know if it brings value because you haven't researched it yet. That's what breaking new ground is all about.
> If you preface everything with "what value will this bring" then you've already lost. Research is about finding things out for the sake of it.
No, it's not. If research never brought any value then we would almost never do it, aside from some weird hobbyists, and particularly not with public funds. Everything has a cost-benefit tradeoff, even publicly funded research, and pretending this isn't so is naive at best.
Research whose costs can be recouped on short time horizons arguably should not have public funding because the economic incentives are sufficient. Exactly where to draw this line is not clear, not only because research returns are unclear but also because publicly funded research diverts talented people from endeavours that would have provided direct economic benefits. This second order effect is not widely appreciated. How can you truly evaluate the opportunity cost of this counterfactual world? Seems virtually impossible in fact, so arguments that research X returns Y with no consideration that you can't evaluate the counterfactual should be viewed with extreme skepticism.
And this doesn't even get into the problem of scope creep. Academics charged with pure research develop the exact mindset you illustrate, where no matter how outlandish the idea, well maybe it will be good for something someday, so why not fund it just in case? This ends up producing a whole lot of nothing, as we've seen in particle physics over the past 30+ years.
Basically none of modern optics would exist without astronomy (well at least astronomy is a convenient cover for military/intelligence interests funding better optics). Much of statistics and efficient cameras originate in astronomy/astrophysics (mostly because you have to count all the photons and you are never getting a second relevant measurement point).
There are huge parts of physics which are only publicly funded. Results are often spun out into companies, but there is no institution that can fund experiments that require timelines of multiple decades (even things like fusion power are almost completely government-funded).
And those are the only two areas where I actually have some competence. So yeah... I wouldn't buy it.
> well at least astronomy is a convenient cover for military/intelligence interests funding better optics
Right, defense can and has funded research for its own purposes, and sometimes those purposes can find wider commercial application (like the internet). That's all great, national defense is one of the government's primary purposes.
> There are huge parts of physics which are only publicly funded.
Yes, and? Is this an argument that they cannot be funded in other ways, or an argument that the parts of physics that cannot be funded in any other way ought to be publicly funded? There's just this blanket assumption that this is true but it simply doesn't follow.
For instance, the newest super collider project that some people are pushing for completely misses the opportunity cost of not funding other projects that could be far more impactful, like wakefield accelerators, which would reduce the size and cost of particle accelerators by orders of magnitude.
> For instance, the newest super collider project that some people are pushing for completely misses the opportunity cost of not funding other projects that could be far more impactful, like wakefield accelerators, which would reduce the size and cost of particle accelerators by orders of magnitude.
This is not true in many respects. There are many problems with plasma and laser wakefield acceleration. First, the beam quality (emittance and stability) is orders of magnitude below collider requirements. They have demonstrated GeV-scale acceleration over centimeters, but scaling to multi-TeV and maintaining luminosity is not even close to solved. No concept exists for a full detector-ready experimental program using wakefield accelerators. The FCC, on the other hand, is based on mature accelerator technologies, with well-understood cost scaling and detector integration that builds on decades of experience building accelerators. It is actually a much safer option than what you are suggesting.
But the important point is that you are making it a binary choice; we can still investigate and work on wakefield accelerators while working on more mature projects. Remember that it takes decades of work and thousands of scientists to make any of these things work. And it is not just a question of the accelerator itself but of what detector can use it and for what physics exactly. We could produce much more interesting physics colliding muons instead of protons, but that is a much more challenging task and will take more effort and cost more.
Also, I would say that scientific value isn't measured by compactness or cost alone. That is a VC mindset, not that of a scientist pushing the boundaries of knowledge.
> Right, defense can and has funded research for its own purposes, and sometimes those purposes can find wider commercial application (like the internet). That's all great, national defense is one of the government's primary purposes.
Well, since we are here: I know it is a cliche by now, and many people on HN don't like to be reminded of it, but guess what the most beneficial CERN output is?
> The FCC, on the other hand, is based on mature accelerator technologies, with well-understood cost scaling and detector integration that builds on decades of experience building accelerators. It is actually a much safer option than what you are suggesting.
The question is not whether wakefield accelerators are "ready" for something on the scale of a supercollider; the question is what is the expected return per dollar spent? From what I can see, there's very, very little we can expect from the energy levels achieved by the next radio-frequency supercollider. It's basically "explore the Higgs sector a little better", and that's it, and we're not expecting to find much there. $20B is a high price tag for producing basically nothing new.
I'm saying that if you took $18B of the $20B for the supercollider they've been tossing about and invested it in wakefield research, it's very plausible that we could solve all of the problems you describe, and with the $2B left over we could build a wakefield accelerator of comparable energy, and that we'd be better off in that world.
> But the important point is that you are making it a binary choice; we can still investigate and work on wakefield accelerators while working on more mature projects.
Investment dollars are finite, therefore it often is a binary choice. You could fund tens of thousands of smaller experiments in domains where we have actual uncertainty for the cost of this one piece of equipment that's good for only a few experiments.
> Also, I would say that scientific value isn't measured by compactness or cost alone. That is a VC mindset, not that of a scientist pushing the boundaries of knowledge.
If you want to expand knowledge faster, then you should consider adopting the VC mindset: reduce costs per novel datum gathered. You can run more experiments in more diverse fields and uncover more surprises. Sounds like something a scientist should value, frankly.
> I know it is a cliche by now, and many people on HN don't like to be reminded of it, but guess what the most beneficial CERN output is?
Direct military projects arguably haven't been a focus of CERN for 30+ years. They might benefit indirectly, but ask yourself whether the military would have still achieved the outcomes they needed by funding that research directly rather than indirectly in a way that accidentally produced things they needed. The direct funding is what I'm suggesting is well justified, the indirect maybe not so much.
I'm sorry you've somehow become so jaded, but why do you insist on parading your ignorance as informed skepticism? You could look up information on these topics yourself on your own time. Just because you can't fathom how research in one area can benefit another doesn't mean it doesn't happen.
To just point out a few items:
> Astronomy
* Medical imaging has been revolutionized by advances made by astronomers, both in hardware and software. I'll give CT scans as just one of the examples of direct transfer of knowledge/tools from astronomy to medicine.
* Security scanning (eg. scanners in airports) is another example of direct transfer. The technology comes directly from astronomy and, in fact, an astronomer from the Space Telescope Science Institute, a govt. funded basic research institution, holds one of the main patents for this technology.
> Mathematics
* Riemann's work on non-Euclidean geometry was a purely intellectual exercise for many years, until Einstein was able to make practical use of it to describe spacetime curvature. The resulting theory of general relativity underpins many things, but a direct example I'll give is GPS. It simply wouldn't work without the theoretical framework built on math that originally had no practical application.
> I'm sorry you've somehow become so jaded, but why do you insist on parading your ignorance as informed skepticism?
Why are you parading your ignorance of my position as an informed rebuttal?
> Medical imaging has been revolutionized by advances made by astronomers, both in hardware and software. I'll give CT scans as just one of the examples of direct transfer of knowledge/tools from astronomy to medicine.
Not an argument that these advances would not have been made otherwise, such as by research directly in medical imaging, nor an argument that this was the cheapest way we could have made these advances.
> Security scanning (eg. scanners in airports) is another example of direct transfer. The technology comes directly from astronomy and, in fact, an astronomer from the Space Telescope Science Institute, a govt. funded basic research institution, holds one of the main patents for this technology.
Again, not an argument that this wouldn't have happened without publicly funded astronomy, nor an argument that this was the cheapest way we could have made these advances.
Ditto for mathematics, which for centuries has progressed without direct public funding.
This is exactly the problem in this sphere: all of the conversations are rife with fallacious arguments. "X happened this way" is not an argument that X could not have happened any other way, nor that in a world where we didn't discover X because we didn't fund it publicly, we wouldn't have had just as impactful a discovery Y. There are opportunity costs to public funding and to tying up intelligent researchers in goals that aren't realizable in the near future, and this pervasive assumption that we must be in one of the best possible worlds, which can only be better if we funded more public science, is naive.
Would you please stop posting flamewar comments to HN? You've done a great deal of it in this thread. It's not what this site is for, and destroys what it is for.
> Why are you parading your ignorance of my position as an informed rebuttal?
Nicely put, but I say ignorance because your post was a flurry of questions asking someone else to tell you information about multiple subjects rather than adding substantive information or viewpoint to the conversation.
> Not an argument that these advances would not have been made otherwise, such as by research directly in medical imaging, nor an argument that this was the cheapest way we could have made these advances.
You've asked a question that is impossible to answer, but the reality is that the benefit happened, and it's not the only one. It seems that the system has some merit, although yes, there's no way to prove that there wasn't a "better" straight-line-to-the-answer way to do it. How can you know the straight-line path ahead of time? You can't map the territory without going out there and looking. Basic research in multiple areas, allowing for cross-pollination has done a really good job at that over the years.
> Ditto for mathematics, which for centuries has progressed without direct public funding.
This one really doesn't make sense. Who paid Riemann? Who paid Newton? Universities are not a new thing, and funding them with state money has been there from the start. Even figures perhaps not as strongly associated with universities like John Herschel or Tycho Brahe got their money from the state one way or the other (aristocrats, or given money to advance the knowledge and/or image of the state).
Wild to come across people on HN who don't seem to understand that "knowing things" has a price tag, that there is no such thing as "no price is too high", that science that can economically justify itself shouldn't be publicly funded and that science that must be publicly funded should have to justify itself to taxpayers.
wild to come across people on HN who don't understand that putting economic scrutiny on scientific studies and questioning the public funding of science will both slow the development of yet-unknown useful knowledge and slow economic growth
the perniciousness of the "we need to economically justify the kind of scientific research we are doing" mindset is that plenty of the research that has turned out to be economically beneficial was NOT obviously so when it was being conducted
by restricting research to programs that may have economic benefit, you restrict yourself to funding things that we pretty much already know, which is a bit more like R&D and less research
to give two examples
1) Gila monster venom - research in the 1990s on Gila monster venom formed the building blocks of what would become GLP-1 medications, which are likely to be some of the best-performing medications of all time, as well as having the huge societal benefit of reducing the obesity load on the health system. When this research was being conducted its implications were not known, and it could very well have been on the chopping block if we were trying to "justify it to taxpayers".
2) CERN - the study of high-energy particle physics at CERN is a classic case of "how useful is this knowledge?" It's pretty easy to look at this and wonder how it economically justifies itself. What difference does it make to the taxpayer whether we discover the Higgs boson or not? Well, the entire digital economy is downstream of CERN. The internet was partially developed to facilitate the transfer of large quantities of data from colliders like CERN's to be analyzed elsewhere in the world. For fuck's sake, the World Wide Web was invented at CERN by Tim Berners-Lee. If we didn't invest that money into CERN, or other research institutions, who knows what the web would look like today, and how large the digital economy would be.
Yes, these are just two examples of how research without clear ROI has had economic benefit and justified itself to taxpayers. The crux of the issue is that we don't know how valuable what we don't know is, and we don't know which branch of science will have the next society-altering discovery, so a random walk through scientific research for the sake of knowing things is valuable, because there are undoubtedly things we don't know that would benefit us greatly.
So my argument is, in a way, like yours: the science does have to justify itself to taxpayers. But the evidence is that the process of science, and knowledge-seeking at a high level, have justified the funding of science; going study by study to figure out what will have ROI and what won't is a great way to ensure that we discover less and less, leaving more and more stones unturned.
Bell Labs was funded from the profits of a legal monopoly, and the money spent on it was used to justify the continuance of that monopoly. You do see some private basic research these days, as with Google in robotics or Microsoft in quantum computers, but it's fairly rare and small compared to government-funded research.
And pharma companies do a lot of research but it's almost entirely applied, taking the basic processes discovered by NIH funded research and figuring out how to turn them into feasible drugs. You need both halves there to sustain our current progress.
Bell Labs also took in, and benefited from, the larger tapestry of experts and scientists attracted by the US education and research machine.
> it's almost entirely applied, [...] you need both halves there to sustain our current progress
why do we need to subsidize "half" of these pharma companies' research? if they can't get it for free then they'll have to find a way to do it themselves at a profit
Industry research is generally R&D (applied science, engineering research), not basic research (basic science). Not to disparage either; both are needed, but they are quite different and a person may be suited to one but not the other. It can be hard for someone looking for work to determine where an organization's focus is, as an outsider.
Bell Labs was a monopoly granted by the US government where they were literally compelled by Congress to invest in research.
I also don't buy the notion that industry is better for science; maybe if you want to research ways to damage humans and the environment, sure, but most people don't want those things.
Please provide evidence of fraud in government funding. I have yet to see any real evidence of fraud from the DOGE reports, just contracts and projects that don't align with the current administration's world view and priorities.
Yes, scientists are human and there are certainly instances of scientific misconduct that take time to root out. In the grand scheme of things, funding for science generates many times its value in economic output.
Fraud's illegal. If they found any, and certainly if they found lots, we should see indictments. We should see criminal investigations. And plenty of them.
We don't see that, because they're full of shit.
Similar story on systemic, widespread partisan voter fraud. One of their AGs with full authority and access to investigate such things, launches such an investigation, and all you hear about it after the initial fanfare is crickets, or else some crowing about a half-dozen indictments that turn out to mostly be voters making mistakes and are of mixed partisan benefit, not at all the kind of thing they said was happening. This is what we see every time, assuming they even bother to try to investigate when they have the chance (if what they claim is true, they absofuckinglutely should investigate!)
Why? Because they're full of shit. When it's "put up, or shut up" time, they shut up, because they've got fucking nothing.
It is what we used to call FUD. Like parent says, it never gets near a prosecutor, even a prosecutor "on their side" in a state with a judiciary "on their side" because we still have the semblance of rule of law in this country. DOGE makes a claim, and it sticks permanently into Trump-supporter minds, never mind that they quietly walk it back a few weeks later or it never makes it to the DOJ.
What people expect is a connection between the claims of fraud and the massive cuts, funding pauses (which effectively destroy some research, wasting any money already spent), and disruptive reorganizations. The claim is that DOGE is uncovering so very much fraud all over the place, that you practically can't enter a government office without tripping over fraud. Where's all this fraud, justifying such extreme measures? They say they found it! Where are the indictments? Where are the investigations? There should be lots of them.
The root post of this thread was concerned with the effects of, "The Trump admins cuts", as is TFA.
I think I see what you mean, however: that your original post in this thread could be read as setting that aside and treating of just the sub-topic of whether there's any fraud in science (of course there's some).
However, the post I responded to (not your post, to be clear) mentioned DOGE by name: "I have yet to see any real evidence of fraud from the DOGE reports" (to be fair to that poster, I think they read your "why do think so much fraud has been uncovered lately?" as being about that, given the context of the thread, and as the most prominent claims of fraud in general and, specifically, in government funding of science lately have been from DOGE and friends, though evidently that's not what you meant). Between that, TFA, and the root of the thread, I hope you can see why I bristled a bit at being accused of being "the only person talking about DOGE in this thread", given that I was responding to a post that mentioned them by name, and that it's, more-or-less, also the topic of TFA and the root post of this thread.
I do empathize with your being exhausted with the topic of Trump/DOGE, but would suggest that this is maybe not a good thread to expect to avoid it in, given the topic of the linked article.
The fraud perpetrated by an individual which misled the field as whole is not the same as 'fraud in government funding'.
Again, scientists are human and will do things for personal gain. There are mechanisms being implemented in science funding that are meant to try and curb this behavior. NIH intramural research now requires the use of electronic lab notebooks, which greatly reduce the ability to doctor data post-experiment. There is also a push for scientific preregistration, which helps to prevent p-hacking and hypothesis modification.
But saying that all funding towards a scientific dead-end due to misdirection by individual researchers is proof of fraud in government funding doesn't compute.
> Again, scientists are human and will do things for personal gain.
Which was my original point.
> But saying that all funding towards a scientific dead-end due to misdirection by individual researchers is proof of fraud in government funding doesn't compute.
> Why do think so much fraud has been uncovered lately?
> Please provide evidence of fraud in government funding.
> Are you serious? Just go look at the recent Alzheimer's research scandal. It's the tip of the iceberg.
> But saying that all funding towards a scientific dead-end due to misdirection by individual researchers is proof of fraud in government funding doesn't compute.
> Who said this?
That was how I interpreted you bringing up the Alzheimer's scandal. When you say "why do you think so much fraud has been uncovered recently?" and mention the Alzheimer's scandal, I feel you portray it as an overwhelming or systematic issue with government funded research.
The process of getting funding for science from a government agency is tedious and painful. There are many eyes that review each grant application. It takes months/years, there are usually reviews to make sure that research is on track and aligns with the original proposal... So when you say that the Alzheimer's scandal is the tip of the iceberg, it implies systematic and widespread fraud in science funding rather than individual instances of fraud and misconduct.
You're not wrong; however, who would have funded lasers? What about CRISPR funding? Most scientists are focused on status; however, many revolutionary discoveries come out of basic research paid for by the government and not companies.
The original laser was built by the research arm of Hughes Aircraft, so I'm not sure I see the issue there. Even if Hughes was partly funded by defense, I don't really classify defense under the same category as other types of pure research funding because there's typically an actual purpose, which fits the "short-term profit" rather than "long-term benefit of humanity".
Re: CRISPR, wasn't that discovered at least 3 different times on completely independent lines of research? That suggests to me that it's sufficiently "obvious" it would have eventually cropped up in many other areas.
> many revolutionary discoveries come out of basic research paid for by the government and not companies.
Yes. I took no position about this in this specific thread, but I will just say that "X happened this way" is not an argument that X could not have happened any other way, nor that in a world where we didn't discover X because we didn't fund it publicly, we wouldn't have had just as impactful a discovery Y. There are opportunity costs to public funding and to tying up intelligent researchers in goals that aren't realizable in the near future.
PCR (Polymerase Chain Reaction) would not have been possible without the completely unrelated discovery of a heat-resistant bacterium by a federally funded scientist years earlier. Is it possible that eventually a privately funded effort would have figured out that some bacteria can survive at temperatures beyond what was generally considered possible and connected that to replicating DNA? Yeah, maybe, but it seems far less likely.
Your black and white way of looking at this is naive at face value. We need both federal and private funded research. Is there fraud in science? Yes. So your answer is throw it all out instead of rooting out the fraud? Somehow expect fraud not to exist in privately funded research? Your comments here are so myopically driven by a bias against something rather than what is the best outcome for scientific research.
> Is there fraud in science? Yes. So your answer is throw it all out instead of rooting out the fraud?
I didn't suggest any such thing. Deconstructing fallacious arguments around publicly funded science was the only objective here. At no point did I stake a position on whether science should or should not be publicly funded as a whole, or what mix of public or private funding would be optimal. I would love to live in a Roddenberry utopia with no resource scarcity so we can all research to our heart's content, but we don't live in that world, and the evidence is mounting that we're not in a good place or on a good trajectory.
In a very real sense we are stuck in a local minimum: innovation as measured by patents has been decreasing every year for decades, researchers spend more time writing grants than doing research, they are heavily influenced by publication bias and celebrity status, the replication crisis has rightly undermined trust in a lot of scientific disciplines, and cases of decades-old fraud indicate that academia is not as self-regulating/self-correcting as we might have hoped; the skewed incentives in academia are largely to blame. You can take the changes coming as some kind of authoritarian oppression, or you can take them as an opportunity to remake research and academia into something better. As Wheeler said, "In the middle of difficulty lies opportunity."
I literally used to get laughed out of the clinic, told I was a healthy young male and just needed to exercise more. After a decade of this, I was finally diagnosed with gout, something doctors had just been lying about testing for. No one could believe someone could have gout in their 20s (It's been developing since my late teens and I've generally had arthritis my entire life, since I was a child).
It took a physician's assistant, who happened to see me one day when both of my doctors were on their third extended vacation of the quarter, to hear my plight, take my suggestion of gout seriously, and do the leg work, also revealing to me that "full test panels" don't include uric acid by default and that my doctors had been lying to me about their thoroughness.
The assistant was also massively more knowledgeable about the disease, its history, the history of treatment, etc., and disease in general, than either of the two doctors running the clinic. Really opened my eyes.
This is why, although I know there will be problems with it, we should get AI and blood tests more accessible for individuals. Accessing the healthcare system for "I know I'm not 100% but ... I don't have anything specific wrong like a broken bone" is basically a crapshoot - and a totally stupid one.
I have seen young men get diagnosed with gout, but they were Islanders (Samoans and Maori in my case), who I believe are at higher risk, so doctors are more aware of it.
Funny enough I also got diagnosed with gout once in my 20s. I have always had somewhat bad toes/bunions (probably partially genetic, and partially wearing only tight soccer shoes as a kid) and I went to a wedding wearing some new leather shoes that I hadn't broken in yet. The next day I woke up with a fever and horrific pain in the sides of my toes. Went to doctor and they did some tests and were also seemingly surprised at the results indicating gout. They asked me to come back in a week to double check, and by then my symptoms were gone and the tests no longer indicated gout.
> They asked me to come back in a week to double check, and by then my symptoms were gone and the tests no longer indicated gout.
Ha. Do you still have symptoms? If not, yea just a bad initial diagnosis. If you do still have symptoms sometimes though, it should be noted that gout is hard to test for when you're actively experiencing aggravated symptoms, as the uric acid crystals are lodged into your tissue and not freely available in the blood stream / urine. This exacerbated everything quite a lot, as when I was much younger I definitely got uric acid tests done when my symptoms were at their worst.
I wonder if the medical textbooks only mention gout as a historical curiosity and not as a modern day disease. I have an older relative with gout, have met someone in their 30s with gout, and yesterday heard a story about an acquaintance with gout, so it's not that rare anymore.
I think it's just typically seen in older men. In fact, only something like 5% of gout sufferers are women. But a 2023 study says [0]:
> The global gout prevalent cases in individuals aged 15–39 years was 5.21 million in 2019, with the annual incidence substantially increasing from 38.71 to 45.94 per 100 000 population during 1990–2019
So while marginal, it is either getting more prevalent for younger men over the last 30 years, or we are getting better at catching it.
What's interesting is that all of the older men I've met with gout describe moderately uncomfortable pains; I was surprised to learn that my case is exceptionally intense, debilitating enough to be a physical handicap at times (along with sciatica, fused discs, flat feet, some other little things, and possible fibromyalgia), and it has plagued my life since I was in my teens.
It's been a horrendous disease that has greatly impacted my ability to be as active as I'd like, and sometimes during a flare-up it's extremely difficult just to walk to my bathroom. Flare ups sometimes happen constantly and sometimes I get a month or two of reduced symptoms.
Another thing is that I don't eat meat, and I rarely drink, which are the two biggest aggravators of symptoms. When the doc told me I needed to cut those things out I laughed, and they said they were very surprised that my symptoms were so bad given that I already avoid the most offensive foods.
I'm also currently trying to pin down another autoimmune disease. From what I know (I don't speak to him), my father has been in and out of the hospital his whole life and it took decades to pin it down as lupus. I am wondering if he lacked the butterfly rash, because I don't have one but otherwise have basically every symptom of lupus; it could also be fibromyalgia or even MS. Combined with the gout, I feel 40 years older than I am; almost every little tissue and bone and muscle hurts from head to toe (literally toe: gout keeps one of my toes at a constant level of pain).
Doctors are trained to be arrogant, dismissive of unknown unknowns, and with a terrible understanding of statistics.
Add to that:
- They have a lot of patients and not enough sleep.
- They need to pay back a huge student loan.
- They hold terrible responsibilities and risk being sued.
- They don't have much time for themselves, let alone update their knowledge.
- Most patients are overreacting idiots, so it's a winning strategy to ignore what they tell you most of the time.
- They are not trained nor selected for empathy or open-mindedness.
And you get so many medical errors.
Basically, you have to double-check everything they do, and endure their cynical rebuttal when you make suggestions, ask questions or try things they didn't request.
I had to face many such errors myself, two almost lethal.
When you can, shop for one that is both good and is open to discussion. But even then, there is a limit. At some point, your doctor WILL fail you, so you have to take responsibility, usually when you're weak and at a low point in your life.
And if you are wrong, people will tell you you should have listened to your doctor, but if the doctor is wrong, well, shit happens.
One of my practitioners is a friend of 15 years; I literally lived with him, and he is considered top in his specialty. I'm surrounded by people working at the hospital.
He saved my life once.
Even that is not enough. I still have to double check stuff every time.
> At some point, your doctor WILL fail you, so you have to take responsibility, usually when you're weak and at a low point in your life.
The two times I've been hospitalized in my adult life, I've been incredibly thankful for my parents stepping in to act as my patient advocates, including pushing back on doctors when necessary. (The first hospitalization was Guillain-Barré and the other a rare hematological condition, so I wasn't in a great place in either scenario to advocate for myself.)
A pediatrician in my family has said that patients get significantly better outcomes when they have a patient advocate, because even if the advocate is directly related to you (i.e. a parent or sibling), they are going to be far better at being objective about the situation than you, the person being affected by it.
It's purely anecdotal but does have some provenance going back at least to the 19th century, with one of the early liver specialists.
He was reportedly at a cocktail party one evening when a messenger burst in and informed the esteemed doctor that one of his patients appeared to be dying from a heart attack.
"My good man," he replied, "that can't possibly be true. When I treat a patient for liver disease he dies of liver disease."
I suspect AI chosen by an organization trying to maximize profits could be really bad.
This is an industry that treats people's lives as vastly less important than minor scheduling issues; someone working 12+ hours is seen as perfectly normal.
One can debate its merit right now, the upside / downside equation. In 10-20 years? Game over. Doctors will largely be the physical space touch point. AI will in effect use meatbags to interact with the patients.
I think I'm feeling the effects of Gell-Mann amnesia here. The same is said about software engineers, but I'm not as confident as you that there won't be a need for the profession in 10-20 years.
It makes more sense when you realize most of the time it’s not <disease>. Doctors see thousands of patients per year and 99% of them have common conditions with straightforward diagnosis.
Add on top vague symptoms that can’t actually be measured and are subjective and you end up with challenging diagnoses.
I do agree that patients should educate themselves and advocate for themselves. Doctors aren’t perfect and they don’t know everything.
But it helps to have some perspective of what doctors deal with on a day to day basis.
"Basically, you have to double-check everything they do, and endure their cynical rebuttal when you make suggestions, ask questions or try things they didn't request."
I had frequent headaches and the student health service referred me to a well-known and very respected hospital for tests as an outpatient. The doctor to whom I was referred was a well-known neurologist with papers to his name—probably the most eminent neurologist in the country at the time (even now, some decades after his death, his name appears on Wiki as someone of eminence).
He then sent me for a series of tests at the hospital and they extended over a number of days, although not consecutive (which was inconvenient). Those tests were rather exhaustive and included, amongst others, neurological tests, brain x-rays, electroencephalograms and testing of my eyes, including injecting fluorescein into my veins to improve the contrast of the photos they took of my eyes/retina—afterwards I was pissing out that brilliant fluorescent yellow dye for the better part of a day.
Keep in mind that those tests involved other doctors and clinicians who would have examined the neurologist's report, so decisions weren't taken in isolation.
After all that and multiple visits to the hospital he said that they could find nothing wrong with me and suggested that I be admitted for at least three days for further tests! I declined as I was about to have uni exams and never did return to be admitted.
Several months later I visited a local GP practitioner because I'd had a bad dose of the flu and after he'd dealt with that I mentioned my ordeal at the hospital.
He was palpably furious and mumbled quietly under his breath which was just audible enough for me to hear "fucking idiots". Within a split second he went on to say "presumably during all this testing no one actually suggested that you might have migraine?" to which I replied "no". That made him even more annoyed.
He then prescribed a common Parke-Davis formulation called Ergodryl, which, back then, was a common go-to drug for migraine; it's a formulation of ergotamine tartrate, caffeine and diphenhydramine (a well-known antihistamine).
Problem solved: that drug completely killed my headaches. I've never forgotten that incident and although I've experienced similar inept performances I've never experienced one on that scale again. Ever since, I've never fully trusted a medical diagnosis unless confirmed by second opinions and backed up with tests. It pays not only to be cautious but also to do one's own independent investigations.
From my experience, not all doctors are so mediocre that I wished I'd seen another; some I've visited are quite exceptional and have an innate ability to cut to the core of a problem immediately, or at least to start investigations on the right footing. Unfortunately, in my experience, they seem few and far between.
I was once introduced to a state director of health (the State's top medical officer) through a common interest outside of medicine and I got to know him relatively well. Some time later I mentioned that incident and he said to me without hesitation that he would not trust 90% of his profession to make a competent diagnosis, and he went on to say that if I were ever stricken by some dangerous life-threatening disease I was to give him a call and he'd provide me with a short list of the competent ones he'd trust—ones that he would go to if he became sick. Fortunately, to date I've never had need to take up his offer.
Frankly, for the lay person this has to be a significant worry. How on earth does one know who is competent and who is not, especially if it's at short notice?
American doctors are also reluctant to give rabies shots. Yeah, they are expensive, the risk is low and there are ways to rule it out, but I'd rather not die. Other countries can get them anywhere for cheap. Here... thousands of dollars in the ER. One reason could be that it's just not administered enough. The other is the for-profit American medical system, because no one wants to die.
"Other countries can get them anywhere for cheap."
Those of us outside the US understand the US health care system is more profit-orientated than many other countries', but we cannot understand the price differentials; they're often huge in comparison with many other places. Surely figures that high are nothing other than price-gouging. (Even if demand is low and the stuff has to be imported, the additional costs can't be that high. Surely not?)
So why doesn't consumer and or monopoly law kick in to stop it (as it does in many other places)?
Because the US has absolute garbage enforcement of consumer safety and anti-monopoly laws, and those laws aren't exactly strong or clear to begin with.
Health-care industry lobbyists spend huge sums to convince lawmakers that they're not price-gouging and that _any_ kind of price control is somehow illegal and/or will destroy the economy. This allows them to keep prices high, colluding with health insurers to make deals that incentivize buying insurance (direct-to-patient prices for many medical things are much higher than the prices the insurance companies pay). Prices high enough to make it worth spending so much on lobbying.
It's completely backwards and very anti-consumer/anti-patient, but money has such an outsized influence on our politics, it's ridiculously difficult to get changes made that actually benefit the average citizen.
Yeah, I sort of thought that. But the US is such a competitive place one would have thought competition would have kept prices down. What seems to be happening is the nature of health care allows it to be a 'closed shop' so they do what they like.
FYI, I'm in Australia where we have Medicare which is a universal health care scheme† supported by government taxes. It's not perfect and could always do with more money but people here love it. Tampering with it in a negative way would be electoral suicide for a government at election time.
That said, a visit to a GP is only partly paid for by the scheme (about 2/3)—that is unless the Dr 'bulk-bills' all accounts to the scheme and many don't. Bulk-billing allows a patient to walk out of a surgery without paying anything.
That long preamble lets me lead on to something you may not be aware of although many of your countrymen would be. Some years ago the government made changes to the Pharmaceutical Benefits Scheme—which is the drug store equivalent of Medicare where drugs are heavily subsidized—because many Americans who were visiting here used to stock up on prescription medicines and take them home because the price differential between here and the US is so great. Incidentally, COVID vaccinations are also free as they're covered by the scheme.
Previously, anyone could take a script to any pharmacy and have it filled without ID; now one has to have one's Medicare card with one or have its number on file, and the Dr's script has to match that ID.
The pondering issue for us is why the US population doesn't rise up en masse for a better deal.
_
† Some years ago I heard an American (I think it was Michael Moore) say that Roosevelt had intended to set up a similar universal health scheme but he died before he could implement it. Is there any truth in that?
Certain things are way under-diagnosed, especially anything relating to a chronic condition that does not have an easy biomarker. Doctors get cynical about their patients.
Chronic Lyme Disease is a popular choice for hypochondriacs (or maybe they're actually right, who knows?) so it gets raised eyebrows when people think they have it.
The majority of doctors I've interacted with, with a low-single-digit number of notable exceptions, seemed to estimate their own intelligence at about 1 or 2 stddev higher than it actually was. Combine this with (I imagine) a large number of legitimately stupid and/or hypochondriac patients, and you have a recipe for really shitty diagnostics.
Cursor has consistently felt faster and easier to use, with better inline auto-complete and faster large edits in chat, than VSCode ever did. The way suggestions and chat are shown is just a bit easier to read and more elegantly presented.
It's not so simple. Background panning on modern TVs can look very juttery/flickery with motion-smoothing completely off. OLEDs can turn on and off very quickly, and 24 frames a second really isn't that many, so you end up seeing each frame rather distinctly instead of the more smoothed out and less instant frame updates you got on older TVs.
I've found the lowest motion-smoothing setting makes watching stuff like this far more enjoyable while avoiding the awful soap-opera effect you get from higher settings.
It felt awful to admit to myself since I hated on motion-smoothing for so long, but I simply cannot not see the 24 frames in pretty much all scenes where the camera is panning and background has to move a lot.