
Put a different way, would you say Fiverr enables people to be more creative?

Using AI to create an artistic work has more in common with commissioning art than creating it. Just instead of a person, you're paying the owners of a machine built on theft because it's cheaper and more compliant. It isn't really your creativity on display, and it certainly isn't that of the model or the hosting company.

The smallest part of any creative work is the prompt. The blood and the soul of it live in overcoming the constraints and imperfections. Needing to learn how to sing or play an instrument isn't an impediment to making music, it's a fundamental aspect of the entire exercise.


>would you say Fiverr enables people to be more creative?

That's not what GP said. They said that using a model removes creativity, which is a ridiculous leap from their premise and misleading at best.

>The smallest part of any creative work is the prompt.

Like most people who have never actually played with it, you seem to assume that prompting is all you can do, and you repeat the same tiresome, formulaic opinions. Honestly, that's not worth discussing for the 1000th time. Instead, I encourage you to actually study it in depth.


I had the same association, but interestingly this version appears to be a "remix" of TigerBeetle's style guide by an unrelated individual. At a glance, there is a lot of crossover but some changes as well.

I think the point is well made though. When you're building something like a transactions database, the margin for error is rather low.


I keep a home server for exactly that reason, but I still use cloud for some things to have an off-site copy as well. There are some things I don't want to risk losing to burst pipes, a fire, burglary, power surges, etc.


Nah.

Actual engineers have professional standards bodies and legal liability when they shirk their duties and the bridge falls down, the plane crashes, or your wiring catches fire.

Software "engineers" are none of those things but can at least emulate the approaches and strive for reproducibility and testability. Skilled craftsman; not engineers.

Prompt "engineers" is yet another few steps down the ladder, working out mostly by feel what magic words best tickle each model, and generally with no understanding of what's actually going on under the hood. Closer to a chef coming up with new meals for a restaurant than anything resembling engineering.

The battle over the use of the word "engineer" has long been lost, but applying it to the subjective creative exercise of writing prompts is just more job title inflation. Something doesn't need to be engineering to be a legitimate job.


  The battle on the use of language around engineer has long been lost
That's really the core of the issue: We're just having the age-old battle of prescriptivism vs descriptivism again. An "engineer", etymologically, is basically just "a person who comes up with stuff", one who is "ingenious". I'm tempted to say it's you prescriptivists who are making a "battle" out of this.

  subjective creative exercise of writing prompts
Implying that there are no testable results, no objective success or failure states? Come on man.


Engineers use their ingenuity. That’s it.

If physical engineers understood everything, standards would not have changed in decades and safety factors would be mostly unnecessary. That's clearly not the case.


>> Engineers use their ingenuity. That’s it.

If that were enough, all novel creation would be engineering, and that's clearly not true. Engineering attempts to discover and understand consistent outcomes as a myriad of variables are altered, and the boundaries where those variables exceed a model's predictive power, then adds a buffer for the unknown. Manipulating prompts (and much of software development) instead attempts to control the model so as to limit the number of variables and obtain some form of useful abstraction. Physical engineering can't do that.


> Because consumers have less disposable income with all the AI-enabled layoffs, the bigger bonanza will come if OpenAI creates educational pathways via AI to enable more people to make money with AI.

Who do you imagine will be throwing money at all these side-hustle "make money with AI" businesses you envision? No doubt there will be a few (there already are a few), but as the market gets increasingly flooded with AI-slop enterprises adding very little value, that well is going to dry up quickly.

It's no different from all those content creators making videos offering to teach you the secrets of easy money... instead of just making all that easy money themselves.


AI can help people whose skills are no longer useful acquire new ones. The premise is AI-driven re-qualification of talent towards fields that are in demand, faster than any educational system can manage at the moment: nursing (the academic steps before the practical skills are learned in person somewhere), electrical engineering, entrepreneurship for people who want to start a business with a skill they already have, etc.


Regarding the bookmarks bar: Settings / Appearance / Show Bookmarks Bar. If that setting is off, the bar only appears on new tabs. I found that out by accident.


Thank you! Indeed, it is shown on a new tab in that case!


The problem is that when using a model hosted by those labs (ex: OpenAI only allowed access to o3 through their own direct API, not even Azure), there still exists a significant risk of cheating.

There's a long history of that sort of behaviour. ISPs gaming bandwidth tests when they detect one is being run. Software recognizing being run in a VM or on a particular configuration. I don't think it's a stretch to assume some of the money at OpenAI and others has gone into spotting likely benchmark queries and throwing on a little more compute or tagging them for future training.

I would be outright shocked if most of these benchmarks are even attempting serious countermeasures.


Mainly, I don't think Proton is a serious competitor here. I'm not sure there is much market demand for mediocre white-labelled LLMs priced at a premium. I can see it carving out a bit of a niche with privacy-focused customers already in their ecosystem, but I don't see this taking off for them.

I echo the parent comment. I'm really only a Proton user for email and VPN. The quality drops off rather quickly after that. Calendar, Drive, Pass, and Wallet are all adequate at best; their primary selling point is not being Google rather than being particularly well built or supported. I would rather see them focus on being a truly competitive ecosystem.

I'm also not terribly impressed with the way they've positioned Lumo as a separate service from the existing Scribe AI features, and so conveniently not part of Ultimate plans.


Most people would also not believe there's much of a market for mediocre email priced at a premium. But it turns out if you market the privacy angle, there is.


I've not gone looking for videos specifically, but my experience there is that Kagi seems to focus on what you've explicitly searched for, where Google and others have increasingly leaned into interpreting your intent.

Google's approach works well enough when you're searching for a commodity and you don't care terribly much about the specific source. I get the impression Google, especially post-LLM, wants to divorce satisfying your question from the underlying sources.

I find Kagi is better at finding a specific thing, especially if you're willing to engage with it as a tool, ye olde search engine style. If my query doesn't find what I want, it's usually apparent why and I can reframe it.


I think there will be solutions, although I don't think getting there will be pretty.

Google's case (and Meta and spam calls and others) is at least in part an incentives problem. Google hasn't been about delivering excellent search to users for a very long time. They're an ad company and their search engine is a tool to better deliver ads. Once they had an effective monopoly, they just had to stay good enough not to lose it.

I've been using Kagi for a few years now and while SEO spam and AI garbage is still an issue, it is far less of one than with Google or Bing. My conclusion is these problems are at least somewhat addressable if doing so is what gets the business paid.

But I think a real long-term solution will have to involve a federated trust model. It won't be viable to index everything dumped on the web; there will need to be a component prioritizing trust in the author or publisher. If that follows the same pattern as email (ex: owned by Google and Microsoft), then we're really screwed.

