Hacker News | vorticalbox's comments

You can run Whisper locally on your machine.
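For anyone curious, a minimal sketch with the openai-whisper Python package (model size and file name are just examples):

    # pip install openai-whisper
    import whisper

    # "base" runs fine on a laptop CPU; larger models trade speed for accuracy
    model = whisper.load_model("base")

    # transcription happens entirely on your machine
    result = model.transcribe("meeting.mp3")
    print(result["text"])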


This reminds me of one I read a few years ago: “Fizz Buzz in Tensorflow”.

https://joelgrus.com/2016/05/23/fizz-buzz-in-tensorflow/


At least I could follow along with this one... thanks!


Good luck following the Enterprise Edition https://github.com/EnterpriseQualityCoding/FizzBuzzEnterpris...


I normally just use multiple terminal tabs, but I never knew about worktrees; that’s super useful.


Yeah, I was doing the same and it was working okay, but it was hard to work in a truly parallel fashion because the agents kept making conflicting changes.
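For anyone who hasn't used worktrees: each agent gets its own checkout of the same repo, so they can't trample each other's working tree. A minimal sketch (paths and branch names are made up):

    # spins up one git worktree per agent from an existing clone
    import subprocess

    for agent in ["agent-a", "agent-b"]:
        subprocess.run(
            ["git", "worktree", "add", f"../myrepo-{agent}", "-b", agent],
            check=True,
        )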


I tend not to do this often, but I will certainly check out the project.


At work we call this scope creep.


Stack that with how you write (Drive, emails, everything you post on the internet) and you get a writing fingerprint too.

I can imagine a bad actor getting hold of this and feeding it into an LLM: “given all this, how would I manipulate this person to do x, y, z?”



Anyone can put anything in their WHOIS.

I could create an account, buy a domain name with a gift card, and put your username in the WHOIS.


And supply real credentials & host bestiality and CP on it.


I keep a lot of notes in Obsidian: all my thoughts and feelings, both happy and sad, things I’ve done, etc. These are deeply personal and I don’t want them going to a cloud provider, even if they “say” they don’t train on my chats.

I forget a lot of things, so I feed these into ChromaDB and then use an LLM to chat with all my notes.
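Roughly, the retrieval side looks like this (a minimal sketch with the chromadb Python client; the path, collection name, and note text are made up, and the actual LLM call is left out):

    # pip install chromadb
    import chromadb

    client = chromadb.PersistentClient(path="./notes-db")
    collection = client.get_or_create_collection("obsidian-notes")

    # index each note; chromadb embeds the text with its default embedding model
    collection.add(
        ids=["2024-01-01-journal"],
        documents=["Example note text pulled from an Obsidian vault..."],
    )

    # fetch the most relevant notes for a question, then hand them to a local LLM
    hits = collection.query(query_texts=["What did I do last January?"], n_results=3)
    print(hits["documents"])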

I’ve started using abliterated models, which have had their refusal behaviour removed [0].

The other use case is work. I work with financial data and have created an MCP server that automates some of my job (sketch below). Running the model locally means I don’t have to worry about the information I feed it.

[0] https://github.com/Sumandora/remove-refusals-with-transforme...
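For the MCP side, a toy sketch of what a tool server can look like with the official Python SDK (the tool here is a made-up placeholder, not what I actually run at work):

    # pip install mcp
    from mcp.server.fastmcp import FastMCP

    mcp = FastMCP("finance-tools")

    @mcp.tool()
    def sum_transactions(amounts: list[float]) -> float:
        """Add up a list of transaction amounts."""
        return sum(amounts)

    if __name__ == "__main__":
        # serves over stdio so a local client / LLM can call the tool
        mcp.run()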


Tile I’d Like to Fill.


huihui-ai [1] on Hugging Face has abliterated models, including a gpt-oss 20B [2], and you can download a few from Ollama [3] too.

If you are interested, you can read about how the refusal is removed [4].

[1] https://huggingface.co/huihui-ai

[2] https://huggingface.co/collections/huihui-ai/gpt-oss-abliter...

[3] https://ollama.com/huihui_ai

[4] https://www.lesswrong.com/posts/jGuXSZgv6qfdhMCuJ/refusal-in...
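If you use the Python client, it's just a pull and a chat against the local Ollama server (the model tag below is illustrative; check the huihui_ai pages for the real names):

    # pip install ollama  -- assumes the ollama server is already running locally
    import ollama

    model = "huihui_ai/gpt-oss-abliterated:20b"  # illustrative tag

    ollama.pull(model)
    reply = ollama.chat(
        model=model,
        messages=[{"role": "user", "content": "Hello"}],
    )
    print(reply["message"]["content"])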


> some of the cutting edge local LLMs have been a little bit slow to be available recently

You can pull models directly from Hugging Face:

    ollama pull hf.co/google/gemma-3-27b-it


I know, I often do that, but it's still not enough. E.g. things like SmolLM3, which required some llama.cpp tweaks, wouldn't work via GGUF for the first week after it was released.

Just checked: https://github.com/ollama/ollama/issues/11340 is still an open issue.

