There seem to be quite a few negative comments on this post.
Regardless of the title and the full story, I mostly feel empathy for the writer of the article.
Half a year ago a colleague of mine told me, in tears, that her spouse had suddenly left her after they'd lived together for 7 years. I felt her pain and cried with her, and I tried to comfort her with kind words. I had a nightmare that night. She's truly a very good person and I still feel very sad for her. She told me later that his personality seemed to have changed suddenly; he was not the man she used to know.
The bottom line of what I want to say: please have empathy for people going through the breakup of a relationship, even if such things happen every day. Be thankful if you are in a good relationship.
A bit baffled that he bought so much rather cheap stuff, almost like someone who's bored and tries to kill that feeling with shopping. I'd sooner expect someone rich to buy mostly expensive, exclusive, or high-quality items.
I think overspending is usually indeed an attempt to numb some other negative feeling without solving the underlying problem that caused it.
I don't think the same mechanism is at play with lust for another person, though, as seems to be implied. Lust is in our DNA for good reasons. Overconsumption, or overindulging in it, is indeed what we should avoid, but that doesn't make lust itself a vice.
I would promote the feeling of love - in the broad meaning of the word. Love is what we truly desire, and it is also what we can provide to our fellow people. We can do that without overspending; simply showing care can go a long way.
The article basically discusses two new problems with using agentic AI:
- When one of the agents does something wrong, a human operator needs to be able to intervene quickly and provide the agent with expert instructions. However, since experts no longer execute the tasks themselves, they quickly forget parts of their expertise. This means the experts need constant training, and hence will have little time left to oversee the agents' work.
- Experts must become managers of agentic systems, a role they are not familiar with, so they don't feel at home in their job. This problem is harder for the experts' own managers to recognize, since they rarely experience it first hand.
Indeed, the irony is that AI provides efficiency gains which, as they become more widely adopted, become more problematic because they undermine the necessary human in the loop.
I think this all means that automation is not taking away everyone's job: it makes things more complicated, and hence humans can still compete.
Your first problem doesn’t feel new at all. It reminded me of a situation several years ago. What was previously an Excel report got automated in PowerBI. Great, right? Time saved. Etc.
But the report was very wrong for months. Maybe longer. And since it was automated, the instinct to check and validate was gone. And tracking down the problem required extra work that hadn’t been part of the Excel flow.
I use this example in all of my automation conversations to remind people to be thoughtful about where and when they automate.
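A cheap insurance policy is to keep one small, independent cross-check alongside the automated pipeline, so the validation step people used to do by eye in Excel doesn't silently disappear. A minimal sketch in Python (the column names, data shapes, and tolerance are made up for illustration, not taken from the actual report):

```python
import pandas as pd

def sanity_check(report: pd.DataFrame, source: pd.DataFrame,
                 key: str = "region", value: str = "revenue",
                 tolerance: float = 0.01) -> list[str]:
    """Re-aggregate the raw source data independently and compare it
    against the automated report's totals; return any discrepancies."""
    expected = source.groupby(key)[value].sum()
    actual = report.set_index(key)[value]
    problems = []
    for k, exp in expected.items():
        act = actual.get(k)
        if act is None:
            problems.append(f"{k}: missing from report")
        elif abs(act - exp) > tolerance * max(abs(exp), 1.0):
            problems.append(f"{k}: report has {act}, source says {exp}")
    return problems

# If this list is non-empty, a human looks before the report ships.
# The point is not this particular check; it's that *some* check
# survives the move from manual to automated.
```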
Thoughtfulness is sometimes increased by touch time. I've seen various examples of this over time: teachers who must collate and calculate grades manually show improved outcomes for their students, test techs who handle hardware become acutely aware of its many failure modes, and so on.
Said another way: extra touch might mean more accountable thinking.
Higher touch: "I am responsible for creating this report. It better be right"
Automated touch: "I sent you the report, it's right because it's automated"
Mistakes possible either way. But I like higher-touch in many situations.
Curious if you have links to the examples you mention?
The teacher example was from one of those pop-psych books on being more efficient with one's time. I can't remember the title off the top of my head. Another example in the book applied the author's model of thinking to a plane crash in the Pacific. I'm sorry, man. It's been a long time.
The way you put that makes me think of the current challenge younger generations are having with technology in general: kids who were raised on touch-screen interfaces vs. kids in older generations who were raised on computers that required more technical skill to figure out.
In the same way, when everything just works there will be no difference, but when something goes wrong, the person who learned the skills beforehand will have a distinct advantage.
The question is whether AI gets good enough that slowing down occasionally to find a specialist is tenable. It doesn't need to be perfect, it just needs to be predictably not perfect.
Experts will always be needed, but they may be more like car mechanics: there to fix hopefully rare issues and provide a tune-up, rather than building the cars themselves.
Car mechanics face the same problem today with rare issues. They know the standard mechanical procedures, but often they cannot track down a problem and can only try reflashing an ECU or swapping it. They also don't admit they are wrong, at least most of the time...
> only try reflashing an ECU or swapping it.
To be fair, they have wrenches thrown in their way there, as many ECUs and other computer-driven components are fairly locked down and undocumented. Especially as the programming software itself is often not freely distributed (only to approved shops/dealers).
They also made the point that the less frequent failures become, the more tedious it is for the human operator to check for them, giving the example of AI agents providing verbose plans of what they intend to do that are mostly fine, but will occasionally contain critical failures that the operator is supposed to catch.
That's how it tends to go: automation removes some parts of the work but creates more complexity. Sooner or later that will also be automated away, and so on and so forth. AGI evangelists ought to read Marx's Capital.
Marxists have the tendency to think that the Venn diagram of "people who have read and understood Marx" and "Marxists" is a circle. There are plenty of AGI evangelists who are smart enough to read Marx, and many of them probably have. The problem is that, being technolibertarians and all, they think Marx is the enemy.
That seems patently absurd, considering that the debate is not between Marxists and non-Marxists but between accelerationists and orthodox Marxists, both of whom are readers of Marx; it's just that the former are aligned with technolibertarianism.
Hi. I am not an evangelist -- I'm quite certain it's going to kill us all! But I would like to think that I'm about the closest thing to an AI booster you might find here, given that I get so much damn utility out of it. I'm interested in reading; I probably read too much! Would you like to suggest a book we can discuss next week? I'd be happy to do this with you.
If you're "quite certain it's going to kill us all", then you are extremely foolish to not be opposing it. Do you think there's some kind of fatalistic inevitability? If so… why? Conjectures about the inevitable behaviour of AI systems only apply once the AI systems exist.
I read a lot of anecdotes on HN from people who find less joy in work because AI is taking the fun out of it, or because it is replacing part of their job.
For me it feels different. Finally I have a 'coworker' who doesn't get annoyed when I ask tons of questions about details.
One that mostly understands what I am getting at, even if the question is poorly formulated.
One that comes up with ideas that make the result better. One that summarizes what I've told it, so I can check whether it got what I meant.
One that has more knowledge than any living person.
Still, it is criticized for hallucinating and for producing imperfect results. But maybe that's what keeps the job interesting as a SW engineer: the provided solution is not perfect, and you can improve it together, step by step.
The AI was trained to come across as a real person. It's easy to fall into the trap of not seeing it for what it is: a very complex tool. However, if you have enough experience, you feel the difference. Its overconfidence shines through.
In my perception, ending a conversation is much easier than keeping it alive. People will easily pick up that you are not interested, even in the non-verbal part of communication, no?
Interesting to read that people like it when you answer their questions or suggestions within milliseconds.
I very often interrupt people when eagerly joining a conversation. It happens almost automatically, and sometimes I apologize and say, sorry, I interrupted what you were saying... Often they don't continue where they were interrupted, but they don't seem annoyed.
Maybe it has to do with those emerging doorknobs I noticed and couldn't resist grabbing.