In The Dispossessed by Ursula Le Guin, the anarchists have names assigned at birth by a computer, guaranteed to be unique. They are gender-neutral (I think their artificial language doesn't have the possibility for gendered names) and monomial.
(I highly recommend reading the book, it's one of my favourite novels.)
Anecdotally I'm using the superpowers[1] skills and am absolutely blown away by the quality increase. For context, I'm working on a large Python codebase shared by ~200 engineers, and I have never been more stoked on Claude Code output.
Chrome managed it. Not sure how since Edge still works reasonably well and Safari is instant to start (even faster than system settings, which is really an indictment of SwiftUI).
One of the problems with suspend/resume is simply that nobody is looking at it or trying to improve it. There is no progress because nobody has tried. The current recommendation is: "if suspend/resume doesn't work, disable all of the drivers until you work out which one (or several) is causing the issue, and work on a fix". Sure, people could do that, but most won't, and not even knowing which driver is the issue is annoying.
Until recently, rc scripts (think init.d on Linux) had functionality that could be executed on system resume, but not on system suspend - like stopping a service on suspend. Why? Simply because nobody had added that functionality for ages [0].
Similarly, drivers often implement suspend but not resume support (why?), which means resume has to be added by someone who actually tries to use suspend/resume. [1] is an example of this (around midway through the section).
I recently took the time to get FreeBSD set up on my MacBook Pro from 2015, and it took quite a few kernel patches to get it working - many of which I don't think should have been missing already [2].
Webcam support is another issue; at the moment, webcamd is unmaintained because the developer passed away. Even then, it is just an emulator for Linux's USB subsystem and relies on some random person's GitHub for v4l2-loopback support using a branch called "my-build"[3].
Wifi is also an issue, with the best option for fast wifi support being the usage of a nano Alpine Linux VM, and using Linux's drivers [4]. If your wifi device is even supported, it's probably quite slow.
If all three of these things ever progress, I can see FreeBSD being more accepted by the masses. It is a great OS, but for personal computing, there are clear issues.
As someone who spent most of a career in process automation, I've decided doing it well is mostly about state limitation.
Exceptions or edge cases add additional states.
To fight the explosion of state count (and the intermediate states those generate), you have a couple powerful tools:
1. Identifying and routing out divergent items (aka ensuring items get more similar as they progress through automation)
2. Reunifying divergent paths, instead of building branches
Well-designed automation should look like a funnel, rather than a subway map.
If you want to go back and automate a class of work that's being routed out, write a new automation flow explicitly targeting it. Don't try to kludge it into some giant spaghetti monolith that can handle everything.
PS: This also has the side effect of simplifying and concluding discussions about "What should we do in this circumstance?" with other stakeholders. Which for more complex multi-type cases can be never-ending.
PPS: And for god's sake, never target automation of 100% of incoming workload. Ever. Iteratively approach it, but accept reaching it may be impossible.
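The funnel idea above can be sketched in a few lines of Python. Everything here is made up for illustration (the `Item` shape, the `kind` field, the `normalize` and `process` steps); the point is the shape: items get more uniform as they advance, and divergent items exit early rather than spawning branches.

```python
from dataclasses import dataclass

@dataclass
class Item:
    kind: str
    payload: dict

def normalize(item: Item) -> Item:
    # Tool 1: make items MORE similar as they move forward
    # (e.g. coerce legacy field names into the current schema).
    return Item(item.kind, {k.lower(): v for k, v in item.payload.items()})

def process(item: Item) -> Item:
    # Single unified path: by the time an item gets here, earlier
    # stages have made it look like every other item.
    return item

def automate(items):
    handled, routed_out = [], []
    for item in (normalize(i) for i in items):
        if item.kind != "standard":
            # Tool 2: divergent items leave the funnel early instead
            # of growing a new branch inside it.
            routed_out.append(item)
            continue
        handled.append(process(item))
    return handled, routed_out
```

The `routed_out` pile is then either handled manually or, later, given its own dedicated flow, per the point above about not kludging it into the monolith.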
> Apps should be self-contained in their bundles, and may not read or write data outside the designated container area, nor may they download, install, or execute code which introduces or changes features or functionality of the app, including other apps.
Roblox is in clear violation of this clause, downloading and executing entire games written in Lua. Apple does have an exception to this policy for HTML5 games and streaming games but Roblox does not qualify because it is not HTML5 and not streaming. Many people have had their businesses destroyed for far less serious violations of App Store policy.
I believe there are also other rules against putting an app store inside your App Store app. Clearly Roblox is an app store for games, with its own currency. Apple has not been reasonable on this point with other companies: they originally didn't even want to allow cloud game streaming apps to play multiple games in a single app. Their ridiculous plan was to require a separate Apple App Store listing for each game that a streaming platform supported, and they only relented under pressure after Microsoft went public with their complaints: https://www.theverge.com/2020/9/11/21433071/microsoft-apple-... And after that debacle they explicitly added exceptions to their policies for game streaming apps. They have never done so for Roblox-like apps, which are still plainly forbidden under their publicly posted policies.
>Here’s the recipe for success, as far as I can tell: 1. You’re not going to make any money from your side projects. Internalize that, and believe it. 2. Do everything in your power to try to make money from your side projects.
I didn’t say anything for many minutes, and we continued the slow walk toward the faculty club, Stockdale limping and arc-swinging his stiff leg that had never fully recovered from repeated torture. Finally, after about a hundred meters of silence, I asked, “Who didn’t make it out?”
“Oh, that’s easy,” he said. “The optimists.”
“The optimists? I don’t understand,” I said, now completely confused, given what he’d said a hundred meters earlier.
“The optimists. Oh, they were the ones who said, ‘We’re going to be out by Christmas.’ And Christmas would come, and Christmas would go. Then they’d say, ‘We’re going to be out by Easter.’ And Easter would come, and Easter would go. And then Thanksgiving, and then it would be Christmas again. And they died of a broken heart.”
Another long pause, and more walking. Then he turned to me and said, “This is a very important lesson. You must never confuse faith that you will prevail in the end—which you can never afford to lose—with the discipline to confront the most brutal facts of your current reality, whatever they might be.”
To this day, I carry a mental image of Stockdale admonishing the optimists: “We’re not getting out by Christmas; deal with it!”
Unwise super-intelligent programmers are the worst. They come up with the most amazing roundabout ways to achieve even the most mundane tasks, and consider complex code a mark of honour rather than a pathology to be purged.
At some point people mature and realize that their job when writing code for work is not to concoct brilliant puzzles for the next maintainer, but to do exactly the opposite. And then they reach the meta-level of realizing that simplifying things is actually just as hard and pleasing as puzzle-making, and they've reached maturity in the quality of their program design.
The worst kinds of programmers are those that plateau at the "Riddler" stage. They can potentially carve themselves a nice safe niche, but the value they could have delivered would have been so much more if they'd ever matured. "Oh, that code... yeah, better call Edward... it's his baby, super-complex. Nobody seems to be smart enough to work in it except Mr. Nygma."
There are pieces of code that are genuinely super-complex due to the intrinsic complexity of the problem space. But since it's impossible for an outsider to distinguish intrinsic complexity from self-indulgent complexity, utmost empathy should always be used when introducing new people to such code.
Oh hey, I use Buttondown. Love the service, no fluff. The only piece of feedback I have is that the terminology around hosting from a domain vs. sending from a domain was kind of confusing. It wasn't super clear how to set things up for a simple `newsletter@mysite.com` setup (I got there eventually).
Microsoft's Azure Cloud Design Patterns is some of the best documentation out there. It's not Azure-centric: it focuses on why you might want to do something and describes the techniques that are commonly used to solve it. It's pretty great.
These settings depend heavily on your OS, hardware, and use case.
This profile is what I prefer for AORUS 5/RTX3070/i7-12700H/16GB laptops, and despite how terrible the OEM hardware is... this setup will run acceptably well with dual Intel 670p M.2 drives.
The following should work with most Debian variants, but is hardly optimal for every platform. If your laptop is similar, then it should be a good place to start. One caveat: when ejecting media, it may take some time to flush your buffers.
sudo nano /etc/sysctl.conf
net.ipv4.conf.all.rp_filter = 1
net.ipv4.conf.default.rp_filter = 1
# Ignore ICMP broadcast requests
net.ipv4.icmp_echo_ignore_broadcasts = 1
# Disable source packet routing
net.ipv4.conf.all.accept_source_route = 0
net.ipv6.conf.all.accept_source_route = 0
net.ipv4.conf.default.accept_source_route = 0
net.ipv6.conf.default.accept_source_route = 0
# Disable sending of ICMP redirects
net.ipv4.conf.all.send_redirects = 0
net.ipv4.conf.default.send_redirects = 0
net.ipv4.tcp_syncookies = 1
net.ipv4.tcp_max_syn_backlog = 2048
net.ipv4.tcp_synack_retries = 2
net.ipv4.tcp_syn_retries = 5
net.ipv4.conf.all.log_martians = 1
net.ipv4.icmp_ignore_bogus_error_responses = 1
net.ipv4.conf.all.accept_redirects = 0
net.ipv6.conf.all.accept_redirects = 0
net.ipv4.conf.default.accept_redirects = 0
net.ipv6.conf.default.accept_redirects = 0
net.ipv4.icmp_echo_ignore_all = 1
# default socket buffer sizes (8 MB)
net.core.rmem_default=8388608
net.core.wmem_default=8388608
#prevent TCP hijack in older kernels
net.ipv4.tcp_challenge_ack_limit = 999999999
#may be needed to reduce failed TCP links
net.ipv4.tcp_timestamps=0
net.ipv4.tcp_rfc1337=1
net.ipv4.tcp_workaround_signed_windows=1
net.ipv4.tcp_fack=1
net.ipv4.tcp_low_latency=1
net.ipv4.ip_no_pmtu_disc = 0
net.ipv4.tcp_sack = 1
net.ipv4.tcp_mtu_probing = 1
net.ipv4.tcp_frto=2
net.ipv4.tcp_frto_response=2
net.ipv4.tcp_congestion_control = cubic
net.ipv4.tcp_window_scaling = 1
# exec-shield only exists on some older Red Hat kernels; other kernels will warn about it
kernel.exec-shield=1
kernel.randomize_va_space=1
#reboot on kernel panic after 20 sec
kernel.panic=20
vm.swappiness=1
vm.vfs_cache_pressure=50
# run "sudo vmstat 1 20" to check I/O performance
# percentage of system memory that can fill with dirty pages before
# background writeback starts
vm.dirty_background_ratio=60
# maximum percentage of system memory filled with dirty pages before
# writers are forced to commit
vm.dirty_ratio=80
# note: the *_bytes settings below take precedence over the *_ratio
# settings above (setting one zeroes the other)
vm.dirty_background_bytes=2684354560
vm.dirty_bytes=5368709120
# how often the flush processes wake up and check (centiseconds)
vm.dirty_writeback_centisecs=10000
# how long data can sit dirty in cache before it must be written (centiseconds)
vm.dirty_expire_centisecs=3000
Firstly, it should be noted that this isn't kindergarten. Your work doesn't get praised just because you showed up, or because you worked hard. So if your idea is crap (and let's be honest, most are), be prepared to hear it.
Secondly, your idea isn't new. Trust me, lots of people have thought of it before. Most did nothing with it. Some have tried and failed. I know this feels unintuitive, but ideas are valueless; everyone has them all the time. The worst thing you can do is treat an idea as valuable or unique (it is neither), despite what the Patent Office would have you believe.
Third, execution. Assuming it's at least a useful idea, and assuming it's not some boil-the-ocean scheme, you'll get judged on your execution. And you're a carpenter showing a rough prototype to other professional carpenters, who will judge it like a finished product, and all of whom believe they could do a better job with the execution. (Oak? Seriously? Oak is so last year, everyone is working in myrtle now...)
Lastly, we all know the product is doomed anyway. Either you give up and it dies gracefully, or it's successful enough that you get acquired by, say, Google (well done you), but then Google waits 3 years and kills it.
So yeah, this is a crappy place to get validation, much less actual customers. Selling to techies is a fool's game.
On the other hand figuring out a target market with pain (you can solve) at a price point they can afford, and figuring out how to reach that market (at a price you can afford) can lead to a good income, and a long-term business.
- $3,000 (2022 prices): Siberian cat. Bought before the recent price rises, where prices have apparently more than doubled, somewhat killing my dream of getting another Siberian kitten. Closely related to the Maine Coon and Norwegian Forest Cat. Very, very, very playful and social. Enjoys playing friendly games of tackle with my Texas Heeler dog (50% Australian Cattle Dog, 50% Australian Shepherd), and can often be found entertaining itself throughout the day by training my dog. Naturally played fetch with plastic water bottle caps from a young age. Wants lots, and lots, and lots of cuddles and play time. Our other Siberian, who was imported from Russia by American breeders, has had a very rough life and joined us late in her life. She has a very similar natural temperament. All three shed a truly, totally insane amount of fur -- I recommend Miele vacuums, which are fully sealed with proper gaskets for maximum suction and minimum allergen leaking/spreading... Dyson strongly not recommended. He has had a lifelong fascination with dissecting electromechanical objects, especially air filters, and insists on supervising/assisting with all maintenance work going on in the home. He learned that biting Apple cables causes humans to stop using the laptop, and destroyed $1,000 of laptop chargers back when first-generation MagSafe chargers had non-detachable cables. Unlike his older sister, he has never had an interest in killing animals, but enjoys removing all the legs from cockroaches and then playing fetch with himself by throwing the body around. Announces most of his poops with a loud meow and insists that the litter box be maintained daily. Enjoys training the humans, and especially enjoys training the dog to get the humans to perform desired behaviors on his behalf.
Contrary to marketing materials, neither Siberian is actually "hypoallergenic" in any sense of the word...maybe they're "less" allergenic but they still cause a lot of allergies and sheets/pillowcases need to be changed often.
I'd probably go 8" if I was buying today, I hedged a bit cheaper because I wasn't sure if the quality/performance would be what I needed.
If you're doing JS development, just use React. There's really no good reason not to, and I would argue that in an enterprise environment it would be negligent not to.
If anyone can give a non-ideological reasoning for a better choice, I'd love to hear it.
Do you think the byproducts that result from the reaction with ozone are safer to breathe? If not, then you need to air out the room or filter the air, so why not just do that in the first place?
Just because you can't smell something because it has been "destroyed" by being converted to something else doesn't mean it's not harmful to health.
"First, a review of scientific research shows that, for many of the chemicals commonly found in indoor environments, the reaction process with ozone may take months or years (Boeniger, 1995). For all practical purposes, ozone does not react at all with such chemicals. And contrary to specific claims by some vendors, ozone generators are not effective in removing carbon monoxide (Salls, 1927; Shaughnessy et al., 1994) or formaldehyde (Esswein and Boeniger, 1994)."
"Second, for many of the chemicals with which ozone does readily react, the reaction can form a variety of harmful or irritating by-products (Weschler et al., 1992a, 1992b, 1996; Zhang and Lioy, 1994). For example, in a laboratory experiment that mixed ozone with chemicals from new carpet, ozone reduced many of these chemicals, including those which can produce new carpet odor. However, in the process, the reaction produced a variety of aldehydes, and the total concentration of organic chemicals in the air increased rather than decreased after the introduction of ozone"
"Third, ozone does not remove particles (e.g., dust and pollen) from the air, including the particles that cause most allergies. "
A larger problem is that time series modeling is particularly resistant towards black box approaches since a lot of information is encoded in the model itself.
Take even a simple moving average model on daily observations. Consider stock ticker data (where there are no weekends) and web traffic data (where there is an observation each day). The stock ticker data should be smoothed with a 5 day window and the web traffic with a 7 to help reduce the impact of weekly effects (which probably shouldn't exist in the stock market anyway).
It's possible in either of these cases you might find a moving average that performs better on some chosen metric, say 4 or 8 days. However, neither of these alternatives makes any sense as a window if we're trying to remove the day-of-week effect, and unless you can come up with a justifiable explanation, smoothing over arbitrary windows should be avoided.
If you let a black box optimize even a simple moving average you would be avoiding some very essential introspection into what your model is actually claiming.
Not to mention that we can often do more than just prediction with these intentional model choices (for example, the day-of-week effect can be explicitly differenced out of the data to measure exactly how much sales increase on a Saturday).
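To make the window-choice point concrete, here is a minimal sketch (pandas and numpy assumed available; the traffic series is synthetic, with a made-up weekday bump):

```python
import numpy as np
import pandas as pd

# Hypothetical daily web-traffic series: 8 weeks with a weekly cycle
# (weekdays run ~20 units higher than weekends) plus noise.
idx = pd.date_range("2023-01-02", periods=56, freq="D")  # starts on a Monday
rng = np.random.default_rng(0)
traffic = pd.Series(
    100 + 20 * (idx.dayofweek < 5) + rng.normal(0, 2, len(idx)),
    index=idx,
)

# A 7-day window spans exactly one weekly cycle, so the weekday/weekend
# effect cancels out of every window; an arbitrary 4- or 8-day window
# would not have this property.
smoothed = traffic.rolling(window=7).mean()

# Seasonal differencing at lag 7 compares each day to the same weekday
# last week, isolating the day-of-week effect from the level.
weekly_diffed = traffic.diff(7)
```

With a 7-day window the deterministic weekly pattern contributes nothing to the smoothed series' variation, which is exactly the "justifiable explanation" the window choice encodes; a black-box-optimized 4- or 8-day window would leak the weekly cycle back in.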
I feel a connection to the guy who missed work "absent with the scribe". I know it was probably something important, but a scribe writing "that guy is excused, he was with me" makes me think of those times my boss took me for lunch and paid with the corporate card.
I have spent more time in my career than many people think is healthy trying to plot out contingency plans for 'what if' situations. It stops being irritating the moment the building is on fire and all of a sudden people want to listen to me.
My relationship with YAGNI has ebbed and flowed over the years, and as with any 'problematic' relationship, you may or may not see your own relationship clearly but people who have it worse are still easy to identify. Whether you then ask if you're like that is a matter of wisdom.
What we hear about, what we interview about, what we write about is these cool huge projects from the heroes we want, but the heroes we need are the people who figure out how to solve problems in ways that leave the option of solving them differently in the future. It's a little dangerous to say things like that out loud because lots of people hear that as "I'm going to build a configuration engine that I can use to swap out implementations at startup/runtime," which is pretty much the merry-go-round we are all stuck on half the time.
No, what I mean is go back way old school, taking some notes from Bertrand Meyer, and arrange your code so that there are 'spots' where major changes in functionality seem to naturally fit. I may have mangled this story in my head over the years into a parable, but I still recall my uncles waxing poetic about the Chevy straight-6 engine block that was popular during the height of the muscle car era. This engine did not have a particularly high horsepower-to-displacement ratio. But in those days engine bays were fairly empty, and the straight-6 was overbuilt just enough that it was a dream to modify, and easy enough to work on that many people did. They bored it out for higher displacement, modified it for higher compression ratios (naturally aspirated or blown), hung additional accessories off of it, you name it. Many people were running around with cars that had over 50% more horsepower than the stock version, and some crazy bastards went considerably higher.
In short, this engine was not that particularly great, but it was full of potential.
As someone who may or may not become the next Google, (it's been a long time since I worked for anyone who shared opinions like that, when Google was smaller and only dreamed of being as big as they are now), I don't want a system full of features. I want a system full of possibilities. IF we wanted to do this, you would add it here, and verify it here. But we don't, not yet, so if you could just not break 'here' please, that would be great.
Nobody is collecting those stories and patterns. They're hidden away in Meyer, Fowler, dropped as throw-away lines in lectures, and discussed at length over coffee, noodles, or beers but never written down. Elsewhere they are really hidden in older aphorisms like 'premature optimization', "single responsibility" and that ilk, so much so that they risk becoming bad advice.
We're definitely at the point where piracy is more convenient than paying. At one point I was paying for Netflix, Prime, Disney+, and Videoland (a Dutch VOD service). There was a show my wife wanted to watch, so I checked for it on the well-marketed "don't steal, get a subscription - everything is available :)" service film.nl. Not available.
I did some googling and found it was available on Prime, for $20 a season. Fine. Whatever. Went to check out, but I couldn't pay because I'd need a US bank account?
The next weekend I spent a bit of time setting up the arrs, and life has been great. Open Overseerr on my phone, request a TV show or movie, the arrs do their magic, and within ~10-60 minutes I can watch it on any of my devices through Plex. The ecosystem of Overseerr, Sonarr/Radarr, Prowlarr, Bazarr, Plex, and whatever download client + VPN you prefer is just super well integrated. It all being a bunch of simple docker-compose files makes it accessible for anyone with a little bit of technical skill and a free afternoon.
I still have Spotify, but honestly I'm also thinking of cancelling it; I'm stuck in my bubble anyway. Paying 10 euros a month for the same few albums just isn't worth it.
Yeah, some crypto companies have that too now, though most exchanges have halted the ability to buy crypto with credit altogether.
But Crypto.com allows you to buy up to $30 worth of crypto with a credit card, and cards are pennies on the dollar online if bought in bulk.
An astute con artist would recognise that the only real verifications credit card companies have at their disposal are geolocation and shopping habits. So if one were to buy CCs online, create a bot to enter the details into the Crypto.com exchange, route the traffic through a VPN using the CC address as a base, send the $30 worth of crypto you bought off the exchange and into a wallet (using Tor and some clean ETH you bought with cash to pay the gas fees), wrap the BTC in WBTC, throw it on Uniswap, swap it for some privacy coin, then use that privacy coin to send to another wallet within that private ecosystem so the headers on the node are lost - bam. You've got some clean crypto that you fraudulently bought with a credit card but that can never be claimed as fraud!
I've actually run into some data loss running simple stuff like pgbench on Hetzner due to this -- I ended up just turning off write-back caching at the device level for all the machines in my cluster:
Granted, I was doing something highly questionable (running Postgres with fsync off on ZFS). It was very painful to get to the actual issue, but I'm glad I found out.
I've always wondered if it was worth pursuing to start a simple data product with tests like these on various cloud providers to know where these corners are and what you're really getting for the money (or lack thereof).
[EDIT] To save people some time (that post is long), the command to set the feature is the following:
nvme set-feature -f 6 -v 0 /dev/nvme0n1
The docs for `nvme` (nvme-cli package, if you're Ubuntu based) can be pieced together across some man pages:
I'm the Abusix engineer in question (and actually the architect of the system in question), and you're being somewhat "economical" with the truth about what actually happened here.
Here's the actual chain of events in question:
- You recently switched ISPs and that meant you moved to a new IP block.
- Without going into massive detail on how our infra works: when we have the most serious level of listing (e.g. hitting our most secure traps, as in this case), we treat the whole /24 more aggressively than we normally would (because of things like snowshoe spam, which spreads traffic across a wide range of IPs).
We use /24s only where we cannot determine different ownership of the IPs by looking at the abuse contact registered at the RIR; i.e. if the IPs have different contacts, then we don't bundle them into the same bucket. In this case Hurricane Internet owns the entire range, so the /24 is used.
We do this because there is no other way to do it that isn't completely abusable by a bad actor; e.g. rDNS is completely trivial to forge and to use to claim multiple fake entities. If someone has a foolproof way to do this, then I'm all ears.
Then we get to your failings in this case:
- You don't have proper, working bounce handling.
You were repeatedly sending mail to an old customer and we were rejecting 100% of the email you sent to that address - stating that you should stop sending to it in the rejection message. We always reject traffic on traps we are building so that bounce handling removes them automatically over time.
- You decided to send a marketing message to a bunch of users, including to the address described above "We (rsync.net) are experimenting with a lifetime prepayment option". This message provided no List-Unsubscribe option at all, so I could not unsubscribe it without exposing the trap to your engineer (which is my primary concern as it takes years to build traps properly).
- Your engineer said to me "we took down 600 old accounts and are reviewing our contact policies going forward" and "there are still plenty of other customers who are listed generically as bouncing so we will have work to do here".
That tells me that you knew that you were sending to old accounts and that your bounce handling was either not great or non-existent.
I take our role very seriously, and my team and I go out of our way to help anyone who finds themselves listed by providing evidence and advice, and we always try to find a good resolution for all legitimate senders.
That is exactly what I did here with your engineer; the problem is resolved, as the specific account was removed, and you now know what you were doing wrong and how to fix it so that it never happens again.
Your engineer was appreciative, and I said that if there are any further issues while you fix things on your end, we would help by exempting your traffic in the meantime.
I can't see how we could have done anything more in this case.
On our website, we provide blog posts and videos, we take part in conferences and workshops to give advice on how to do things properly so you never have blocklisting issues.
I can easily summarise here what everyone should do to avoid issues:
- Have proper rDNS for your sending IPs that is part of your administrative domain (e.g. don't use your providers generic rDNS that contains the entire IP in the hostname). If someone visits the domain name, make sure there is a website present that has contact details as a minimum.
- Make sure your abuse@domain and postmaster@domain accounts actually go to a responsible person.
- If you send any marketing mail at all, make sure it has a working List-Unsubscribe header (preferably HTTP that allows someone to unsubscribe without having to contact you). Important note: If you use a mailto: unsubscribe, then I cannot unsubscribe a trap, even if I wanted to.
- Make sure you have working bounce handling. If you repeatedly send to an account and it either bounces or is hard rejected, then you need to stop sending to it and mark it as bad. No excuses.
- Don't send to addresses where you have not contacted them or had any interaction at all for > 1 year. If you haven't kept in touch with your customers, then that is on you.
- If you have a web form that, when submitted, sends a message to an external user, then you MUST a) validate all of the input fields and disallow URLs unless the field requires them, and b) prevent automated submission via the use of a CAPTCHA.
- When collecting email addresses for mailing lists, always use confirm opt-in e.g. send a message containing a link that they have to click to activate the subscription. Do not send any further messages to them until this has been completed.
- Make sure you separate IP addresses being used for outbound mail from those used for outbound NAT pool. Block outbound port 25 from the NAT pools and make your firewall notify you of any port 25 activity from any hosts as this could indicate they are infected.
- When provisioning a new mail server IP, don't send more than 30 messages per minute (e.g. 0.5 messages/sec) for the first day and then increase the volume over the following week.
I'm sure some will disagree with some of these, but I guarantee that if you follow all of these, then you'll never have an issue with a blocklist like ours.
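For the List-Unsubscribe bullet specifically, the headers can be set with Python's standard library alone. A minimal sketch (the addresses, URL, and token here are placeholders; the One-Click POST mechanism is defined in RFC 8058):

```python
from email.message import EmailMessage

msg = EmailMessage()
msg["From"] = "news@example.com"
msg["To"] = "subscriber@example.org"
msg["Subject"] = "Monthly newsletter"
# An HTTP(S) unsubscribe URL lets a recipient (or a blocklist operator)
# opt out without having to send mail back to you; a mailto: target
# alone does not allow that, per the point above.
msg["List-Unsubscribe"] = "<https://example.com/unsubscribe?token=abc123>"
# RFC 8058 one-click: mail clients POST to the URL above on "Unsubscribe".
msg["List-Unsubscribe-Post"] = "List-Unsubscribe=One-Click"
msg.set_content("Hello! This month's news...")
```

The endpoint behind the URL then has to actually remove the address without requiring a login or further interaction, otherwise the header is cosmetic.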
Stack Overflow took this approach last year with the following font stack: [1]
@ff-sans:
system-ui, -apple-system, BlinkMacSystemFont, // San Francisco on macOS and iOS
"Segoe UI", // Windows
"Ubuntu", // Ubuntu
"Roboto", "Noto Sans", "Droid Sans", // Chrome OS and Android with fallbacks
sans-serif; // The final fallback for rendering in sans-serif.
@ff-serif: Georgia, Cambria, "Times New Roman", Times, serif;
@ff-mono:
ui-monospace, // San Francisco Mono on macOS and iOS
"Cascadia Mono", "Segoe UI Mono", // Newer Windows monospace fonts that are optionally installed. Most likely to be rendered in Consolas
"Ubuntu Mono", // Ubuntu
"Roboto Mono", // Chrome OS and Android
Menlo, Monaco, Consolas, // A few sensible system font choices
monospace; // The final fallback for rendering in monospace.
I agree that it probably does not include the vast majority of automated bans.. but I'd prompt anyone interested to read the relevant guidelines to understand what might be in scope as far as legal effects or "significant effects" are concerned; it goes well beyond profiling by authorities, and commercial data controllers are far from exempt.
One example of a legal effect is cancellation of a contract. Examples of significant effect include automatic refusal of an online credit application, and e-recruiting practices without any human intervention.
Advertising is in scope too: "For example, someone known or likely to be in financial difficulties who is regularly targeted with high interest loans may sign up for these offers and potentially incur further debt."
Pricing is in scope too: "Automated decision-making that results in differential pricing based on personal data or personal characteristics could also have a significant effect if, for example, prohibitively high prices effectively bar someone from certain goods or services."
Finally, there's an example of profiling reducing a credit card limit. "This could mean that someone is deprived of opportunities based on the actions of others."
Anecdotally, getting kicked out of my email account has had far bigger effects on me than having a credit card application rejected.