playpause's comments

This looks like a great resource. Human checklists are not perfect but they tend to be better than automated checks for this sort of thing.

But there is a common form of accessibility guideline that I have a problem with, and this one illustrates it well: "Avoid using the autofocus attribute." The problem here is it quietly pits users with special accessibility needs against the rest - "Just don't use feature X." OK, but what should I do instead for the rest of my users who benefited from feature X? What if I'm making a search engine landing page and I want to automatically focus the input on page load (and automatically bring up their keyboard if touchscreen)? Is there some other approach that achieves the same UX as the autofocus attribute but without creating accessibility problems?
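For reference, the pattern the guideline tells you to avoid is just the attribute itself. A minimal sketch of a search-engine landing page using it (form action and ids are illustrative):

```html
<!-- Minimal search landing page: the input receives focus on load,
     which on touchscreens also brings up the on-screen keyboard. -->
<form action="/search" role="search">
  <label for="q">Search</label>
  <input type="search" id="q" name="q" autofocus>
  <button type="submit">Go</button>
</form>
```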

According to MDN, "When autofocus is assigned, screen-readers 'teleport' their user to the form control without warning them beforehand." OK, but really? Why? Why don't they offer the user the option to not do that?


I agree with the comment, but:

> Why? Why don't they offer the user the option to not do that?

A screen-reader doesn't replace the browser - it just responds to changes in focus by reading out a description of the currently focused item. It should be the browser that offers an option to disable autofocus. Firefox does seem to have an option in about:config called "browser.autofocus" which I assume does this exact thing.


The screen reader could still choose to ignore the automatic focus and read the whole page instead.


How? The accessibility APIs in question just tell the screen reader when the focus changes; they don't say why it changed. I'm sure there's a hacky way to make it work for specific browsers, but it's the wrong place to solve the issue.


Checklists are not a place for nuance. A search engine landing page is a place where using autofocus in the search field is perfectly appropriate; autofocusing a search field that's in the header of every page is not.


I agree, autofocusing a search field that's in the header of every page is not generally appropriate, but that's not specifically about accessibility. It would be annoying for anyone on a mobile device (keyboard pops up on every page), and the focus ring and/or blinking cursor would be distracting.

What about cases where it is appropriate to autofocus the search input (as it's the primary action on the page) but where there also might (sometimes) be an important text notice above the search input? Sighted users would see the notice fine, while screenreader users would be 'teleported' (MDN's word) straight past it, missing the notice.


I believe it's up to screen reader developers to come up with innovative ways of making the web as it currently exists more usable, rather than requiring the millions of websites out there to accommodate the technology's current limitations. In this case, perhaps screen readers need to somehow intelligently read nearby text when a control immediately receives focus on page load.


No. Browsers already parse the page to produce an accessibility tree that's provided through the operating system's accessibility API. Screen readers attempting to supplement what's provided through that API by re-parsing the page should be a last resort and is bound to be error-prone.

Designers and developers need to learn that what they make will be, and should be, experienced in ways different than how they do. Content is experienced linearly, not always in two dimensions, and semantics can make programmatically explicit what is only implied visually (like heading text being bigger and bolder).

The prevalence of smartphones and responsive design have helped somewhat to expand people's understanding of how using digital experiences can vary. There's a lot more that can be learned.

Aside from not using autofocus on the input field at all, another existing option is to programmatically associate the preceding block of text by giving the block a unique id and referencing it from the input using an aria-describedby attribute; after a screen reader reads the input's accessible name, it will read the associated description.
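The association described above can be sketched as follows (the id value and notice wording are illustrative):

```html
<!-- The notice is programmatically associated with the input via
     aria-describedby, so a screen reader announces it after the
     input's accessible name even though focus skips past it on load. -->
<p id="search-notice">Service will be down for maintenance on Friday.</p>
<form action="/search" role="search">
  <label for="q">Search</label>
  <input type="search" id="q" name="q" autofocus
         aria-describedby="search-notice">
  <button type="submit">Go</button>
</form>
```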


> while screenreader users would be 'teleported' (MDN's word) straight past it, missing the notice.

Autofocus in that case seems like reasonable and expected behavior. Isn't it up to the tools (e.g. the screen reader) to accommodate the expected behavior, rather than for the reasonable and expected behavior to change?


No. It is reasonable and expected that some users experience page content linearly, so designers and developers have to create experiences that take that into account. That includes not skipping important content. If it's important, don't skip it. If it's not important, maybe it's okay to skip it, but there should be means for users to explore and discover what's on the page (like giving the preceding block of text a heading that indicates what that section of content is about).


Hmm. "All users should experience the page linearly because some do" is not really an argument that works.


A sighted user with a cursor or touchscreen can be as non-linear as the size of the viewport and the layout allows.

All designers should design the linear experience as well as the non-linear experience because the linear is all some users will have. Responsive design is a move in that direction, a smartphone portrait display is much narrower than a laptop display so the content tends to be in a more linear layout. Experiences through voice assistants are also linear.


Let's just step back and state our terms. Are we agreeing that a website placing the cursor in a search box is an example of autofocus?

I believe it is reasonable to do this if there is no other purpose for the page.

If I understand correctly, it is your contention this is unreasonable because it will skip all of the other information about the page for screen readers.

My question is why screen readers cannot, or should not, handle this special case? Isn't there an argument that it helps users with hand-eye coordination problems? I am absolutely for a11y, but when there is a trade-off in degraded experience for the majority, I'd like to hear arguments why the interpretive tools used by the minority cannot handle that interpretation.


I have no particular disability at the moment and I’m struggling to think of anything that autofocus adds to my browsing experience and not struggling to think of times it’s annoyed me.


On the other hand, I am endlessly annoyed when e.g. Ebay / Amazon do not auto focus the search box on page load. Anything else I'm going to click and probably 50% of the time I want to search. Just auto-focus please.


> it quietly pits users with special accessibility needs against the rest

Not just that; it pits users with one accessibility need against users with another. (Or I guess, not "pits" so much, sometimes needs just conflict.) For example, for someone with motor impairments, having to control the mouse (or pressing Tab a lot) to focus an input field rather than being able to use their keyboard right away is not a great experience.


I don't remember which site autofocuses the username field, but I absolutely HATE IT, especially because it did so after a small delay - probably while loading all its JS and crap - so it would steal focus from the PASSWORD field back to the username field.


For the last 2 years I’ve been using an iPod touch as my “downtime” device. I usually put my iPhone away in a drawer from early evening until I’m ready to start work the next day. I found this impossible to stick to until I got an iPod touch, because in the evenings and mornings I often need to manage things like HomeKit devices or other Apple ecosystem things like Reminders. I don’t have any distracting or time-sucking apps on the iPod touch, and the screen is small and fiddly, so I barely use it except for a few seconds here and there for something practical. The difference in stress levels and mindset has been huge. I can’t recommend highly enough separating your phone usage into ‘social/work/news/comms’ and ‘practical/home/calm’ categories, on different physical devices.

I have tried using the new iOS Focus and Downtime features to make my iPhone work a similar way (hide all the time-sucking apps at certain times of day etc), but having a dedicated device for the purpose is much simpler and much more effective.


I do something similar but not quite the same with my iPad: it has most of the same (non-work) apps that my phone does, but I've disabled all notifications for all apps so it never yells at me; it's an entirely chill-out/self-directed device where nothing ever grabs at my attention


I do something similar but with my main iPhone. There’s like 3 apps that have notifications and none of them make sound.


I treat my main phone as a notification entrypoint; trash-notifications are turned off, of course, but every messaging, calling, email, etc app has them turned on. If I want to know if anything relevant to me has happened, I look there or keep it nearby. If I don't have a notification there, I know nothing has happened

On my iPad, even messages with friends, emails, etc are all blocked. Not even red badges on the app icons. Nothing that can possibly prompt or notify me in any way. I don't think I could get away with that on my main phone


You can still do this by just purchasing an older iPhone and never putting it on a cell plan, right?


I have an older iPad mini for this purpose (but an old iPhone without a data plan would work too). I set up a separate home@<domain> iCloud account under my family plan and use it exclusively for streaming music via AirPlay, cooking w/ recipes on Paprika, HomeKit controls, reminders, timers, etc. - no Slack notifications, no calls, no calendar reminders. The AppleTV goes on the same account too. It's really been a great solution.


I've got an old LG V40 for this purpose. It's got music apps, meditation, and a few other non-social things. It's also my flashlight if I wake up in the night, and alarm clock to wake me in the morning.


I imagine your comment was meant with a bit of humour, so don't read too much into the following! But it got me thinking about optimism and pessimism.

The HN post title (at time of writing):

> Nature has enormous emotional and cognitive benefits on people

Yours:

> Being constantly trapped in a world made of concrete and drywall causes enormous emotional and cognitive detriments to people

It's just "Things could be better" vs "Things are bad". They are pretty much the same in terms of what they say/imply about current reality.

I think the HN title is actually closer to an objective statement than yours. Yours is ornately, floridly pessimistic. I think a lot of people (esp. engineering types, including me) suffer from a recurring tacit belief that pessimism is better than optimism for getting at objective truth.

I used to argue that both leanings are equally likely to result in poor judgement, with 'realism' smack in the middle. But I now go further and believe that fostering a slightly optimistic lean is actually better than 'no lean' – not just in the sense that 'you'll have a nicer time', but in terms of maximising how often you are correct in your observations of reality, in the long term.

Because zero lean is impossible to maintain all the time. Our observations almost always rely on heuristics to compensate for incomplete data or insufficient time. So you are going to err in your objective judgements about reality sometimes. And when you err on the optimistic side in a way that matters, reality tends to tap you on the nose and correct you fairly promptly. Which can hurt a bit, especially for someone who spends most of their waking life working with complex systems that are unforgiving about tiny details being incorrect - this trains us to think of all the ways something might go wrong, so we feel bad when we fail to predict a negative event.

But for most of life outside of solving engineering problems, eg, dealing with more organic/nebulous things like 'other people' or 'long term goals' or 'relaxing', being optimistically incorrect in your judgements and then being corrected by reality is better than being pessimistically incorrect and not being corrected. When you make pessimistic errors in judgement, you don't get actively corrected as much, so your ability to make objective observations drifts further pessimistic, worsening your decision making, worsening your situation, and it cycles downward.

Eventually some kind of correction comes, but usually after hitting a new low, by which point a few things have gone wrong in your life and the climb back up is difficult. A slightly optimistic lean doesn't seem to have the equivalent problem (of drifting ever more optimistic until you're problematically divorced from reality). At least for me, anyway. I think this might be because a positive state of mind tends to be more active and therefore able to run more thoughts in parallel, including thoughts that can gently correct others that have gone a bit too far, while a negative mindset is more monotone.


We are (of course) also taught to be this way. So much of our education focuses on deconstructive and critical thinking versus constructive thinking.


> spending a bit more to get good quality paper straws

I have yet to see one of these


Your implied plan (doing less of what made the problem in the first place) seems like more of a stretch to me. It would be nice, but I can’t see many countries giving up plastic. It’s too useful. I think focusing international collaboration efforts on better waste/pollution management (eg getting more waste plastic into properly managed landfills) seems a lot more plausible.


I may not have explicitly stated it in the parent comment, but it's quite explicit in other places.

Either a system is sustainable or it's not. If it is, it can continue forever. And, if it's not, it won't. One way or another, it won't, though you can pick the method early on, and later, reality will force it on you. The history of civilization collapse is the history of this reality being forced on groups of people who thought it didn't apply to them.

Plastics in their current form won't exist as new products in 1000 years (though the current stuff probably won't have broken down entirely). Either something far less vile and toxic to "all life" will have been found, or, more likely, industrial civilization will have done the usual "overshoot and collapse" thing, so we won't have the technology to make them in their current form then.

None of that changes the fact that plastics are toxic to life now - and, so, we ought not be using nearly as much of them. I don't mind "durable plastics" quite as much, but the bulk of it is single use, and splitting out all our recycling, I'm regularly reminded of just how much plastic one cannot avoid, even when trying to minimize it.

If the reality (which it probably is...) is that people won't stop doing anything until the external reality we live in forces their hand, the outcomes are almost always far worse than if we decide to stop doing those things earlier.

Plastics are convenient, certainly. They're also a horrid biotoxin that has, quite literally, blanketed the planet in the form of microplastics. We have no idea what to do with the stuff, and burying it only works for so long (and if you're really careful to not let the bits and pieces leech into groundwater). But, I mean, at least you can get water without having to use a drinking fountain!


Plastics aren’t a system. Some plastics are likely to be manufactured in 100,000 years simply because they’re made from abundant atoms, are non-toxic, and have useful properties. The current global economy, on the other hand, isn’t stable across months, let alone hundreds of years. But that doesn’t matter much: the meat you’re eating today probably didn’t come from a wild animal like the meat your ancestors were eating 100,000 years ago, but it’s still edible. Different systems to produce essentially the same thing across vast stretches of time.


> Either a system is sustainable or it's not. If it is, it can continue forever. And, if it's not, it won't. One way or another, it won't, though you can pick the method early on, and later, reality will force it on you.

The mistake you are making is thinking the plastic system is a closed system. It is not. The plastic system is part of the universe, which is populated by people who constantly create new knowledge to solve problems. And we never know which new knowledge will be created which will affect this system.

For example, when nuclear power was invented one might have predicted that cheap, clean power would be available to all. But you probably wouldn’t have predicted the environmentalist backlash to it and resultant continued dependence on fossil fuels.

Or Malthus predicting worldwide food shortages and starvation as the population grew. He had no way of predicting the invention of fertilizer.

There are a million examples of this but the takeaway is: people solve problems and create new knowledge that is constantly redefining what is possible.

> The history of civilization collapse is the history of this reality being forced on groups of people who thought it didn't apply to them.

On the contrary, the history of civilization collapse is rife with people who insisted on thinking about things as closed systems and stifling open ended progress (e.g. ancient Sparta).

It is actually hard to think of an example of a civilization which couldn’t have been saved if only they had the right knowledge. Say the cure for a disease or a piece of military technology.

I believe that is the point the other person is trying to make. That humans continually solve problems, create new ones, and rapid progress means a higher likelihood of developing the new solutions needed.


He’s the chief engineer and product designer at Tesla, and the chief engineer at SpaceX, as well as CEO of both. He led the design and engineering of rockets that land, cutting the cost of space travel by one order of magnitude. Meanwhile he created the most valuable car maker in the world and forced the rest of the industry to bring forward their electric car plans by a decade. And these things weren’t accidents, he often tells the world what his goals are before accomplishing them. And he’s the richest man in the world. He’s many things, but not unaccomplished.


He owns the companies. He can call himself whatever he wants - that doesn't make it true. He got lucky with Tesla - good timing, and a pliant customer base willing to ignore all the things Tesla got wrong while still spending untold amounts of money.

I am entirely skeptical that he designed anything. And considering the duds he's pretending are revolutionary (Boring, Hyperloop, Mars nonsense, Cybertruck, etc.) but which are in fact failures, it seems unlikely he could be so right about one thing and so wrong on the others.

Trump is rich also


My feeling is that the 'undermining' the author refers to is not a conscious attempt to keep the lower classes down. It's just middle/upper class people spouting fashionable political views, going along with the 'right message', and not really thinking about it. When it comes to their own personal behavioural decisions that will affect their kids, they take it more seriously, basing their decisions on their deep-down sense of 'how things really are', and then they come up with some kind of rationalisation of how this conservative behaviour actually fits in with their anti-conservative political views. This rationalising process has become second nature and they don't notice they are doing it at all.


It is also often a form of courtesy. You don't want to burden single parents further by elaborating on how damaging this can be for their kids, especially if you have them in your audience - and it makes it even worse if they take it to heart. So in that situation, being polite and being honest collide, as in many situations. There are some true believers, but they are far rarer.


> Is this because such visualizations are useless clutter or because film makers have better tools for creating them than actual programmers do?

Another reason is that real visualisations are hard. You have to faithfully represent some actual data, and make it ‘usable’ in the sense of well designed keys/labelling etc, and choose a chart type that is appropriate for the data and the point you are making, plus you somehow have to make it pretty and not overwhelming. These constraints all fight each other, you go round and round in circles and eventually you have something passable. It’s a professional field at the intersection of data science and design, it’s not like there is some magic ‘tooling’ that solves it for you.

Fake visualisations are very easy to make pretty - you can just generate data that looks nice, the design doesn’t have to have any real utility or even make sense, etc. It just has to look like a data visualisation.


Coming to this late, but I am interested to know if your view is still the same as when you wrote this 13 hours ago.

I don't think you have really answered the question about the logical incoherence put to you in another comment [1], which was replying to this comment by you (now flagged, sadly):

> If a bunch of people are saying "this thing hurts me" and your stance is "well you shouldn't be hurt so shut-up".

> Yeah, you're an asshole.

Honestly, genuinely: my feelings are hurt by reading your comments here - I find it a bit chilling, it makes me uncomfortable. I'm a free speech advocate, and I perceive (rightly or wrongly) a frightening swing away from free speech values in the last few years, and your comment triggered uncomfortable feelings. Perhaps I'm wrong about this, perhaps my view is biased somehow. We could debate it, and maybe you'd point out something I hadn't considered, and maybe I'd change my view. But until then, the fact is, my feelings are hurt by your comments. And I'm definitely not the only one - there's more than a "bunch" of free speech advocates in the world who find this line of argument chilling.

As a free speech advocate, I believe you should have the right to say what you want, and that my hurt feelings should not prevent you from doing so. But don't you see the logical incoherence in your position? How can you argue that you are not an asshole under your own logic? (To be clear, I am not calling you an asshole, just pointing out that your own logic would seem to conclude that you are an example of one, while also containing the statement that you are not one.)

[1] https://news.ycombinator.com/item?id=31088745


My view is the same.

I'm a free speech absolutist. I'm not saying any word, phrase or idea should be illegal, no matter how repulsive I, or anyone else, find it.

There's no logical inconsistency here.

The notion that personally choosing to use inclusive language is "chilling to free speech" is ludicrous.


You call people assholes, dismiss people's arguments out of hand, and quote people saying things they didn't even say (in quote marks, too). I think you're just a dick. I'm glad you consider yourself a free speech absolutist though, that's something I guess.


Calling me a dick violates my free speech.


I never said you violated anyone's free speech. I shouldn't have called you a dick though, I just got frustrated. I was trying to make a nuanced point, maybe it was a dumb one, but if so, it would be nice to be put right rather than dismissed.


Most of my non-tech friends believe that nuclear apocalypse would entail at least the end of all human life. I think this only makes them more complacent in a strange way. They can shrug it off with gallows humour - if it happens, well, that was that, I only hope it’s quick. They don’t have to think about the reality of how life would change in the years following a nuclear war. The myth of nuclear apocalypse is a kind of licence to stop mentally engaging with the issue, which I don’t think is good. Not to mention the nihilistic side effects of believing one is going through life on borrowed time. Pushing falsehoods for the greater good always has unintended consequences.

