

I'm a web developer, and I want to make the web sites I develop more accessible to those using screen readers. What limitations do screen readers have that I should be most aware of, and what can I do to avoid hitting those limitations?

This question was sparked by reading another question about non-image-based CAPTCHAs. In there, a commenter said that honeypot form fields (form fields hidden with CSS that only a bot would fill in, sketched below) are a bad idea, because screen readers would still pick them up.

Are screen readers really so primitive that they would read text that isn't even displayed on the screen? Ideally, couldn't you make a screen reader that waited until the page had finished loading, applied all the CSS, and even ran JavaScript onload functions before it figured out what was actually displayed, and then read that to the user? You could probably even identify the parts of the page that are menus or tables of contents, and give the user an easy way to read those parts exclusively or to skip over them.
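For concreteness, here is a minimal sketch of the honeypot technique the commenter was describing; the field and class names are my own illustration, not from the linked question:

```html
<form action="/signup" method="post">
  <!-- Honeypot: humans never see this field, but naive bots fill in
       every input they find, so any submission with a value here can
       be rejected server-side. -->
  <!-- Off-screen positioning hides the field visually, yet it stays
       in the accessibility tree, so screen readers still announce it;
       that is exactly the problem the commenter raised. -->
  <div style="position: absolute; left: -9999px;">
    <label for="website">Leave this field empty</label>
    <input type="text" id="website" name="website" autocomplete="off">
  </div>

  <label for="email">Email</label>
  <input type="email" id="email" name="email">

  <button type="submit">Sign up</button>
</form>
```

Hiding the field with `display: none` or `visibility: hidden` instead would keep most screen readers quiet, since they read from the browser's accessibility tree rather than the raw markup, but bots have also learned to skip fields hidden that way; that is why off-screen positioning is often used, and why the two audiences collide.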

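The "easy way to skip over menus" the question imagines is usually approximated today with a hand-rolled skip link: the first focusable element on the page jumps past the navigation. A sketch, again with illustrative names:

```html
<style>
  /* Visually hidden until keyboard focus lands on it, but always
     exposed to screen readers (unlike display: none). */
  .skip-link { position: absolute; left: -9999px; }
  .skip-link:focus { left: 0; }
</style>

<a class="skip-link" href="#main">Skip to main content</a>

<nav>
  <!-- site menu -->
</nav>

<main id="main">
  <!-- page content starts here -->
</main>
```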
I would think that the programming community could come up with a better solution to this. CSS contains the aural media type specifically to control the "rendering" of things when screen readers are doing their work, so, for your example, you would only set it as visible for the aural media. I'm quite aware that the screen reader doesn't actually read the screen, but you would think that, to work well, it would have to build a model of what a sighted person would see; otherwise, it seems like it would do a really poor job of getting across to the user what's actually on the page. Also, putting things in the order you would read them doesn't really work, as a sighted person would scan the page quickly and read just the section they want to read.
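For reference, the aural media type mentioned above looks like this in practice. Note that `aural` from CSS 2 was deprecated in favour of `speech` in CSS 2.1, and as far as I know no mainstream screen reader ever honoured either, so treat this as a syntax sketch rather than a working technique:

```html
<style>
  /* Hidden from visual user agents... */
  @media screen {
    .spoken-only { display: none; }
  }
  /* ...but rendered by speech user agents (largely unimplemented). */
  @media speech {
    .spoken-only { display: block; }
  }
</style>

<p class="spoken-only">This text is intended for speech output only.</p>
```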
