Jared: Hello and welcome to another in our series of web casts sponsored by the National Center on Disability and Access to Education. I'm Jared Smith. I would like to welcome all of you to our web cast today. Our topic is Accessibility 2.0, a little bit of a play on words there that we'll be talking about here in the next few minutes. I'm happy to be joined today by our three panelists. We have Derek Featherstone. Derek is one of the world's foremost authorities on accessibility and web development. He has authored books and publications and provided training on all sorts of web technologies, particularly on Web 2.0 applications. He's the founder of Further Ahead, a web development and accessibility company, and Derek is also lead of the Accessibility Task Force of the Web Standards Project. Derek, it's nice to have you with us.
Derek: Great to be here.
Jared: Derek is from Ottawa, Canada, but he's joining us today from France, so good evening to you, Derek.
Derek: Bon soir!
Jared: We have Gez Lemon. Gez works as a web developer for the Paciello Group. Gez is a prolific writer and researcher on web accessibility and web development, provides a lot of information to the community, and is one of the leading experts in scripting and assistive technology support. His personal site is juicystudio.com. There you can find a lot of accessibility information. That's juicystudio.com. Thank you for joining us.
Gez: It's good to be here.
Jared: Our third panelist is Aaron Leventhal, an accessibility architect for IBM and leader of the Mozilla Accessibility Project. He has been involved in making Mozilla products, including Firefox, accessible since the year 2000. To be honest, the word involved probably doesn't do it justice. He's really been a driving force behind Mozilla's accessibility efforts. He's currently working on making the Firefox web browser accessible on Linux and advises the W3C Protocols and Formats committee on dynamic content accessibility.
If you would, to round out the introductions, tell us a little bit about yourselves and what it is that you do. We'll start with Derek.
Derek: Sure. Thanks. I guess the only other thing really to say is that my interest in accessibility stems from my history as a school teacher, where I was really focused on trying to make my lessons and my lesson objectives easily digestible to people with all different learning styles. From there I moved into the web world; accessibility seemed like an obvious fit. Now I spend a lot of time doing training and helping people to understand more about accessibility, and in particular, in the areas we'll talk about today, working on practical problems: making sure that applications that are heavily scripted or use AJAX-type technologies are accessible to as many people as possible.
Jared: Okay. Thank you, Derek. Gez?
Gez: Yes, I am a web accessibility consultant and web developer for TPG and my particular areas of interests are in making applications accessible for people with disabilities.
Jared: Great. Thank you, Gez. Aaron.
Aaron: Thanks for having me on. My interest in this area is not just making software for people with disabilities but to bring a whole new generation of developers involved in open source into accessibility and make accessibility interesting for them and find ways for them to get involved.
Jared: Great. Thank you. All of you will be speaking certainly a lot more as we go through the next hour.
I want to take just a moment and describe what it is we're talking about. When we talk about Accessibility 2.0, what does that mean? It is, in some ways, a play on Web 2.0. Web 2.0 is a bit of a buzzword. Some people don't like it, others do. But it's used to describe the newest generation of web applications that we are seeing. Being that Web 2.0 may be a buzzword, I Googled it, and there were 152 million hits for the term Web 2.0, so it definitely has caught on. A few other terms that we'll probably be using throughout the web cast: one is ARIA, which stands for Accessible Rich Internet Applications, a set of guidelines or protocols in development by the World Wide Web Consortium. We'll be explaining that in more depth in a moment. AJAX is another one that we will be defining and explaining in a moment. So that's really what we mean by Accessibility 2.0. We have this new way in which the web is shifting and working, and what impact does Web 2.0 have on accessibility? That's what we will be discussing today.
I went through real quickly, for those who may not be familiar with Web 2.0 and its rich internet applications, and made a quick list of some of their characteristics, and I'm going to read these -- sorry, I'm just working with some audio issues here. Some of the characteristics of Web 2.0. One is real-time content instead of static content. Another is simplified interfaces, better visual design and higher usability. Next would be advanced technologies and interaction -- so moving beyond just static HTML to more interactive types of elements on the web. User-centric applications, so having these applications focus on the users rather than on some task that the web developer wants to have happen. Standards compliance. Community and social networking. User-generated content -- things like Wikipedia, where the content is coming from the community rather than from one central author. Tagging of content, so providing little tags or pieces of information, metadata, on content; for instance Flickr, where you can provide tags to your photos for searching or categorizing. The last was content aggregation, where you're pulling in content from a variety of sources into one place, and often that content can be user driven, such as the Digg website, which is really a social news site, where the most relevant or important top news items are driven by the community rather than by one person or one group. Those are just a few things I came up with. Maybe I'll ask the panelists if there are any other ways you might describe Web 2.0.
Gez: This is Gez speaking. One of the things that's really good for business about these new types of applications is that traditionally, if a business wanted a software application, they would have to distribute it, and if there were any amendments, they would then have to redistribute the application. An obvious advantage of having these all controlled from a central location is that if there are any updates to the application, they're made in one place. That's what is really appealing to business, because they see a really cheap way of updating applications so that their users get the best of their services at all times, and they don't have to worry about trying to provide updates and hoping that users install them. If they have a browser, they have all they need to run the application. That's why it's so prevalent in the industry at the moment.
Jared: Great. Okay. That centralized location for applications. Anybody else have any other ideas?
Aaron: This is Aaron. I think the instant gratification is certainly a factor, as well as vendors not wishing to be locked into a specific platform. In the future, I think we'll see many of the same sorts of applications that we see on the desktop today written as web applications.
Jared: Very good. That cross platform compatibility of the applications certainly is very powerful. Derek, do you have anything else to add?
Derek: Interestingly enough, I think it ties into what both Gez and Aaron were saying, and something you said earlier, Jared: it all seems to tie in to providing a better user experience. We're talking about creating an application that is highly usable but also highly available. Things like software updates have traditionally been a bit of a pain, not just for the company that has to distribute the updates but for the person that has to install new components or an updated version. Now that type of scenario potentially won't exist, because it will all be seamless. If we have to update a piece of software two days in a row, we don't necessarily have to give our clients that information; it just happens behind the scenes. That's often one of the important things about a lot of applications these days: the overall experience with that particular application just becomes much more enjoyable for our clients.
Jared: Right. Thank you. So Web 2.0 really is a little bit of a nebulous thing. We want to drill down a little bit more and talk about specifics and how these rich internet applications affect accessibility. One fairly predominant thing that cuts across many of these technologies is AJAX. AJAX, again, is a little bit of a nebulous thing, but Derek, if you could take just a moment and explain to us what AJAX is, how it's different from the traditional way in which the web works, and maybe a little bit about some of the implications for accessibility.
Derek: Sure. One of the best ways to describe it is to start with the traditional web pages we understand today: we might have a component on a page like a button or a drop-down list that has values in it. When we click on those, or even just on a link in a page, we cause the whole page to refresh, because we send a new request off to the server and the whole page is forced to be rerendered within the browser. One of the things that AJAX does, without going into all the technical details, is allow us, when we click on a button or a link or make a change in a select box, to send that server request behind the scenes and use (inaudible). Part of the improved usability of this type of technology is that it feels like a much quicker response from the server, and it means that we don't lose our context within that page. When a page gets refreshed, quite often it goes blank and needs to be rendered again. Without losing that context, it actually in some ways makes the application, and what's happening within it, easier to understand.
One of the issues that we have with this, of course, is that for some screen reader users, and potentially people using other assistive technology, the way that updates are made to the page may not be compatible with that assistive technology, so their screen reader may not recognize that part of the page has been updated. Another typical problem is when you click on a button on a page and use AJAX to change something that came earlier in the page; we now have a situation where the screen reader user may not even know that part of the page was changed. So when we get into a situation like that, we need to deal with things on a number of levels. We need to start finding ways of making these small updates to pages compatible with assistive technology, but going beyond that, we need to find ways of notifying people using a screen reader that those changes have occurred in the first place. Even if it's technically compatible, we need to let the user know: something happened up here in this area of the page; you should now go and read that updated content. Some of those issues are not all that easy to solve with the set of tools we have right now.
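The behind-the-scenes request Derek describes is usually made with the XMLHttpRequest object. A minimal sketch of that classic pattern follows; the function name and URL are illustrative, not taken from the webcast:

```javascript
// Fetch a fragment of content without a full page refresh.
// onDone receives the response text so the caller can update
// just one region of the page instead of reloading everything.
function fetchFragment(url, onDone) {
  var xhr = new XMLHttpRequest();
  xhr.onreadystatechange = function () {
    // readyState 4 means the response is complete
    if (xhr.readyState === 4 && xhr.status === 200) {
      onDone(xhr.responseText);
    }
  };
  xhr.open('GET', url, true); // true = asynchronous
  xhr.send(null);
}
```

Because only one region of the page changes, a screen reader's virtual buffer may never be told about the update, which is exactly the problem the panelists go on to discuss.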
Jared: Great. Thank you. Next question is for Gez. Maybe you can expand on what Derek has talked about and explain what some of the biggest issues are for the accessibility of these rich internet applications and AJAX in general, and maybe some things we can do to address their accessibility.
Gez: Yes, I'll try. It's quite a complicated subject. It mainly affects assistive technology such as screen readers. Screen readers tend to be sophisticated pieces of software with quite a sophisticated controller that provides keystrokes for people to interact with HTML. For example, they can navigate by headings, or read a table, or go through lists, and so on. With a regular browser, people without disabilities are able to get an overview of the page very quickly and navigate to a specific element using a pointing device. If we consider people who have mobility problems and who maybe wish to use a keyboard, they are able to navigate to form controls and to interface elements that can receive focus. They have the extra ability to read a page using the up and down cursor keys, and they can also use Home and End to navigate to the beginning or end of the document, or use Page Up and Page Down. Screen reader users need a much richer controller, which is why there are so many keystrokes.
The way screen readers typically provide this is that they take a snapshot of the web page, and this snapshot is known as a virtual buffer. Different screen readers give the buffer specific names; for example, JAWS calls this virtual PC cursor mode, and Window-Eyes calls this browse mode. Within these modes, different keys allow users to navigate around. For example, pressing H will take you to the next heading element, or you can even press a key like 2 to take you to the next level-2 heading element. It enables people to get an overview of what the web page is like.
The problem occurs when you use a technology like AJAX: the off-screen model, the virtual buffer, is only updated in response to the navigate event, and the navigate event is only raised by the user activating a link or button element. Maybe I'll go on a bit further about that. Say, for example, you have a link, and when you activate that link an AJAX call is made in the background, handled by an onreadystatechange event. When that's finished, it will update a certain part of the page. The problem is that the virtual buffer is updated when the link is activated, but there is some latency before the onreadystatechange event fires and updates a particular element on the page. What happens is there's a disconnect between what's physically on the screen and what the screen reader user is aware of in their virtual buffer. They tend to be exactly one step behind. If they activate the link again, they would get the updated content physically on the screen, but they would only be informed of the last update, because the snapshot was taken before the new content was available, because of this latency issue.
Screen readers typically provide different modes. Virtual buffer mode allows users to use keystrokes to control the application. There are also modes like PC cursor mode or non-browse mode. PC cursor mode is a JAWS term indicating that the virtual buffer is not on; in other words, the user is interacting directly with the web application. In Window-Eyes this is referred to as non-browse mode. In this mode, the user is interacting with the web page the way a traditional browser user would, so they have access to the links and other interface elements like form controls, but they are unable to read other elements, like lists. When they use the cursor keys, all that happens is that, physically on the screen, the browser will scroll up and down, but they are not made aware of anything through their screen reader.
There's also another mode in most screen readers called forms mode, which is very, very similar to PC cursor mode, except it's activated when the user wants to enter data. When the form control has focus and they press Enter, that turns off the virtual buffer and the keystroke controller that goes with it. If you imagine, for instance, someone about to enter their forename, and their forename is Harry: if they didn't change the mode, then when they typed H for Harry, they would automatically go off to the next heading. So in order to stop that controller so they can interact directly with the form control, they'll put the screen reader into forms mode by pressing Enter when the form control has focus. They can then press H-A-R-R-Y and it won't do anything other than enter those characters into the form control.
There are also keystrokes people can use to toggle these modes: screen reader users can press INSERT + Z in JAWS to toggle in and out of virtual PC cursor mode, CONTROL + SHIFT + A in Window-Eyes to toggle in and out of browse mode, and CONTROL + 4 in Supernova. If the user is not in the virtual buffer mode, when an update is made through AJAX it's possible to find out what the update is, but only by using specific techniques. Those techniques revolve around the tabindex attribute, which was something introduced by Internet Explorer. If a value of zero is provided for tabindex, then that element is added to the tab order; in other words, if you put a tabindex of zero on a paragraph, then when you're tabbing through the document you can give focus to that paragraph element. Another value you can provide is minus 1. That leaves the element outside of the natural tab order, which is likely to be desirable in Web 2.0 applications, but it does allow you to focus the element with scripting. This technique only works when the screen reader is in PC cursor mode. So from a screen reader user's point of view, the way to empower them is to make them aware that they can update the virtual buffer themselves. For example, in JAWS you press INSERT + ESC to update the virtual buffer, and whatever has been added, they can then access. So if somebody is using a screen reader and they expect something to happen but nothing happens, pressing INSERT + ESC and then re-investigating where they were is likely to help them out, particularly in the current state of web applications, where very few people are providing any accessibility considerations at all. This technique can be very empowering for screen reader users. Realistically, though, it's not an ideal solution.
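The tabindex technique Gez outlines can be wrapped in a small helper that runs after the script has injected the new content. A sketch under that assumption; the helper name is made up for illustration:

```javascript
// Make an updated region programmatically focusable and move focus
// to it, so a screen reader begins reading from the new content.
// tabindex="-1" keeps the element out of the natural tab order but
// still allows element.focus() to be called from script.
function focusUpdatedRegion(el) {
  el.setAttribute('tabindex', '-1');
  el.focus();
}
```

As Gez notes, this interrupts whatever the user was doing, so it is only suitable when a single update genuinely deserves the user's immediate attention.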
It's something you can do if maybe there was just one update, but if you end up with a situation where multiple parts of the page are being updated at the same time, then the technologies we have at the moment are just not capable of providing that amount of information to users. This is where WAI-ARIA comes into it, because WAI-ARIA has a concept called live regions, which enables users to be informed of several updates, or updates at particular points, without interfering with what they are doing at the moment. With the solution we were just looking at, where you focus a particular element when something is updated so that it's announced, you've interrupted what the user was doing at the point something was updated.
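For comparison, a WAI-ARIA live region marks the updating area declaratively instead of moving the user's focus. A hedged sketch using the draft ARIA attributes; the id and text are invented for illustration:

```html
<!-- Updates inside this element are announced without stealing focus;
     "polite" asks the screen reader to wait for a pause before speaking -->
<div id="search-status" aria-live="polite">
  Your search returned 12 results.
</div>
```

Support for these attributes in browsers and screen readers was still emerging at the time of this webcast.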
Jared: Thank you. As you can see, there are some solutions out there. Many of them are maybe a little bit hacky, but there are solutions with scripting to increase accessibility for screen reader users. Again, a definition of a term that Gez mentioned: WAI-ARIA. WAI is W-A-I, the Web Accessibility Initiative of the W3C. And as I mentioned previously, ARIA is Accessible Rich Internet Applications. We will be talking more about that in just a moment. Aaron or Derek, do you want to add anything about what some of the problems are with these applications right now, and some of the techniques or tricks or hacks that can be used to increase their accessibility?
Jared: Sorry to interrupt. I know that in Gmail, which is a very popular Web 2.0 rich internet application, Gmail essentially offers that functionality. The default interface is very dynamic, but you can choose options to disable chat or go to a basic HTML option that uses traditional forms and posting without as much dynamic information. I know that option is fairly accessible to screen reader users, so there are ways to present these options without framing them as just an accessibility option, or "here's where you go if you don't want scripting or have scripting disabled." It really can be a very useful option for your whole audience.
Jared: Derek? Anything else to add there?
Derek: Sure. I think we could talk about this for hours.
Jared: We could.
Derek: I guess the only thing that I would add to this is that, as Aaron said, it's important that we make sure it's folded in and it's all one solution. I think we also need to remember, very broadly, that some of the patterns being used right now in some of these applications are not just problems for, say, a screen reader user. They are problems for somebody using something like voice recognition technology. One of my favorite applications is Flickr; you mentioned that before. I use that for all my photo management; I upload photos and share them there. One of the interesting features in the Flickr interface -- and this is not necessarily an issue with AJAX itself, but with the way the interface is created -- is that if I have a title for a photo displayed up above the image, I can actually click on that title and it dynamically becomes an edit box, an input type="text", where I can change the text for that heading, hit Enter when I'm done, and it will dynamically update. One of the issues we have with that is that this part of the interface is basically hidden. You don't know that you can even click on it until you hover the mouse over top of the words themselves. So that's another example of something that has an impact on users of voice recognition technology, who are not going to recognize that there's something in the interface there that you can interact with, because it looks different. There again, a possible solution, and this may not necessarily be a hack, is to include something on the page like a check box that says "enable all these controls" or "expose all the controls" or "go into edit mode", something that provides us with an alternative way of accomplishing the same thing. It allows us to provide the same functionality, as Aaron was saying.
We want to provide that same type of rich functionality but maybe enable it in a slightly different way.
Gez: This is Gez. Just to add something to both of those comments: when developers are developing these types of applications, they really need to choose the right interactive elements; otherwise the role that is reported is often wrong. For example, you can add all kinds of scripting to things like div, paragraph, span and image elements, but if elements such as links and buttons are used -- particularly buttons, or input type="image" -- you have a lot of flexibility over the rendering, the element reports an appropriate role, and it is quite intuitive to people. So quite often, a lot of the accessibility problems I encounter in Web 2.0 applications come down to the wrong type of element being chosen right from the very beginning.
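Gez's point about element choice can be shown in two lines of markup. A contrived sketch, with an invented save() handler:

```html
<!-- Problematic: a span reports no role to assistive technology,
     is not focusable, and is not keyboard-operable -->
<span onclick="save()">Save</span>

<!-- Better: a real button reports the button role, is focusable,
     and responds to Enter and Space by default -->
<button type="button" onclick="save()">Save</button>
```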
Jared: Very good. Just a follow-up to something that was brought up: the issue of scripting. Nearly all of these next-generation applications, as Gez and Derek mentioned, and many of the solutions, require scripting. So are we to the point where, as developers, we can require scripting as a baseline technology -- something that's required for our users not only to access the content, but also for accessibility itself, for supplementing accessibility and making it better? Because often those accessibility solutions rely on scripting. Are we at that point, or do we still need to rely on -- a little bit of a buzz term here -- graceful degradation, allowing applications to still work if scripting is disabled? That's a question we get quite a bit. Maybe I'll open that question up to you guys if you have any quick responses.
Jared: Any other thoughts there?
Gez: This is Gez. I completely agree with what Aaron said. Personally, I prefer progressive enhancement techniques rather than graceful degradation. There's a slight difference: graceful degradation tends to mean that you try to do everything with scripting and fall back if you can, whereas progressive enhancement means that you start by assuming that scripting is not available and then add it. I think Aaron made a good point: although scripting has been around a long time and we should be at a position where we can rely on it, the fact of the matter is, as we know from discussing the state of AJAX at the moment, we just can't. There are several areas of responsibility here, and one of them lies with the user agents themselves. There's a lot that they can do to make this a lot better and easier; for instance, it would take nothing for all user agents to raise a navigate event after an onreadystatechange event, and that would solve the whole issue of updates being missed by screen readers. There are other issues, which I'm sure Aaron will go into later, about which areas update and how you control that, but it would solve quite a large problem.
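Progressive enhancement as Gez describes it can be sketched like this: the link works as an ordinary link without script, and script, when it runs, upgrades it to an in-page update. The function and callback names are illustrative, not from the webcast:

```javascript
// Attach an in-page loader to a link that already works on its own.
// Returning false from the handler cancels the full-page navigation,
// but only when script actually ran; without script, the plain link
// still takes the user to the same content.
function enhanceLink(link, loadFragment) {
  link.onclick = function () {
    loadFragment(link.href); // fetch and inject the fragment instead
    return false;
  };
}
```

The key design point is that the baseline behavior is defined first and the scripted behavior is layered on top, rather than the other way around.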
Jared: Great. That's a great transition right into Aaron's next question. If you can just take a few minutes and explain to the audience what ARIA - Accessible Rich Internet Applications is, where it came from, what the current support is in browsers and assistive technologies, and explain what are some of the things that we can do with ARIA that are more difficult using existing technologies.
Aaron: Okay. ARIA is a large area with a lot of capabilities. It can address some of the things we talked about before, such as different areas of a page updating at different times. It can address the custom widgets that authors have wanted all along in HTML but didn't have access to, such as tree views, which potentially have a great benefit for accessibility because they allow for progressive disclosure as the user navigates. But because they are typically done with divs and spans and are often not keyboard navigable, they are not accessible at all. ARIA allows you to deal with widgets, along with keyboard navigation, by telling assistive technology what kind of a widget you've developed and what state the widget is in: is this thing focused, is it expanded, is it disabled -- the different states that a screen reader or assistive technology user needs to know about. It can also help describe which areas of the page will potentially be changing and what the interruption policy should be: should the user be interrupted right away, should this be considered a polite change where we wait until the user finishes what they are doing, or should those changes simply not be announced?
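As a concrete illustration of the widget roles and states Aaron mentions, a custom tree item built from a div can declare what it is and what state it's in. A hedged sketch using draft ARIA attributes; the markup and label are invented:

```html
<!-- role tells assistive technology what the widget is,
     aria-expanded tells it the current state, and
     tabindex="0" makes the div keyboard-focusable -->
<div role="treeitem" aria-expanded="false" tabindex="0">
  Inbox
</div>
```

The author's script would still have to handle the arrow keys and toggle aria-expanded when the item opens or closes; ARIA only describes the widget, it doesn't implement its behavior.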
Jared: Great. Thank you. We really could spend a lot of time discussing ARIA and implementing it. There's a very good article on A List Apart, published in the last few weeks, on ARIA and some of its implementation. I would refer you to that. We have several other questions we may get to regarding that.
I would let our listeners know that if you would like to submit a question to the panelists, you can do so on the web cast page. We have had several questions come in, and we appreciate those. I'll go ahead and ask at least one of those right now and open it up to anyone who wants to address it. This is from John O'Rourke. He asks: how close will WCAG 2.0 -- WCAG is the Web Content Accessibility Guidelines, the accessibility guidelines being developed by the W3C -- how close will those guidelines in final form reflect the changes and updates in Web 2.0?
Gez: This is Gez. I have been involved with WCAG 2 for a while, but not recently. From what I have seen, the guidelines are deliberately focused on being technology agnostic and also take into account this new wave of web technology, so from what I've read, I'm very confident that WCAG 2 will provide excellent guidance on creating accessible web applications.
Jared: Great. Thank you. Anybody else? I think that answers it pretty well. And Gez, that's pretty much my opinion as well. Those guidelines are still in development right now, they are still being formed, so we all have an opportunity to participate and influence them if we feel they are not addressing some of these aspects of Web 2.0 as well as we want them to.
Another user-submitted question here. This one comes in from Patrick Burke, from UCLA, the University of California, Los Angeles. He asks, "As the concept of the static page starts to break down, what changes will adaptive or assistive technologies need to make in order to work successfully with dynamic content?" This goes a little bit beyond what we've talked about so far, which is what we as developers need to do. So what needs to be happening on the assistive technology side?
Aaron: This is Aaron. I would like to answer that question. The Mozilla Foundation has been generous, and IBM very supportive of this ARIA work, and we have been working with both proprietary and open source assistive technologies on solutions and designs for what to do in this case of dynamically changing content. This is such a new area, and it's so rife with complexity, that it takes a lot of smart people to point out the flaws in each other's designs. At first, the idea was that we were going to have the authors say which changes on a page are important or not important, and of course everyone realized that no one wants to say something on their page is not important. So what we came up with was modeling a web page after human conversation: certain kinds of interruptions are rude and certain interruptions are polite, and there are other things in between -- assertive, and saying that the change should not interrupt at all. Then there's finally the unknown type, which is when an author did not mark up his page at all. So we have these five classes of politeness for changes. As the browser, we expose those to the assistive technology. What the screen reader needs to do is basically have a queuing mechanism. Things that are considered rude -- that would be something like an error -- it just needs to speak right away, even being willing to interrupt the user. It may need to do that in a different voice, or with an alert sound, in case the user quickly types a key or does something which re-interrupts the thing that interrupted them. For other kinds of changes, it needs to wait until there's an opportunity to speak those changes, and it needs to be able to queue them.
One of the factors that influences whether the assistive technology should in fact speak a change is whether the change was actually caused by the user. There are really two general cases. One is things happening in the world that are updating the page -- for example, a table of sports scores, or maybe a log, or a stock ticker, something like that. The other kind of change is the user doing something with a widget: as they navigate, for instance, through a tree view, it's updating a pane, or as they're navigating through a list of mail messages, it's updating a pane with some new text. Those are user-generated changes, and it's generally less disorienting for the assistive technology to speak user-generated changes right away.
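The queuing mechanism Aaron describes might look roughly like this inside a screen reader. This is a toy model of the politeness levels from the discussion, not a real screen reader API; the class and method names are invented:

```javascript
// Toy announcement queue: "rude" changes interrupt and are spoken
// at once; "polite" and "assertive" changes wait in a queue until
// the user pauses; "off" changes are never announced.
function AnnouncementQueue(speak) {
  this.speak = speak;   // callback that actually voices the text
  this.pending = [];
}
AnnouncementQueue.prototype.announce = function (politeness, text) {
  if (politeness === 'rude') {
    this.pending = [];  // in this toy model, the interruption supersedes queued changes
    this.speak(text);
  } else if (politeness !== 'off') {
    this.pending.push(text);
  }
};
// Called when the user goes idle: flush queued changes in order.
AnnouncementQueue.prototype.flush = function () {
  while (this.pending.length) {
    this.speak(this.pending.shift());
  }
};
```

A real implementation would also distinguish user-generated changes, which, as Aaron notes, can usually be spoken immediately.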
So in essence, we have tried to categorize all sorts of common use cases. I could talk about it for a long time. There are some more specific things where we try to categorize very common patterns such as timers and tickers and logs and status indicators, and then there are the general politeness levels. Then the Mozilla Foundation gave a grant to the developer of Fire Vox to write up a description of how to handle these changes, and he wrote a set of test cases which are available online. Am I going too long?
Jared: Could you take a moment and explain what Firevox is for those that may not be familiar?
Jared: No. That was very good. Aaron or Gez? I'm sorry, Derek or Gez, any follow-up in regards to what the assistive technologies need to be doing?
Gez: This is Gez. Not necessarily what assistive technology needs to be doing, but I'd like to see mainstream browsers like Internet Explorer showing the same commitment that Mozilla is showing: at the very least, raising the events that enable these kinds of assistive technologies, and having support for WAI-ARIA. Realistically, I've spent a lot of time trying to make applications accessible now, and the only thing I can see that can realistically be achieved in the very near future is WAI-ARIA. I would like to see more of a commitment from the user agents rather than just the assistive technology manufacturers - obviously we're going to need that as well. I would just like to see all user agents involved in this.
Jared: Great. Quick question, and this question comes in from Steve Leonard from the Institute of Occupational Safety and Health at the Centers for Disease Control. I'll ask this of Derek. If you want to take a stab at this, the question is, "how will this affect rich media such as video, Flash, online video, captioning and so forth?"
Derek: Well, that's an interesting one. (inaudible) We've had this belief that other types of rich media - video and Flash and Quicktime movies, that sort of thing - are not necessarily accessible because they are some proprietary technology that's not just plain old HTML. One of the things that I have been wondering for a while - maybe the best way of answering your question - is whether the WAI-ARIA work could enable us to embed a movie in a page along with a live transcript that sits there in plain HTML, carries the time-encoded information, and renders the captioning at the same time as the movie, using that timing information. I know that's outside of the scope of where ARIA is right now, but would something similar enable us to (inaudible) embed our captioning with our movie files, so the captioning would be right beside the Flash movie area or the Quicktime or Real media or whatever it is? Is there a way that we can use plain text in a browser with HTML to synchronize that type of information? That would obviously be something done through scripting - pulling that timing information in and having text (inaudible) in the webpage so that it could be resized, so that if people want to resize it, they could very easily (inaudible) do so, which some may not have the ability to do with captions that are embedded inside the movie file.
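[Editor's note: the idea Derek sketches - captions kept as plain HTML text and synchronized to playback by script - could look something like the snippet below. The cue format and function names are hypothetical, invented for illustration, not part of ARIA or any caption standard.]

```javascript
// Hypothetical caption cues: start/end times in seconds, plain text.
const cues = [
  { start: 0,  end: 4,  text: "Hello and welcome to the webcast." },
  { start: 4,  end: 9,  text: "Today we are talking about ARIA." },
  { start: 9,  end: 15, text: "Let's meet our panelists." },
];

// Return the caption that should be visible at a given playback time,
// or an empty string between cues.
function captionAt(seconds, cueList) {
  const cue = cueList.find(c => seconds >= c.start && seconds < c.end);
  return cue ? cue.text : "";
}

// In a browser this would run on the player's timeupdate event and write
// the result into an ordinary, resizable HTML element beside the movie,
// so users can enlarge the text - something burned-in captions prevent.
console.log(captionAt(5, cues)); // "Today we are talking about ARIA."
```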
Jared: Great. Thank you. I know we have received some comments regarding the audio quality, and I do have to apologize. I know Derek and Gez at times have been a little bit difficult to understand. We do apologize for that. That's in many ways a reflection of people halfway across the world using various technologies that we have patched together for this. I can let our participants know that we will have audio archives on the website shortly. We will also have a transcript that we'll be providing. So for those of you that were experiencing that or had problems understanding some of this, we'll do our best to transcribe this and get it on to the website shortly so you'll be able to go back and review it. We also have a list of resources on the web cast web page, and we will add to that any other resources that have been mentioned here or anything the panelists would like to provide to you.
Let's go with one -- maybe a final question here. We'll see how things go. We have talked a lot about a lot of technologies, a lot of solutions - scripting solutions and live regions and ARIA in general. For each of you, what would be... if you could give a developer a blueprint or a way they can start to implement accessibility right now into the rich internet applications, what would your general advice be to developers right now? Let's start with Gez.
Gez: Yes, my advice would always be to choose the interface element that most closely represents what you are trying to achieve. Where possible, try to use things like input type="image", or buttons, or links, because those things have relatively well understood roles, and that makes things easy for people to understand. Things like mouseovers, personally I would avoid. They are quite unique to the web: you don't often find a desktop application that responds to incidental movement such as moving the mouse around, but it's quite prevalent on the web for some reason. I don't know why. I would try to avoid them. But if they absolutely must be used, then at least provide hidden links that, when they receive focus, do the equivalent of what the mouse hover would do, and when they lose focus, do the equivalent of what the mouse out would do. The other major thing we tend to recommend is that if you are going to provide error messages and hope these are going to be accessible to people using assistive technology, you make them programmatically associated with the form control. If you put out an error message, make sure it's in the label, so that when the person lands in the form control, they can find out what the problem is - such as "please enter your surname; must be over 5 letters long." Those are probably the major things that we would recommend.
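[Editor's note: a minimal markup sketch of Gez's last recommendation - the error text lives inside the label, so it is programmatically associated with the control and announced when focus lands on the field. The field name and message are illustrative; in current practice aria-describedby is a common alternative way to make the same association.]

```html
<label for="surname">
  Surname
  <span class="error">Error: please enter your surname
    (must be over 5 letters long)</span>
</label>
<input type="text" id="surname" name="surname">
```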
Jared: Thank you. Aaron?
Aaron: My recommendation would be - I would agree that people need to deal with technology as it stands now, which means trying to look for techniques that solve the problems you can for as many people as possible. But if you're one of those developers working for an organization that simply needs to do something next generation - it has to be snazzy and attract users and be really fantastic, and making the UI less interesting for the sake of accessibility is simply not an option - then I would say look into ARIA. Now is the chance to have an impact on the standard and on the implementation of the standard. That opportunity is now if you want to help shape it.
Jared: Thank you. Derek.
Derek: I guess to add to what Aaron just said, it's important for all developers to always try to come up with the simplest possible solution to the problem. As Gez said, one example is making sure you're creating the interface (inaudible). E-mail for some reason does this: they have an e-mail link that is not actually a link - it's a span that is styled to look like a link - so it's not focusable, and we cannot actually activate it or do anything with it. You're dealing with a very plausible scenario there. The other thing that is important for people to remember is to provide context - say you have a 30-step wizard, or there's something going on on a page where you're taking a process out of context. Make sure you're doing something (inaudible) for that section of where you are so that the context is there - I know I'm on step three of four. Use headings throughout an application so the user is able to quickly scan the page with a screen reader and get around the page more easily. Things like that are quite critical to the overall (inaudible), and overall they ensure that the user experience is good for everybody.
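[Editor's note: a small markup sketch of Derek's two suggestions - a heading per section that screen reader users can scan to, carrying the step-in-process context. The wizard step and form contents are invented for illustration.]

```html
<!-- The heading states both the position in the process and the section
     name, so a screen reader user jumping by headings keeps their context -->
<h2>Step 3 of 4: Shipping address</h2>
<form action="/checkout/shipping" method="post">
  <!-- form controls for this step go here -->
</form>
```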
Jared: Great. Thank you. Thank you to all of our panelists, and thank you to those of you that sent in questions. I'm sorry we were not able to get to all of your questions. We will be posting the transcript and audio archives to the web cast web page. Our next web cast will be in the fall; we're going to take the summer off. The National Center on Disability and Access to Education, with our partner WebAIM, has received grant funding to look at cognitive disabilities and web accessibility, so our next web cast will likely be on cognitive disabilities and some of the research and tools we'll be developing in that area. Again, thank you to our panelists and to those of you who have tuned in. We will see you again this fall.