
Multilingual Computing for the Visually Impaired

Written by Libor Safar | Jan 2, 2008 3:00:00 PM

Imagine yourself blindfolded (your travel eye shades will do), seated in front of your computer screen, your hands on the keyboard, ready to work. What do you do? This may seem daunting, but today’s technologies turn computers into a powerful tool and an indispensable window to the world for millions of visually-impaired people worldwide.

The arrival of computing has had a major impact on how the visually-impaired can live their lives and communicate – with each other as well as with the outside world – and has extended and enriched their options enormously. Importantly, technology enables blind and visually-impaired people to go to university, enjoy life-long learning and hold jobs that were previously unimaginable.

Drivers for accessibility

For some, the size of the visually-impaired community – which includes blind as well as partially-sighted people – may come as a surprise. The World Health Organization (www.who.int) estimated in 2002 that there were over 161 million people worldwide who were visually impaired; over 124 million people had low vision and 37 million were blind. This excluded people with refractive errors, such as myopia (short-sightedness) or hyperopia (long-sightedness), which are a frequent cause of visual impairment and which in many cases can lead to blindness if not corrected.

So making products or content accessible to the visually-impaired not only enables them to live more fulfilling lives and better realize their potential; it also makes good business sense.

There are legal pressures driving accessibility too. For instance, in the United States the Section 508 Amendment to the Rehabilitation Act of 1973 mandates that web content maintained by the federal government must be made accessible to people with disabilities. The same Section also prevents federal agencies from buying electronic and IT products that are not accessible to people with disabilities.

Section 508 had been law for a number of years, but it was the amendment passed in 1998, which gave the right to sue agencies in cases of non-compliance, that made accessibility a more serious concern. The fact that public procurement may account for more than a quarter of all IT equipment purchases in the USA was certainly a factor. Similar laws apply in other countries too.

On the other side of the Atlantic, the European Commission is taking steps to promote accessibility in areas such as public procurement, certification of accessible products and services, and accessibility of public web sites, among others, under the heading of eAccessibility. And there is already the recently amended European Union directive (Article 56a of Directive 2001/83/EC) that stipulates that product names should be labeled in braille on pharmaceutical packaging.

It also states that information on pharmaceutical products contained in Patient Information Leaflets (PILs) must – on request – be made available in formats accessible to the visually-impaired. This means a suitable print format for the partially sighted and either a format perceptible by hearing (CD-ROM, audiocassette, etc.) or braille for the blind.

Key assistive technologies

Screen-readers play a major part in enabling the blind, in particular, to use computers. A screen-reader is a software application which runs in the background, monitors the textual as well as non-textual information displayed in the GUI on the screen, and delivers this to the user – either as voice via a speech synthesizer, in a tactile form via a refreshable braille display (described below), or through a combination of the two.
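To make that division of labor concrete, here is a minimal Python sketch of the pipeline – purely illustrative and not based on any real screen-reader's architecture; the event structure, class and function names are all hypothetical:

```python
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class UIEvent:
    """A textual event captured from the GUI (hypothetical structure)."""
    source: str  # e.g. "keyboard_echo", "screen_echo", "menu", "dialog"
    text: str    # the text to deliver to the user

class ScreenReaderSketch:
    """Toy router: forwards event text to speech and/or braille output."""

    def __init__(self, speak: Callable[[str], None], emboss: Callable[[str], None]):
        self.speak = speak    # stand-in for a speech synthesizer
        self.emboss = emboss  # stand-in for a refreshable braille display
        # A crude verbosity setting: which event sources are announced at all.
        self.verbosity = {"keyboard_echo", "screen_echo", "menu", "dialog"}

    def handle(self, events: List[UIEvent]) -> None:
        for event in events:
            if event.source in self.verbosity:
                self.speak(f"{event.source}: {event.text}")
                self.emboss(event.text)

# Usage: stand-in output channels that simply print.
reader = ScreenReaderSketch(speak=lambda s: print("SPEECH:", s),
                            emboss=lambda s: print("BRAILLE:", s))
reader.handle([UIEvent("menu", "File"), UIEvent("keyboard_echo", "h")])
```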

In essence, screen-readers enable visually-impaired people to operate a computer almost normally, even if somewhat less comfortably – though not necessarily less efficiently: proficient blind users can actually operate computers faster and more effectively than sighted users. This is because in many situations, the visual display can actually be distracting and in reality slow sighted users down.

In practice, a screen-reader will make accessible what the user types (the keyboard echo); the information or text that is displayed on the screen (the screen echo); information from the user interface (such as menus and menu items, dialog box options, messages, edit fields, icon names, or window titles); and any current status information. Each screen-reader will normally have a wide range of advanced navigation options, keystrokes and shortcuts to accelerate working in this environment.

Such a system not only communicates what is on the screen, it also features enhanced functionality for specific software such as office applications, browsers, instant messaging programs, Adobe PDF files, etc. It can equally provide information on actions which take place with non-textual elements, such as graphics or GUI objects. In addition, numerous speech or braille verbosity options exist which determine how much feedback is provided, and in which way.

For instance, advanced verbosity options for Microsoft Excel can inform you when data in the current cell is cropped or overlaps other cells, or it can detect and report the formatting styles associated with a font, including text color, background text color, and font attributes. In Microsoft Word, a skim reading feature enables a user to quickly browse through long documents by reading only the first part of each paragraph.

Screen-readers today typically support multiple languages. For instance Freedom Scientific’s JAWS for Windows, probably the most widely used screen-reader worldwide, comes equipped with a synthesizer for 10 languages (including variants such as US/UK English, Castilian/Mexican Spanish and French for France/Canadian French), and enables the plug-in of third-party synthesizers for additional languages. JAWS has been officially localized into 17 languages to date.

The same multilingual speech synthesizer that is behind JAWS – ETI-Eloquence from Nuance – is also built into another popular screen-reader, Window-Eyes from GW Micro. Eloquence is currently being gradually replaced by Nuance’s RealSpeak Solo, which supports 31 languages and dialects and more than 40 voices. Similarly, another frequently-used screen-reader, Hal from Dolphin, features both the Eloquence and RealSpeak synthesizers, and along with its own Orpheus set of voices is available in over 20 languages including Welsh and Icelandic.

Refreshable braille displays are a natural complement and provide tactile output from screen-readers. Simply put, these hardware devices – typically placed in front of a standard keyboard – display the text produced by the computer screen-reader as a line of dynamically-refreshed braille characters. These are represented by dots raised mechanically through holes to denote a specific braille character. With some devices, the dots that make up the corresponding character vibrate instead.

Users read the braille characters using their fingers, and once they finish reading a line they can refresh the display and read the next one. A typical braille display will be able to show one line of 18 to 84 braille cells and will come with predefined keyboard commands and buttons that will partly replace those on the computer keyboard, such as the tab key, cursor keys or mouse clicks. This makes working with the braille display more efficient – users need to switch between the display and the keyboard less often.

One advantage of the braille display is that with text documents, it constantly indicates the position of the cursor at the given character until the user changes the position and hence the displayed text. This makes working with the braille display closer to the normal experience sighted users have. With speech output, on the other hand, the position is read out just once or on demand.

While computers may be controlled perfectly well using screen-readers alone, the combination with a refreshable braille display is advantageous in many scenarios. For instance, when proofreading texts, an error can easily be missed when only speech feedback is used; the braille display enables proofreaders to identify typographical errors much more easily. Blind people working in call centers can use the braille display to read while speaking with their customers on the phone.

Last but not least, while various types of voices can normally be selected and their quality is improving across languages, they often still sound somewhat synthetic and unnatural to the human ear, especially when listened to for longer periods of time.

Complementary tools

These two key assistive technologies are complemented by several others that extend the use of computers by the visually-impaired. Magnifiers facilitate the use of computers by those who have a degree of visual impairment but are not fully blind. Magnifiers enlarge the operating system environment or a part of the screen by as much as 36 times, with the magnified area determined by the position of the mouse cursor or the insertion point. Some magnifiers also feature speech output thanks to built-in synthesizers.

A number of applications, called braille translators, exist to support the conversion of text into braille (or vice versa) and its subsequent printing on special braille (impact) printers called embossers. This braille translation software is required because the conversion to braille is not straightforward. Many languages feature contractions (described below), so there is often no one-to-one relationship between a printed character and a braille character.

Formatting is another area of complexity and one where usage differs depending on the language. Braille translation software will ensure that the printed braille document follows the selected braille standards or customs that exist for the given language. In addition, this software can also convert special symbols such as mathematical or scientific notations to the corresponding Nemeth braille code.

Overall, braille embossers, which produce hard-copy braille prints, have enjoyed major growth over the years. The possibility of interline printing, where embossed braille appears together with ink print (produced by a laser printer), enhances the sharing of printed documents between the visually-impaired and the sighted. This is extremely useful in proofreading or, for instance, in teaching, so that sighted teachers can use study materials printed in braille and ink, regardless of whether they themselves read braille.

Many braille translation products today also support adding and printing tactile graphics or images. However, when printed with a braille embosser, the resolution and options available tend to be rather limited. More sophisticated graphics and diagrams can be printed using special fusing devices.

With these, images printed or drawn onto special paper rise under heat to form tactile diagrams which can be interpreted by touch. Thermoform machines, in turn, use a vacuum technology for creating tactile relief diagrams, such as maps, signs or charts, which can be used by visually-impaired people.

Accessibility in operating systems

While a whole suite of specialized tools exists for the visually-impaired, there is good and growing support, and a wide range of accessibility features, included in standard off-the-shelf commercial software, including operating systems. These find utility especially with the partially-sighted, for whom they provide a readily available way of using computers. Blind users typically rely on specialized software.

Microsoft Windows, for instance, has for a number of releases now come equipped with a good set of basic assistive tools, such as the magnification program Magnifier, the text-to-speech program Narrator, and a centralized way to manage accessibility via the Ease of Access Center (labeled Utility Manager prior to Windows Vista).

Although these built-in tools provide only limited functionality compared with specialized products, they are useful and enable any visually-impaired person to use Windows immediately, and can help them during installation of their selected special tools.

In addition, Windows Vista now includes speech recognition that enables users to control the operating system and applications, or dictate documents, by voice. In the current version, this feature supports US and UK English, German, French, Spanish, Japanese, Traditional Chinese and Simplified Chinese.

At first, this may seem an attractive tool for the blind, but in reality the use of speech recognition is in general not widespread. This is partly because the technology is still at a relatively early stage of development, but mainly because it is so much less efficient than a screen-reader combined with a braille display.

Figure 1 Accessibility options available in Mac OS X VoiceOver utility.

Similarly, Apple’s Mac OS X Tiger release hosts a number of features for the visually-impaired, clustered in the VoiceOver utility. This provides screen-reader functionality, magnification options and a range of keyboard commands for navigating the interface.

These are improved further in the most recent Mac OS X Leopard version, released in October 2007. The new features include a virtual braille display, which is a visual representation of braille output on screen along with an English text translation, and the availability of a braille display while installing or upgrading the operating system.

Accessibility in Adobe Acrobat

Adobe Acrobat products are one example of non-specialist applications where accessibility is today well developed. This makes PDF documents friendly for visually-impaired users. The support includes tools for authors to create and distribute content optimized for accessibility, and tools for users with blindness or low vision to easily create and read PDF documents in multiple languages.

Figure 2 Reviewing and modifying the reading order in a PDF document with tags added for accessibility.

A key feature is the support for generating tagged PDF files, where tags indicate structural elements of a document such as headings, tables, text blocks, graphics, titles etc., similar to XML or HTML tags. As a rule, tagged documents can be accessed and navigated more easily by the blind and the tools they use, thanks to the logical document structure and the defined reading order.
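As a rough illustration of why tags matter – a hypothetical sketch, not Adobe's actual data model – the following Python snippet models a simplified tag tree and walks it depth-first to recover the reading order that a screen-reader could then follow:

```python
from dataclasses import dataclass, field
from typing import List

@dataclass
class Tag:
    """A node in a simplified logical structure tree (hypothetical model)."""
    role: str                    # e.g. "H1", "P", "Figure"
    text: str = ""               # text content, or alt text for figures
    children: List["Tag"] = field(default_factory=list)

def reading_order(node: Tag) -> List[str]:
    """Depth-first traversal yields content in the defined reading order."""
    items = []
    if node.text:
        items.append(f"[{node.role}] {node.text}")
    for child in node.children:
        items.extend(reading_order(child))
    return items

doc = Tag("Document", children=[
    Tag("H1", "Annual report"),
    Tag("P", "Sales grew in all regions."),
    Tag("Figure", "Bar chart of sales by region (alt text)"),
])

for line in reading_order(doc):
    print(line)
```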

In the absence of a source file or the authoring application, Acrobat has a feature to automatically add tags to an already existing untagged PDF document and optimize it for accessibility. Tagged PDF documents can then be visually reviewed and their logical reading order modified so that the content can be read by assistive technologies more effectively.

However, the support does not stop here. One feature will quickly assess a document’s accessibility when it is used with an assistive technology and inform the user whether tags are present. A fuller checker will then verify compliance with a broader set of accessibility standards. It also creates a detailed accessibility report which presents the results of the checks and steps for repairing the accessibility errors found in the PDF file, measured against several of the current major accessibility guidelines.

This includes compliance with provisions of the U.S. Section 508 guidelines concerning accessible web content described earlier, and the W3C Web Content Accessibility Guidelines (WCAG). Adobe Acrobat also includes a built-in text-to-speech feature that will read out the PDF file content directly from Adobe Acrobat, using the standard Microsoft Windows or Mac OS X text-to-speech functions.

Braille and its use in various languages

The development and use of the braille system internationally provides an interesting parallel to the story of character encoding systems for computers, which has culminated in today’s Unicode Standard.

Braille is not really a language; it is rather a coding system which enables languages to be transcribed so they can be written and read by the visually-impaired. It is a series of raised dots that can be read with the fingers by the blind, or normally with the eyes by the sighted. Six to eight dots (raised or not) form one braille character, or braille cell.

Figure 3 A braille translator in action and the options of available national braille standards.

There are also other systems in addition to braille, for instance the Moon system. Rather than using dots, this uses a set of embossed symbols which resemble simplified Roman letters. This system may be easier to learn for those who became blind in the course of their lives, but because using it is considerably slower than braille, its popularity is small and actually falling, and it is confined mostly to England, where it originated. In reality, braille is the dominant system.

Even today, there is no one common braille standard that applies worldwide. Since its development by the Frenchman Louis Braille in 1821, the system has been spread globally by individual national organizations for the visually-impaired, which adapted it to their local printed writing systems. There has been little contact between these national institutions, and there have been no generally accepted rules for this local adaptation. As a result, national standards tend to differ.

On the positive side, barring some exceptions, corresponding Latin characters are transcribed to the same braille characters in different languages, and even the phonological equivalent in another writing system gets transcribed using the same braille character.

For instance, the Latin a is transcribed to braille the same way as a in the Cyrillic alphabet, alpha in the Greek alphabet, and aleph in Hebrew. This is the case even if their positions in their respective alphabets differ. This happens with the Greek gamma, which is the third letter in the Greek alphabet but transcribes the same way as the Latin letter g (seventh in many Latin alphabets). It is in the letters beyond the basic Latin alphabet that differences appear and a need arises to form braille combinations.

In its basic 6-dot form, a braille cell is made up of six dot positions, arranged in two parallel columns of three dots each. Combinations of these dots can create a maximum of 64 characters (in practice some combinations are not used), which of course is not sufficient.

Many standard characters in braille are therefore created using a combination of several basic characters, where the main character is preceded by a prefix which modifies its meaning. It is in this area – punctuation, diacritics, numbers and special symbols – that the national institutions for the blind work fully independently. In some languages, there is not even a clear national standard.

In the 1980s, the extended 8-dot code was created, which has two parallel columns of four dots each, with the bottom two dots playing a role similar to the prefixes in 6-dot braille. This in theory enables up to 256 combinations, and the system is used as a rule in refreshable braille displays. It has, however, never been fully adopted in embossed printing on paper. Most national braille standards deal only with the 6-dot braille.

Some of the most detailed rules have been published by the Braille Authority of North America. The Unicode Standard encodes the complete set of these 256 eight-dot patterns, which also includes the 64 dot patterns used in six-dot braille. They are encoded purely as symbols – the standard encodes only their shapes – and the association of letters to patterns is left to other standards.
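The encoding itself is simple: each of the eight dots sets one bit above the base code point U+2800 of the Unicode braille block, giving exactly the 256 patterns mentioned above. The short Python sketch below builds a braille character from a list of dot numbers:

```python
BRAILLE_BASE = 0x2800  # start of the Unicode "Braille Patterns" block

def braille_char(dots):
    """Return the Unicode braille pattern for the given dot numbers (1-8).

    Dot n sets bit (n - 1), so dots 1, 4 and 5 give U+2819, for example.
    """
    mask = 0
    for dot in dots:
        if not 1 <= dot <= 8:
            raise ValueError(f"invalid dot number: {dot}")
        mask |= 1 << (dot - 1)
    return chr(BRAILLE_BASE + mask)

# The 64 six-dot patterns are simply the subset with dots 7 and 8 unset.
six_dot_patterns = [chr(BRAILLE_BASE + m) for m in range(64)]
print(len(six_dot_patterns))       # 64
print(braille_char([1]))           # dot 1 alone - the cell used for the Latin a
print(braille_char([1, 2, 4, 5]))  # dots 1,2,4,5 - the cell used for the Latin g
```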

What adds efficiency, but also complexity, to using braille is word contractions. These have been developed for English and a few other languages. In the uncontracted form (also called Grade 1 braille), one braille character generally corresponds to one printed character when transcribed. There are simply the 26 characters of the alphabet and various punctuation symbols such as the period and comma, but no abbreviations or contractions. This form exists across the languages that use the Latin alphabet.

This form of transcription makes for slow reading, and often makes publications in braille extremely long and bulky. Normally, one English printed page would result in about seven pages of braille print.

So contracted braille (also called Grade 2 braille in the case of English) was developed in several languages; it serves as a kind of shorthand for words or parts of words. Note that there are some slight differences in contractions and formatting even between North American English and UK English. In English, Grade 2 braille contains 189 different letter contractions and 76 short-form words, which are abbreviated spellings of common longer words, and is considered to be the standard form of literary braille.
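As a toy illustration of what contraction does to document length – using a handful of real single-cell word signs from English Grade 2 braille, though the full tables are of course far larger – the following Python sketch counts braille cells with and without contractions:

```python
# A few single-cell word signs from English Grade 2 braille
# (illustrative subset only; the full contraction tables are much larger).
GRADE2_WORDSIGNS = {
    "and":  "\u282f",  # dots 1-2-3-4-6
    "for":  "\u283f",  # dots 1-2-3-4-5-6
    "of":   "\u2837",  # dots 1-2-3-5-6
    "the":  "\u282e",  # dots 2-3-4-6
    "with": "\u283e",  # dots 2-3-4-5-6
}

def grade1_cells(word: str) -> int:
    """Uncontracted (Grade 1): roughly one braille cell per printed letter."""
    return len(word)

def grade2_cells(word: str) -> int:
    """Contracted (Grade 2): a whole word may collapse to a single cell."""
    return 1 if word.lower() in GRADE2_WORDSIGNS else len(word)

sentence = "the cat ran with the dog and the fox".split()
print(sum(grade1_cells(w) for w in sentence))  # cells needed without contractions
print(sum(grade2_cells(w) for w in sentence))  # cells needed with word signs applied
```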

German, for instance, officially has three braille grades. The uncontracted Basisschrift is an equivalent of English Grade 1 braille; Vollschrift is like Grade 1 braille but uses eight contractions (for au, eu, ei, ch, sch, st, äu and ie); the third, Kurzschrift, is the German version of the fully contracted Grade 2 braille code.

Figure 4 Examples of braille contractions and short-form words in English and German.

In a multilingual environment, the different codification in various languages can cause practical problems. There are a good number of characters that look the same in various languages; they may have the same meaning and the same Unicode code point, but their braille representation looks different. It is also worth mentioning that, all in all, the transcription of characters in any language into braille is ultimately manageable – there are hundreds or in some cases thousands of units at most. It is the transcription rules for the formatting and structure of documents in braille that are much more complex.

Still a long way to go

There is no question that assistive technologies have made a large amount of information and knowledge accessible to visually-impaired people compared with what the situation may have been just a few decades ago. However, contrasted with what is available to the sighted, only a fraction of information is accessible to the visually-impaired. Why is it so?

Much of what exists today is not easily amenable to conversion into an alphanumeric format that could be well processed by a screen-reader and then either delivered by a synthesizer or converted to braille. This applies especially to the following types of documents:

  • Purely graphical or visual content, such as movies, videos, galleries, photo galleries, old prints, manuscripts etc.
  • Partly graphical content, such as comics, cartoons, charts, maps, posters, folders, technical standards, and other documents that combine text with graphics.
  • Documents that utilize some non-textual information to indicate the structure and sense of individual elements, such as spreadsheets with merged cells, textbooks that use text placed over the page rather than as a linear flow, presentations, chemistry, physics or mathematics texts, or musical notation.
  • Documents using letters with no current alphanumeric equivalents such as ancient scripts or historic works.

Even if the content as a whole is available in an appropriate format, there are a number of pitfalls that can prevent its practical usage by visually-impaired people. For instance, a simple combination of two different alphabets (say Latin and Cyrillic) may render a document practically inaccessible. And a simple usage of two languages in a document, for instance in dictionaries, presents a challenge for speech output. The sad fact is that most document types are impossible, or at least very difficult, to code in braille.

Blind-friendly web

The Internet has certainly become a great source of information for the visually-impaired, and one of the strongest motivators for them to learn to use the computer assistive technologies that exist today.

For the blind to be able to access and read web sites correctly, the sites need to take into account the way the blind perceive information online and access it using assistive tools. The blind can only get information which is available as text. Textual information is perceived only linearly – line by line – so blind internet users do not get a global view of what is on the page. For navigation through a web page, only the keyboard and hot keys are used. And partially-sighted users using a magnifier can at any given time see only a small section of a web page.

Regrettably, there are currently still relatively few sites that are well optimized or made accessible for visually-impaired people. Those that are often follow the accessibility guidelines developed by the World Wide Web Consortium (W3C).

While many national sets of guidelines and best practices for making blind-friendly web sites exist, the W3C Web Content Accessibility Guidelines (WCAG) have become widely accepted. The guidelines are currently in their first version (WCAG 1.0), adopted in 1999; the second version is at the working draft stage.

For each guideline, the recommendations list a number of checkpoints (e.g. provide a text equivalent for every non-text element). Each checkpoint is assigned a priority level based on its impact on accessibility. Priority 1 checkpoints are the basic requirements and must be met, or else the information will not be accessible to one or more groups of users. Priority 2 checkpoints should be satisfied, otherwise the information will be difficult for some users to access. Satisfying Priority 3 checkpoints will improve access to web documents for many more users.

The extent to which the content meets the various priority checkpoints determines one of three levels of conformance. At level “A”, all Priority 1 checkpoints are satisfied; at “Double-A”, all Priority 1 and 2 checkpoints are satisfied; and at “Triple-A”, all Priority 1, 2 and 3 checkpoints are satisfied.
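The conformance rule itself is mechanical, as the small Python sketch below shows (the input format here is invented purely for illustration – it simply records whether all checkpoints of a given priority are satisfied):

```python
def conformance_level(priorities_met):
    """Map satisfied WCAG 1.0 priority checkpoints to a conformance level.

    `priorities_met` maps a priority (1, 2 or 3) to True when every
    checkpoint of that priority is satisfied.
    """
    if not priorities_met.get(1, False):
        return "not conformant"  # some Priority 1 checkpoints fail
    if not priorities_met.get(2, False):
        return "A"               # all Priority 1 checkpoints satisfied
    if not priorities_met.get(3, False):
        return "Double-A"        # all Priority 1 and 2 checkpoints satisfied
    return "Triple-A"            # all Priority 1, 2 and 3 checkpoints satisfied

# Example: a site meeting all Priority 1 and 2 checkpoints, but not all of Priority 3.
print(conformance_level({1: True, 2: True, 3: False}))  # Double-A
```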

In practice, web sites fail to meet basic accessibility guidelines most often in these areas:

  • Missing or inappropriate alternative text for graphical elements. Without it, there is no way for a blind user to determine whether a graphical element is merely an illustration or a clickable image.
  • Menus that are not accessible using a keyboard. If, for instance, only JavaScript is used for menus, blind users will not even be able to find out that there is a menu.
  • Inability to change font size using the browser.
  • Scrolling and changing frame sizes is disabled. When magnification is used, some information is then simply not accessible.

Conclusion

Over the past years, a whole spectrum of assistive technologies has been developed to serve the needs of the visually-impaired. Their practical utility, however, depends on the way information is presented and made available – this can help or hinder its accessibility. Equipped with some basic knowledge about how the blind communicate and use computers, we can enrich their experience of using and accessing content or products in any language.


The author thanks Petr Peňáz and Svatoslav Ondra from the Teiresias Support Center for Students with Special Needs at Masaryk University in Brno, Czech Republic for sharing their practical experiences in the preparation of this article.

This article was originally published in MultiLingual magazine, issue #93.