Read at: http://www.gizmag.com/orcam-aids-visually-impaired/27784/
The OrCam is a small camera linked to a very powerful wearable computer. It sees what you see and, through your finger-pointing, understands what information you seek, relaying auditory feedback through a bone conduction earpiece. Using an intuitive user interface, the device can read text, recognize faces, identify objects and places, locate bus numbers and even monitor traffic lights.
Today’s technological world is producing a plethora of mainstream devices and software that allow people with reduced vision to be more independent. Smartphones, for example, with their built-in cameras, are a boon to the visually impaired: using text-to-speech and SayText software, the user can photograph a page of text and have the phone read it back. Enhanced contrast and magnifier software are particularly useful smartphone features, but perhaps the most beneficial are the intelligent personal assistants/knowledge navigators now built into mobile operating systems, such as Apple’s Siri and Android’s Sherpa.
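As a rough illustration of that photograph-and-read-back flow, the minimal sketch below chains an OCR step to a text-to-speech step using the open-source pytesseract and pyttsx3 Python packages. This is an assumption for illustration only; it is not the SayText software mentioned above, and the image file name is hypothetical.

# Minimal sketch of a photograph-and-read-back flow, assuming the
# pytesseract (OCR) and pyttsx3 (text-to-speech) packages are installed.
# Illustrative only; not the SayText software mentioned in the article.
from PIL import Image
import pytesseract
import pyttsx3

def read_photo_aloud(image_path):
    """Extract text from a photographed page and speak it back."""
    text = pytesseract.image_to_string(Image.open(image_path))
    engine = pyttsx3.init()
    engine.say(text)
    engine.runAndWait()  # blocks until speech has finished
    return text

if __name__ == "__main__":
    read_photo_aloud("newspaper_page.jpg")  # hypothetical sample photo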
OrCam, on the other hand, is designed with substantially more processing muscle. It is essentially a pocket-sized portable computer connected to a camera that clips to the user’s glasses with a tiny magnet – similar in appearance to Google Glass, yet considerably more powerful.
The system incorporates a bone conduction earpiece which conveys text-to-speech output, or descriptions of the object the wearer is pointing at. Bone conduction transducers do not block outside sound, yet are excellent at maintaining sound clarity in noisy environments. Currently the device works best in normal daylight, although with the aid of a flashlight it can still function in darker environments.
By just pointing with their finger, the user can get OrCam to understand what information they need, whether it's to read a newspaper article, catch a bus or cross the road. Even faces and places are continuously scanned and recognized. OrCam will tell the user when it sees a face or a place that is stored in its memory, without the user having to do anything.
Using an algorithm called Shareboost, the minimalist control system is reportedly "hungry for input." An item can be stored in its memory by merely holding up that object and shaking it. Likewise, for a place or face, just a "wave of your hand" launches the interface to store that image in OrCam’s memory banks.
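To make the "memory bank" idea concrete (this does not implement the Shareboost algorithm itself, which the article does not describe), here is a minimal sketch that enrolls a couple of faces and then checks a new camera frame against them, using the open-source face_recognition Python package; all file names and identities are hypothetical.

# Illustrative sketch only: enrolling faces and matching a camera frame
# against them with the open-source face_recognition package. OrCam's
# own pipeline is proprietary; none of these names come from the article.
import face_recognition

# "Memory bank": encodings computed once from enrolled photos.
known_names = ["Alice", "Bob"]
known_encodings = [
    face_recognition.face_encodings(face_recognition.load_image_file("alice.jpg"))[0],
    face_recognition.face_encodings(face_recognition.load_image_file("bob.jpg"))[0],
]

def recognize_known_faces(frame_path):
    """Return the names of any enrolled faces found in a camera frame."""
    frame = face_recognition.load_image_file(frame_path)
    hits = []
    for encoding in face_recognition.face_encodings(frame):
        matches = face_recognition.compare_faces(known_encodings, encoding)
        hits += [name for name, match in zip(known_names, matches) if match]
    return hits  # a device like OrCam would speak these names back

print(recognize_known_faces("current_frame.jpg"))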
The video below shows an OrCam employee with limited sight utilizing the OrCam device in everyday situations.
OrCam was primarily designed for the visually impaired; however, it may well prove beneficial to people with dyslexia or memory loss. The first commercial units will be ready by September this year and will initially be available in the US only.
Source: OrCam
By Colin Dunjohn, June 5, 2013