Now training: Seeing eye mobile phones

By Brian Dolan
07:17 am

Qualcomm CEO Paul Jacobs recently offered a vision of future mobile applications that included enhanced reality, in which mobile phones use location data and cameras to identify people and places. While Jacobs did not give a time frame for his vision's realization, it may arrive much sooner than he thinks.

If you ask iVisit, the precursor technology to the one Jacobs described will be available this summer.

iVisit was spun out of NevenVision a few years ago, just before NevenVision, known for its image recognition technology, was sold to Google. Google went on to buy YouTube a few months later. Ad agencies are still awaiting the launch of relevant video advertisements embedded in YouTube videos, enabled by NevenVision's smart image recognition technology.


Before NevenVision and iVisit split, the company was called Eyematic. Eyematic's former CEO Orang Dialameh is now CEO of iVisit, a company focused on mobile conferencing and wellness applications. One of the company's newest products, SeeScan/SeeStar, aims to help blind and visually impaired users identify their surroundings using the camera on their mobile phones. The application currently works only on Windows Mobile smartphones, but iVisit plans to expand to other platforms, including the iPhone and Symbian, by the end of the year. (UPDATE: Check out a video demo of SeeScan here.)

iVisit's applications SeeStar and SeeScan offer visually impaired users a virtual "pair of eyes": object recognition capabilities that leverage a mobile phone's camera to detect and recognize specific objects, like currency and packaged goods. SeeStar gives users the option to transmit live video to a remote assistant, which iVisit may offer as a professional service, or the live stream could go to a family member, friend or other caregiver. A user who gets off at the wrong bus stop with no way of identifying the location, for example, might find a remote set of real eyes more useful than a database full of images.

For consumer goods identification, the application relies on existing databases (like book covers and album covers), but the real key to the app is its ability to store user-generated images and labels for later use in identification. A photo of the front of your house labeled "home," for example, could help a visually impaired person who is disoriented find the way back to the front door. The user would hold up the phone's camera and scan around until the direction of "home" was identified; as long as the camera stays pointed at the object, the app repeats "home." Homing in on "home," as it were.
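iVisit has not published how SeeScan's matching works, so purely as an illustration of the store-and-match loop described above, here is a minimal sketch in Python. The feature extractor (a toy grayscale histogram), the `LabeledMatcher` class, and the similarity threshold are all hypothetical stand-ins for whatever image recognition the real app uses.

```python
import math

def histogram(pixels, bins=16):
    """Toy feature vector: a grayscale histogram of the frame.
    (A real app would use robust image features; this is a stand-in.)"""
    hist = [0] * bins
    for p in pixels:
        hist[min(p * bins // 256, bins - 1)] += 1
    return hist

def cosine(a, b):
    """Cosine similarity between two feature vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    na = math.sqrt(sum(x * x for x in a))
    nb = math.sqrt(sum(x * x for x in b))
    return dot / (na * nb) if na and nb else 0.0

class LabeledMatcher:
    """Stores user-labeled reference images and matches live frames
    against them, as in the "home" example above."""

    def __init__(self, threshold=0.95):
        self.refs = []          # (label, feature vector) pairs
        self.threshold = threshold

    def add(self, label, pixels):
        """User snaps a reference photo and attaches a spoken label."""
        self.refs.append((label, histogram(pixels)))

    def identify(self, pixels):
        """Return the best-matching label, or None if nothing is close."""
        feat = histogram(pixels)
        best_label, best_score = None, 0.0
        for label, ref in self.refs:
            score = cosine(feat, ref)
            if score > best_score:
                best_label, best_score = label, score
        return best_label if best_score >= self.threshold else None
```

As the user pans the camera, an app like this would call `identify()` on each frame and speak the returned label for as long as the match holds, which is the repeated "home" behavior the article describes.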

As Jacobs envisioned in his latest interview, identifying people's faces would probably be one of the most desirable features of such an application. It just so happens that facial recognition technology is Dialameh's specialty, so iVisit clearly plans to move the application's development in that direction. That's when things could get very interesting.

