Editorial · 29 Apr 2023
Axis Bank in India has its Near Me augmented reality app. The app enables an effortless search for finance-approved properties, ATM locations, and even the best dining offers in the city. MoneyLion from Utah has launched an augmented reality (AR) app called Grow Your Stack.
Through the app, users can see a visual representation of the balance in their bank account: a computer-generated image is overlaid onto the real-world environment, and the representation is accurate because the app connects to the user's accounts at various banks. According to analysts, augmented reality and virtual reality could be used to give bank customers far greater autonomy in at-home banking. Hybrid bank branches are also likely to come into existence.
These physical locations will use AR to enable self-service, chatbots or robots for providing information, and live video conferencing to connect a customer to an actual bank representative when necessary. Virtual and hybrid branches are likely to shape the future of banking. A white paper called The Future of the Branch pinpoints the shortcomings of video conferencing (especially when it comes to discussing contracts or clauses). Instead, the authors of the study suggest that virtual reality branches will be far more capable of offering customers what they want.
Visual holograms and projections, personalised offers displayed over real-life surroundings, account opening, closing and other transactions could all be handled through the adoption of AR. All of the simulations and projections will be personalised through the use of relevant data and artificial intelligence. Augmented reality (AR) is an interactive experience of a real-world environment where the objects that reside in the real world are “augmented” by computer-generated perceptual information, sometimes across multiple sensory modalities, including visual, auditory, haptic, somatosensory, and olfactory.
The overlaid sensory information can be constructive (i.e. additive to the natural environment) or destructive (i.e. masking the natural environment) and is seamlessly interwoven with the physical world such that it is perceived as an immersive aspect of the real environment. Augmented reality is used to enhance natural environments or situations and offer perceptually enriched experiences.
With the help of advanced AR technologies (e.g. adding computer vision and object recognition), information about the user's surrounding real world becomes interactive and digitally manipulable. Information about the environment and its objects is overlaid on the real world. This information can be virtual or real; for example, sensed or measured information such as electromagnetic radio waves can be overlaid in exact alignment with where they actually are in space. Augmented reality also has a lot of potential in the gathering and sharing of tacit knowledge. Augmentation techniques are typically performed in real time and in semantic context with environmental elements. Immersive perceptual information is sometimes combined with supplemental information, such as scores over a live video feed of a sporting event, combining the benefits of augmented reality with those of heads-up display (HUD) technology. This is rather different from virtual reality, which means computer-generated environments for you to interact with and be immersed in. Augmented reality adds to the reality you would ordinarily see rather than replacing it.
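Keeping a virtual label "in exact alignment" with a real-world point comes down to projecting that point's 3-D position onto the 2-D camera image. Here is a minimal sketch of that projection using a pinhole-camera model; the focal length and image centre are illustrative values, not taken from any real device.

```python
# Project a 3-D point (in camera coordinates, z pointing forward) onto
# the image plane, so a virtual overlay can be drawn at the right pixel.
# focal_px, cx, cy are made-up illustrative camera parameters.

def project_to_screen(x, y, z, focal_px=800.0, cx=640.0, cy=360.0):
    """Map a 3-D point in metres to (u, v) pixel coordinates."""
    if z <= 0:
        raise ValueError("point is behind the camera")
    u = cx + focal_px * x / z  # horizontal offset scales with 1/depth
    v = cy + focal_px * y / z  # vertical offset scales with 1/depth
    return (u, v)

# A point 2 m ahead and 0.5 m to the right lands right of the image centre.
print(project_to_screen(0.5, 0.0, 2.0))  # (840.0, 360.0)
```

An AR renderer repeats this for every tracked point on every frame, which is why the overlay appears anchored to the world as the camera moves.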
Augmented reality is often presented as a kind of futuristic technology, but a form of it has been around for years. For example, the heads-up displays in many fighter aircraft as far back as the 1990s would show information about the attitude, direction and speed of the plane, and only a few years later they could show which objects in the field of view were targets. In the past decade, various labs and companies have built devices that give us augmented reality. In 2009, the MIT Media Lab's Fluid Interfaces Group presented SixthSense, a device that combined a camera, a small projector, a smartphone and a mirror. The device hangs from the user's neck on a lanyard, and four sensors on the user's fingers can be used to manipulate the images projected by SixthSense.
Google rolled out Google Glass in 2013, moving augmented reality to a more wearable interface, in this case glasses. It displays on the user's lens screen via a small projector and responds to voice commands, overlaying images, videos and sounds onto the screen. Google pulled Google Glass at the end of December 2015. As it happens, phones and tablets are the way augmented reality gets into most people's lives. Vito Technology's Star Walk app, for instance, allows a user to point their tablet or phone camera at the sky and see the names of stars and planets superimposed on the image. Another app, called Layar, uses the smartphone's GPS and camera to collect information about the user's surroundings.
It then displays information about nearby restaurants, stores and points of interest. Word Lens was an augmented reality translation application from Quest Visual. Word Lens used the built-in cameras on smartphones and similar devices to quickly scan and identify foreign text (such as that found in a sign or a menu), and then translate and display the words in another language on the device’s display. The words were displayed in the original context on the original background, and the translation was performed in real-time without connection to the internet.
For example, using the viewfinder of a camera to show a shop sign on a smartphone's display would result in a real-time image of the shop sign being displayed, but the words shown on the sign would be the translated words instead of the original foreign words. Until early 2015, the application was available for Apple's iPhone, iPod and iPad, as well as for a selection of Android smartphones.
The application was free on Apple's iTunes, but an in-app purchase was necessary to enable translation capabilities. On Google Play, there were both a free demo and the full translation-enabled version of the application. At Google's unveiling of its Glass Development Kit in November 2013, the translation capabilities of Word Lens were also demonstrated on Google Glass. According to a January 2014 New York Times article, Word Lens was free for Google Glass. Google acquired Quest Visual on May 16, 2014 in order to incorporate Word Lens into its Google Translate service. As a result, all Word Lens language packs were available free of charge until January 2015. The details of the acquisition have not been released. The Word Lens feature was incorporated into the Google Translate app and released on January 14, 2015.
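The Word Lens pipeline described above (detect text in a camera frame, translate it offline, re-render the result at the same spot) can be sketched as follows. The OCR stand-in and the tiny offline dictionary are purely illustrative; they are not Quest Visual's actual implementation.

```python
# Minimal sketch of a Word Lens-style offline translation overlay.
# OFFLINE_DICT and fake_ocr are stand-ins for on-device OCR and a real
# downloaded language pack.

OFFLINE_DICT = {"salida": "exit", "abierto": "open", "cerrado": "closed"}

def fake_ocr(frame):
    """Stand-in for OCR: pretend these words were detected in the frame."""
    return frame["words"]  # list of (word, bounding_box) pairs

def translate_word(word):
    """Offline dictionary lookup; unknown words pass through unchanged."""
    return OFFLINE_DICT.get(word.lower(), word)

def overlay_translations(frame):
    """Replace each detected word with its translation at the same box."""
    return [(translate_word(w), box) for w, box in fake_ocr(frame)]

frame = {"words": [("SALIDA", (10, 20, 80, 40)), ("Abierto", (10, 60, 90, 80))]}
print(overlay_translations(frame))
# [('exit', (10, 20, 80, 40)), ('open', (10, 60, 90, 80))]
```

Because every step is local (no network call), this mirrors the property the article highlights: the translation happens in real time without a connection to the internet.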
The technology has arrived, but it has not yet reached its full potential. There are already a number of applications. For instance, customers might simply point their mobile camera at shops in a mall to locate the best offers and deals around them, or even locate an ATM to withdraw cash. One can even point at product brochures and see a comparison with other similar products.
Similarly, banks have come up with home financing apps where a customer can scan, with the mobile camera, a property they may want to buy, and the app returns its past sales history, current property listings and other reviews. E-commerce and mobile commerce are not far behind. There are talks already about AR apps allowing the purchase of a dress just as the model sporting it walks the ramp. However, these applications are just baby steps considering the potential of AR.
The writer is MD and CEO of Community Bank. He can be contacted at masihul1811@gmail.com