Oculus will finally require less powerful PCs


Virtual reality on Oculus has just become less expensive. The Facebook-owned company has announced a new technology for its Oculus Rift headset called Asynchronous Spacewarp, designed to ease the demanding task that PC graphics cards face in delivering to the headset the stream of images required for virtual-reality immersion.

The result is that even graphics cards less powerful and less expensive than those the headset required until yesterday can now be used to dive into the virtual worlds of Oculus and its partners.
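A rough, purely illustrative sketch of the idea (not Oculus's actual implementation): the application renders real frames at half the headset's 90 Hz refresh rate, and synthetic frames, extrapolated from the motion between the last two real frames, fill the remaining refreshes. A "frame" below is reduced to a single head-yaw number, and all names and values are invented for illustration.

```python
# Conceptual sketch of half-rate rendering plus frame extrapolation,
# in the spirit of Asynchronous Spacewarp. Illustrative only.

DISPLAY_HZ = 90            # headset refresh rate
APP_HZ = DISPLAY_HZ // 2   # the GPU only has to render 45 real frames per second

def extrapolate(prev_frame: float, curr_frame: float) -> float:
    """Predict the view half an application frame ahead by carrying forward the
    motion between the last two rendered frames (a stand-in for motion-vector
    based image warping)."""
    return curr_frame + 0.5 * (curr_frame - prev_frame)

def present(rendered_frames: list[float]) -> list[float]:
    """Interleave real and extrapolated frames so the display still sees 90 Hz."""
    shown = []
    for prev, curr in zip(rendered_frames, rendered_frames[1:]):
        shown.append(curr)                      # real frame
        shown.append(extrapolate(prev, curr))   # synthetic in-between frame
    return shown

# Example: head yaw rendered at 45 fps, displayed at 90 fps.
rendered = [0.0, 2.0, 4.0, 6.0]   # what the (weaker) GPU actually renders
print(present(rendered))          # [2.0, 3.0, 4.0, 5.0, 6.0, 7.0]
```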

See on Augmented World

Apple granted patent for head-mounted virtual reality display

The Stack reports that Apple has been granted a patent for a head-mounted virtual reality display designed to temporarily house a device such as an iPhone, which acts as the headset's screen and provides its processing power.

They also suggest using an extra clicker device as a scroll wheel and for other control functions. The patent, which was filed in January 2015 and granted on 1 November 2016, appears simply to describe the state of the art as we know it at present, but it is crucially a continuation of a September 2008 patent, meaning it likely precedes a number of similar devices such as Samsung Gear VR and Google Cardboard.

The patent, if enforceable, may represent a massive land grab by litigation-happy Apple, but may be good news for Microsoft for a number of reasons. One is that Microsoft’s approach is very different, and does not rely on phones due to its obvious weakness in this area. The other is that Microsoft and Apple have a long-standing cross-licensing agreement which actually precedes this patent, meaning Redmond is likely immune from any fall-out, unlike Google for example.

See on Augmented World

User Experience Design Meets the Connected World


How state–of–the–art technology is changing the way businesses function and deliver:
The buzz surrounding the Internet of Things (IoT) seems to be getting louder—the ‘next big thing’ is knocking at the door. Increased applications of sensors, machine-to-machine (M2M) communications, and advanced cloud computing to interpret and transmit data are architecting a smarter and hyper–connected world.

IoT, with its mobile, virtual, and instantaneous connections, is truly innovation at its finest—poised to help companies leap into the connected age, with far–reaching impact. Understandably, then, industries across the world are gearing up to gain faster insights and deliver innovative products and services.

However, before jumping on to the IoT bandwagon, companies must accurately evaluate their customers’ latent needs. Innovation and utility should work in perfect unison, in order to drive genuine value and improved customer experiences.

Calibrating the toolbox – Balancing innovation with intrinsic value:
IoT does not exist in isolation—the value of IoT matures alongside the data and insights it generates. It exists in combination with a complex ecosystem of devices that can interoperate seamlessly to deliver unique insights into their usage and condition. Similarly, voluminous data is of little actual utility unless it reveals hidden patterns in customer behavior and helps predict adverse situations.

Therefore, the true adoption of IoT would involve the ability to generate, ingest, and analyze billions of disparate data streams, and glean insights from a connected environment. These insights could open doors for reshaping existing processes—consider, for example, the product development process—making them more agile and optimizing the roles of people involved to improve productivity and deliver greater value.
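As a loose illustration of that generate, ingest, and analyze loop (a sketch only, not a reference architecture: the device name, window size, and simple 3-sigma rule are assumptions), readings arriving from many devices can be tracked per device and flagged when one falls far outside recent behaviour:

```python
# Minimal sketch: ingest sensor readings per device and flag possible
# adverse conditions with a rolling 3-sigma check. Illustrative only.
from collections import defaultdict, deque
from statistics import mean, stdev

WINDOW = 50  # number of recent readings kept per device

history = defaultdict(lambda: deque(maxlen=WINDOW))

def ingest(device_id: str, value: float) -> bool:
    """Store a reading and return True if it looks anomalous for that device."""
    window = history[device_id]
    anomalous = False
    if len(window) >= 10:
        mu, sigma = mean(window), stdev(window)
        anomalous = sigma > 0 and abs(value - mu) > 3 * sigma  # simple 3-sigma rule
    window.append(value)
    return anomalous

# Example: a temperature sensor drifting far outside its recent behaviour.
for v in [21.0, 21.2, 20.9] * 5 + [35.0]:
    if ingest("fridge-sensor-01", v):
        print("possible adverse condition on fridge-sensor-01:", v)
```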

This will also help accelerate the ‘design–test–learn–iterate’ process, ensuring that market feedback is incorporated into the makers’ product roadmaps in a timely fashion. Currently, the process of gathering market feedback is disparate, cumbersome, and time-consuming—and organizations are left unable to meet strict time-to-market demands, especially those of consumer markets.

A unique example of IoT–driven, interactive user interface design is Samsung’s Family Hub refrigerator. This smart appliance supports online ordering, control of other home appliances, message display, and email. A user can check the contents of the refrigerator remotely, stream music, and even compare and manage recipes online. However, is there really a demand to have tweets displayed on our fridge door?

The largely lukewarm response to recent industry innovations has sounded a clarion call on the need to evaluate, assess, and streamline the way forward.

Similarly, consider the constantly expanding domain of wearables. As technology continues to become more intimate, wearables have also broadened customers’ expectations for tailored services. Gartner predicts that while wearables are currently an immature market, they will exceed 500 million shipments by 2020. However, as CCS Insight’s user survey reveals, despite high customer awareness, a significant proportion of wearable device owners have stopped using them because the devices did not provide enough functionality.

While designing wearables, it’s imperative for companies to consider a wide gamut of challenges and factors related to user engagement and durability (including fairly long battery life, where applicable). For example, Jawbone’s first production run of UP bracelets was recalled in its entirety due to an improperly sized power capacitor. The brand image was salvaged to a certain extent only because the company instituted a ‘no questions asked’ guarantee and offered a full refund.

The writing on the wall is clear—innovation merely for its own sake cannot inspire customers or drive retention; the product must carry its own unique and effective brand promise, and offer high utility.

Engineering Synergy – Crafting an IoT framework that’s smart and simple:
Product design and user experience are key decision influencers in the digital age. Challenges such as cross–platform design and inter–connectivity between devices have emerged as the primary areas of concern.

It is also important for functionality to be distributed across devices. However, only some of the devices may have screens; others may emit only sound or light signals. In some cases, device interactions are channelled via mobile apps.
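To make that distribution of functionality concrete, here is a small, hypothetical sketch in which the same alert is delivered through whatever output each device happens to have; the class names and the alert scenario are invented for illustration:

```python
# Hypothetical sketch: one piece of functionality (an alert) distributed
# across devices with very different output capabilities.
from abc import ABC, abstractmethod

class Device(ABC):
    @abstractmethod
    def notify(self, message: str) -> None:
        """Deliver the same alert through whatever output the device has."""

class SmartDisplay(Device):
    def notify(self, message: str) -> None:
        print(f"[screen] {message}")            # device with a screen

class SmartSpeaker(Device):
    def notify(self, message: str) -> None:
        print(f"[audio] speaking: {message}")   # sound-only device

class StatusLight(Device):
    def notify(self, message: str) -> None:
        print("[light] blinking amber")         # light-only device: message is implied

class PhoneApp(Device):
    def notify(self, message: str) -> None:
        print(f"[push] {message}")              # interaction channelled via a mobile app

def broadcast(devices: list[Device], message: str) -> None:
    for device in devices:
        device.notify(message)

broadcast([SmartDisplay(), SmartSpeaker(), StatusLight(), PhoneApp()],
          "Filter needs replacing")
```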

Industries must therefore move towards a sharply–tuned blueprint: a lucid and connected experience that ensures functionality and consistency across different user interfaces, as well as seamless cross–device interactions.

Remember, the device, user, and service experiences are inextricably linked in an IoT product—and without doubt, a user will be quick to discard a connected item, if it does not deliver a satisfactory experience.

Companies should strive to harness this comprehensive amalgamation—technology innovations coupled with solid design strategies—armed with a build-test-learn process. This would eventually aid the creation of an expanding and robust ecosystem.

What then lies at the foundation of this transformation? What is the essential blueprint governing the implementation of agile and focused IoT applications? The answer foretells the road ahead for enlightened organizations: businesses must continuously gather product-usage data captured across multiple devices, generate insights from it, and fold those insights back into product features. This would shrink the possibility of dissonance between designer, developer, and end user—creating a finely balanced value chain of consistent, top-of-the-line quality and an unmatched user experience.

See on Augmented World

The Mainstreaming of Augmented Reality: A Brief History


The launch of Pokémon Go this summer was a huge success—both for the gaming industry and for Augmented Reality (AR). After launching in July 2016, the game peaked in August at almost 45 million users. Although Niantic, the American software development company that developed Pokémon Go, has failed to maintain high levels of engagement with the game (its user base is now down to 30 million), the phenomenon demonstrated AR’s potential to be adopted by mainstream culture.

In a previous piece I discussed why some AR apps are destined to be forgotten as gimmicks, and what mistakes marketers should avoid when trying to deploy them. But it is just as important to ask: What has contributed to AR’s increasing success?

Aside from complex technological advances (e.g., mobile devices are now powerful enough to handle AR software and tracking systems), three other elements have enabled the mass adoption of AR apps: 1) meaningful content, 2) convincing and realistic interaction of the virtual with the physical environment, and 3) unique value that goes beyond what other technologies deliver.

Pokémon Go hits all of these targets, and it offers useful direction for designing future AR games. But it also has implications for areas outside of entertainment, such as marketing, fashion, tourism, and retail, where commercial AR apps have already been increasing in numbers and popularity. This growing presence of AR results from a long trajectory of development that has been full of hits and misses. Understanding this timeline is crucial, as it highlights the value that AR can offer in various contexts.

Phase 1: Attention-grabbing early efforts

The first AR technology was developed in 1968 at Harvard when computer scientist Ivan Sutherland (named the “father of computer graphics”) created an AR head-mounted display system. In the following decades, university labs, companies, and national agencies further advanced AR for wearables and digital displays. These early systems superimposed virtual information on the physical environment (e.g., overlaying a terrain with geolocation information), and allowed simulations that were used for aviation, military, and industrial purposes.

The first commercial AR application appeared in 2008. It was developed for advertising purposes by German agencies in Munich. They designed a printed magazine ad for the BMW Mini which, when held in front of a computer’s camera, also appeared on the screen. Because the virtual model was tied to markers on the physical ad, a user could control the car on the screen and move it around to view different angles simply by manipulating the piece of paper. The application was one of the first marketing campaigns to allow interaction with a digital model in real time.
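As a rough sketch of the marker-based tracking behind campaigns like this (illustrative only: the corner coordinates, camera parameters, and marker size are made up, and the marker detector itself is omitted, while cv2.solvePnP and cv2.projectPoints are standard OpenCV calls), the pose of the printed page is estimated from where its marker corners appear in the camera image, and the virtual model is then projected using that pose:

```python
# Illustrative sketch of marker-based pose estimation and projection.
import numpy as np
import cv2

# Corner positions of the printed marker on the page, in centimetres.
MARKER_POINTS = np.array([[0, 0, 0], [10, 0, 0], [10, 10, 0], [0, 10, 0]],
                         dtype=np.float32)

def estimate_page_pose(image_points, camera_matrix, dist_coeffs):
    """Return rotation and translation of the printed ad relative to the camera,
    given where the marker corners were detected in the camera image."""
    ok, rvec, tvec = cv2.solvePnP(MARKER_POINTS, image_points,
                                  camera_matrix, dist_coeffs)
    return (rvec, tvec) if ok else (None, None)

def project_model(model_points, rvec, tvec, camera_matrix, dist_coeffs):
    """Project the virtual model's 3D points into the camera image, so it
    appears anchored to the sheet of paper as the user moves it around."""
    projected, _ = cv2.projectPoints(model_points, rvec, tvec,
                                     camera_matrix, dist_coeffs)
    return projected.reshape(-1, 2)

# Example with made-up numbers: a 640x480 camera and corner pixels that a
# (not shown) marker detector might have found.
camera_matrix = np.array([[800.0, 0, 320], [0, 800.0, 240], [0, 0, 1]])
dist_coeffs = np.zeros(5)
detected_corners = np.array([[300, 200], [380, 205], [375, 285], [295, 280]],
                            dtype=np.float32)
rvec, tvec = estimate_page_pose(detected_corners, camera_matrix, dist_coeffs)
print(project_model(MARKER_POINTS, rvec, tvec, camera_matrix, dist_coeffs))
```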

Other brands started adopting this idea of situating content on a screen and having consumers interact with it through physical tracking markers. We started seeing more advanced versions from brands such as National Geographic in 2011, which showed rare or extinct animal species as if they were walking through a shopping mall; Coca-Cola in 2013, which simulated environmental problems, such as ice melting right beside you in a shopping mall; and Disney in 2011, which showed cartoon characters on a large screen in Times Square interacting with people on the street.

In each of these examples, the AR technology was used to engage customers at events or in public spaces. These types of displays aren’t always scalable, as they require considerable investment—but we still see them today. For instance, Skoda ran a campaign in 2015, placing an AR mirror at Victoria railway station in London so that people passing by could customize a car and then see themselves driving it on a large screen.

Phase 2: Trying on products at home
Simulating digital products so that they interact with movements in the real world in real time (usually through paper printouts) was a popular approach to AR in the early 2010s, especially for watches and jewelry. This technology let people virtually “try on” a product. Even the Apple Watch was available for a similar virtual try-on. However, the task of printing out and cutting a special paper model so that it could fit one’s finger or wrist has always been somewhat clunky, and it requires some effort from the consumer.

Much more successful apps are those that can offer a more seamless experience. Trying on products virtually, via instant face recognition, has been one of the most successful uses of AR in the commercial context so far, and make-up companies have been leading this use. Predecessors of this technology were websites that overlaid make-up on an uploaded photo or avatar. But AR mirrors, developed by agencies like Holition, ModiFace and Total Immersion, have allowed customers to overlay make-up on themselves in real time. The technology behind this is highly sophisticated, as it requires adapting virtual make-up to an individual’s actual face. In order to create this personalization of virtual content—and make it seem real—the software uses 2D modeling technology and advanced face-tracking techniques. The effect delivers high perceived value: seeing one’s face augmented with make-up not only offers a more convenient and playful way to try it on, but also allows consumers to assess looks that they would not have been able to create themselves or to try combinations that they would not have thought of. That can’t be delivered by simply uploading a photo to an app.
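As a simplified sketch of the overlay step in such AR mirrors (illustrative only: the blank frame, the made-up lip outline, the colour, and the opacity stand in for the output of a real face tracker), a colour layer can be blended over the tracked lip region on every frame:

```python
# Illustrative sketch: blend a "lipstick" colour over a tracked lip polygon.
import numpy as np
import cv2

LIP_COLOUR = (40, 40, 200)   # BGR shade being "tried on"
OPACITY = 0.45               # how strongly the virtual make-up is blended in

def apply_virtual_lipstick(frame, lip_points):
    """Blend a colour layer over the lip region located by a face tracker,
    so the make-up follows the face from frame to frame."""
    overlay = frame.copy()
    cv2.fillPoly(overlay, [lip_points.astype(np.int32)], LIP_COLOUR)
    return cv2.addWeighted(overlay, OPACITY, frame, 1 - OPACITY, 0)

# Example with a blank frame and a made-up lip outline standing in for the
# landmarks a real face-tracking model would supply.
frame = np.full((480, 640, 3), 255, dtype=np.uint8)
fake_lips = np.array([[300, 300], [340, 290], [380, 300], [340, 320]])
result = apply_virtual_lipstick(frame, fake_lips)
```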

And this type of technology continues to advance. London-based AR agency Holition and beauty company Coty recently launched an AR app for the make-up brand Rimmel that lets a consumer scan the make-up of another person or an image and then immediately try that same look on his or her face. It takes the experience of look creation to a whole new level. Not surprisingly, the fashion industry has touted the technology, already picking up on its practicality, and consumer ratings for this type of AR app keep increasing.

Phase 3: A broader range of uses
Aside from try-ons, a rich body of research also shows that AR can be incredibly valuable for exploring various cultural, historical, and geographic aspects of an environment. This type of app typically works by having the user point their mobile device toward an object or a site in order to see superimposed content on the screen.

Apps developed for tourism purposes started appearing in the 2000s, but initially they were predominantly created in university labs. They’ve only started to become more widely used in recent years, thanks to technological advancement and a better understanding of the consumer experience. For example, the Museum of London has an app that shows you how the particular London street you’re standing in used to look in the past—you just have to point your phone camera at it for the augmented version to appear on your screen. Similarly, apps designed for museum contexts let visitors get more information about famous paintings by overlaying descriptions on them on the smartphone screen in real time. Then there’s also Google Translate, an app that lets you instantly translate text, whether it’s on a sign or elsewhere, into a language you can read. And Google Sky Map can help you identify stars and planets if you just point your phone’s camera toward the sky.

Research I conducted with Professor Yvonne Rogers and Dr. Ana Moutinho from University College London and with the English National Opera suggests that AR apps could offer innovative support to cultural institutions as well. We observed how opera singers and theatrical make-up artists took to virtual try-on apps: the AR mirror assisted singers as they were getting into character and building their roles, and make-up artists used it as a helpful tool for developing the artistic looks for each character. Visitors also interacted with the mirror to see what they’d look like as one of the operatic characters.

Each of these examples demonstrates how AR has distinctly evolved to complement and transform the way users experience products and their surroundings. And it will continue to advance as people come to expect more from it. Recent research I conducted with Dr. Chris Brauer of Goldsmiths, University of London, explored how this new generation of digital technologies is changing consumer experiences. Wearables and the Internet of Things have made consumers expect highly customized solutions and instant access to detailed personal data. And AR is reinforcing consumers’ appetite for compelling and creative visualizations of content.

However, our research has also shown that despite the increased use of such technologies, consumers are not yearning for the robotic digitization of their everyday lives. Rather, they want technologies that weave themselves seamlessly into their activities. Consumers expect their digital experience to be more human and empathic, to be filled with emotional content, to surprise them with serendipitous occurrences, to allow for reciprocity and interactivity, and to offer the option of personalized adaptations. As designers and marketers continue to craft AR experiences, it will become crucial to acquire a better understanding of which areas of human life can be visually enhanced.

See on Augmented World

Apple CEO Tim Cook thinks augmented reality will be as important as ‘eating three meals a day’

Apple CEO Tim Cook continues to talk about augmented reality like it’s the next major computing platform after the smartphone.

During a talk with Republican Senator Orrin Hatch over the weekend in Utah, Cook gave his most detailed answer yet about how Apple is approaching the technology, which uses computer glasses to superimpose computerized images on the world around the user, kind of like Google Glass does.

Cook thinks AR will “take a while” to reach mass adoption because of difficult technical challenges.

But it will get there. “It will happen in a big way, and we will wonder when it does, kind of how we wonder how we live without our phone today,” Cook said.

His discussion of the topic, which visibly excited him, provides the skeleton of what to expect as the augmented reality industry develops.

Eventually, Cook thinks that AR could become so essential that it will be as much a part of a user’s day as “eating three meals a day.”

Here were his complete comments:

“I think there’s two kind of different questions there. It will be enabled in the operating systems first, because it’s a precursor for that to happen for there to be mass adoption of it. I’d look for that to happen in the not-too-distant future. In terms of it becoming a mass adoption [phenomenon], so that, say, everyone in here would have an AR experience, the reality to do that, it has to be something that everyone in here views to be an ‘acceptable thing.'”

“And nobody in here, few people in here, think it’s acceptable to be tethered to a computer walking in here and sitting down, few people are going to view that it’s acceptable to be enclosed in something, because we’re all social people at heart. Even introverts are social people, we like people and we want to interact. It has to be that it’s likely that AR, of the two, is the one the largest number of people will engage with.

“I do think that a significant portion of the population of developed countries, and eventually all countries, will have AR experiences every day, almost like eating three meals a day, it will become that much a part of you, a lot of us live on our smartphones, the iPhone, I hope, is very important for everyone, so AR will become really big. VR I think is not going to be that big, compared to AR. I’m not saying it’s not important, it is important.

“I’m excited about VR from an education point of view, I think it can be really big for education, I think it can be very big for games. But I can’t imagine everyone in here getting in an enclosed VR experience while you’re sitting in here with me. But I could imagine everyone in here in an AR experience right now, if the technology was there, which it’s not today. How long will it take?

“AR is going to take a while, because there are some really hard technology challenges there. But it will happen, it will happen in a big way, and we will wonder when it does, how we ever lived without it. Like we wonder how we lived without our phone today.”

See on Augmented World