There’s really nothing new under the sun when it comes to addressing security vulnerabilities in code. While there has been a great shift in how server-side applications are architected, including the move to the cloud and the increased use of containers and microservices, the sad reality is that the biggest security vulnerabilities found in code are typically caused by the most common, well-known and mundane of issues, namely:
- SQL injection and other interpolation attack opportunities
- The use of outdated software libraries
- Direct exposure of back-end resources to clients
- Overly permissive security
- Plain text passwords waiting to be hacked
SQL injection and other interpolation attacks
SQL injections are the easiest way for a hacker to do the most damage.
Performing an SQL injection is simple. The hacker simply writes something just a tad more complicated than DROP DATABASE or DELETE FROM users into an online form. If the input isn’t validated thoroughly, and the application allows the unvalidated input to become embedded in an otherwise harmless SQL statement, the results can be disastrous. With an SQL injection vulnerability, an attacker may be able to read private or personal data, update existing data with erroneous information, or outright delete data, tables and even databases.
Proper input validation, and better still the use of parameterized queries, can completely eliminate this risk. Sadly, busy project managers too often push unvalidated code into production, and the opportunity for SQL injection attacks to succeed persists.
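The key is to keep user input out of the SQL string entirely. Here is a minimal sketch using Python’s built-in sqlite3 module; the same principle applies to a JDBC PreparedStatement in Java. The table layout and the malicious input are purely illustrative.

```python
import sqlite3

def find_user(conn, username):
    # The ? placeholder tells the driver to treat the input strictly as
    # data, so a value like "x'; DROP TABLE users; --" can never change
    # the structure of the SQL statement itself.
    return conn.execute(
        "SELECT id, username FROM users WHERE username = ?", (username,)
    ).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (id INTEGER PRIMARY KEY, username TEXT)")
conn.execute("INSERT INTO users (username) VALUES ('alice')")

# A classic injection payload is matched literally and returns nothing.
malicious = "alice' OR '1'='1"
injected_rows = find_user(conn, malicious)
legit_rows = find_user(conn, "alice")
```

Had the username been concatenated directly into the SQL string, the `OR '1'='1'` clause would have matched every row in the table; with the placeholder, it matches none.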
The use of outdated software libraries
Enterprises aren’t buying their developers laptops running Windows XP. And when updates to the modern operating systems they are using do become available, normal software governance policies demand applying a given patch or fix pack as soon as one comes along. But how often do software developers check the status of the software libraries their production systems are currently using?
When a software project kicks off, a decision is made about which open source libraries and projects will be used, and which versions of those projects will be deployed with the application. But once decided, it’s rare for a project to revisit those decisions. Yet there are reasons why new versions of logging APIs or UI frameworks are released, and it’s not just about feature enhancements. Sometimes an old software library will contain a well-known bug that gets addressed in subsequent updates.
Every organization should employ a software governance policy that includes revisiting the various frameworks and libraries that production applications link to. Otherwise, they face the prospect that a hidden threat resides in their runtime systems, and the only way they’ll find out about it is if a hacker finds the vulnerability first.
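As a sketch of what such a governance check might look like, the hypothetical audit below compares pinned dependency versions against a minimum-safe-version table. In a real policy that table would come from an advisory feed such as the OWASP Dependency-Check database; the library names and version thresholds here are only illustrative.

```python
# Hypothetical minimum-safe-version table; in practice this would be
# sourced from a vulnerability advisory feed, not hard-coded.
MIN_SAFE = {"log4j-core": (2, 17, 1), "commons-text": (1, 10, 0)}

def parse(version):
    """Turn a dotted version string into a comparable tuple of ints."""
    return tuple(int(part) for part in version.split("."))

def audit(pinned):
    """Return the libraries pinned below their minimum safe version."""
    return sorted(
        name for name, version in pinned.items()
        if name in MIN_SAFE and parse(version) < MIN_SAFE[name]
    )

flagged = audit({"log4j-core": "2.14.1", "commons-text": "1.10.0"})
```

Run as part of the build, a check like this turns “revisit your libraries” from a good intention into a gate that fails loudly.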
Direct exposure of back-end resources to clients
When it comes to performance, layers are bad. The more hoops a request-response cycle has to jump through in order to access the underlying resource it needs, the slower the program will be. But the desire to reduce clock cycles should never bump up against the need to keep back-end resources secure.
The exposed-resources problem seems to be most common when doing penetration testing against RESTful APIs. With so many RESTful APIs trying to provide clients an efficient service that accesses back-end data, the API itself is often little more than a wrapper for direct calls into a database, message queue, user registry or software container. When implementing a RESTful API that provides access to back-end resources, make sure the REST calls are only accessing and retrieving the specific data they require, and are not providing a handle to the back-end resource itself.
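One way to enforce that rule is to whitelist the fields a REST response may contain, so the raw back-end record never leaves the server. The sketch below is a hypothetical illustration; the record shape and field names are invented.

```python
# A hypothetical user record as it lives in the back-end store. Internal
# identifiers and credentials live here, but must never reach a client.
BACKEND_RECORD = {
    "id": 42,
    "email": "cmckenzie@example.com",
    "password_hash": "not-for-clients",
    "internal_shard": "users-east-2",
}

# Whitelist only the fields the client actually needs.
PUBLIC_FIELDS = ("id", "email")

def to_api_response(record):
    """Project a back-end record down to its client-safe fields."""
    return {field: record[field] for field in PUBLIC_FIELDS}

response = to_api_response(BACKEND_RECORD)
```

The whitelist inverts the failure mode: when a new column is added to the back-end store, it stays private by default instead of leaking by default.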
Overly permissive security
Nobody ever sets out intending to lower their shields in such a way that they’re vulnerable to an attack. But there’s always some point in the management of the application’s lifecycle at which a new feature, or connectivity to a new service, doesn’t work in production like it does in pre-prod or testing environments. Thinking the problem might be access related, security permissions are incrementally reduced until the code in production works. After a victory dance, the well-intentioned DevOps personnel who temporarily lowered the shields in order to get things working are sidetracked and never get around to figuring out how to keep things running at the originally mandated security levels. Next thing you know, ne’er-do-wells are hacking in, private data is being exposed, and the system is being breached.
Plain text passwords waiting to be hacked
Developers are still coding plain text passwords into their applications. Sometimes plain text passwords appear in the source code. Sometimes they’re stored in a property file or XML document. But regardless of their format, usernames and passwords for resources should never appear anywhere in plain text.
Some might argue that the plain-text password problem is overblown as a security threat. After all, if it’s stored on the server, and only trusted resources have server access, there’s no way it’s going to fall into the wrong hands. That argument may be valid in a perfect world, but the world isn’t perfect. A real problem arises when another common attack, such as source code exposure or a directory traversal, occurs, and the hands holding the plain text passwords are no longer trusted. In such an instance, the hacker has been given an all-access pass to the back-end resource in question.
At the very least, passwords should be encrypted when stored on the filesystem and decrypted when accessed by the application. Of course, most middleware platforms provide tools for securely storing passwords, such as IBM WebSphere’s credential vault, which not only simplifies the art of password management, but also relieves the developer of responsibility if source code is ever exposed or a directory traversal occurs.
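As a minimal sketch of the alternative, the hypothetical resolver below refuses any plain-text password found in configuration and instead looks the secret up by name, with an environment variable standing in for a credential vault entry. The config keys and variable names are invented for illustration.

```python
import os

def resolve_password(config):
    """Resolve a credential without ever storing it in plain text.

    The configuration holds only the *name* of a secret (here an
    environment variable; in practice it might name a credential vault
    entry), so the secret itself never appears in source control or in
    deployed property files.
    """
    if "password" in config:
        raise ValueError("plain-text password found in configuration")
    return os.environ[config["password_env"]]

# The deployment environment, not the codebase, supplies the secret.
os.environ["DB_PASSWORD"] = "s3cret"
secret = resolve_password({"password_env": "DB_PASSWORD"})
```

The hard failure on a literal `password` key is deliberate: it keeps a convenient shortcut from quietly reintroducing the vulnerability.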
The truth of the matter is, a large number of vulnerabilities exist in production code not because hackers are coming up with new ways to penetrate systems, but because developers and DevOps personnel simply aren’t diligent enough about addressing well-known security vulnerabilities. If best practices were observed, and software security governance rules were properly implemented and maintained, a large number of software security violations would never happen.
You can follow Cameron McKenzie on Twitter: @cameronmcnz
Should you implement a custom user registry to mediate access to your various LDAP servers in order to simplify security tasks such as authentication and group association? The answer to that question is a resounding ‘no.’
The simple beauty of the custom user registry
On the surface, implementing a custom user registry is simple. While it differs slightly from one application server to the next, to implement a custom user registry, you typically only have to write a Java class or two that provides an implementation for half a dozen or so methods that do things like validate a password, or indicate whether a user is a part of a given group. It’s easy peasy.
For example, to create a custom user registry for WebSphere, you must provide an implementation of IBM’s WebSphere UserRegistry interface, which defines the 18 methods you need to code.
Now remember, the goal here is not to invent a system for storing users. When implementing a custom user registry, there is typically an underlying data store to which the application connects. So perhaps the purpose of the custom user registry is to combine access to an LDAP server and a database system that holds user information. Or perhaps there are three different LDAP servers that need consolidated access. Each of those systems will already have mechanisms to update a password or check whether a user is part of a given group. Code for a custom user registry simply taps into the APIs of those underlying systems. There’s no re-inventing the wheel with a custom user registry. Instead, you just leverage the wheels that the underlying user repositories already provide.
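To make the delegation idea concrete, here is a hypothetical sketch, written in Python for brevity rather than against WebSphere’s actual UserRegistry API, of a consolidated registry that simply forwards password checks and group lookups to the underlying stores. The backends here are in-memory stand-ins for LDAP servers or user databases.

```python
class DictBackend:
    """Stand-in for one LDAP server or database of user records."""
    def __init__(self, users):
        # username -> {"password": ..., "groups": [...]}
        self._users = users

    def check_password(self, username, password):
        user = self._users.get(username)
        return user is not None and user["password"] == password

    def groups_for(self, username):
        user = self._users.get(username)
        return user["groups"] if user else None

class ConsolidatedRegistry:
    """Answers registry queries by trying each underlying store in turn."""
    def __init__(self, backends):
        self._backends = backends

    def check_password(self, username, password):
        return any(b.check_password(username, password) for b in self._backends)

    def groups_for(self, username):
        for backend in self._backends:
            groups = backend.groups_for(username)
            if groups is not None:
                return groups
        return []

registry = ConsolidatedRegistry([
    DictBackend({"cmckenzie": {"password": "pw1", "groups": ["admins"]}}),
    DictBackend({"jdoe": {"password": "pw2", "groups": ["users"]}}),
])
```

Notice that the registry stores nothing itself; every answer comes from a backend, which is exactly why the connectivity, performance and data quality concerns discussed below dominate the work.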
So it all sounds simple enough, doesn’t it? Well, it’s not. And there are several reasons why.
Ongoing connectivity concerns
First of all, just connecting to various disparate systems can be a pain. There’s the up-front headache of getting credentials and of bypassing, or at least authenticating through, the firewalls and security systems that are already in place. And getting initial connectivity to disparate user registry systems is one thing; maintaining that connectivity as SSL certificates expire or the network topology changes is another. Connectivity is both an up-front and a long-term pain.
LDAP server optimization
And then there’s the job of optimization. Authenticating against a single user repository is time consuming enough, especially at peak login times. Now imagine three or four underlying systems against which user checks are daisy-chained through if…then…else statements. It’d be a long enough lag to trigger a user revolt. So even after achieving the consolidation of different LDAP servers and databases, time needs to be invested in figuring out how to optimize access. Sometimes having a look-aside NoSQL database in which user ids are mapped to the system where they are registered can speed things up, although a failed login would likely still require querying each subsystem. Performance optimization becomes an important part of building the user registry, as every user notices when logging into the system takes an extra second or two.
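The look-aside idea can be sketched as follows. This is a hypothetical illustration, with in-memory dictionaries standing in for LDAP servers; the point is only the routing logic, which remembers each user’s home subsystem after the first lookup.

```python
class LookAsideRouter:
    """Caches which subsystem a user lives in to avoid daisy-chained lookups."""
    def __init__(self, backends):
        self._backends = backends   # subsystem name -> lookup function
        self._home = {}             # look-aside cache: username -> subsystem
        self.calls = 0              # backend queries made, for illustration

    def _query(self, name, username):
        self.calls += 1
        return self._backends[name](username)

    def find(self, username):
        home = self._home.get(username)
        if home is not None:
            return self._query(home, username)      # warm: one targeted query
        for name in self._backends:                 # cold: try every subsystem
            record = self._query(name, username)
            if record is not None:
                self._home[username] = name
                return record
        return None                                 # failed lookups stay expensive

router = LookAsideRouter({
    "ldap-a": {"alice": {"groups": ["dev"]}}.get,
    "ldap-b": {"bob": {"groups": ["ops"]}}.get,
})
first = router.find("bob")            # cold lookup: queries both subsystems
calls_cold = router.calls
second = router.find("bob")           # warm lookup: one query via the cache
calls_warm = router.calls - calls_cold
```

Note the caveat from the text survives in the sketch: a user who exists nowhere still costs one query per subsystem on every attempt.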
Data quality issues
And if there are separate subsystems, ensuring data quality becomes a top priority as well. For example, if the same username, such as cmckenzie, exists in two subsystems, which one is the record of truth? Data integrity problems can cause bizarre behavior that is difficult to troubleshoot. For example, cmckenzie might be able to log in during low usage times, but not during peak usage times, because during peak usage times overflow requests get routed to a different subsystem. And even though the problems may stem from data quality issues in the LDAP subsystems, it’s the developers maintaining the custom user registry code who will be expected to troubleshoot and identify the problem.
LDAP failure and user registry redundancy
Failover and redundancy are another important piece of the puzzle. It’s good to keep in mind that if the custom user registry fails, nobody can log into anything from anywhere. That’s a massive amount of responsibility for anyone developing software to shoulder. Testing how the code behaves when a given user registry is down, and figuring out how to make the custom user registry resilient when weird corner cases happen, is pivotally important when access to everything is on the line.
Ownership of the custom user registry
From a management standpoint, a custom user registry is a stressful piece of technology to own. Any time the login process is slow, or problems occur after a user logs into the system, the first place fingers will point is at the custom user registry piece. When login, authentication, authorization or registration problems occur, the owner of the custom user registry typically first has to prove that their piece is not the problem. And of course, there certainly are times when the custom user registry component is to blame. Perhaps a certificate has been updated on a server and nothing has been synchronized with the registry, or someone has updated a column in the home-grown user registry database, or maybe an update was made to Active Directory. The custom user registry depends on the stability of the underlying infrastructure to which it connects, and that is a difficult contract to guarantee at the best of times.
So yes, on the surface, a custom user registry seems like a fairly easy piece of software to implement, but it is fraught with danger and hardship at every turn, so it is never recommended. A better option is to invest time in consolidating all user registries into a single, high-performance LDAP server or Active Directory, and allow the authentication piece of your Oracle or WebSphere application server to connect to that. For small to medium-sized enterprises, that is always the preferred option. That way, the software and hardware that host the user records can be optimized and tuned for redundancy and failover, rather than such problems being handled in code written in house. It also allows you to point your finger at the LDAP server or Active Directory vendor, rather than at the in-house development team, when things go wrong.
Inevitably, there will be times when a custom user registry is required, and it has to be written, despite all of the given reservations. If that’s the case, I wish you the best of luck, and I hope your problems are few. But if it can be avoided, the right choice is to avoid, at all costs, the need to implement a custom user registry of your own.
Former Google employee James Damore’s recently leaked memo about his old employer’s hiring practices has brought the discussion about IT hiring to the fore. After reading a vast number of articles written on the topic, it would appear that many believe the terms workplace diversity and gender representation are interchangeable. They of course are not, and conflating them is not only intellectually dishonest, but disingenuous to the point that it actually hinders progress toward the important goal of balanced gender and ethnic representation in the workforce.
How do you define diversity?
I ran for president of my University Student Council twenty-five years ago. One of the other candidates was an enlightened progressive whose main platform plank was to promote and improve diversity in all areas of the university. It was a message that was well received in the social sciences, law and humanities buildings, but it ran into a brick wall when it was trucked into engineering.
In compliance with all preconceived stereotypes, gender parity in the engineering department was a little lacking back then, but a few of those future train conductors were getting a bit tired of constantly being beaten with the ‘lack of diversity’ stick. A student stepped up to the microphone during question period and asked the candidate if she felt the engineering department lacked diversity. After the candidate stumbled in her effort to provide a diplomatic answer, the student followed up with something more rhetorical.
“The leader of the school’s Gay and Lesbian committee is an engineer. Our representative to the student council is from India. Three of the five students who are on full scholarships are second generation Chinese, and even my friends with paler complexions, who you believe lack diversity, are here on Visas from countries like Australia, Russia, Israel and eastern Europe. So how can you possibly stand there and tell me we are not diverse?” The student was mad, and he had every right to be.
The engineering faculty was indeed diverse in a variety of beautiful, even inspirational, ways. Gender parity was certainly lacking, and I can think of a few minority groups that were under-represented, but for someone to stand in front of that group of students and tell them they weren’t diverse was an undeserved and unmitigated insult.
Confronting intellectual dishonesty
Even twenty-five years later, that exchange still resonates with me. Not just because it was so enjoyable to see a social justice warrior be so thoroughly destroyed intellectually, but because the student wasn’t wrong. He had every right to stand up and object to the insults and the derision that were constantly being thrown at the faculty to which he was proud to be a part.
With a history of participating in medium-term consulting engagements, I can say that I have worked on an admirable number of projects in a wide array of cities. I can’t remember any engagement in which the project room looked like a scene out of the 1950s-set TV series Mad Men, where every programmer was a white male and every developer was the product of a privileged background. In fact, I was on a Toronto-based project a number of years ago where my nickname, on a team of over thirty individuals, was ‘the white guy.’
I’m proud of all of those projects I’ve worked on over the years, and I’ve made friends with people who come from a more diverse set of backgrounds than I could possibly have ever imagined. And the friends I’ve made include a number of incredible female programmers, although I will admit that all of those project teams on which I worked lacked in terms of gender parity. But it would be an insult to me and to everyone I’ve worked with to tell me that the teams I’ve worked on weren’t made up of a diverse set of people, because they were. I have seen great diversity in the workforce. I have not seen great gender parity. There is a difference.
There is certainly an issue in the technology field in terms of an under-representation of both women and certain visible minorities. But gender and ethnic parity is not the same thing as workplace diversity. Arguing that they are is disingenuous, and perpetuating this type of insulting intellectual dishonesty will do more to hinder the goal of achieving balanced gender and ethnic representation in the workplace than it ever will to enhance it.
Big data is booming these days, and it is helping nearly every field. Let us look at a few wildlife conservation projects that have used big data and machine learning as their key components.
2. Big Data in Wildlife Conservation
In this section, various projects are discussed that show how big data aids wildlife conservation.
2.1. The Great Elephant Census
In Africa alone, more than 12,000 elephants have been killed each year since 2006, and if this goes on, the day is not far off when there will not be any elephants left on the planet. Protecting the ecosystem is vital not only to wildlife but to the communities around them, and big data is helping do just that. In 2014, a survey called the Great Elephant Census was launched by Microsoft co-founder Paul Allen to achieve a greater understanding of elephant numbers in Africa. To conduct the research, 90 researchers traversed over 285,000 miles of the African continent across 21 countries.
The survey created one of the largest raw data sets of its kind. It showed that the African elephant population had fallen to just 352,271 across 18 countries, a decline of 30% in seven years. This highlighted the need for ongoing monitoring to ensure better response times in emergency situations. Big data is having a huge impact on the conservation efforts that will help protect the elephant population of Africa.
2.2. eBird
The eBird project was launched in 2002. It is an app that lets users record bird sightings as they find them and input the data into the app. The app was created to help build usable big data sets that could be of value to professional and recreational bird watchers. These data sets are shared with professionals such as teachers, land managers, ornithologists, biologists and conservation workers, who have used the data to create BirdCast, a regional migration forecast giving real-time predictions of bird migration for the first time ever. BirdCast uses machine learning to predict the migration and roosting patterns of different species of birds, providing more accurate intelligence for land planning and management and allowing necessary preparations in areas prone to roosting bird gatherings.
It’s likely not advice a veteran of JavaOne conferences needs to hear, but if you’ve got your ticket for JavaOne 2017, and you’re attending this OracleWorld affiliated event for the first time, I’m telling you not to do any last minute searching for a San Francisco hotel.
San Francisco is a city completely ill-equipped to handle an event of OracleWorld and JavaOne 2017’s magnitude. In fact, San Francisco is so small, it’s ill-equipped to handle events of any magnitude. The two-million-square-foot Moscone Center, named after the San Francisco mayor whose assassination was portrayed in the Sean Penn movie Milk, is a fine conference venue, but there are simply not enough hotels to accommodate all of the guests and speakers who will be in attendance.
Cutting the stay short
Many attendees would love to spend the entire week in San Francisco, but the per-night hotel cost becomes far too prohibitive. The conference is still almost two months away, yet discounted three- and four-star hotels available through the JavaOne 2017 website are already pricing at between $285 and $585 a night. And I’d be happy to bet that those $285-a-night hotels won’t be available by the time September rolls around. In fact, about a month before the conference, Oracle usually takes down the option to book a hotel through its website, as all of the available rooms have been booked.
As a long-time consultant who worked largely in the US northeast, I rarely booked accommodations more than a month out, and typically would search for a hotel two weeks before a gig would start. The first time I attended JavaOne, I applied the same strategy and suffered greatly for it. I found very expensive accommodation at a low-budget hotel on Lombard Street. The $350-a-night motel didn’t have any air conditioning, and it was an unusually hot week in the city, making the stay particularly uncomfortable.
Never too close for comfort
Furthermore, the location was well beyond walking distance to the event, but given the complete lack of cabs in the city, I had to make the sweaty and uncomfortable hike myself. Uber has helped address the transportation problem in the city, but at an event like JavaOne, you want to be close to the shenanigans. It’s nice to be able to get to the opening events without having to get up ridiculously early, and it’s also nice to be able to rest in your hotel in the late afternoon before walking back and attending some of the evening events. Cabbing back and forth to a hotel tends to be both expensive and unnecessarily inconvenient.
So this is my final word of warning to people attending OracleWorld or JavaOne 2017. Make sure you’ve got your hotel booked. Do it right now if you haven’t done it already. Otherwise you’ll be spending way too much money on accommodations, and the only hotels available will be 30 miles away in Burlingame, or even worse, in Oakland. And trust me, you don’t want to be staying there.
What would the tech world look like without leaders, visionaries, and entrepreneurs like Satya Nadella, Jony Ive, or Elon Musk? What about the contributions of the other seven men who complete the list of “The 10 Most Influential Leaders in Tech Right Now” according to Juniper Research? Would the world be a poorer place without these powerful, intelligent, and insightful men bringing their minds to bear on the problems facing the world today? I think so.
Now imagine a world in which at least half of the names on that list were female. That’s a day that many women in the technology sector look forward to with anticipation. In my interviews with women across the tech spectrum, I certainly heard stories of obstacles and discouragement. But the overwhelming outlook is positive. It’s only a matter of time until the full impact of women in tech begins to be felt at all levels, adding depth and richness to a sector that is geared for an incredibly exciting decade.
I asked my interviewees to tell me about women they admire in their industry, what they believe women have to offer the tech world, and what the future will look like as our influence grows. Here’s what I found out. First, women aren’t tearing one another down. They are definitely cheering each other on.
Who do women look up to in tech?
It’s great to have role models at top levels of leadership in the technology field. Meg Whitman was a name that came up more than once in conversation. Julie Hamrick, Founder and COO of Ignite Sales, pointed to Meg’s early success at the helm of the world’s leading auction site. “For me, it’s the fact that she grew eBay to become a household name.” But it’s not just the wins that people find compelling about Whitman. It’s her attitude about adversity and challenges. CeCe Morken, EVP and General Manager of ProConnect at Intuit, also spoke about her admiration for the current CEO of Hewlett Packard Enterprise. “She so embraces learning from failure. One of the things she told us is that she now celebrates failure as much as she celebrates success in her all-hands meetings. These are just fast failures, experiments they learn from.”
But most of the women I spoke with didn’t choose a big name as a “shero” they look up to the most. They told me story after story of women they know personally who have inspired them. Charlene Schwindt, a software business unit manager at Hilti, put it simply. “I most admire some of the women I see and work with every day. When they complete a successful project, have big wins, get major status or an executive position on a board, that’s a huge achievement.”
Julie mentioned Valerie Freeman, CEO at Imprimis, as a role model. “She is one of those people who is doing well in business and doing good in the community.” Mary McNeely, Oracle Database expert and owner of McNeely Technology Solutions, spoke highly of peer advisory facilitator and talent development consultant Tanis Cornell as someone who showed that hard work and self-belief really can pay off. “She didn’t start out in tech, but she moved to technology sales, pulled herself up by the bootstraps, and overcame barriers to succeed.”
Jen Voecks is the founder and CEO of the tech startup Praulia, an online service that matches brides with wedding vendors. For her, the most inspiring thing to see is other women creating something new in the industry. She pointed to Molly Cain, former Executive Director of Tech Wildcatters, as an inspiration. “She built a lot of things herself.” Today, Cain is the acting Deputy Director of Digital Innovation and Solutions/Venture Relations at the DHS. Quite a remarkable achievement and certainly one that will make her a role model for many more women throughout her career.
How do women change the game within tech organizations?
There’s simply no substitute for having more perspectives for both innovation and problem solving. Charlene has seen the benefit of a diverse team in determining how to develop the projects under her direction. “What women bring to the table can be different. Often, consideration of how people work with technology is not really coming into play as it should during the development process. Even if you have people talking to the customer about what they want, everything is based on interpretation. With a cross gender team, you get a different result by having multiple views on the same thing.”
This is something Julie found true as well. “I’ve noticed when we have women on our teams we have better follow through and more creativity. They are good at filling in the gaps. Amidst all the ones and zeros, women see more of the gray, more depth.” That’s not just good for short term improvement. It’s also essential for long term viability. Tanis Cornell pointed out that economic and financial experts are catching on to the fact that women are good for business. “It’s been shown in study after study now that companies with a better gender balance on the management team perform better financially. Merrill Lynch and other firms are starting to pay attention. They are investing in and recommending companies with more balanced leadership at the top. It’s simply a good business decision.”
How will women influence the future of technology?
Women are bringing their power to bear in leadership, innovation, entrepreneurship, and more. The days when tech was developed through a primarily male lens are fading fast. That shift is bound to have an impact on what happens in the next five to ten years. Many women I spoke with mentioned the subtle but potent effect the female touch may have on the direction of tech. According to Julie, “I think things will become more friendly and useful. They will have more care to them, even in technology. Tech is more utilized by everyone these days. Going forward, there will be even more self-service, but the experience will have a more satisfying, human feel.” Mary echoed this sentiment, in terms of what it will take to succeed in the tech field and the world in general. “As the world becomes more roboticized, there’s also going to be a counter trend. Good intuition and people skills will become even more critical.”
CeCe Morken offered this advice for the current and coming generations of female innovators. “Look ahead and be aware of what’s coming. It’s changing faster than ever before and you need to find a way to grasp it.” Morken put her money where her mouth is recently by purchasing the latest virtual reality tech for employees to experience at work. Intuit is not looking to launch any products using that technology right now, but CeCe wants her people to be familiar with what’s available so they aren’t playing catch up later as innovation continues to accelerate.
Jen highlighted the importance of tech for changing the future of women as well. “Tech gives you a new platform. It allows you to reach a broader audience. As an inventor or business owner, you have the opportunity to grow faster and meet partners.” In essence, tech is democratizing the entrepreneurial space even more than before, ensuring that women can advance on their own terms even if the corporate world continues to change more slowly.
Women in tech must keep reaching for their dreams
Data scientist Dr. Meltem Ballan has faced her share of challenges in building a career in tech. But she offered encouragement to other women in their quest to rise to the top. “It’s not insurmountable. There is no ceiling. Just keep on going out there and doing it. Learn to network well, and have the courage to take that next step.” Mary McNeely agreed that the future is there for the taking. “What we get next is whatever we want. We are educated and empowered. Our star is rising.”
In some respects, Virtual Reality (VR) and Augmented Reality (AR) applications have been around for a couple of decades. But these never really went mainstream because of the cost and limits of existing technology. However, this is starting to change with the recent release of new VR headsets and AR glasses, and the development tools and ecosystems to support them.
At the O’Reilly Design Conference in San Francisco, Jody Medich, director of design for Singularity University Labs, argued that VR and AR are already being developed for mainstream applications, and will soon have a significant impact on web application development. She said, “Developers and designers need to think about how to enable their organizations to use these when they come.” Games are proving to be an early adopter, but more significantly she sees VR being used to improve travel experiences, education, sales, communication, and office productivity.
Understand the landscape
The Oculus Rift and HTC Vive are getting the most press, owing to their high-performance VR rendering in a modestly priced package. Other efforts, like Google Cardboard, offer a more cost-efficient option that can bring virtual worlds to high-end smartphones. These are not just being used for games. One surgeon, Dr. Richard Burke at Nicklaus Children’s Hospital in Miami, was able to use Google Cardboard to visualize and quickly execute a complex heart surgery that would not otherwise have been possible.
Medich argues that VR is a subset of augmented reality in which the view of the outside world is occluded. High-end AR adds a layer of new information on top of the existing world, which is a little more challenging to line up. Early versions of AR simply overlaid information from the real world onto real-time maps using GPS. She said, “The reason we don’t think of it that way is because the developer burdens the user with connecting the dots. As a result, the user has to hold all of the function in their brains to make the transition.”
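The GPS-style AR Medich describes comes down to a small amount of geometry: compute the distance and compass bearing from the user’s position to a point of interest, then map that bearing into the camera’s field of view to place a marker. The sketch below illustrates the idea; the function names and the 60° field of view are illustrative assumptions, not taken from any particular AR toolkit.

```python
import math

def haversine_m(lat1, lon1, lat2, lon2):
    """Great-circle distance in meters between two GPS fixes (haversine formula)."""
    r = 6371000.0  # mean Earth radius in meters
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dp = math.radians(lat2 - lat1)
    dl = math.radians(lon2 - lon1)
    a = math.sin(dp / 2) ** 2 + math.cos(p1) * math.cos(p2) * math.sin(dl / 2) ** 2
    return 2 * r * math.asin(math.sqrt(a))

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing (degrees clockwise from north) from point 1 to point 2."""
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dl = math.radians(lon2 - lon1)
    y = math.sin(dl) * math.cos(p2)
    x = math.cos(p1) * math.sin(p2) - math.sin(p1) * math.cos(p2) * math.cos(dl)
    return (math.degrees(math.atan2(y, x)) + 360) % 360

def marker_offset(user_heading_deg, poi_bearing_deg, fov_deg=60):
    """Horizontal position of a POI marker across the camera view, scaled to
    [-1, 1], or None if the POI falls outside the field of view."""
    # Signed angle between where the user is looking and the POI, in (-180, 180]
    diff = (poi_bearing_deg - user_heading_deg + 180) % 360 - 180
    if abs(diff) > fov_deg / 2:
        return None  # off-screen; an app might draw an edge arrow instead
    return diff / (fov_deg / 2)
```

A phone app would feed `marker_offset` the compass heading from the device’s sensors each frame; the burden of “connecting the dots” that Medich mentions is exactly this translation step, done for the user instead of by the user.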
This could be as simple as Uber showing a user nearby cars, or as complex as the rich gaming environment created for Pokémon Go. New interfaces like Microsoft’s HoloLens and Magic Leap are just around the corner, while the Epson Moverio is already being used for high-end industrial applications.
Meanwhile, Google’s Project Tango aims to embed better AR capabilities into high-end smartphones like the Lenovo Phab 2 Pro. Wayfair is already using it to let consumers measure a room and virtually place furniture in it before purchasing. Medich said this improves customer satisfaction and reduces returns.
VR and AR hold a lot of promise for improving educational experiences of all kinds. Stanford has been doing research with STRIVR to let football players practice game plays and build muscle memory. Highly specialized doctors are finding that VR makes it easier to bring a much wider audience of students into their operating theaters than is possible in real life. Meanwhile, students in Africa are using Google Cardboard to visit places their schools could not otherwise afford to send them.
Airbus is training technicians to perform complicated repairs on expensive equipment in a virtual environment, where mistakes are cheap and safe, until they become experts. This has led to big gains in productivity and significant cost savings.
It’s not just for teaching students, either. Amnesty International created a visceral experience of the bombings in Syria and showed it to people on the streets of London. This raised the campaign’s contribution rate by 20% in one afternoon.
Reducing the user burden
The real promise of VR and AR lies in reducing the burden on users of connecting the dots between the real and virtual worlds. With most GPS applications, users have to do a lot of context switching between applications, or between applications and the physical world. There is considerable work on building repair applications that guide technicians through complex repairs without forcing them to look away to consult a physical manual.
Microsoft and Autodesk are working on a workflow for the HoloLens that reduces the translation required among property owners, architects, builders, and inspectors. In the traditional workflow, architects must create 2D diagrams that can confuse developers. After a building is approved, builders must translate those diagrams into an actual building. Medich said, “A lot gets lost in the translation. If they build it they can inspect to see if something lines up or not, and then later down the road they have an easier way to fix it.”
AR could also radically transform office apps. Medich noted that the average user can spend hours a day switching contexts with the traditional keyboard-and-mouse user interface. A new generation of VR-enabled office apps could interpret the context of what a user is doing to reduce the number of clicks and keyboard shortcuts required to do office work. She said, “These new technologies do a lot of translation and add something for humans.”
VR and AR are still in their early stages, and now is the time for developers to learn more about the technologies and their practical implementation. Medich said, “It is not too late to get started. We still have a couple of years until saturation. The next couple of years will be a little disappointing. We are trained to think in linear ways where things change a little gradually. But especially around technology we see a doubling every two years. At first this is disappointing because these changes don’t match up with our linear experience. But when the technology reaches an inflection point then we will see a complete explosion.”