We are engineering product directors for the Microsoft HoloLens and Trimble XR10 mixed reality headsets. Come ask us anything about HoloLens, AR/MR/VR technology, your DIY projects, or whatever your heart desires!
Hi Reddit, D'Arcy and Jordan here!
D'Arcy is the Senior Director of Commercial Engineering for HoloLens and Mixed Reality at Microsoft. He's posting as /u/darcyjs14.
Jordan is the Senior Business Area Manager for mixed reality at Trimble, including the Trimble XR10 hardhat-integrated HoloLens device.
Together, we've been working in the AR/MR/VR tech space for over 14 years. We've seen it grow from a fun "proof-of-concept" to wide-scale deployment throughout a number of enterprises. More than anything, we're tech nerds and love to chat with people about what we do!
We'll be online for a few hours (starting Noon EST) to answer questions but will try to respond to everyone in the coming days.
Ask us anything!
If you're interested in learning more about HoloLens 2, check out Microsoft's official site here.
If you're interested in learning more about or getting a demo of the Trimble XR10 with HoloLens 2 field-ready device, check out Trimble's official site here.
[Update, 3:20pm ET] Hey guys, we're going to sign off for a bit! Feel free to add more questions and we'll come back over the next couple of days to get to them all. Thanks for the discussion so far!
I love my job and anticipated this question. I also anticipate that Alex Kipman, whose office is directly behind my desk, will have some strong words if I reveal product futures. So let me try to thread this camel through the eye of a needle:
- Weight is something that Alex, the Hardware team, and the ID team are thinking about all the time. The main issue with weight is comfort for the user. The tradeoffs are that we also want longer battery life (bigger batteries = more weight) and more powerful graphics (more CPU = more heat + more batteries; heat either needs venting or a heat sink, and a heat sink = weight). Between HL1 and HL2 we made material improvements to comfort by changing the fit system and weight distribution, even as the weight stayed largely the same. As we plan for future products, weight is always at the top of the list of design trade-offs.
- More powerful? In general, all computing devices, especially devices like ours that render beautiful graphics (if I say so myself), get more powerful across generations. CPU/GPU/AI processor/memory BOMs all improve in the flagship SKUs. Look at the Surface product line (or, for that matter, Apple's iPhone and MacBook product lines) and you'll see that the standard-bearer always tries to push the performance envelope (while hewing to other considerations and constraints like weight, cost, heat, comfort, reliability, etc.).
- Longer battery life? Yeah, the Achilles' heel of all cordless products. I love my new iPhone 12 Pro Max but still don't get a full day on a single charge. Batteries are hard, as great graphics mean a powerful GPU. Outdoor use means you might need very bright displays to overcome sunlight, etc. We try to figure out the scenarios the majority of HoloLens buyers use the device for and then tune the performance to meet or exceed those scenarios. There are scenarios that are more than one deviation from the norm: we have inspection engineers who use the device for 8 hours at a stretch, and the batteries don't last that long. You can use external batteries to extend a working session. I would expect we'll look at options like the kinds of piggy-back batteries that exist today for the iPhone, so that you could "snap on" an extended battery pack if you need it and are willing to carry around the extra weight those batteries entail.
- Will the FOV and resolution be greater, the refresh rate higher? I think you can continue to expect big generational steps in display technology.
- Please understand that I can't answer that last one if I want my badge to work tomorrow.
D'Arcy with the novel over here :P
I'll add on a few things.
From the XR10 side, weight was even more of a challenge, given that it's strapped to a hardhat which already has weight and rides high on the head. What we've found through our research is that, while weight is important, it's actually weight distribution that people notice the most. Think about a motorcycle helmet that's 3x as heavy as a HoloLens, but its center of gravity is in the middle of your head, so you don't notice it much. The hardhat we made for HL1 was pretty front-heavy and made you feel like a toddler with undeveloped neck muscles. We focused a lot of energy on distribution for the 2nd generation and think we got it about as close to perfect as we could, given the hardhat compliance limitations of OSHA and the like...
We have lots of customers who tether a battery pack and run the cord into their reflective safety vest, getting a full day's use out of the device. Personally, I think that batteries are the #1 source for potential breakthrough in the next decade. They're the bottleneck in so many places.
What 3rd-party sensors (such as thermal imaging, vibration detection, or hyperspectral imaging) are you working with to bring in additional information to overlay on the world? Seems like this could be very useful in industry for things like leak detection and structural inspection.
Agree with your assessment! I think there's a lot of runway for innovation by augmenting (pun intended) a device like a HoloLens with more external sensors / processing sources. The device itself is already quite powerful (depth sensors, fish-eye cameras, etc.) in understanding the world. Feeding that sensor data to the cloud (check out Azure Object Anchors or Spatial Anchors) or adding even more input data (e.g. a FLIR sensor or external hand trackers) adds even more value for specific applications. Trimble and Microsoft have an extensive partner community developing all kinds of these types of applications/integrations on HoloLens/XR10.
- When do you think Mixed Reality smart glasses like HoloLens will be affordable and commercially available to everyone?
- Will HoloLens ever reach a field of view of 180 degree? What are the current challenges in achieving this?
See my reply to u/my_hands_are_diamond on #1
In re #2: is 180 degrees needed? Assume for a moment that a 180-degree FOV is more expensive to build and consumes more power than a 150-degree FOV, so any device with a 180-degree FOV is going to cost more and have a shorter battery life than a device with a 150-degree FOV, holding all other variables constant. Now, it's also my understanding that the human brain can process about 120 degrees of FOV. If we can't see more than 120 degrees, what benefit would 180 give us? We would have to deal with the higher cost of the BoM, the increased power budget, and getting rid of the extra thermals. Are there user benefits to 180 over 150? I don't know enough to give you a definitive answer, but in the world of the trade-offs we have to make to build product, if it doesn't benefit the user directly, it's going to get scrutinized by a lot of people who would like to spend that money, heat, weight, and power elsewhere in the product, or not spend them at all.
I'm learning new things in here, too. Love this reply.
And now a bit futuristic question:
Do you think we will live to see XR being projected in such compact forms as contact lenses?
Yes, I think we will.
[Edit: how old are you?] :)
24, should have mentioned, haha
Yes, I think we will.
Greetings. Indie VR dev here. What are the app store options for the HoloLens? If I wanted to start developing for the HoloLens, what are the market opportunities, and how hard is it to get an app released for the HoloLens?
Hey there. It's really pretty straightforward, just like releasing any app for Windows or a mobile App Store. We create a couple of different software products (Trimble Connect, SketchUp Viewer). Ours are written in Unity, though you can also use Unreal (and others) to develop for HoloLens. The HoloLens has a built-in app store on the device that any dev can submit apps to for others to download/install. Here's a good place to start.
What options are there for live sharing, e.g. so others in the same room can share what you see - Chromecast direct from the device? How do you handle latency issues?
There are a few different mechanisms to achieve this.
A user can connect to their HoloLens over IP in their web browser. There's a tab in there that lets you live stream in the browser. You can then just HDMI to a monitor or projector. This method works pretty well but is at the mercy of your WiFi network and what other traffic is going across it at any given time.
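For the curious, the in-browser stream described above is served by the Windows Device Portal. A minimal sketch of building that stream URL, assuming the documented `/api/holographic/stream/live.mp4` Mixed Reality Capture endpoint and its `holo`/`pv`/`mic`/`loopback` flags (verify against your OS build, as the API surface can change between releases):

```python
from urllib.parse import urlencode

def mrc_stream_url(device_ip, holograms=True, camera=True, mic=False, app_audio=False):
    """Build a Device Portal Mixed Reality Capture live-stream URL.

    Endpoint path and query parameter names are taken from the Device
    Portal docs; treat them as assumptions and confirm on your device.
    """
    params = urlencode({
        "holo": str(holograms).lower(),      # render holograms into the stream
        "pv": str(camera).lower(),           # include the photo/video camera feed
        "mic": str(mic).lower(),             # include microphone audio
        "loopback": str(app_audio).lower(),  # include app (hologram) audio
    })
    return f"https://{device_ip}/api/holographic/stream/live.mp4?{params}"

# Paste the printed URL into a browser tab after signing in to Device
# Portal, or hand it to a media player that supports HTTP streaming.
print(mrc_stream_url("192.168.1.42"))
```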
Our preferred method is using one of these guys. Plugs directly into the HDMI port on a monitor/TV/projector and creates its own WiFi network that you can connect the HoloLens to. You can do the same thing (direct wireless connection) on Surface.
If you have remote users you want to collaborate with you can do one of the above + a Zoom/Teams call with screenshare. You can also use something like Dynamics 365 Remote Assist to have a remote user "see through your eyes" to see what you're working on or help you through a task.
When AR becomes more accessible to the public, what do you think will be the "killer app" that will boost it into the mainstream?
If I knew what the "killer app" would be, I definitely wouldn't be posting it publicly on Reddit :)
My answer is much more boring. I think that the boost to the mainstream will happen not with a killer app, but with utility. There are hundreds of millions of people wearing Apple Watches today and there is no killer app for the watch. Rather, it is a piece of tech that seamlessly merges into your everyday to provide you contextual information as you need it. Messages, phone calls, music, weather, clock, calculator, etc. IMO, the first mainstream devices will be more AR heads-up-displays versus full merged reality MR devices. The public will buy-in simply for the improved utility of having all of your information right in front of your face when you need it, plus some basic stuff like driving directions, etc. Once that's commonplace, you'll start to see the consumer AR world (e.g. Apple AR) and enterprise MR world (e.g. HoloLens) smash into each other. By the time "killer apps" come about, AR/MR will already be mainstream.
I wonder if I'm the outlier here. I've bought two generations of the Apple Watch and I'm done. I find it useful for nothing, and as a watch it's not better than any of my legacy watches. I absolutely hate charging the thing, and I have yet to find a single thing it does that I use on a regular basis, or even that it does reliably on a regular basis. I wear it mostly out of habit right now, and it's part of a small bunch of recent Apple products I'm completely indifferent to (in addition to being meh on Watch, I also prefer Roku to Apple TV 4K; anything to Apple's TV service; Sonos to HomePods; and Apple's earbuds have never fit my ears). I don't know if Watch is really that useful to people or if it's more of a signaling device for people to show each other "hey, I'm part of the cool digital tribe too." Might be why I wear mine too, because it's not like I've flipped back to legacy watches. I'm unconvinced that the Apple Watch is legit useful. And I'll probably still wear mine because I like the orange strap I bought for it.
sir, this is a wendy's
How do AR & VR frameworks manage eye focus accommodation miscues? BTW cool Trimble is in this space.
Hi, and thanks for the question. If I've understood the question correctly: we can autodetect new users and run them through eye calibration, or users can trigger eye calibration manually if eye gaze seems off. If I didn't answer the question you asked, feel free to clarify and I'll take another run at it.
Correct me if I'm wrong, but I think a recent OS update made eye calibration automatic for new users.
It's not 100% the same as eye calibration, but AEP is what you're thinking of. And we added it in November.
Improve visual quality and comfort | Microsoft Docs > Auto Eye Position Support
Thanks for the clarification!
I'll hop in and agree that Trimble is super cool for being in this space.
please keep paying me
Maybe I'm just not finding it on the website, but does the HoloLens 2 require another device to function/do all the heavy lifting or does it have its own built in system and if so, how powerful is it?
Great question. The HoloLens 2 is a completely self-contained Windows 10 computer. It doesn't require any tethering (wired or wireless) to any external source. With that said, you can use cloud computing on HoloLens to enhance the capabilities of the device. As an example, check out 'Azure Remote Rendering'. ARR offloads all of the rendering to the cloud and makes the amount of data (e.g. number of polygons) you can load on a HoloLens near limitless.
If you go to this page and scroll down about halfway you'll see a button that says 'Show all tech specs'. That will give you the details on the processors, RAM, etc.
Thanks for the reply! The concept of rendering in the cloud sounds really promising!
It's really unbelievable what it's capable of today and with (next to) no lag/latency on a device that's streaming 60 FPS. This video that my colleague Rene posted shows it in action running an 18 million polygon model (versus the onboard compute being able to render 500k-1m).
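To get a rough sense of why 18 million polygons won't fit on-device, here's a back-of-envelope memory estimate. The 32-byte vertex layout and the vertex-sharing ratio are illustrative assumptions for a typical indexed mesh, not HoloLens internals:

```python
# Rough GPU-memory sizing for the model sizes mentioned above.
BYTES_PER_VERTEX = 32  # 3 floats position + 3 floats normal + 2 floats UV (float32)
BYTES_PER_INDEX = 4    # 32-bit triangle index

def mesh_size_mb(triangles, verts_per_tri=0.5):
    """Approximate memory for an indexed triangle mesh, in MB.

    verts_per_tri ~0.5 assumes heavy vertex sharing, which is common in
    real meshes; the no-sharing worst case would be 3.0.
    """
    vertices = triangles * verts_per_tri
    total_bytes = vertices * BYTES_PER_VERTEX + triangles * 3 * BYTES_PER_INDEX
    return total_bytes / 1e6

for tris in (1_000_000, 18_000_000):
    print(f"{tris:>12,} triangles ~ {mesh_size_mb(tris):.0f} MB")
```

Under these assumptions, the 1M-triangle onboard ceiling is a few tens of MB while the 18M-triangle model is roughly half a gigabyte of mesh data alone, before textures and framebuffers, which is the gap cloud rendering bridges.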
Cloud rendering is great in theory and in practice. So long as you have a good network connection and low latency, you're golden. The thing to remember is that your software developer needs to implement this. It isn't something that you as a user can pick. Jordan needs to implement Azure Remote Rendering in one of his future products. If you use any of Trimble's HoloLens software products, you need to let him know that you want Azure Remote Rendering so that he moves this up his backlog ;-)
Thanks for hosting an AMA!
1. I'm wondering about the latency of remote rendering, especially since the user or environment might be moving while data is sent to the server, processed, and sent back. Do you find that it's useful for the server to make predictions about the user's movement? Or for the device to make final corrections on the servers output before displaying it?
I've done some cloud gaming and it's usually pretty smooth, but I could see where dealing with the physical 3D world could require even lower latency or higher detail.
Edit: I see that the documentation mentions head pose prediction but not making corrections after remote rendering.
2. Do you see MR devices being used mostly alone or as groups? Do you see a role in local mesh networking between devices to improve the accuracy of sensors or the number of polygons each device can display?
3. Do you expect compression techniques to improve for live streaming of 2D or MR data? Looking up some quick numbers, your video below mentioned about 16Mbps for a particular model (30fps, two eyes), Netflix 1080p (probably 24 fps) is 5Mbps, Stadia 1080p (probably 60 fps) is about 10Mbps. Netflix has an advantage in that they can spend more time preprocessing and the future is already known. Increased framerate requires more data, but not linearly because the changes per frame become smaller and more predictable for each marginal frame. On a MR device, you could potentially accept an intermediate dataset that allows greater compression because you have enough processing ability to finish final steps that expand the data. So I'm not familiar with advanced compression techniques and don't have an intuition here, but it seems like there's room for noticeable improvement. Do you agree? And do you see this improving soon? Or will it only be a focus as consumer use becomes more common?
Thanks for the questions.
- You found the same documentation that I did. This link has some info about bandwidth / latency (sorry, for some reason it won't let me hyperlink). I'm guessing that anything not publicly noted in these blogs is probably getting a little too detailed to share. https://docs.microsoft.com/en-us/azure/remote-rendering/reference/network-requirements#:~:text=Minimum%20requirement%20for%20Azure%20Remote,downstream%20and%2010%20Mbps%20upstream.
- I see both scenarios. I think right now MR is a very lonely experience. As more devices become commonplace (both in enterprise and consumer worlds), we can start to leverage the power of the "Mixed Reality Cloud" or what Magic Leap calls the "Magicverse". Essentially a shared virtual environment where, regardless of device, anyone can enter to collaborate. I use the analogy to the upside-down in Stranger Things. It's all around you, but you just have to go through a portal to get there. To achieve this world you have to aggregate the sensor data coming from all of these devices, similar to building a network for autonomous vehicles where each node is both leveraging the network and contributing back to it. This is not to mention any of the more 'remote' collaboration type scenarios, i.e. "Zoom in 3D". Check out the company 'Spatial' and their app, if you're unfamiliar.
- This is way above my head and probably a better question for some folks at Microsoft working on these types of remote rendering algos. Sorry!
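For what it's worth, the questioner's per-frame comparison can be made concrete. All bitrates and frame rates below are the ones quoted in the question; splitting the HoloLens figure per eye is the question's own assumption:

```python
# Bits delivered per rendered frame for the streams compared in the question.
streams = {
    "ARR on HoloLens (per eye)": (16e6 / 2, 30),  # 16 Mbps over two eyes, 30 fps
    "Netflix 1080p":             (5e6, 24),       # ~5 Mbps, assumed 24 fps
    "Stadia 1080p":              (10e6, 60),      # ~10 Mbps, assumed 60 fps
}

for name, (bps, fps) in streams.items():
    kbits_per_frame = bps / fps / 1e3
    print(f"{name:28s} {kbits_per_frame:6.1f} kbit/frame")
```

The per-frame budgets end up within a factor of two of each other, which supports the question's intuition that higher frame rates don't cost linearly more bandwidth.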
What fields besides entertainment are likely to first invest and benefit from Merged/Mixed Reality technology? Medical professionals? Architects? Is there a lot of ongoing software development between various fields?
At a very broad level, any industry that uses data (particularly 3D data) that could benefit from visualizing it in the context of their world while keeping their hands free.
A surgeon overlaying a CAT/MRI scan over a patient while operating.
A university student interacting with a holographic cadaver to learn how the body works.
An HVAC technician on a commercial construction project visualizing their CAD design overlaid on the environment to make sure it'll fit / work as intended before they send guys to site to install it.
An architect virtually teleporting to their client's yacht to walk them through the design of their new office space so that they make changes today and not 6 months from now once the carpet is already laid and the sinks are put in.
A novice technician on an offshore oil rig trying to figure out how to resolve an issue on a pump that has malfunctioned, calling in a remote expert via video and seeing a 3D step-by-step guide on how to fix it.
A worker on the Ford assembly line getting real-time feedback from their headset on how to assemble a part and whether or not they're doing it to spec before it pushes to the next worker.
....I can keep going :)
Microsoft and Trimble have a vast partner network creating applications and offering services for all of these different use cases. Check them out here.
Sorry, I’m not an expert by any means but will this be a retail product? If so, will there be connectivity to other Microsoft products? Thanks!
No worries. Both products have been out for just over a year now. They're mostly aimed at B2B enterprise customers. HoloLens 2 retails for $3500 and the XR10 for $4950. You can see more info here.
I work in AR for education (K12) - I get asked a lot about headsets and when we’ll see them in schools. I’m always telling people that it’s really unlikely, at least in the next 5 years or so. Is the use of AR headsets in a school environment discussed much at Microsoft/Trimble?
I think AR/MR tech is going to be huge for education. Given that today's devices are mostly aimed at enterprise, we're mostly seeing EDU opportunities in higher-ed / trade schools / training centers today. Trimble has a whole program to set up Technology Labs at universities around the world to make sure students studying things like construction management, architecture, and surveying (to name a few) are working with the latest and greatest tech. As you can see in the main photo on that page, AR/MR is always a big hit.
Another example that comes to mind is the work done by the Cleveland Clinic + Case Western Reserve around HoloLens. Imagine being able to teach your med students anatomy and physiology on a holographic cadaver showing functional organs, blood flow, brain activity, etc. Even if they're sitting at home due to COVID. And no formaldehyde.
I think AR/MR will be huge for K12 as well, once the devices become cheaper and more ubiquitous. This is where I look to the Apples and Facebooks of the world to probably enter the market with more consumer-focused devices that push this side of things forward. In the meantime, the best thing you can do is get K12 students thinking about data in 3D. SketchUp offers all kinds of programs for K12 and is a great place to start.
My main problem with my HL2 is the weight. 500g is still a lot. Is there any chance you would consider taking the battery out in the HL3 and put it in your pocket instead? Similarly to the ML1. That would help so much with comfort for long hours. Thanks for the AMA!
From my (Trimble) perspective, the main limiting factor today is the wire. For industrial applications, which is mostly where the HoloLens is used today, any kind of dangling wire is a major safety hazard for a number of reasons. I think "tethering", in a general sense, is the future of this technology. It just has to be wireless.
can you explain what this does, and what are the steps to learn how to make apps for this?
Sure! Mixed reality headsets (like HoloLens) are essentially wearable computers sitting on your head. They are see-through, meaning that you can still see your outside environment (unlike virtual reality) but the display you're looking through is feeding you information.
Unlike something like Google Glass, which is essentially just a 2D screen very close to your eye, mixed reality displays overlay content in full 3D. So, for instance, I could be sitting here at my desk and have a holographic coffee mug sitting on it. Nobody else would see it except for me, because it's being shone into my eye through the display I'm wearing.
These devices have the ability to "see" the world through a variety of sensors like cameras and LIDAR. This enables three main things:
- "Mixed" reality: virtual objects interact with the real world. The device knows my desk is here and it won't let the holographic coffee mug fall through it.
- Persistence: if I place the holographic coffee mug on my desk and then walk to the other side of the room, the mug will still be on my desk. If I walk a mile and come back, it'll still be there.
- Interaction: if I reach out with my hands, I can grab the coffee mug the same as I could if it were actually real. I can move it around, turn it over, make it bigger, etc., all with hand gestures
You can see some of my other replies on this thread about the practical enterprise applications for this type of technology.
If you're more of a visual learner, this is a great video overview.
If you're interested in learning more about developing for HoloLens, check out this link.
Hi, thanks for doing this ama!
Recently, I have been playing around with an Oculus Quest program called Custom Home Mapper. It lets you map out your apartment and then "game-ify" it, so your living space becomes a minigolf course, archery range, other stuff. It's really just a prototype, a solo dev I think, but the concept is fantastic.
I wonder if you can share any other similar kind of work being done? Like projects that really take advantage of the player's living space, blending the real geometry and layout with virtual experiences. I'm just fascinated by the potential and have only had a small taste.
What should we expect gaming to look like in the future?
That's awesome. I love it.
I'm not a big gamer (and frankly there aren't many games for HoloLens anyway, since it's an enterprise device), but the first thing that comes to mind is the app RoboRaid that was on HoloLens 1. It was basically an alien-shooter game that mapped your environment and then used it as the battlefield. Alien robots came out of your actual walls, hid behind your furniture, etc. Really pretty fun, if only a simple introduction to how space mapping works.
On the enterprise side, our apps do some pretty interesting "room interaction" stuff. Our SketchUp app enables a user to pull from the millions of models in 3DWarehouse and place them around their room. So imagine trying to figure out what your house would look like with different types of Ikea furniture or something like that, being able to check "will it fit" and manipulate the pieces as holograms before going and buying stuff / spending time putting it together. My dad does bathroom/kitchen renovations for a living. He models his designs in SketchUp and then pulls them up in HoloLens, letting his clients walk through their "new" space before he's even started building anything.
Our Trimble Connect app is aimed at very similar use cases, but for onsite construction. Imagine being an HVAC contractor with the ability to overlay your CAD design at 1:1 scale onsite to make sure it's going to fit/work as intended. You're essentially doing a real-time virtual:real clash detection before any work has begun. A huge time/cost saver if/when you find issues that you otherwise wouldn't have found until you started building.
The biggest challenge to the Hololens in genuine construction scenarios (aside from sunlight) is the lack of integrated high accuracy GNSS for the exact positioning of surface and sub-surface assets.
I think this very much depends on your definition of "genuine construction scenarios".
A civil contractor wanting to visualize cut/fill maps overlaid? Yes, we'd need GNSS and more ingress protection and sunlight blocking and a higher thermal range and a number of other things.
A plumbing subcontractor, under the shelter of a building, visualizing his design to ensure fit and guide his install? Perfectly feasible today, though we still have plenty of other challenges to solve. GNSS wouldn't work under the canopy, anyway.
I'd love to hear about what types of use cases / scenarios you're thinking about.
Should consumers get excited about the work Microsoft is putting in AR/MR tech, or will all your products within the foreseeable future be targeted towards the professional market?
Yes, of course they should. Every dollar that gets spent on AR/MR tech by large enterprises / government / military is another dollar spent on advancing the technology to eventually get cheaper/smaller/lighter/etc. to be able to serve a consumer audience. 20 years ago, AR headsets required a backpack full of computing power. Now they're self-contained and light enough for a person to wear on their head. That's progress. We'd never have a memory foam bed if it weren't for NASA...
You guys buying MVIS?
If we are, it's not something Alex has shared with me. And, even if he had -- which he hasn't -- I still couldn't talk about it because....
1) I'd face unpleasant consequences from Alex for breaking an NDA and for not taking seriously my commitment to protect the work of the people working on our program
2) I'd probably get fired by Microsoft and the SEC would get involved, as both companies are publicly traded. Seriously. We get annual training videos about not speculating about things like this
I do product, not M&A. u/JordanLawver, is Trimble buying MicroVision? Oh, wait, I guess the above applies to you too. Don't answer if you know, 'cause Alex will make both our lives unpleasant and with good reason ;-)
We wanted to buy them but we spent all of our money on $GME
All I'll say is that I appreciate the guy that bought an XR10 to tear it apart and look for the MVIS component. Every sale counts!
For architectural design applications, AR/MR seems very limited until environmental occlusion has progressed quite a bit. Can you comment on what is coming on this front?
I touched on this here
Thanks for doing this AMA! What industry do you think will benefit most from XR?
How do you suggest getting a job in the industry?
I'm doing a little AR project for college using Unity and Vuforia. Any cool suggestions? Currently doing a kids' math worksheet, but I would love to try some more stuff on my own, so any ideas would be cool.
Hey, thanks for your questions.
I answered your first question here.
D'Arcy and I touched on our experience getting to where we are today here and he also dug into getting a job at Microsoft here and here.
I'm not sure I have any great suggestions. I spend my days with tunnel vision on construction customers :) My best advice: think of something you do every day that might be easier if you had a data overlay. Or think about something you use and love on your computer or phone (2D) and adapt it to 3D.
What kind of apps would you like to see third-party developers working on? What's missing from the ecosystem right now?
This question reminded me of this classic Ken M.
In all seriousness, there is sooo much runway for AR/MR today. The partner community for HoloLens is growing rapidly. Startups are blowing up and getting stupid amounts of funding for simple ideas.
Find any small issue that a large enterprise has that could be solved or mitigated by heads-up hands-free 3D display of information. Hire a few devs. Prove the ROI. Become a millionaire.
As someone that works in construction tech, but not on the VR/AR side... how do you get this stuff on job sites without the guys using the products getting laughed into oblivion by the other trades? I can totally see it being useful... I can see your field mechanics/techs/laborers 100% not wanting to put something on their face and walk around a jobsite. It’s still too big and gaudy.
Ooh, this is a great question.
When HoloLens first came out, I used to get laughed off the construction site. Nobody wanted to be the nerdy guy with the headgear. We went and talked to the architects, instead. (no offense, architects)
Everything changed the moment we integrated it into a hardhat. As silly or simple as that may sound, the change was drastic. I would walk onto a site and every field guy wanted to be next in line to try it on. We shifted the perception from "let's see if we can get construction guys to try on this gamer thing" to "this is the hardhat of the future and we made it for you." We leaned into this even more as we evolved the hardware, focusing on things construction workers cared about, like audio systems that work in high-ambient-noise environments, intrinsic safety, accessory mounts for their chin straps / earmuffs, etc. Every time I move to the next feature bullet point on the PowerPoint slide, you see their eyes light up, realizing that this is actually purpose-built and not some adaptation.
For anyone who was still on the sidelines, they pretty quickly shift their mindset once they put it on. Our goal in construction is to democratize the model. Merge the digital (design) with the physical (as-built), empowering every field worker with the model rather than just the guys wearing a dress shirt under their safety vest. That resonates, in my experience.
Beyond that, if there's still someone on the sideline, holding out because they don't want to wear the weird Halolenz thing, they're eventually going to give in once they're losing business / profit margin to their competitors who have embraced it. AR/MR tech (among many other tech innovations) is coming to construction, whether these companies like it or not. Get on or get left behind.
Here's a video from the first time we walked onsite with a hardhat integrated HoloLens. See the reactions for yourself.
As someone on the front end of tech, I can completely understand why someone would be apprehensive about wearing a face computer. When we started giving private demos in 2015, audiences had two reactions before the demo: "please can I take a photo with the face computer so I can show my kids?" or "no way would I ever want someone to see me with this on." The split was probably 80/20. After the demo, everyone wanted a picture of themselves wearing the future. Post-demo, almost everyone became evangelists. Yes, there were a handful of smug know-it-alls who said we'd fail because the FOV wasn't big enough or that it was too expensive or too heavy (to which I'd reply, "I get it. You should buy one of the other fully self-contained holographic computing devices with a larger FOV that are lighter and less expensive.")

I worked ConAg with Jordan in 2017 and we had everyone from CEOs of big contractors to family paving companies come check us out. Some were skeptical about face computers, but those who were curious enough to stay for a demo were converted. In my experience, everyone who sees it with their own eyes is converted. Once you know what it can do, you want it, and you no longer worry about what someone else thinks, because you just got something like a superpower, and then you'll show others and they'll get it too. I took the first-generation hardhat to a gold mine in remote Mexico to do some product research about whether it would perform in sunlight sitting in one of those massive shovels. Everyone, from the shovel operators to the dump truck drivers and the dudes who change the massive tires to the geologists, wanted this device, and scenarios about how they'd use it tumbled out of them.

It's adoption itself that moves more slowly. You need the right 3rd-party line-of-business apps, and they need to be written for 3D workflows; you need budget; IT has to learn how to deploy and manage; you need time to train users; you need to figure out how you're going to measure ROI.
Even so, in our target segments, customers struggle to prioritize which use case to tackle first because they have multiple scenarios across their businesses. Once adoption happens, it's just another piece of kit, like hearing protection or steel-toed boots, that you put on to do the job.
Agreed. Software is the biggest component. There are so many software packages out there; how do you get yours made for that application? Everybody uses different stuff. And I'm not just talking about BIM models. It's Bluebeam, it's Adobe Acrobat, it's Sage, it's Viewpoint, it's the mobile apps (including the one I work for) for whatever they're using that app for (timekeeping, in my case)... Then you get to the specialty trades, and they all have software that's specific to them. Glad to see it moving forward though, honestly.
Check out 'Trimble Connect'. We're building the glue that brings this all together. It has support for everything you listed. Our main HoloLens app is driven by Trimble Connect in the back-end.
Love it. I spent years in the field before getting into the Construction tech side. The ability to connect field to office via video and to transpose things into your purview that you are building is a game changer IMO. So many times guys have to go back to the job trailer to look at plans... So many times guys are on the phone trying to describe something they're looking at on the phone and sending you pics, etc. and you just don't have everything you need to help them. Flip on video and show them your view! It cuts down on so many conversations about specific things you're looking at. Just seems like a lot of potential.
So how do I get a hard hat with a built in HoloLens?
You nailed it. There's really nothing that compares to MR for this type of visualization. And yes, the collaboration piece is huge, too. Not only am I visualizing an overlay, I can bring others in remotely to see what I'm seeing without them even having to come to site. Revolutionary tech.
The hardhat integrated HL2 is called the 'Trimble XR10 with HoloLens 2'. You can see more about it on this page. If you're serious about buying you can do it right on that page. Depending on where you're located we probably have a local dealer near you who could give you a demo.
Do you have a recommended partner or approach to accurately track a controller / accessory while using a Hololens 2? There seems to be a lot of investment in getting tracking for objects by matching meshes, or landmarks, but sometimes you want something that is in your hand.
Like, I want to track a smart screwdriver PERFECTLY. I can add hardware to it to make it work, but I don't want to reinvent the wheel when it's "pretty close to solved" for other VR based headsets.
D'Arcy might have a little more knowledge on this. The closest thing I'm aware of is what Holo-Light is doing in Germany.
One more question:
iPhone 12 Pro has a LiDAR scanner which allows it to dynamically occlude virtual objects with real ones. I haven't seen this being used much on HoloLens 2 (I'm guessing it doesn't work sufficiently well). Is this being worked on to be more precise on HoloLens 3?
Good question. This actually does work pretty well on HoloLens. If I recall correctly, most UI panels in the native OS occlude behind the real world. Speaking for our apps (like Trimble Connect), this is something we'd love to implement but just haven't gotten to yet. It definitely adds to the 3D "mixed" nature of the experience. I don't think there's anything stopping us from doing it, other than the million other great ideas we have :)
I just checked on my HoloLens 2. OS UI is not dynamically occluded, only occluded by static geometry.
As shown here. iPhone (or in this case iPad) can dynamically occlude virtual objects with moving real objects (people in this case).
Ahh okay, got it. Missed the 'dynamic' part. I think that's probably a matter of the refresh rate of the depth sensor. Don't quote me on this, but I think the mapping refresh rate on the new Apple LiDAR is 2x the HoloLens (120 vs 60). I don't see why it wouldn't work on HoloLens, but it just would be a bit slower to refresh dynamically.
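For readers curious what "dynamic occlusion" actually means mechanically: the core of it is a per-pixel depth test between the rendered hologram and the latest depth-sensor frame, and how well it handles moving objects comes down to how fast that sensed depth map refreshes. Here's a minimal NumPy sketch of that depth test (the function name and toy data are hypothetical; real implementations run this on the GPU, not the CPU):

```python
import numpy as np

def occlude(virtual_rgba, virtual_depth, sensed_depth):
    """Hide virtual pixels that fall behind the real world.

    virtual_rgba:  (H, W, 4) rendered hologram with an alpha channel
    virtual_depth: (H, W) depth of each virtual pixel, in meters
    sensed_depth:  (H, W) latest depth-sensor frame, in meters
    """
    out = virtual_rgba.copy()
    # A virtual pixel is occluded when a real surface is closer to the eye.
    occluded = sensed_depth < virtual_depth
    out[occluded, 3] = 0.0  # make occluded pixels fully transparent
    return out

# Toy 2x2 view: a hologram 2 m away, with a real object at 1 m
# covering the left column of the frame.
rgba = np.ones((2, 2, 4))
vdepth = np.full((2, 2), 2.0)
sdepth = np.array([[1.0, 5.0],
                   [1.0, 5.0]])
masked = occlude(rgba, vdepth, sdepth)
print(masked[:, :, 3])  # left column hidden (alpha 0), right column visible (alpha 1)
```

Run against a static spatial-mesh depth map this gives the "static geometry" occlusion described above; dynamic occlusion is the same test fed with a live sensor frame every update, which is why the sensor's refresh rate is the limiting factor.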
I work in AR, and have seen zero demand for wearables in the real market.
Right now an iPhone or iPad with lidar can do everything anyone needs with AR, and do it well.
How do you compete with that? What's your plan to build demand? Or even familiarity for that matter?
As someone who spends my days selling these by the droves, I'd have to say you're probably just not looking in the right spot. Perhaps we have a different perspective on what the "real market" is.
XR10/HoloLens is the most capable / advanced device in the AR/MR market (hence the price tag) for many reasons, most of which I won't cover. In short, though:
It sets itself apart from phone/tablet AR by being hands-free, enabling a field user to actually work on something while they're wearing it. It's also providing full 3D content, whereas a phone/tablet will always be 2.5D (3D content delivered via a 2D screen).
It sets itself apart from head-mounted AR devices (e.g. Google Glass, Realwear) in its ability to merge 3D content into the environment and enable a user to interact with it, versus just being a heads-up 2D display with no real integration to the environment.
Each device has its place for certain use cases. If the needs are more limited, there's no reason to get the most advanced device. If I'm only running email and Word, I don't need a gaming computer. For example, a phone/tablet running an AR app is great if you just want to visualize a model overlaid on your environment, but breaks down the moment you want to actually build or repair something with your hands with virtual guidance. An AR headset is great if you just want to do remote assist phone calls, but breaks down the moment that remote user wants to annotate your environment in 3D to help you with a task.
Your question about demand/familiarity is a very fair one. The public knowledge of MR devices and their use is still very limited. The average construction customer I go chat with still isn't aware of it and, if they are, they probably have misconceptions.
Seeing more AR capabilities (enterprise and consumer) helps to raise all the boats, so to speak. But it's definitely on us to continue to educate on what HoloLens brings to the table, hence things like this AMA.
Lots of questions! Don't feel you have to answer them all.
1) Do you think AR will ever hit the mainstream (much like VR nearly has with the Quest 2)? If so, what do you think needs to improve the most first (cost, weight, FOV, software support)? Or do you think it will remain mostly for commercial applications? With smartphones in our pockets and smartwatches on our wrists, what purpose does AR have for a consumer?
2) What is the reasoning behind having the batteries and processing components within the headset, rather than external (like the Magic Leap One)?
3) Have you ever looked into haptics, such as Facebook's Tasbi prototype? Do you consider haptics to be an important part of AR in the future?
4) What led you to a career in XR? Where did you start and how did you get there? What does your day-to-day job entail? I'm 16 and would love to work with XR in the future!
5) As someone who has undoubtedly tried both, do you think the hardhat version is more comfortable than the standard HoloLens 2? It looks like there might be more support and better weight distribution!
6) What happened to Minecraft on the HoloLens? I remember seeing the tech demo video and it looked amazing, but it never became available...
I'll drop a few comments here as well. I'm not reading Jordan's answer so as not to bias mine. So who knows, maybe they'll be similar. Or maybe Jordan will be wrong.
- I think AR *is* starting to hit the mainstream for some pockets of business. I'd argue that even VR isn't mainstream with consumers yet (lots of people bought Xbox, PS5 and Switch during lockdown, but VR gear is still more of a niche thing). A lot of things need to happen concurrently for consumers to be willing to take the leap of faith on AR. But based on nearly a decade working on MR (and a bit less working on VR), for either of these technologies to become a consumer product, mainstream like phones or computers, we'll need utility from these devices beyond just games. We'll need these devices to weave themselves into our lives the way the phone has woven itself into our lives (and the PC before it). At some point all the tech specs will be good enough: FOV will be good enough for everyone (100 degrees? 120 degrees?), batteries will be good enough (my iPhone 12 Pro Max doesn't last a day but I spent $1400 anyway). When AR tech has the potential to "fade into the background" and the experiences it facilitates are varied and useful and available under a bunch of business models (free, fee, freemium), that's when we'll see widespread adoption. I think Microsoft is well on the road to that future. I think we'll have at least one formidable competitor, and that's great, because good competition keeps you humble and hungry. But success requires tons of R&D, lots of smart engineers and a CFO who has long-term patience (as ours does, coupled with high expectations of near-term execution). In the time I've worked on HoloLens I've seen at least two dozen Kickstarter-ish startups promise all manner of AR magic, and none delivered. Like cloud platforms, AR platforms take a lot of investment.
- Tons of user research. We experimented early on with decoupling the displays from compute/power. Conceptually, people thought it was a good idea (a small thing on your head), but in the real world it wasn't a good experience (a cable running down your back, compute attached to your belt that would unclip and fall off). There are scenarios where decoupling makes sense. And we now have 5 years of market data, from two generations of the device, and the most units shipped of any platform, and our customers tell us that being untethered is part of what makes HoloLens compelling.
- Yes, lots of research into haptics. It's a cool space. I don't think I can answer for Microsoft as to whether this is important for the future of AR, as I think you could find different opinions across the company. In the near term I personally don't see haptics as being critical to the success of wider adoption by businesses. There are many things I'd prioritize to secure more commercial sales before working on integration of, say, a haptics glove or shirt.
- Chance. I'd been a product manager for 15 years when I got a call from Lorraine Bardeen asking me to chat about a job. She couldn't tell me what the job entailed or what product I'd be working on, other than that I'd be using my product management and strategy skills. I was intrigued. I'd been at Microsoft 3 years, had led the most recent Windows CE product release, and was thinking about my next move. In Microsoft parlance, I did a "loop" with people Lorraine worked with (Darren Bennet, who now runs Design for Microsoft Guides and Remote Assistance; Todd Omotani, who is now the SVP of Design at Fisker Automotive; LaSean Smith, who is now leading Inspirational Shopping at Amazon; Jorg Neumann, who today leads Microsoft Flight Simulator; and Kudo Tsunoda, then a CVP in Xbox). At the end of the loop, I still had no idea what the job or product entailed (I guessed it might be something about an advertising platform for Xbox), but I knew I wanted to work with and for these people. They were unlike any I'd ever come across in software. The day I started, I was shown a bunch of videos of the product and the experiences (a very early version of HoloTour, HoloSkype and Young Conker) and said "this is cool, but how much of it is real?" The reply: "all of it, your demos are tomorrow." They took a gamble on me (as most had come from consumer and gaming, and I had a lot of embedded and commercial experience), and what followed were the best years of my professional life. A lot of what looks like strategic trajectory in my career has in fact been preparation and readiness coupled with generous helpings of lunch [edit: luck.] I was in the right place at the right time and knew the right people to get a shot at being on the HoloLens team. And I was good enough to get on the team and not get cut.
So do everything you can to be prepared, read widely, bring unique, thoughtful and broad perspectives to the table, bring new voices to the conversation, and hope that luck finds you when you're prepared to meet it.
- I prefer the XR10, as I like the hardhat suspension and will trade the extra weight for the comfort.
- Man, the Minecraft demos were great. Here's the thing, though. HoloLens is a $3500 computer focused on business scenarios. For the Minecraft team, that means the addressable market for them is very, very small. But the cost to develop and maintain a version of Minecraft for HoloLens is probably the same as for any platform. Right now, the economics don't make sense for them. It's an amazing game to play on HoloLens and I expect that when MR is mainstream, Minecraft will be one of the first experiences you play. That team knows so much about what makes a good mixed reality experience.
And Jordan's probably right, I'm approaching these answers like George RR Martin. I gotta turn off /verbose. Thanks for being patient with me as I try to write less ;-)
"A lot of what looks like strategic trajectory in my career has in fact been preparation and readiness coupled with generous helpings of lunch."
The secret is out guys -- pile up your plates!
well caught, luck not lunch. Though I am not a small guy, so indeed, it's entirely plausible that my success could be correlated to generous helpings of lunch. For posterity's sake I'll correct it above but full credit to you, that gave me a big smile at the end of the day.
I thought it was on purpose. You certainly can't discount the career benefit of networking (read: schmoozing) done during 90-minute Studio-C lobby lunches.
I love long questions. I suspect that D'Arcy will also want to reply to some of these.
Note: I have zero insight into anything happening at these tech companies. This is my postulation.
- Yes. I think it's a given at this point. You have companies coming at it from the B2B side (Microsoft, Magic Leap, Google, Trimble, etc.) and companies (rumored to be) coming at it from the consumer side (Apple, Facebook, Magic Leap, etc.). The former is further along (publicly), but the latter is coming quick. Regardless of the end-customer they're building it for, there's a lot of money getting thrown at the technology. That's not the big tech companies taking a gamble that this will be the next computing platform; it's the big tech companies telling you it will be. Main limitations today are cost, size, and battery life. I suspect that wireless tethering to an external computing device (cloud, phone in your pocket, etc.) will be the breakthrough.
- I'll let D'Arcy touch on this for the HoloLens itself. For Trimble, with our focus on heavy industry, any dangling wire is a safety hazard from a catch/trip perspective as well as intrinsic (explosive) safety. Also, after wearing a HoloLens, it's just super annoying to wear a tethered device, to be completely frank.
- I've gotten some cool demos of haptic tech at CES in the past. I do think it has its place in AR/MR in the future. It's the "missing sense" today, so to speak. The really interesting thing about haptics, though, is that it's more than just touch. You can simulate the feeling of "touching" something through things like spatial audio, tactile UI, and animation. For example, if you click a holographic button in a HoloLens with an outstretched finger, you get a very satisfying spatial audio "click" sound, as well as the button clicking in and out. I couldn't physically feel it on my finger, but it's still very tactile.
- I grew up in a construction family and went to school for geomatics engineering with a focus on photogrammetry and computer vision. This, for me, was mostly driven by an interest in things that were "spatial"; GPS, maps, 3D images, etc. The idea of teaching computers to see the world like we do. I came to Trimble (a leader in 3D everything) and just so happened to get lucky and get involved with a project we did with Google Tango back in 2014. When we signed up with Microsoft on the HoloLens project I hopped over in an engineering / product management capacity. From there I've grown into more of a management role, but still love getting my hands dirty on the technical stuff. For me, the desire was always to learn something fundamental (like computer vision, or mapping, or computer science) but then find a great way to apply it to solve real world problems. That's XR to me.
- I think anyone who wears one for a long period of time will say that they prefer the other. Grass is greener. I'll take a HL2 all day!
- I'll let D'Arcy take this one!
You mentioned intrinsic safety for explosive environments. Is there already an IECEx-labelled version, or a plan to make one? That would be very interesting, but it seems like the processing power requirements are too high for intrinsic safety protection, and the other protection methods might add too much weight to be practical?
The XR10 is already UL C1DII intrinsically safe. Microsoft just announced a 'HoloLens 2 Industrial Version' that is also C1DII.
We won't see anything beyond that (e.g. IECEx / ATEX) in this generation. The requirements are too high for ingress protection and not something we can retrofit.
Can it run linux?
I want to record a demo sequence in VR and AR to introduce my potential clients and new users. What do you recommend to create my very own personal demos in VR with the HoloLens?
Also, what do you think of Spectar and VisualLive BIM solutions?
Thanks and keep up the great work.
Are you looking to build your own app, or use someone else's app and record it? What are you trying to demo?
We know the guys at Spectar and VisualLive very well. They're doing great work and helping to push the technology out into the AEC industry which is, historically, very much a tech laggard. They're competitors to us, but the truth is that AR/MR/VR is such a new industry that all the boats will rise together. The penetration of the tech into AEC is so low today; there's plenty to go around.
Thanks for the reply. I want to sell, support, and develop with any and all companies mentioned. Just learned about Trimble and have requested a callback. I want to put a HoloLens headset on a new user's hardhat and press play on a prerecorded audio/VR walkthrough of the operation of the headset, plus a mini virtual/real world to walk around and interact with to simulate a construction site build-out and/or a manufacturing plant floor.
My market is Ontario, Canada. Clients are in the automotive industry and construction.
I'm new to reddit (go ahead, Jordan, crack the boomer joke you've been saving) but I assume you can send DMs on this platform. Drop me a line and I'll connect you with Microsoft's specialist in Toronto for an initial discussion. And thanks for checking out the AMA.
What will the upcoming HoloLens 3 improve upon compared to HoloLens 2?
Will the headset be lighter? More powerful? With longer battery life?
Will the FOV and resolution be greater? Refresh rate higher?
Will the method of image projection change?