Immersive Mixed Reality, or MR, technologies bring virtual objects into the real world, or blend a physical environment with a digital one.
MR technologies, including Augmented Reality (AR) and Virtual Reality (VR), are poised to grow considerably over the next few years. In fact, more manufacturers are leveraging the technology as the Augmented Reality market is expected to reach billions of dollars in revenue by 2023.
How will companies use AR and VR?
When a manufacturer has multiple factories, AR can help a small number of engineers see inside many locations, and perhaps maintain a sprawling setup.
With a head-mounted display, design engineers can view and evaluate 3D representations of models before they are built: augmented-reality before reality.
A company called GridRaster uses cloud-based, remote augmentation to bring design models to mobile devices and smartglasses.
We spoke with GridRaster co-founder Dijam Panigrah about the emerging use of AR and VR in design.
(A note to our readers: Have you used mixed-reality in your design process? We want to hear from you. Share your questions and comments in the comments section below.)

Tech Briefs: What is motivating companies to adopt more VR/AR, do you think?
Dijam Panigrah: A recent study conducted by GridRaster showed that 39% of respondents plan to implement AR/VR technologies over the next 12 months. Many industries are dealing with staffing shortfalls as positive COVID-19 cases and health concerns keep workers home. Companies quickly invested in safety measures to return plant production to pre-virus levels, but a rise in new cases has threatened setbacks for carmakers.
The survey revealed that for 71% of manufacturing executives, COVID-19 has either moved them to start planning or to "fast-track" plans for AR/VR implementations. Sixty-eight percent of executives are planning to use AR/VR for virtual design. For those manufacturers that have already implemented technologies such as AR/VR, more than a third (38%) say they're seeing a 10% to 15% increase in savings. Seventy-one percent are leveraging AR/VR for supplemental virtual labor on production lines, and another 65% are using it for virtual customer service visits.
"We completely believe that AR and VR are going to completely transform the way we do our work, how we live, how we interact with the world, how we entertain ourselves, how we communicate," says Panigrah in the video below, from 2019, "but there is a long way to go."
Tech Briefs: What were some of the concerns shown in the survey results?
Dijam Panigrah: Scalability is a major concern for many manufacturers. Seventy-nine percent point to scalability as a primary concern as to why they haven’t implemented AR/VR yet, and for those that have made implementations 52% said they need to move their AR/VR to the cloud for additional scalability.
Tech Briefs: What is the biggest challenge in getting designers to adopt AR/VR?
Dijam Panigrah: Device limitations severely restrict the capability of existing AR/VR systems to generate and work with very fine meshes, large-polygon-count models, and point clouds. That capability is essential for collocating and precisely fusing virtual objects on top of physical objects in the real world, with its complex surfaces and varied lighting and environments.
Manufacturers and engineers are overcoming this great challenge by partnering with providers of cloud-based (or remote server based) AR/VR platforms powered by distributed cloud architecture and 3D vision-based AI. These AR/VR cloud platforms provide the desired performance and scalability to drive innovation in the industry at speed and scale.
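The trade-off Panigrah describes can be sketched in a few lines: if a model's polygon count exceeds what the headset can render on-device, stream frames from a remote GPU instead of decimating the mesh. This is an illustrative sketch, not GridRaster's API; the device budgets, `Model` class, and `plan_rendering` helper are all hypothetical, and real polygon limits vary by device and renderer.

```python
# Hypothetical sketch of the offload decision described above. The budget
# numbers below are illustrative placeholders, not vendor specifications.

import math
from dataclasses import dataclass

# Rough on-device polygon budgets (illustrative figures only)
DEVICE_POLY_BUDGET = {
    "hololens2": 100_000,
    "quest2": 350_000,
    "mobile_phone": 250_000,
}

@dataclass
class Model:
    name: str
    polygons: int

def plan_rendering(model: Model, device: str) -> str:
    """Decide where to render: on-device if the mesh fits, otherwise remote."""
    budget = DEVICE_POLY_BUDGET[device]
    if model.polygons <= budget:
        return "on-device"
    # Too heavy for the headset: offload the heavy rendering to a remote
    # (edge/cloud) GPU rather than decimating the model and losing fidelity.
    return "remote-gpu"

engine_cad = Model("turbine_assembly", polygons=48_000_000)
print(plan_rendering(engine_cad, "hololens2"))  # remote-gpu
```

The point of the sketch is that the decision is per model and per device, which is why the decimate-for-every-device cycle described later in the interview becomes so expensive.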
Tech Briefs: What are the drawbacks of cloud-based AR? Are there security issues? What must users do to ensure design-data security when it is not stored locally?
Dijam Panigrah: Cloud-based AR is still new and while there are security measures in place for the technology today, it is sometimes a challenge to have an enterprise organization modify legacy philosophies and migrate from on-premise to a cloud environment. Once these organizations realize the scalability and technological potential of the cloud, this migration is imminent.
Tech Briefs: What is the most exciting, specific way that you have seen AR and VR support a design process?
Dijam Panigrah: Enterprise-grade, high-quality AR/VR platforms require both performance and scale. However, existing systems such as the Microsoft HoloLens are severely limited in both aspects. Most enterprises have a rich repository of complex 3D CAD/CAM models created over the years. These 3D models vary in complexity (poly count, hierarchy, level of detail, etc.), making them difficult to run within on-premise virtual platform environments restricted by device limitations. This forces developers to decimate the content (3D models/scenes) to fit different mobile devices, spending months in the process and sacrificing the overall quality of the experience.
As these virtual environments become richer and larger, the problem continues to compound. This cycle is repeated for each of the different AR/VR hardware platforms, making it difficult for any enterprise to move from experiments and pilots to full scale deployable solutions, thus stunting the speed of innovation and effectiveness.
Manufacturers and engineers today are experiencing the next wave of technology innovation that will fundamentally alter the way they operate. This transformation is primarily driven by the merging of the digital and physical worlds to create a better, smarter, and more efficient way of operating. Immersive technologies such as AR/VR are playing a pivotal role in this transformation. The organizations that take a leadership role will be the ones that not only leverage these technologies, but also partner with the right technology provider to scale appropriately without stunting technological growth.
What do you think? Are you considering using AR and VR in your designs? Have you incorporated mixed-reality into the design process? Share your questions and comments below.
Transcript
00:00:11 Okay, everybody, welcome to the Thursday, November 21st, 2019 session in our series about edge computing. I'm Richard Dasher; I direct the US-Asia Technology Management Center at Stanford. We're very happy to produce this series, and we want to say thanks to the member companies in our industry affiliates program for providing the support that lets us keep going. Thanks to our member companies.
00:00:39 We've got some refreshments out back, and at the end of the session we hope you'll stay around, meet our speakers, and meet each other after the formal part of the program. Today we have another use case for edge computing. We had talked about some enabling technologies in several of the sessions so far. Last week, remember, we had LeapMind from Tokyo by live video conference,
00:01:06 and they have a software stack to enable edge computing in a wide variety of applications. We had also looked at autonomous vehicles, and at medical data, where there is a totally different motivation for using edge computing as opposed to cloud computing. So today we're going to talk about augmented reality and virtual reality, and I'm delighted that we've got
00:01:34 exactly the right people for this presentation. Our first speaker is going to be Dijam Panigrah, co-founder and CEO of GridRaster. Dijam has rich experience of over 16 years in building, deploying, and taking to market various mobile and network-based products. Before founding GridRaster he worked at Texas Instruments, Freescale Semiconductor, and
00:02:03 HCL. He's leading business development, product marketing, and partnerships at GridRaster, and he'll tell us more about what GridRaster does. Next, closest to me, is Don Angelo, "DJ," senior director on the emerging technologies team at Charter Communications. As such, he's providing strategic leadership and direction on
00:02:30 service delivery innovation over the network for new use cases such as cloud gaming, AR, and VR. He led the product team for Time Warner Cable's smart home service, and he's had various engineering roles at Emerson, Eaton, and Bosch before joining the cable broadband industry. So as you see, this covers a couple of pieces that we really needed to have covered in the series this fall:
00:03:00 not only this use case, but also the standpoint of the telecom company, the communications side, and how we're going to deliver this over the network. So without further ado, let me turn the microphone over to Dijam and ask you to give us your comments. [Panigrah:] Thank you. Just before I start: how many of you have tried augmented reality and virtual reality?
00:03:35 Just a show of hands. Okay. And how many of you wish that the device you're trying were something as easy as a pair of glasses, that you could just pick up and wear, and actually see the immersiveness that you see in the bulky ones right now? So I think there's a long way before we get there, and there's a big role that the cloud, and specifically edge
00:04:08 computing, will play to make that a reality. A little bit about me, and the genesis of the company GridRaster: how did we come to form this? We have been working in the space of networks and mobile for the last 15 years, through the first dual-core processors and some of the defining technologies in 3G and 4G, not only from a deep
00:04:42 engineering point of view; over time I moved more into product and have taken some of these products to market across different geographies. In our previous roles, we started getting requirements for augmented reality and virtual reality almost five or six years back, when we were talking about
00:05:08 enabling the whole 5G network. At that time, there were other technologies evolving that would ultimately converge to make things happen the way they are shaping up today. One of them was the GPU: the advancement in GPUs was making it possible to do the
00:05:34 kind of compute or processing that you specifically require for augmented reality and virtual reality. Also, with the advent of 4G LTE and the outlook towards 5G, the data pipes were beginning to become thicker and thicker. But the problem we realized was that attempting to have those experiences on a mobile device, which is
00:06:01 in a confined form factor, in the sense that you can't put active coolant in there and you can only cram so much GPU into that kind of device, was a big deterrent when it came to proliferating these experiences across mainstream use cases, or to enabling much sleeker designs which people can
00:06:29 actually wear. From the work we were doing, we got some insights around how we can make use of a remote computer, which in this case we started calling edge compute, so that you are able to offload the compute-intensive tasks onto a remote server and still deliver very high-end experiences on a sleek, untethered
00:06:54 mobile device. With those insights we started the company GridRaster, and we have come a long way. But before I delve into what exactly we do at GridRaster, I want to touch a little on edge computing, because "edge computing" has become a loaded term, in the sense that people conveniently use it based on what suits them.
00:07:19 Not that anybody is wrong there, but I want to set the context: when I'm talking of edge computing, I'm talking in a certain context. Irrespective of whom you ask, one thing is common: everybody would agree that moving the compute closer to the user, or to the place where the data is getting generated, is what is
00:07:44 termed edge computing. As for where that compute sits: if you go to a device manufacturer, they would say the edge is the device edge, because they're putting sensors and compute on the device so that you can do a lot of the processing and analytics on the device itself. If you go to the network players, they will talk about the
00:08:08 network edge, because they are putting compute on the base station, and they will term that as their edge. But when we talk of edge computing in our context, it is the first hop from the end device. The first hop, onto which we'll be offloading the compute from the device, is what we as a company, and in general the community, believe in.
00:08:41 Why is this? What is the need for this whole edge computing? The cloud was working fine; what happened? There were fundamentally two shifts happening. One is the data, and not just the data the way we know it: for the first time in history, we started capturing real-world data in such a massive amount that it was just
00:09:12 impossible to put it onto the network, take it all the way to the cloud, and process it there. For example, a self-driving car generates almost 4 TB of data in a day. Are you going to put that onto the cloud? That's not going to happen; it's not going to be supported. If you actually go into a smart factory, where
00:09:39 the factories are designed around Industry 4.0, you are looking at three petabytes of data in a smart-factory scenario. That's humongous. The second aspect is that not only are you generating this massive amount of data, but the actions, or the value that you're trying to derive from this
00:10:03 data, have to happen in real time; otherwise the value goes away. In an autonomous vehicle, if I want to detect that there's a stop sign, the car has to stop; the decision has to be taken right there. You can't just send it back to the cloud and take a decision; that doesn't work. In the case of augmented reality and virtual reality,
00:10:21 you're talking about moving your head and the whole scene being re-rendered within sub-20 milliseconds. If you're trying to do that, you just can't send it to the centralized cloud, process it, render it, and get it back. That's not going to happen.
00:10:42 So the massive amount of data being put onto the network by all the sensors, the IoT, all the cameras, was one of the reasons that drove this change, and the second was the requirement to process the data in real time with minimal, ultra-low latency. Those are what drove this whole movement towards edge computing.
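The arithmetic behind these numbers is easy to check. A rough back-of-the-envelope sketch, using the speaker's round figures (4 TB per car per day, 3 PB for a smart factory) and decimal units:

```python
# Sanity-check the data-volume claims above: what sustained uplink would
# shipping everything to a central cloud require, and how long would a
# bulk transfer take? Figures are the speaker's round numbers.

def sustained_mbps(bytes_per_day: float) -> float:
    """Average uplink, in megabits per second, to move a daily volume."""
    bits = bytes_per_day * 8
    return bits / (24 * 3600) / 1e6

def days_to_upload(total_bytes: float, link_bps: float) -> float:
    """Days needed to move a bulk volume over a given link speed."""
    return total_bytes * 8 / link_bps / 86400

car = sustained_mbps(4e12)  # 4 TB/day from one self-driving car
print(f"one car: {car:.0f} Mbps sustained")            # ~370 Mbps

factory = days_to_upload(3e15, 10e9)  # 3 PB over a 10 Gbps link
print(f"smart factory: {factory:.1f} days to upload")  # ~27.8 days
```

Even a single car saturates a sizable uplink around the clock, which is the speaker's point: this data has to be processed near where it is generated.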
00:11:09 So what are you going to see now? Because of all this edge computing, the network and the whole compute infrastructure are going to evolve in a certain way, and what you're going to see is a hierarchy of compute. All the compute nodes will have relevance depending on the use case and the
00:11:34 task at hand. The cloud is still going to play a major role, but the compute that is on the device, the compute that is on the base station, and the compute that is in the edge data centers being built out will each have a role, and each of them is going
00:11:56 to be used in a very specific way depending on the use case. Mostly it will be a question of whether you are trying to address latency or cost; it's going to be a trade-off between the two. Based on those use cases, you orchestrate, you build
00:12:23 efficiency into the network, and then deliver different experiences. I think some of the benefits are pretty self-evident. You're moving the compute closer to the user, so you're getting a much faster response. You're avoiding the data hauling over the core network, which reduces network congestion, and that is directly related
00:12:53 to overall cost reduction, because the transport cost of that amount of data is going to be huge. Secondly, there is going to be an improvement in data security. Take the example of a hospital: there is certain data you never want to put out onto the cloud. There is patient data that needs to
00:13:17 be processed at the edge, right at the hospital, but there are some inferences or certain learnings that can actually go to the cloud. So there's a whole structure that's going to be built out which gives you better data security and compliance with the regulations that are in place. And of course, better quality of service:
00:13:36 you'll get much more predictable latency, because you're minimizing the number of hops required to do things in the network. So that's going to be a huge asset. So what is it that we are doing? How do this whole edge computing story and AR/VR move forward?
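The latency-versus-cost trade-off in the hierarchy of compute described above can be sketched as a simple tier chooser: pick the cheapest tier that still meets the task's latency budget. The tiers, round-trip times, and relative costs below are invented for illustration, not measurements:

```python
# Illustrative model of the "hierarchy of compute" trade-off: orchestrate
# each task onto the cheapest tier whose round trip fits its latency
# budget. All numbers are made up to show the shape of the decision.

TIERS = [
    # (name, typical round-trip ms, relative cost per unit of compute)
    ("device", 0,  5.0),   # no network hop, but scarce, power-limited compute
    ("edge",   10, 2.0),   # first hop from the end device
    ("cloud",  80, 1.0),   # centralized, cheapest at scale
]

def place_task(latency_budget_ms: float) -> str:
    """Cheapest tier whose round-trip time fits within the latency budget."""
    feasible = [(cost, name) for name, rtt, cost in TIERS
                if rtt <= latency_budget_ms]
    if not feasible:
        raise ValueError("no tier meets the budget")
    return min(feasible)[1]

print(place_task(20))   # head-tracked rendering, tight budget -> "edge"
print(place_task(200))  # loose ~200 ms budget                 -> "cloud"
```

A real orchestrator would also weigh bandwidth, data-security constraints, and node load, but the latency/cost tension is the core of the decision the speaker describes.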
00:13:57 Okay, so at GridRaster, what we are building is a cloud platform which deeply uses this whole concept, the benefits of edge computing and 3D-vision-based AI, to give you a very highly scalable platform, where you can run any number of use cases across different locations, you can scale the number of users, you can scale your
00:14:26 entire virtual world. And I'm somebody who is completely sold on this. We completely believe that AR and VR are going to completely transform the way we do our work, how we live, the way we interact with the world, how we entertain ourselves, how we communicate. It's going to be a complete change, and there is a long way
00:14:55 to go. We have seen the evolution of mixed reality; I just want to touch on how things have emerged, where we are, and where we will be, maybe sooner than later. At the beginning you saw very rudimentary experiences, maybe Pokémon Go, where you were overlaying certain
00:15:24 specific things on the real world. You may have seen the IKEA app, which allows you to take your house, drag your furniture in, place it and replace it, so that you can plan your whole setting, how you want to design it. But whatever was done, it was
00:15:49 pretty rudimentary, and most of it was on mobile. But it was an excellent way for people to get introduced to the concepts. Today we have come further than that. We can create some sort of telepresence, where you can teleport to a different world and feel that you are there. You will also see that we are able to create some
00:16:18 kinds of visualizations: in industry, the CAD models they were initially working on at workstations, you are now actually able to have those models right in front of you in full high fidelity, and just walk around them as if they were physically there in front of you. But whatever we have done till now,
00:16:40 there is still a clear demarcation: here is the virtual world and here is the real world. What we are moving towards is where that distinction between the virtual and the real will go away, where the virtual world will be overlaid on the physical
00:17:06 world such that it is almost indistinguishable. From a technical aspect, we are talking about reaching 16K resolution at 120 frames per second, and the ability to overlay the virtual at absolute millimeter precision. We are addressing all those technical aspects, but given those
00:17:32 technical requirements, it's not going to be possible using the centralized cloud. And we see that ultimately all devices will converge: the same device will provide you the AR experience, and possibly VR will be a mode within it, where you switch to a VR mode and have VR experiences. While we
00:18:00 believe that's going to happen, this will only become pervasive and mainstream, as I mentioned before, once those devices are mobile and untethered. But to cram in that kind of compute and battery today: it's almost 100x or
00:18:24 higher compute that will be required to reach there than what we have on the device today. So you basically have to figure out where that extra compute would come from, and that's where GridRaster comes in. Essentially, we have been able to build a platform that offloads all the compute-
00:18:54 intensive tasks: the heavy rendering, the 3D world reconstruction, and the 3D AI, where you're doing semantic analysis of the different objects in the scene and trying to find the relationship of each object with the others and with the physical world. All these compute-intensive tasks we have been able to
00:19:20 offload to the cloud, which in this case is the edge cloud. There are certain aspects even within the rendering that you can offload further, like global lighting. Global lighting has a much higher latency budget, so even if the cloud is 200 milliseconds away, the experience will still be okay
00:19:42 from a user's point of view. So we have been able to distribute these different compute tasks over different nodes and bring them together in such a way that, from a user's point of view, they are looking at absolutely fully immersive experiences, without the constraints of the device compute.
00:20:07 Apart from the fact that we are able to distribute the compute, there are many other technical challenges we need to address to make this experience possible. How do you deliver this on a wireless network, which is known to be pretty finicky? How do you decide which tasks get distributed where,
00:20:29 dynamically? How do you schedule those tasks? All those problems are what we are addressing at GridRaster, working with some of the large companies in telecom and large enterprises in aerospace and automotive to evolve the product, and we continue to do that. Going a little deeper into the architecture,
00:20:56 in terms of what we are doing: the core pieces are the rendering, the 3D AI, and the 3D world reconstruction. Having said that, these are the core technology, but in order to put it together in a way that is usable, there is a lot you need to do around it. The first thing we
00:21:22 have built beyond the core tech, as we productize the whole technology, is support for different kinds of devices. People are going to have different devices; you can't really control who uses which device. With our architecture, whether you want to use an Android device, a HoloLens, or
00:21:47 an Oculus Quest, you should be able to use it through a pretty easy API integration. Same thing on the content side: in the consumer domain, most of the content you would see today is Unity-based, but if you go to industry, you have different CATIA models, you have Autodesk, you have PTC Creo. So we are
00:22:13 providing APIs so that onboarding all these applications is extremely easy when somebody comes onto our platform. Third, from a deployment standpoint, what you see today is that many of these enterprises, just because of the whole data-security piece, do not want to deploy onto the public cloud,
00:22:36 which is open to others. So for us it was important to provide both deployment scenarios: whether they want to deploy on-prem or deploy on the cloud, we facilitate both. And the results that we have seen have been extremely good. For example, if you take a standalone Microsoft HoloLens
00:23:07 device, which is considered best in class right now when it comes to AR glasses: the maximum polygon count that you can actually run on those devices is 100k. So if you have something bigger than that, you have to optimize and decimate it; the right word is decimate, because you've got
00:23:28 to remove detail. And that's a huge problem for a lot of companies, particularly in the enterprise, because there is a reason why the models were created with that fidelity. But using our platform, with a single GPU, we can actually run models of up to 50-million-polygon complexity with our
00:23:56 architecture, and if you want to run 100- or 200-million-polygon models, all we've got to do is spawn a few more GPUs to enable those experiences. Sixty frames per second is still used as a yardstick, because most of the displays you'll see are 60 frames per second; that's a device constraint that we have to work with.
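The scaling claim above is simple ceiling arithmetic. A hypothetical sketch, using the speaker's figures (100k polygons on a standalone HoloLens, roughly 50 million per remote GPU):

```python
# Check the scaling arithmetic from the talk: how many remote GPUs does a
# given scene need, and how far over the standalone-headset budget is it?
# Both constants are the speaker's round figures, not measured limits.

import math

POLYS_PER_GPU = 50_000_000   # speaker's figure for one remote GPU
HOLOLENS_BUDGET = 100_000    # standalone HoloLens limit quoted above

def gpus_needed(polygons: int) -> int:
    """Remote GPUs required for a scene, at the quoted per-GPU capacity."""
    return math.ceil(polygons / POLYS_PER_GPU)

scene = 200_000_000
print(gpus_needed(scene))              # 4 GPUs for a 200M-polygon scene
print(scene // HOLOLENS_BUDGET)        # 2000x over the standalone budget
```

The 2000x gap between the standalone budget and a full-fidelity CAD scene is why decimation is so destructive, and why the remote-rendering approach appeals to enterprises with rich model repositories.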
00:24:21 Motion-to-photon latency: you want to ensure sub-20-millisecond motion-to-photon latency in augmented reality and virtual reality. The whole thing is: if I take an action and move my head, the scene has to move within sub-20 milliseconds from my motion to the photon in the view.
00:24:45 And today, even if your compute is sitting 40 milliseconds away on the network, we can still bring the perceptible delay, which is the motion-to-photon latency, down to sub-20 milliseconds, because we're using predictive algorithms, caching, and a whole bunch of different techniques.
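A minimal sketch of the "predictive" idea mentioned here: extrapolate head pose forward by the network delay, so the remote GPU renders the view the head will have when the frame arrives. Real systems use far more sophisticated filtering and reprojection; this linear `predict_yaw` helper is purely illustrative.

```python
# Illustrative head-pose prediction: linearly extrapolate yaw forward by
# the network round trip, hiding transport latency from the user. This is
# a toy; production systems combine IMU filtering with late reprojection.

def predict_yaw(yaw_now: float, yaw_prev: float,
                dt_ms: float, lookahead_ms: float) -> float:
    """Linearly extrapolate head yaw (degrees) ahead by the network delay."""
    velocity = (yaw_now - yaw_prev) / dt_ms   # degrees per millisecond
    return yaw_now + velocity * lookahead_ms

# Head turning at ~90 deg/s, sampled every 11 ms; server is 40 ms away.
prev, now = 10.0, 11.0
predicted = predict_yaw(now, prev, dt_ms=11.0, lookahead_ms=40.0)
print(round(predicted, 2))  # 14.64
```

Rendering at the predicted pose, then correcting with the latest sensor reading just before display, is what lets a 40 ms network feel like sub-20 ms motion-to-photon.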
00:25:09 3D spatial mapping: today the devices map at 1 frame per second; because we are not constrained on compute, we can do it at 60 frames per second, which allows us to create much more fine-grained meshes, which we can then use for more precise rendering. So we can overlay the virtual objects on the physical world
00:25:34 in a much more precise way, almost to a millimeter precision, than any of the current state-of-the-art devices can. So that's the technology and performance part of it. Where is it being used? It's awesome that we're getting the technology in place, but where is it actually being used? For most people, the entry point had always
00:26:07 been gaming like when it came to VR VR kind of was pretty much led by the whole gaming thing ok but when you come to a are actually it's primarily is being driven by the enterprise use cases and and the use case it's it's no more the use cases are no more in like mmm experimental phases actually getting deploy okay when I talk about so so most of the use case is like
00:26:36 On design and engineering: take car design specifically. Earlier, when you were designing a car, you would use foam models or clay models, creating those models, the textures, the overlays. Today that is almost entirely being replaced by AR. You can still start by building a physical form model,
00:27:04 but all the iterations, whether I change the dashboard or overlay a different grille or rearview mirror, you can now do almost in real time. A design cycle that used to take months to complete can now be done in days.
00:27:30 On the operations and manufacturing side, with AR glasses, level-one employees are out-performing level-three employees, so there is a huge jump in the productivity and efficiency with which they can work. Training has been an absolute success: all the employees who join Walmart today are mandated to go through VR training.
00:28:00 On consumer experiences: if you get an opportunity to go to certain retail stores, they have VR experiences where you can customize your car the way you want, get into VR, see the car you have configured, and even take it for a drive in the VR environment, to a mountain or the seaside, depending on what you fancy.
00:28:26 So that is where deployment is today. Automotive, aerospace, and industrial are picking up in a big way, which is why we are focusing there, but we are keeping an eye on the consumer use cases. We believe most people are going to be introduced to VR and AR through work,
00:28:50 and that will help propel consumer adoption. Certain future trends will further help the proliferation of the immersive medium. 5G is definitely one of the big pieces. We were just having a conversation before we came here: today the compute part is taken care of, but what about the bandwidth? 5G will unlock some of those problems.
00:29:17 You are going to have lower latency, better throughput, and much better use of the spectrum. As I said, enterprise use cases are going to propel much of the adoption of consumer AR/VR. We also believe there is big scope for peer-to-peer networks. Just think of it: with the devices in your home, you are almost creating your
00:29:44 own virtual cloud, so that any device in your house can use the compute of any other device that is sitting idle. Those things are also going to help the whole immersive concept in a big way, using compute that is just sitting idle. That is it from my side for now; I am happy to address any other questions, and happy to chat after the end of the session.
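The "home as a virtual cloud" idea described above, where idle devices pool their compute, can be sketched as a tiny scheduler that routes a task to the most capable idle device. This is only an illustrative sketch; the device names, capacity figures, and `pick_device` helper are assumptions, not anything GridRaster describes.

```python
# Sketch of a home "virtual cloud": route a render task to the most
# capable idle device. All names and numbers are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Device:
    name: str
    gflops: float   # rough compute capacity
    idle: bool

def pick_device(devices, min_gflops):
    """Return the most capable idle device that meets the task's needs."""
    candidates = [d for d in devices if d.idle and d.gflops >= min_gflops]
    return max(candidates, key=lambda d: d.gflops, default=None)

home = [
    Device("phone", 10, idle=False),       # in use, excluded
    Device("laptop", 50, idle=True),
    Device("game-console", 120, idle=True),
]

chosen = pick_device(home, min_gflops=40)
print(chosen.name)  # game-console
```

A real system would also have to handle devices joining and leaving, and fall back to a remote cloud when nothing local fits.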
00:30:14 Thank you very much. While you're standing here, before I change slides, I want to ask you: is one of the reasons AR and VR have not been deployed more widely a kind of user fatigue? There are multiple factors.
00:30:51 One of them, from a device standpoint, is the price point, which is a big deterrent. The fatigue part you are talking about has a lot to do with fidelity. The feeling of nausea when you are wearing an AR or VR headset has a lot to do with your body feeling as if it were being poisoned;
00:31:14 it is the exact reaction you have to food poisoning: you start feeling nausea. That part comes from the fidelity not matching the real world. Matching it is mostly about resolution, something like 16K at 90 to 120 frames per second. As long as the mind is not able to detect that you are being gamed, as long as it cannot distinguish the real world from the virtual world, you are not going to feel the fatigue.
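The fidelity target quoted above implies enormous raw pixel rates. A quick back-of-the-envelope calculation (assuming a 15360x8640 definition of "16K" and 24-bit color, both assumptions on my part) shows why remote rendering at that fidelity would need heavy compression or foveation:

```python
# Rough arithmetic behind a "16K at 90-120 fps" fidelity target.
# Resolution and color depth are illustrative assumptions.

width, height = 15360, 8640      # one common definition of 16K
bits_per_pixel = 24              # 8-bit RGB, no alpha

def raw_gbps(fps):
    """Uncompressed video rate in Gbit/s at a given frame rate."""
    return width * height * bits_per_pixel * fps / 1e9

for fps in (90, 120):
    print(f"{fps} fps -> {raw_gbps(fps):,.0f} Gbit/s uncompressed")
```

At roughly 287 to 382 Gbit/s uncompressed, even multi-gigabit links require compression ratios in the hundreds, which is part of why display fidelity and network capacity are discussed together throughout this panel.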
00:31:45 Great, thanks. Why don't you have a seat; we will ask you for your comments, and we will all have a discussion. Let's put that off until after the next set of comments. Thank you.
00:32:05 When I was a research engineer back at Bosch, I used to come here often to attend similar talks, so I'm actually quite chuffed to be here. A brief introduction to Charter, and I promise I'll try to make it short: Charter is a large cable and mass-media company; we go to market in our footprint under the name Spectrum.
00:32:45 We have about 28 million customers across 41 states in the US, and we have 98,000 employees. The key points I want to hit on today take a slightly different point of view, not contrary but slightly different, from Dijam's. One: introducing immersive as, potentially, a form of entertainment.
00:33:16 Two: there are some technical challenges, and Dijam touched on most of them; the cost of compute does need to be resolved, and we, the service providers, can help. It is a hard problem, but it can be solved. And finally, our simple equation: bandwidth plus low latency plus compute should give us superior experiences.
00:33:45 So, introducing immersive as an extension of video. If you look at streaming video today, there is no interactivity; all the content is pre-loaded. Bandwidth, with HD and even 4K, is 15 to 25 megabits per second. If you have too much latency and too much error on the network, you will obviously see some artifacts, but streaming is generally very tolerant of latency.
00:34:13 It does not involve the whole motion-to-photon loop, so it is far more tolerant, and the only special compute at the client display is the hardware decoder that is in our phones, or anywhere else for that matter. Cloud gaming is where content starts getting rendered on the fly: as you mash your game controller, your input
00:34:41 goes up to the cloud, which renders frames and brings them back down, and that loop needs to happen at a rate at which you do not perceive the lag. There are many, many articles written on the subject; we are going with 50 milliseconds of latency. You might find one place that says 80 milliseconds and another
00:35:05 that says 30 milliseconds, but based on some of the field trials we have run, we think 50 milliseconds is a pretty good estimate of the round-trip loop.
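The ~50 ms round-trip figure quoted above is best read as a budget that every stage of the cloud-rendering loop must share. The stage names and per-stage numbers below are illustrative assumptions, not measurements from the talk; the sketch just shows how such a budget gets checked:

```python
# Check an end-to-end cloud-gaming loop against a ~50 ms budget.
# Stage breakdown and numbers are illustrative assumptions.

budget_ms = 50

stages = {
    "input capture + uplink": 8,
    "queueing + game logic": 6,
    "GPU render": 12,
    "encode": 5,
    "downlink": 8,
    "decode + display scanout": 9,
}

total = sum(stages.values())
print(f"total {total} ms, slack {budget_ms - total} ms")
assert total <= budget_ms, "loop would be perceptible"
```

The point of the exercise: shaving a few milliseconds off any one stage (for example by moving the renderer to a closer edge site) directly buys slack for the others.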
00:35:30 And then of course there is compute involved. There is an interesting statistic that the content keeps outpacing the devices, so developers face a unique challenge: either they develop for all the devices and basically deliver cartoonish content, or they develop really bespoke content that is then only available on a full-fledged PC with, for example, a Vive Pro HMD or an Oculus Rift. And last but not least, we do actually believe in this future.
00:35:58 I am going to go back and add my two cents to one of the things discussed toward the end between Richard and Dijam. There is of course fatigue, and of course the lack of retinal-grade resolution, but one other thing that has been said about AR and VR is that it can be an isolating experience, especially when you look at it as a form of entertainment,
00:36:19 because when you want to be entertained, you sometimes want to be with friends. You can argue that you can now do this in a virtual living room with other friends who are far away, so in that sense it is not isolating; but with regard to your own family it might feel really weird. Again, motion-to-photon latency kicks in, though things like light fields sometimes
00:36:44 are not about latency at all; they are about pure bandwidth, rendering fifty to a hundred different perspectives all at the same time, so that whichever perspective you take in looking into the light field, you get your view without going through the motion-to-photon latency loop.
00:37:11 Some challenges related to virtual reality and augmented reality as we see them, and Dijam has been through most of this: the high cost of the PCs needed to deliver what I would call premium experiences. You have the download-and-go 360-degree-video kind of content, but I think people who get introduced to VR that way do not really enjoy it. The second thing is all
00:37:38 this fragmented content and the battles that go on in the market. For augmented reality, Dijam pointed to both of these as well, plus the high price of HMDs. So here is what I would call the virtuous circle. The problem today, as we touched on earlier, is the price point: you have to buy a $2,000 computer, then you
00:38:06 have to buy a $400 headset; I'd have to say $400 now, with the Rift S. So you are looking at a fully loaded price of about $2,500 to access a premium experience, or maybe $2,000 if you get a good deal. The user's dilemma is: should I invest in this? I have limited content and a high price of HMD.
00:38:32 At the same time, the developers ask: what is the point of making AR/VR content if I cannot monetize it? And the HMD makers say: I really need scale so that I can bring down the price points of my HMDs. What needs to happen in this market is that the virtuous growth cycle gets stimulated.
00:39:00 How can network service providers help? We think that if we embed compute in the network, we may be able to jumpstart this virtuous cycle. Once you are able to take some of that compute out of the device, you can deliver those premium experiences on low-power HMDs that do not heat up and that have good form factors. That essentially leads to more people buying,
00:39:34 developers earning more money, all the tools improving, and content getting less fragmented as the industry matures. A long time back, I stole a statement from Sun Microsystems' John Gage, "the network is the computer," and I am adding my own twist to it: the network will be the computer again.
00:40:10 To the left are proven artifacts showing that GPUs can be migrated to the cloud to deliver cloud gaming. You have all heard of Stadia, of course; some of the early reviews have been a little rough. We invested in 2018 in a company called Blade, maker of Shadow, which does something similar but different in many ways; essentially it is a cloud-gaming solution, and it has been proven to work under certain constraints. It is indeed a proximity-based solution,
00:40:40 and that is really one of the constraints: you have to think about some kind of an edge, and that edge is not your local last mile; it can be farther away. All of this will be followed by AR/VR. As this streaming content develops, service providers may find the business case profitable to start moving compute from national data centers to
00:41:09 regional data centers and then down to the edge data centers. And last but not least, as you start seeing more and more light-field displays and clients that demand more and more from the network, bandwidth, latency, and all of those things evolve along with the compute.
00:41:40 One of our ecosystem-building activities: earlier this year we founded IDEA, the Immersive Digital Experiences Alliance. I highly encourage you to go check it out at immersivealliance.org. What are we doing? We are trying to create royalty-free technical specifications around the conveyance of immersive media; we also do education seminars, and we basically curate new ideas.
00:42:08 What I am specifically interested in is the work group that I lead, the network architecture work group. There is a huge spectrum of content: you have ray tracing, where you have to run on multiple GPUs for X hours to render out cinematic content, and then there is the interactive content, where the question is: can I virtualize my GPU, can I slice it, and can I use it?
00:42:33 So there is a huge spectrum, and we are looking at how the network could potentially support all of these applications, perhaps in real time. We do not know whether we can do all of it in real time, such as ray tracing and physically based rendering engines, but we are doing some experiments.
00:42:55 This is some early curation of ideas on the direction we are moving in. In today's video world there is an end-to-end flow that actually gets negotiated end to end. What we are looking at is: can the network actively engage? Can there be quality-of-experience attributes, and client attributes that help describe them, that
00:43:21 can then lead to the network actually engaging storage, engaging compute, and engaging its networking capabilities, so that the right quality of experience can be delivered to the right device? I love this slide: a simple equation. There is a delicate balance; depending on the application and the quality of experience, you need to balance all three things: bandwidth, latency, and compute.
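The "simple equation" on the slide, balancing bandwidth, latency, and compute per application, can be read as a feasibility check: a placement works only if it satisfies all three requirements at once. The application profiles and placement figures below are illustrative assumptions, chosen only to show the shape of such a check:

```python
# Feasibility check: does a compute placement meet an application's
# bandwidth, latency, and compute needs? All numbers are illustrative.

apps = {
    # (min bandwidth Mbit/s, max latency ms, min compute TFLOPS)
    "streaming video": (25, 1000, 0.1),
    "cloud gaming":    (35, 50, 5.0),
    "cloud VR":        (100, 20, 10.0),
}

placements = {
    # (bandwidth Mbit/s, round-trip latency ms, compute TFLOPS)
    "national DC": (1000, 60, 100.0),
    "edge DC":     (1000, 15, 20.0),
}

def feasible(app, placement):
    bw_need, lat_max, fl_need = apps[app]
    bw, lat, fl = placements[placement]
    return bw >= bw_need and lat <= lat_max and fl >= fl_need

print(feasible("cloud VR", "national DC"))  # False: latency fails
print(feasible("cloud VR", "edge DC"))      # True
```

Note how streaming video passes everywhere while cloud VR only passes at the edge, which is the speaker's argument for embedding compute in the network.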
00:43:50 Now I can finally move into edge computing, more as concluding remarks, since I started with AR and VR. I took these from the State of the Edge 2018 report, which I highly encourage you to read. Essentially this is a set of use cases.
00:44:18 You have autonomous vehicles, which are going to be ultra-sensitive to latency and jitter; edge content delivery, which may be just about better video delivery; video surveillance, where there is a huge deluge of upstream video data; machine learning; and gaming, which I already talked about. And here we are. [Audience question about how the slide rates the use cases.]
00:45:01 Does it mean there is no real scale, because none of these applications has actually happened yet? It is someone's attempt to put an idea of a scale, without units, on this. Availability can still be measured; for security you would probably have to come up with a serious metric to measure security at the edge. But the point is
00:45:33 how important each attribute is for the success of these use cases. Exactly: this is somebody else's rendition of which attributes matter in each of the edge use cases. For example, is high bandwidth and low latency what is going to matter, or is the use case maybe not latency-sensitive, as he says with video
00:45:58 surveillance, which is not as latency-sensitive? So I am not going to claim authorship of this; it is a rendition, take it for what it's worth. Red does not simply mean "yes"; by this author's definition it means latency must be low and jitter low, while blue means more resource-constrained; by his definition he has called it blue.
00:46:42 A system with autonomous vehicles will need high bandwidth, low jitter, and very high availability. So I appreciate your thoughts on this slide; this was the first time I was beta-testing it, and clearly it could do with some work. Anyway, to my concluding slide. I do believe that network edge computing has the potential to democratize AR, VR, and
00:47:18 other immersive experiences. I will make a slightly provocative statement here for those of you who have been following 5G: low latency and high bandwidth are, not probably but certainly, prerequisites to reaping the benefits of edge computing; however, edge computing is actually agnostic of any network technology. So if, in the future, LTE
00:47:49 offers you 25 to 30 milliseconds of latency on the access network, while today DOCSIS 3.1 will offer you something like 10 milliseconds, then 5G will take it down to 2 to 5 milliseconds, and we have a path to 1 millisecond latency over fiber. So there is going to be some industry competition, beneficial to the consumer, to drive latencies down.
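The access-network latencies quoted above can be set against the ~50 ms interactive budget discussed earlier in the panel: whatever the access hop consumes, round trip, the rest of the loop must fit in what remains. The midpoint values and the assumption that the access hop is crossed twice are mine, purely for illustration:

```python
# Remaining interactive budget after the access-network hop.
# Latencies are midpoints of the figures quoted in the talk;
# the 50 ms budget and double-crossing assumption are illustrative.

budget_ms = 50

access_latency_ms = {
    "LTE": 27.5,        # midpoint of the quoted 25-30 ms
    "DOCSIS 3.1": 10,
    "5G": 3.5,          # midpoint of the quoted 2-5 ms
    "fiber": 1,
}

remaining = {tech: budget_ms - 2 * lat
             for tech, lat in access_latency_ms.items()}

for tech, left in remaining.items():
    print(f"{tech:10s}: {left:.0f} ms left for render/encode/decode")
```

Under these assumptions, LTE leaves a negative budget, which is one way to see why the speaker treats low-latency access as a prerequisite for edge-delivered immersive experiences.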
00:48:16 And last but not least, a very interesting report came out that, again, I encourage you to take a look at: Chetan Sharma claims that the edge internet economy, about a decade from now, will achieve the same scale and size as the cloud economy and the mobile economy did. With that I will conclude. Those are very large numbers.
00:48:50 Let's bring BJ up on stage; you had a question that I made you put off, so why don't you go ahead and ask. I'll let you go first, even before me. [Question about the device, the edge, and the cloud.] So the question is how we decide where we put which compute. We have this ability: if you take any of the immersive experiences, there are different tasks.
00:49:29 One thing I was talking about was the global lighting you see in any AR/VR experience; there is lighting, there are shadows, there are many aspects, many tasks. Among those, the aspects that are more dynamic, that require immediate, real-time movement, anything around that, have a low latency budget,
00:49:55 so you need to do them closer to you. The global lighting, by contrast, has almost a 200-millisecond budget, so I can have the global lighting done on a computer that is farther away, which in this case could be the cloud, if the cloud is available to me within 200 milliseconds.
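The placement rule described above, where each rendering task carries a latency budget and may run at any tier whose round trip fits it, can be sketched directly. The tier latencies and the first two task budgets are illustrative assumptions; only global illumination's ~200 ms budget comes from the talk:

```python
# Assign rendering tasks to compute tiers by latency budget.
# Tier latencies and most budgets are illustrative assumptions;
# global illumination's ~200 ms budget is quoted in the talk.

tiers = {          # round-trip latency to each tier, ms
    "device": 5,
    "edge": 30,
    "cloud": 120,
}

tasks = {          # latency budget per task, ms
    "head-pose reprojection": 10,
    "object interaction": 40,
    "global illumination": 200,
}

def farthest_fitting_tier(budget_ms):
    """Farthest (typically cheapest) tier whose round trip fits the budget."""
    fitting = [(lat, name) for name, lat in tiers.items() if lat <= budget_ms]
    return max(fitting)[1]

for task, budget in tasks.items():
    print(f"{task}: run at {farthest_fitting_tier(budget)}")
```

Pushing each task as far from the device as its budget allows frees the headset's limited compute for the truly latency-critical work.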
00:50:17 On the device side: we do use the device as well. As I was pointing out, we know the wireless network is finicky, and many of the actions we take are based on our prediction algorithms. At the end of the day, a prediction is a probability, right? It may not be correct, and then we have to reconstruct the whole scene using the device itself, from the
00:50:41 past data. There is some caching we have done; we know the past state, and from the position, or the gaze, of the person who is using it, we can actually recreate the whole thing on the device itself. That is how we identify the tasks and run them at the different nodes.
00:51:02 Before we get into other audience questions, can I ask you real quick: how far along is GridRaster, and how much money have you raised? Sure. We actually closed a round very recently, a $2 million round, so GridRaster today is close to $3.2 million all together, and hopefully, with all the engagements we have, we are pretty confident
00:51:28 that we will have the commercial pieces done; I think in the coming year, 2020, we should have raised our next round. So is your model going to be licensing the platform; is that the business? Yes, that would be the end goal: an annual licensing kind of model, where we license our technology and, on top of it, anybody who
00:51:47 wants to run any use cases can do so. Are you going to make it open source? It's too early. I saw three hands over here; why don't you go first, and then you.
00:52:50 [Audience exchange, partly garbled: the questioner asks what standard, if any, is being used for these edge-computing rendering architectures, referring to standardization efforts at operators and at Nokia and Ericsson, and how cloud rendering can be practical when the compute is not local.]
00:53:20 The way we do it, just think of it: the game logic runs on the server side. To give a quick answer, as I said, a lot of this is prediction-based. Even if my network is, say, 40 milliseconds away, what I am essentially doing is running my game logic in the cloud and rendering the
00:53:47 views 40 to 50 milliseconds before the user even does anything; that is the prediction part of it. If I have already put that data onto the device, cached it onto the device, then based on the user's position or a specific interaction, I do not have to go back to the cloud to show it,
00:54:15 because I have already done that; I have already put it onto the device. The only place it fails is where my prediction is not right: if the user makes a movement that I did not predict correctly, in that case I have to regenerate everything on the device. That, at a broad level, is what we are doing.
00:54:42 In using that, how far out on the network I can pull the compute has a lot to do with how much bandwidth is available to me. It could be the case that I generate everything and put it onto the device, subject to the storage required on the device; that is a trade-off we are making. Today the best mark we have reached is about 40 milliseconds away,
00:55:04 at 60 frames per second and 2K resolution: even if the server is 40 milliseconds away, we can reduce the perceptible delay to some 20 milliseconds, and that has a lot to do with the prediction algorithms that we are running.
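The claim above, a server 40 ms away but only ~20 ms of perceptible delay, makes sense as an expected value over prediction hits and misses: a hit serves a pre-rendered, pre-cached view almost instantly, while a miss forces on-device regeneration. The hit rate and per-path delays below are illustrative assumptions chosen only to reproduce that ballpark; GridRaster's actual figures are not public in the talk:

```python
# Expected perceptible delay under prediction + caching.
# Hit rate and per-path delays are illustrative assumptions.

network_rtt_ms = 40        # quoted round trip to the remote renderer
hit_delay_ms = 8           # serve a cached, pre-rendered view locally
miss_delay_ms = 38         # regenerate the scene on the device
hit_rate = 0.6             # fraction of motions predicted correctly

expected_ms = hit_rate * hit_delay_ms + (1 - hit_rate) * miss_delay_ms
print(f"expected perceptible delay: {expected_ms:.1f} ms")
```

The model also shows the leverage points: a better predictor raises the hit rate, and faster on-device fallback rendering lowers the miss penalty; either one cuts perceived latency without moving the server closer.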
00:55:28 To what extent is standardization going to be essential, or is it going to differ from application to application? I would not call anything standardization right now; I think we are all in an ecosystem-building phase. My opinion about standards, at least, is that standardization comes when something is so mature that it is no longer a key part of the value chain and you need to move to the next level of things,
00:55:51 and I feel this area is very, very new. That said, there are some good ecosystem specifications; for example, what has come out of OpenXR, which is about trying to make sure that the fragmentation occurring between headset and content compatibility does not occur in the future. There is also VirtualLink: if any of you have
00:56:16 tried VR with laptops, you have all these unwieldy cables, so the new VirtualLink standard standardizes the hardware interface there. And in the IDEA group, we are looking at what a distribution format that is 3D-media-native looks like, so we just put out ITMF, the Immersive Technology Media Format, which basically specs a scene
00:56:45 graph. There are versions of scene graphs out there, like glTF, which is a transmission format, and so on and so forth, but we found some of the existing formats somewhat inadequate for highly photorealistic rendering.
00:57:16 I need to ask a follow-up question before we go on with your questions. Neither one of you mentioned MEC, which is either mobile edge computing or multi-access edge computing, and maybe this is also a Dijam question: is that something the network is likely to own, perhaps located in a base station? Because you will also have a bunch of truly mobile edges, like inside self-driving or connected cars.
00:57:41 So, mobile edge computing, or multi-access edge computing to be more accurate, essentially assumes you have an edge, and it is about contextual information being available to you all the time. The problem with mobile edge compute, or multi-access edge computing, is that it does not really address the immersive case; immersive is really addressed by companies like GridRaster, and by some of the work
00:58:08 we are also doing in partnership. So I would have to say that existing MEC is highly inadequate to really address AR/VR; that is another kind of initiative, work that remains to be done. I would also say that there are things that need to come out of ETSI, because they link to how the vendors are going to make
00:58:38 chipsets, and so on and so forth; but I do not believe it has been determined that an immersive MEC necessarily needs to come out of ETSI, until somebody makes that determination. I am seeing lots of hands; why don't you go next, and then you, and then you, please.
00:59:09 Clients can be tablets, smartphones, car windshields, or headsets, and the problem with VR right now is the headset. I have tried a $3,000 headset, but even the $300 Oculus Quest I could wear for no more than 10 minutes, because, just as you say, there is fatigue, and also the weight and the pressure on the nose. I think that is the reason it has not caught on in mass adoption.
00:59:52 Maybe in two years Apple can release their AR glasses, connected to a smartphone rather than independent; until such eyeglasses arrive, how many years do you think it will be before these become mass-market, mass-adoption devices, so that we as developers can make money? Dijam, you had mentioned that enterprise is going to be first.
01:00:27 Right, that is something we want to juggle with, to think about, every day: how do you monetize? Because at the end of the day, unless the platform is used by somebody, we are not going to make money. What we are seeing is that the consumer story has a lot of variables that need to fall into place: as you said, the device
01:00:52 and its price point, people getting comfortable, and then the whole content ecosystem. The reason we moved from the consumer side to the enterprises is precisely this: in enterprises it is more value-driven. For example, I will take a case from aerospace. Suppose
01:01:17 there is an airplane where something has gone wrong, and it is actually stranded in a hangar, let's suppose in India, and let's suppose it is an Airbus airplane. Before AR, what would happen is that the expert would travel all the way from France to India, repair it, and then come back.
01:01:46 The cost of travel is insignificant; what is significant is the amount of downtime while that takes place, and that amount could run into hundreds of thousands of dollars. For them, using a HoloLens, even if it costs three to five thousand dollars, the value they get out of it is humongous.
01:02:21 We are on the early side of people wearing AR headsets at scale: what kind of time frame are you seeing for the adoption of AR in industrial settings? What I am saying is that they are already beginning to deploy. One of our customers, which obviously I am not able to reveal, is actually buying HoloLens units in the thousands now, and not only for training but even for the manufacturing processes:
01:02:47 somebody doing hands-on work gets the overlay of all the models and the step-by-step instructions they need. And when I say thousands, it is multiple thousands; that is what they are beginning to use, that is the scale we are looking at. So on the one side we are already seeing an awful lot of movement. What do
01:03:12 you think about the emergence of these immersive experiences? If you look at penetration on the consumer side today, depending on which report you look at, it is somewhere between five and ten percent, or greater, if you look at what analysts predict.
01:04:16 [Audience question, partly garbled, about security: most of these devices have very minimal security hardening, and given their complexity they should be more secure, so why was security placed in the middle of the scale?] Absolutely; in fact, if you talk to enterprises, on-premise edge is the best edge.
01:05:01 These guys want to build the next internet, the ultra-low-latency one, so their perspective is a little more consumer-oriented. While you are on that: are you looking at the development of these kinds of architectures in places besides the US too? Because I understand there are some two-to-five-rack micro data centers being constructed in some of the smart cities in China.
01:05:27 Look, I have heard great things about China, and great things about SK Telecom in Korea; some of these stories are actually excellent examples of how the world should do this development. That said, the US is a very large swath: you have got to run a backbone across the US. It is just a
01:05:54 huge land mass: you run fiber across it, you take that core backbone and move into market data centers, then into head ends, or what Dijam called central offices for telcos, and then you are serving a few hundred homes on the other end. So there is a lot that goes into building a network like that.
01:06:20 Some of the other insight I would offer, and I think this is relevant to AR and VR, is that in places like Asia there is a lot of location-based entertainment cropping up, far more than in the US; that is a very interesting trend. In places where the cost of computing may still be higher, you are actually finding some successful businesses: if you look at the service business
01:06:47 model, for example, a reseller of VR content would basically set up eight-foot-by-eight-foot bays, and the same experience that a US consumer would pay $3,000 for, people in Asia would get in such a location for, I would imagine, pennies on the dollar. So there is a lot of that going on in Asia.
01:07:18 In the US, on the other hand, these location-based experiences are quite jazzed up, but to some extent there is a large segment of the population that has not experienced VR in the US at all, because there is nothing in between; there is not that eight-foot-by-eight-foot bay. Though I do highly encourage you, if anybody has not been to Dave & Buster's, I think they are pretty good.
01:07:39 [Audience question about channel partners.] Yes, there will be channel partners; they will always be there, and an Autodesk or a Dassault Systèmes would be excellent from our angle. For us, at this stage, what it comes down to is that whatever you are building needs to be used, and that is where I think we will be able to evolve the value in the
01:08:58 product. That is the reason that with our initial customers, the five or six of them we are currently working with, we are working directly: we are learning that somebody is using PTC Creo, somebody is using Autodesk or CATIA models; even within the same organization, people have CATIA for different purposes,
01:09:24 and different groups use different tools for different purposes. Those are the things we are learning, which is more on the integration side of it, the productization side of things. Once we have thrashed that out, it would be excellent for me to take this solution to an Autodesk and say:
01:09:43 I can enable you with this for all your different customers who require it. We are also part of the Siemens Frontier program, which again overlaps pretty well in terms of focus; they have a pretty big presence in automotive and aerospace, and I think 70% of the revenue comes from
01:10:02 that, so that is something we are already beginning to work with. [In response to a follow-up:] Not yet, but we did apply at some stage to their, what is it called, [inaudible] forum; I think it was too preliminary for us. Thank you. Other questions? Go ahead.
01:10:56 Did I hear you say "balanced network"? I thought I had misunderstood. You did use the phrase "balanced network"? Okay, good. Next question, especially when you think about streaming data: is there any development on the GPU, on the FPGA side? That part of it only helps us. On the device? On the device.
01:12:29 Away from the collection point, certain decisions have to be made in terms of how data gets managed and how it gets distributed. Can I throw in my own? You are going to have all sorts of different architectures. You might have a factory: if you have workers going around with glasses, they are not going to have much on the glasses in terms of a system board, so it makes much more sense to have
01:12:53 something on the floor of the factory doing most of the processing right there at the factory. In a self-driving car, it has pretty much got to be in the car; I saw that from Volvo a few weeks ago. In any case, you are going to see different things. It is not that one is right; it is about getting the right fit for the right business.
01:13:21 where the killer you know kind of that's why I was gonna do jumps framing the hierarchical and field right so if you have compute everywhere then based on the quality of experience that's needed you have to make some dynamic decisions yes you know start looking into I also like your point about edge being agnostic of the network which means the edge can go on and develop you don't
01:13:49 have to wait for 5g that's and I think this too in some ways edge optimizes the network infrastructure that you have but it must be kind of scary for a telecom company to think about the sort of possible capital capital expenditures that might be involved in having the compute infrastructure to support edge computing and that's exactly right right so you've got I mean if you look at it
01:14:19 from a telecom companies point of view they first have to get all the connectivity and low mid high band and then they have to get the computing and I think for any operator which has a capital intensive business this is going to be a challenge I think we're thinking about it exactly I mean all of you so once again go back to sort of the hierarchy of
01:14:54 computer analogy but I mean the way we look at the network is you know if there are access to your point virtualization and software-defined network if there are any of those latency critical functions that are right around you know five Mac or you know Radio selection all of those will have to sit right at the edge of the access network which you know for a telco is a tower right for us
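The hierarchical placement decision described here, compute on the device, at an on-premise edge, at the telco access edge, or in the cloud, chosen dynamically against the quality of experience a workload needs, can be sketched as a toy selector. All tier names, latency figures, and cost weights below are illustrative assumptions, not numbers from the discussion:

```python
# Toy sketch of hierarchical compute placement: pick the cheapest tier
# that still meets a workload's end-to-end latency budget.
# Tier names, latencies, and costs are illustrative assumptions.

TIERS = [
    # (name, assumed round-trip latency in ms, assumed relative cost)
    ("device",       5, 4.0),   # on-device compute: lowest latency, most constrained
    ("factory-edge", 15, 2.0),  # server on the factory floor
    ("access-edge",  30, 1.5),  # compute at the telco tower / access network
    ("cloud",       100, 1.0),  # centralized data center: cheapest, slowest
]

def place(latency_budget_ms):
    """Return the cheapest tier whose latency fits the budget."""
    feasible = [t for t in TIERS if t[1] <= latency_budget_ms]
    if not feasible:
        raise ValueError("no tier meets the latency budget")
    return min(feasible, key=lambda t: t[2])[0]

print(place(20))   # tight AR-style budget  -> "factory-edge"
print(place(200))  # relaxed batch budget   -> "cloud"
```

The point of the sketch is the panelists' observation that no single tier is "right": the same selection logic lands an AR workload on the factory floor and a batch workload in the cloud, purely because of the latency each one can tolerate.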
01:15:18 Moderator: If I can interrupt and put in a plug: the videos of most of the previous sessions in this series are already viewable online, with free access, on our website, asia.stanford.edu. The third session in our program was from Intel, about chip acceleration, so that has been discussed in this series

01:15:47 already.

Panelist: Correct. Actually, you've got FPGAs, you've got GPUs, and increasingly you have ASICs, because once you have large numbers of devices it becomes cost-effective to design a mission-specific chip for that specific device. You can always reduce wiring, reduce the size of the chip, and get faster throughput and lower power, so

01:16:17 that's really a huge thing that the chip industry is delighted about.

Panelist: Yes. I would also mention that for us, we now talk in terms of converged networks. It really depends on where you are: access could be wired, it could be wireless, it could be small cell, it could be Wi-Fi. It's really about where you are and what your

01:16:43 best link is.

Moderator: What are the things you're most worried about? What could hold edge computing back? What do you think are the big dangers on the horizon that could mess up this whole picture?

Panelist: The edge computing industry has barely started, so we have not even hit what I call the hype cycle just yet; we're about to get there.
01:17:29 So today I don't think we have a trough of disillusionment happening just yet. It's about getting your decisions right and getting your business cases right. I'm pretty sure that in the future there will be plenty of things to keep us up at night, but what I would like to see is mass adoption of immersive experiences.

Panigrah: For me, there are a couple of

01:17:56 things on the edge computing side. As I was saying, the reason we went after use cases based on value rather than volume is that, for the volume to happen, there are many other aspects to solve. Even from a computing standpoint, GPUs are still very expensive. If you're looking to build at the kind of volume we're talking about,

01:18:20 those price points also have to come down, and if that does not happen, the amount of money required to put out that kind of infrastructure is going to become quite challenging. I definitely see the price of GPUs coming down, and that is going to happen, considering some of the insights we have and

01:18:41 the developments that are under way; all the big players are coming in. The more you hear about it, the less risky it starts to look, and that's how we just have to navigate.

Moderator: Okay, great. I'm afraid we have run out of time. We've got some refreshments outside, so please plan on continuing our

01:19:04 conversations in a more informal setting. For now, please join me in thanking Dijam. Thank you.