AI inferencing at the 5G connected edge for video analytics

Martin Jensen, Athonet (00:04):
Thank you very much for having me here on stage. What I'm going to talk about today is inference at the edge: how to lower your costs, how to improve safety, and also how to improve your revenues. Oftentimes we get questions on where the edge is. The edge is at the enterprise. That's where you want to do the inference: not in the cloud, but closer to the user. Sometimes you want your solutions air-gapped, and that's what we want to talk about today. Here's a subset of where we get the questions from: I want to do inference, but I want to do it at my facility. That could be anything from cities. You hear a lot of people talking about smart cities. People don't want to do it in the cloud; they want to do it in the city where the data is generated, and they want to be able to do the inference there so they can do it faster, better, more securely, and in a closed environment. It's manufacturing, where oftentimes you want to be sure to do the inference at the manufacturing plant so that you can do a quick reconciliation of where your assets are.

(01:08)
Defect detection, yield optimization: it's all about optimization. The next one is retail stores. People are looking at how to optimize what they have on the shelf, making sure they have what their customers want to acquire, and inventory management. I need to do inference at the edge to be able to identify all of those things. It's entertainment: finding out where I have congestion at my stadiums, finding out how I can optimize my queues to make sure that I can optimize my yield and optimize my sales. Just recently in entertainment we were told, well, we want to have a moving robot go to where the people are so I can sell more sausages and more beer to my audience in the stadium, and we can show you some of that over here at our demo with our Wheel.me robot. Industrial sites are also one where we get questions, and oftentimes it's about safety.

(01:58)
Are people wearing their helmets? Send out an alert if they're not wearing their helmets. It's campuses, it's access control: making sure people can enter the places they're allowed to as easily as possible, with inference at the edge doing facial detection or something like that. It's warehouses, the larger utility companies: how do I analyze what I have on my shelf? How do I make sure that I get it in and out as fast as possible? And it's places like airports. These are just a small subset of the places where inference is being asked for. Oftentimes in these places we get questions such as: well, I have my cleaning robot roaming around; can it check what I have on the shelves so it can tell me where I need to restock? It's a security robot that roams around and scans people in a venue to make sure that people are where they need to be, and unwanted people are escorted out of the building.

(02:55)
Next, I'm going to talk a little bit about the architecture, the architecture that we are also showing here. We have a server in the middle that runs the Athonet private 5G core that controls the private network. You're now able to have a network like Telefónica's, AT&T's, or Vodafone's, but dedicated to your private enterprise. It runs on an HPE ProLiant server; the core runs right there, alongside all the other applications that we have. The Wheel.me robot that I'm going to show over here at the booth roams around and is controlled by the private 5G network. There's also a mission-critical push-to-talk application running on the same server, and then there's the video analytics that we're going to talk a little bit about. You can have wired connections to it, but you can also have wireless connections, like the cameras.

(03:39)
We have two in this booth. There's one sitting right up here that does inference, so inference at the edge. Up here we are counting people going in and out of this area, we are looking for lost backpacks, we are looking for phones, and you can see all of that being captured by this camera up here. There's another camera sitting up in the corner over here. This one is capturing another video stream, where we are looking for other objects. Both video streams are being transferred over the air to two radios that are hidden behind these walls up here, and down to the server that is located right here. With these, we can generate alerts going out to different people, to phones, to different systems, and we can of course see all of that right here at the edge. We're showing it all at the demonstration over at the booth. So, the next one:
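
The talk doesn't show code, but the pipeline just described is easy to sketch: pull a camera stream over the private network and run object detection on it at the edge. The snippet below is a minimal illustration using OpenCV and an open-source YOLOv8 model as stand-ins, not the commercial analytics application used in the demo; the RTSP URL is a hypothetical camera address.

```python
# Sketch: ingest one RTSP camera feed and detect the object classes
# mentioned in the demo (people, backpacks, phones) at the edge.
import cv2
from ultralytics import YOLO

CAMERA_URL = "rtsp://192.168.10.21/stream1"   # hypothetical camera address
WATCHED = {"person", "backpack", "cell phone"}  # classes from the demo

model = YOLO("yolov8n.pt")  # small pretrained COCO model
cap = cv2.VideoCapture(CAMERA_URL)

while cap.isOpened():
    ok, frame = cap.read()
    if not ok:
        break
    for result in model(frame, verbose=False):
        for box in result.boxes:
            label = model.names[int(box.cls)]
            if label in WATCHED:
                # In the demo, detections like these feed counters and alerts.
                print(f"detected {label} (confidence {float(box.conf):.2f})")

cap.release()
```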

(04:30)
Yeah, this is the server that we're using. What is different about this server is that, through a partnership with NVIDIA, we have L4 GPUs in the server, which means that we can run the inference at the edge together with all these applications. Again, we are running three applications on it, but there are customers running many other applications that they use in their enterprise. What's also neat about this is that it runs a full 5G core network, exactly the one that AT&T and Verizon are using, but it all runs at a very small scale that is dedicated to your particular enterprise.

(05:06)
The video inference AI application that we are using is from another partner company, called IronYun. Again, it uses the NVIDIA GPU to identify these different use cases; up to 30 different use cases are available on the platform at the moment. We can also ingest camera feeds from either existing cameras or new cameras that you connect to the platform. It's scalable up to a thousand cameras, of course with the larger GPU units, but this is the application that we are using to run the inference at the edge and not in the cloud.
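
To give a rough idea of what scaling ingestion across many cameras involves, the sketch below fans one decoding worker out per feed. The camera URLs are hypothetical, and a production platform like the one described would batch frames onto the GPU rather than simply printing per-thread like this.

```python
# Sketch: decode several camera feeds concurrently at the edge.
import threading
import cv2

# Hypothetical RTSP URLs for a handful of cameras on the private network.
CAMERAS = [f"rtsp://192.168.10.{20 + i}/stream1" for i in range(4)]

def ingest(url: str) -> None:
    """Decode one camera feed; frames would be handed to the shared
    GPU inference pipeline rather than handled here."""
    cap = cv2.VideoCapture(url)
    while cap.isOpened():
        ok, frame = cap.read()
        if not ok:
            break
        # hand `frame` off to the inference pipeline here
    cap.release()

threads = [threading.Thread(target=ingest, args=(u,), daemon=True) for u in CAMERAS]
for t in threads:
    t.start()
for t in threads:
    t.join()
```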

(05:44)
These are the use cases that we are showing here. Again, there are a lot of different use cases. On this particular one we are showing object detection. As I mentioned before, on one camera we are looking for cell phones, and we are looking for lost baggage. For example, if somebody leaves a backpack over here, that can sometimes be a security risk, at airports for instance, so we are looking for backpacks. So you see these object detections: a green marker is a detected object, and if it's red, the object has been detected for more than 30 seconds. That's what we are doing right there at the edge. Up here we are looking for in and out, so this is an intrusion detection use case: we're looking for people walking in and out. This is also used in entertainment, where you want to see where people are walking in and walking out so that you can optimize your yield and put the hot dog stands where they should be, and it could also be for museums, for example, if you don't want people to enter certain areas.
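
That 30-second rule is a simple dwell-time check, and a minimal sketch of the logic is below. It assumes a tracker upstream is assigning stable integer IDs to detected objects; the actual IronYun implementation is not public, so this is only an illustration of the idea.

```python
# Sketch: flag objects green on detection, red after 30 seconds in view.
import time

DWELL_THRESHOLD_S = 30.0          # per the demo: red after 30 seconds
first_seen: dict[int, float] = {}  # tracker object ID -> first-seen time

def update(visible_ids: set[int]) -> dict[int, str]:
    """Return a colour per tracked object: green = detected,
    red = present longer than the dwell threshold."""
    now = time.monotonic()
    # Forget objects that have left the scene.
    for obj_id in list(first_seen):
        if obj_id not in visible_ids:
            del first_seen[obj_id]
    status = {}
    for obj_id in visible_ids:
        first_seen.setdefault(obj_id, now)
        dwell = now - first_seen[obj_id]
        status[obj_id] = "red" if dwell > DWELL_THRESHOLD_S else "green"
    return status
```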

(06:44)
And the last one, that is the alerting mechanism. This graph here shows you how many violations, as we call them, happened during this period of time, and the kinds of violations that were seen. And this here is the alerting mechanism that alerts when these particular thresholds are met. So in a nutshell, that is what we have, and I want to welcome you all to the booth. We have another small surprise over there: we integrated the whole AI 5G setup with a chatbot. All of this can be pretty complicated for a non-telco engineer: if I need to stop my video stream, I have to log in and find the MAC address or IP address to lock that camera stream. So we have developed a chatbot that can interface with all this complex technology so that we can, for example, say I'd like to stop this camera stream because I don't want to record it, from a compliance perspective.

(07:40)
For example, I want to stop it for 30 seconds. With the chatbot, you can say "stop camera one" and it will stop the video feed. By doing so, it basically runs a couple of API calls into the different elements and stops the camera feed, and then I can start it again with a simple chat command. And there are many more applications we're looking at, for example the Wheel.me robot that roams around over there. In the same way, we want to say turn left and turn right, and with the chatbot you can run those human commands and make the robot turn left and turn right. So, that was it.
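
The speaker doesn't detail which API calls the chatbot makes, but the pattern is straightforward to sketch: parse a plain-language command and translate it into REST calls against the camera or analytics elements. The base URL and endpoints below are hypothetical stand-ins, not a published Athonet or IronYun API.

```python
# Sketch: map a chat command like "stop camera 1" onto a REST call.
import re
import requests

BASE = "https://edge-controller.example.local/api/v1"  # hypothetical endpoint

def handle_command(message: str) -> str:
    """Translate 'start camera <n>' / 'stop camera <n>' into an API call."""
    m = re.match(r"(start|stop) camera (\d+)", message.strip().lower())
    if not m:
        return "Sorry, I only understand 'start/stop camera <n>'."
    action, cam_id = m.group(1), m.group(2)
    # Hypothetical REST resource for controlling a camera feed.
    resp = requests.post(f"{BASE}/cameras/{cam_id}/{action}", timeout=5)
    resp.raise_for_status()
    return f"Camera {cam_id} {action} request sent."

print(handle_command("stop camera 1"))
```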

Moderator (08:14):
All right, well thank you so much. Thank you.

Martin Jensen, Athonet (08:16):
Thank you.

Please note that video transcripts are provided for reference only – content may vary from the published video or contain inaccuracies.

Martin Jensen, Head of 5G Solution Architecture Americas, Athonet, a Hewlett Packard Enterprise acquisition

The ability to spot problems and react instantly can bring huge benefits for manufacturing, industry, and other enterprise operations. Now, enterprises are combining private 5G and AI at the edge to optimise their businesses. New AI vision solutions can analyse video in real time, identify objects, and provide essential insights to improve public safety and efficiency, and to reduce costs.

Recorded February 2024
