Ben: Welcome back to our SYSGO TechCast! We’re excited to have you with us as we dive into fascinating discussions on the latest trends and challenges in embedded systems, safety, and security. Let's explore an incredibly exciting and evolving topic: AI and machine learning in embedded systems. It’s a field that’s rapidly growing, with innovations happening at a breathtaking pace. But what does it mean for embedded systems? How do we ensure these technologies can be integrated safely and efficiently into critical environments?
Mark, one of the most interesting aspects of AI and machine learning is how they can uncover insights we might not even be looking for. These technologies are here to stay and will also play a big role in real-time applications, right?
Mark: I agree, and I think one of the lessons I learned in the IoT world was: you don't know what you don't know. That sounds like a strange statement, but I have several projects where we implemented technology to solve one problem by monitoring something, and we discovered something completely unconnected to it. The challenges that we discovered were different from the ones that we expected to find. We solved a business problem and proved the business value of the use case that we were looking at. But by monitoring the thing, we discovered a whole bunch of other stuff that we weren't expecting to, and as a consequence, different business models were also driven out of that. And I think if we start looking at deploying AI or its predecessor, machine learning, or if we start adding that to existing infrastructure, because that's where it's going to be attached, there's a lot of potential to learn new things, to understand things that we didn't previously know, to monitor things and learn how to do things better.
The challenge, I think, is how we make the best use of AI and machine learning without it breaking what we have out there. So the word that I keep hearing an awful lot is autonomy. But how do we control autonomy? How do we make sure that it's just autonomous enough? How do we put some checks and measures in place? It's one of those questions that I think will be answered eventually, but I don't know the answer myself. But also, if we look at putting AI or machine learning close to existing infrastructure, I think we will discover how we can make those things run more efficiently and how we can prevent things from breaking. What does good actually look like, and what's an indicator that something's going wrong? If we can understand sooner that something is likely to happen, then we can stop it from happening. Maybe when we deploy AI, we discover that a particular frequency of noise, a certain color of light or a certain heat indicates something's going to break.
As we start looking at things in more detail closer to the edge, we're going to learn an awful lot that we didn't know before and hopefully to our benefit. But the underlying question is how do we do that in a way that's controlled?
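The kind of early-warning monitoring Mark describes, learning what normal looks like and flagging when a signal drifts away from it, can be sketched very simply. This is an illustrative example only; the signal, sample values and threshold are assumptions for the sketch, not anything from the episode:

```python
from statistics import mean, stdev

def learn_baseline(samples):
    """Learn a normal operating range from readings taken while the system is healthy."""
    return mean(samples), stdev(samples)

def is_anomalous(reading, baseline, k=3.0):
    """Flag a reading that sits more than k standard deviations from the baseline."""
    m, s = baseline
    return abs(reading - m) > k * s

# Hypothetical vibration amplitudes observed during normal operation
healthy = [1.0, 1.1, 0.9, 1.05, 0.95, 1.02, 0.98, 1.08, 0.92, 1.01]
baseline = learn_baseline(healthy)

normal = is_anomalous(1.03, baseline)   # an ordinary reading: not flagged
suspect = is_anomalous(2.5, baseline)   # a large spike: flagged for attention
```

Real edge deployments would use richer models, but the principle is the same: the monitored signal you learn the most from is often not the one you originally set out to watch.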
Having something like a hypervisor, having something like SYSGO technology, means that we can deploy AI and machine learning in a controlled environment alongside other technologies. So we can manage how that deployment happens and actually manage that technology without it impacting things that are happening in real time. Or we can separate that system from other technologies so that it doesn't impact them; it only improves them, or provides the capability of understanding how to improve them.
Ben: That’s a great point! And it raises another important question: How do we ensure that AI and machine learning don’t interfere with the stability of existing systems? We hear the term “autonomy” a lot, but how do we strike the right balance between autonomy and control?
Mark: Do we need to certify AI? That is the other question, and yet again, I don't have the answer. Far wiser people than me will be debating this. And if they're not debating it in a business sense, I sincerely hope they're having a conversation about it over a pint in the pub, because it's one of those conversations. One of the use cases we've seen recently is certification and high-end GPU. And I say 'and', not 'certification of'. One of the interesting things that we can do at SYSGO with PikeOS is that we can have certified things alongside uncertified things. What does that mean? You can get the benefit of a high-end GPU, 30 teraflops of compute power, numbers that my 30-year-old self would have been scared of. We have this incredibly complex technology that we can take the benefit of, but without having to certify that technology. We can put it in a partition within a type 1 hypervisor. We can control it. We can make sure that it only does what it does, and that it's only allowed to feed data out based on the controlled data that we feed in. So we keep it in its own walled garden and we make sure that it doesn't break out. And we can have certified things running alongside it. So it's a high-end, very complex, uncertifiable beast alongside something that is certified.
And one of the benefits of PikeOS is that we make sure that those two things do not mix. We control what the beast has access to and we control what the beast is fed, acknowledging that it is a beast. It is technology that would take far too much time and resource to certify. And ironically, in the world that we currently live in, it would probably be out of support and end of life before that certification journey was even halfway through, because it is so complex. So we separate the complex from the less complex that is certified. And therefore I'd say it's an 'and', not an 'or'. We have certification and high-end GPU, certification and AI, not certification of AI and not certification of the GPU. And that's an emerging conversation I'm seeing out there in industry. How do we adopt these new technologies that we can do really clever things with, going back to the shiny new things, and solve new shiny problems? But we still need to make sure that they don't break the systems that we already have. It's an 'and', not an 'of', if that makes sense.
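The walled-garden pattern Mark describes, controlling both what the uncertified component is fed and what it is allowed to emit before anything reaches the certified side, can be sketched generically. This is an illustrative gatekeeper in Python, not the PikeOS API; all names and values here are assumptions made up for the sketch:

```python
def gatekeeper(uncertified_fn, allowed_inputs, validate_output):
    """Wrap an uncertified component so that only pre-approved data goes in,
    and every result is checked before it can reach certified code."""
    def run(data):
        if data not in allowed_inputs:
            raise ValueError("input is not on the controlled feed")
        result = uncertified_fn(data)
        if not validate_output(result):
            raise ValueError("output rejected by the certified-side check")
        return result
    return run

# A stand-in for the complex, uncertifiable "beast" (e.g. a GPU inference job)
def beast_inference(x):
    return x * 2

safe_infer = gatekeeper(
    beast_inference,
    allowed_inputs={1, 2, 3},                 # the controlled data we feed in
    validate_output=lambda y: 0 <= y <= 10,   # the only outputs it may emit
)
```

In a real system the separation is enforced by the hypervisor's partitioning and communication channels rather than by a wrapper function, but the principle is the same: the beast never talks to the certified world except through a controlled, validated interface.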
Ben: That’s a really interesting perspective, especially when it comes to managing certified and uncertified components side-by-side. Looking ahead in our industry, Mark, in your view: What’s the big picture for embedded software development? Where are we heading in the next few years?
Mark: I think the future is bright, but the future is interesting. Going back to my notebooks in 2019, we were heavily into what was going on inside the silicon and very interested in which pin did what at a very, very low level, and I'm simplifying here. But we intimately knew the technology that we were working with, because we were trying to eke out every single bit of computing power and get the best out of the technology that we had at that time. I think there's a big gap between that world and the world we're seeing now. The ability to build applications and write good code, and to manage that code using technologies like Rust etc. to enforce or reinforce best practice in the application space, is going to help drive best practice in terms of what the applications look like. I think people like us, with PikeOS and other technologies, will continue to provide the framework to help you certify and keep things safe and secure.
I think the open source world, and I'm seeing lots of debates about open versus closed source, with companies moving from open source to closed source in some of their technologies and vice versa, will continue to provide us with a background or support in technologies that are emerging. The bit I'm unsure of is how much we will do at the very, very low level in the future. So will more and more of the embedded world be taking existing COTS platforms and adding applications on top? Will the pressure on people like ourselves and our partners be more to, I don't want to use the word commoditize, but provide a better integrated platform onto which customers add their applications and configure the thing underneath, without driving down to the low depths? Or will there be a resurgence of the requirement to actually understand what's going on in the deep depths of a complex SoC, so that we can control it from the bottom up? I don't know.
I see more and more capability in the application space, because that's what people come out of university with and are encouraged to do by industry. I see less and less capability at the really low level. So I think we might see a move to more platforms of technologies that already work. And we've done some of that ourselves, working with partners. For instance, on the i.MX 8QuadMax and on some of the x86 platforms we're working on, we have a good level of integration with partners so that customers can take what we provide and get working on day one. You can almost see that becoming more of a norm in the industry, where we and our partners are working even more closely together to provide platforms that have a better level of integration on day one than they ever had before.
But I see the need for embedded software getting larger. More and more things are being connected. Connectivity is the norm; it's not unusual anymore. And more and more business models assume the ability to connect something and get data off it. So there's going to be a big drive to do that wherever we look. And as a consequence, I can see embedded software having to change, in that it provides more platforms, because there won't be enough resource to do everything that we want to do. There won't be enough people who know how to do it, and therefore pre-built systems will become, I guess, more of the norm. It's a guess. If I had a crystal ball, I could probably have predicted the changes that I've seen over the last 30 years, and my 30-year-old self wouldn't be quite as surprised. But the embedded world is an ever-expanding world, even though it's a world that is still quite hidden from most people.
Ben: That’s an insightful outlook, and it’s clear that embedded software will continue to evolve as an essential part of our connected world. Before we wrap up, thank you very much, Mark, for sharing your expertise and having this great conversation!
Mark: Thank you! If I can add value, I'm very happy to add value. I look forward to the next conversation because every day, as I said, I'm learning something new. It's always interesting to have a conversation about what I've learned and what I'm seeing out there in the world.
Ben: Thanks to all of our listeners for tuning in. If you found this discussion insightful, make sure to subscribe and stay updated for our upcoming episodes. And if you’re as passionate about embedded systems as we are, why not join us in person? We’ll be at the Embedded World in Nuremberg in March. Come by the SYSGO booth 126 in hall 4, say hello, and let’s keep the conversation going!
Until then, hear you next time, take care and keep having fun with embedded technology!