Maximum throughput, Minimum Latency - Building a mobile app as an HMI for a high-speed data logging and control ecosystem | Perth Mobile App Developers Meetup
- EEA
- Feb 17
- 19 min read
You can’t just throw real-time data at an app and expect it to work!
At the last Perth Mobile App Developers Meetup, our very own Ayrton Sue and Ravinder Bhandari shared the challenges of handling high-speed data in mobile apps, particularly for data logging and control solutions, like our Cranio ecosystem.
When real-time IoT data is streaming in at 20Hz or more, the usual solutions start to fall apart. JSON quickly becomes a bottleneck, off-the-shelf charting libraries struggle to keep up, and UI updates can lag behind, making the whole experience unusable.
To solve this, Ravinder and the team had to rethink how data flows from sensor to screen. We moved to Protobuf for faster serialisation, built a unified communication layer that works across Android, iOS, and web, and developed a custom charting solution that maintains smooth performance even at high data rates.
The team is also designing a state machine modelling system that allows developers to configure complex device interactions without touching firmware or low-level code.
These aren’t just IoT challenges; they’re the same issues faced by anyone working with high-frequency updates in mobile apps. Whether it’s real-time analytics, live telemetry, or anything that demands low latency and high reliability, getting the architecture right from the start makes all the difference.
The next meetup is happening on Wednesday, 26th February 2025, at Adapptor. If you’re into networking and love a good discussion, come along!
More details on the Meetup webpage, click here!
For those who'd rather read than watch, here is a transcript of the presentation!
Ayrton: My name’s Ayrton Sue, I’m Managing Director of Element Engineering. This is Ravinder Bhandari, our lead app developer and software engineer at Element Engineering.
My background, I'm a mechanical engineer. And I'm very aware that I'm a mechanical engineer talking about apps in front of a whole bunch of app and software people. So go easy on me!
I've done a lot of data analysis for race cars, Formula 3, V8 supercars, things like that, and we used high speed data loggers to get information off the race cars. I started to use those data loggers in the mining industry on mobile equipment and stationary equipment. We were doing data logging of vibration or g forces or certain forces, and we were able to get information about what the machine was going through. Then we could change the mechanical design to be able to make a better product.
That was back in 2010. And we would basically have a big Pelican case where we would strap these data loggers onto a machine. And you would have a certain amount of storage, which was usually around one megabyte. You could get up to 5,000 hertz on different channels. And you would have to literally go with a USB and download it, look at the data, look at the different trends that were happening in the lines. And then try and interpret it and then make the product better.
That was awesome when you got it to work, but clients would say, that's great, can you fit one of those to every truck that we've got? And that was a process. You'd have to have a $20,000 box on every truck, and you'd have to have a data engineer go to the truck every day, download the data, interpret the data, and then try and figure out what's going on with the system.
Around 2010, when I started the business, we were using Arduinos with sensors. It was always a challenge, and it still is now, to do interesting high-speed data logging and output control with my own limited knowledge of coding and things like that.
We're a mechanical engineering company that likes to use IoT to give machines a voice. But we've now ended up in a place where we're making a generic data logging and control system, which we're calling Cranio. Cranio is the Italian word for skull. It's a rugged box that we put a brain into.
Our idea is really to get a device, or set of devices, to market that are cost effective, require no code, and can be programmed to do interesting data logging and control things in a web or app UI using a visual state machine. That lets smart people who are more applications-focused than engineers build their own solutions. We want to really open up the market to doing IoT for everyone. That's been a really hard challenge for the last 14 years.
At the moment we've got these devices over here. This is our Cranio Pro IO. It's a rugged form-factor data logging and control device, which can basically interface with any industrial sensor and actuator. And we're looking to launch that by the middle of 2025.
IoT is never done in the same way. Every single product seems to have a different code base. You have to have an integrator that can integrate between the different code bases. We were learning the way to do high speed data logging and control. And a lot of the libraries and a lot of the systems we were using were not able to keep up with the data rates that we were particularly trying to do. And everything talked a different language.
My lead firmware engineer, Sam, he's not here tonight, but we had three projects on the go at one point in time. We looked at the future and we said, it's unsustainable for us as a business. And it's also unfair on clients to have to be reinventing the code base every time.
What if we took the learnings that we have in IoT and created a single code base that could then be applied to many products? Have specialists in each of those domains able to work on the firmware, the backend software, the app, the web interface, and then share that code with a larger group of people. Maintenance would become easier, the cost of entry for everyone would become lower, and we could really hone in on making the best IoT devices, letting the companies employing them concentrate on the application of the IoT.
And that's really, really hard.

Ravinder: Today we'll be going through a demonstration of the basic key components of what Cranio can actually do: the Cranio device itself, the application that we use to interface with it, and a backend server that acts as a mediator to talk to the device.
We designed each of the key components with a purpose-built architecture, so that each one can integrate seamlessly with the rest. That's why we put so much emphasis on building things in a generic way, so that we can reuse them. That's the whole idea of a purpose-built architecture.
Ayrton: One addition to that: in the Cranio architecture, one of the key ideas we had was that if you've got a microcontroller talking to another microcontroller, it speaks the same language as a microcontroller talking to a modem module, a modem module talking to the backend, or the backend talking to the web application. They're all speaking the same language.
And that's where we found a lot of our challenges: having to parse things or change the data structure. We've come up with a data structure which presents endpoints. Those endpoints are sensors, actuators, or indicators, and they are cohesive through the whole domain. So that's the core of it, really.
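To make that concrete, here's a minimal Kotlin sketch of what a unified endpoint model might look like. The type and field names are illustrative guesses, not Cranio's actual data structures.

```kotlin
// Illustrative sketch of a unified endpoint model shared by every layer,
// from firmware to web. Names are hypothetical, not Cranio's actual types.
enum class EndpointKind { SENSOR, ACTUATOR, INDICATOR }

data class Endpoint(
    val id: Int,              // stable ID every layer agrees on
    val kind: EndpointKind,   // sensor, actuator, or indicator
    val name: String,         // human-readable label, e.g. "ultrasonic-front"
    val unit: String? = null, // engineering unit, e.g. "mm", "g", "L/min"
)

// A timestamped reading published by a sensor endpoint.
data class Sample(
    val endpointId: Int,
    val timestampMs: Long,
    val value: Double,
)
```

Because every layer speaks in terms of the same endpoint IDs, no layer has to re-parse or restructure another layer's data.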
Ravinder: Yep. And if we talk about the app, it provides an interface where you can visualise the data that's been logged by the device, which it can log to an SD card or to a backend. It depends on what configuration you've enabled, but it'll still use the same protocols.
The application that we built is not just for visualisation. It can also be used to control the configurations that we want to put into the device, so it serves more than one purpose. And the Cranio device itself serves more than one input: it can work with sensors and actuators, and you can use any number of inputs altogether. It's not restricted to one input at a time.
The backend plays a very important role in acting as a mediator when you're talking to the device. You can talk to the device directly through the app; we've also got Bluetooth built in, so you can interact with the device directly through the application by connecting via Bluetooth. But the backend acts as a very good mediator if you want to use the state machine modelling that we can implement on the backend.

So what exactly do I mean by state machine modelling? Think of it as a use case: say Cranio is connected to two different types of inputs. One of the inputs that we've got today is an ultrasonic sensor, which senses how far an object is from it. Once a threshold is reached, you can configure the state machine to transition to the second input. That's how you can transition from one state to another, and all that mediation can be done by the backend.
That's what enables us to do a lot of things in a generic way, where you don't have to actually code any of these things. You can just set up your state machine configuration from the front-end UI, and the backend takes care of implementing the transitions from one state to another and getting you the outputs or results you want.
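As a rough illustration of the idea, a declarative state machine configuration might look like the Kotlin sketch below, reusing the hypothetical Sample type from earlier. This is a guess at the general shape of such a config, not Cranio's actual schema.

```kotlin
// Hypothetical declarative state-machine config, mirroring the
// ultrasonic example above; not Cranio's actual schema.
data class Transition(
    val from: String,       // state the machine must currently be in
    val to: String,         // state to switch to
    val endpointId: Int,    // input the guard condition watches
    val threshold: Double,  // fire when the reading drops below this value
)

data class StateMachineConfig(
    val initialState: String,
    val transitions: List<Transition>,
)

val demoConfig = StateMachineConfig(
    initialState = "watching-distance",
    transitions = listOf(
        // When the ultrasonic sensor (endpoint 1) reads under 100 mm,
        // hand over to the second input.
        Transition("watching-distance", "second-input", endpointId = 1, threshold = 100.0),
    ),
)

// The backend evaluates each incoming Sample against the active state's
// transitions and switches state when a guard condition is met.
fun nextState(current: String, sample: Sample, config: StateMachineConfig): String =
    config.transitions.firstOrNull {
        it.from == current && it.endpointId == sample.endpointId && sample.value < it.threshold
    }?.to ?: current
```

The point of a structure like this is that it's pure data: the front-end UI can build it, the backend can evaluate it, and no firmware or low-level code needs to change.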
Continuing with the purpose-built architecture, I would say the next important piece it enabled is the real-time data flow. We couldn't have done it without a purpose-built architecture defining what role each component plays.
When we really wanted to visualise data at a very high speed, we had to take a lot of things into consideration. What will the latency be? The device can input points at a frequency of 10 or maybe 20 hertz; will the application be able to keep up? And what about data coming over different communication channels: can Bluetooth support that level of transmission, or do we go for socket implementations or TCP protocols?
So all of these were questions we needed to answer to figure out the best way. The key aspect that came out of those learnings was deciding what protocol we would use for the data we're transferring; in other words, what structure we would use.
JSON was one option; Protobuf was another. We wanted to go for Protobuf because it's a lot faster: it works directly with binary, so serialisation and deserialisation are much quicker, whereas JSON is mostly text. So that's what we formalised throughout all the platforms that talk to the device.
And that's where we define the same protocol throughout the platforms and reuse the same implementation. Again, you have to focus on unified communication standards across different platforms. That's where you actually get the benefits of cross-platform compatibility, I would say. Because if you want to use the same code base or business logic across different platforms, you have to build it in a way that gives you a unified communication system where all the platforms can understand what you're trying to decode or process.
So the device can talk to the backend, it can talk to the app directly, it can talk to a web application, and it can all be done with one single defined protocol. You can change it if you want, but it'll be the same for all the platforms, so that data integrity is maintained.
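The size difference between the two encodings is easy to demonstrate. Here's a small sketch using kotlinx.serialization, which ships both a JSON format and an experimental Protobuf format; the library choice and the WireSample message are assumptions for illustration, not necessarily what Cranio uses.

```kotlin
import kotlinx.serialization.ExperimentalSerializationApi
import kotlinx.serialization.Serializable
import kotlinx.serialization.encodeToString
import kotlinx.serialization.json.Json
import kotlinx.serialization.protobuf.ProtoBuf

// A hypothetical wire message; Cranio's real schema is not public.
@Serializable
data class WireSample(val endpointId: Int, val timestampMs: Long, val value: Float)

@OptIn(ExperimentalSerializationApi::class)
fun main() {
    val sample = WireSample(endpointId = 3, timestampMs = 1_739_750_400_000, value = 42.5f)

    // Text encoding: every field name and every digit travels as characters.
    val jsonBytes = Json.encodeToString(sample).toByteArray()

    // Binary encoding: numeric field tags plus varint/fixed-width values, no names.
    val protoBytes = ProtoBuf.encodeToByteArray(WireSample.serializer(), sample)

    println("JSON: ${jsonBytes.size} bytes, Protobuf: ${protoBytes.size} bytes")
}
```

At 20 Hz per channel, shaving each payload from tens of bytes down to a handful compounds quickly, and skipping text parsing saves CPU on both ends of the link.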

Another thing that we did while building the Cranio system was identify how we could reuse things we'd already built: how we could repurpose logic or implementations done for one project that we want to use again.
That's when we thought of building generic libraries that can take care of these interactions with devices or servers, and that's when we decided to build a library like ElementSync. This is something we presented last year as well, but it's a lot more evolved now. It used to handle just the Bluetooth interactions when we first implemented it, and now support for sockets and TCP protocols is on its way.
The main idea of building ElementSync was to make sure we're not defining different ways of talking to the device on different platforms. The library can be ported to an iOS application or a backend: we build it on Kotlin's multiplatform support, so it can be ported to any of those.
That's where we thought, it's good to build generic libraries that can be ported to different platforms. We can reuse a lot of things and it'll save us time. And, of course, make things more affordable.
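The usual Kotlin Multiplatform structure for a library like this is to keep protocol and business logic in common code and push only the transport behind a platform boundary (typically via expect/actual declarations). The sketch below is a generic illustration of that pattern with made-up names, not ElementSync's actual API.

```kotlin
// commonMain: the shared contract. Protocol and business logic are written
// once against this interface, with no platform dependencies.
interface Transport {
    suspend fun send(frame: ByteArray)
    suspend fun receive(): ByteArray
}

// Shared client logic, reusable on Android, iOS, and a JVM backend.
class DeviceClient(private val transport: Transport) {
    suspend fun subscribe(endpointId: Int) {
        // Illustrative opcode; the real wire format is not public.
        transport.send(byteArrayOf(0x01, endpointId.toByte()))
    }
}

// Each platform then supplies its own Transport implementation:
//   androidMain: BLE via android.bluetooth
//   iosMain:     CoreBluetooth via Kotlin/Native interop
//   jvmMain:     a plain TCP socket
```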
The biggest power of the ElementSync library, I would say, is the encoders and decoders we wrote that work with byte streams. You can process raw data directly, play with the bytes, and transform them into whatever data models you want. It's totally independent of the platform it's running on. If it's running in an Android application, it doesn't need any Android-specific components; it just needs an input stream, and it will automatically manage different buffers for different types of messages.
Once it knows all the data points or data packets for the messages received, it will rebuild the entire message. And then transform it into the model that you want. That's the power of encoders and decoders that we've got in Element Sync. That's what enables us to use it on different platforms.
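The general shape of such a decoder is a buffer that accumulates raw bytes from any source and emits complete messages once enough bytes have arrived. Here's a minimal sketch assuming a simple length-prefixed framing; the assumption is mine for illustration, and ElementSync's real wire format is certainly more involved.

```kotlin
// Minimal platform-independent frame decoder. Assumes [length][payload]
// framing where the first byte gives the payload length; this framing is
// an assumption for illustration, not ElementSync's actual format.
class FrameDecoder {
    private val buffer = ArrayDeque<Byte>()

    // Feed raw bytes from any source: BLE notifications, a TCP socket, a file.
    // Returns every complete frame that can be rebuilt so far.
    fun feed(chunk: ByteArray): List<ByteArray> {
        chunk.forEach(buffer::addLast)
        val frames = mutableListOf<ByteArray>()
        while (true) {
            val length = buffer.firstOrNull()?.toInt()?.and(0xFF) ?: break
            if (buffer.size < 1 + length) break   // incomplete: wait for more bytes
            buffer.removeFirst()                  // consume the length byte
            frames += ByteArray(length) { buffer.removeFirst() }
        }
        return frames
    }
}
```

Because nothing here touches an Android or iOS API, the same class can run unchanged on every Kotlin Multiplatform target.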
The problem with visualising high-speed data is that you have to strike the right balance between the stream of data you're getting and the complexity of visualising it. You cannot update your UI components every time a new data point is received; you cannot plot a chart that way, because there will be so many UI refreshes and re-renders that it won't be smooth enough for anyone to actually make sense of it. We have to strike the right balance between the frequency at which we receive data and how frequently we update the UI, to get a smooth transition and visualisation of the data points.
That's where we made good use of Jetpack Compose on Android and, of course, SwiftUI on iOS. We rely heavily on UI states and the recomposition of components when data points are added to the line chart.
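One common way to express that balance in Kotlin is to decouple the ingest rate from the render rate with a sampled flow feeding a single UI state. This is a standard coroutines pattern offered as a sketch, not Cranio's actual charting code, and it reuses the hypothetical Sample type from earlier.

```kotlin
import kotlinx.coroutines.CoroutineScope
import kotlinx.coroutines.FlowPreview
import kotlinx.coroutines.flow.Flow
import kotlinx.coroutines.flow.SharingStarted
import kotlinx.coroutines.flow.StateFlow
import kotlinx.coroutines.flow.sample
import kotlinx.coroutines.flow.scan
import kotlinx.coroutines.flow.stateIn

// Samples may arrive at 20 Hz or more, but the chart re-composes at
// most ~15 times a second, however fast the data comes in.
@OptIn(FlowPreview::class)
fun chartState(
    sensorSamples: Flow<Sample>,
    scope: CoroutineScope,
    windowSize: Int = 500,   // how many points the chart keeps on screen
): StateFlow<List<Sample>> =
    sensorSamples
        .scan(emptyList<Sample>()) { window, s -> (window + s).takeLast(windowSize) }
        .sample(66)          // emit to the UI at most every 66 ms (~15 fps)
        .stateIn(scope, SharingStarted.Eagerly, emptyList())
```

A Compose chart then collects this one StateFlow, so a burst of incoming points costs at most one recomposition per sampling tick rather than one per point.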
Of course, we couldn't go for any predefined or prebuilt library that's available, because most of those libraries wouldn't cater to our use cases and the capabilities we've got with Cranio. So we had to build our own UI components and design them in a way that lets us update specific aspects of them, to provide a better rendering experience.
There are other reasons as well to opt for an in-house data visualisation platform of our own. When we tried the visualisation tools available off the shelf, they were not as performant as what we actually want, and what the device is actually capable of.
Our device is capable of emitting data points at a speed of 20 hertz, or it can be faster in future. This is the device I'm talking about which has 32 kilobytes of RAM, so of course it's going to be a lot faster.
So we wanted to build our own platform that can cater to those kinds of requests. We thought of using some of the off-the-shelf platforms like Grafana, but it couldn't process more than one data point every few seconds, and the update was so laggy that you would actually see something happening maybe 5 to 10 seconds after it was even logged by the device.
Ayrton: We definitely tried! We tried using everything open source and just using everything else that was out there, because I didn't want to have to build it ourselves. But we've ended up having to go and build a whole lot of this stuff ourselves because of the performance limitations. And our mantra at the moment is, if we can conquer high speed data, then the slow stuff is easy.
A lot of our customers want to do reasonably high-speed things, and we're attacking that with our Pro system first, going to market more slowly, I guess, and then rolling out to slower stuff within the same ecosystem. But it's the hard way and the long way, and not advised if you want to do it yourself.
Ravinder: Yeah, and there are other limitations as well. It's not as configurable as we want it to be. We want to add sections within the graphs where we can highlight or do a historical lookup, and maybe zoom into the very detailed data interactions within certain milliseconds. None of these capabilities or features were provided by any of the off-the-shelf tools, so we couldn't leverage all the things that Cranio can actually do.
The limitations of the available online tools weren't in favour of using any of them. So that's why we thought of building one ourselves.
There are other logging platforms available, I think, which Ayrton also mentioned using for tuning the mechanics of his cars, right?
Ayrton: There are heaps of off the shelf things.
Ravinder: I don't think any of those work with live streaming.
Ayrton: No, a lot of them are proprietary or you must get licensed software. Nothing's easily web based. We want to be able to have a person buy a widget, plug a sensor in really easily, push a button on a web interface that deploys them a server, or they become part of a free server, and they're up and running.
There are solutions out there that you can do this sort of stuff, but you still have to do an element of coding, and you still have to have quite an understanding of electronics. So that's the bridge that we're trying to connect.
Ravinder: That's what sets this tool apart from what's already in the market, and that's what makes it even more affordable than any of the software Ayrton just talked about. The main USP would be being able to do this through a web interface or an application interface with very high-speed transmission of data, which I think no other device is doing right now. They're all doing a historical lookup, recording the data points within the device or logging them somewhere. But no one visualises it as a live stream, as of now.
Building Cranio from a developer's perspective, what would be my take in terms of what challenges we actually saw and how we thought of overcoming them? First, I would say, building generic libraries or reusable components that run across different platforms is key to building a core ecosystem like Cranio, where you have one device interacting with many platforms.
It's important to build in a generic way and reuse most of those things so that you optimise on how things are done. You ensure the data integrity. You ensure the business logic runs the same across all different platforms. That's one important factor for me.
I think the other one would be to ensure that the entire flow is testable, and we get continuous feedback loops in terms of what we are developing. We have learned a lot from our previous iterations, and that's how we are making progress to this application and, of course, the Element Sync library that we just talked about.
Having that continuous feedback loop is another important aspect from a developer's point of view to make sure you're on the right path and you're making the right choices in terms of what technology you're using.
Audience Member: This Kotlin multiplatform, can you run that on a small IoT device like an Arduino or similar? Or is this just more in the backend layer?
Ayrton: This is just in the app layer at the moment. On the backend we're using a lot of Python, and we use a lot of direct C code: the same C code that's used on the microcontrollers gets rolled out as modules in the backend. The Kotlin piece is basically for the Android app, but we are thinking about using it in other places as well, to be able to reuse it. But yes, this could eventually be put onto any IoT device that has enough RAM and resources to run it.
Ravinder: This is portable. It can be put on a backend, or used for iOS devices as well. There's still an overlap between how we built the firmware and how it interacts with the backend; there's still an overlap in terms of code portability between those two platforms.
Audience Member: So also on the portability of the graphing code that you've done, like where can that run and where does it run at the moment?
Ayrton: We've done a lot of charting for a client in the US, in the cycling industry, with reasonably high-speed data, you know, a couple of data points a second. A lot of that was done in D3 back in the day, and we've maintained a lot of that, though things have moved on. On the web we're using some generic off-the-shelf things, but we're having to extend those libraries. This particular charting library is specifically for Android: we've created our own charting library for Android to be able to get the speed we need.
Because these devices, even those little tiny microcontroller devices, are really capable of about 100 hertz, and we can push 100 hertz pretty easily through cellular and Wi-Fi, on multiple channels. We found that really hard to chart through history, mainly because when we do it, we packetise it into chunks of about 10 data points at a time, and those data points don't seem to come in order a lot of the time. It's quite an overhead.
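Out-of-order chunks like this are typically handled by carrying a timestamp or sequence number per packet and merging into an ordered window before rendering. A minimal sketch of the timestamp approach, reusing the hypothetical Sample type from earlier (the real packet format isn't described in the talk):

```kotlin
// Sketch: each network packet carries a chunk of ~10 samples, and chunks
// can arrive out of order. For a historical chart, one simple approach is
// to merge each chunk into a timestamp-sorted window before rendering.
fun mergeChunk(window: List<Sample>, chunk: List<Sample>): List<Sample> =
    (window + chunk)
        .distinctBy { it.endpointId to it.timestampMs } // drop re-sent samples
        .sortedBy { it.timestampMs }                    // restore time order
```

Re-sorting the whole window per chunk is the naive version; a production implementation would insert into an already-sorted structure, but the idea is the same.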
Audience Member: So just adding on that, how do you plan to achieve your goal of any old person firing up a web browser and seeing their device at 20 Hertz? Real time in a web browser. Is that something you've achieved?
Ayrton: Yeah, that is something we've achieved already, not with this different... Yeah, but it's an extension on it, because we're having to improve it, I guess.
Ravinder: Just to add to that: the charting library we're using for Android is different from what we've got for the web application. We are going to build a library that's compatible with different platforms using cross-platform tools. But the reason we went this way for now is that we wanted a common code base just for the business logic, to speed things up and to make sure we have good performance when it comes to using the native components of a device. So that's why we built it natively.
The library could be built on a hybrid platform where we can reuse it for web or iOS, but yeah, it's separate right now.
Audience Member: This might be more of a question for the business side, but I'm just curious, what are your most popular sensors? What are people using these for? What are you measuring on those F3 cars?
Ayrton: Oh, yeah. Race car stuff's way harder than anything that we see in mining and things like that. A lot of the work we do is smart sensors where we have to make a sensor that's not an industrial sensor off the shelf.
But actually, a lot of the industrial things that we do are just on and off. And that's still reasonably involved: you still need an electronics guy, a coding guy, a front-end guy, a back-end guy, whatever, to do that. So that's just one aspect of it.
Analog signals, so if you've got industrial analog sensors, it could be distance sensing, it could be an angle, things like that. Those things can be reasonably high speed. But the really high speed stuff would be strain gauging. So you've got a strain gauge and you've got it on a particular part, and you'll be seeing stresses changing in the hundreds or thousands of hertz. Those things are really only used in sort of engineering applications when you're trying to improve something.
General control things: think about a dewatering plant on a mine site. You want to know that the dewatering pump is on or off. You want to know what the flow rate of the water is. You might want to know about that. It's pretty dumb; you might not want to know it every five seconds. A lot of the things out there that haven't been monitored yet are pretty dumb, but it's still hard to do, and that's kind of the reason why we're doing it. But we're trying to do it in a way that lets us do the high-speed stuff too.
Audience Member: You've got a single platform that you can deploy to different clients. Does this also mean that if unrelated industries or companies were using Cranio, they could be integrated simply? Are the data storage and the messaging defined well enough that you'll avoid having to redo the integration work later?
Ayrton: We hope so. Once we conclude that the way we're doing it is actually good, which we think it is, we want to make it pretty open. The devices are relatively dumb at the moment: they just stream lots of data. They are slaves, effectively; you subscribe to their endpoints, and they send you data at the rate you ask of them. The eventual end state is that you have a controller layer that goes above: one of the devices becomes a controller, and it subscribes to the different endpoints within its local network. So that could be a device, or a set of devices, that are isolated from the internet. And then you could intermittently come along and upload a state machine to the controller, and the devices would work autonomously.
In that case, to be able to integrate with it: hopefully one day you could do it via Bluetooth, you could do it via sockets over the internet, and hopefully you could do it directly to your server. But at the moment, because we're trying to create the ecosystem, we're kind of locked in, like everyone else unfortunately, to having our own backend server and our own protocol that's kind of closed.
So yes, that's definitely the case. We want to be really good at this, and then we want to open it up. IoT also comes with the overhead of the data itself. Even though we're using Protobuf through the whole stack, when we use JSON to make things easier for developers, your data blows out by ten times. That comes with computing overhead, the cost involved, that sort of stuff.
They're the challenges we have to get right first, before we can open it up to everyone else. But that's the eventual goal: an ecosystem that you can just rely on to do stuff, plug into with a data stream, and then go and do interesting things and actually apply your application, without worrying about the IoT.
About 10 years ago, I was talking to BHP, Rio Tinto, Woodside, all of whom had internal innovation teams trying to use Raspberry Pis to create IoT devices. Those people were really, really good at doing the applications, but they had to become IoT engineers. And a lot of them failed, or fell over, or didn't get it done.
We can do that for you, and then everyone else can use it to apply their large language models and do interesting things with data.
Audience Member: Do you see that as part of the business goal as well, offering these different components?
Ayrton: Yeah, it definitely is. But it's probably at least a couple of years away for us.
Audience Member: Since this is streaming data, have you faced any issue with the data duplication? Or does it even matter?
Ravinder: No, we haven't faced any issue related to that yet. The main reason is that we've designed our own protocol layer that's similar to how TCP works. And for Bluetooth interactions, when the device is talking directly to the app, it's pretty reliable in itself; Bluetooth interactions almost never drop data packets. I said almost: you might lose some data packets, but you'll never see duplications.
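A TCP-like protocol layer typically guarantees this with per-channel sequence numbers: anything at or below the last delivered sequence is treated as a retransmission and dropped, while gaps surface as loss rather than duplicates. A hedged sketch of that guard, with hypothetical names, illustrating the general technique rather than Cranio's actual protocol layer:

```kotlin
// Sketch of a TCP-style dedup guard. Assumes each packet carries a
// per-channel sequence number; an illustration of the general technique,
// not Cranio's actual protocol implementation.
class DedupFilter {
    private val lastSeq = mutableMapOf<Int, Long>() // channelId -> highest seq delivered

    fun accept(channelId: Int, seq: Long): Boolean {
        val last = lastSeq[channelId] ?: -1L
        if (seq <= last) return false               // duplicate or stale: drop silently
        lastSeq[channelId] = seq                    // a jump > 1 means loss, not duplication
        return true
    }
}
```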
The next Perth Mobile App Developers Meetup is happening on Wednesday, 26th February 2025, at Adapptor. If you’re into networking and love a good discussion, check out the Meetup page here!