Distributed Ledger Technology for Data Sharing and Auditable Records of Cybersecurity Testing and Monitoring in IoT Ecosystems

By Bernd-Ludwig Wenning*  

 

Today’s increasingly connected world sees the introduction of Internet of Things (IoT) systems in a huge variety of areas. From industrial settings such as smart manufacturing facilities to residential settings where smart home applications are growing in popularity, IoT systems and devices are becoming ubiquitous. However, this comes with cybersecurity challenges: these systems often comprise resource-limited devices with known or unknown vulnerabilities, making them an attractive target for attackers. Hence, cybersecurity testing and monitoring are required to make these systems robust and resilient against attacks.

The TELEMETRY project aims to develop a suite of tools for cybersecurity testing and monitoring in use cases covering the areas of aerospace, smart manufacturing and telecommunications. This suite includes tools for monitoring of network traffic, sensor data and device behaviour, tools for offline testing of software and firmware, and tools that process the test and monitoring results to identify risks and assess trustworthiness within the IoT system.

To facilitate this, a common data infrastructure is needed within the IoT system: the testing and monitoring tools must share their findings and make them available to those that perform the risk analysis and trust assessment. Further, the findings should be recorded in a reliable and tamper-proof way so that they are available for cybersecurity audits as well as for incident analysis.

 

Distributed Ledger Technology

This is where Distributed Ledger Technology (DLT) comes into play: DLT is a technology for distributed data storage. A distributed ledger is essentially a database that is distributed across multiple computers, so-called nodes. Blockchain is one of the best-known types of DLT.

DLT provides key features that enable data sharing among the TELEMETRY tools and auditable record-keeping:

  • Decentralised: There is no single database in a single location, but a ledger that is shared among several nodes, avoiding a single point of failure and reducing the risk of manipulation as multiple nodes maintain synchronised copies of the ledger.
  • Immutable and append-only: Any data committed to the ledger cannot be altered or deleted. New data does not overwrite previous data but is appended to the ledger. This is a critical feature for auditability, as it ensures that whatever is recorded stays on record and cannot be tampered with.
  • Consensus-based: As multiple nodes maintain copies of the ledger, any addition to the ledger must be based on a consensus among the participating nodes, preventing single rogue actors from inserting false information.
  • Transparent: Every participating node can see all transactions, when they occur and by whom they are made. This ensures that a reliable history is being kept for auditing purposes.

API

Does every tool in TELEMETRY have to become a DLT node? No. While DLT provides the key features mentioned above, having all testing and monitoring tools interact directly with the ledger adds significant complexity and overhead to the tools.

Therefore, the TELEMETRY approach is to have gateways that operate between the tools and the ledger. Such a gateway can serve multiple tools and offers a RESTful API that abstracts away the underlying DLT, providing the tools with an interface that is independent of the actual DLT implementation used underneath. Anything sent via this interface is formatted in JSON.
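
As a rough illustration, a monitoring tool could submit a finding to such a gateway with a single HTTP POST. The following Python sketch is hypothetical: the endpoint path, field names and bearer token are assumptions used for illustration, not the actual gateway API.

    import requests

    GATEWAY_URL = "https://gateway.example.org/api/v1/transactions"  # hypothetical endpoint

    # A JSON-formatted finding, e.g. from a network traffic monitoring tool.
    finding = {
        "contextId": "network-anomaly-report",  # hypothetical context identifier
        "data": {
            "deviceId": "plc-07",
            "anomalyType": "unexpected-outbound-traffic",
            "severity": "high",
        },
        "metadata": {
            "tool": "traffic-monitor",
            "timestamp": "2025-05-01T12:00:00Z",
        },
    }

    # The gateway authenticates the caller and forwards the transaction to the ledger.
    response = requests.post(
        GATEWAY_URL,
        json=finding,
        headers={"Authorization": "Bearer <token>"},  # placeholder credential
        timeout=10,
    )
    response.raise_for_status()
    print(response.json())  # e.g. the transaction id assigned by the ledger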

The gateway also facilitates user management and access control to ensure that only authorised users have access to the ledger, which is essential as cybersecurity testing and monitoring results may contain sensitive information about the system’s current security posture. Further, it implements a context-driven data model that ensures all data transactions adhere to predefined formats.

 

Context-driven data model

What is a context-driven data model? A context is a JSON format which defines the structure and meaning of a use-case-specific data transaction, similar to a database schema in a conventional database. This includes structures for data as well as metadata. In addition, it defines permissions related to this context, i.e. which users can write or read data transactions for this context.
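
To make this concrete, a context for network anomaly reports might look roughly like the following sketch. It is expressed here as a Python dictionary mirroring the JSON document a gateway could store; the field and permission names are illustrative assumptions, not the actual TELEMETRY context format.

    # Hypothetical context definition (illustrative field names only).
    network_anomaly_context = {
        "contextId": "network-anomaly-report",
        "description": "Anomalies detected by network traffic monitoring tools",
        "dataSchema": {            # expected structure of the data part
            "deviceId": "string",
            "anomalyType": "string",
            "severity": "string",
        },
        "metadataSchema": {        # expected structure of the metadata part
            "tool": "string",
            "timestamp": "ISO 8601 string",
        },
        "permissions": {
            "write": ["traffic-monitor"],  # tools allowed to submit transactions
            "read": ["risk-assessment", "trust-assessment", "dashboard"],
        },
    }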

Any data transaction has to include a context id, indicating the context it relates to. It is then validated against that context, ensuring that the user is permitted to submit this data transaction and that the transaction complies with the structure that is given in the context.
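
A gateway-side validation step could therefore be sketched roughly as below, assuming the hypothetical context structure shown above; the actual implementation may of course differ.

    def validate_transaction(transaction: dict, contexts: dict, user: str) -> None:
        """Minimal validation sketch: permission check plus structural check."""
        context = contexts.get(transaction.get("contextId"))
        if context is None:
            raise ValueError("Unknown context id")
        if user not in context["permissions"]["write"]:
            raise PermissionError(f"User {user!r} may not write to this context")
        # Every field defined in the context's data schema must be present in the transaction.
        missing = set(context["dataSchema"]) - set(transaction.get("data", {}))
        if missing:
            raise ValueError(f"Transaction is missing required fields: {sorted(missing)}")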

In TELEMETRY, several contexts are defined to accommodate the different data transaction contents from various tools, e.g., a context for network anomaly reports or a context for vulnerabilities that have been identified through offline software testing. Each tool that sends data formats its transactions in compliance with the context intended for that type of transaction. This also allows the consumers of that data, such as the risk or trust assessment tools, to query by context and receive all transactions that have been recorded for that specific context. This query functionality also enables operator dashboards to fetch information for an overview of the system’s current health. Last but not least, it provides the interface to retrieve all transactions for auditing purposes.
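
For example, a risk assessment tool or an operator dashboard could retrieve everything recorded for one context with a query along the following lines; again, the endpoint and parameter names are assumptions for illustration rather than the actual TELEMETRY API.

    import requests

    GATEWAY_URL = "https://gateway.example.org/api/v1/transactions"  # hypothetical endpoint

    # Fetch all transactions recorded for the (hypothetical) network anomaly context.
    response = requests.get(
        GATEWAY_URL,
        params={"contextId": "network-anomaly-report"},
        headers={"Authorization": "Bearer <token>"},  # placeholder credential
        timeout=10,
    )
    response.raise_for_status()
    for transaction in response.json():
        print(transaction["metadata"]["timestamp"], transaction["data"]["anomalyType"])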

 

Conclusion

DLT is an enabling technology that comes with key features for reliable sharing and auditable recording of cybersecurity testing and monitoring results. It enables the TELEMETRY project to collect the outputs of various tools and record them on a trustworthy ledger that satisfies auditability requirements. It also allows this data to be shared among the tools for further analysis. Hence, DLT is a key element in the overall TELEMETRY architecture, facilitating interoperation within the suite of TELEMETRY tools.

 

* Bernd-Ludwig Wenning is a Research Fellow at Munster Technological University (MTU) in Cork, Ireland. He holds Dipl.-Ing. and Dr.-Ing. degrees in electrical engineering and information technology from the University of Bremen, Germany. 

In 2012, he joined the Nimbus Centre at MTU. Since then, he has worked on several national and EU funded projects. His research interests include mobile and wireless networks and protocols, IoT and cyber physical systems. Throughout his research career, he has authored or co-authored more than 50 publications.

European Cyber Security Community Initiative (ECSCI)

The European Cyber Security Community Initiative (ECSCI) brings together EU-funded cybersecurity research and innovation projects to foster cross-sector collaboration and knowledge exchange. Its aim is to align technical and policy efforts across key areas such as AI, IoT, 5G, and cloud security. ECSCI organizes joint dissemination activities, public workshops, and strategic dialogue to amplify the impact of individual projects and build a more integrated European cybersecurity landscape.

Supported by the European Commission, ECSCI contributes to shaping a shared vision for cybersecurity in Europe by reinforcing connections between research, industry, and public stakeholders.

European Cluster for Cybersecurity Certification

The European Cluster for Cybersecurity Certification is a collaborative initiative aimed at supporting the development and adoption of a unified cybersecurity certification framework across the European Union. Bringing together key stakeholders from industry, research, and national authorities, the cluster facilitates coordination, knowledge exchange, and alignment with the EU Cybersecurity Act.

Its mission is to contribute to a harmonized approach to certification that fosters trust, transparency, and cross-border acceptance of cybersecurity solutions. The cluster also works to build a strong stakeholder community that can inform and support the work of the European Union Agency for Cybersecurity (ENISA) and the future European cybersecurity certification schemes.

CertifAI

CertifAI is an EU-funded project aimed at enabling organizations to achieve and maintain compliance with key cybersecurity standards and regulations, such as IEC 62443 and the EU Cyber Resilience Act (CRA), across the entire product development lifecycle. Rather than treating compliance as a one-time activity or post-development task, CertifAI integrates compliance checks and evidence collection as continuous, embedded practices within daily development and operational workflows.

The CertifAI framework provides structured, practical guidance for planning, executing, and monitoring compliance assessments. It supports organizations in conducting gap analyses, building compliance roadmaps, collecting evidence, and preparing for formal certification. The methodology leverages best practices from established cybersecurity frameworks and aligns with Agile and DevSecOps principles, enabling continuous and iterative compliance checks as products evolve.

A central feature of CertifAI is the use of automation and AI-driven tools—such as Retrieval-Augmented Generation (RAG) systems and Explainable AI—to support the interpretation of complex requirements, detect non-conformities, and generate Security Assurance Cases (SAC) with traceable evidence. The approach is organized into five main phases: preparation and planning, evidence collection and mapping, assessment execution, reporting, and ongoing compliance monitoring.

CertifAI’s methodology is designed to be rigorous yet adaptable, offering organizations a repeatable process to proactively identify, address, and document compliance gaps. This supports organizations not only in meeting certification requirements, but also in embedding a culture of security and compliance into daily practice.

Ultimately, CertifAI’s goal is to make compliance and security assurance continuous, transparent, and integrated, helping organizations efficiently prepare for certification while strengthening their overall cybersecurity posture.

DOSS

The Horizon Europe DOSS – Design and Operation of Secure Supply Chain – project aims to improve the security and reliability of IoT operations by introducing an integrated monitoring and validation framework to IoT Supply Chains.

DOSS elaborates a “Supply Trust Chain” by integrating key stages of the IoT supply chain into a digital communication loop to facilitate security-related information exchange. The technology includes security verification of all hardware and software components of the modelled architecture. A new “Device Security Passport” contains security-relevant information for hardware devices and their components. Third-party software, open-source applications and in-house developments are tested and assessed.

The centrepiece of the proposed solution is a flexibly configurable Digital Cybersecurity Twin, able to simulate diverse IoT architectures. It employs AI for modelling complex attack scenarios, discovering attack surfaces and elaborating the necessary protective measures. The digital twin provides input for a configurable, automated Architecture Security Validator module, which assesses and provides pre-certification for the modelled IoT architecture with respect to relevant, selectable security standards and KPIs. To also ensure adequate coverage of the back end of the supply chain, the operation of the architecture is protected by secure device onboarding, diverse security and monitoring technologies, and a feedback loop that shares security-relevant information with the digital twin and the actors of the supply chain.

The procedures and technology will be validated in three IoT domains: automotive, energy and smart home.

The 12-member DOSS consortium comprises all stakeholders of the IoT ecosystem: service operators, OEMs, technology providers, developers, security experts, and research and academic partners.

EMERALD: Evidence Management for Continuous Compliance as a Service in the Cloud

The EMERALD project aims to revolutionize the certification of cloud-based services in Europe by addressing key challenges such as market fragmentation, lack of cloud-specific certifications, and the increasing complexity introduced by AI technologies. At the heart of EMERALD lies the concept of Compliance-as-a-Service (CaaS) — an agile and scalable approach aimed at enabling continuous certification processes in alignment with harmonized European cybersecurity schemes, such as the EU Cybersecurity Certification Scheme for Cloud Services (EUCS).

By focusing on evidence management and leveraging results from the H2020 MEDINA project, EMERALD will build on existing technological readiness (starting at TRL 5) and push forward to TRL 7. The project’s core innovation is the development of tools that enable lean re-certification, helping service providers, customers, and auditors to maintain compliance across dynamic and heterogeneous environments, including Cloud, Edge, and IoT infrastructures.

EMERALD directly addresses the critical gap in achieving the ‘high’ assurance level of EUCS by offering a technical pathway based on automation, traceability, and interoperability. This is especially relevant in light of the emerging need for continuous and AI-integrated certification processes, as AI becomes increasingly embedded in cloud services.

The project also fosters strategic alignment with European initiatives on digital sovereignty, supporting transparency and trust in digital services. By doing so, EMERALD promotes the adoption of secure cloud services across both large enterprises and SMEs, ensuring that security certification becomes a practical enabler rather than a barrier.

Ultimately, EMERALD’s vision is to provide a robust, flexible, and forward-looking certification ecosystem, paving the way for more resilient, trustworthy, and user-centric digital infrastructures in Europe.

SEC4AI4SEC

Sec4AI4Sec is a project funded by the European Union’s Horizon Europe research and innovation programme under grant agreement No 101120393.

This project aims to create a range of cutting-edge technologies, open-source tools, and new methodologies for designing and certifying secure AI-enhanced systems and AI-enhanced systems for security. Additionally, it will provide reference benchmarks that can be utilized to standardize the evaluation of research outcomes within the secure software research community.

The project is divided into two main strands, each with its own name.

  • AI4Sec – stands for using artificial intelligence for security: democratizing security expertise with AI-enhanced systems that reduce development costs and improve software quality. This part of the project uses AI to improve secure coding and testing.

  • Sec4AI – stands for security for artificial intelligence: AI-enhanced systems carry risks of their own and are vulnerable to new security threats unique to AI-based software, especially when fairness and explainability are essential.

The project considers the economic and technological impacts of combining AI and security.

The economic dimension of the project focuses on leveraging AI to drive growth, productivity, and competitiveness across industries. It includes developing new business models, identifying new market opportunities, and driving innovation across various sectors.