Leveraging AI and ML Techniques to Detect Misuse of IoT Devices in Cybersecurity

By Oscar Garcia Perales

 

The rapid adoption of Internet of Things (IoT) devices has transformed industries, homes, and cities. From smart home devices and connected telecom hubs to industrial and aviation sensors, IoT technology is integral to modern living. However, this proliferation comes with significant cybersecurity challenges. IoT devices often have limited security measures, making them attractive targets for attackers. Misuse can manifest as data theft, botnet attacks, anomalous behaviour, unauthorized access, or system disruption. Artificial Intelligence (AI) and Machine Learning (ML) techniques are increasingly pivotal in identifying and mitigating these threats effectively.

Challenges in Securing IoT Devices

Before delving into AI and ML solutions, it is essential to understand the unique challenges posed by IoT security:

  1. Diverse Ecosystem: IoT devices vary widely in their hardware, software, and communication protocols, complicating standardization and security measures.
  2. Resource Constraints: Limited computational power, memory, and battery life in IoT devices make implementing robust security features difficult.
  3. Large Attack Surface: The sheer number of interconnected devices creates numerous entry points for attackers.
  4. Dynamic Network Topology: IoT environments are highly dynamic, with devices constantly joining or leaving the network.

AI and ML Techniques for IoT Security

AI and ML have revolutionized cybersecurity by enabling automated, adaptive, and intelligent defence mechanisms. Below are some key techniques and algorithms used for detecting misuse of IoT devices:

  1. Anomaly Detection

Anomaly detection techniques identify deviations from normal behaviour, signalling potential misuse or attacks. These approaches include:

  • Supervised Learning: Algorithms like Support Vector Machines (SVM) and Random Forests use labelled data to classify normal and abnormal behaviours.
  • Unsupervised Learning: Techniques such as k-means clustering and Autoencoders analyze data patterns without predefined labels, identifying outliers as potential threats.
  • Time-Series Analysis: Recurrent Neural Networks (RNNs) and Long Short-Term Memory (LSTM) models are effective for detecting irregularities in IoT device activity over time.
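As a minimal illustration of the unsupervised idea, the sketch below learns a statistical baseline from historic sensor readings and flags values that deviate sharply from it. The function names and the packets-per-minute feature are hypothetical, chosen only to make the example concrete; a real deployment would use richer features and a trained model.

```python
import statistics

def fit_baseline(samples):
    """Learn the normal operating range (mean, std-dev) from historic readings."""
    return statistics.mean(samples), statistics.pstdev(samples)

def is_anomalous(value, mean, std, threshold=3.0):
    """Flag a reading more than `threshold` standard deviations from the baseline."""
    if std == 0:
        return value != mean
    return abs(value - mean) / std > threshold

# Hypothetical baseline: packets per minute from a sensor behaving normally
baseline = [100, 98, 103, 97, 101, 99, 102, 100]
mean, std = fit_baseline(baseline)

print(is_anomalous(101, mean, std))  # a normal reading
print(is_anomalous(450, mean, std))  # a sudden traffic spike
```

The same thresholding principle underlies more sophisticated detectors; autoencoders, for instance, replace the mean/std pair with a learned reconstruction error.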
  2. Behavioural Profiling

Behavioural profiling involves creating baselines of typical device behaviour. By comparing real-time data to these profiles, ML models can identify unusual activities indicative of misuse. For example, a smart telecom hub sending large volumes of data to unknown servers could signal a botnet attack.
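The hub example above can be sketched in a few lines: build a per-device baseline of contacted servers during a training window, then flag any destination outside that baseline. The class and device names are hypothetical; a production profiler would also model traffic volume and timing.

```python
from collections import defaultdict

class DeviceProfiler:
    """Builds a per-device baseline of contacted servers, then flags deviations."""

    def __init__(self):
        self.known_destinations = defaultdict(set)

    def observe(self, device_id, destination):
        """Record a destination seen during the baseline (training) period."""
        self.known_destinations[device_id].add(destination)

    def check(self, device_id, destination):
        """Return True if the destination deviates from the device's baseline."""
        return destination not in self.known_destinations[device_id]

profiler = DeviceProfiler()
for dest in ["update.vendor.example", "ntp.pool.example"]:
    profiler.observe("hub-01", dest)

print(profiler.check("hub-01", "ntp.pool.example"))  # False: matches the baseline
print(profiler.check("hub-01", "198.51.100.23"))     # True: unknown server
```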

  3. Intrusion Detection Systems (IDS)

AI-powered IDS can monitor IoT networks for malicious activities. Key approaches include:

  • Signature-Based Detection: Identifies known attack patterns using techniques like Decision Trees and Naïve Bayes.
  • Heuristic-Based Detection: Uses probabilistic models and fuzzy logic to detect previously unseen attack vectors.
  • Deep Learning: Convolutional Neural Networks (CNNs) and Graph Neural Networks (GNNs) can analyse complex network traffic patterns to uncover sophisticated threats.
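The signature-based approach is the simplest of the three and can be sketched as pattern matching against a database of known attack indicators. The signatures below are illustrative (the `root:xc3511` string echoes a well-known Mirai default credential); a real IDS would use far larger rule sets and the statistical classifiers mentioned above.

```python
# Toy signature-based IDS: flag payloads matching known attack patterns.
SIGNATURES = {
    "mirai-default-creds": b"root:xc3511",
    "path-traversal": b"../../",
}

def scan(payload):
    """Return the names of all known-attack signatures found in a raw payload."""
    return [name for name, pattern in SIGNATURES.items() if pattern in payload]

print(scan(b"GET /../../etc/passwd HTTP/1.1"))  # ['path-traversal']
print(scan(b"GET /index.html HTTP/1.1"))        # []
```

Signature matching catches only known attacks, which is exactly why the heuristic and deep-learning approaches above are needed as a complement.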
  4. Federated Learning

Federated learning allows IoT devices to collaboratively train ML models without sharing raw data, preserving privacy and reducing latency. This technique is particularly useful for distributed environments with limited connectivity or bandwidth.
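The core of the best-known federated scheme, federated averaging (FedAvg), fits in a few lines: each device updates the model on its own private data, and the server averages only the resulting weights. This is a deliberately minimal sketch with a two-parameter linear model and made-up gradients, not a full training loop.

```python
def local_update(weights, gradient, lr=0.1):
    """One local gradient step on a device (gradient computed on private data)."""
    return [w - lr * g for w, g in zip(weights, gradient)]

def federated_average(client_weights):
    """Server-side FedAvg: average the model weights, never the raw data."""
    n = len(client_weights)
    return [sum(ws) / n for ws in zip(*client_weights)]

global_model = [0.0, 0.0]
# Each device trains locally on data that never leaves it...
clients = [
    local_update(global_model, [0.5, -0.2]),
    local_update(global_model, [0.3, -0.4]),
    local_update(global_model, [0.4, -0.3]),
]
# ...and the server aggregates only the resulting weights.
global_model = federated_average(clients)
print(global_model)
```

Because only weight vectors cross the network, the raw sensor data stays on the device, which is what preserves privacy and reduces bandwidth.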

  5. Reinforcement Learning

Reinforcement Learning (RL) can be applied to dynamically adapt IoT security policies. RL agents learn optimal strategies to mitigate threats by interacting with the environment and receiving feedback in the form of rewards or penalties.
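The reward-driven loop can be illustrated with a tiny tabular Q-learning agent that learns a traffic-handling policy. The states, actions, and reward values here are entirely hypothetical, and the agent explores at random purely for the sake of the sketch; a real system would observe states from live telemetry and balance exploration against exploitation.

```python
import random

random.seed(0)

STATES = ["normal", "suspicious"]
ACTIONS = ["allow", "throttle", "block"]

# Hypothetical reward model: blocking suspicious traffic is rewarded,
# blocking normal traffic hurts availability and is penalised.
REWARD = {
    ("normal", "allow"): 1.0, ("normal", "throttle"): -0.5, ("normal", "block"): -1.0,
    ("suspicious", "allow"): -1.0, ("suspicious", "throttle"): 0.5, ("suspicious", "block"): 1.0,
}

q = {(s, a): 0.0 for s in STATES for a in ACTIONS}
alpha = 0.5  # learning rate

for _ in range(2000):
    s = random.choice(STATES)
    a = random.choice(ACTIONS)  # pure exploration, for the sketch only
    # One-step (bandit-style) Q update from the environment's feedback
    q[(s, a)] += alpha * (REWARD[(s, a)] - q[(s, a)])

policy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in STATES}
print(policy)
```

After enough feedback the learned policy allows normal traffic and blocks suspicious traffic, i.e. the agent has inferred the security policy from rewards rather than hand-written rules.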

Practical Applications in IoT Security

  1. Botnet Detection: Identifying and mitigating large-scale attacks like Mirai botnets that exploit vulnerable IoT devices.
  2. Access Control: Preventing unauthorized access by analysing login patterns and device fingerprints.
  3. Data Integrity Monitoring: Detecting data tampering or exfiltration by analysing device communication logs.
  4. Firmware Validation: Ensuring the authenticity of IoT device firmware updates using AI-powered validation techniques.

Challenges in Implementing AI and ML for IoT Security

Despite their potential, deploying AI and ML solutions for IoT security is not without challenges:

  • Data Scarcity: High-quality, labelled datasets for training models are often unavailable.
  • Resource Limitations: Many IoT devices lack the computational power to run sophisticated ML algorithms.
  • Evolving Threat Landscape: Attackers continuously develop new methods, requiring AI models to adapt quickly.
  • Privacy Concerns: Balancing security and user privacy is critical, particularly in sensitive applications like healthcare.

Where does TELEMETRY come into play?

Building cybersecurity on trustworthy tools and methodologies is a crucial challenge for IoT ecosystems. The TELEMETRY project aims to develop and validate novel trustworthy tools and methods for testing and detecting security vulnerabilities in IoT devices and systems.

Devices, software and systems are designed with a specific purpose and usage in mind, yet they are deployed in socio-technical systems with human users. These users may either be unaware of acceptable operating conditions or may deliberately misuse components with malicious intent, so there is a clear need to detect misuse of components in systems. Further, dynamic testing should not only cover illicit access to components but also highlight component vulnerabilities exposed by misuse, thus supporting continuous improvement of the components. TELEMETRY is therefore creating a Misuse Detection ML tool to detect the misuse of software components and systems based on baseline behavioural patterns identified in historic usage scenarios. Several approaches to learning user-interaction models and detecting divergences of user behaviour from the norm are being investigated, using principles similar to those of social engineering to capture user aspects such as the user's functional footprint, temporal behaviour and statistical data distribution. These anomalies raise warnings that can identify impersonation of an authorised user by an attacker, insider attacks or inadvertent misuse.
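One of the principles mentioned above, comparing the statistical distribution of a user's actions against a learned norm, can be sketched with a simple total-variation distance between action-frequency distributions. This is an illustrative toy, not TELEMETRY's actual tool: the event names and the alert threshold are invented for the example.

```python
from collections import Counter

def distribution(events):
    """Normalise an event log into an action-frequency distribution."""
    counts = Counter(events)
    total = sum(counts.values())
    return {action: n / total for action, n in counts.items()}

def total_variation(p, q):
    """Total variation distance between two action distributions (0 = identical)."""
    actions = set(p) | set(q)
    return 0.5 * sum(abs(p.get(a, 0) - q.get(a, 0)) for a in actions)

# Baseline learned from historic usage: mostly routine reads
baseline = distribution(["login", "read", "read", "read", "logout"] * 20)
# A live session dominated by bulk exports and deletions
session = distribution(["login", "export", "export", "export", "delete"])

divergence = total_variation(baseline, session)
print(divergence > 0.3)  # True: the session diverges sharply from the norm
```

A large divergence would raise a warning of the kind described above, which an analyst could then attribute to impersonation, an insider attack, or inadvertent misuse.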

Conclusion

AI and ML are indispensable tools for addressing the complex cybersecurity challenges of IoT ecosystems. By leveraging advanced techniques such as anomaly detection, behavioural profiling, and federated learning, organizations can detect and mitigate misuse of IoT devices effectively. However, a comprehensive approach that includes robust encryption, regular updates, and user education is essential to complement these technological advancements. As IoT continues to evolve, so too must the AI and ML strategies that protect it.

European Cyber Security Community Initiative (ECSCI)

The European Cyber Security Community Initiative (ECSCI) brings together EU-funded cybersecurity research and innovation projects to foster cross-sector collaboration and knowledge exchange. Its aim is to align technical and policy efforts across key areas such as AI, IoT, 5G, and cloud security. ECSCI organizes joint dissemination activities, public workshops, and strategic dialogue to amplify the impact of individual projects and build a more integrated European cybersecurity landscape.

Supported by the European Commission, ECSCI contributes to shaping a shared vision for cybersecurity in Europe by reinforcing connections between research, industry, and public stakeholders.

European Cluster for Cybersecurity Certification

The European Cluster for Cybersecurity Certification is a collaborative initiative aimed at supporting the development and adoption of a unified cybersecurity certification framework across the European Union. Bringing together key stakeholders from industry, research, and national authorities, the cluster facilitates coordination, knowledge exchange, and alignment with the EU Cybersecurity Act.

Its mission is to contribute to a harmonized approach to certification that fosters trust, transparency, and cross-border acceptance of cybersecurity solutions. The cluster also works to build a strong stakeholder community that can inform and support the work of the European Union Agency for Cybersecurity (ENISA) and the future European cybersecurity certification schemes.

CertifAI

CertifAI is an EU-funded project aimed at enabling organizations to achieve and maintain compliance with key cybersecurity standards and regulations, such as IEC 62443 and the EU Cyber Resilience Act (CRA), across the entire product development lifecycle. Rather than treating compliance as a one-time activity or post-development task, CertifAI integrates compliance checks and evidence collection as continuous, embedded practices within daily development and operational workflows.

The CertifAI framework provides structured, practical guidance for planning, executing, and monitoring compliance assessments. It supports organizations in conducting gap analyses, building compliance roadmaps, collecting evidence, and preparing for formal certification. The methodology leverages best practices from established cybersecurity frameworks and aligns with Agile and DevSecOps principles, enabling continuous and iterative compliance checks as products evolve.

A central feature of CertifAI is the use of automation and AI-driven tools—such as Retrieval-Augmented Generation (RAG) systems and Explainable AI—to support the interpretation of complex requirements, detect non-conformities, and generate Security Assurance Cases (SAC) with traceable evidence. The approach is organized into five main phases: preparation and planning, evidence collection and mapping, assessment execution, reporting, and ongoing compliance monitoring.

CertifAI’s methodology is designed to be rigorous yet adaptable, offering organizations a repeatable process to proactively identify, address, and document compliance gaps. This supports organizations not only in meeting certification requirements, but also in embedding a culture of security and compliance into daily practice.

Ultimately, CertifAI’s goal is to make compliance and security assurance continuous, transparent, and integrated, helping organizations efficiently prepare for certification while strengthening their overall cybersecurity posture.

DOSS

The Horizon Europe DOSS – Design and Operation of Secure Supply Chain – project aims to improve the security and reliability of IoT operations by introducing an integrated monitoring and validation framework to IoT Supply Chains.

DOSS elaborates a “Supply Trust Chain” by integrating key stages of the IoT supply chain into a digital communication loop that facilitates the exchange of security-related information. The technology includes security verification of all hardware and software components of the modelled architecture. A new “Device Security Passport” contains security-relevant information for hardware devices and their components. Third-party software, open-source applications, and in-house developments are all tested and assessed. The centrepiece of the proposed solution is a flexibly configurable Digital Cybersecurity Twin, able to simulate diverse IoT architectures. It employs AI for modelling complex attack scenarios, discovering attack surfaces, and elaborating the necessary protective measures. The digital twin provides input for a configurable, automated Architecture Security Validator module, which assesses and provides pre-certification for the modelled IoT architecture with respect to relevant, selectable security standards and KPIs. To ensure adequate coverage for the back end of the supply chain, the operation of the architecture is also protected by secure device onboarding, diverse security and monitoring technologies, and a feedback loop that shares security-relevant information with the digital twin and the actors of the supply chain.

The procedures and technology will be validated in three IoT domains: automotive, energy and smart home.

The 12-member DOSS consortium comprises all stakeholders of the IoT ecosystem: service operators, OEMs, technology providers, developers, security experts, as well as research and academic partners.

EMERALD: Evidence Management for Continuous Compliance as a Service in the Cloud

The EMERALD project aims to revolutionize the certification of cloud-based services in Europe by addressing key challenges such as market fragmentation, lack of cloud-specific certifications, and the increasing complexity introduced by AI technologies. At the heart of EMERALD lies the concept of Compliance-as-a-Service (CaaS) — an agile and scalable approach aimed at enabling continuous certification processes in alignment with harmonized European cybersecurity schemes, such as the EU Cybersecurity Certification Scheme for Cloud Services (EUCS).

By focusing on evidence management and leveraging results from the H2020 MEDINA project, EMERALD will build on existing technological readiness (starting at TRL 5) and push forward to TRL 7. The project’s core innovation is the development of tools that enable lean re-certification, helping service providers, customers, and auditors to maintain compliance across dynamic and heterogeneous environments —including Cloud, Edge, and IoT infrastructures.

EMERALD directly addresses the critical gap in achieving the ‘high’ assurance level of EUCS by offering a technical pathway based on automation, traceability, and interoperability. This is especially relevant in light of the emerging need for continuous and AI-integrated certification processes, as AI becomes increasingly embedded in cloud services.

The project also fosters strategic alignment with European initiatives on digital sovereignty, supporting transparency and trust in digital services. By doing so, EMERALD promotes the adoption of secure cloud services across both large enterprises and SMEs, ensuring that security certification becomes a practical enabler rather than a barrier.

Ultimately, EMERALD’s vision is to provide a robust, flexible, and forward-looking certification ecosystem, paving the way for more resilient, trustworthy, and user-centric digital infrastructures in Europe.

SEC4AI4SEC

Sec4AI4Sec is a project funded by the European Union’s Horizon Europe research and innovation programme under grant agreement No 101120393.

This project aims to create a range of cutting-edge technologies, open-source tools, and new methodologies for designing and certifying secure AI-enhanced systems and AI-enhanced systems for security. Additionally, it will provide reference benchmarks that can be utilized to standardize the evaluation of research outcomes within the secure software research community.

The project is organized around two complementary strands, each with its own name.

·       AI4Sec – using artificial intelligence for security. This strand democratizes security expertise with AI-enhanced systems that reduce development costs and improve software quality, applying AI to secure coding and testing.

·       Sec4AI – security for AI-enhanced systems. Such systems carry their own risks and are vulnerable to new security threats unique to AI-based software, especially when fairness and explainability are essential.

The project considers the economic and technological impacts of combining AI and security.

On the economic side, the project focuses on leveraging AI to drive growth, productivity, and competitiveness across industries. This includes developing new business models, identifying new market opportunities, and driving innovation across various sectors.