Advancing AI technology: ethical and regulatory challenges in AI-driven security

Nearly 15 years ago, in our inaugural issue, Pauline Norstrom wrote about the launch of a new video content analysis guide produced by the British Security Industry Association (BSIA). In the intervening years, she has served as chair of the Association and is now a non-executive director.

This journey reflects the dramatic evolution of AI, generative AI (GenAI), and biometric-based security technologies, which now play a central role across sectors such as transportation, critical national infrastructure (CNI), retail, and education. These advancements, while transformative, bring a host of ethical and regulatory challenges.

The use of biometric technology has extended far beyond traditional applications in access control. Today, facial recognition technology (FRT), combined with a range of AI techniques, underpins security solutions in environments as varied as airports, shopping centres, schools, and sensitive infrastructure. Unlike earlier rule-based systems, modern AI-driven biometric solutions can:

- Learn and adapt: machine learning enables systems to improve continuously, recognising patterns and identifying novel risks without explicit programming.
- Interpret context: multimodal AI systems combine biometric data with other sources, such as geolocation or transactional records, to deliver nuanced threat assessments.
- Enhance situational awareness: generative AI models synthesise complex datasets, providing security teams with actionable insights presented in natural language.

While these innovations strengthen security capabilities, they also amplify the potential for privacy violations and raise concerns about bias and misuse.

Role of BSIA and BS 9347

The ethical use of FRT and biometric systems has become a focal point in industry discussions. The BSIA has developed an ethical guide to facial recognition, providing a framework for responsible deployment. Building on these principles, the recently introduced BS 9347 code of practice offers a comprehensive standard for the ethical use of FRT in video surveillance. This AI standard embeds the OECD principles for responsible AI throughout the supply chain, ensuring:

- Transparency: stakeholders are informed about how biometric data is collected, processed, and used.
- Accountability: clear guidelines hold organisations accountable for ethical and legal compliance.
- Fairness: systems are designed and implemented to minimise bias and ensure equitable treatment of individuals.
These standards provide a roadmap for organisations to navigate the complexities of deploying a range of AI and biometric technologies responsibly.

Regulatory momentum: the EU AI Act and ISO/IEC 42001

The regulatory environment surrounding AI and biometric technologies is maturing rapidly. The EU AI Act represents a landmark in AI governance, introducing stringent requirements for high-risk systems processing sensitive data.

Although the UK has no plans to regulate FRT, key provisions of the EU AI Act include:

- Certification: biometric security products must demonstrate compliance with safety, fairness, and transparency requirements.
- Public disclosure: organisations are required to inform individuals when AI systems are deployed in rights-impacting scenarios.
- Prohibited uses: practices such as real-time biometric surveillance in public spaces are restricted unless justified by compelling public security needs.

In parallel, the ISO/IEC 42001 standard for AI management systems establishes a framework for businesses to govern AI systems across their lifecycle. This aligns closely with the work of organisations like Anekanta®AI, which specialise in evaluating high-risk AI systems and guiding businesses through compliance with these emerging standards.

Balancing innovation with responsibility

The integration of biometric systems into diverse environments underscores both the power and the peril of AI-driven technologies. For example, combining FRT with geolocation or social media data creates a robust tool for threat detection but also risks encroaching on individual privacy. Ethical deployment requires:

- Transparency and consent: organisations must clearly articulate the purpose of AI systems and obtain informed and valid consent where applicable.
- Oversight mechanisms: robust governance structures ensure human review of critical AI decisions.
- Alignment with ethical frameworks: adherence to standards such as BS 9347 and regulatory measures such as the GDPR and the EU AI Act protects against misuse and safeguards civil liberties.

Fostering good governance through board engagement

As AI technologies become increasingly integrated into organisational strategies, fostering good governance at board level is critical. Anekanta®AI actively engages with boards to build awareness and understanding of AI risk and governance. By providing tailored insights and frameworks, the company helps boards align their decision-making with ethical standards and regulatory requirements. This proactive approach ensures that organisations not only comply with emerging regulations but also embed responsible AI practices across their operations.

Over the past decade, the security industry has witnessed a significant shift in leadership roles and perspectives on AI governance. Organisations like Anekanta AI exemplify this evolution, offering expertise in high-risk biometric AI and regulatory alignment.

The rise of AI-driven biometric technology presents a transformative opportunity to enhance security across sectors. However, with great capability comes great responsibility.
By embracing ethical standards like BS 9347, aligning with regulatory frameworks such as the EU AI Act and ISO/IEC 42001, and fostering transparency and accountability, the industry can build systems that respect human rights while addressing pressing security challenges.

The future of AI, GenAI, and biometric technology in security lies not just in their technical excellence but in their ability to align with societal values. With the right guidance and governance, these technologies can serve as a force for good, safeguarding both security and individual freedoms in an interconnected world.


AI and misinformation

Professor Harith Alani, director of the Knowledge Management Institute at the Open University, looks at how AI can be used for good and bad.

Social media still runs on a fuel of controversy. That means people are actively rewarded for sharing engaging content regardless of the facts, contaminating beliefs and attitudes. Work at the Open University during the Covid-19 period, as one example, showed how false information about the disease reached three times more people than the corrected facts.

It might be casual, unintentional posting or it might be intentional harm. Either way, when misinformation or disinformation relates to issues of health, politics, the environment or the economy, it is a kind of pollution which threatens the role of governments and societies themselves. A shared belief in the existence of a common good and common truths, after all, has been the basis of democracy and its freedoms.

Obvious breaches of law, those posts which involve hate crime or child pornography, now fall under the Online Safety Act. But misinformation can be subjective, subtle and complex. People share what they want to share, what they find eye-catching and alarming, and are rewarded with shares and attention for being controversial, no matter how inaccurate or harmful the claim: an unstoppable flood across networks of information, into people’s homes, conversations and thinking around the world.

AI for good

Here is a hugely important way that AI can be used for social good: taking on the vast job of protecting media content of all kinds from obvious types of pollution and restoring trust. AI is going to be an increasingly important tool for analysing what’s happening around misinformation and experimenting with ways of preventing its spread and repairing its damage.

There have been many lab studies into the workings of misinformation, involving simulations with controlled groups and their responses. The problem is that in the real world, the dynamics are very different, especially when misinformation is being shared deliberately. There is also the need to look at the actual impact of corrections. It can’t be assumed that sharing accurate information will resolve anything in itself.

The OU’s Knowledge Management Institute is currently looking into the mechanics and impact of corrections. Research into Covid-19 and other misinformation spread via Twitter/X examined 5,000 posts that were corrected; of those, only around 10 per cent of posters reacted in any way, the majority appeared to ignore the correction, and only around 10 per cent of those who did react did so positively.

Given the nature of digital media and how it’s used, misinformation can’t be eliminated, but AI and machine learning can be used to build a new environment, improving awareness and responses in a more timely and effective way. In this way, the system can be turned on its head: where truth matters and is recognised positively, creating a new kind of fuel for social media and Internet content generally, pushing engagement in the right direction.

AI can work with the mass of historic data to help identify what is likely to constitute misinformation, picking up on previously debunked claims and recurring templates. The technology can automatically assess and monitor the credibility of online accounts, and be used to predict the use of misinformation before it happens based on past events — when and why trends for misinformation occur, like a pandemic or conflict — allowing for more advanced algorithms and counter-strategies to be prepared. AI is also important for tracking the spread and effectiveness of fact-checking and corrections. Timing is critical. Evidence suggests that corrections need to be circulating before a tipping point of false claims has already taken hold.

Generic fact-checked responses can be more or less effective depending on the audience. More needs to be done, using AI, to identify the nature of the recipients of corrective messages and personalise material. Are they influencers, conspiracy theorists, extremists or just accidental misinformers? Bot-like programs can be used to trial different approaches and monitor impacts, monitoring audience reactions to corrections, and automatically tuning and personalising interventions to maximise visibility and effects as they learn more about people’s characters and behaviours.
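To make the first of those capabilities concrete, one simple way to pick up on previously debunked claims is a text-similarity screen against a library of known debunks. The sketch below is purely illustrative and is not a description of the Institute's or any platform's actual tooling; the sample claims, the 0.5 threshold, and the choice of TF-IDF cosine similarity are all assumptions made for the example.

```python
# Illustrative sketch: flag posts that resemble previously debunked claims.
# The debunked-claims list, threshold and TF-IDF approach are assumptions,
# not a description of any production fact-checking system.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

debunked = [
    "5G towers spread the virus",
    "Drinking hot water cures the infection",
    "Vaccines contain tracking microchips",
]

def flag_candidates(posts, threshold=0.5):
    """Return (post, matched_claim, score) for posts similar to known debunks."""
    vectoriser = TfidfVectorizer(stop_words="english")
    matrix = vectoriser.fit_transform(debunked + posts)
    claim_vecs, post_vecs = matrix[: len(debunked)], matrix[len(debunked):]
    scores = cosine_similarity(post_vecs, claim_vecs)
    flagged = []
    for i, post in enumerate(posts):
        j = scores[i].argmax()
        if scores[i, j] >= threshold:
            flagged.append((post, debunked[j], float(scores[i, j])))
    return flagged

print(flag_candidates(["Apparently 5G towers are spreading the virus!"]))
```

A lexical match like this only catches near-verbatim repeats; research systems of the kind described above would typically add semantic matching and account-credibility signals on top.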

Social media

Certainly the major social media platforms are doing more to verify and report misinformation. During the pandemic, Meta worked with fact-checkers from more than 80 organisations, claiming to have removed more than 3,000 accounts, pages and groups and 20 million pieces of content. Twitter/X published policies to highlight its approach to reducing misinformation. But there are still questions over when policies are actually being enforced and to what extent. Businesses want to protect their operations from criticism and restrictions, while at the same time minimising the costs involved. Twitter/X has been employing ‘curators’ to provide notes on context relating to trending topics which might be controversial, around the war in Ukraine for example (when the curators are believed to have removed 100,000 accounts for breaking rules). There is evidence this has had a positive effect in limiting the spread of false claims. The purchase of Twitter by Elon Musk, however, is understood to mean a reduction in the use of moderation.

There can be an element of self-regulation. When generative AI tools such as ChatGPT and Bard first came out, they were ready to generate endless streams of false claims if prompted to do so, but more recent updates have improved the situation. Systems now refuse to generate what they detect to be potentially harmful or misinforming. However, it is unclear what steps have been taken and to what extent they span different topics and claims.

Ultimately though, self-regulation by social media platforms needs to be combined with legal frameworks. And that needs to include every social media player, not just the obvious targets but also the fringe platforms as they emerge: blocking illegal content, demoting false information, promoting fact-checked and known truths. The task of monitoring and managing a good global communications space is a mind-boggling one, but positive use of AI makes it workable.


Innovation to keep the nation safe

Ellie Rice, security lead at the Defence and Security Accelerator, details why innovation should be at the forefront of national security for the UK, and the importance of reaching out to a broad range of innovators to achieve this.

The first duty of government is to keep citizens safe and the country secure. The evolving nature of the terrorist threat means that those working in the security services, law enforcement and wider government to deliver this duty need to embrace innovation to stay ahead of those wishing to do us harm. The opportunities presented by science and technology are broad and growing, but our adversaries are taking the same opportunities to fulfil their aims. The increasing accessibility of advanced technologies, such as cloud-enabled artificial intelligence and machine learning, and end-to-end encryption, lowers the barrier to entry for those who seek to infringe upon our democracy and way of life.
Layered on top of this is the potential future impact of next generation technologies, such as synthetic biology. Beyond terrorism, the return of great power competition and the race to develop vaccines against COVID-19 demonstrate that innovation is critical to the UK’s national security. So it is only by embracing innovation and leveraging new technology and novel ways of doing things that the UK can continue to keep citizens safe and the country secure.

But while it is one thing to embrace this mind-set and understand that innovation can help the UK stay ahead of security threats, it is another to find and access the ideas and the innovators that will help us achieve this. The Defence and Security Accelerator (DASA) (http://www.gov.uk/DASA) was set up to help address this need and is a key pillar of the UK’s effort to tackle security challenges by helping government stakeholders identify and access innovative technology and approaches. DASA’s role is to find and fund exploitable innovation projects, working with those in the security services and across wider government to support the generation of new capabilities.

The Integrated Review, published in March 2021, not only states that innovation is important – it says it is essential to success for national security, and that government agencies need to work collaboratively in order to avoid duplication of effort. When it comes to innovation, DASA is an enabler for this, sharing the fruits of security innovation across government and its agencies.

In 2021, DASA developed the Security Rapid Impact Innovations Open Call (https://www.gov.uk/government/publications/defence-and-security-accelerator-dasa-open-call-for-innovation/open-call-competition-document#security-rapid-innovations) with the Home Office, Department for Transport and other government security stakeholders, specifically to find ideas that enhance our understanding of threats to UK security and safety, enable threat prevention, or enhance the threat response. This £20m initiative runs until March 2024 and enables government to work in partnership with the private sector and academia to find novel ideas that meet a security sector challenge, and offers competitive funding to support the development of the best ideas.

Innovation by its very nature requires diversity of thought, so it is only by accessing a wide range of innovators that security agencies can find the best solutions and gain access to new ideas. DASA understands that diversity and inclusion is a capability multiplier for innovation – we need to be able to draw upon the talents of as wide a pool of individuals as possible to help keep the UK secure – and we have the capability to do this. DASA has an Outreach Team of Innovation Partners, located across the economic regions of the UK, enabling us to interact with and tap into local ecosystems. This ensures that innovators – including those who may not have worked with government or the security sector before – are aware of how to access UK innovation support, can understand national security challenges and can identify which innovation projects might align with security requirements.

For established suppliers and those new to government, working with DASA represents an attractive opportunity.
DASA provides full funding of innovation projects and straightforward contracting mechanisms, and innovators retain full ownership of background and foreground IP. DASA’s security team works closely with both innovators and security stakeholders across the public and private sector to help inform innovation projects – ensuring that end-user needs are considered early in the product development lifecycle. This co-creation approach supports the delivery of results that are fit for purpose and able to integrate with other government-operated systems – enabling the best innovations to protect our people and prosperity and future-proof the supply chain. This is all backed up by a range of post-contract services to support commercialisation, scale-up and routes to market.

We believe that building the business behind the innovation is also critical to ensuring a sustainable security innovation ecosystem. Many small businesses lack follow-on funding and require guidance on the commercial aspects of their ambitions, not just technical guidance and operational insight from users. DASA’s Access to Mentoring and Finance team helps innovators who wish to grow their businesses through mentorship, links to business networks and access to investors, such as venture capitalists and angel investors, to help raise capital. Many innovators funded through DASA are small or micro enterprises, so the business mentoring programme offered by the team is invaluable in helping small organisations think about how they can scale up.

Innovative ideas are all around us, but we need to find and harness the best of these and realise their benefits. Innovation is critical to tackling threats to the defence and security of the UK and ensuring a safer future for us all. DASA is a key route to achieving this. Contact DASA if you have an innovative idea for an in-confidence discussion with your local Innovation Partner. If you are in government, please get in touch to work with DASA to identify relevant innovation projects and help us to assess proposals, ensuring that we choose the most impactful innovation projects.

DASA will be exhibiting at International Security Expo 2022, Stand B-82. Hear from Ellie Rice, Security Lead at the Defence and Security Accelerator, at International Security Expo 2022, 28th September, 1200–1230, on Why Innovation Should be at the Forefront of National Security: https://www.internationalsecurityexpo.com/international-security-expo-2022-seminars/innovation-forefront-national-security


How the defence sector can take its IT to the next level

Digitalisation is integral to defence.
It’s a fact keenly felt by the Ministry of Defence (MOD), reflected in the launch of its plans last summer to build a ‘digital backbone’ and invest £1.6 billion in digital, data, and cyber security over the next ten years.

But the MOD won’t be building from scratch. Along with other public sector spaces, the defence industry has turned to digital tools for a while now, which among other benefits have helped it optimise online collaboration, navigate the severe disruption of the pandemic, and work with greater precision and accuracy. This is a crucial part of minimising costs and maximising value against a backdrop of increasing pressure to ‘do more with less’.

However, the development of digital practices within defence doesn’t come without its problems. Integrating innovative new technology with legacy systems is always a difficult task, and consolidating these will be one of the biggest challenges for defence as it sets about shaping its IT future.

But it’s not the only area defence organisations need to turn their attention towards. When it comes to creating this digital future, there are several key points defence needs to address. In our recent research, we took the pulse of industry experts and homed in on these points to uncover opportunities to resolve inefficiencies and supercharge growth.

Here are five areas the defence sector should focus on to ensure its tech is meeting its digital potential, citing statistics from our Shaping the future of IT in defence 2021 survey report (https://forms.loadpage.co.uk/forms/view/61a8d7617415cfd06e8b4569).

1. Consolidation

As highlighted above, consolidation should be one of the top priorities for defence. An overly complex IT infrastructure allows more opportunities for errors and inefficiencies, which can obscure visibility.

Our research found defence professionals recognise the benefits of consolidation. Almost all survey respondents (90 per cent) stated they were already benefiting from a consolidation solution or expected consolidation to bring benefits in the future. The highest-ranking perceived benefit from consolidation was the ability to collaborate more effectively with colleagues (96 per cent) – understandable given the shift to remote and hybrid work. This benefit was closely followed by the ability to gain a more centralised overview of applications, data, and users (95 per cent).

Yet, despite this, our survey found that almost half of the defence sector (43 per cent) has yet to look at IT consolidation as a formal initiative.

It’s clear there’s work to be done in overcoming reticence around consolidation, while raising awareness of the topic. A good place to start may involve addressing some of the barriers to adoption. The two biggest barriers revealed in our research were the perceived cost of change (60 per cent) and the risk of service disruption (58 per cent). If defence wants to reap the benefits of IT consolidation, then it needs to think deeply about myths and attitudes around these barriers, along with considering the costs and risks of not addressing consolidation opportunities.
2. Flagging and minimising risk factors

As IT networks and capabilities have expanded in the defence sector, so has its risk.

Security is, understandably, one of the top issues, with 45 per cent of respondents ranking it as one of the top three challenges of their current IT environment – but concern around interoperability was identified as the biggest risk factor (51 per cent) by professionals in our survey.

This speaks to the collaboration challenges outlined by the responses to our consolidation questions, and it makes sense – as defence organisations have rushed to keep up with tech development, many have onboarded niche solutions for specific problems but then have run into issues around integration.

Other risk factors identified include problems around managing legacy technology and the challenges of maintaining easy oversight of systems.

These findings demonstrate that if defence is to make the most of its IT, then it needs to properly structure and manage its systems – both in ensuring its network and digital assets are safe, and in streamlining its IT ecosystem.

3. Automation

IT automation is a hot topic across many sectors, and there’s a good reason for it. Automation takes over basic tasks to free up professionals’ time for more complex work, allows for reduced resolution times through automatic alert features, and cuts down on errors caused by manual data entry (a minimal sketch of such an alert feature follows at the end of this section).

This is another area where defence is holding itself back.

Our survey found only six per cent of defence professionals say their organisation has been able to automate all day-to-day, repetitive tasks to free up teams’ time to focus on more meaningful work. And around one in three (30 per cent) said their organisation hadn’t automated any tasks.

Consequently, 34 per cent of respondents reported spending a significant proportion of their working day dealing with digital performance issues, ranging from one-fifth (21 per cent) to three-fifths (60 per cent) of their time. Twenty-six per cent said they don’t know how much time they lose.

Encouragingly, however, there appears to be a recognition within defence that automation can help, and there are stirrings of change. Nearly half of respondents in our study said they believe they’re spending between 21 per cent and 60 per cent of their time on tasks that could be automated, and 40 per cent report their organisation has undertaken a fair degree of automation.
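As promised above, here is a minimal, hypothetical sketch of the kind of automatic alert feature that cuts resolution times: a watchdog that polls service endpoints and raises an alert when one stops responding. The endpoint URLs and the log-based alert channel are invented for illustration; they are not drawn from the survey or any particular product.

```python
# Minimal illustrative watchdog: poll services and alert on failure.
# Endpoint URLs and the alert channel (here, a log line) are hypothetical;
# a real deployment would notify an on-call team instead.
import logging
import time
import urllib.request

logging.basicConfig(level=logging.INFO, format="%(asctime)s %(message)s")

ENDPOINTS = {
    "intranet": "https://intranet.example.mil/health",
    "file-store": "https://files.example.mil/health",
}

def check(url: str, timeout: float = 5.0) -> bool:
    """Return True if the endpoint answers with HTTP 2xx."""
    try:
        with urllib.request.urlopen(url, timeout=timeout) as resp:
            return 200 <= resp.status < 300
    except Exception:
        return False

def run(poll_seconds: int = 60) -> None:
    while True:
        for name, url in ENDPOINTS.items():
            if check(url):
                logging.info("%s OK", name)
            else:
                logging.warning("ALERT: %s (%s) is not responding", name, url)
        time.sleep(poll_seconds)

if __name__ == "__main__":
    run()
```

Even a loop this simple replaces a manual check an engineer would otherwise repeat all day, which is the time-saving the survey respondents describe.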
4. Cloud adoption

Cloud migration is another key area of focus for almost every industry, and this should include defence.

Cloud adoption supports organisations with remote and hybrid working, brings backup benefits, reduces IT infrastructure costs, and assists with data storage. However, only 19 per cent of defence professionals in our survey said their organisation had completed their cloud adoption strategy, meaning most organisations are missing out on the full benefits of cloud technology.

Although cloud migration offers great growth potential for defence, the sector understandably has big concerns around security that need to be addressed and managed. Security was identified by 74 per cent of professionals as a challenge in achieving a complete, cloud-first, collaborative workplace, so choosing the right cloud partners will be of the utmost importance to defence organisations. Cloud adoption isn’t something to thoughtlessly rush into, and organisations should ensure they’re getting expert, comprehensive advice when investigating potential cloud systems.

5. Maintaining momentum

The pandemic enforced a digital acceleration, but as restrictions lift, defence shouldn’t let go of this momentum.

Our survey promisingly showed 84 per cent of defence professionals say their organisation is well positioned to adapt their IT environment rapidly, as and when needed. However, almost three-quarters of respondents (73 per cent) reported the defence sector is no further ahead than other sectors in its IT development journey. If the defence sector wants to fulfil the promise of its digital future, it can’t afford to become complacent and must maintain the drive for tech innovation.

Written by Charles Damerell, Senior Director UKI at SolarWinds (https://www.solarwinds.com/).


Embedding human governance in AI frameworks to maintain ethical standards

The UK’s Integrated Review of Security, Defence, Development and Foreign Policy (https://www.paconsulting.com/insights/2021/integrated-review-of-security-defence-development-and-foreign-policy/) set out the country’s ambition to be a global leader in artificial intelligence (AI) and data technologies. This was welcome news for the defence and security sector, which already relies on data to inform strategies, insights and operations, a fact GCHQ (https://www.gchq.gov.uk/files/GCHQAIPaper.pdf) says will only become more apparent as AI becomes more able to analyse large, complex and distinct data sets.

But law enforcement and national security are high-risk contexts for the use of AI. Flawed or unethical AI could result in unjust denial of individual freedoms and human rights, and erode public trust.

While frameworks such as the OECD Principles on Artificial Intelligence (https://www.oecd.org/going-digital/ai/principles/) and the Turing guidelines for responsible AI (https://www.turing.ac.uk/sites/default/files/2019-06/understanding_artificial_intelligence_ethics_and_safety.pdf) are bolstering the regulations governing defence and security’s use of data, organisations can do more. By rooting their own ethical frameworks in the human context of AI deployments, they can better ensure their AI is ethical.

Keep humans in the analytical loop

Establishing the scope of an AI application is key to ethical frameworks in defence and security.
Regulators, academics and the public all agree that the scope of AI shouldn’t go so far as to replace human decision-making in defence and security.

In counter terrorism, for example, RUSI (https://1library.net/document/zlvmnlry-artificial-intelligence-and-uk-national-security-policy-considerations.html) has outlined how AI would be inaccurate if used to identify potential terrorists, concluding that ‘human judgement is essential when making assessments regarding behavioural intent or changes in an individual’s psychological state’. And in criminal justice, most people oppose automated decision-making due to the likelihood of unintended biases, such as algorithms based on data that might reflect historic inequalities in society.

Without human analysis, use of AI risks inaccuracies and loss of trust. So AI should be a tool to aid human decision-making. Humans should process and validate outputs against other sources of intelligence while understanding the potential limitations of AI-derived findings. Keeping humans in the analytical loop should be a cornerstone of an ethics framework.

Embed human governance structures to support ethical AI

An ethical framework for AI relies on human governance structures and oversight. Organisations should consider the line of sight from requirements setting, through development, to operational outputs – a diverse, skilled and knowledgeable panel should have visibility of all stages so it can consider factors such as the proportionality of the data used and the potential undesirable consequences. The panel should include those who understand the development of the AI and the data involved, the legal and policy context, and the potential real-world effects of proposed developments. And the panel should provide expert, unbiased challenge and diversity of thought.

Keeping this line of sight will maintain ethical principles throughout the lifecycle of development and deployment. Doing this properly will involve workforce upskilling and careful navigation of sensitivities and silos. For example, you could involve data scientists in the planning of operational deployments, or train operational managers to build their understanding of the AI technologies deployed. This might require new governance structures, roles and responsibilities to complement existing compliance structures.

There’s a lot defence and security organisations can learn from the major tech companies, which are very good at ensuring humans are overseeing potential AI deployments. In recent years, Google has scaled back or blocked several AI developments after finding the ethical risks too high (https://www.reuters.com/legal/transactional/money-mimicry-mind-control-big-tech-slams-ethics-brakes-ai-2021-09-08/).

Balance AI and existing data frameworks

The defence and security sector will need to consider and address tensions between AI and existing compliance, data protection and privacy frameworks. But this isn’t unique to defence and security. For example, a leading pharmaceutical company found strict data retention limits would impact the availability of data needed for AI training and oversight. The company acknowledged these tensions and developed privacy and ethics principles specifically for AI.

Assuring the quality of data sets should be a key part of any AI ethics framework, as higher quality inputs lead to more accurate and trustworthy outputs.
For defence and security, data might be biased towards a specific context that makes it unsuitable for the proposed use, or it might be incomplete or contain errors, which could lead to misleading outputs. This is because security exemptions might mean data subjects are unable to exercise rights that would be available to them from other organisations, such as the right to correct inaccurate data. Similarly, defence and security organisations can’t always implement effective feedback loops between the users of data and data subjects.

So, organisations should make adjustments to mitigate potential quality issues. These mitigations could be business processes (such as the inclusion of data scientists in decision-making for operational deployment) or they could be technical (such as using statistical testing to identify biases in the training data).
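As a concrete illustration of that last mitigation, one common statistical screen compares outcome rates across groups in the training data using a chi-square test of independence. The sketch below is a generic example under assumed column names and an assumed 0.05 significance level; it is not a description of any organisation's actual testing regime.

```python
# Illustrative bias screen: test whether a training label is independent
# of a protected attribute. Column names and the 0.05 threshold are
# assumptions for the example, not a recommended standard.
import pandas as pd
from scipy.stats import chi2_contingency

def screen_for_bias(df: pd.DataFrame, group_col: str, label_col: str,
                    alpha: float = 0.05) -> None:
    """Flag a statistically significant association between group and label."""
    table = pd.crosstab(df[group_col], df[label_col])
    chi2, p_value, dof, _ = chi2_contingency(table)
    print(table, "\n")
    if p_value < alpha:
        print(f"Possible bias: label depends on {group_col} "
              f"(chi2={chi2:.1f}, p={p_value:.4f}) - review before training.")
    else:
        print(f"No significant association detected (p={p_value:.4f}).")

# Toy data: flag rates differ sharply between groups A and B.
data = pd.DataFrame({
    "group": ["A"] * 100 + ["B"] * 100,
    "flagged": [1] * 30 + [0] * 70 + [1] * 10 + [0] * 90,
})
screen_for_bias(data, "group", "flagged")
```

A significant result does not prove unfairness by itself, but it is exactly the kind of signal that should trigger the human review the governance panel exists to provide.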
Defence and security organisations must embed human governance in AI frameworks

By creating an ethical framework for AI that prizes human analysis, embeds human governance structures and addresses the compliance challenges unique to the sector, defence and security organisations will be able to leverage the power of data and ensure accurate outputs while maintaining public trust and upholding the high ethical standards upon which the UK prides itself.

By Elisabeth Mackay, digital trust and security expert at PA Consulting (http://www.paconsulting.com/industries/aerospace-defence-security/).


OSINT – the Cinderella of the Investigative family?

OSINT (open source intelligence) can be a powerful intelligence and investigative tool but is too often overlooked and underdeveloped in the suite of capabilities available to investigators. In too many organisations there are significant barriers to the adoption of effective OSINT, as well as a failure to adapt fast enough to emerging technologies and data sources. A cultural shift is needed, as well as investment in technology, in order to elevate the status of OSINT and ensure that it is used to its full potential.

The case for OSINT

OSINT is, in my view, a critical component of the modern investigator’s toolkit. The volume of data available online is constantly growing, providing investigators with a rich information source to draw from. The insights that OSINT can offer are unlikely to be found in internal datasets, curated databases, or sanctions lists. Failure to make use of open source data can lead to both embarrassment and intelligence failure.

There are many powerful examples where OSINT was instrumental in solving a case: Bellingcat’s insights on the downed flight MH17 in 2014 relied exclusively on OSINT, and the technique featured heavily in the recent FT investigation into Sanjeev Gupta and Greensill Capital.

OSINT should also be considered an essential element of counter-terrorism and counter-misinformation programmes. The mapping of terrorist networks on social media – especially the more grassroots right-wing extremist groups that are now popping up on platforms like Parler – is a highly effective means of identifying the individuals behind these crimes. Investigators have also had great success identifying networks that are spreading misinformation and disinformation, and the real-life identities behind them. In 2015, a year into my time leading the UK’s Counter Terrorism policing efforts from Scotland Yard, our teams convicted one of the early returners from Syria. Imran Khawaja received 12 years for preparing for acts of terrorism, attending a training camp and possessing firearms. OSINT provided much of the evidence.

Whether you are the police chasing criminals and terrorists, intelligence agencies pursuing spies, banks looking for money launderers and fraudsters, or others with investigative duties, it is hard not to conclude that open source investigations are of growing strategic significance. Furthermore, they can save money as a rapid and economical way to understand an offender early in an investigation before deploying more expensive and intrusive tactics. Why then are so many organisations still failing to take advantage of the wealth of opportunity provided by OSINT?

What are the barriers to adoption?

Misconceptions

The reasons for lack of investment in OSINT are often based on a misunderstanding of what exactly open source intelligence entails, and how valuable it is. Open source intelligence can conjure a somewhat negative image, with connotations of hacker-like behaviour and invasions of privacy. However, the type of OSINT whose adoption I am arguing for can be better described as online open source investigation: making use of freely available online information in a targeted and non-invasive way.

Cultural and technological barriers

Culture and technology deficits are also factors in this attitude towards OSINT. Many wrestle with outdated technology architecture and spend most of their efforts focusing on how better to curate internal data. However, this is driven by the culturally outdated assumption that the greatest insights will always be found in the mountains of data that big organisations have spent decades accumulating. This was once true, but increasingly the insights from open source data into individuals and companies will almost always be significant, and often greater than those found internally.

Where organisations are realising the importance of open source data, they are often only using it in the form of curated datasets, thus limiting its potential impact. These datasets don’t capture all of the rich, valuable information available on the internet. For example, a well-known curated dataset, LexisNexis, offers six petabytes of data. The entire internet is thought to hold over 1,200 petabytes (as of 2020).
By relying solely on this database, investigators could be missing out on 99 per cent of available data, meaning that they will almost certainly lose out on valuable insights.

Lengthy and bureaucratic processes

Whilst there is clearly a need for thorough and fair procurement processes in every organisation, their complexity and length can also stifle such investments. This was evident in my own experiences: in 15 years as a Chief Police Officer, I was most able to deliver cutting-edge technological change successfully at speed when there was an especially urgent requirement. In early 2012 I joined the Metropolitan Police as part of a new leadership team tasked with dealing with the aftermath of the 2011 riots, where it was concluded that rioters had run rings around the police by organising themselves on social media. The forthcoming Olympics meant that there was an urgent requirement for capability to counter this sort of risk, meaning that I was able to set up the UK police’s first serious OSINT team in just a few months.

In this case, the bookends of the 2011 riots and 2012 Olympics created a unique forcing function that facilitated operational clarity and the circumvention of normal procurement rules. After this success I pushed for continual investment, but the lack of obvious urgency around OSINT capability meant that progress continued to be slow. As I was retiring from policing, I found myself outside New Scotland Yard announcing to the world that Sergei and Yulia Skripal had been subjected to a nerve agent attack in Salisbury. Subsequently, Bellingcat identified the two Russian agents responsible simply from advanced open source investigative techniques – again highlighting the vital importance of OSINT.

Increasing flexibility and the role of technology

To facilitate increased investment in OSINT, systemic, strategic and technological change is needed.

Firstly, organisations need to shift towards more flexible commercial and procurement methods that reflect the reality that many high-quality open source tools are to be found in early-stage companies. These companies often find that they are accidentally designed out of the complex procurement processes in governments and other large institutions.

Secondly, there is a need for a new strategic approach to investigative processes. Organisations need to recognise the changing landscape and make a conscious decision to allocate a proportion of technology investment and training budgets towards equipping investigators with cutting-edge open source tools.

Thirdly, technologies that offer a sophisticated mix of functionality designed to professionalise the OSINT investigation should be supported and invested in.

Technology plays a vital part in reducing operational difficulties in using OSINT by increasing:

- Security: gathering online data risks revealing the investigator’s identity, undermining operations.
- Speed: data can overwhelm without technology that helps you quickly get to the relevant information.
- Insight: finding connections and presenting data from disparate sources.
- Connectivity to other data: OSINT will always be one part of a wider strategy that combines various strands of data to help investigators see the full picture. The ability to combine data from different sources, both structured and unstructured, is essential (see the sketch after this list).
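As a toy illustration of that connectivity point, the sketch below merges records about one subject from two differently structured sources, a structured watch-list row and an unstructured web snippet, into a single picture. The sources, field names and naive name-matching rule are all invented for the example; the investigative platforms mentioned below are of course far more sophisticated.

```python
# Toy illustration: combine structured and unstructured open source data
# about one subject. All names, records and the matching rule are invented.
import re
from dataclasses import dataclass, field

@dataclass
class SubjectProfile:
    name: str
    attributes: dict = field(default_factory=dict)
    mentions: list = field(default_factory=list)

# Structured source, e.g. a row from a curated dataset.
watchlist_row = {"name": "John Doe", "company": "Acme Shipping Ltd"}

# Unstructured source, e.g. text scraped from a public web page.
web_snippet = "John Doe was photographed in Rotterdam on 3 May beside a warehouse."

def build_profile(row: dict, texts: list) -> SubjectProfile:
    profile = SubjectProfile(name=row["name"], attributes=dict(row))
    # Naive matching: keep any text that mentions the subject's full name.
    pattern = re.compile(re.escape(row["name"]), re.IGNORECASE)
    for text in texts:
        if pattern.search(text):
            profile.mentions.append(text)
    return profile

profile = build_profile(watchlist_row, [web_snippet])
print(profile.attributes)   # structured facts
print(profile.mentions)     # corroborating unstructured mentions
```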
Today there is an exciting portfolio of companies I work with in this field. Blackdot Solutions provide some of the best software to assist open source investigators; Deloitte are helping big organisations, especially in the financial services sector, transform their investigations through use of social media; and Quest is a specialist security and investigations company which has set up a ‘threat matrix’ with Signify to tackle the racist abuse of leading sports men and women – especially in football.

Conclusion

My own experience, as well as recent events, has demonstrated the increasing value of including OSINT in an investigation strategy. There are numerous advantages to doing so, and tools, such as Blackdot’s Videris platform (https://blackdotsolutions.com/videris/), are available to help investigators use open source information quickly, securely and effectively. However, without a strategic drive to ensure that open source tools are part of a deliberate mix of capabilities in the investigator’s toolbox, many organisations will find that cultural, technical and commercial barriers leave this part of their armoury underpowered.

Written by Sir Mark Rowley QPM. Sir Mark Rowley was one of the most senior police figures in the UK, with 31 years of service. He led UK Counter Terrorism Policing (http://www.counterterrorism.police.uk) between 2014 and 2018. Previously, he held positions as Assistant Commissioner at the London Metropolitan Police and Chief Constable of Surrey Police.


New technologies are the way forward for counter terror in civil aviation

As the civil aviation industry continues to deal with the threat of terror attacks, new technologies that use AI are quickly evolving to meet the counter terror needs of critical civil infrastructure and the challenge of upholding national security in the modern world.

Adrian Timberlake, chief technical officer at Seven Technologies Group (7TG), examines how intelligent technologies work to weave together the bigger picture to spot threats of terror on the horizon, and how digital transformation in the aviation sector is upscaling the implementation of counter terror measures.

To the general public, airports would appear to be some of the most secure buildings in the world.
While security practices in civil aviation are far more stringent than almost any other form of travel (excluding rail networks that involve a border crossing), civil aviation’s usefulness to our national economy, businesses and quality of life unfortunately makes airports vulnerable to terror.

Since 9/11 and the global upscaling of airport security, terrorists have become ever more inventive and varied in their methods. Threats not only consist of terrorists posing as flight passengers; there is the risk of ‘insider’ threats, interference from unauthorised unmanned aircraft (i.e. drones) and attacks on IT infrastructure to prepare against.

Spotting threats on the horizon

Early awareness and intervention are key to preventing terror attacks, and it is within these areas that technologies using AI prove useful as counter terror solutions.

AI is being used to analyse data in real time to report intelligence, as it is able to spot both patterns and inconsistencies faster than a human. Software incorporating the technology can be programmed to send alerts to security staff, law enforcement or intelligence agencies on recognising ‘red flags’. This helps to foster early awareness of potential threats and also provides time for security teams to prepare and evacuate the public.

Current security solutions for outsider threats, such as perimeter electronic fencing and CCTV, only become useful once the threat is already on the doorstep. In this scenario, there is a risk of security teams meeting the attacker indoors or later at passenger checkpoints.

But modern counter terror software, which can be programmed to combine various new technological capabilities including facial recognition, object recognition (including weapons and drones) and mobile phone MAC address alerts, is capable of spotting approaching threats from miles away, whether land- or air-based, a known terror suspect or a stolen vehicle, given that the software is housed within the right optics.

To tackle insider threats, counter terror software can be programmed to capture a facial recognition reading which can then raise the alert if a member of staff is on a police or intelligence agency ‘watch-list’ – a known subject of interest to law enforcement – or can raise the alert of a potential threat if a member of staff repeatedly explores areas outside of their parameters or is inexplicably on-site on days off.

Technology that uses AI can also act as a force multiplier when integrated into cameras across a wide area; essentially, the technology acts as extra pairs of eyes that can provide entire-site coverage while security teams are on patrol. Furthermore, modern software can be retrofitted into most existing equipment, including CCTV, body-cams or gates.
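A heavily simplified sketch of that insider-threat logic appears below: it flags badge events that fall outside a staff member's permitted zones or rostered days. The zone and roster data structures are invented for illustration and stand in for whatever access-control feeds a real deployment would integrate; this is not a description of 7TG's products.

```python
# Simplified insider-threat rule: flag access events outside a staff
# member's permitted zones or rostered working days. Data structures are
# invented for illustration; real systems integrate access-control feeds.
from dataclasses import dataclass
from datetime import datetime

PERMITTED_ZONES = {"staff_0042": {"landside", "baggage_hall"}}
ROSTERED_DAYS = {"staff_0042": {"Mon", "Tue", "Wed"}}

@dataclass
class BadgeEvent:
    staff_id: str
    zone: str
    timestamp: datetime

def check_event(event: BadgeEvent) -> list[str]:
    """Return a list of human-readable alerts for one badge event."""
    alerts = []
    if event.zone not in PERMITTED_ZONES.get(event.staff_id, set()):
        alerts.append(f"{event.staff_id} entered unauthorised zone {event.zone}")
    day = event.timestamp.strftime("%a")
    if day not in ROSTERED_DAYS.get(event.staff_id, set()):
        alerts.append(f"{event.staff_id} on site on non-rostered day {day}")
    return alerts

# Example: a Sunday visit to a restricted area triggers both rules.
event = BadgeEvent("staff_0042", "airside_server_room", datetime(2024, 6, 2, 3, 15))
for alert in check_event(event):
    print("ALERT:", alert)
```

In practice such rules would be one input among many, reviewed by a human security team rather than acted on automatically.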
Digital transformation in terminal construction and upscaling counter terror solutions

The building of Heathrow T5 is often credited with setting the standard for new terminal construction. Central to the project’s aspirations was an SME (single model environment) – later a CDE (common data environment) – which made core project information available to all who needed it. This increased understanding and teamworking, and formed a single source of information which helped to improve quality and consistency. Thanks to the use of innovative software, including CAD (computer aided design) models, and a data-focused approach, the T5 project achieved its major design and construction goals on time and on budget.

Planning the incorporation of new technologies into the very foundations of critical infrastructure in the early stages of a project’s design allows project leaders to integrate the anti-terror solutions that would best safeguard the public and suit the conditions of the environment. As an example, for the construction of Ramon International Airport in Israel, landside security infrastructure and technologies were adapted to accommodate the desert terrain and local weather.

As terror threats to civil aviation become increasingly varied and sophisticated, counter terror security must also continue to adapt, providing intelligence on early warning signals to help on-site security teams and law enforcement agencies neutralise terror risks before they become threats to public safety and national security.

Adrian Timberlake is chief technical officer at Seven Technologies Group (7TG), a UK defence manufacturer specialising in the provision of intelligence, surveillance and reconnaissance (ISR) systems.


Looking up: Evolving drone threats

We are in the midst of a global turn to the drone. Following their emergence as central tools of military arsenals in the conduct of remote control warfare, 95 countries worldwide are now estimated to possess drones in active inventory. While we often think first of iconic drones such as the MQ-1 Predator and MQ-9 Reaper, military drones are in fact far more varied in size, role, and capability. Comprising a vast ecosystem, military drone fleets span platforms for the use and purview of the military only, to those commercially available off-the-shelf.

Consumer drones are themselves experiencing a global embrace. Small drones are increasingly deployed and developed as tools across a growing range of roles, from inspection and infrastructure monitoring, security provision, emergency service assistance, image capture and videography, to the delivery of commercial and medical goods. In the UK alone, the number of certified commercial drone operators has leapt from around 400 five years ago to over 5,500. This growth is echoed in the growing number of drone hobbyists, casual users flying drones for fun or sport. Estimates place the number of consumer drones sold monthly worldwide at around 200,000. As drones have become more accessible, affordable and popular, so too have concerns grown around their potential as dangerous and disruptive deployable devices.

Drone threats

While large military drones have long sought technological advantage over adversaries and communities under their violent watch, non-state actors have taken steps to turn the tables.
Here, small drones have proved effective tools, deployed, modified and built by a growing number of non-state actors and militant groups. A growing number of such groups have employed drones both to gather surveillance imagery and propaganda, and to conduct attacks. Ranging from the airborne dropping of targeted explosives to fitting drones with explosives designed to detonate when the platform is being inspected on the ground, small drones have emerged as a malleable feature of contemporary battlegrounds.

Given that, as drone scholar Arthur Holland Michel writes, drones have emerged as the ‘weapon of choice for non-state groups’, concerns mount around the potential of terrorist weaponised drone deployment in non-battlefield domestic contexts. This concern acted as a pillar for the UK Government Defence Committee’s inquiry into the ‘Domestic Threat of Drones’. As those providing evidence to the inquiry noted, there is notable precedent here. In August 2018 a C4-laden commercially available drone was flown towards Venezuelan President Nicolás Maduro as he delivered a speech at a military parade. Remotely triggered in mid-air, the explosion marked what was widely reported as the first assassination attempt via consumer drone.

In mapping drone threats more fully, it’s valuable to distinguish between reckless and malicious drone use. Reckless drone flights include those by individuals not following relevant regulation. This has to date included both those flying unsafely (though without malice) – resulting in cuts, lost eyes, and concussions – and those seeking (tourist) imagery of high-profile sites including the Colosseum, the Eiffel Tower, and the White House.

Reckless flights are, however, increasingly accompanied by malicious ones, whereby individuals deliberately and intentionally misuse drones in undertaking criminal activities. Here we can understand drone weaponisation as the desire both to inflict harm and to cause damage and disruption more widely.

Events at Gatwick Airport in December 2018 are of course notable here. Following reports of drone sightings, the airport suffered serious disruption for over 30 hours – to 800 flights and 120,000 passengers, at a cost of around £50 million – an event which has as yet not resulted in a conviction. These events further acted to inspire the climate-activist group Extinction Rebellion to propose launching drones near Heathrow Airport in seeking operational disruption. While Gatwick arguably remains the largest drone event at an airport, it is far from alone, with many more global airports experiencing smaller-scale operational halts prompted by drone incursions. Disruption is, however, not all that is feared at and around airports. Citing growing close-call and drone-sighting figures from the UK Airprox Board, the British Airline Pilots Association (BALPA) continues to vocalise concerns around the potential risks of a drone-aircraft collision, which it fears could cause ‘critical damage’ to aircraft.

We are also increasingly witnessing illegal drone presence beyond our airfields. Given that drones are primarily associated with the capture of aerial imagery, we are seeing a growth of drones deployed with the aim of obtaining sensitive or personal imagery.
These have spanned those targeting individuals – such as drone ‘stalking’ and flying over cash points – as well as corporations – such as flying over Apple’s campus or film sets – incursions that researchers at the ‘Remote Control’ project argue demonstrate the potential for ‘corporate espionage’ via drone. Drones have also been spotted over a range of sensitive facilities, such as international embassies and naval, submarine and nuclear bases, with governments globally voicing mounting concern around the vulnerability of critical infrastructure both to unauthorised data collection and to drones equipped with harmful materials such as explosives or chemicals.</p> <p>Here, too, there is precedent, with operators equipping drones with a growing range of items, exploiting their carrying capacity. In seeking to convey a political message, operators have, for example, launched drones carrying flags over football stadiums, banners over political rallies, and even radioactive sand over the office of the Prime Minister of Japan. In the context of organised crime, prisons and borders globally have emerged as key sites for drones-as-carriers. In the case of prisons, drones have reportedly carried items as diverse as phones, USB sticks, hacksaw blades and knives, SIM cards and DVD players directly to prison windows. Criminal actors have similarly deployed drones in large-scale smuggling operations, ranging from drugs across the US-Mexico border to mobile phones between China and Hong Kong. Drone-enabled crime is growing in scale and scope.</p> <p><strong>Drone developments</strong><br>While existing malicious drone applications certainly prompt pause, we also need to undertake ‘horizon scanning’ and consider emergent capabilities and their potential to become weaponised. After all, just as drones are becoming more accessible, they are becoming more capable too.</p> <p>In the case of higher-end consumer drones, advancements in ‘intelligent flight’ are of note. Referring to types of flight mode, intelligent flight (as advertised by leading drone manufacturer DJI) includes the ability of drones to lock onto and follow particular points/objects/persons, and to increase speed or ascend/descend rapidly. While marketed as cinematographic techniques, the ‘Committee on Counter-Unmanned Aircraft System Capability for Battalion-and-Below Operations’ warned of the potential of such techniques to “invite creative thinking and engineering”, potentially being re-imagined and weaponised to target an individual or object. Similarly, partnerships between higher-end consumer drone manufacturers and social media giants such as Facebook are enabling users to live-broadcast drone footage, an innovation which could facilitate the live-streaming of drone-captured propaganda.</p> <p>Such developments are accompanied by the re-imagination of the drone itself. While small drones are commonly understood as ‘low and slow’, the emergence of the aerial sport of ‘drone racing’ has seen this status shift. Purportedly capable of travelling at 60-160 miles per hour, such small and swift drones have the potential to disrupt and overwhelm a site, cordon or security provision. Similarly, while drones are often imagined operating in isolation, experimentation continues apace both within and beyond military environments in the area of drone swarming – namely, drones flying collaboratively in group formation.
Such cooperative drones could likewise act to disrupt and overwhelm, as was demonstrated in 2018 by a group of criminals who deployed a small swarm of drones over an FBI hostage operation to keep an eye on its actions, with an official stating that the high-speed low passes left the FBI ‘blind’.</p> <p>Such deployments speak to a second kind of drone re-imagination, one that drone researcher Don Rassler refers to as ‘improvised innovation’, or DIY drone modification. Here, to more fully understand the potential drone threat, we can turn to hobbyist community experiments with drones. As is readily visible on YouTube, DIY drone hobbyists have equipped drones with both functioning weapons – including flamethrowers, chainsaws, handguns, tasers, and paintball and BB guns – and other appendages, such as graffiti cans, fireworks, lighters, and DIY agricultural sprayers. While predominantly undertaken playfully rather than maliciously, such DIY experimentation nonetheless reveals how drones could be adapted to inflict harm, cause damage, or disrupt a site or event.</p> <p><strong>Countering drones</strong><br>How then to counter drones? As articulated in a report authored by researchers at the US Center for the Study of the Drone, a suite of responses to errant and rogue drones has emerged – ranging in form from the regulatory and legislative to the technological. Profiling 537 counter-measures, the report details the range of styles making up the current technological counter-measure market – from those that alert you to a drone’s presence to those that block or even take over control. While a flourishing market, as the report details, many counter-measures remain confounded by challenges: the short decision-making window given the drone’s speed of travel and counter-measure device range (a racing drone travelling at 100 miles per hour, for example, covers a 500-metre detection radius in roughly 11 seconds); the potential hazard of the drone if rendered a falling object; counter-measure legality and potential interference with communications systems; cost; and a lack of testing data. When combined with the challenges posed by increasingly accessible drones (including those available through second-hand and informal selling), the ability to pre-programme flights, the potential distance of the operator from their drone, and the ambiguity of what a drone might be doing, a picture emerges of the drone as a challenging object to govern and police.</p> <p>As the UK’s Brandon Lewis MP and Baroness Vere of Norbiton remind us, ‘there is no technological silver bullet suitable for use against all drones’. It is to this end that those regulating and legislating drones in UK airspace have actively pursued both education (see the Drone Safe website) and accountability (via mandatory registration and amendments to the Air Navigation Order), while increasing police powers. October 2019 further saw the launch of the UK’s Counter-Unmanned Aircraft Strategy, outlining an approach to assess evolving risk, build relationships with industry, and empower police.</p> <p>While it is laudable to see growing resources dedicated to mitigating and managing rogue drones, as the UK’s own airspace shifts to welcome growing numbers of commercial and civil drones, the complexity of this task also grows. As skies grow busier, so too might nefarious operators adapt – copying platform markings or duplicating the flight patterns of legitimate users to cause confusion.
While it is the mobile drone’s greatest asset, its versatility remains a double-edged sword.</p> <p><strong><em>This article was written by Dr Anna Jackman, lecturer in Political Geography at <a href="https://www.royalholloway.ac.uk/research-and-teaching/departments-and-schools/geography/" target="_blank">Royal Holloway</a>, University of London.</em></strong></p> <p><strong><em>Anna is an active drone researcher and has published on consumer drone threats. Anna was appointed Specialist Adviser for the House of Commons Science and Technology Committee Inquiry into the ‘Commercial and recreational use of drones in the UK’, and contributed evidence to the Defence Committee’s ‘Domestic threat of drones in the UK’ Inquiry.</em></strong></p> <p><strong><em>Anna can be found on Twitter <a href="https://twitter.com/ahjackman" target="_blank">@ahjackman</a>.</em></strong></p> Fri, 04 Sep 2020 13:31:27 +0000 Michael Lyons 14997 at /features/looking-evolving-drone-threats#comments The application of UAVs in managing port security /features/application-uavs-managing-port-security <div class="field-item even"><img typeof="foaf:Image" src="/sites/default/files/styles/696x462_content_main/public/southampton_harbour_0.jpg?itok=o9qQPudy" width="696" height="464" alt="" title="The application of UAVs in managing port security" /></div><div class="field-item even"><a href="/features/technology" typeof="skos:Concept" property="rdfs:label skos:prefLabel" datatype="">Technology</a></div><p><em>Risto Talas and Tom Ellis, from the University of Portsmouth, write an article alongside Commander Suwaid Al Abkal, Kuwait Navy, discussing the application of UAVs in port security with a view to enhancing security and complementing existing security regimes</em></p> <p>Unmanned aerial vehicles (UAVs), also known as drones, have proliferated due to improvements in the weight and power of batteries and electric motors. UAVs are now common among hobbyists and among commercial organisations involved in survey work, film and photography. UAVs are now beginning to be deployed more in the monitoring of safety and security, and this study aims to reveal the extent of this as applied to maritime ports.</p> <p><strong>Types of UAV</strong><br>There are numerous types of UAV currently in operation, from nanodrones, microdrones and minidrones, which are widely available for purchase by civilians, to short-range and long-range Medium Altitude and Long Endurance (MALE) and High Altitude and Long Endurance (HALE) drones, which are typically employed by military forces. All these vehicles have some similarities, such as the lack of humans aboard them and control via radio/infrared communication.</p> <p>The ranges, weights, payloads and altitudes of these vehicles also vary. Their power units offer flight times of between 12 minutes and over 24 hours; their maximum take-off weight ranges between 0.4 kg and more than three tons; and they can fly at heights between 30 metres and over 10km. Talas (2016) states that the current regulations for the commercial flying of UAVs differ from nation to nation. In the UK, UAV regulation is governed by the Civil Aviation Authority’s (CAA) Unmanned Aircraft System Operations in UK Airspace: Guidance. This guidance, also known as CAP722, states that UAVs operating in the UK must meet at least the same safety and operational standards as manned aircraft.
UAVs are classified into three categories: those which weigh up to 20kg; those which weigh between 20kg and 150kg; and those which weigh more than 150kg.</p> <p>In the UK, drone hobbyists are not required to register their UAVs, nor do they need operating permission from the CAA or a pilot qualification. Furthermore, drones must not be flown within 50 metres of people, nor over or within 150 metres of any congested area or of an organised open-air assembly of more than 1,000 persons. The drone must be flown within visual line of sight, meaning that direct, unaided visual contact is maintained with the aircraft, sufficient to monitor its flightpath in relation to other aircraft and persons on the ground. The drone must remain within 500 metres horizontally and no more than 400 feet vertically from the operator. If an operator, including one flying commercially, wishes to fly within 50 metres of people or within 150 metres of a congested area, then prior permission must be obtained from the CAA. Where UAVs are deployed for commercial purposes, the pilot must also have undergone training for at least the basic national UAS certificate for small unmanned aircraft.</p>
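<p>To make the interplay of these limits concrete, the short sketch below encodes them as a pre-flight check. It is a minimal illustration only, assuming the 2019-era CAP722 figures quoted above; the class, field and function names are hypothetical, and it is no substitute for the guidance itself.</p> <pre>
from dataclasses import dataclass

# Illustrative sketch only. The numeric limits are the 2019-era CAP722
# figures quoted in the article; all names here are hypothetical.

@dataclass
class PlannedFlight:
    min_distance_to_people_m: float     # closest planned approach to people
    min_distance_to_congested_m: float  # closest approach to a congested area
    max_horizontal_range_m: float       # furthest distance from the operator
    max_altitude_ft: float              # highest planned altitude
    caa_permission: bool = False        # prior CAA permission obtained?

def check_flight(flight: PlannedFlight) -> list:
    """Return a list of breaches of the quoted CAP722 limits."""
    breaches = []
    if flight.min_distance_to_people_m < 50 and not flight.caa_permission:
        breaches.append("within 50 m of people without CAA permission")
    if flight.min_distance_to_congested_m < 150 and not flight.caa_permission:
        breaches.append("within 150 m of a congested area without CAA permission")
    if flight.max_horizontal_range_m > 500:
        breaches.append("more than 500 m horizontally from the operator")
    if flight.max_altitude_ft > 400:
        breaches.append("above 400 ft")
    return breaches

# Example: a plan that strays too high and too close to bystanders.
for breach in check_flight(PlannedFlight(40, 200, 450, 420)):
    print("Breach:", breach)
</pre> <p>Run against a plan, the check simply lists which of the quoted limits a flight would breach; a port security team could extend the same idea with site-specific no-fly zones.</p>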
<p>In a port, a UAV could be deployed on regular perimeter checks to assess any fence-line breaches, or to overfly buildings to check that roof access doors have not been left open. Furthermore, checks can be made from the air on any restricted areas in the event of the failure of another detection system, such as CCTV cameras.</p> <p><strong>Potential to be exploited by terrorist organisations</strong><br>The constantly growing market in commercial UAVs raises concern among specialists who perceive the increased risk posed to security. The fact that UAVs are readily available to virtually anyone increases the possibility of their misuse for reconnaissance and surveillance for criminal purposes. The level of development which permits these devices to carry elaborate imaging equipment and sizeable payloads generates additional reasons for concern. The price of this equipment is another feature which renders them highly accessible to the general public. Moreover, it is highly probable that the rapid expansion of the market will push prices down even further.</p> <p>Specialists in the field point out that there are already UAVs available for under €900 that can transport a payload of approximately 1kg, fly for at least 5km, and carry full GPS. If UAVs are equipped with both GPS and autopilot, they can fly independently along pre-established routes to deliver any type of payload.</p> <p>The following categories represent some of the key security risks generated by UAVs:</p> <p>Reconnaissance and surveillance – UAVs could be used to identify potential targets or to conduct surveillance missions informing their operator of the actions undertaken by legitimate individuals in private, public or military establishments. The growing number of unregistered civil UAVs makes it difficult to rapidly establish whether such a device is being operated for recreational use, to support news-gathering and similar activities, or with criminal intent.</p> <p>Smuggling – since some UAV models can carry significant payloads, one existing concern is that these vehicles could be used to transport illicit goods. There have already been incidents which motivate these concerns, most involving attempts to introduce contraband into prisons or to transport drugs across borders.</p> <p>Kinetic attack – this type of threat also relates to UAVs’ ability to carry a payload. The nature of the payload, combined with the intentions of those operating the vehicle, determines the nature of the risk posed. If the payload consists of guns or explosives flown into people or structures, such an attack may result in loss of life or material damage. The list of potential targets is virtually unlimited but, importantly, could include important and strategic infrastructure.</p> <p><strong>Results and discussion</strong><br>A total of 66 respondents participated in the survey for our study. The data were collected from Kuwaiti port security staff using online tools, while taking into account relevant ethical considerations. The first question concerned the respondents’ field of expertise, and the results are summarised in Table 1.</p> <p><img alt="Respondents’ field of expertise" class="image-within_content_" height="160" src="/sites/default/files/styles/within_content_/public/table_1.png?itok=IpIouEJ7" title="Respondents’ field of expertise" width="300"></p> <p>As Table 1 shows, the majority of the respondents were governmental organisation officials. Only 12 per cent of the respondents were UAV experts, while six per cent were suppliers of UAV equipment. 14 per cent of the respondents were port security experts, which increases the reliability of the data they provided. The next question referred to the respondents’ familiarity with UAVs, with the results shown in Table 2.</p> <p><img alt="Familiarity with UAVs" class="image-within_content_" height="160" src="/sites/default/files/styles/within_content_/public/table_2.png?itok=eWnt6bc7" title="Familiarity with UAVs" width="300"></p> <p>As the results show, only 28.6 per cent of respondents indicated that they were very familiar with the devices in question, although a further 44 per cent declared that they were fairly familiar with them. Under a quarter (22 per cent) said they were slightly familiar with UAVs, while only two respondents (three per cent) claimed they knew nothing about UAVs. The third question of the survey was ‘how serious do you consider the threat from UAVs to port security?’ As shown in Table 3, 74.6 per cent of the respondents considered that the threat UAVs posed to port security was either serious (38 per cent) or very serious (36.5 per cent).</p> <p><img alt="Seriousness of threat from UAVs" class="image-within_content_" height="160" src="/sites/default/files/styles/within_content_/public/table_3.png?itok=Znr0RZ-y" title="Seriousness of threat from UAVs" width="300"></p> <p>As observed in the literature, before 9/11 the most significant threats to port security were drug smuggling and organised crime; after those attacks, terrorism became a significant threat. One of the most significant current threats to port security comes from cyber attacks, which could be conducted by terrorist groups using UAVs.</p> <p>The fourth question was ‘please rank accordingly the risk of UAVs in terms of a security threat where 1=highest risk and 5=lowest risk’. The results are shown in Table 4. Respondents most often identified an act of terrorism as the highest risk, with 40 per cent ranking it first.
Furthermore, 20.6 per cent of the respondents identified illegal surveillance using UAVs as a high risk.</p> <p>It is also important to observe from Table 4 that the respondents considered there to be only a limited risk of UAVs being used to smuggle weapons into the secure areas of the port, despite the fact that UAVs are capable of transporting a small explosive device (approximately 1kg) into secure locations.</p> <p><img alt="Security risks from UAVs" class="image-within_content_" height="160" src="/sites/default/files/styles/within_content_/public/table_4.png?itok=tAQIU4Ri" title="Security risks from UAVs" width="300"></p> <p>The fifth question was ‘In your opinion how effective could the deployment of UAVs in a port facility complement existing security measures?’ The deployment refers directly to the monitoring requirements prescribed in the ISPS Code, namely the monitoring of the ship-port interface, port areas, and ships’ stores (see Table 5).</p> <p><img alt="Effectiveness of UAVs to complement existing security measures" class="image-within_content_" height="160" src="/sites/default/files/styles/within_content_/public/table_5_0.png?itok=Ov8iWcEl" title="Effectiveness of UAVs to complement existing security measures" width="300"></p> <p>Despite the security risks that the respondents identified in relation to the use of UAVs by others, Table 5 demonstrates their confidence that UAVs can also be used legitimately to increase port security. Sixty-six per cent of the respondents believed that UAVs could be effective in monitoring port areas that do not benefit from standard surveillance methods. Furthermore, 57 per cent of the respondents considered that UAVs could be very effective in monitoring ship-to-port interfaces.</p> <p>The sixth question was ‘In your opinion should it be mandatory for UAV ownership to be registered with local or state authority?’ (see Table 6).</p> <p><img alt="Mandatory registration of UAVs with a local or state authority" class="image-within_content_" height="160" src="/sites/default/files/styles/within_content_/public/table_6.png?itok=7G076aUt" title="Mandatory registration of UAVs with a local or state authority" width="300"></p> <p>As we can see from Table 6, nearly three-quarters of respondents (73 per cent) strongly agreed with the registration of ownership with the local or state authorities. This reflects the fact that the number of civilian users of UAV devices is increasing and that the authorities have no means of determining the purpose for which a UAV is being purchased, which increases the risks associated with civilian use. Registration would not breach a person’s right to privacy, only regulate their use of a device that could potentially be employed with criminal intent. Moreover, registration would associate the owner with a specific device identified by a unique serial number. This can both increase the likely legitimacy of the device user and make it easier for the authorities to identify the owner of a device used to commit a criminal act. Only one respondent expressed strong disagreement with the registration of UAV ownership.</p> <p>The study has revealed some interesting findings regarding the application of UAVs in port security with a view to enhancing security and complementing existing security regimes.
It is also noted that the presence of UAVs controlled with criminal intent can be perceived as a threat to port operations, hence the high proportion of respondents calling for UAVs to be registered with a local or state authority. There is scope for the study to be extended to cover a wider geographic area and to address other nodes in the supply chain beyond ports.</p> <div class="field-item even"><a href="http://www2.port.ac.uk/institute-of-criminal-justice-studies/staff/dr-risto-talas.html" target="_blank" title="nofollow">www.port.ac.uk</a></div> Fri, 30 Aug 2019 07:59:10 +0000 Michael Lyons 14504 at /features/application-uavs-managing-port-security#comments Building a critical network of support for first responders /features/building-critical-network-support-first-responders <div class="field-item even"><img typeof="foaf:Image" src="/sites/default/files/styles/696x462_content_main/public/communications.jpg?itok=7cq7wTjp" width="696" height="586" alt="" title="Courtesy of Chula Vista Police" /></div><div class="field-item even"><a href="/features/technology" typeof="skos:Concept" property="rdfs:label skos:prefLabel" datatype="">Technology</a></div><p><em>Tony Gray and TJ Kennedy explore the new technologies enabling better and more connected critical communications within the public safety sector</em></p> <p>First responders such as police, fire and rescue and medical services have always relied on high-quality voice communications, built on designed-for-purpose narrowband technologies such as TETRA and P25 with dedicated network implementations and spectrum. The availability, security and reliability of these technologies confer on them the right to be termed ‘mission critical’, since lives can depend on the ability of the user to immediately connect and communicate.</p> <p>These traditional narrowband technologies are very effective in their ability to carry mission critical voice services, and they also have some mission critical data capability, such as sending short messages and images. However, with the availability of mobile broadband, the potential for harnessing a new range of critical data services to enhance the work of first responders is in the process of being realised.</p> <p><strong>Developing mission critical broadband</strong><br>Currently, work is underway to specify the mission critical features required by first responders for commercial LTE/4G networks and to incorporate these into open technology standards. This work was first catalysed by TCCA in 2012, and development and testing work is ongoing in 3GPP to ensure the standards meet the needs of critical users, and that products and services under development adhere to the standards specifications. 3GPP is the organisation that unites telecommunications standards development bodies around the world and provides their members with a stable environment to produce the reports and specifications that define 3GPP technologies. TCCA is the 3GPP Market Representation Partner for critical communications, ensuring that the needs of the mission critical market are addressed in the standards development process.</p> <p>Once the standards are specified, a thorough testing process is needed to help validate them and to accelerate the time to market for mission critical products. These tests are called Plugtests™, and are run by the European Telecommunications Standards Institute (ETSI) – initially founded to serve European needs but now with a global perspective.
ETSI is a 3GPP organisational partner and one of its roles is to help develop 4G and 5G mobile communications.</p> <p>Earlier this year, ETSI completed its third MCX Plugtests™ event (MCX is the combined term for Mission Critical Push to Talk (MCPTT), MCDATA and MCVIDEO). These Plugtests ensure real-world interoperability between implementations and compliance with the open standards. TCCA provides key technical support for the Plugtests, which are also endorsed by PSTA.</p> <p>This work will eventually enable mobile broadband networks to offer mission critical capability, and allow first responders to take full advantage of the plethora of data services that can enhance their work in the protection of people and property.</p> <p><strong>The rise of the IoLST</strong><br>As a complement to the emerging mission critical mobile broadband services, there is huge interest in the potential of the lifesaving side of the Internet of Things (IoT) and its applications, and in how this network of connected devices can assist first responders.</p> <p>There are billions of devices connected to the IoT, with sensors collecting and sharing data in real time, and there is a growing subset of the IoT known as the IoLST – the Internet of Life Saving Things. IoLST devices are those that help protect individuals, communities and infrastructure, and which can support first responders in their daily operations.</p> <p>The availability and variety of these devices is increasing each day. They include sensors and devices in ‘smart’ cities, which are in many instances considered part of the IoLST and can be used to improve the response in an emergency. Examples of sensors that could be accessed to share critically important information in emergencies include those associated with street cameras, highway/traffic monitoring, and building and public surveillance. Other applications include public panic buttons, facial recognition technology, and gunshot and audible recognition sensors.</p> <p><strong>Smart devices need smart analytics</strong><br>The number of devices and the amount of data identified and collected by the IoLST are anticipated to grow exponentially in the next several years, so it is important to be able to evaluate the data efficiently, as it only becomes useful information once analysed. Take facial recognition as an example: the analytics need to be smart enough to identify the same face taking the same route several times and flag it as suspicious. For surveillance, the analytics need to be sophisticated enough to recognise the lone package in an airport that hasn’t been moved within critical timescales, and send an alert to the authorities.</p>
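<p>As a toy illustration of the kind of rule the unattended-package example implies, the sketch below tracks how long each detected object has remained in roughly the same place and raises an alert once a dwell limit is exceeded. Every name, threshold and data feed in it is an assumption: a real video-analytics pipeline would sit on top of an object tracker and a camera feed rather than raw coordinates.</p> <pre>
import time

# Hypothetical dwell-time monitor: flag any tracked object that has stayed
# within a small tolerance of its first-seen position for too long.
DWELL_LIMIT_S = 300         # assumed 'critical timescale' of five minutes
POSITION_TOLERANCE_M = 1.5  # drift still counted as 'not moved'

class DwellMonitor:
    def __init__(self):
        self.first_seen = {}  # object_id -> (x, y, first_seen_timestamp)

    def update(self, object_id, x, y, now=None):
        """Feed one detection; return an alert string if the limit is exceeded."""
        now = time.time() if now is None else now
        if object_id not in self.first_seen:
            self.first_seen[object_id] = (x, y, now)
            return None
        x0, y0, t0 = self.first_seen[object_id]
        if ((x - x0) ** 2 + (y - y0) ** 2) ** 0.5 > POSITION_TOLERANCE_M:
            # The object has moved: restart its dwell clock.
            self.first_seen[object_id] = (x, y, now)
            return None
        if now - t0 > DWELL_LIMIT_S:
            return "ALERT: object %s stationary for %.0f seconds" % (object_id, now - t0)
        return None

# Example: a bag detected at almost the same spot six minutes apart.
monitor = DwellMonitor()
monitor.update("bag-17", 12.0, 4.0, now=0)
print(monitor.update("bag-17", 12.2, 4.1, now=360))
</pre> <p>In practice such a rule would run alongside many others over the same feeds, with thresholds tuned per site; the point is simply that the ‘smartness’ called for here is encoded logic, not raw data.</p>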
<p>Many consumer devices can also assist first responders, and have huge potential via the IoLST. Connected medical devices can provide key information to help monitor chronic conditions such as diabetes and asthma, and enable remote intervention so patients do not always have to travel to the hospital for routine checks. This is not only more convenient for the patient, but frees up the time of health professionals to increase their availability for critical care.</p> <p>Smart watches and fitness trackers have been widely adopted by consumers, and can send emergency messages if they detect a dangerous health issue, alerting health professionals to a potential heart attack victim, for instance, and sending the exact location of the casualty. There are also ‘connected pills’ that send a message to health carers once digested by the patient, and smart medicine dispensers that record and transmit usage. Both of these can enable a higher success rate for prescribed treatments, especially for the elderly, and health professionals need only intervene when necessary rather than constantly checking up on patients.</p> <p>Virtual assistants such as Amazon’s Alexa, as well as home monitoring and security systems, also have the ability to receive feedback from sensors to alert homeowners and public safety professionals in the event of a suspected burglary, or of a sudden rise in temperature indicating a fire. These connected devices can also provide status reports, including real-time video. In the future it is likely that photos, video and situational data sent to emergency services via 112, 911 or 999 will be rapidly analysed and converted into actionable information, so home assistants currently used for personal convenience will become IoLST assets for first responders.</p> <p><strong>Connected devices for critical support</strong><br>For fire and rescue services, the use of connected unmanned aerial vehicles (UAVs) – or drones – to scope out the extent of a wildfire, or to give an accurate overview of a road or rail crash, is becoming more common. Thermal imaging can pinpoint the heart of a fire, with video and images sent to incident command and control. Land-based robot drones that can ‘see’ through smoke are sent into burning buildings to transmit images of the situation, or into incidents involving hazardous materials, so firefighters can assess the best way to respond. This not only keeps the firefighters safer, it can improve the outcome of the response.</p> <p>Police forces are increasingly adopting body cameras that can record unfolding events for post-situation analysis and evidence, and the newer versions can live-stream video from an incident over broadband networks. This live streaming capability is a critical tool for police on the front line, sending crucial information to command and control to enable the most informed response to an incident and improve officer safety. It is important to note, however, that each country will have its own rules and regulations around data privacy and civil liberties, so implementing devices such as body cameras and drones is not a simple or automatic process.</p> <p>It is clear that public safety and commercial users are recognising that there are many new tools that can support the work of first responders, and that can provide insights to manufacturers, service providers, public safety bodies and government regulators to ensure these technologies are developed and deployed in a way that best serves and supports our first responders.</p> <p><strong>Case study – Drones as First Responders</strong><br>In December 2015 the Chula Vista Police Department in California formed the Unmanned Aerial Systems (UAS) Committee to study the use of the technology in its public safety operations. UAS Committee members met dozens of times to study best practices, policies, and procedures regarding the use of UAS technology in law enforcement.
A special focus of the team’s research was an effort to address concerns about public trust, civil liberties, and the public’s right to privacy during the operation of CVPD UAS systems.</p> <p>Prior to implementing its UAS Program, CVPD discussed its plan for UAS operations in the media, in public forums, and in information about the project posted on the CVPD website. This outreach included a mechanism for the public to contact or email the UAS Team to comment on CVPD’s UAS policy, or to express concerns or provide feedback. It is important to note that, out of respect for civil liberties and personal privacy, CVPD’s UAS Policy specifically prohibits the use of UAS systems for general surveillance or general patrol operations. After exhaustive planning and research, CVPD activated its UAS Program in the summer of 2017 to support tactical operations by CVPD first responders.</p> <p>Since October 2018, and with strong support from the community, Chula Vista Police has been deploying drones from the rooftop of the Police Department Headquarters to 911 calls and other reports of emergency incidents such as crimes in progress, fires, traffic accidents, and reports of dangerous subjects. This Drones as a First Responder (DFR) system is transformational in providing first responders with something they have never had before: an immediate aerial perspective of the situation, since the drones are deployed to the incident and arrive well before ground units.</p> <p>The on-board camera streams HD video back to the department’s real-time crime centre, where a teleoperator – a trained critical incident manager – not only controls the drone remotely but communicates with the units in the field, giving them information and tactical intelligence about what they are responding to. The system also streams the video feed to the cell phones of the first responders and supervisors on the ground, so they can see exactly what the drone is seeing.</p> <div class="field-item even"><a href="http://www.tcca.info" target="_blank" title="nofollow">www.tcca.info</a></div> Thu, 18 Apr 2019 14:46:23 +0000 Michael Lyons 14353 at /features/building-critical-network-support-first-responders#comments