03 July 2024
With the 4th July election almost upon us, threats to democracy lie in wait. From ransomware to AI, which are the most significant dangers to a fair and open election – and how will they be addressed?
As the UK heads towards the polls on – auspiciously – 4th July, the cyber threat landscape is front and centre, both for those who seek to attack and expose weaknesses and for those tasked with defending and ensuring the integrity of our democracy.
“Bad actors may use combinations of simple and sophisticated methods, including social engineering, phishing and malware approaches with the intent to gain unauthorised access to online services. If successful, this may lead to the spreading of false information and the possibility of gaining a strong foothold to execute potentially more disruptive cyber-attacks,” says Christian Reilly, field CTO EMEA, Cloudflare.
In the most exciting General Election of a generation, the chips are down, and the pressure is well and truly on.
The cyber threat
Daniel Schwalbe, CISO & VP IT, DomainTools, reports the welcome news that the UK’s election system is largely immune to network-based attacks or attempts to manipulate vote counts electronically, as all governmental elections are still conducted entirely using traditional paper ballots.
“Elections in the UK rely on a mixture of citizen volunteers and government employees to manage voting locations and to conduct counts after the polls have closed,” reports Schwalbe. “Vote counts are done by hand, under the oversight of independent observers and party representatives. This system creates checks and balances that make vote count manipulation at scale impractical.”
President Donald Trump infamously claimed that the 2020 United States presidential election was rigged by means of tampered voting machines, electoral fraud, and an international communist conspiracy. However, experts like Schwalbe assert that this kind of tampering is rare across the developed world; much larger threats exist with the power to truly sway an election.
Political interference is a particular concern this year – whether from domestic actors engaging in unethical practices such as voter suppression and the dissemination of false information, or from foreign governments and entities using cyber-attacks or heavily funding specific parties – especially amid ongoing geopolitical tensions and the threat of World War III.
“The UK government has previously indicated that hostile nation states including China, Iran, Russia and North Korea pose a significant threat to election integrity due to their advanced cyber warfare capabilities. However, when considering the threat landscape and how to defend against it, consideration should also be given to the potential threat from organised crime groups and ‘lone wolf’ hackers, whose motivations may be different but who may still cause disruption if their attacks are successful,” adds Reilly.
Cybersecurity threats will be another major problem. Attackers could target election infrastructure, such as voter registration databases and election management systems, disrupting the election process, stealing sensitive data, and manipulating results. Meanwhile, ransomware attacks could lock election officials out of crucial systems or data, demanding a ransom to restore access, which could cause significant delays and disruptions.
“The UK’s National Cyber Security Centre (NCSC) has repeatedly stated that, in their opinion, ransomware is the largest single threat facing the public and private sector alike,” says Reilly. “Therefore, we should consider the possibility and impact of a ransomware event to be a strong one for the forthcoming elections. Recently, a number of local authorities in the UK have been subject to successful ransomware attacks which have affected various critical services including voter registration and validation.”
“Ransomware is a considerable threat to the upcoming UK general election, largely due to the evolving sophistication of cyberattacks, which are increasingly powered by AI,” agrees Srinivas Mukkamala, AI leader and CPO, Ivanti. “Generative AI technologies might amplify cybersecurity risks by making sophisticated attacks more accessible. The potential attacks could range from those targeting essential infrastructure like voter databases and result reporting systems to those affecting a wide array of individuals including MPs and civil servants, all aimed at creating chaos. The implications of such disruptions could be severe.”
Richard Hummel, threat intelligence lead at NETSCOUT, warns that the rise of politically motivated DDoS hacktivism is a major concern: “The months leading up to the UK general election have seen a shift in the global cybersecurity landscape towards a trend of politically motivated DDoS attacks. An unprecedented number of attacks were launched by hacktivist groups such as NoName057(016) and Anonymous Sudan, targeting opponents for geopolitical causes, as well as waging political and religious war against any nation or official that stands in the way of their ideals. In terms of election-related DDoS attacks, cybercriminals can impede voting processes in several ways, such as overwhelming voter information and registration sites, disrupting campaign websites and targeting official results reporting. The outcomes of these votes can also lead to an uptick in cyberattacks. For example, Poland experienced a surge in DDoS attack activity at the hands of NoName057(016) in late December 2023 after the swearing-in of its new Prime Minister, Donald Tusk, who expressed the nation’s support for Ukraine, which stands in direct opposition to the threat actors’ interests.”
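Rate limiting is one of the basic defences against the request floods Hummel describes: each client gets a refillable allowance of requests, and traffic beyond it is throttled. A minimal token-bucket sketch (purely illustrative, not drawn from any quoted vendor’s product) shows the idea:

```python
import time

class TokenBucket:
    """Token-bucket rate limiter: allows roughly `rate` requests per
    second per client, with short bursts of up to `capacity` requests."""

    def __init__(self, rate: float, capacity: int):
        self.rate = rate          # tokens replenished per second
        self.capacity = capacity  # maximum burst size
        self.tokens = float(capacity)
        self.last = time.monotonic()

    def allow(self) -> bool:
        now = time.monotonic()
        # Refill in proportion to elapsed time, capped at capacity.
        self.tokens = min(self.capacity,
                          self.tokens + (now - self.last) * self.rate)
        self.last = now
        if self.tokens >= 1:
            self.tokens -= 1
            return True
        return False  # over the limit – request should be throttled

# Hypothetical policy: 5 requests/second per client, bursts of 10.
bucket = TokenBucket(rate=5, capacity=10)
results = [bucket.allow() for _ in range(15)]
```

In a real deployment one bucket would be kept per client IP (or per authenticated session), typically at the edge of the network, so that a flood from any single source is absorbed before it reaches the registration or results service behind it.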
Guiding political candidates
Ahead of the local elections, European Parliament elections and General Election taking place over the 12 months from May 2024, the National Cyber Security Centre (NCSC) has published an updated guide on cyber-attacks that target election candidates or the political parties they represent.
The publication includes guidance on identity and access management policies; enhancing website security; preventing digital impersonation; educating constituents about misinformation; and preparing for ransomware and deepfake attacks.
“We work closely on an ongoing basis with the local authorities who have responsibility for the running of the elections, as well as the Electoral Commission; however, we are also conscious of the impacts of any potential cyber-attack on the candidates themselves, or the parties they represent. It is vital that we guard against the potential for an attack at any point within the electoral cycle, and that we continue to raise awareness amongst those taking part in elections of the importance of strengthening the security and resilience of the ICT systems and devices they are using,” states NCSC Director Richard Browne.
AI: influencing an election
For the first time in UK history, AI may well play a significant role in deciding the outcome of the General Election, possibly influencing voters or even interfering with results.
“AI tools can be used to interfere with the UK’s upcoming General Election in several ways: disinformation campaigns using deepfakes and social media manipulation; bots and fake accounts; cyber-attacks on voting infrastructure, with AI helping cybercriminals to exploit vulnerabilities; and DDoS attacks to disable election-related websites and online services, such as voter registration portals or election result reporting sites,” reports Reilly.
Industry experts warn that AI can be used to create and spread false information rapidly. Deepfakes, fake news, and misleading social media posts can sway public opinion or create confusion among voters.
“The biggest opportunity for hostile actors to attempt to influence the election would likely be through disinformation campaigns using social media and other electronic communications channels such as Telegram, for example,” asserts Schwalbe.
“While there’s currently no evidence that AI can directly disrupt the electoral process, its role in influencing public opinion through misinformation is a significant concern,” shares Mukkamala. “For instance, a BBC investigation revealed that young voters in critical electoral regions are being targeted with AI-generated fake videos. Given this, it’s vital for voters to be aware of and carefully consider potential shortcomings of AI, such as unintended bias, erroneous baseline data, and/or ethical considerations.”
The general public should be aware that AI algorithms can analyse vast amounts of data to create targeted political advertisements, which can be tailored to specific groups or individuals based on their online behaviour, potentially influencing voting decisions. Moreover, automated bots can amplify political messages or disrupt online discourse by spamming, trolling, or manipulating social media trends. Media manipulation is another pressing concern where AI tools can generate convincing fake content, making it harder for voters to distinguish between genuine and false information.
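One simple heuristic platforms can apply against this kind of bot amplification is flagging clusters of near-identical posts, which coordinated accounts tend to produce. A toy sketch (the normalisation rules and threshold here are illustrative assumptions, not any platform’s actual pipeline) might look like:

```python
import re
from collections import Counter

def normalise(post: str) -> str:
    """Strip URLs, punctuation and case so near-identical copies collide."""
    post = re.sub(r"https?://\S+", "", post.lower())
    return re.sub(r"[^a-z0-9 ]", "", post).strip()

def flag_amplified(posts: list[str], threshold: int = 3) -> set[str]:
    """Return normalised texts that appear at least `threshold` times."""
    counts = Counter(normalise(p) for p in posts)
    return {text for text, n in counts.items() if n >= threshold}

# Hypothetical sample: three accounts pushing the same message
# with cosmetic variations, plus one unrelated post.
posts = [
    "Vote early! https://example.com/a",
    "VOTE EARLY!!",
    "vote early http://example.com/b",
    "The weather is nice today",
]
flagged = flag_amplified(posts, threshold=3)
```

Real moderation systems go much further – fuzzy matching, posting-time correlation, account-age signals – but the core idea of collapsing superficially varied copies into one fingerprint is the same.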
“Media has influenced people’s political opinions since the print press was born, but we have seen a huge increase in polarisation and partisanship since the advent of the internet and especially social media,” says John Smith, Veracode EMEA CTO. “The danger comes when people use social media as their news source. Many platforms are rife with misinformation and malicious actors use targeted ads to skew voters’ opinions in one direction or another. Influencing votes and misleading voters has become even easier with generative AI being not only very widespread, but also so easy to use to create fake audio and video clips of prominent political figures.”
Indeed, “fears around AI interfering with the upcoming election have only grown in recent months as political figures, including Prime Minister Rishi Sunak, have had their identities spoofed in deepfake videos or audio clips, falsely portraying them as saying politically damaging things,” says Stuart Wells, chief technology officer, Jumio. “As deepfakes have caused havoc during other elections, UK electoral candidates have since been warned that anyone involved in the election process could be targeted by online disinformation and offered guidance on reducing the likelihood of a deepfake attack. It’s clear that deepfakes and disinformation have the power to undermine trust in our democratic system.”
In the immediate term, organisations providing media platforms must remain vigilant and proactive, working in tandem with government efforts to safeguard the electoral process from the misuse of AI-generated content.
“They can also look to collaborate with vendors who offer state-of-the-art deepfake detection pipelines that can detect and prevent deepfake-based disinformation,” adds Wells.
Maintaining democracy
The integrity of the UK’s general election is under threat and requires a coordinated effort from government agencies, election officials, cybersecurity experts, and the general public to ensure a secure, fair, and transparent electoral process.
The UK’s IT sector can play a crucial role in supporting elections, leveraging their expertise to ensure the process is efficient, transparent, and secure. From protecting election infrastructure and monitoring threats to maintaining systems and software and providing infrastructure support through the cloud, there is a great deal to do in a very limited time, given the short notice provided.
“The UK’s IT sector is pivotal in enhancing election security, from providing essential IT infrastructure for ballot processing to safeguarding critical data. With the rise of AI and associated risks, there is a growing need for robust IT support,” says Mukkamala. “By establishing clear guidelines and a flexible regulatory framework, the IT sector can empower the public sector to use AI responsibly, maximising its benefits while ensuring electoral integrity.”
“Ahead of the election, government organisations, service providers, and enterprises should be prepared for DDoS attacks to increase significantly,” asserts Hummel. “This necessitates organisations implementing industry best current practices (BCPs) in conjunction with ensuring their DDoS protection solutions are up to standard and ready to take on the threat that hacktivist groups pose.”
As an industry, the UK’s IT sector has a collective opportunity to continue to guard and protect the democratic elections by combining technologies, skills and experiences to help defend, identify and fact check systems and sources that are used to provide critical online services throughout the timeline of the election, says Reilly. Education and targeted training, specifically to help combat the threat of social engineering, or phishing, against candidates, prospective candidates and those involved in political campaigning can significantly reduce the risk of an attack being successful.
Additionally, while AI has the capability to interfere with elections, putting proactive measures in place can make all the difference to its impact. Governments can establish regulations to monitor and control the use of AI in political campaigns and adverts, while platforms can improve transparency around political ads and the sources of information. Implementing robust cybersecurity measures and using AI to detect and counteract malicious activities will also be key.
“It is important to put in place mitigation measures starting with audits, fact-checking and identification, and using AI to increase the level of protection across all security areas, ranging from application security to email security and a robust Zero Trust platform,” says Reilly. “This includes creating customised protection for every customer for API or email security, or using our huge amount of attack data to train models to detect application attacks that haven’t been discovered yet.”
The UK’s IT sector possesses the tools and expertise necessary to support and enhance the election process, fulfilling a critical role in safeguarding democracy, ensuring fair access to voting, and protecting the integrity of electoral outcomes. Some believe it is their moral duty to contribute to these efforts – which may even fall under CSR commitments promoting good governance and ethical practices – reflecting a broader commitment to societal good and national security.
“It is in the best interest of the UK and our democracy for tech companies to openly share threat intelligence and to provide services for protecting and mitigating risk free at the point of use for government organisations involved in the election process,” asserts Reilly.
Future voting systems
As threats evolve, the networks supporting elections and voting infrastructure must also evolve to become more sophisticated, secure, and resilient.
Built on networks that prioritise security, resilience, and user accessibility, future voting networks are expected to incorporate blockchain, AI, and cloud computing. Adhering to stringent cybersecurity measures and open standards, these networks will support robust, transparent, and trustworthy electoral processes, to ensure that every vote is counted accurately and securely while maintaining the public’s confidence in the integrity of the electoral system.
Reilly believes that future networks will ensure that the critical services that are delivered across them are increasingly protected against being overloaded, taken off-line completely, or compromised by the types of attacks that allow malware and ransomware to infiltrate an organisation.
“Providing built-in resiliency, where voting systems are distributed across multiple physical locations, will also be a key feature of future networks ensuring that even if an attack is initially successful, there is sufficient redundancy within the network to ensure continuity of service,” says Reilly.
Beware the deepfakes
Jumio’s ‘2024 Online Identity Study’ reveals fear and concern among the British public on the political influence AI and deepfakes may have on the General Election set for 4 July.
The study examined the views of more than 2,000 adults from across the UK. The results suggest that 60% of Britons are worried about the potential for AI and deepfakes to influence upcoming elections, and only 33% think they could easily spot a deepfake of a politician.
The data also reveals a changing relationship between the public and online media: 64% of Britons are more sceptical of the political content they see online than they were at the last election in 2019.
The arrival of generative AI and deepfakes has not significantly changed Britons’ trust in traditional news — 56% of Britons said nothing has changed in their level of trust in print or broadcast news, while 25% said they trust print and broadcast news less. However, four in 10 Britons say they trust what they see on social media less than they did before the arrival of generative AI.
As such, the populace has an appetite for increased regulation of AI: over half (53%) think UK laws around AI don’t go far enough, while only 26% trust the government’s ability to regulate the technology.
“With the UK heading to the polls, it’s vital that we have an open conversation about the role that generative AI and deepfakes could play in the national debate,” said Stuart Wells, Jumio’s chief technology officer. “The public’s lack of confidence in their ability to identify fraudulent content online is concerning, and more needs to be done to educate consumers on how to spot deepfaked content, and how to report it should they see it. Online organisations also have a responsibility and should implement multimodal, biometric-based verification systems or other deepfake detection mechanisms to keep deepfakes from influencing voters in the days leading up to the election and beyond.”