July 14, 2025

Digital repression as a barrier to doing good


How AI is being used against people and organisations invested in changing society for the better

Digital repression has been accurately described as “the use of information and communications technology to surveil, coerce, or manipulate individuals or groups in order to deter specific activities or beliefs that challenge the state.”

Artificial intelligence and machine learning technologies (together, “AI”) significantly increase the scale, speed, efficiency and reach of such digital repression. This makes today’s environment a difficult one for the third sector: you need to be smart and box clever.

The following groups should be particularly concerned:

  • Campaigners and activists 

  • Pro-environment, anti-war and human rights organisations

  • Organisations which advocate for minorities or the marginalised

  • Organisations which seek to question authority or hold power to account

Below are some common ways in which AI is being used by some bad actors around the world to facilitate digital repression:

Generative AI is being used to create vast quantities of disinformation, to disseminate it at a previously unimaginable scale and speed, and to present it as factual and directly relevant to people’s lives. Problematic consequences include:

  • Discrediting effect: With a few keystrokes and mouse clicks it is easy to discredit you, your organisation, and what you say. Unmerited criticism or blatant factual inaccuracies can be dressed up as plausible fair comment from apparently trusted or reputable sources. It is possible to do this anonymously or virtually untraceably, meaning that the usual protections – such as defamation law – are difficult to use.

  • Propaganda paradise: Bad faith actors can use generative AI, bots-for-hire and algorithmic targeting to generate support for, for example, corrupt or socially harmful decisions, policies or candidates, and to persuade people to vote against their own class interests. It is simple to create plausible-looking content with the appearance of factual accuracy and impartiality which is emotive by design.

  • Societal fracture: Generative AI has become an important weapon in the conduct of culture wars, particularly in creating and disseminating social media content designed to identify and amplify wedge issues and sow division amongst communities who might otherwise realise they have common class interests. Algorithms can accurately identify an internet user’s biases, concerns and fears, which are then played upon by purpose-designed content that is cheap and easy to create without any creative or journalistic qualification or background.

AI offers extremely tempting cost savings for organisations with (or indeed without) financial issues. Think of a wealthy owner of a newspaper or broadcaster who may be tempted to replace senior editorial staff who hold dissenting or divergent views with software which can be programmed to edit in a way which censors or omits due criticism or challenge.

Bad faith actors do not necessarily need AI to surveil and monitor you, but AI dramatically expands the possibilities for doing so: 

  • Digital footprint: AI makes it possible – in real time and at a speed, scale and efficiency previously uncontemplated – to track a person’s digital footprint, whether active (i.e. intentional actions like social media posts, completing online forms, or accepting cookies) or, more troublingly, passive (data left unintentionally or unknowingly from a website visit, usually tied to your internet protocol (IP) address, for example via web beacons or unlawfully deployed cookies).

  • Authoritarian sift: AI software can be instructed to sift through and examine large amounts of surveillance camera footage in a time frame, and with an accuracy, that humans cannot match, giving authorities unprecedented scale and reach in observing people as they go about their daily lives. It does not take much imagination to link these capabilities with low-threshold anti-protest criminal law: it is easier than ever for authorities to identify who has attended a protest and to monitor them or threaten them with criminal sanction.

  • Codebreakers: AI software can be used to help crack user account passwords via a repetitive trial-and-error process that would take a human years, making it easier to access messages sent in the mistaken belief that they are encrypted, or websites visited in the mistaken belief that movements are invisible.
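
The arithmetic behind this point is worth seeing. The sketch below is illustrative only – the guess rate is an assumed figure, not a benchmark of any real system – but it shows why a short numeric PIN falls instantly to automated trial and error while a long, mixed-character password does not:

```python
def brute_force_estimate(alphabet_size, length, guesses_per_second):
    """Worst-case seconds to exhaust every password of the given length."""
    keyspace = alphabet_size ** length  # grows exponentially with length
    return keyspace / guesses_per_second

# Assumed rate for illustration: ten billion guesses per second.
RATE = 1e10

# A 6-digit numeric PIN: exhausted in a tiny fraction of a second.
pin = brute_force_estimate(10, 6, RATE)

# A 12-character password drawn from ~95 printable characters:
# a keyspace of roughly 5.4e23, i.e. many thousands of years even at this rate.
strong = brute_force_estimate(95, 12, RATE)
```

Each extra character multiplies the search space by the size of the alphabet, which is why length and character variety matter far more than any clever substitution.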

AI allows cash-strapped or power-hungry governments or autocrats to harness the deterrent power of police or military forces without needing to pay for them, or even necessarily to have them onside. It concentrates power among fewer people: automated police operations need far fewer operators than a traditional police force, making them easier for a leader to control because there is a smaller network to influence. It is therefore easier to achieve and maintain substantial authoritarian control. This concentration of power also increases loyalty and reduces the discretionary exercise of restraint and human consideration in decision-making.

AI also offers governments decreased costs and increased pervasiveness when it comes to surveillance, overcoming traditional barriers to panoptic monitoring. Low-cost cameras and drones can monitor large public areas without the expense of human staffing.

To be clear, the uptake is broad: the Carnegie Endowment for International Peace reports that, currently, governments in 75 countries are using some form of AI surveillance technology. This is not limited to dictatorships and autocracies: its uptake is just as enthusiastic amongst liberal democracies. The technology is just as available for hire and use by private individuals and companies.

Does the law help us?

The UK Government does not appear to have plans to create AI-specific legislation or regulation, having been lobbied effectively by predominantly American big tech and domestic security and intelligence services. The Government has made clear that other areas of law (like data protection, intellectual property and equalities legislation) do apply as usual, but regulators are, at present, being required by the Government to publish details of steps they have taken to facilitate AI innovation. As lawyers, we find this slightly troubling: a regulator’s job is not to facilitate innovation, it is to monitor compliance with, and enforce, rules which are in place for a reason.

There is currently no clear or straightforward way to challenge a particular use of AI as unlawful. It may also be difficult to substantiate and evidence any such challenges, if there even is a regulatory appetite to field and investigate complaints.

Case law and regulatory decisions will emerge over time and provide more certainty but, at present, there is not much to go on.

So, what can we do to protect ourselves while doing our important work?

The above may seem daunting and disheartening, but it is far from a lost cause. These developments should be taken as the serious threats they are, but you are not helpless or without agency.

Third sector organisations should be thinking along the following lines:

If you suspect or find out that an organisation has used AI to monitor you or otherwise prejudice your interests, there are several questions you can ask or challenges you can make under UK data protection law:

  • Asking for confirmation that your personal data has been processed and for details of how (i.e. exercising your right of access under Article 15 UK GDPR).

  • Asking for confirmation of whether you were notified in advance about the use of your personal data by the AI software in question for the purposes of Articles 5(1)(a) and 13/14 UK GDPR and, if you were not notified, why the relevant organisation considered itself exempt from having to notify you.

  • Asking for details of the lawful basis for collecting and processing your personal data (for the purpose of Articles 5(1)(a) and 6 UK GDPR), and for details of the condition relied on to collect and use your special category (i.e. sensitive) personal data or criminal offence data (for the purposes of Articles 9 and 10 UK GDPR and Schedule 1 Data Protection Act 2018).

  • Individuals also have the right not to be subject to a decision based solely on automated processing (i.e. with no meaningful human involvement) of their personal data which produces legal or similarly significant effects (Article 22 UK GDPR).

  • If the relevant AI system has been provided by a supplier, request details of due diligence steps and the relevant data processing agreement (for the purposes of Articles 5(1)(f), 28 and 32 UK GDPR).

Think carefully about where your people work or travel. For example, many countries outside of the UK and the European Economic Area have domestic data protection and privacy laws which offer a lower standard of protection and rights for individuals, meaning that it is easier and less risky for private organisations or authoritarian regimes to access information about them, monitor their movements and try to take steps to prevent them doing the work they are there to do. 

For example, a UK-based human rights charity may want to think twice before sending their people to a speaking or working engagement in a country where the intelligence and security services are subject to only limited restrictions when accessing information about people entering the country. In jurisdictions where this is the case, there is potential for individuals to be subject to scenarios such as detainment on arrival and deportation without the right to appeal.

There are a number of IT measures you can take to keep your digital footprint private:

  • Only use genuinely end-to-end encrypted instant messaging services which ensure that your messages cannot be accessed either in transit or at rest (for example, Signal, WhatsApp or iMessage). Signal does not collect or store metadata about who is calling or messaging whom, and it is not operated by a US tech giant.

  • Additional encryption: modern iOS and Android smartphones offer the option to add alphanumeric passwords or biometric ID – make use of these as they are much harder to crack using AI software.

  • Be careful with cloud storage. While it is undoubtedly advantageous in many ways (for example, storage scales easily and your files and data are backed up), the trade-off is bringing a third party into the mix – cloud companies that hold or manage your data can usually access it at will, and can sometimes, in some jurisdictions, be compelled to hand it over. Some suppliers offer encrypted services.

  • Use internet browsers that obscure your IP address or are set to private as a default. Always use your company’s VPN when working, and consider getting a trustworthy (always read reviews) VPN for your personal internet usage as well.

  • Be careful with cookies – don’t just accept them automatically on arrival at a website because it is quicker and easier to do this. If your choice is all cookies or no cookies, choose none. If you are offered a granular choice, review and choose those which you are comfortable with. Cookies are a very common method by which data regarding your online behaviour is captured and sold anonymously, and an area where some organisations do operate unlawfully in the knowledge that regulators do not have adequate enforcement resources.

  • Good general IT hygiene goes a long way:

    • Reputable anti-virus software and firewalls reduce risks of passive data sharing

    • Multi-factor authentication and strong/complex passwords

    • Avoid public Wi-Fi 

    • Install ad-blockers

    • Don’t underestimate the power of simply switching your devices off when they are not in use – when a device is powered off, its encryption keys are no longer held in memory, making its data much harder to access
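
As an aside on multi-factor authentication: the one-time codes generated by most authenticator apps follow the open TOTP standard (RFC 6238). The sketch below is purely illustrative – to show why the codes work, not a suggestion to roll your own security software:

```python
import base64
import hashlib
import hmac
import struct
import time

def totp(secret_b32, for_time=None, period=30, digits=6):
    """RFC 6238 time-based one-time password (HMAC-SHA1, the common default)."""
    key = base64.b32decode(secret_b32, casefold=True)
    # The moving factor is the number of 30-second windows since the epoch.
    timestamp = int(time.time() if for_time is None else for_time)
    counter = struct.pack(">Q", timestamp // period)
    digest = hmac.new(key, counter, hashlib.sha1).digest()
    # Dynamic truncation (RFC 4226): pick 4 bytes at an offset taken
    # from the low nibble of the final digest byte.
    offset = digest[-1] & 0x0F
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)
```

Because each code is derived from the current time window as well as a shared secret, an intercepted password alone is not enough to log in – which is exactly why enabling MFA raises the bar so sharply against automated attacks.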

Lastly – and importantly – keep doing what you are doing. Campaigning and protesting are vital to a functioning democratic society. Use your voices and platforms to demand accountability and transparency in the development and deployment of AI across society, and take time to highlight its risks and dangers to those who may otherwise be unaware. 

Be careful and vigilant, use the tips above, and if you have any further queries on this subject, please do get in touch with Lucas Atkin and the Information Law Team.
