Non-consensual creation, processing, and distribution of intimate images in the Western Balkans

Legislative and institutional framework

With the exception of Croatia and Slovenia (both EU members) and, as of the end of 2023, Montenegro, most of the countries covered in the study have still not directly regulated the non-consensual creation, processing and distribution of intimate images and videos, sometimes colloquially referred to as revenge pornography,1 in their Criminal Codes. However, each country has a set of criminal offences that can be invoked in such cases. These offences broadly fall into two categories: 1) protection of bodily and sexual integrity (e.g. sexual harassment, harassment, stalking) and 2) protection against privacy intrusions through technological means (e.g. unauthorised recording/filming, disclosure of personal data). Despite their limitations, these offences can provide protection from a normative perspective, though it is difficult to assess whether they offer meaningful protection in practice.

Most recently, Montenegro criminalised the unauthorised taking of photographs as well as the unauthorised publication and presentation of other people’s writings, portraits and recordings through two articles in the Criminal Code. This change represents a significant step forward in how sexually explicit non-consensual content is regarded and addressed. These offences carry a potential prison sentence of between six months and two years. In Serbia, the criminal acts of sexual harassment and unauthorised recording are prosecuted on the basis of private criminal charges, meaning that there is no assistance from the police or prosecutor (the affected party is responsible for proving the crime occurred) and, more importantly, the charges must be filed within three months of the person finding out about the content. In all countries except Serbia, these cases are prosecuted by the General Prosecution office, and there are dedicated police units for countering cybercrime. It is unclear whether these units also investigate cases of distribution of non-consensual intimate materials or, more importantly, offer assistance in enforcement procedures (e.g. requests for removal of the content). In some countries, like Serbia and Albania, there are special police units dedicated to countering domestic violence that have a certain level of expertise in dealing with gender-based violence (GBV) cases.

The enforcement of judgments is beset by severe problems, as digital platforms are reluctant to cooperate with state authorities. CERTs (Computer Emergency Response Teams) and similar bodies are often involved in these procedures (e.g. requests to block or remove content from digital platforms), with varying levels of success and expertise. In general, there is little to no intervention or assistance from human rights oversight bodies such as Gender Equality Commissions or Ombudspersons, who are still not particularly invested in this societal issue.

Main challenges in the region

  • Most countries lack institutional competence in official bodies such as police and judiciary for dealing with gender-based online violence (GBOV) cases, which often leads to lack of sensitivity or sense of urgency when such cases are reported.
  • Even in cases where the law tries to catch up with the technology and the non-consensual processing and distribution of intimate images can be punished, the creation of such materials is still not recognised as a crime, allowing perpetrators to escape punishment (in the case of the Telegram groups in Serbia, the administrator of the groups was the only person prosecuted, and he was ultimately released).
  • Insufficient knowledge on issues of gender-based online violence is prevalent and therefore these issues are not problematised adequately, affecting all aspects of society, from public authority figures and education institutions to the broader public.
  • When these kinds of cases are reported on in the media, they are mostly reported on in a sensationalist way and can often retraumatise survivors through unethical reproduction of the content, especially if the case involves celebrities or other public figures.
  • In most countries, educational institutions do not discuss issues such as sexual harassment or gender-based violence, in either an offline or online context, leaving students without adequate knowledge on the topic.
  • The nomenclature surrounding these crimes is also problematic, associating the crimes both with revenge (in some instances such cases do not necessarily feature an element of revenge, and this kind of wording might suggest the survivors have done something that warrants retaliation) and pornography (the majority of people with an experience of image based sexual harassment or violence did not consent to the material being made and/or distributed), thus contributing to further stigmatization.
  • The issue of gender-based online violence is often treated as an isolated problem, rather than a broader societal issue stemming from cultural approaches to violence against women, sexual harassment, and gender inequality.
  • Civil society organisations and women’s rights organisations working on these topics are usually underfunded and therefore rely on short-term projects and grants to offer their support to survivors and are seldom invited to contribute in official discussions.
  • The stigmatisation of survivors leads to the underreporting of such crimes, which makes it more difficult to understand the widespread nature of such crimes as well as to come up with effective ways to curb them.

What can be done?

In Slovenia and Croatia the Criminal Code recognises the distribution of intimate materials without consent, and the Croatian Criminal Code moreover specifically addresses deepfakes and other forms of image-based sexual abuse. Widespread distribution of non-consensual content also carries a higher penalty in these countries. In some countries (North Macedonia) policy changes are taking place that should ensure the compatibility of the Criminal Code with the Istanbul Convention, while in others specific amendments to the Criminal Code are being proposed to deal with such crimes directly (Montenegro). This is an important moment to open up discussion with legislative bodies about non-consensual image-based abuse and to ensure that it is taken into consideration when drafting future laws. Although a direct provision, as in Slovenia or Croatia, might not always be necessary, it is important to ensure that non-consensual image-based sexual abuse is broadly covered in state legislation, and that certain problematic legal requirements are amended (e.g. in the cases of Montenegro and Serbia, private criminal charges and the extremely short three-month preclusive period). Additionally, civil society organisations, including those offering shelters, legal support, and awareness campaigns, play a vital role in providing comprehensive support and raising awareness about these critical issues.

Regardless of the policy changes, it is of utmost importance to undertake in-depth case law research to understand the previous court rulings and legal reasoning in digital GBV cases before proposing any legislative changes. This research could in fact show that Criminal Codes and case law are sufficiently equipped to offer protection, in which case the advocacy efforts should focus on strategic litigation, raising awareness, and sensitisation of judicial authorities and law enforcement. 

As for good practices, Albania, for example, has a geographically balanced spread of women-led groups and resources, while some women-led initiatives in Bosnia and Herzegovina, e.g. the women of Kruščice, have shown that community activism can lead to significant changes, which can motivate other communities to work on such issues as well. In Greece, certain cases (e.g. the S. Panagiotopoulos case) raised social awareness of non-consensual image-based sexual abuse and represented a stepping stone for survivors to seek justice (as recently happened in a case in Thessaloniki). The penalties in these cases are too lenient to serve as adequate deterrence, but they do serve as a yardstick for similar cases. As in Serbia, survivors in Greece are encouraged to reach out to CSO support structures before engaging with the police, so that trained lawyers can provide trauma support early and throughout the process.

In Montenegro, the Women’s Rights Centre organises trainings for public servants who work on cases of non-consensual sexual materials creation and distribution and there is news of upcoming legislative changes that would allow state-wide filtering and blocking of online content, including the distribution of non-consensual image based sexual abuse materials. NGOs in North Macedonia are working under the Platform for Gender Equality lobby to change not just the legal framework, but also the public narratives (e.g. victim blaming) in non-consensual image based sexual abuse cases through public protest and demonstrations,2 as well as provide survivors with free legal aid and psychological support. 

In March 2022, two people were convicted on charges of production and distribution of child pornography in North Macedonia. Kosovo’s 2010 Law against domestic violence is currently going through a process of amendment, and the new focus will be set on countering violence against women online. This could create momentum to counter GBV in the country more effectively. Serbia’s Strategy for Prevention and Combating Gender-Based Violence Against Women and Domestic Violence (2021-2025) recognises ‘revenge porn’ as a form of GBV requiring more attention and awareness-raising. Existing experience and decades of learning and developing alternative systems to counter GBV and support women have the potential to bring change, raise awareness, and offer immediate help to survivors.

Joint efforts to educate the public and relevant stakeholders in the region on these issues are also a necessary advocacy step. Demystifying the concepts surrounding GBOV and approaching them adequately can open avenues for understanding and collaboration between civil society and other groups such as governments, the media, and the private sector. Advocating for improved knowledge on this topic will be beneficial for shaping future generations and for underlining the importance of strong and appropriate regulatory mechanisms and frameworks.

1 This term can be misleading and insulting, as it both invokes revenge, implying there is something to take vengeance for, and frames the materials as pornography, which in the majority of instances they are not. A more appropriate term is non-consensual image-based sexual abuse.

2 In February 2021, hundreds of protesters gathered outside North Macedonia’s Interior Ministry to call on the government to crack down on private messaging groups sharing unauthorised and often explicit photographs and videos of women and girls, such as the Telegram group Public Room, whose more than 7,000 members shared thousands of private photos and videos of women and girls, including personal data such as addresses, phone numbers and IDs (doxxing).

Mila Bajić is the Lead Researcher at SHARE Foundation with a focus on the relationship between new media, technology and privacy.

Apply for the 2024 Digital Rights Summer School!

The applications for the 2024 Digital Rights Summer School in Perast, Montenegro are now open!

The School takes place from 25 to 31 August 2024, and this year’s program is designed for enthusiasts based in Southeast Europe, who are passionate about digital rights and eager to learn more about the latest developments in this field.

During the school, we’ll explore the impact of new technologies on human rights through lectures and talks by regional and international digital rights experts. You’ll take part in workshops and discussions on digital markets and online content regulation, information warfare, online harassment, and AI-enabled surveillance and policing, while exploring their potential effects on regional affairs and strategies for advocacy. As a participant, you will also have the opportunity to connect, share experiences and collaborate on digital rights initiatives across the region. These are just some of the topics through which you will gain both theoretical and practical knowledge!

This program is organised by SHARE Foundation, European Digital Rights (EDRi) and Digital Freedom Fund – it offers a unique opportunity to learn from leading experts and network with other professionals in the field. Join us for an engaging program in the beautiful setting of Perast. Accepted applicants will be provided with travel and accommodation during their stay.

To apply, please fill out the form at the following link: 

The application form closes on 15 May at 17:00 CEST (Belgrade time). Acceptance decisions will be made in early June.

Please feel free to reach out at [email protected] if you have any questions.

Elections on the information margin

Preliminary analysis of media content for the most visited online media in Serbia: 1 November – 17 December 2023 

On Wednesday November 1, the President of the Serbian Parliament, Vladimir Orlić, announced local elections in 65 cities and municipalities in Serbia, including Belgrade. A few hours after Orlić’s traditional protocol of signing the Decision on calling for elections in the hall of the National Assembly, the message “Long live Serbia! Happy elections!” arrived from the digital address of the President of the Republic of Serbia. Aleksandar Vučić’s cyber address also announced snap parliamentary elections, which started the pre-election campaign that lasted until December 14 at midnight. Meanwhile, on November 16, provincial elections were announced.

Research shows that the importance of online media as sources of information keeps steadily increasing, both globally and at home. Although television is still in the lead when it comes to sources of information, the role of online media, currently in third place, requires special research attention. Given the importance of free, comprehensive and balanced information for the democratic quality of the entire election process, this research focuses on digital media and the information environment they create. The aim of the research is to identify and explain the key features of the informational content published by the leading online media in Serbia, in order to determine to what extent and in what way citizens were informed about the elections in the pre-election period, as well as about key social and political issues, how the central themes were defined in the digital environment, and how specific forms of support for the ruling structures were shaped.

Elections 2023: Preliminary Report (.pdf)

Mila Bajić is the Lead Researcher at SHARE Foundation with a focus on the relationship between new media, technology and privacy.

Snežana Bajčeta is the SHARE Foundation Researcher in the fields of digital technologies, media and journalism.

EU proposal of the AI regulation adopted

Late into the night on Friday, December 8, the lengthy negotiations on the final version of the EU artificial intelligence regulation (AI Act) were concluded, with the first of a dozen technical meetings expected this week to specify the details of the law’s implementation.

According to initial reactions, the adopted solutions did not fully meet the expectations of human rights organizations and activists, nor did they satisfy industry lobbyists and security-focused politicians.

Among other things, it is mentioned that “predictive” systems will only be partially prohibited, meaning that not all applications of artificial intelligence systems in policing and “crime prediction” are classified as unacceptable risks – a significantly weaker protection than what members of the European Parliament voted for this summer. The final ban includes some predictive systems based on “personal traits and characteristics”, but not geographic crime prediction systems already used by police forces across Europe. Critics note that such a partial ban allows for the creation of additional exemptions in the future.

Of particular concern is the possibility that any application of artificial intelligence systems in the context of “national security” would be entirely exempt from the regulation’s scope, including bans on unacceptable risks and transparency requirements.

The most challenging part of the negotiations concerned the bans, that is, the classification of AI systems as posing unacceptable risk. The adopted proposal, as reported by those with insight, bans real-time remote biometric identification (RBI) in publicly accessible places—except when used to search for specific suspects or victims of certain crimes, to prevent “specific, substantial, and imminent threats to the life or physical safety of natural persons or a specific, present threat of a terrorist attack,” as well as for the “targeted search for specific victims of abduction, trafficking in human beings and sexual exploitation of human beings as well as search for missing children.”

The use of RBI needs to be approved by a judicial or otherwise independent authority and is limited in time and space, and it cannot include constant comparisons of all people in the public spaces with full police or other databases. In urgent situations, the judicial authorisation has to be done ex-post within 24 hours.

Post-Remote Biometric Identification (not in real time, but on video footage) is not banned, but is now a high-risk category, only permitted with prior judicial authorisation or if strictly necessary in an investigation for the targeted search of a person convicted or suspected of having committed a serious criminal offence that has already taken place. Member States may introduce more restrictive laws on the use of Post-RBI systems.

Biometric categorisation systems are banned if they “categorise natural persons based on their biometric data to deduce their political opinions, trade union membership, religious or philosophical beliefs, sex or sexual orientation from this biometric data.”

Restrictions have been introduced regarding emotion recognition and some applications of high-risk AI systems in the private sector, while the criteria for risk classification have been expanded. The publication of the adopted version of the regulation is expected in the coming months.

European Promotion of the SHARE Foundation’s Book on Biometric Surveillance

One of the most comprehensive studies on the use of biometric systems worldwide, the SHARE Foundation’s book “Beyond the Face: Biometrics and Society” was presented on Monday, December 4, in Berlin and on Wednesday, December 6, in Brussels.

The promotion brought together the community for the protection of digital rights and freedoms at the Tactical Tech premises in Berlin, where the authors and attendees discussed the legal and social consequences of mass biometric surveillance.

In Brussels, a discussion was organised in the European Parliament on the study’s key findings, with opening remarks given by Members of the European Parliament Viola von Cramon and Sergey Lagodinsky. As an urgent danger, the MEPs pointed to a potential precedent in Serbia with the application of biometric surveillance technology towards the creation of a dystopian surveillance society. Patrick Breyer, also a member of the European Parliament, took part in the discussion.

Coincidentally, the book promotion in Brussels took place during the final stages of the trilogue on the new European regulation on artificial intelligence (AI Act). In fact, after the event MEP Lagodinsky moved on to the negotiations on banning unacceptably risky systems, carrying his own copy of the book. The AI Act will have far-reaching consequences for the regulation and use of biometric surveillance systems around the world, hence the importance of such a detailed analysis of how biometric surveillance is applied in different countries, one that contests problematic provisions of the future European law that could legitimise these practices.

The study on the social consequences of mass biometric surveillance provides a detailed description of technologies from various manufacturers used by public and private actors for surveillance. It includes a comparative analysis of regulations governing this field in the US, the EU, and a range of countries in Africa, Asia, and Latin America. Additionally, it explores some practical cases of biometric surveillance application, from Myanmar and the United Kingdom to New York and Belgrade, for border control, public space monitoring in major cities, or suppression of opposition activities.

Spanning over 300 pages, divided into three main segments—Technology, Law, Practice—the book acquaints readers with the current global experience of the conflict between fundamental human rights and the profit-driven biometric surveillance industry. Despite abundant evidence to the contrary, authorities worldwide still believe that these systems contribute to the security of society.

The book is freely available, currently only in English. The editors of the publication are Ella Jakubowska (EDRi) and Andrej Petrovski & Danilo Krivokapić (SHARE). The authors of the texts are Bojan Perkov (Technology), Jelena Adamović and Duje Kozomara (Law), Mila Bajić and Duje Prkut (Practice).

Spyware attack attempts on mobile devices of members of civil society discovered

SHARE Foundation warns of the disastrous impact of misuse of technology against the critical public in Serbia

On October 30, two members of civil society from Belgrade received an alert from Apple that they were potential targets of state-sponsored technical attacks. Thanks to good cooperation with civil society organisations in Serbia, they contacted the SHARE Foundation immediately after receiving the warning and asked to check the allegations to determine if their devices were attacked by any known spyware.

After the SHARE Foundation team, in cooperation with Internews, received confirmation from Apple representatives that the alerts were authentic, mobile devices were analysed to determine whether they had traces of spyware infection, among which the most well-known are Pegasus and Predator. For the final confirmation, the SHARE Foundation team turned to international organisations Access Now and Amnesty International, which have high expertise in the field of digital forensics.

Based on the reviewed data, these two respected organisations confirmed that traces of an attack attempt carried out on 16 August 2023 were found on both mobile devices. Both expert organisations reached the same conclusion: in the initial phase, the attack was attempted via a vulnerability in the iPhone’s HomeKit functionality. The Pegasus spyware has previously been linked to multiple exploits targeting HomeKit, including PWNYOURHOME.

The SHARE Foundation warns that spyware attacks on representatives of the critical public have a disastrous impact on democracy and human rights, especially in the pre-election period. The use of spyware is illegal and incompatible with democratic values.

We remind the public that these and similar tools for technical attacks on mobile devices are used by non-democratic regimes around the world to spy on members of the opposition, civil society, independent media, dissidents and other actors working in the public interest. Such activities threaten the freedom of expression and association, as well as the right to privacy and secrecy of communication guaranteed by domestic and international law.

The SHARE Foundation invites media and civil society representatives who may have received the same message from Apple to contact the foundation to verify the warning.

NOTE: In accordance with the wishes of the members of civil society who were the target of the attack, as well as with security measures, SHARE Foundation will not provide additional details about this incident.

NOTE 2: The part of the text related to device analysis and vulnerabilities that were targeted was amended on 29 November 2023 at 12:38 for precision.

Digital Rights Summer School: Where Is AI Leading Us

The second Digital Rights Summer School was held from July 23rd to 29th in Perast, Montenegro, with more than 50 participants and lecturers coming together to exchange and acquire knowledge about current issues at the intersection of society, technology, and human rights. The Digital Rights Summer School is organised by the SHARE Foundation in cooperation with the European Digital Rights (EDRi) and the Digital Freedom Fund (DFF).

The central theme of the School was artificial intelligence, considering the significant societal challenges in the context of using biometric surveillance in public spaces, border control and migration, or manipulative generated content. Participants also had the opportunity to learn more about practical aspects of researching phenomena such as networks for sharing intimate content and propaganda internet campaigns.

Some of the significant questions raised during the discussions included the environmental sustainability of digital infrastructure, which is expected to have an even greater demand due to the expansion of artificial intelligence. Additionally, the use of advanced software (such as Pegasus) for spying on journalists’ phones was discussed, along with maintaining a balance between freedom of expression and privacy in light of current and future regulations. Set at the site of the Old Austrian Prison in Kotor, an expert panel explored the perspectives of digital and European integration policies in the Western Balkans.

During the course of the School, an exhibition titled “Imagine Boka” by Andrija Kovač was opened in Perast. Blurring the lines between fact and fiction, this exhibition presents an intriguing collection of AI-generated photographs that explore an imaginable, alternative history of Boka Kotorska in the 1970s and 1980s – its people, places, customs, and traditions.

Many thanks to all participants, lecturers, and guests for the exciting week we spent together. We invite everyone to follow our website and social media channels and sign up next year!

We owe our gratitude for the support of the Summer School to the Gieskes-Strijbis Fonds, the Open Society Foundations Western Balkans. The Summer School was also supported by a core grant from the regional project SMART Balkans – Civil Society for a Connected Western Balkans, implemented by the Center for Civil Society Promotion (CPCD) in Bosnia and Herzegovina, the Center for Research and Policy Making (CRPM) in North Macedonia, and the Institute for Democracy and Mediation (IDM) in Albania, financially supported by the Ministry of Foreign Affairs of the Kingdom of Norway.

Strengthening Cybersecurity: MFA v. Phishing

In today’s digital landscape, cybersecurity is of paramount importance to protect sensitive information from unauthorized access. Multi-Factor Authentication (MFA) has emerged as a powerful security measure, adding an extra layer of protection against account breaches.

MFA is an authentication approach that strengthens the login process by requiring users to provide multiple elements or “factors” from different categories. These factors encompass something you have, something you know, and something you are.

MFA integrates two or more of these factors into the authentication flow. Examples include typing a password and responding to a push notification on a registered smartphone, entering a password and providing a one-time code from a hardware authentication device, or utilizing a biometric facial scan and/or passphrase to unlock a cryptographic credential stored on a registered device, such as a phone or hardware token.
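For illustration, the one-time code mentioned above is typically a time-based one-time password (TOTP, standardised in RFC 6238). A minimal sketch using only the Python standard library (the secret here is the RFC's published test key, not a real credential):

```python
import hashlib
import hmac
import struct
import time

def totp(secret, at=None, step=30, digits=6):
    """Derive a time-based one-time password (RFC 6238) from a shared secret."""
    counter = int(at if at is not None else time.time()) // step
    msg = struct.pack(">Q", counter)                  # 8-byte big-endian counter
    digest = hmac.new(secret, msg, hashlib.sha1).digest()
    offset = digest[-1] & 0x0F                        # dynamic truncation (RFC 4226)
    code = struct.unpack(">I", digest[offset:offset + 4])[0] & 0x7FFFFFFF
    return str(code % 10 ** digits).zfill(digits)

# Server and authenticator app compute the same code independently; only a
# party holding the secret (the "something you have" factor) can match it.
print(totp(b"12345678901234567890", at=59))  # RFC 6238 test time -> "287082"
```

Because the code changes every 30 seconds, a stolen code is only useful for a brief window, which is exactly the window the real-time phishing attacks discussed below exploit.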

However, it’s essential to understand that MFA is not foolproof and can be bypassed in certain scenarios, such as phishing attacks. 

The Importance of Multi-Factor Authentication (MFA)

Many cybersecurity agencies in Europe and the United States have elaborated on the importance of MFA, which can be summarized in the following points:

  1. Strengthening Authentication: MFA combines multiple authentication factors, such as passwords, physical tokens, and biometric data, significantly increasing the difficulty for attackers to gain unauthorized access. Even if one factor is compromised, the additional layers of security act as a barrier against unauthorized entry.
  2. Protection Against Password-Based Attacks: MFA mitigates the risks associated with weak or compromised passwords by requiring an additional authentication factor, making it harder for attackers to exploit password vulnerabilities.
  3. Safeguarding Remote Access: With the rise of remote work and cloud-based services, MFA plays a crucial role in securing remote logins, ensuring that only authorized users can access corporate resources or personal accounts from various locations.
  4. Compliance and Regulatory Requirements: MFA is often required or strongly recommended by industry standards and regulations, demonstrating a commitment to protecting sensitive data and instilling customer confidence.

When implementing MFA, a company should consider these benefits and disadvantages:

Benefits:

  • adds layers of security at the hardware, software and personal ID levels
  • can use OTPs sent to phones that are randomly generated in real time and are difficult for hackers to break
  • can reduce security breaches by up to 99.9% over passwords alone
  • can be easily set up by users
  • enables businesses to opt to restrict access by time of day or location
  • has scalable cost, as there are expensive and highly sophisticated MFA tools but also more affordable ones for small businesses

Disadvantages:

  • a phone is needed to get a text message code
  • hardware tokens can get lost or stolen
  • phones can get lost or stolen
  • the biometric data calculated by MFA algorithms for personal IDs, such as thumbprints, are not always accurate and can create false positives or negatives
  • MFA verification can fail if there is a network or internet outage
  • MFA techniques must constantly be upgraded to protect against criminals who work incessantly to break them

Understanding How Phishing Bypasses MFA

Not all MFA methods offer equal levels of security. In the past two years, numerous attacks have exploited weaknesses in MFA implementations, enabling criminals to bypass MFA protection. It is crucial to note that not all MFA solutions provide the same level of defense against authentication attacks, and the security and usability of an MFA deployment can be influenced by critical implementation details.

  1. Phishing Attacks: Phishing involves cybercriminals impersonating legitimate entities and tricking individuals into disclosing sensitive information. By exploiting human vulnerabilities, attackers can obtain usernames, passwords, and even MFA codes or tokens, compromising accounts.
  2. Real-Time Phishing: Attackers conducting real-time phishing can quickly capture MFA codes or tokens immediately after victims enter them during login. By using the obtained codes before they expire, attackers can bypass MFA’s additional layer of security.
  3. Man-in-the-Middle Attacks: In man-in-the-middle attacks, attackers intercept communication between users and legitimate services, collecting credentials, including MFA codes, without detection. The intercepted information is then used to gain unauthorized access.
  4. Social Engineering and Impersonation: Phishing attacks heavily rely on social engineering, with attackers impersonating trusted entities to deceive victims. By creating convincing replicas of emails or websites, attackers increase the likelihood of victims disclosing MFA credentials.
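One reason the relay attacks above succeed is that a typed code is not bound to the site requesting it. Phishing-resistant schemes such as WebAuthn instead have the authenticator sign the origin it actually sees. A simplified illustration of that idea (hypothetical function names, with an HMAC standing in for the real public-key signature):

```python
import hashlib
import hmac

def client_respond(key, challenge, seen_origin):
    # The authenticator mixes the origin it actually connected to into the response.
    return hmac.new(key, challenge + seen_origin.encode(), hashlib.sha256).digest()

def server_verify(key, challenge, response, real_origin):
    # The server recomputes the response for its own origin only.
    expected = hmac.new(key, challenge + real_origin.encode(), hashlib.sha256).digest()
    return hmac.compare_digest(expected, response)

key, challenge = b"device-secret", b"nonce-123"

# Legitimate login: the origins match, so verification succeeds.
ok = server_verify(key, challenge,
                   client_respond(key, challenge, "https://bank.example"),
                   "https://bank.example")

# Login relayed through a phishing proxy: the client saw a lookalike domain,
# so the relayed response does not verify, even though the challenge is valid.
phished = server_verify(key, challenge,
                        client_respond(key, challenge, "https://bank-login.example"),
                        "https://bank.example")
print(ok, phished)  # True False
```

The man-in-the-middle can still relay the challenge, but the origin mismatch makes the relayed response useless.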

Mitigating the Risks

To mitigate the risks of attacks against MFA, businesses should consider the following:

  1. Security Awareness Education: Regular training programs can help individuals recognize phishing attempts and avoid falling victim to them, reducing the risk of disclosing MFA credentials.
  2. Two-Way Authentication: Enabling number-matching verification adds an extra layer of security by using a separate communication channel for verification prompts, making it harder for attackers to trick users into approving fraudulent sign-ins.
  3. Advanced Phishing Protection: Utilizing advanced anti-phishing solutions that employ machine learning and threat intelligence can detect and block phishing attempts, reducing the chances of successful attacks.
  4. Strong Passwords and MFA Settings: Emphasizing the use of strong, unique passwords and implementing phishing-resistant MFA helps minimize the impact of successful phishing attacks.
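The number-matching approach in point 2 can be sketched roughly as follows. This is an illustrative toy, not any vendor's API: it assumes a server that displays a random two-digit number on the login screen and an authenticator app that asks the user to type that number back before approving the push.

```python
import hmac
import secrets


class NumberMatchingPrompt:
    """Toy sketch of a number-matching push prompt (names are illustrative)."""

    def __init__(self) -> None:
        # Shown only on the login screen the user is actually looking at.
        self.displayed = f"{secrets.randbelow(100):02d}"

    def approve(self, typed_in_app: str) -> bool:
        # An attacker who merely triggers a push cannot approve it: they never
        # see the number on the victim's screen. Constant-time comparison.
        return hmac.compare_digest(self.displayed, typed_in_app)


prompt = NumberMatchingPrompt()
assert prompt.approve(prompt.displayed)  # the real user sees and types the number
wrong_guess = "99" if prompt.displayed != "99" else "00"
assert not prompt.approve(wrong_guess)   # a blind guess by the attacker fails
```

The design point is that approval now requires information that only appears on the legitimate login screen, which defeats "MFA fatigue" attacks that simply spam push notifications.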

Multi-Factor Authentication (MFA) is a crucial security measure that significantly enhances authentication mechanisms. However, it is not impervious to phishing attacks. Understanding the importance of MFA and the tactics employed by cybercriminals is essential for strengthening overall cybersecurity. By combining phishing-resistant MFA with security awareness education, two-way authentication, advanced anti-phishing solutions, and strong password practices, individuals and organizations can bolster their security defenses and reduce the risk of falling victim to phishing attacks that aim to bypass MFA.
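Much of the phishing resistance mentioned above comes from origin binding: FIDO2/WebAuthn authenticators sign the web origin the browser is actually connected to, so a response relayed through a look-alike site fails verification at the real server. A simplified sketch of that idea follows; it uses a shared HMAC key in place of the real public-key signature purely for illustration, and the origins are made up.

```python
import hashlib
import hmac

DEVICE_KEY = b"demo-device-key"  # stands in for the authenticator's private key


def authenticator_sign(origin: str, challenge: str) -> str:
    # The browser, not the user, reports the origin; the authenticator signs
    # it together with the server's challenge, binding the response to the site.
    return hmac.new(DEVICE_KEY, f"{origin}|{challenge}".encode(), hashlib.sha256).hexdigest()


def server_verify(signature: str, challenge: str,
                  expected_origin: str = "https://bank.example") -> bool:
    expected = hmac.new(DEVICE_KEY, f"{expected_origin}|{challenge}".encode(),
                        hashlib.sha256).hexdigest()
    return hmac.compare_digest(signature, expected)


# Legitimate login: signed for the real origin, so verification succeeds.
assert server_verify(authenticator_sign("https://bank.example", "c1"), "c1")
# Phishing proxy: signed for the look-alike origin, so verification fails.
assert not server_verify(authenticator_sign("https://bank-login.evil", "c1"), "c1")
```

Unlike a relayed one-time code, there is nothing here for the attacker to replay: the response is only valid for the origin it was created on.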

If you want to learn more about multi-factor authentication and how your organization can successfully and effectively deploy MFA, download our latest paper, “Selecting A Multi-Factor Authentication Solution: How to Address the Human and Technology Concerns”.

Ninoslava Bogdanović is an Information Security Specialist at SHARE Foundation. Her fields of work are analysis of the state of digital security and building security measures and procedures in organisations so they could defend against cyber attacks, as well as providing assistance with cyber incidents.

Read more:

Empowering Individuals for Enhanced Identity Protection

Although cybersecurity may appear to be primarily a technological concern, it ultimately revolves around human beings. Humans play a pivotal role in cybersecurity, as they can unintentionally compromise sensitive information and systems through social engineering tactics or errors, emphasizing the need to empower individuals with appropriate technologies and awareness training.

In addition to the challenges posed by advanced attackers and the technical aspects of implementing multi-factor authentication (MFA), the true obstacle lies in inspiring individuals, both in personal and professional settings, to embrace this crucial security feature. Unfortunately, numerous reports suggest that businesses and individuals are not fully leveraging the potential of MFA. 

While 56% of businesses claim to have implemented MFA, shockingly only 8% of C-suite executives utilize MFA across their various applications and devices. The issue extends beyond the corporate realm: even social media users neglect best practices to safeguard their online accounts and personal information. For instance, a mere 2.6% of Twitter users have activated MFA for their accounts.

Several reasons contribute to this risky behavior:

  1. Implementation and integration challenges: The complexity of incorporating MFA into daily business workflows makes it a daunting task.
  2. Ineffective communication: The importance of implementing MFA fails to resonate effectively with businesses and society.
  3. Misconceptions about cybersecurity: Some individuals hold beliefs such as “it won’t happen to me” or “I have nothing to hide,” undermining the perceived need for MFA.
  4. Fear and uncertainty: The intimidating nature of cybersecurity alienates people from actively engaging in protective measures.

To address these concerns, it is vital to recognize that cybersecurity is not solely reliant on technology or processes. While technology can only offer a certain level of protection, employees can provide the contextual understanding necessary to detect and prevent attacks. By providing the right tools, knowledge, and support, organizations can unlock the full potential of their workforce and create a culture that embraces and maximizes the advantages of technology. 

It is important to empower people in cybersecurity and identity protection to harness the benefits of digital technologies. Here are some strategies for achieving this goal:

Cultivating a Digital Mindset:

To empower individuals, it is crucial to foster a digital mindset within the organization. This involves developing an organizational culture that embraces technological advancements, encourages experimentation, and promotes continuous learning. By emphasizing the value of technology and its potential to drive positive change, employees are more likely to adopt new tools and approaches, becoming active participants in digital transformation.

Cultivating a Cybersecurity Mindset:

To safeguard our digital identities in today’s interconnected world, empowering and engaging individuals in cybersecurity is paramount. Every organization possesses a security and organizational culture that should be transformed into a positive and proactive one. Blaming individuals for mistakes is counterproductive. Merely bombarding people with more technology exacerbates the situation by introducing unnecessary complexity. Instead, we should foster a culture that celebrates small victories. By focusing on all three domains of cybersecurity—people, processes, and technology—our businesses and societies can become safer and stronger.

Providing Training and Development Opportunities:

Investing in training and development is key to empowering employees to leverage technology effectively. This includes offering comprehensive training programs, workshops, and resources that equip individuals with the necessary skills to utilize technology tools and platforms efficiently. By providing ongoing learning opportunities, organizations enable employees to stay updated with the latest technological advancements and leverage them to enhance their work processes. Security awareness training should not solely focus on the “why” (the consequences of a breach), but also on the “why me?”. The “why me?” aspect provides individuals with the context needed to comprehend the relevance of cybersecurity to their own lives. Without this understanding, it becomes challenging to influence people’s intrinsic motivation, which is key to driving behavioral change. Understanding the reasons behind certain behaviors, or the lack thereof, is crucial for impactful awareness training.

Tailoring Technology Solutions to Individual Needs:

Recognizing that each employee has unique requirements and preferences, organizations should strive to offer technology solutions that cater to individual needs. This can involve providing a range of tools and platforms to choose from, allowing employees to select the ones that align best with their work style and objectives. Customizable interfaces, flexible application integrations, and personalized user settings empower individuals to optimize their technology experience for enhanced productivity.

Empowering individuals to leverage the benefits of technology is a powerful strategy for organizations aiming to thrive in the digital age. By cultivating a digital mindset and a cybersecurity mindset, providing training and development opportunities, and tailoring technology solutions to individual needs, organizations can create an environment where individuals feel empowered to harness technology to its fullest potential.
