“Apple knows everything about you”: How Military Personnel Understand Social Media Risk
Military personnel understand the risks of their social media use more comprehensively, and in more complex terms, in relation to their identity as everyday consumers of social and digital media than in relation to their identity as members of the Armed Forces.
This finding emerged from the ‘User’ strand of the DUN Project, which sought to understand why and how military personnel use social media and how they understand the risks it poses. We conducted a series of focus groups with military personnel of all ranks and ages across the three services of the Armed Forces.
In these groups, our participants initially outlined the risks of social media as members of the Armed Forces, which they discussed in broad and generic terms. The tenor of their discussion resonated with current guidance on social media for Armed Forces personnel, which emphasises that individuals should take responsibility for their own social media content.
Our participants stated that the greatest risk of social media was to their own personal security, particularly being tracked and targeted by ‘terrorists’ through their social media use:
I mean at the end of the day we are targets in our own country and we do need to be careful. We are constant targets all the time and we don’t know it. I don’t know if someone is out there tracking me? It could be happening.
So we are always a target…nothing might ever happen to us but we’re always going to be potential targets of terrorism.
What was significant about these discussions was that the participants rarely discussed the risk from terrorism in relation to their own social media use. Instead, they drew upon examples from training briefings or high profile media cases. Here, they predominantly understood risk to derive from the posting of social media content that rendered the user identifiable as a member of the Armed Forces who could be tracked and targeted.
If you’ve got a Facebook profile and you’re dressed in a military uniform, then if you’ve got someone who’s got a casual attitude towards carrying out an attack, then they could find information.
Examples they cited included the terrorist tracking of a family member through Facebook, an attempted Lee Rigby copycat killing, and the ‘Tipton Taliban’. Indeed, a significant number of participants believed that Fusilier Lee Rigby, who was murdered in May 2013 near a barracks in Woolwich, had been tracked through his social media use:
…and that’s what happened to Lee Rigby and everyone else who got killed in Central London or, you know, you haven’t got a clue who is following and who is not.
…that was a clear cut case of social media was used to track this individual, there was a plot and these men were charged and sent to prison for that. The Lee Rigby one, I think social media was then used…
The kind of people that carried out the attack on Gunner Rigby, they’re going to have the knowledge and the know-how to actually track someone down on Facebook. The threat is that, and what happened to Gunner Rigby, in my opinion, it’s a very low-tech and easy way to target someone.
Yet, there is no evidence that Lee Rigby had been individually “tracked” on social media. An Intelligence and Security Committee report on Lee Rigby’s murder only highlights that one of the killers engaged in an “online exchange” with an “extremist”.
Their citing of these examples is consequently important. Not only does it suggest that they understood the risks of social media in broad and generic terms rather than through their own experience of social media, but also that, as a result, they may have misunderstood how and when risk is generated.
In contrast, when our participants discussed the risks of social media in relation to their own everyday, mundane, routine use of social media they demonstrated a far more complex understanding of risk. Here, they understood risk to predominantly derive from the technological infrastructures of social and digital media. Risk was cited as including, for example, surveillance and tracking, data algorithms, lack of data ownership, fraud, ‘scamming’, harassment and sexual harassment.
Critically, in these discussions risk was understood in terms of social media practice as well as content. In other words, ‘doing’ social media was considered as risky as the posting of content. Here, for example, they highlighted the risks associated with geo-tagging and checking in:
The mobile phone…tracks where you are, it knows where you are….It knows you go to the [pub] twice a week.
But they also demonstrated an awareness that the storing, sharing, and triangulation of their own data may generate risk:
Everything gets stored everywhere you go. Accounts for PayPal link with your E-bay and your E-bay links with Facebook…everything links with each other so if you can hack one account you can pretty much hack every other account.
Apple knows everything about you, unless you’ve gone into several menus down to turn off, you know…it knows you go to the station three times a week.
What this reveals is that, despite appearing to have a limited understanding of social media risk from the perspective of Armed Forces membership, our participants demonstrated a significant and sophisticated awareness of risk in relation to their identity as mundane consumers of social media and digital technologies.
In turn, this suggests that the ways in which members of the Armed Forces are currently asked to conceptualise social media and risk through MOD guidance, training and briefings may not be resonating with either their everyday experience of social media or the wider architecture of the digital environment. Aligning this training with the more mundane experience of military personnel’s use of social media may foster a greater awareness and understanding of how risk feeds into operational and personal security practices.