I am delighted to have been invited to give evidence at a meeting of the Joint Committee on Communications, Climate Action and Environment. Transparency in political social media is of deep importance to democracy.
I was due to appear today, but owing to time constraints, that will now take place at a later date. This meeting relates to Deputy James Lawless’ Social Media Transparency Bill, as well as recent revelations regarding Facebook and Cambridge Analytica.
You can read my full submission here, and the main points of the executive summary are below:
Problematic issues in regulating social media have been known for some time; the era of self-regulation must come to an end
However, it is overly simplistic to ‘blame’ the online platforms for this; the solution must be collaborative
Problem of fake or automated accounts is vast within social media
‘Viral’ propagation of messages is quite rare, information generally cascades via a traditional ‘broadcast’ model
Misinformation is not easily corrected, and continues to be shared after being debunked
Hence easy for adversaries to push disinformation, sowing confusion
An environment has developed where it is difficult for citizens to know what is true and trustworthy
Politicians must improve own cybersecurity practices as a matter of urgency
Much of the content of the Bill has been pre-empted by policy changes on political ad transparency made by the online platforms in the last six months
However, these changes have yet to take effect – urge immediate roll-out of the political advertisement changes here
While transparency in online political advertising is probably achievable, it is not clear that making bots illegal is feasible; suggest mandatory labelling by online platforms
Urge Government to invest in interdisciplinary research on these topics in local context
Urge progress of permanent Electoral Commission to oversee all political advertising
Urge Government to consider national factual information/education campaigns on online platforms
I will update this page later, once I’ve spoken at the Committee.
Here’s the video of my presentation at IRISSCON, the 9th IRISSCERT Cyber Crime Conference in Dublin, Ireland, on the 24th November 2017. The full title of my talk is “Protecting What Matters: Cyber Security Lessons From Surviving An Earthquake” – abstract is below.
In this talk I am focussing on incident response in cybersecurity – in other words, how to respond in a crisis. Taking inspiration from Bruce Hallas’ ‘The Analogies Project’, and also the theory of cyber securitization, I describe these events in relation to a very personal experience: when disaster struck during my recent honeymoon.
Check out the video below, where I describe how my wife and I survived an earthquake – and how you can survive a cyber attack!
Not a week passes these days without another major cybersecurity event occurring. Yet some companies manage to handle these events well, and thrive, whereas others handle them poorly, and struggle to survive. In this talk I try to provide some insight into how cybersecurity incident response can improve by applying some lessons from my own experience. But not professional or technical experience. A couple of months ago, while on honeymoon on the Greek island of Kos, my wife and I experienced a 6.7 magnitude earthquake. (You may have heard me on Morning Ireland!). In this talk, I will attempt to explain how some life lessons from this event can be applied to cybersecurity incident response. I will talk about back-up procedures, crisis communications, and corporate culture. I’ll also talk about dealing with the media, coping with aftershocks and what to do when things go feral. In sum, if we survived an earthquake, you should be able to survive your next breach.
Here’s the video of my presentation at the Psychological Society of Ireland’s Annual Conference in Limerick, Ireland, on November 11th, 2017. The full title of the talk is ’10 years of psychology and social media: Watch out for these apps, for they come to take your jobs’.
The abstract is below, and a fully referenced paper will follow. Overall, the presentation is about the complex relationship between the study of psychology and social media.
As I have said before, the relationship between human psychology and our self-technologies, like social media, is a complex one, which deserves careful study. I feel that it is of great importance that research on psychological topics – which necessarily means social media – should be carried out with a strong focus on participant dignity and respect. Comments/queries welcome!
At the 2010 PSI Conference, I presented on what was an increasingly popular but then largely trivial pastime: Facebook. Today, I return with a more sobering message. In these uncertain times, social media is bound up with multiple crises of a psychological nature, be it cyberbullying, fake news, or radicalisation. Reviewing a decade of social media studies, and interpreting them in the light of Foucault, Danziger, Rose and other philosophers of the human sciences, I have three findings. Firstly, social media has profoundly changed the way we relate to ourselves and to each other: norms are shifting in developmental, interpersonal, clinical and many other psychological contexts. Secondly, social media studies are rapidly evolving and new methodologies threaten to render several areas of psychological research obsolete. Big data analysis of social media usage is moving into sensitive topics – including personality analysis and prediction of suicidal ideation. Finally, while we may struggle to keep pace with complex technological changes, I propose a number of clear strategies for navigating these volatile times. In a word, ethics.
The abstract of my keynote is here and the slides are below:
Since Kevin Mitnick first coined it in 2002, the cybersecurity industry has been awash with the phrase ‘the human factor is the weakest link’. From vendors to researchers, engineers, hackers, and journalists, we are all fond of blaming the ‘dumb users’ at every available opportunity. Not only when something goes wrong, but even before any discussion begins, ‘the stupid human’ is taken as read in any cybersecurity forum.
In this chapter I critically interrogate this trope in the discourse around information security and cybersecurity: where it came from, what it assumes, what it produces, and how to get away from it. Each of these I demonstrate with examples from recent events, white papers and research reports, not only from the cybersecurity industry, but also from human factors and related fields.
Fundamentally, I argue that when we say that the ‘human being is the weakest link in cybersecurity’, not only are we telling a lie, we are also inevitably setting ourselves up for a fall. More to the point, when we devalue our end users, our co-workers and colleagues, we cannot expect them to stand by us when our systems inevitably suffer attacks, crash and are breached.