Noteworthy paper using large-scale dataset just released by researchers from the Oxford Internet Institute. From the abstract below, it looks like it will pour cold water on recent tabloid hyperbole regarding the effects of technology usage on mental well-being. I’ll be reading it with much interest.
The widespread use of digital technologies by young people has spurred speculation that their regular use negatively impacts psychological well-being. Current empirical evidence supporting this idea is largely based on secondary analyses of large-scale social datasets. Though these datasets provide a valuable resource for highly powered investigations, their many variables and observations are often explored with an analytical flexibility that marks small effects as statistically significant, thereby leading to potential false positives and conflicting results. Here we address these methodological challenges by applying specification curve analysis (SCA) across three large-scale social datasets (total n = 355,358) to rigorously examine correlational evidence for the effects of digital technology on adolescents. The association we find between digital technology use and adolescent well-being is negative but small, explaining at most 0.4% of the variation in well-being. Taking the broader context of the data into account suggests that these effects are too small to warrant policy change.
via Nature (Human Behaviour)
I am delighted to have been invited to give evidence at a meeting of the Joint Committee on Communications, Climate Action and Environment. Transparency in political social media is of deep importance to democracy.
I was due to appear today, but owing to time constraints my appearance will now take place at a later date. The meeting concerns Deputy James Lawless’ Social Media Transparency Bill, as well as recent revelations regarding Facebook and Cambridge Analytica.
You can read my full submission here, and the main points of the executive summary are below:
- Problematic issues in regulating social media have been known for some time; the era of self-regulation must come to an end
- However, it is overly simplistic to ‘blame’ the online platforms for this; any solution must be collaborative
- Problem of fake or automated accounts is vast within social media
- ‘Viral’ propagation of messages is quite rare; information generally cascades via a traditional ‘broadcast’ model
- Misinformation is not easily corrected, and continues to be shared after being debunked
- Hence easy for adversaries to push disinformation, sowing confusion
- An environment has developed where it is difficult for citizens to know what is true and trustworthy
- Politicians must improve own cybersecurity practices as a matter of urgency
- Much of the content of the Bill has been pre-empted by policy changes on political ad transparency made by the online platforms in the last six months
- However, these changes have yet to take effect – urge the immediate roll-out of the political advertisement changes here
- While transparency in online political advertising is probably achievable, not clear that making bots illegal is feasible, suggest mandatory labelling by online platforms
- Urge Government to invest in interdisciplinary research on these topics in local context
- Urge progress of permanent Electoral Commission to oversee all political advertising
- Urge Government to consider national factual information/education campaigns on online platforms
I will update this page later, once I’ve spoken at the Committee.
I’m really excited to be speaking tomorrow at Digital & Cyber Security 2016 in Scandic Park, Helsinki.
The abstract of my keynote is here and the slides are below:
Since Kevin Mitnick first coined the phrase in 2002, the cybersecurity industry has been awash with the claim that ‘the human factor is the weakest link’. From vendors to researchers, engineers, hackers, and journalists, we are all fond of blaming the ‘dumb users’ at every available opportunity. Not only when something goes wrong, but even before any discussion begins, ‘the stupid human’ is taken as read in any cybersecurity forum.
In this talk I critically interrogate this trope in the discourse around information security and cybersecurity: where it came from, what it assumes, what it produces, and how to get away from it. I demonstrate each of these with examples from recent events, white papers and research reports, not only from the cybersecurity industry, but also from human factors and related fields.
Fundamentally, I argue that when we say that the ‘human being is the weakest link in cybersecurity’, not only are we telling a lie, we are inevitably setting ourselves up for a fall. More to the point, when we devalue our end users, our co-workers and colleagues, we cannot expect them to stand by us when our systems are inevitably attacked, breached, or crash.
I was delighted to be interviewed for Gordon Smyth & Brian Honan’s new Securing Business podcast – talking about the human factors of cybersecurity.
You can sign up for the podcast here: itunes.apple.com/ie/podcast/securing-business/