Media & Press

[MEDIA] Competitive Christianity? Why a summer festival’s marketing is raising eyebrows @ChristianToday

I’m quoted in a recent Christian Today article, looking at the marketing tactics of a Christian festival:

The AdWords technique was used by UKIP during the last general election to divert searches for the Conservative Party to its own website. However, Bex Lewis, a lecturer in digital marketing at Manchester Metropolitan University, told Christian Today there were questions around the extent to which it was ethical.

‘Within the marketing world, it is seen as a legitimate practice to piggy-back on the brand equity of competitors,’ she said.

‘In many ways, as a new festival, it makes sense to target those who are likely to be in the right market, but as some big brands have sued those who have done something similar, there is a question as to whether this capitalises upon the brand of others without returning anything – and whether as Christians we should be held to a higher standard.’

Read full article.

History Reviewer

#EmptyShelf17 #16 The Immortal Life of Henrietta Lacks by @RebeccaSkloot

The Immortal Life of Henrietta Lacks by Rebecca Skloot
My rating: 5 of 5 stars

It had been a long week at work, I wanted to turn my phone off, and I was Polyfilla-ing a wall – and I picked this up. I read it in the course of an evening – really well written, with fascinating insights into the person behind amazing medical discoveries, and plenty to think about regarding research ethics (one of the subjects that comes up in my MSc Research Methods).

View all my reviews

Digital Event Speaker

[CONFERENCE] #DigitalRevolution in Manchester


I was invited to speak on a panel entitled ‘Technology, Ethics and Regulation’ at the end of Manchester’s second ‘Digital Revolution’ conference, hosted by Manchester Digital.

The blurb was as follows:

Technology is no longer the limiting factor for the advancement of society, but how we implement it, how it changes society and, in some cases, our humanity raises some big, worrying questions. Is automation going to be our downfall or our saviour? What will humans do when robots replace white-collar as well as blue-collar work? How do we protect our privacy when everything about us can be monitored and tracked, and where does the burden of responsibility to protect us lie? What will driverless vehicles mean to us as humans and to industry, and who should they be programmed to protect?

  • Chair: Martin Bryant
  • Dr Bex Lewis, Senior Lecturer in Digital Marketing, Manchester Metropolitan University
  • Professor Tony Prescott, Professor of Cognitive Neuroscience & Director of Sheffield Robotics
  • Dr Simon Holgate, CEO, Sea Level Research

The potential questions were as follows:

  • Automation: 
    • People talk about AI taking jobs but where are we at with that now?
    • What areas of automation will have the most impact on the labour market and our lives?
    • Will there be enough jobs for all? Is a universal basic income going to become necessary?
    • The ‘who should a self-driving vehicle save?’ dilemma
  • Privacy and ethics:
    • Great current example: Google DeepMind analysing NHS data – what are the pros and cons?
    • Putting citizens in control of their own data (e.g., GM Connect). Ethically ‘the right thing’ but is it really a good idea?

I was seeking to bring my interdisciplinary humanities experience – from history, media studies, educational technology, theology and now business – to bear on the conversation. I brought up a number of historical comparisons (we’ve survived these moral panics before) and cautioned against falling for technological determinism, but didn’t manage to get in ‘will AI ever have a soul?’, or references to Ex Machina and series such as Black Mirror!

In preparation for this panel, after looking back at my 2013 talk on ‘technology as a plug-in drug’ and my 2014 talk on ‘photoshopped selves’, I looked at the following material (some of it more helpful than the rest):

and, wow, just search ‘artificial intelligence will take over‘!

I also collected a few tweets from the day:


BBC Radio 4: Digital Human (Series 6, 2014, Episode 6: Ethics) #DigiHuman


6/6: Ethics

If a driverless car has to choose between crashing you into a school bus or a wall, who do you want to be programming that decision? Aleks Krotoski explores ethics in technology.

Join Aleks as she finds out if it’s even possible for a device to ‘behave’ in a morally prescribed way by looking at attempts to make a smartphone ‘kosher’. But nothing captures the conundrum quite like the ethical questions raised by driverless cars, and it’s the issues they raise that she explores with engineer turned philosopher Jason Millar and robot ethicist Kate Darling.

Professor of law and medicine Sheila MacLean offers a comparison with how codes of medical ethics were developed, before we hear the story of Gus, a 13-year-old whose world was transformed by Siri.

  • Looking ‘behind the curtain’ at those creating our digital world – and how their morality, etc. feeds into it… especially in making decisions about the functionality of the driverless car.
  • Facebook’s social contagion experiment – scientists are presented as ‘providing’ cut-and-dried solutions, but need more understanding that their work affects social interactions. They are making ethical decisions.
  • Ethics themselves are difficult decisions, but placing them into technology is hard – e.g. an automatic traffic-ticketing system found that although the law seems cut and dried, different numbers of tickets were issued depending on how the programmers interpreted the law.
  • Machines need to make decisions, e.g. whether a driverless car that is about to crash will hit a baby’s pram or a wall.
  • What sets our internal moral compass comes from culture, upbringing, or a higher power… but it’s hard to set an ethical position into specifications.
  • A certain number of things are “easy”, but creating a Kosher phone for the Jewish Orthodox community… the developer thought it was ‘pointless but harmless’. Kosher has been determined as ‘fit for purpose’ by a Rabbi; many apps and much of the connectivity were removed – the developer sees it as an ‘extreme porn filter’, filtering out anything potentially damaging or distracting, or against the 613 commandments of the Torah. The further the development continued, the more complicated it became, especially as the Rabbis would not engage with the developer directly.
    • Do you get the phone to shut down automatically on the Sabbath, or do you ‘trust the user’ in a self-policing community?
    • How responsible is the developer for the fact that the technology can no longer make contact outside the community?
  • Driverless cars require logic to be programmed in for situations where a crash is unavoidable. Engineers need to understand that this is not just a technical problem but an ethical one… drivers may not want to trust the developers, but would they want to choose ‘wall’ or ‘child’ in a settings screen – which takes it too far?
  • How has medicine dealt with some of these questions? Many decisions were made after the Nuremberg trials, when the Nazis had overridden moral questions – the patient now often makes the decision, rather than the technologists.
  • Gus and the power of Siri – it can give him endless answers to his (endless) questions, though he needs to ask very clearly. The possibility of social bonding with machines adds extra avenues for social interaction (for autism). Siri, however, isn’t designed for this function, so we need to be sure it isn’t replacing human interaction. The machine isn’t the friend – it’s the bridge to friends.
  • Human/robot interaction – we recognise cues; can machines manipulate those cues to elicit particular behaviour? We respond to the lifelike cues that machines give us, even though we know they are robots… Companies could exploit that emotional attachment (e.g. a compulsory upgrade). Kate Darling is calling for social scientists, philosophers, etc. to be brought into these decisions – the importance of interdisciplinary input, so these are not just technological decisions (as bioethics has shown for years).
  • Ethics boards are required… Google only has one because it acquired a company that made it part of the deal – it’s difficult to incentivise companies.
  • We don’t want to do what much software does – test it, see if it works, and adapt as it goes wrong. Some issues are too high-stakes for this.
  • Kosher Phone – supports agency rather than draining it, in conducting an ‘observant lifestyle’.
  • We tend to align ourselves with the moral codes, rules, regulations, etc. of our societies, until they no longer appear to align with our moral compass. The same goes for technology – but because we see technology as nothing more than ‘neutral’, we don’t realise when the ethic of that technology stands in contrast to our own ethical world view. Do we give our choices over to ethics boards at tech companies, or do we force them to become more transparent about their decisions?
  • If it works smoothly, it’s a product that people will embrace – you make a better product.

And see the Tumblr associated with the programme.

Academic Digital

Should McCann Twitter abuser have been doorstepped on TV? for @ConversationUK


A recent piece, published by The Conversation UK under a Creative Commons licence (republished by Durham University):

Should McCann Twitter abuser have been doorstepped on TV?

By Bex Lewis, Durham University

Brenda Leyland, a 63-year-old woman from Leicestershire who had been accused of publishing a stream of internet abuse about the family of missing child Madeleine McCann, has been found dead in a hotel room.

Her death raises important questions about the wrongs and rights of how we handle people who express unpalatable views online.

Leyland had been exposed in a Sky News report as the person behind the Twitter account @sweepyface, which had been used to post offensive messages about the McCanns. These included the accusation that Madeleine’s parents were responsible for her disappearance. When confronted by a Sky News reporter about whether she should have posted such messages, Leyland said: “I’m entitled to do that.”

Days before Leyland’s death, BBC Radio 4 ran a story about how the police were investigating abusive social media messages sent to, or published about, the McCanns. Madeleine’s father Gerry McCann featured, suggesting that these messages are fuelled by press reporting. He added that he thinks more people should be charged for internet abuse and revealed that his family tends to avoid the internet because of the nature of threats and insults they receive.

For obvious reasons, the McCanns had encouraged a high-profile press campaign after Madeleine’s disappearance. But without answers about what happened to Madeleine, conspiracy theories have abounded. Brenda Leyland was one of many to discuss the McCann case online. As Rev Pam Smith, one of my Facebook connections, said: are we really saying that people are not “entitled” to share adverse views online?

Leyland said she “hoped she hadn’t broken any laws”, but the Malicious Communications Act 1988, which covers Twitter, makes it an offence to send messages to another person which are “indecent or grossly offensive”, threatening or false. If a message is intended to cause distress or anxiety to the recipient, it breaches the law.

We have to consider whether Sky has a case to answer in this particular situation too, though. The broadcaster’s correspondent approached Mrs Leyland on her own doorstep in a live broadcast. She evidently had no idea that she was going to be confronted or that the footage would be broadcast to the world.

Whether or not we like what Leyland had been doing, she was clearly just one of several people who had been expressing their opinions online. She was certainly not the worst. Is doorstepping people, outing them on TV, and ensuring that their face circulates on the internet, really the answer? Had Sky done any research into this woman before they put her face in the public domain? Did they know anything about her mental state? Did she just have the misfortune to be the first person who could be made an example of?

Her case carried echoes of the recent media treatment of Cliff Richard. The BBC was heavily rapped for broadcasting live from his home as police raided it. The police of course need to investigate such stories but it is a worrying sign of our culture that trial by media and even trial by gossip appear to have become acceptable.

Media ethics are typically concerned with truthfulness, accuracy, objectivity, impartiality, fairness, public accountability and limitation of harm. Since the Leveson inquiry, there has been increased emphasis on press responsibility. But in a time of rapid media change and fast-moving news, broadcasters must ensure they too meet their ethical responsibilities.

Bex Lewis does not work for, consult to, own shares in or receive funding from any company or organisation that would benefit from this article, and has no relevant affiliations.

This article was originally published on The Conversation.
Read the original article.