Reblogging from the CODEC blog, posted earlier today:

This week the CODEC team focused on the third chapter of N. Katherine Hayles's How We Think: Digital Media and Contemporary Technogenesis (University of Chicago Press, 2012). The chapter, 'How We Read: Close, Hyper, Machine', certainly gave us lots to chew on.

Initial comments were that we liked what was written, but found the emphasis on negative reports about digital reading frustrating. An oft-heard argument is that our reading is worse 'because of digital things', and some members of the team felt that broader cultural factors were at work, rather than solely technological ones. There was agreement that forms of technology may be changing the manner of reading, and we referred to the 'F-Shaped Pattern for Reading Web Content', noting that the further a user scrolls, the more eye attention tends to drop off. On p. 66, Hayles notes that "Canny web designers use this information to craft web pages, and reading such web pages further intensifies this mode of reading" – a self-reinforcing cycle: as this form of reading becomes common, more people write for it, so it becomes more common.

There was a short debate about the claim on p. 55 that people "are doing more screen reading of digital materials than ever before". We referred to other kinds of screen reading, such as OHP, reel-to-reel cinema, and microfiche, but these were largely seen as recent forerunners of the 'screen'. Debating the etymology of the word 'screen', we wondered when and how it stopped meaning an object that protected a user from being burnt by a fire, and became a surface onto which images are projected. Was it the safety screen at the theatre? We noted that many words for technology come from the analogue world, and referred to an earlier conversation that day about CODEC's preparations to run worship for Cranmer Hall later this month: we a) didn't want it to be a service of gimmicks (the technology serving the theology and not vice versa), and b) did not want the experience to be too far from the established format for the service (otherwise risking being seen as irrelevant). We wondered if 'screen' was a printing-technology term, as the screen certainly still uses a lot of the structure of print.

On p. 78, Hayles refers to Wordle as a form of machine reading. We debated whether Wordles are machine reading, or whether they are a visual method that provides visual data, offering insights into e.g. word frequency rather than tone. The @bigbible Tumblr, which takes each chapter of the Bible and represents it visually, has received feedback both that it gives new insights into reading the Bible, and that it is not the Bible and shouldn't be interpreted as such. We then questioned how often we use these kinds of tools in our everyday theological reflection, rather than as 'something special', and, if they are 'tools of our time', whether we should be using them more to be relevant to our culture. If we read faster, is it necessarily distracted reading? Is this a different way of reading? This is part of what the chapter seeks to address. Are the programmes that exist signposts of the digital age? Why did someone create a programme that allows text to be grouped like Wordle? Was it for fun, or was culture changing and this was produced as a response? Are they beneficial, and do they help us?
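To make concrete what a Wordle-style tool actually 'reads', here is a minimal sketch of the frequency counting that underlies such word clouds. The sample text and function name are our own illustration, not part of Wordle itself:

```python
from collections import Counter
import re

def word_frequencies(text, top_n=5):
    """Count word occurrences: the raw data behind a Wordle-style cloud,
    which scales each word's size by how often it appears."""
    words = re.findall(r"[a-z']+", text.lower())
    return Counter(words).most_common(top_n)

sample = "In the beginning God created the heaven and the earth."
print(word_frequencies(sample, top_n=3))
# 'the' appears three times, so it would dominate the cloud
```

As the sketch shows, the machine registers only frequency; any sense of tone, context, or meaning is supplied by the human viewer.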

With Pete having recently been to an event on the DarkNet, we turned to recent events in which GCHQ and the NSA have been identified as downloading emails. They indicate that they are not reading them, just storing them. The machine 'reads' those emails, and algorithms will have been set up so that keywords identified with particular terrorist activities raise a red flag. A level of human reading is then required to contextualise a flagged email and identify whether it is a threat. Surveillance and machine reading aren't an either/or; they are different and complementary: machine reading gets through vast amounts of data, while close reading gives deeper insights into human intricacies.
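The division of labour described above – machine filtering first, human judgement second – can be sketched in a few lines. This is purely illustrative; the watchlist, emails, and function are invented for the example and bear no relation to how actual GCHQ/NSA systems work:

```python
# Illustrative only: the keyword list and emails are invented for this sketch.
WATCHLIST = {"detonator", "target", "payload"}

def flag_for_review(emails):
    """Machine 'reading': mark any email containing a watchlist keyword.
    Deciding whether a flagged email is an actual threat is left to a
    human reader, who supplies the context the machine lacks."""
    flagged = []
    for email in emails:
        hits = set(email.lower().split()) & WATCHLIST
        if hits:
            flagged.append((email, sorted(hits)))
    return flagged

inbox = ["Lunch at noon?", "The payload ships Friday"]
print(flag_for_review(inbox))
# Only the second email is flagged, on the word 'payload'
```

The point of the sketch is the hand-off: the machine reads everything shallowly, and only the small flagged residue receives close human reading.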

Josh referred to the classic How to Read a Book as we questioned whether hyper-reading is something particularly digital, or whether it is in fact a very familiar form of reading to academics, particularly those who need to get through a large volume of material. Bex noted that her PhD, focused on twentieth-century history, required hyper-reading of vast quantities of data, whereas historians of earlier periods have to work with sparse sets of data. She noted that when she started her PhD, the Public Record Office (now The National Archives) used paper-based indexing systems, whereas part-way through it converted to digital indexing. We concluded this hadn't specifically changed the way she read, but had allowed easier access to (even more) material.

Returning to notions of what machine reading is, we referred to the fact that CPUs don't currently match human brain-power, though there is an expectation that in 15–20 years they will. As we listened to some computer reading, we questioned what the loss of intonation changes. We debated in what sense a machine understands something: can it close read and "understand", or does it have to work within the limits of being programmed by humans? What is speech recognition software set up to recognise? Does Siri work by understanding meaning? Are we born tabula rasa, and how do we learn language – can a computer do the same? We are typically limited to thinking about the machines on our desks, but we need to think about bigger systems such as Watson: what do they understand? Do they just understand what they are told to understand, or can the AI take over and self-learn? What about the film Robot & Frank, in which a human–computer relationship developed – until the computer was rebooted? Has AI already become scary? The computer is still asking one question, but is learning more efficient ways of gathering the data to answer it. How do SatNavs use data to produce a coherent narrative? AIs typically ask whether something is good or bad, but are now starting to weigh ethical decisions or say "I don't know": they are moving beyond binary cognitive decisions.

Earlier that morning, a story had circulated that "teens who use screens more sleep less". Bex – drawing on her book, and on the fact that she grew up screen-free but stayed up late reading books – questioned whether all the variables had been considered, and whether it was the screen or the staying up late that was the problem (acknowledging that the issue of melatonin changing body clocks has been well researched) – see this opposing-view article from last year. Is there a difference between staying up to read a print book, in which case you will simply be tired, and reading a screen, where the body clock may be fooled into thinking it is daytime and be impacted in other ways? Have we changed our behaviours in many ways because technology is available, but also to make technology more workable? The world has become technologised so that we can get more out of workers, and we think we can cope with this, but we have seen that songbirds are being negatively affected by a 24/7 lifestyle. Some bloggers have referred to the invention of the lightbulb (rather than the screen) as disrupting our sleep patterns, but we can see further back that activities continued by candle and firelight.

For all the reflection there is on technology, is part of the role of digital theology to try and get a bird's-eye view of the situation? There's no sign of it slowing down, so what harm might we do if we are not aware of the affordances and constraints of digital technology (in a similar way to how smoking was once publicised as healthy)? Do we need to look more deeply, think about what we have been 'forced' to do, where we have choices, and how much information we need to make decisions (and where the information comes from to inform those decisions)? Are our brains being rewired, and is that a problem if so? Where are the positives? For example, with mobile devices people are typically reading significantly more, as they are not tied to a desktop machine.

With the book now three years since publication, does this well-written text already feel a little old-fashioned? If we look at efferent/aesthetic readings, are there more modern ways of approaching this? Are questions in the digital realm moving so fast that we need to be focusing on ongoing technical reports, rather than books?

Dr Bex Lewis, Research Fellow in Social Media and Online Learning

Professor Katherine Hayles is IAS Fellow at St Mary’s College, Durham University (January – March 2015)
