
22.11, Speculative Futures

Fake News, Automation & the Future of Journalism

What impact can automation and artificial intelligence have on the construction and consumption of information at a time when “fake news” is created, distributed, and detected at an unprecedented pace? Thomas Grogan and Paul van Herk discuss the importance of orienting new technologies towards the framing of content, and speculate about a future where AI platforms automate the news entirely by organizing, editing, and broadcasting content.


Fake News, Real Context

Fake news is now a household term, and was crowned Collins Dictionary’s “word of the year” in 2017. The following year, BBC World Service controller Mary Hockaday countered that “fake news is nothing new.” She argued that more precise terms like “propaganda,” “misinformation,” and “suppression of free speech” would better nail down the intentions and origins of misleading broadcasts that generate unfounded beliefs and confusion. She may be correct in principle, but we begin by asking how such finer categorization can be established outside of the top-tier news organizations that no longer dominate the information market. With user-responsive algorithmic content feeds constantly refreshing reels of commercial clickbait, Breitbart, Chinese state TV, Twitter bot swarms, and The Onion, completely different agendas and forms of fakeness turn up in the same “uncurated” digital attention space.

Fake news might not be new, but the scale and pace at which content is produced, sorted, and presented is. The availability of cheap processing power means that almost anyone can make “deep fakes” and post them on social media. Digital equivalents in the detection space, like Snopes, can be used to discern between real and false content, but this cat-and-mouse game may not be enough to counter the magnitudes and asymmetries of “fake news.” As platforms begin to integrate user-generated content (UGC) into “editing rooms” that now effectively number in the billions, new technologies are needed that augment our ability not just to find fakes, but also to establish the correct context of a story.

Using various examples uncovered in a collaboration with FACT Liverpool (Foundation for Art and Creative Technology) and the BBC, we will highlight some instances where the failure to contextualize information is just as dangerous as fake news itself. This is important because most “fake news” is not inherently false or concocted from scratch; instead it is re-narrated and re-ordered (i.e. re-contextualized), and therefore fundamentally altered in meaning. Real footage can easily be made to fit the story, rather than the other way around. A telling example of this is Fresco, a platform and app that allows civilian users to upload footage and earn itemized payments when news channels cut it into their programming. Fresco’s biggest client is Fox News, which buys the footage à la carte to use as free-floating content depending on the program and its agenda.

The context of a story is often discursive, and so best established by the “friction points” that emerge with discussion and disagreement. Last year, Trevor Noah, comedian and host of The Daily Show, found himself in an awkward bind with Marine Le Pen of the French far-right party National Rally, when they both tweeted that “Africa won the World Cup” after the victory of Les Bleus. The statements were of course diametrically opposed in intention: one was about undesired African immigrants threatening French identity (Le Pen), the other about the pride of African heritage in a contest where wealthy nations always win out (Noah). In the end, Noah exchanged letters with the French ambassador to the US that teased out a fundamental disagreement between hyphenated (African-American) and incorporated (just “French”) national identity tags. This is a precious slow-motion example of how such friction points can develop around a story, resonating beyond the mere event (France winning the World Cup with superstar minority players) and into the context of colonialism, elitism, and the immigrant experience.

Of course, each story has at least as many contexts as it has viewers, as well as a time dimension that makes it effectively infinite in scope. The cherry-picking of relevance (and selective ignorance) in bottomless information streams is an innate human skill, and it would be naive to suggest that an algorithm could outperform us in this subtle work without being programmed with a whole set of rigid biases. It is already a fact, however, that machines can provide us with different hierarchies of signals, images, and compositions. If designed well, we see no reason why they couldn’t help us respond to news stories more efficiently, and maybe even apportion levels of care to them more proportionally.

It is still early days for machine cognition of verbal nuance, and satirical news sources like The Onion demonstrate the problem of indexing context by purely linguistic means. As a satire website, The Onion’s intention is not malicious misinformation, but the telling of a lie that reveals something true. Appreciating it requires a nuanced understanding of what is and isn’t possible, what has already happened, and where the borderline of possibility lies. There are hilarious lists of “Onion fails” where powerful people have fallen for the fake stories, such as Kim Jong-un being voted the “sexiest man in the world,” which was re-reported on Chinese TV. While the idea of an algorithm that helps the literal-minded to understand satire is in itself very funny and dark, it seems much more likely that, for now at least, algorithms augmenting our ability to contextualize will focus on processing images and footage rather than text.

A good pre-AI example of this is the French TV show Zapping, which cuts together selected parts of recent television broadcasts without narration or introduction. The idea is that the agencement (arrangement) of video can speak with a syntax as clear as words and sentences. As a wonderful proof of concept, creator Patrick Menais was fired from the host channel Canal+ for using the show to protest against the channel director without uttering a single word. This is a current affairs take on the Kuleshov (i.e. montage) theory of early avant-garde cinema, where each slice of footage is re-contextualized in relation to the others rather than in reference to a single meta-narrative.

This is a poetic technique under the vision of a director, but what if there were indeed a material link between events depicted in series that machines could pick up and sort directly? Imagine reels of schizophrenic footage with strong causal linkages to each other in space and time. Given the asymmetry and entangled strands of geopolitics, the order would probably seem absurd or random at first reading, suitable for late-night viewing or while under the influence, but it would in many ways be more faithful to reality. Such linkages could even be navigated as a “folded” digital map rather than arrayed in linear sequence; more current affairs video game than show. This would be a fundamental structural and philosophical shift from the delivery of news as mere procedural entertainment: i.e. “what you might be interested in next.”
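
To make the idea concrete, here is a minimal sketch, in Python, of what such machine-sortable linkages might look like: clips linked by proximity in space and time, then navigated as a graph rather than a linear sequence. All names, weights, and thresholds are our own invention for illustration.

```python
from dataclasses import dataclass
from itertools import combinations
import math

@dataclass
class Clip:
    clip_id: str
    lat: float        # where the footage was shot
    lon: float
    timestamp: float  # seconds since an arbitrary epoch

def linkage_weight(a: Clip, b: Clip,
                   space_scale_km: float = 50.0,
                   time_scale_s: float = 3600.0) -> float:
    """Score how plausibly two clips are materially linked:
    closer in space and time yields a higher weight."""
    km_per_deg = 111.0  # crude equirectangular distance is fine for a sketch
    dx = (a.lon - b.lon) * km_per_deg * math.cos(math.radians((a.lat + b.lat) / 2))
    dy = (a.lat - b.lat) * km_per_deg
    dist_km = math.hypot(dx, dy)
    dt_s = abs(a.timestamp - b.timestamp)
    return math.exp(-dist_km / space_scale_km) * math.exp(-dt_s / time_scale_s)

def build_linkage_graph(clips, threshold=0.1):
    """Adjacency map of clips whose spatio-temporal affinity passes a threshold."""
    graph = {c.clip_id: [] for c in clips}
    for a, b in combinations(clips, 2):
        w = linkage_weight(a, b)
        if w >= threshold:
            graph[a.clip_id].append((b.clip_id, w))
            graph[b.clip_id].append((a.clip_id, w))
    return graph

def walk_reel(graph, start, length=5):
    """Greedy traversal: hop to the strongest unvisited linkage, tracing
    one possible 'reel' through the folded map of clips."""
    reel, current = [start], start
    while len(reel) < length:
        candidates = [(nid, w) for nid, w in graph[current] if nid not in reel]
        if not candidates:
            break
        current = max(candidates, key=lambda c: c[1])[0]
        reel.append(current)
    return reel

clips = [
    Clip("protest_square", 52.52, 13.40, 0),
    Clip("hospital_gate", 52.53, 13.41, 1800),
    Clip("border_queue", 52.90, 14.00, 4000),
]
print(walk_reel(build_linkage_graph(clips), "protest_square"))
# -> ['protest_square', 'hospital_gate', 'border_queue']
```

A greedy walk is only one possible traversal; the point is that the sequence emerges from material linkages between events rather than from an editor’s meta-narrative.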

Investigative journalism is perhaps the most precious form of skilled reporting, yet it is constantly threatened by budget cuts in networks that cannot successfully justify the labor cost to their budget committees. In conflict zones and countries where the free press is violently absent, it is even more endangered. Activist and data-based journalism can sometimes be the only way to shed light on catastrophic situations, as the group Raqqa is Being Slaughtered Silently has been bravely demonstrating in northern Syria. The group used patchy mobile phone networks and a widely distributed array of covert citizen operatives on the ground to report on the horrific daily events taking place during the recent ISIL occupation of Raqqa.

Bellingcat is a larger and more generalized news platform that scrapes web data and undertakes “online investigations” from a distance, starting with local data collectors (i.e. interested people with phones). Bellingcat trains some of its users in basic data visualization skills to process citizen-gathered data, which requires modeling and compilation techniques before it can be presented as stories. A highly graphic and spatial approach is best exemplified by the Forensic Architecture group at Goldsmiths, University of London, which dedicates many hours of highly skilled labor to constructing virtual models of events that are then interrogated as virtual evidence. The BBC has even used amateur films to investigate the murder of two women and their children in Cameroon. By analyzing recordings and using simple tools like Google Maps and Facebook, the journalists were able to make a solid claim as to the location, time, and likely perpetrators of the crime, among them members of the Cameroonian army, who were blocking a local investigation through intimidation.

We have made a condensed case for the framing of content as a primary focus of digital news sources that seek to remain objective and accurate in their “reporting.” It has become relatively pointless to argue over oceans of disconnected facts and non-facts without firm and thorough contextualization—without it, everything is potentially “fake news” and “post-truth.” New forms of interface, data gathering, automation, and machine intelligence could better help to define the field in which news is constructed, but they are still relatively piecemeal in application. Our projective questions therefore turn up the dial: How could these processes be automated? What would an AI-led news channel be like? What would it mean for media production and consumption—and for privacy, accuracy, and agency?


User-Automated News

The BBC’s Research & Development teams are currently developing tools that assist in journalists’ workflows. These tools mainly perform laborious tasks that journalists don’t fancy doing, such as sifting through exhaustive archives, adding hyperlinks to articles, or advertising new content to an audience. A number of tools already use machine learning (ML) algorithms, which take advantage of the BBC’s gigantic datasets of video, audio, and written content gathered during almost a century of broadcasting and archiving.

Some tools are less focused on reducing editing room drudgery, though, and in one experiment the editing room itself is almost entirely automated. This process consists of training algorithms to find correlations between the types of shots being taken and the style of show being made, initially by training the AI on datasets of “comedy quiz” footage, from which it learns to switch between and move studio cameras at the appropriate time. The R&D team are also developing complementary tools that automate the cutting, editing, and broadcasting of the footage, meaning that when the tools are strung together, the result is an almost entirely machine-produced broadcast of the satirical panel show Mock the Week, or an equivalent.
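
We don’t know the internals of the BBC’s experiment, but as a toy sketch of the underlying idea, one could imagine a classifier that learns which shot to cut to from labeled panel-show moments. Everything below (features, labels, model choice) is our own illustration, not the BBC’s pipeline.

```python
from sklearn.tree import DecisionTreeClassifier

# features per moment: [speakers_active, laughter_level, seconds_since_last_cut]
X_train = [
    [1, 0.1, 2.0], [1, 0.2, 6.0],  # one quiet speaker -> close-up
    [2, 0.3, 3.0], [3, 0.2, 4.0],  # crosstalk         -> two-shot
    [1, 0.9, 1.0], [2, 0.8, 2.0],  # big laugh         -> wide shot of the panel
]
y_train = ["close_up", "close_up", "two_shot", "two_shot", "wide", "wide"]

model = DecisionTreeClassifier(max_depth=3).fit(X_train, y_train)

# at "broadcast" time: classify the current moment and switch cameras
print(model.predict([[2, 0.85, 5.0]]))  # -> ['wide'] on this toy data
```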

We can project forward a little from these experiments and imagine the live coverage of events becoming more accessible to smaller and smaller crews as the production stages become less labor intensive. A platform which offers the necessary software and is available for subscription could even allow for a production “crew” of just one person, or even a bot. We at least envision a wider distribution of “journalists” who inhabit less technical spaces and smaller enterprises. In “the field,” the border between reporters and audience would become much more blurry, as would the border between broadcasters and social media accounts.

The new meta-profession of “citizen journalism” is an extension of the concept of user-generated content (UGC), which has been both celebrated (à la Wikipedia) and decried (Facebook) since its ascendancy. Just like “fake news,” the concept of user contributions in media is not at all new: letters to the editor were popular in the 1800s, radio call-ins arrived in the 1900s, and reality TV in the 2000s. What is new today is UGC’s central role in a production process where it was previously very marginal. This might initially seem quite terrifying if we’re interested in impartiality in serious and complex events, but the use of UGC in conjunction with skilled reporting can expand news coverage and potentially even improve its accuracy. It has already proved its worth in emergency scenarios, such as the BBC’s launch of the first UGC news team in 2005, immediately following the London bombings.

The outcomes of UGC’s entry into the mainstream news space will be partly defined by the hardware and software available to it. For the past decade, professional reporters have been using what is called a portable single camera (PSC) setup, which includes a camcorder, tripod, radio mics, and lights. The camera person also carries a transmitter in their backpack to send footage back to newsrooms from the field. This setup, while the most portable of professional gear, can’t compete with the discreet ergonomics and financial accessibility of a decent smartphone. Most phones now have video stabilization software and filters that produce a realism and quality far from stereotypical amateur footage. Admittedly, they remain inferior to professional gear, particularly in file transmission speeds and battery life, but we should expect the gap to close fairly rapidly, and for journalists (or at least camera people) to be effectively in more places at once.

One of the primary concerns around the profusion of citizen journalists then becomes that of privacy and probity. The distinction between public and private space is defined legally in planimetric models as drawn by city planners, but only very loosely in the world of hand-held footage gathering: a world of criss-crossing linear projections and cones, distorted perspectives, montage, and layering. In citizen journalism, the “angle” of a news report becomes more literal than metaphorical, as it carves out conical volumes of gradated public-interest space. With UGC, “angle” becomes pluralized as “angles,” and the city becomes the set of a rolling incidental cinema for an audience that is also producer.
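
The geometry of such a cone is simple enough to state directly. The following sketch (our own construction, not any legal standard) tests whether a point in the city falls inside a camera’s cone of view:

```python
import math

def in_view_cone(camera_xy, heading_deg, fov_deg, max_range, point_xy):
    """True if point_xy falls inside the camera's horizontal field of view."""
    dx, dy = point_xy[0] - camera_xy[0], point_xy[1] - camera_xy[1]
    dist = math.hypot(dx, dy)
    if not 0 < dist <= max_range:
        return False
    bearing = math.degrees(math.atan2(dy, dx))
    # smallest signed angle between the camera heading and the point
    off_axis = (bearing - heading_deg + 180) % 360 - 180
    return abs(off_axis) <= fov_deg / 2

# a phone at the origin, facing along the x-axis with a 65-degree field of view
print(in_view_cone((0, 0), 0, 65, 100, (50, 20)))  # True: inside the cone
print(in_view_cone((0, 0), 0, 65, 100, (0, 50)))   # False: 90 degrees off-axis
```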

The world of the citizen journalist also has to negotiate another form of ownership space: that of intellectual property (IP) and copyright. A profusion of UGC, such as that collated by Fresco, reduces the default value of each megabyte of footage while allowing value to be attributed based on popularity, much like the YouTube or Twitch payment model. To enforce IP, YouTube uses relatively simple checking software that can rapidly recognize content like songs and match it to a database in real time. The Electronic Frontier Foundation (EFF) is concerned that such automation of UGC filtering will increase the wrongful blocking of content, as in the “dancing baby” case, where a mother filmed her baby dancing to music for which she didn’t own the property rights. The video was swiftly removed by YouTube on behalf of Universal, which objected on the grounds that the video could go viral and earn the mother a not insignificant amount of revenue.
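
The kind of matching at issue can be sketched in a few lines. Real systems such as YouTube’s Content ID are vastly more robust; the quantize-and-hash scheme below is purely our own toy illustration of the principle:

```python
import hashlib

def fingerprint(samples, window=4):
    """Hash overlapping windows of coarsely quantized audio samples."""
    coarse = [round(s, 1) for s in samples]  # quantize so near-identical audio collides
    return {
        hashlib.sha1(str(coarse[i:i + window]).encode()).hexdigest()
        for i in range(len(coarse) - window + 1)
    }

def likely_match(upload, reference, threshold=0.3):
    """Flag an upload if enough of its windows appear in a rights-holder's track."""
    up, ref = fingerprint(upload), fingerprint(reference)
    return len(up & ref) / max(len(up), 1) >= threshold

licensed_track = [0.11, 0.52, 0.33, 0.84, 0.21, 0.61, 0.47, 0.91]
home_video = [0.12, 0.53, 0.31, 0.83, 0.23, 0.62, 0.46, 0.92]  # baby dancing to the song

print(likely_match(home_video, licensed_track))  # True: flagged, rightly or wrongly
```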

New media consumption behaviors like scrolling feeds, touch notifications, and chat bot discussions make it possible to consume news an overwhelming 24 hours a day. There are already 24-hour news channels, but imagine one that doesn’t merely repeat rapidly curated stories, but is continuously drawn from online UGC streams and subsequently edited and cut by AI. In conjunction with an editing logic of causal linkages as previously mentioned, this format might help to represent multiple asymmetrical perspectives beyond the conflict of one world leader against another, one ethnic group against another, or one sports team against another. The theory is that loading multiple subjectivities into the programming is one way of achieving a working objectivity constructed as an aggregation or triangulation, rather than striving toward an ideal state devoid of bias.
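
As a hypothetical sketch of that aggregation logic (the tags and scores are invented for the purpose), a reel-builder might prefer perspectives not yet represented over raw relevance:

```python
def triangulated_reel(clips, length=4):
    """clips: list of (clip_id, perspective_tag, relevance_score)."""
    reel, covered = [], set()
    pool = sorted(clips, key=lambda c: -c[2])  # most relevant first
    while pool and len(reel) < length:
        # prefer a clip from a perspective not yet represented in the reel
        fresh = [c for c in pool if c[1] not in covered]
        pick = (fresh or pool)[0]
        pool.remove(pick)
        reel.append(pick[0])
        covered.add(pick[1])
    return reel

clips = [
    ("a", "gov_statement", 0.9),
    ("b", "gov_statement", 0.8),
    ("c", "bystander", 0.6),
    ("d", "opposition", 0.5),
    ("e", "bystander", 0.7),
]
print(triangulated_reel(clips))  # ['a', 'e', 'd', 'b']: three perspectives before any repeat
```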

Admittedly, in such a “freestyle” reel there would be a precipitous lack of definition with respect to the proportion, agendas, and spheres of influence that generally come with news channels. The BBC recently came under fire for a “fake treehouse scandal,” in which documentary-makers asked West Papuan villagers to rebuild their houses higher so as to appear more epic on camera. The Chinese state media in particular used this as an example of the hypocrisy of major Western media outlets. Given that the BBC is state funded, it is indeed difficult to quantify and compare its aspirations to impartiality, except that “we know” a flawed report on a treehouse is not of the same magnitude as a flawed Chinese state media report on the situation in Xinjiang. When, a few decades from now, the BBC more resembles an IT department than a newsroom, engineering and maintaining tools that curate decentralized media content, it might well spend a decent chunk of resources demonstrating the impartiality of its output in quantitative terms to build trust and brand value.

Last but not least, the choice architecture of the interfaces through which news is consumed defines its ability to strike its audience, and the BBC is looking into various models that tailor content around individual users. The idea is to reduce what “nudge theorists” Richard Thaler and Cass Sunstein call “choice overload,” which can funnel people into making poor choices in the interface or push them to opt out of it entirely. The BBC’s studies have shown that users prefer to choose content by selecting tag words each time, rather than pre-loading a profile of themselves once. This would be “I want to watch a thriller series tonight” rather than “I am 25-35 years old.” The trust we place in a one-time profiling algorithm is understandably very low: it is unsettling that a machine would read us as a fleshy datascape, annoying if it is wrong, and most troubling of all if it is correct. We might become more receptive to what machines can tell us about ourselves if an interface highlights our demographic biases and consumption patterns without appearing to restrict our subsequent choices.
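
A minimal sketch of this per-session tag model, as opposed to a standing profile (the catalogue and tags are our own invention), might look like this:

```python
catalogue = [
    {"title": "Border Stories", "tags": {"documentary", "politics"}},
    {"title": "Night Shift", "tags": {"thriller", "series"}},
    {"title": "The Long Count", "tags": {"thriller", "true-crime"}},
]

def tonight(selected_tags):
    """Rank by overlap with this session's tags; nothing is stored afterwards."""
    ranked = [(len(item["tags"] & selected_tags), item["title"]) for item in catalogue]
    return [title for overlap, title in sorted(ranked, reverse=True) if overlap]

print(tonight({"thriller", "series"}))  # ['Night Shift', 'The Long Count']
```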

Aiming to complement the usual warnings of techno-capitalist clumsiness, we believe that there are many technical and cultural opportunities offered by AI and automation that could circumvent media funding crises and tabloidification—if they are intended to. New tools are presenting ways to achieve wider representation, more proportionate concern, and a greater diversity of contributors in the news. It is imperative that the right people take an interest in their development, and that philosophical discomfort does not leave experimentation with automation to only the most wolfish investors and controlling regimes. An automated news channel will be built by direct and incidental developments, in different languages and spheres, containing different preferences and propensities. Its design will in turn design how and what “you know.”

This article is based on a research project undertaken by Thomas Grogan and Paul van Herk in collaboration with FACT Liverpool and the BBC in Salford Quays, UK.

Thomas Grogan & Paul van Herk

Thomas Grogan is an artist and researcher based in London, and Paul van Herk is an architect and writer based in Accra. They are both alumni of The New Normal program at Strelka Institute.
