The demon of artificial intelligence in journalism: innovations and societal challenges

by Mary Fagioli


Artificial intelligence has fully entered the newsroom. One of the first applications was Quakebot, software developed by the Los Angeles Times in 2014: it could write an article about an earthquake in moments, bringing robot journalism to life. Its algorithms pulled information from the USGS scientific agency and generated automated content, which staff reviewed before publication. Two years later, the Washington Post used Heliograf to produce short news items on the Rio 2016 Games.
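The Quakebot workflow described above can be illustrated with a minimal sketch. This is not the LA Times code; the template wording and the record fields (modeled loosely on the USGS GeoJSON feed) are assumptions for illustration:

```python
# Hypothetical sketch of a Quakebot-style pipeline: a structured earthquake
# record is turned into draft copy that a human editor reviews before publishing.

TEMPLATE = (
    "A magnitude {mag} earthquake struck {place} on {date}. "
    "According to the USGS, the quake occurred at a depth of {depth} km."
)

def draft_article(quake: dict) -> str:
    """Fill the template from a structured earthquake record."""
    return TEMPLATE.format(
        mag=quake["mag"],
        place=quake["place"],
        date=quake["date"],
        depth=quake["depth"],
    )

# Example record with illustrative values (field names are assumptions).
quake = {"mag": 4.2, "place": "5 km NW of Westwood, CA",
         "date": "March 17, 2014", "depth": 9.9}
print(draft_article(quake))
```

The point of the design is that the machine handles only the mechanical step of slotting verified data into prose, leaving editorial judgment to humans.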

Automated journalism is based on data mining, the process of extracting patterns from large amounts of raw data. These technological innovations raise questions about the nature and uses of data, at a time when tech giants such as Facebook and Google have increasingly become news distributors. In this framework, fake news and disinformation can spread easily on the web, undermining democratic freedoms (Obama, 2016) and open debate.



Keywords: automated journalism, freedom of information, artificial intelligence, algorithm, press, journalism, media, natural language generation, robot journalism, news-writing bots, algorithmic journalism, fifth estate, democracy, bias, innovation, technological unemployment, jobs of the future.



A thinking machine has ideological implications. Whether defined as a machine with a mind of its own (Haugeland, 1985) or as the automation of intelligent behaviours (Luger and Stubblefield, 1993), the position artificial intelligence occupies remains controversial. Newell (1973) defined the field as aimed at building intelligent artefacts, where intelligence is operationalized through tests of intellectual and mental capacity.


Technological innovations

Digital disruption has had a significant impact on the journalistic sector, in terms of timing, processes, adoption of the latest technological tools, and new forms of employment. Unlike newspapers and newscasts, websites update constantly, on ever shorter cycles and without the deadlines of traditional news distribution. Operating these technologies requires new technical skills. As a result, the figure of the backpack journalist has arisen: a multidisciplinary professional able to produce content independently. New types of journalism, such as data-driven and immersive journalism, have also emerged. Rich media content likewise requires transversal competencies, from planning to more technical skills such as graphics and web design.

Artificial intelligence is applied daily in many fields: personal assistants in computers and smartphones, search engines, cyber-security, automatic translation, shopping, and online advertising. It is based on algorithms, series of instructions designed to achieve specific results with a certain degree of autonomy. There are three levels of artificial intelligence: Narrow, General, and Super. Narrow AI can perform a single task, such as weather forecasting, playing chess, or searching data; General AI can make sense of its environment, as in social media, gaming, mail, and VR; Super AI can use creativity. Among its many uses in journalism are audience engagement, labour efficiency, story discovery, text generation, automated translation, and intelligent agents such as chatbots and voice assistants. It has also found a place in the publishing industry, which in turn becomes a hybrid of journalism and technology, and it can cover relevant aspects of news gathering, production, and distribution.

News dissemination is one of the focal points of the media system, and now more than ever the journalist has to manage social channels in a mutual exchange with the reader. Among the changes is an increase in flexible work and fixed-term contracts. In journopreneurialism (Prein, 2014), journalists have to run their own business while keeping up with innovations and relating to followers. It is a human-centred form of communication.

Setting aside the dichotomy between tools for civil-rights achievements and instruments of surveillance, repression, and disinformation, digital technology and its automatisms have certainly changed the way newsrooms work. Between the journalist and the machine there is a collaborative relationship; in this context, they are complementary. The Panama Papers investigation in 2016 showed how to build a narrative by processing a huge amount of data, and it represents major progress for newsrooms. The gain is not just reduced time but the very feasibility of the story, given the difficulty of processing and organizing all the data consistently. These technologies thus open the way to increasingly sophisticated investigations, outlining a future of hybridized news work. In the Panama Papers, the data connected individuals whose relationships could delineate corruption or other crimes.

Computational journalism is based on the use of official sources, databases, data, and social science tools, and can be open source. One of its most comprehensive applications is the Guardian's crowdsourced investigation of the MPs' expenses scandal in 2009 (Daniel and Flew, 2010).

Machines still have limitations, especially when they try to probe a private sphere or contextualize a local community. Here human critical thinking and complex communication still make the difference (Levy and Murnane, 2003): the ability to learn, to think outside the box, and to apply metacognition to abstract spaces and new situations. The human worker can still ensure high-quality results.

Artificial intelligence is a branch of computer science in which computers or machines capable of replicating aspects of the human brain are developed. Its subfields include vision, robotics, expert systems, speech recognition, natural language processing, and machine learning (Kayd, 2020). The latest innovation among these algorithms is GPT-3, a language model developed by OpenAI, a research organization co-founded by the entrepreneur Elon Musk. This linguistic algorithm can interpret and write text clearly and correctly, with far more parameters than the older GPT-2, and thanks to deep learning it requires much less human supervision.

Another example of human-machine interaction is the creation of news stories with artificial intelligence. Newsbots are self-learning natural language processing programs that generate content from a template; as a last step, they require the publisher's revision. Newsmakers can thus focus on narrative while the AI collects and interprets the flowing data, augmenting capabilities and reducing inefficiencies. Smart systems can still make mistakes, because they are trained by humans, who make mistakes too, so they need constant validation.
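The template-plus-data approach can be sketched in a few lines. This is a hypothetical toy in the spirit of sports-results bots like Heliograf, not any newsroom's actual code; the function name and wording are assumptions:

```python
# Toy newsbot: choose a narrative variant based on the data, Heliograf-style.
# In production the draft would still be queued for human review before publishing.

def summarize_match(home: str, away: str, home_score: int, away_score: int) -> str:
    """Generate a one-line match report from structured score data."""
    if home_score > away_score:
        return f"{home} beat {away} {home_score}-{away_score}."
    elif home_score < away_score:
        return f"{away} edged out {home} {away_score}-{home_score}."
    else:
        return f"{home} and {away} drew {home_score}-{home_score}."

print(summarize_match("Brazil", "Germany", 1, 2))
# → Germany edged out Brazil 2-1.
```

Even this trivial branching shows where errors creep in: the bot's output is only as good as the data and the template variants humans wrote for it, which is why constant validation is needed.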

Narrativa Inteligencia Artificial is a company that has created natural language processing software for tasks such as writing journalistic texts. The system reworks large amounts of data into human-like language, producing complete sentences and paragraphs. This trend in newsrooms is called robojournalism (Ulfarte Ruiz and Manfredi Sánchez, 2019). It yields higher profits at lower costs, relieving humans of boring and repetitive tasks, with the aim of aligning speed with accuracy at scale. Journalists contribute to shaping and entering the data, and new professional figures emerge: those who process data and those who focus on more cognitive tasks. Gabriele, Narrativa's proprietary AI software, is highly customizable and ranges from SEO positioning to reports generated from statistical data.

In the field of video coding, the R&D department of the Italian public broadcaster RAI is working on two main approaches. The first is hybrid: it combines AI-based algorithms with a traditional video codec, replacing one or more blocks of the existing schema with a learning-based one. The second is an end-to-end video compression method; it is disruptive and based entirely on neural networks. A big problem with artificial intelligence applied to video is computational complexity, because in the end it is a general-purpose encoder that should run on all end-user devices, such as mobile phones, browsers, and TV sets. Another consideration is that the algorithms strictly depend on their training: international standards organizations working on video codecs train neural network algorithms on their own datasets, which are quite different from a broadcaster's. As television standards advance, the content must still display correctly, yet RAI has archives dating back to 1930, with large variance in the material. Unexpected results can therefore occur, such as a neural network removing film grain because it mistakes it for an artefact.

Neural networks can also adopt plainly incorrect behaviour: during a Scottish football match, an artificial intelligence camera operator confused the linesman's bald head for the ball. For now, these kinds of mistakes have no solution. A further aspect worth dwelling on is the choice between beautiful and truthful content: with GAN neural networks, which can reconstruct or create complex image content, truthfulness could be lost (Iacoviello, 2021).

Case studies are plentiful. Texty, one of the news organizations included in the 117-page Polis report (Beckett, 2019), carried out an investigation into illegal amber mining in the forests of three Ukrainian regions. With the help of machine learning algorithms, they analyzed images at scale across a large part of the territory and uncovered relevant crimes. Another, text-based example comes from the Peruvian newsroom Ojo Público, which built its own algorithm, Funes, on official sources to find collusion between the government and big private companies. It was developed with attention to the interpretation and consistency of the expected results, through data prioritization, classification, association, and filtering. These two very different examples, one working on images and the environment, the other on text and corruption, clearly show how creative use of algorithms and machine learning can allow journalists to tell stories in ways that would not otherwise be possible (Peretti, 2021).

In 2011, following IBM Watson's Jeopardy! victory, the computer scientist Noam Slonim started work on a new natural-language-processing machine able to argue about topics such as politics: Project Debater. The types of argumentation the machine had to learn are called patterns; one of the team's scientists, Dan Lahav, estimated that between fifty and seventy of them could cover almost every possible debate question. The first public debate took place in 2019 in San Francisco, between IBM's Project Debater and the British economic consultant Harish Natarajan. It was the first AI system able to exchange views with humans on complex topics and to build persuasive points of view.


Ethical challenges

Social media algorithms induce relationships among users, and these mechanisms build a programmed sociality. News media has become algorithmically attuned, outlining a hybrid system with social media (Chadwick, 2013). This reciprocity is based on relevance and clicks, which in turn engage readers. These new dynamics have pushed journalism to an extreme, giving rise to trends such as clickbait and churnalism, driven by a click-return strategy. Such practices are clearly detrimental to the credibility of the press.

Algorithms sometimes make controversial decisions bordering on censorship, as in the case of the Pulitzer Prize-winning "Napalm Girl" photograph. Facebook's deletion of the image from a post, echoed by the Norwegian prime minister Erna Solberg, raised calls for a revision of the editing policy that regulates Facebook. She argued that the social media giant is the most powerful publisher in the world and that freedom of the press must be protected. A Facebook spokeswoman, however, defended the company's stance: the post had been flagged by a user, and the review team had enforced the community standards. Ultimately, decisions, and mistakes, can be made both by human beings and by algorithms. This also means that the procedure amplifies the biases existing in our society, such as inequalities, social injustices, and stereotypes, because the algorithm is ultimately trained by humans, and humans are biased.

Another interesting aspect to explore is Facebook's eight-day news ban in Australia, in response to a proposed law that would make Facebook and Google pay news publishers for their content. Under these circumstances, Big Tech shows its dominance and the power of the few. Hence the necessity of regulating social media channels, especially attention-grabbing algorithms. Regulation is required to avoid a single market and a single ideology, to the detriment of democratic participation and of the many (Moore and Tambini, 2018).

A substantial percentage of social media accounts are bots: on Twitter, they are estimated at between 9% and 15% of active accounts (Varol et al., 2017). During the 2016 US elections there was a shower of fake news, and two years later more than 80% of these accounts were still active. In this scenario of false users, trumped-up stories, and manipulated videos and images known as deep fakes, disinformation and radical scepticism towards evidence have increased.

Then there is the matter of personal data. Mathematical models profile users and their behaviours. This surveillance capitalism (Deibert, 2019) has a famous precedent: the Facebook–Cambridge Analytica scandal of 2018, in which the data of 87 million Facebook accounts were turned into psychological profiles and sold for political propaganda purposes. Social media has swallowed everything up: the private sphere, government, security, even touching the banking system (Bell, 2016). Platforms are ruled by algorithms based on relevance and, as a direct consequence, they amplify filter bubbles. The term indicates the isolation people are gradually experiencing because of models that predict the user's ideologies and behaviours, at the expense of the diversity that democracy may offer.
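The filter-bubble mechanics can be shown with a toy relevance ranker. This is an illustration of the general principle, not any platform's actual algorithm; the topic sets and scoring function are assumptions:

```python
# Toy illustration of relevance ranking: posts that match a user's click
# history score higher, so unfamiliar topics sink out of view over time.

def relevance(post_topics: set, clicked_topics: set) -> int:
    """Score a post by overlap with the user's past clicks."""
    return len(post_topics & clicked_topics)

posts = [
    {"title": "Local election results", "topics": {"politics", "local"}},
    {"title": "New climate report", "topics": {"science", "climate"}},
    {"title": "Party rally highlights", "topics": {"politics", "rally"}},
]
history = {"politics", "rally"}  # hypothetical click history

ranked = sorted(posts, key=lambda p: relevance(p["topics"], history), reverse=True)
print([p["title"] for p in ranked])
```

Because each click feeds back into the history, content the user never engages with is ranked ever lower, which is the isolation dynamic the filter-bubble term describes.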

Algorithmic accountability investigates the power of making decisions over our lives, highlighting errors, biases, and legal violations: in short, data transparency and security, as well as ethical issues. Transparency, aimed at regulating how data are collected, stored, selected, and analyzed, restores balance in the digital market: whoever exploits data can be monitored. Given the growing power of opaque algorithms in our societies, whose procedures and information are not disclosed, uncontrolled data could be put to various uses, including commercial ones. This algocracy has infiltrated human lives, data, and choices, like a controlling Big Brother. In Italy, possible regulatory solutions have been devised to stem these opaque practices, such as search neutrality and algorithmic transparency (Razzante, 2018). On the other hand, such measures could limit search-engine innovation, so it is also necessary to understand what the benefits would be.

One of the emerging challenges within the AI governance framework is responsible AI, which focuses on the ethical and legal uses of these technologies. It describes rational purposes and behavioural decision-making. Testing and governance criteria are necessary to prevent machine learning systems from being easily hacked, thereby minimizing the risks. Artificial intelligence can be used for malicious purposes, or in any case to defraud a service or its users, yet it is the same artificial intelligence that helps to counter these phenomena. It becomes both the virus and the vaccine, as in the fight against fake news and deep fakes (Metta, 2021).


Impacts on society

The digital revolution has changed our society, giving everyone a voice and making content accessible. The internet is, ideally, open and free. However, the real independence of the press, as an institution within a democracy, seems threatened. The new environment is user-centred in a globally competitive world and provides less political and world news to those who are not interested and may prefer entertainment, weather, sports, job listings, and social interaction. The fourth estate had the mission of speaking truth to power; as content adapts to the audience, it loses credibility (Starr, 2012).

A new army of truth-seekers has arisen on the internet, between government forums and open documents. Social media has broken down the barriers of mainstream information, becoming a feature of the democratic model. Everyone has the potential to offer thoughts and produce content as part of the digital ecosystem. This is the fifth estate: the expansion of the fourth estate to individuals grouped in non-traditional media outlets. The Grenfell Tower tragedy showed how a massive reaction on Twitter influenced public policy formation: after this clicktivism, the government opened a public investigation at unusual speed. The event also highlighted the decline local print journalism has experienced over the past two decades; indeed, through social media, journalists easily connect with locals and are able to cover the story. The present and future of the news media see an osmosis between journalists and technology. The biggest challenge is to align the speed of machines with the reliability of data and information, remaining relevant in a hyper-social world.

Social media has the power to build a loyal audience, and readers manage to have stronger relationships with the editorial staff: there is sharing, collaboration, and participation. Digitization has increased the use of visual products. The online user experience has moved from point-and-click to touch and voice commands, landing on the internet of things, and content now needs to adapt to device capabilities. The younger generations of millennials and Generation X look for information on the internet, often relying on these channels; Reuters has had to meet these demands, including by adding new categories such as fun and weird news. Gen Z, those born after 1996, consumes more visual content than text, prefers interactive navigation, and spends time in new immersive storytelling technologies like VR and AR.

Two experimental VR projects in journalism are the Guantanamo camp documentary and Project Syria, both aiming at immersive involvement. Project Syria, produced by De la Peña, was built from photographic images and audio recordings. Here the authenticity and objectivity of the stories collide with the persuasiveness of the VR medium: the perception is that of the real world, unlike on screens, and the full-body illusion is due to the close connection with the sensorimotor system.

Other remarkable innovations are telepresence robots, which save reporters' lives by covering news in conflict zones. One of the first experiments was Csikszentmihalyi's Afghan Explorer. These technological means know no bias, collect all possible data, and do not suffer serious psychological consequences. Other tools in use include drones, robotic snakes, and artificial eyes.



The contemporary era has based its existence on the idea of scientific and technological progress. The past century defeated diseases such as tuberculosis and plague and developed the cities we live in today, along with the petrol engine, electricity, and the telephone. This is where modern economic theory was born. The myth of progress and the growth of profit come into conflict with human capital. We are entering the post-human era, in which machines and people will converge and technology seems to be the next Belphegor. Up to 50% of jobs done by humans may not survive, and within 5-10 years most written stories could be produced by machines (Lemelshtrich, 2018). Google's futurist-in-chief Ray Kurzweil hypothesizes that by 2029 complex natural-language systems could be endowed with consciousness.

Big data, artificial intelligence, cybernetics, and behavioural economics are influencing and shaping a data-controlled society ever more similar to the Orwellian Big Brother. The risk is not being able to control the creature we have created.


Reference List


Cast AI Podcast

The "Cast AI" podcast focuses on the artificial intelligence daemon in newsrooms. It explores the entrepreneurial point of view, the divergent opinion of a journalist of the Fourth Estate, an overview of the innovation ecosystem, and the perspective of insiders from the RAI R&D department.

The interviewees:
1. The CEO of Narrativa David Llorente
2. The journalist and broadcaster at the BBC Leigh Banks
3. The Manager of JournalismAI Mattia Peretti
4. The Lead Research Engineer at RAI Roberto Iacoviello and the Senior Researcher at RAI Sabino Metta

Special thanks go to RoboPeter's voice 🙂
