In recent years, more and more digital, automated and self-learning, or so-called AI (Artificial Intelligence), tools have appeared on the market to support the journalistic research, editorial and verification process. Dataminr, for example, uses algorithmic tools to process millions of tweets so that journalists can follow the latest trends. Google has also entered the journalistic market, offering various AI tools to organise, select and verify information.
Even though AI tools and search engines work efficiently, they are hardly objective: they organise information based on user popularity. Research (Tylor, 2015) also shows that information ranked higher on a search results page is often perceived as more relevant and more credible. Search engines thus play a crucial role in how information reaches the public.
The fact that algorithms are not neutral calls for research into the influence of automated tools in journalism. To what extent is the journalistic process steered by these tools? What interests, norms and values do these tools implicitly embody, and what does journalism take on board by using them?
The key question in this research is: how are journalists influenced by AI in the selection and verification of (relevant, diverse and reliable) information?
This key question is divided into four sub-questions:
- To what extent are AI tools used in the daily journalistic research process?
- To what extent are journalists aware of the use and functioning of such AI tools?
- How does the use of automated tools affect the diversity and reliability of journalistic information?
- How can the functioning of AI in the journalistic research process be made more transparent?
This is how journalists think about Artificial Intelligence
Google contributes to nearly every news article published in the Netherlands. Is that a problem?