Abstract
Smartphones have become popular tools for data collection in the social sciences due to their high prevalence and mobility. Surveys, experience sampling (ESM), and tracking/logging are among the most widely used smartphone data-collection methods. However, existing apps are either commercial solutions, require programming skills, collect sensitive data, or do not support all three methods simultaneously. Combining two or more data collection methods in one study additionally burdens both researchers and participants. This paper introduces MART (Mobile Assessment Research Tool), an app for Android and iOS devices that addresses these problems. Content and data collection settings can be customized dynamically via a web interface, so no new version of the app has to be compiled when changes are made. While the logging functionality is only supported on Android devices, on iOS devices a data donation from the Screen Time feature is requested instead. MART is already functional, and its source code is available on GitHub under an open-source license. The long-term revisions required for its use in custom projects without reprogramming are currently under development.
1 Introduction
Over the last 15 years, smartphones have become highly relevant in society and social research. In addition to studies on smartphone use itself, smartphones have been used as data collection tools due to their ubiquity in everyday life. However, researchers in communication science and related disciplines, such as (media) psychology or political science, are often interested in variables requiring different valid measurement methods. For example, attitudes towards certain issues can be measured in surveys, as these are relatively stable and reliable concepts that show little intra-individual variation and are easy to retrieve for respondents (cf. Bhattacherjee 2012, pp. 73–74). Smartphone use (mostly measured through frequency and duration) is usually also assessed retrospectively via self-reports in (online) surveys. Such data are, however, typically influenced by poor retrospective estimation and social desirability bias (cf. Schwarz and Oyserman 2001, p. 129). As a result, these data differ considerably from those produced by less subjective, technologically assisted methods such as logging, which are preferable in this case (cf. Parry et al. 2021, p. 1543). Other subjective variables, such as well-being or reactions to news articles, are also affected by such problems but cannot be assessed through logging. They must be measured via self-reports, which should be completed as soon as possible after the situation of interest occurs. This can be accomplished with in-situ methods, such as the Experience Sampling Method (ESM, cf. Larson and Csikszentmihalyi 2014). All three methods of data collection have been implemented on smartphones before (e.g., cf. Johannes et al. 2020, pp. 8–9; Naab et al. 2018, p. 6), and smartphones have also been used for data donations of other multimodal information for quite some time (e.g., cf. Otto et al. 2022; Raento et al. 2009).
Many data collection apps in the past were programmed for individual studies and were not maintained or used afterward; only a few were published for other researchers to use. The published apps were either limited to particular measurement methods (e.g., cf. Menthal n.d.; Movisens 2022), required programming skills (cf. AWARE 2022), or were either commercial solutions or collected non-optional, sensitive data (e.g., GPS locations: cf. Murmuras n.d.). Furthermore, people’s willingness to participate in studies using passive observation methods like logging is already limited (cf. Struminskaya et al. 2021, pp. 448–451). Therefore, apps combining logging with other methods are bound to reduce willingness to participate even further. The GESIS-Leibniz Institute for the Social Sciences is currently working on an app dedicated to ESM data collection and another for assessing sensor data (logging, accelerometer, etc.), which tackle some of these problems (cf. GESIS n.d.). However, they do not combine self-reported and behavioral data assessments within a single app. As different methods are required for validly measuring smartphone use and other variables, it is necessary to establish a universal tool for research that implements all these methods. A new smartphone app is, therefore, ideally suited for this purpose.
This paper presents the Android and iOS app MART (Mobile Assessment Research Tool) to the communication and media research community. It qualifies as free and open-source software (FOSS), is customizable without programming knowledge, and combines all three measurement methods mentioned. With this app, smartphone use and other variables relevant to the social sciences can be assessed using the most appropriate method. Data collection occurs right on the device that study participants use daily, reducing the burden for them and the researchers.
In line with recommended open science practices (cf. Peter et al. 2020), the source code of MART was already made available via GitHub. Currently, the developers’ goal is to prepare a framework that enables any researcher in the field to use it. This paper describes MART’s use cases, functionality, backend mechanics and prospects.
2 Use cases
Many studies in communication science and adjacent fields combine multiple data collection methods (e.g., cf. Andrews et al. 2015; Bjørner 2016; Burt and Alhabash 2018; Naab et al. 2018). Given the participant burden of single methods like ESM alone (cf. Eisele et al. 2020, pp. 10–13), the combination of and transition between multiple methods in a study can be expected to increase this burden even further. MART reduces the participant burden as far as possible by allowing for combinations of multiple methods on a single device: the participant’s smartphone.
For example, MART enables researchers to first assess sociodemographic information and attitudes toward specific behaviors using a survey. Then, researchers can measure whether participants’ self-reported behaviors align with their attitudes using in-situ measurement through ESM. If a behavior is associated with smartphone use, the logging functionality adds an objective assessment of that behavior, which can be used to evaluate the validity of the ESM data.
Another possible application of MART is using only one or two of these measurement methods. While it is possible to conduct online surveys on platforms such as SurveyMonkey, ESM using dedicated apps (e.g., cf. Movisens 2022), and logging using custom solutions (e.g., cf. Toth and Trifonova 2021, pp. 3–4), MART can handle all these methods without switching between tools within and between studies. Consequently, it is an all-inclusive solution for combining methods and for single-method studies.
3 Functionality
MART was initially developed by nvii-media GmbH (2022) as part of a research project led by the author of this article. The project was funded by the German Research Foundation (DFG) and involved the development of the app and participant recruitment for a study that used it. This chapter describes the implementation of each of the three data collection methods enabled by MART. Other than differences in the implementation of logging/data donation, the Android and iOS versions of the app are identical.
3.1 Survey
The most basic functionality of the app is the survey, which largely corresponds to online surveys implemented on platforms like Unipark, Qualtrics, LimeSurvey, LamaPoll, or SoSci Survey. It is currently possible to set up pages containing text only (e.g., for briefings or question introductions) or four different types of items.
Radio buttons serve the purpose of selecting one element from a list of options. Alternatively, the same options can be displayed as a drop-down menu. Sliders can be used as an alternative to radio buttons, which is helpful if options are numeric. Checkboxes provide a list of options, but they enable multiple choice. Finally, text input can be used for the assessment of non-standardized information. All scales/answering options and default values can be customized. Items can be grouped in item batteries, within which their order can be randomized.
Additionally, there is a mechanism for time-limited questions (e.g., five seconds per item). This is used to assess habitual media use initiation using the Response-Frequency Measure of Media Habit (RFMMH) described by Naab and Schnauber (2016). These time-limited questions can provoke choices under time pressure to access unconscious tendencies (cf. Naab and Schnauber 2016, pp. 134–136).
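To illustrate the item types and the time-limit mechanism described above, the following Python sketch models a survey item. This is a hypothetical data model for illustration only — the field and class names are assumptions, not MART's actual internal representation:

```python
from dataclasses import dataclass, field
from typing import Optional

@dataclass
class SurveyItem:
    """Hypothetical model of a MART-style survey item (names are illustrative)."""
    item_id: str
    item_type: str                      # "radio", "dropdown", "slider", "checkbox", "text"
    question: str
    options: list = field(default_factory=list)  # empty for free-text items
    default: Optional[str] = None                # optional preselected value
    time_limit_s: Optional[int] = None           # e.g. 5 for RFMMH-style timed items

    def validate(self) -> bool:
        """Standardized items need options; free-text items must not have any."""
        if self.item_type == "text":
            return not self.options
        return len(self.options) > 0

# A time-limited, RFMMH-style single-choice item
rfmmh_item = SurveyItem(
    item_id="q1",
    item_type="radio",
    question="Which medium would you use first?",
    options=["Smartphone", "TV", "Radio", "Newspaper"],
    time_limit_s=5,
)
print(rfmmh_item.validate())  # True
```

A model like this makes it straightforward to group items into batteries and shuffle their order before rendering.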
3.2 ESM
All functions described in the previous section are also available for ESM questionnaires. Additionally, it is possible to set up a signaling schedule. In this case, a signal is equivalent to a notification on the smartphone screen encouraging the user to fill out an ESM questionnaire.
Time frame and intervals
First, researchers have to define the maximum number of ESM questionnaires a study participant should be able to complete. Then, they have to set a specific time frame during the day within which signals are enabled (full hours spanning an even number of hours, e.g., 8 am–10 pm). This time frame is then split into intervals of two hours each (e.g., 8 am–10 am, 10 am–12 pm, 12 pm–2 pm, etc.). Additional options will be added.
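The interval logic can be made concrete with a short sketch. The following Python function (illustrative only — MART itself is a native mobile app, and this function name is an assumption) splits a daily signaling time frame into equally sized intervals:

```python
def split_time_frame(start_hour: int, end_hour: int, interval_hours: int = 2):
    """Split a daily time frame (full hours, even span) into equally sized
    intervals, as described for the ESM scheduler. Hours are plain ints."""
    span = end_hour - start_hour
    if span <= 0 or span % interval_hours != 0:
        raise ValueError("time frame must span a positive multiple of the interval length")
    return [(h, h + interval_hours) for h in range(start_hour, end_hour, interval_hours)]

# 8 am–10 pm yields seven two-hour intervals
print(split_time_frame(8, 22))
# [(8, 10), (10, 12), (12, 14), (14, 16), (16, 18), (18, 20), (20, 22)]
```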
Schedules
In ESM research, different signaling schedules are used. Schedules are either signal-contingent, interval-contingent, or event-contingent (cf. van Berkel et al. 2019, p. 119). Currently, the first two types of signals can be set up in the app.
Signal contingent schedules signal randomly within the defined time frame (cf. van Berkel et al. 2019, p. 119). Interval contingent schedules signal at specific times within the time frame in relation to the intervals (cf. van Berkel et al. 2019, p. 119). An additional time constraint can alter this mechanism. This constraint is a specific amount of time that needs to have passed since the last signal in order to avoid two signals in immediate succession (e.g., 9:57 and 10:02). For example, if study participants are asked to report on behavior that occurred within the past half hour, at least 30 min need to pass between two signals to avoid overlap. Signal and interval contingent schedules can be combined so signals are randomized within intervals. This way, researchers can ensure that signals are sufficiently spread throughout the day. Event contingent schedules signal when specific events occur, such as unlocking the device (cf. van Berkel et al. 2019, p. 119). This approach is no longer feasible since Android version 9, as apps no longer have constant access to information such as unlocking and locking times for data protection reasons (cf. Broadcast overview 2022). On iOS, accessing this type of information is not possible either (see section Logging/data donation).
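The combined signal- and interval-contingent scheduling with a minimum time constraint can be sketched as follows. This Python snippet is an illustrative reimplementation of the described logic, not MART's actual source code; the function name and the minutes-since-midnight representation are assumptions:

```python
import random

def draw_signals(intervals, min_gap_minutes=30, rng=None):
    """Draw one random signal time (minutes since midnight) per interval,
    enforcing a minimum gap to the previous signal so that two signals
    never fire in immediate succession (e.g., 9:57 and 10:02)."""
    rng = rng or random.Random()
    signals = []
    last = None
    for start_h, end_h in intervals:
        lo, hi = start_h * 60, end_h * 60 - 1
        if last is not None:
            lo = max(lo, last + min_gap_minutes)  # respect the time constraint
        if lo > hi:
            continue  # no admissible time left in this interval; skip it
        last = rng.randint(lo, hi)
        signals.append(last)
    return signals

signals = draw_signals([(8, 10), (10, 12), (12, 14)], rng=random.Random(42))
# consecutive signals are guaranteed to be at least 30 minutes apart
assert all(b - a >= 30 for a, b in zip(signals, signals[1:]))
```

Randomizing within fixed intervals in this way spreads signals across the day while keeping them unpredictable for participants.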
In each interval, an ESM questionnaire can only be filled out once the signal has been triggered. There are two ways for study participants to access the questionnaire: they can either tap the notification, which opens the app and presents the questionnaire, or they can open the app themselves. The latter option is useful if a participant has dismissed the notification (on purpose or by mistake) but still wants to participate during the current interval. If no ESM questionnaire is filled out and submitted by the end of an interval, the process restarts in the next one.
3.3 Logging/data donation
On Android devices, MART can access a log file containing all events that take place on the smartphone. Events are categorized into event types (for more details, see the official Android documentation, cf. Google 2022). They include activities like locking or unlocking the screen, opening or closing apps, and booting or shutting down the device. The log file “contains data from up to two weeks in the past” (Toth and Trifonova 2021, p. 3) up to the time it is accessed. The accessible information includes the event type, a time stamp, and additional information depending on the event type (e.g., app names). Please note that no information on contents received on the smartphone, such as texts or images, is captured.
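From such an event log, app use sessions can be reconstructed by pairing foreground and background events per app. The sketch below uses simplified event labels and tuple shapes as stand-ins for the actual Android `UsageEvents` types — a hedged illustration of the principle, not MART's implementation:

```python
def app_sessions(events):
    """Pair foreground/background events per app into (app, start, end, duration)
    sessions. `events` is a time-ordered list of (timestamp_s, kind, app) tuples
    with kind in {"foreground", "background"} — a simplified stand-in for the
    event types in Android's usage-event log."""
    open_since = {}
    sessions = []
    for ts, kind, app in events:
        if kind == "foreground":
            open_since[app] = ts
        elif kind == "background" and app in open_since:
            start = open_since.pop(app)
            sessions.append((app, start, ts, ts - start))
    return sessions

log = [
    (100, "foreground", "messenger"),
    (160, "background", "messenger"),
    (200, "foreground", "news"),
    (290, "background", "news"),
]
print(app_sessions(log))
# [('messenger', 100, 160, 60), ('news', 200, 290, 90)]
```

Note that, consistent with the text above, only event types, timestamps, and app names enter this computation — no content received on the device.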
On iOS devices, events cannot be assessed this way (cf. Apple Inc. 2022). MART still offers a way to gather aggregated phone and app use duration and frequency. This works by giving study participants detailed instructions on accessing the iOS feature Screen Time, describing which information to read there (e.g., daily average use duration and number of pickups during the current week), and asking them to enter these figures into designated fields. This approach is data donation, as the data are not logged passively in the background but manually transferred and curated by participants (cf. Ohme et al. 2021, pp. 294–295).
4 Backend mechanics
MART is configured to retrieve all contents and settings from a WordPress (WP) page. On the WP page, researchers can define and adjust all items and scales for the survey and the ESM questionnaires, as well as the ESM and logging settings. This allows for adjusting content and settings anytime, as the app does not need to be re-compiled from the source code every time something changes. This is also why MART could be provided in the Android and iOS app stores, especially benefiting less tech-savvy study participants (see section Deployment). At the same time, this means that the app requires Internet access to retrieve necessary information (e.g., when accessing an ESM questionnaire). The connection between the app and WP works both ways: survey, ESM, and logging data are transferred from the app to the WP page and are accessible there. Communication between the app and WP is achieved through the WP REST API (cf. WordPress.org n.d.). Technically, MART can be altered to communicate with any other Content Management System (CMS). However, this requires further source code adjustments as it is currently tailored to a WP setup. MART is also provided as a web app that is accessible through a browser. Its main purpose is to enable downloading collected data (survey and ESM answers, logging/data donation data, and information on the operating system used) in separate files (.csv format).
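Conceptually, the app pulls a JSON payload from the WP REST API and builds its questionnaires and schedules from it. The following sketch parses such a payload; the field names are illustrative assumptions and not the actual schema MART uses:

```python
import json

# Hypothetical payload as a WordPress REST endpoint might deliver it;
# all field names here are assumptions for illustration.
payload = """
{
  "esm_settings": {"time_frame": [8, 22], "min_gap_minutes": 30},
  "items": [
    {"id": "q1", "type": "radio", "question": "How do you feel right now?",
     "options": ["good", "neutral", "bad"]}
  ]
}
"""

config = json.loads(payload)
start, end = config["esm_settings"]["time_frame"]
print(f"Signals between {start}:00 and {end}:00")  # Signals between 8:00 and 22:00
for item in config["items"]:
    print(item["id"], item["type"], len(item.get("options", [])))  # q1 radio 3
```

Because the app only consumes such a payload at runtime, settings can change server-side without recompiling the app — the property the paragraph above describes.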
4.1 Deployment
MART is fully functional but not yet customizable without programming knowledge. The source code is available on GitHub (cf. Toth 2023) and will be maintained and updated regularly. Documentation of the procedure for customizing and using the app is also under development.
The author is collaborating with researchers from the University of Bremen to establish an institution that supports MART’s technical and infrastructural advancement. This process also involves developing a system involving a database and account management that 1) lets researchers set up custom studies and 2) allows participants to participate in multiple studies via MART. Providing servers for collected data in accordance with GDPR is also part of the collaboration, so researchers do not need to set up individual servers and look after proper data protection themselves (see Data protection section). MART is available in the Google Play Store [t.b.a.] and the Apple App Store [t.b.a.], which significantly improves accessibility for study participants, as they do not have to manually download and install the app using a .apk file (Android) or another app (iOS, cf. Apple Inc. 2023).
4.2 Data protection
The data collected by MART are anonymized from a technical standpoint. On Android, the logging function only collects the start and end points of phone and app uses. On iOS, it merely asks participants to report aggregated figures provided by Screen Time. It is therefore impossible to trace this information back to an individual participant. The degree of anonymity of the survey and ESM assessments naturally depends on the items themselves.
Otherwise, the security of all data transferred with MART depends on the WP setup used, which is left to interested researchers until the collaboration described earlier progresses. While WordPress.com offers hosting on its own servers (cf. WordPress.com 2022), these may not align with the researchers’ data protection requirements and should be checked individually (cf. WordPress.com n.d.). Hosting WP on a custom (local) server (cf. WordPress.org 2022) is possible if this expertise is available. Alternatively, it is possible to use a third-party hosting service with sufficient data protection measures, such as STRATO (cf. STRATO AG 2022) or HostPress (cf. HostPress GmbH 2022). These services offer server space, hosting, and administration of a ready-to-use WP page, which can then be connected to MART.
5 Prospects
In this article, the author presented MART, an app for Android and iOS devices that enables social science researchers to combine surveys, ESM, and logging/data donation within a single tool. These three data collection methods are used to assess attitudes, behaviors, and most other variables in the social sciences. While many data collection apps have already been developed for research, they each have certain disadvantages—they are either constrained in their functionality, not available for free, only accessible to programmers, or only support a single data collection method. MART aims to combine all relevant data collection methods in an app that can be customized without programming knowledge and qualifies as FOSS.
First and foremost, the development of MART needs to proceed as outlined in the section Deployment. It is of utmost priority that creating a WP page on a custom or third-party server is automated and simplified as much as possible. Once the setup is finished, editing contents and settings in WP is a rather trivial task that does not differ much from setting up online surveys with popular providers. To achieve this, a sustainable maintenance and deployment strategy is urgently needed and currently in development.
As a further step, the author aims to add further data collection methods to MART. As with the already available methods, it should be possible to activate and deactivate them individually for each study. One of these additional methods may be the assessment of location (GPS) data (e.g., cf. Bayer et al. 2018) for studies that require precise information on participants’ whereabouts; however, this information is highly sensitive and should only be assessed if absolutely necessary. Another method concerns the assessment of the content received on the device. There are multiple approaches to assessing this, including the frequent, automatic creation of screenshots (cf. Yee 2022). As these contain highly sensitive data, new techniques are currently being developed in which the content of such screenshots is identified and encoded into categories on the device — this way, the screenshots can be discarded immediately and participant anonymity is protected (cf. Zerrer et al. 2022). Lastly, eye-tracking allows for precisely measuring participants’ attention to information on the screen, which could complement self-reported and behavioral data. This method has gained popularity in social science research in recent years and usually requires complicated and elaborate technical setups (e.g., cf. Ohme et al. 2022, p. 345). However, the increased quality of mobile front-facing cameras has enabled their use for eye-tracking as well (cf. eye square GmbH 2022), which could be implemented in MART in the future.
MART is a mobile application that combines surveys, ESM, and logging/data donation in a single, universal tool that qualifies as FOSS. It does not require programming skills and does not collect unnecessary or sensitive data. It is particularly useful for studies that combine multiple data collection methods but can also be used if only one or two of these methods are needed. It reduces the burden for researchers and study participants by allowing for a seamless transition between different phases of data collection. The implementation of each method is not yet as refined as in applications that only focus on a single method. MART is currently under development and will be available to any researcher to use without requiring programming as soon as possible. To achieve this, the author closely collaborates with other researchers to progress the development and ensure sustainable usability in the future. Feedback is very welcome and will help to improve the final product.
Change history
15 October 2023
An Erratum to this paper has been published: https://doi.org/10.1007/s11616-023-00816-5
References
Andrews, S., Ellis, D. A., Shaw, H., & Piwek, L. (2015). Beyond self-report: tools to compare estimated and real-world Smartphone use. PLoS One, 10(10), e139004. https://doi.org/10.1371/journal.pone.0139004.
Apple Inc. (2022). Meet the Screen Time API. https://developer.apple.com/videos/play/wwdc2021/10123/. Accessed 2 May 2023.
Apple Inc. (2023). Beta testing made simple with TestFlight. https://developer.apple.com/testflight/. Accessed 2 May 2023.
AWARE (2022). AWARE. https://awareframework.com/. Accessed 2 May 2023.
Bayer, J., Ellison, N., Schoenebeck, S., Brady, E., Falk, E. B. (2018). Facebook in context(s): measuring emotional responses across time and space. New Media and Society, 20(3), 1047–1067. https://doi.org/10.1177/1461444816681522.
van Berkel, N., Goncalves, J., Lovén, L., Ferreira, D., Hosio, S., Kostakos, V. (2019). Effect of experience sampling schedules on response rate and recall accuracy of objective self-reports. International Journal of Human Computer Studies, 125, 118–128. https://doi.org/10.1016/j.ijhcs.2018.12.002.
Bhattacherjee, A. (2012). Social science research: principles, methods, and practices (2nd edn.). Tampa, Florida, USA: Anol Bhattacherjee. https://digitalcommons.usf.edu/oa_textbooks/3/
Bjørner, T. (2016). Time use on trains: media use/non-use and complex shifts in activities. Mobilities, 11(5), 681–702. https://doi.org/10.1080/17450101.2015.1076619.
Broadcast overview. (2022). https://developer.android.com/guide/components/broadcasts#manifest-declared-receivers. Accessed 2 May 2023.
Burt, S. A., Alhabash, S. (2018). Illuminating the nomological network of digital aggression: results from two studies. Aggressive Behavior, 44(2), 125–135. https://doi.org/10.1002/ab.21736.
Eisele, G., Vachon, H., Lafit, G., Kuppens, P., Houben, M., Myin-Germeys, I., Viechtbauer, W. (2020). The effects of sampling frequency and questionnaire length on perceived burden, compliance, and careless responding in experience sampling data in a student population. Assessment. https://doi.org/10.1177/1073191120957102.
eye square GmbH (2022). Smartphone eye-tracking. https://www.eye-square.com/en/smartphone-eye-tracking/#case-study. Accessed 2 May 2023.
GESIS (n.d.). Digitale Verhaltensdaten. https://www.gesis.org/institut/digitale-verhaltensdaten. Accessed 2 May 2023.
Google (2022). UsageEvents.Event. https://developer.android.com/reference/android/app/usage/UsageEvents.Event. Accessed 2 May 2023.
HostPress GmbH (2022). WordPress Hosting von HostPress. https://www.hostpress.de/. Accessed 2 May 2023.
Johannes, N., Meier, A., Reinecke, L., Ehlert, S., Setiawan, D. N., Walasek, N., Dienlin, T., Buijzen, M., Veling, H. (2020). The relationship between online vigilance and affective well-being in everyday life: Combining smartphone logging with experience sampling. Media Psychology. https://doi.org/10.1080/15213269.2020.1768122.
Larson, R., Csikszentmihalyi, M. (2014). The experience sampling method. In Flow and the foundations of positive psychology: the collected works of Mihaly Csikszentmihalyi (pp. 21–34). Dordrecht: Springer. https://doi.org/10.1007/978-94-017-9088-8.
Menthal (n.d.). https://www.menthal.org/. Accessed 2 May 2023.
Movisens (2022). movisensXS. https://www.movisens.com/en/products/movisensxs/. Accessed 2 May 2023.
Murmuras (n.d.). Offer pricing. https://academia.murmuras.com/pricing/. Accessed 2 May 2023.
Naab, T. K., Schnauber, A. (2016). Habitual initiation of media use and a response-frequency measure for its examination. Media Psychology, 19(1), 126–155. https://doi.org/10.1080/15213269.2014.951055.
Naab, T. K., Karnowski, V., Schlütz, D. (2018). Reporting mobile social media use: how survey and experience sampling measures differ. Communication Methods and Measures, 13(2), 126–147. https://doi.org/10.1080/19312458.2018.1555799.
nvii-media GmbH (2022). nvii-media. https://www.nvii-media.com/. Accessed 2 May 2023.
Ohme, J., Araujo, T., de Vreese, C. H., Piotrowski, J. T. (2021). Mobile data donations: assessing self-report accuracy and sample biases with the iOS screen time function. Mobile Media Communication, 9(2), 293–313. https://doi.org/10.1177/2050157920959106.
Ohme, J., Maslowska, E., Mothes, C. (2022). Mobile news learning—investigating political knowledge gains in a social media newsfeed with mobile eye tracking. Political Communication, 39(3), 339–357. https://doi.org/10.1080/10584609.2021.2000082.
Otto, L. P., Thomas, F., Glogger, I., de Vreese, C. H. (2022). Linking media content and survey data in a dynamic and digital media environment—mobile longitudinal linkage analysis. Digital Journalism, 10(1), 200–215. https://doi.org/10.1080/21670811.2021.1890169.
Parry, D. A., Davidson, B. I., Sewall, C. J. R., Fisher, J. T., Mieczkowski, H., Quintana, D. S. (2021). A systematic review and meta-analysis of discrepancies between logged and self-reported digital media use. Nature Human Behaviour, 5(11), 1535–1547. https://doi.org/10.1038/s41562-021-01117-5.
Peter, C., Breuer, J., Masur, P. K., Scharkow, M., Schwarzenegger, C. (2020). Empfehlungen zum Umgang mit Forschungsdaten in der Kommunikationswissenschaft – AG Forschungsdaten im Auftrag des Vorstands der DGPuK. SCM Studies in Communication and Media, 9(4), 599–626. https://doi.org/10.5771/2192-4007-2020-4-599.
Raento, M., Oulasvirta, A., Eagle, N. (2009). Smartphones: an emerging tool for social scientists. Sociological Methods and Research, 37(3), 426–454. https://doi.org/10.1177/0049124108330005.
Schwarz, N., Oyserman, D. (2001). Asking questions about behavior: cognition, communication, and questionnaire construction. American Journal of Evaluation, 22(2), 127–160. https://doi.org/10.1016/S1098-2140(01)00133-3.
STRATO AG (2022). STRATO. https://www.strato.de/. Accessed 2 May 2023.
Struminskaya, B., Lugtig, P., Toepoel, V., Schouten, B., Giesen, D., Dolmans, R. (2021). Sharing data collected with Smartphone sensors: willingness, participation, and nonparticipation bias. Public Opinion Quarterly, 85(S1), 423–462. https://doi.org/10.1093/poq/nfab025.
Toth, R. (2023). MART (Version 0.22.0) [Repository]. https://github.com/tothrol/MART. Accessed 2 May 2023.
Toth, R., Trifonova, T. (2021). Somebody’s watching me: Smartphone use tracking and reactivity. Computers in Human Behavior Reports. https://doi.org/10.1016/j.chbr.2021.100142.
WordPress.com (2022). Hosting. https://wordpress.com/hosting/. Accessed 2 May 2023.
WordPress.com (n.d.). Your WordPress.com site and the GDPR. https://wordpress.com/support/your-site-and-the-gdpr/. Accessed 2 May 2023.
WordPress.org (2022). Get WordPress. https://wordpress.org/download/. Accessed 2 May 2023.
WordPress.org (n.d.). REST API Handbook. https://developer.wordpress.org/rest-api/. Accessed 2 May 2023.
Yee, A. (2022). ScreenLife capture. https://www.andrewzhyee.com/screenlifec/. Accessed 2 May 2023.
Zerrer, P., Krieter, P., Puschmann, C. (2022). Video-based mobile screen logging of young activists’ news consumption [Presentation]. International Communication Association (ICA), Toronto, Canada.
Funding
The article processing charge was funded by the Federal Ministry of Education and Research (BMBF), grant no. 16DII131, and the Open Access Publication Fund of the Weizenbaum Institute for the Networked Society, Berlin.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article’s Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article’s Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Toth, R. One App to Assess Them All. Publizistik 68, 281–290 (2023). https://doi.org/10.1007/s11616-023-00788-6