Special Topic: Argumentative Intelligence (Arguing)

Many decision-making situations are characterized by an overwhelming amount of information, complex dependencies between factors, and multiple decision criteria that stand in a trade-off relationship to each other, so that weights or preferences are needed to favor one alternative over another. Individuals and organizations are quickly overwhelmed by the plethora of decision options and alternatives. In such situations, decision makers need to systematically consider the arguments for and against each option under different assumptions, perspectives, and preferences in order to reach an informed, justified, and balanced decision. Automatic decision support by machines will play an increasing role in such contexts, since humans have difficulty taking all available information into account and understanding the impact of decisions at different levels and for different stakeholders. In sum, we need machines that can provide rational argumentation support.

Such rational argumentation machines are still rare, even among those that nowadays are claimed to be “explainable”. The reason is that machines lack domain knowledge and knowledge about causal relationships, as well as an understanding of how premises and assumptions relate systematically to conclusions—a sine qua non for argumentative decision support. Without an understanding of the relation between premises and conclusions, without the ability to compare and evaluate different arguments, without the ability to understand how to resolve trade-offs and the implications thereof, without the ability to provide counter-arguments that attack an inferential step, or without the ability to challenge another party’s reasoning, there cannot be rational decision support by machines except in the most low-level applications, such as recognizing tumors in diagnostic images. Without argumentative abilities such as these, there is likely to be no rational level at which machines and humans can cooperate in decision making. This leaves us with three scenarios:

Machine decision making: In this scenario, machines are the only decision makers. The scenario raises all the well-discussed ethical and practical questions about responsibility, accountability, transparency, etc. However, this model may be justified in situations where timeliness or cost-effectiveness are decisive, and where the effects or repercussions of the decisions are limited or controllable.

Human-confirmed machine decision making: Here, machines make decisions, but humans have to confirm or reject them. Humans may or may not be able to understand the reasons or relationships that lead a machine to draw a certain conclusion and thus may or may not be able to meaningfully intervene. Even if they are able to inspect the machine’s model, they may or may not be able to relate the decision to their own background knowledge or to decision-making processes that rely on domain knowledge and an understanding of the logical and causal relationships between the key factors.

Humans as the ultimate and sole decision makers: Here, machines merely provide weights or probabilities for different decision alternatives. The actual decision making is left to the human who needs to construct the arguments supporting the decision for or against a certain alternative and needs to perform the full rationalization of the decision alone.

Of course, this is not a clear-cut trichotomy, but more of a continuum. Nevertheless, all scenarios are characterized by the fact that there is no joint decision making in the sense that both parties involved can challenge the arguments of the other party and propose alternative views, perspectives, assumptions, implications, or possibilities for resolving trade-offs. Machines that merely “decide” on the basis of patterns found in data, detached from domain knowledge and the decision-making context, cannot relate these patterns to causal relationships between variables or make explicit how a conclusion follows from assumptions; in fact, they will fail to empower humans in their decision making.

This special issue of the Datenbank-Spektrum features contributions from projects that are funded within the DFG priority program on Robust Argumentation Machines (RATIO). Started in 2017, the priority program seeks to foster a paradigm shift in which argumentative structures are treated as the core information units manipulated by machines. Hereby, RATIO aims at developing argumentative machines that can analyze, aggregate, and summarize large amounts of arguments exchanged by humans on the Web, but also at rational machines that can contribute new arguments relying on deep knowledge about a domain and a deep understanding of how facts can be used in premises to yield meaningful conclusions, and that can engage in joint decision making with humans at a rational level.

To induce this paradigm shift, the priority program brings together the following computer science sub-disciplines to jointly investigate new methods supporting the development of rational machines: Knowledge Representation and Reasoning, Semantic Web, Information Retrieval, Computational Linguistics, and Human-Computer Interaction (HCI).

The research program comprises the development of methods that can extract, compare, and summarize arguments from unstructured documents as well as the development of new semantic models, formal representation languages, reasoning systems, and ontologies for the representation of arguments in relation to domain knowledge. The program also supports the development of new search engines and information retrieval systems that index and retrieve arguments as the main unit of information and that can find all pro- and con-arguments for a given topic. In addition, the program aims at developing new methods that can enrich, extend, and complete arguments or even assess their plausibility using new inference and argumentation evaluation and validation methods.

Finally, the program also investigates new HCI paradigms by which users can explore and interact with arguments to support rational decision making as well as cooperation between humans and machines along the lines sketched above.

In the following, we give a brief summary of the papers in this issue.

Argument Mining

In their paper The ReCAP Project: Similarity Methods for Finding Arguments and Argument Graphs, R. Bergmann et al. present an approach to index arguments via a graph in order to support the retrieval of relevant premises given a certain query topic (conclusion). In addition, they present an approach that uses Case-Based Reasoning methods to retrieve similar arguments from an argument graph, relying on similarities between nodes computed over embeddings.
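As a rough illustration of such embedding-based retrieval (a minimal sketch over toy vectors, not the ReCAP implementation; the embeddings and argument IDs below are invented), arguments can be ranked by the cosine similarity of their node embeddings to a query embedding:

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def rank_arguments(query_vec, case_base):
    """Rank stored arguments (id -> embedding) by similarity to the query."""
    return sorted(case_base,
                  key=lambda arg_id: cosine(query_vec, case_base[arg_id]),
                  reverse=True)

# Toy 3-dimensional embeddings; a real system would use learned ones.
case_base = {
    "a1": [0.9, 0.1, 0.0],
    "a2": [0.1, 0.8, 0.3],
    "a3": [0.85, 0.2, 0.1],
}
print(rank_arguments([1.0, 0.1, 0.0], case_base))  # -> ['a1', 'a3', 'a2']
```

The ranking returns the most similar stored arguments first; graph-based approaches like ReCAP additionally exploit the structure around each node rather than a single vector per argument.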

The paper Relational and Fine-Grained Argument Mining by R. Trautmann et al. presents an NLP approach to identify argumentative units in textual discourse. The authors provide an overview of different argument mining tasks and present their results on sentence-level and token-level argument identification.

The paper The Road Map to FAME: A Framework for Mining and Formal Evaluation of Arguments by R. Baumann et al. attempts to bridge between (a) NLP approaches to argument mining, which typically do not employ formal approaches to reasoning with arguments, and (b) approaches in the tradition of abstract argumentation frameworks, which do not represent the content or structure of arguments. The authors propose to use a controlled language as a way to represent natural language arguments while being translatable into first-order logic, thus supporting formal reasoning.

The paper ArgumenText: Argument Classification and Clustering in a Generalized Search Scenario by J. Daxenberger et al. presents an approach to extract arguments from heterogeneous textual sources, including web crawls of news data and customer reviews. They also present an approach to clustering the extracted arguments. The main applications proposed are supporting decision making in innovation management and the analysis of customer feedback.
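The clustering step can be conveyed with a deliberately simple sketch (a greedy threshold clustering over toy vectors; the actual system works on learned sentence embeddings with more sophisticated clustering, so everything below is an illustrative assumption):

```python
import math

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    dot = sum(a * b for a, b in zip(u, v))
    norm = math.sqrt(sum(a * a for a in u)) * math.sqrt(sum(b * b for b in v))
    return dot / norm if norm else 0.0

def greedy_cluster(vectors, threshold=0.8):
    """Greedily assign each argument to the first sufficiently similar cluster.

    Each cluster is represented by its first member's vector -- a
    simplification; agglomerative clustering is the more common choice.
    """
    clusters = []  # list of (representative vector, member indices)
    for i, vec in enumerate(vectors):
        for rep, members in clusters:
            if cosine(vec, rep) >= threshold:
                members.append(i)
                break
        else:
            clusters.append((vec, [i]))
    return [members for _, members in clusters]

args_embedded = [[1.0, 0.0], [0.95, 0.2], [0.0, 1.0]]  # toy vectors
print(greedy_cluster(args_embedded))  # -> [[0, 1], [2]]
```

Grouping near-duplicate arguments this way is what makes large crawled argument collections digestible for a user.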

The paper Reconstructing Arguments from Noisy Text: The Brexit Referendum on Twitter by N. Dykes et al. proposes an approach to extract arguments from text and formalize them in a co-algebraic logical framework. The identification of arguments relies on the identification of recurring linguistic argumentation patterns and represents a high-precision approach to identifying arguments in a text corpus.

The paper Explaining Arguments with Background Knowledge—Towards Knowledge-based Argumentation Analysis by M. Becker et al. discusses the problem that many arguments appearing in textual sources are incomplete in the sense that premises may be omitted. The paper discusses how to reconstruct such enthymemes by leveraging external knowledge resources such as ConceptNet, WordNet, or DBpedia. Further, it discusses how state-of-the-art, transformer-based language models can be used to infer relations between arguments. The main task considered is inferring and classifying argumentative relations such as attack and support. The paper shows that the performance on the task is positively affected by the inclusion of common-sense or background knowledge.
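The basic idea of enthymeme reconstruction can be sketched as finding a knowledge-base triple that bridges a stated premise and the conclusion (the hand-written triples and the helper below are hypothetical toys; real systems query resources like ConceptNet instead):

```python
# Hypothetical ConceptNet-style triples; a real pipeline would query
# ConceptNet, WordNet, or DBpedia rather than a hand-written set.
KB = {
    ("smoking", "causes", "cancer"),
    ("cancer", "is_a", "disease"),
    ("exercise", "prevents", "obesity"),
}

def find_missing_premise(premise_concept, conclusion_concept, kb=KB):
    """Return a triple that bridges premise and conclusion, if one exists.

    E.g. for the enthymeme "He smokes, so he risks cancer", the implicit
    premise corresponds to the triple (smoking, causes, cancer).
    """
    for subj, rel, obj in kb:
        if subj == premise_concept and obj == conclusion_concept:
            return (subj, rel, obj)
    return None

print(find_missing_premise("smoking", "cancer"))
```

A recovered bridge of this kind can then serve as an explicit premise when classifying whether one argument supports or attacks another.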

The paper Analysis of Political Debates through Newspaper Reports: Methods and Outcomes by G. Lapesa et al. proposes a hybrid approach to analyze political debates carried out in the news. The methods are applied to the analysis of the debates around immigration in Germany in the year 2015. The hybrid methodology consists of a combination of discourse network analysis and NLP methods, which partially automatize some processes of this methodology. The authors present and discuss their first results on automatic claim detection.

Interacting with Argumentation Machines

The paper Answering Comparative Questions with Arguments by A. Bondarenko et al. discusses an approach that allows users to submit comparative queries to search engines and to obtain results in which the entities in question are compared along key aspects. The authors describe their work on a prototype that—given two entities—can extract and rank sentences in which the entities are compared. They further discuss work on identifying comparative questions using a machine learning approach as a first step towards allowing users to directly pose comparative questions to a search engine.
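The two subtasks can be conveyed by a naive cue-word baseline (the authors train machine-learning models instead; the cue list and sample inputs below are illustrative assumptions):

```python
COMPARATIVE_CUES = {"better", "worse", "faster", "slower", "vs", "versus"}

def is_comparative_question(question):
    """Cue-word heuristic for spotting comparative questions."""
    tokens = question.lower().rstrip("?").split()
    return any(tok in COMPARATIVE_CUES for tok in tokens)

def comparative_sentences(entity_a, entity_b, sentences):
    """Keep sentences that mention both entities together with a cue word."""
    hits = []
    for sent in sentences:
        low = sent.lower()
        if (entity_a.lower() in low and entity_b.lower() in low
                and any(tok in COMPARATIVE_CUES for tok in low.split())):
            hits.append(sent)
    return hits

print(is_comparative_question("Is Python faster than Java?"))  # -> True
```

The retained sentences would then be ranked, e.g. by the similarity methods discussed above, to surface the most useful comparisons first.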

The paper How to Win Arguments—Empowering Virtual Agents to Improve their Persuasiveness by K. Weber et al. argues that the way arguments are framed using non-verbal elements such as body language, gaze behavior, and emotions can have a significant effect on the level of persuasiveness and thus on the audience’s stance on the topic. The paper presents a reinforcement learning approach by which two policies can be learned, one that optimizes the strategic aspects of an argument and a second that optimizes the emotional flavor of an argument.

Opening the ML Black Box

The paper Towards Understanding and Arguing with Machine Learning: Recent Progress by X. Shao et al. proposes new machine learning approaches that support users in understanding and arguing with classifiers, thus helping to open the machine-learning black box. The authors develop a novel tractable deep probabilistic classifier, conditional sum-product networks (CSPNs), a conditional variant of sum-product networks (SPNs). These CSPNs combine simple models in a hierarchical fashion in order to create a deep representation that can model multivariate and mixed conditional distributions while maintaining tractability. An approach to interactively arguing with classifiers is also presented.

In their paper Leveraging Arguments in User Reviews for Generating and Explaining Recommendations, T. Donkers and J. Ziegler aim at opening up black-box recommendation models by including explanations in the form of arguments, highlighting why a certain item is recommended to a user. The authors propose a novel architecture based on Aspect-based Transparent Memories (ATMs). The architecture can memorize user opinions on relevant items as mentioned in raw texts to derive multi-faceted user and item representations. Experiments on three datasets show that the proposed approach outperforms existing methods such as NARRE.


The “Community” section reports, under News, on current items of interest to the DBIS community.

Upcoming Special Topics

Data Management for Future Hardware

This special issue of the “Datenbank-Spektrum” is dedicated to the research carried out within the DFG Priority Programme “Scalable Data Management on Future Hardware”. We invite submissions on original research as well as overview articles addressing the challenges and opportunities of modern and future hardware for data management, such as many-core processors, co-processing units, and new memory and network technologies.

Paper format: 8–10 pages, double-column (cf. the author guidelines)

Deadline for submissions: June 1st, 2020

Issue delivery: DASP-3-2020 (November 2020)

Guest editors:

Kai-Uwe Sattler, TU Ilmenau

Alfons Kemper, TU München

Digital Teaching in the Field of Database Systems

Teaching in the area of databases and information systems has a firm place in the computer science curricula of universities and universities of applied sciences. Alongside classical content such as the relational model or SQL, courses continually take up new topics, including NoSQL and NewSQL. The growing importance of Big Data and Data Analytics is also reflected in dedicated specializations and degree programs in the area of Data Science.

Besides these changes in content, digitalization naturally does not stop at the delivery of teaching itself. New teaching formats such as the flipped-classroom model and digital offerings such as Massive Open Online Courses (MOOCs) increasingly rely on digital learning materials such as videos and quizzes. Technical innovations, e.g. virtualization with Docker or the availability of large datasets, give learners access to complex learning environments for hands-on exercises.

This special issue of the Datenbank-Spektrum aims to give an overview of developments in digital teaching in the database field, both in the context of universities and universities of applied sciences and in professional training. Relevant topics include, among others:

  • Architectures and tools for conducting practical exercises, e.g. with relational database systems or Big Data systems

  • Systems for the (semi-)automatic grading of typical exercise formats in the database field

  • Design of, and experience reports on, novel curricula or teaching-learning scenarios (e.g. Flipped Classroom, Blended Learning)

  • Evaluations of the effectiveness of digital teaching.

We invite submissions in German or English of 8 to 10 pages (double-column) according to the layout guidelines (cf. the author guidelines).

Deadline for submissions: October 1, 2020

Issue delivery: DASP-1-2021 (March 2021)


Guest editors:

Stefanie Scherzinger, OTH Regensburg

Andreas Thor, HTWK Leipzig

Berlin Institute for the Foundations of Learning and Data (BIFOLD)

The Berlin Institute for the Foundations of Learning and Data (BIFOLD) is a competence center funded by the BMBF and the State of Berlin that emerged from the merger of the Berlin Big Data Center (BBDC) and the Berlin Center for Machine Learning (BZML). BIFOLD aims to develop highly innovative technologies that organize vast amounts of data and support well-founded decisions based on them, in order to create economic and societal value. To this end, the hitherto largely separate fields of data management and machine learning are being merged. The center’s technologies are intended to advance the state of the art in research on data management methods, machine learning, and their interface, and to strengthen Germany’s leading position in science and industry in the field of AI. Several application areas of economic, scientific, and societal relevance serve as technology drivers: remote sensing, digital humanities, medicine, and information marketplaces.

Building on internationally recognized research results, the goal is to enable automatic optimization, parallelization, and scalable, adaptive processing of algorithms in heterogeneous, distributed environments on modern computer architectures. In addition, explainability, responsible data management, and innovative data-analysis applications are in focus. The areas covered include data management, machine learning, linear algebra, statistics, probability theory, computational linguistics, and signal processing. By developing and providing open-source systems as well as algorithms and methods for data analysis, the center will promote education, research, development, innovation, and the commercial use of Big Data analytics and AI applications in Germany, thus securing a competitive advantage for German companies.

We invite submissions in German or English of 8 to 10 pages (double-column) according to the layout guidelines (cf. the author guidelines).

Deadline for submissions: February 1, 2021

Issue delivery: DASP-2-2021 (July 2021)


Guest editor:

Dr. Alexander Borusan, TU Berlin