1. DEAL with Springer Nature signed

https://www.projekt-deal.de/about-deal/

The DEAL project has announced the signing of an Open Access agreement with Springer Nature in January 2020. This 3-year agreement grants participating institutions (mostly German research institutions and university libraries) permanent access to all issues and volumes of approximately 1,900 Springer journals (including Springer Medical, Palgrave, Adis and Macmillan Academic), excluding Nature-branded journals and magazines, published during the contract period. The agreement foresees an optional one-year extension.

2. European Commission releases White Paper on Artificial Intelligence

https://ec.europa.eu/info/sites/info/files/commission-white-paper-artificial-intelligence-feb2020_en.pdf

In February, the European Commission released a White Paper on Artificial Intelligence, setting out policy options on how to promote the uptake of AI while addressing the risks associated with certain uses of it. The paper emphasizes AI as an important economic driving factor and posits a positive development of AI deployment, based on the observation that, with over half of the top manufacturers implementing at least one instance of AI in their manufacturing operations, Europe is a world leader in the deployment of AI in manufacturing. This is attributed in part to the EU funding programme for research and innovation in AI, which has risen over the past three years to €1.5 billion, a 70% increase compared to the previous period. Yet the paper acknowledges that, with some €3.2 billion invested in AI in Europe in 2016, compared to around €12.1 billion in North America and €6.5 billion in Asia, investment in research and innovation in Europe is still a fraction of the public and private investment in other regions of the world.

Based on the European Commission's Coordinated Plan from December 2018 for the development and use of AI in Europe, with 70 joint actions in key areas such as research, investment, market uptake, skills and talent, data, and international cooperation, the White Paper addresses 6 key actions to be taken over the next ten years:

A plan is laid out to revise the Coordinated Plan, to be adopted by the end of 2020, with the objective of attracting over €20 billion of total investment in AI in the EU per year over the next decade. The plan targets sectors where Europe has the potential to become a global champion, such as industry, health, transport, finance, agrifood value chains, energy/environment, forestry, earth observation and space.

A large share of this budget will be dedicated, under the Digital Europe Programme, to the creation of excellence and testing centres that can combine European, national and private investments. Further substantial support will go through the advanced skills pillar of the Digital Europe Programme to establish networks of leading universities and higher education institutes that offer world-leading masters programmes in AI. This includes a dedicated effort to support the participation of women in AI, the development of new concepts for teaching and AI curricula, as well as the upskilling of the workforce. Another goal is the establishment, through the Digital Europe Programme, of at least one digital innovation hub specialising in AI per Member State.

The Commission and the European Investment Fund will launch a pilot scheme of €100 million in Q1 2020 to provide equity financing for innovative developments in AI. Subject to final agreement on the MFF, the Commission intends to scale it up significantly from 2021 through InvestEU. In the context of Horizon Europe, the Commission will set up a new public-private partnership in AI, data and robotics to combine efforts. The Commission will also initiate open sector dialogues, giving priority to healthcare, rural administrations and public service operators, which will be used to prepare a specific ‘Adopt AI programme’ that will support public procurement of AI systems. Furthermore, the Commission has proposed more than €4 billion under the Digital Europe Programme to support high-performance and quantum computing, including edge computing and AI, data and cloud infrastructure.

The Commission invites comments on the proposals set out in the White Paper through an open public consultation, available at https://ec.europa.eu/info/law/better-regulation/have-your-say/initiatives/12270-White-Paper-on-Artificial-Intelligence-a-European-Approach/public-consultation. The consultation is open until 14 June 2020.

3. Broad EU mandate and funding for shaping “AI made in Europe”

The European Commission has taken a major step towards strengthening AI research in Europe by allocating €50 million of seed funding, intended to prepare the ground for much larger investments in the near future. This is critically necessary to keep Europe competitive with countries such as the USA, China and Canada, which are investing substantially higher amounts in AI research and innovation.

Five proposals have been selected for funding under ICT-48-2020, coordinated by members of ELLIS and CLAIRE. These proposals are ELISE, TAILOR, Humane-AI-Net and AI4Media, which will form large and diverse networks of centres of excellence in AI research, as well as VISION, which will coordinate between these and one additional network selected for funding, in order to position Europe for leadership in human-centred, trustworthy AI. This will help shape the European AI ecosystem.

4. German Ministry of Food and Agriculture released call for AI in agriculture

https://www.bmel.de/SharedDocs/Pressemitteilungen/2020/041-kuenstliche-intelligenz-forschung-innovation-foerderung.html

The German Ministry for Food and Agriculture has announced an €18 million budget for AI in agriculture until 2023. The goals are to increase the safety, transparency and sustainability of food production while securing harvests and outputs for farmers. The budget targets research projects that support the deployment of AI in agriculture, the food chain, healthy nutrition and rural areas, addressing projects in industrial research as well as experimental development. Important aspects are the transfer of research results as well as a dialogue on the use of AI and its regulation in different application fields.

5. AI endeavors in the Corona Crisis

With the Corona crisis taking over our everyday lives, several overviews of how AI can support dealing with different aspects of the current situation are emerging. To provide politicians and decision makers with background knowledge, the European Parliament's think tank has released a short report on how AI is currently being used to fight Corona [1]. How robots are being used in different areas during the crisis has been summarized by Robin Murphy and colleagues [2]. As a joint endeavor by citizens, hackathons for fighting Corona have been set up at the national and European levels [3, 4].

[1] https://www.europarl.europa.eu/thinktank/de/document.html?reference=EPRS_ATA%282020%29641538

[2] https://spectrum.ieee.org/automaton/robotics/medical-robots/robotics-for-infectious-diseases-consortium

[3] https://wirvsvirushackathon.org/

[4] https://ec.europa.eu/info/news/euvsvirus-hackathon-develop-innovative-solutions-and-overcome-coronavirus-related-challenges-2020-apr-03_en

Calls

1. KI 2020: The 43rd German Conference on Artificial Intelligence, Bamberg, September 21–25, 2020

https://ki2020.uni-bamberg.de/

KI2020 is the 43rd edition of the German conference on Artificial Intelligence organized in cooperation with the Fachbereich Künstliche Intelligenz der Gesellschaft für Informatik (GI-SIG AI).

Due to the current situation caused by the COVID-19 pandemic and the associated travel restrictions, it is not certain that the conference can be organized in the usual form. However, the conference will definitely take place, possibly fully or partially in a digital format, including the possibility of remote participation. So if you planned to participate in KI 2020—the main conference, a workshop, or the doctoral consortium—please stay with this plan. Even if presentations can only be given virtually, the accepted papers of the main conference will be published in a Springer proceedings volume as usual.

We are happy to announce the following prominent keynote speakers:

  • Anthony G. Cohn (University of Leeds, UK)

  • Hector Geffner (Institució Catalana de Recerca i Estudis Avançats and Universitat Pompeu Fabra, Spain)

  • Jana Koehler (Algorithmic Business and Production, DFKI Saarbrücken, Germany)

  • Nada Lavrač (Jožef Stefan Institute, Slovenia)

  • Sebastian Riedel (Facebook AI Research, University College London, UK)

  • Ulli Waltinger (Siemens Corporate Technology, Germany)

The early-bird registration deadline will be July 31. The registration fees will be adapted to the form in which KI 2020 is eventually organized. Up-to-date information about the program and all elements of the conference can be found on the conference webpage. In the following, we spotlight the highly interesting workshop and tutorial program.

Workshop Program

W1: AI methods for digital heritage

https://aidh.kinf.wiai.uni-bamberg.de/

The digital transition opens new perspectives for researchers interested in cultural processes. An increasing part of the material and immaterial heritage of Western culture is accessible via digital representations, such as digital editions of manuscripts, multispectral images of paintings, or 3D models of archeological findings. The task of analyzing and linking the many pieces of information becomes more important, and more difficult, than ever. The Semantic Web technology stack, for instance, permits knowledge-based algorithms to assist scholars in the task of linking large cultural data sets. Another issue is the vagueness and uncertainty omnipresent in the historical study of cultural processes. AI research has devised a number of methods able to deal with these phenomena. It is important, however, to realize that humanities scholars have specific requirements: unlike in many AI applications in engineering, the goal is generally not to resolve all ambiguities. The workshop gathers AI researchers and interested humanities scholars. We encourage submissions that report on work in progress or present a synthesis of emerging research trends.

  • Christoph Schlieder (University of Bamberg, Germany)

  • Günther Görz (University of Erlangen, Germany)

Deadline for Submission: June 28, 2020

W2: Explainable and interpretable machine learning (XI-ML)

https://www.cslab.cc/xi-ml-2020/

With the current scientific discourse on explainable AI (XAI), algorithmic transparency, interpretability, accountability and, finally, explainability of algorithmic models and decisions, the workshop targets a prominent and timely topic. Explainable and interpretable machine learning tackles this theme from the modeling and learning perspective, i.e. it targets interpretable methods and models that are able to explain themselves and their output, respectively. The workshop aims to provide an interdisciplinary forum to investigate fundamental issues in explainable and interpretable machine learning as well as to discuss recent advances, trends, and challenges in this area.

  • Martin Atzmueller (Tilburg University, The Netherlands)

  • Tomáš Kliegr (University of Economics, Prague, Czech Republic)

  • Ute Schmid (University of Bamberg, Germany)

Deadline for Submission: June 23, 2020

W3: 6th workshop on formal and cognitive reasoning (FCR-2020)

https://www.fernuni-hagen.de/wbs/fcr2020

Information for real-life AI applications is usually pervaded by uncertainty and subject to change, and thus demands non-classical reasoning approaches. At the same time, psychological findings indicate that human reasoning cannot be completely described by classical logical systems. Sources of explanations are incomplete knowledge, incorrect beliefs, or inconsistencies. A wide range of reasoning mechanisms has to be considered, such as analogical or defeasible reasoning, possibly in combination with machine learning methods. The field of knowledge representation and reasoning offers a rich palette of methods for uncertain reasoning, both to describe human reasoning and to model AI approaches. The aim of this workshop series is to address recent challenges and to present novel approaches to uncertain reasoning and belief change in their broad senses, and in particular to provide a forum for research work linking different paradigms of reasoning. We put a special focus on papers from both fields that provide a base for connecting formal-logical models of knowledge representation and cognitive models of reasoning and learning, addressing formal as well as experimental or heuristic issues.

  • Christoph Beierle (FernUniversität in Hagen, Germany)

  • Marco Ragni (Universität Freiburg, Germany)

  • Frieder Stolzenburg (Hochschule Harz, Germany)

  • Matthias Thimm (Universität Koblenz-Landau, Germany)

Deadline for Submission: June 30, 2020

W4: WLP 2020: workshop on (constraint) logic programming

https://www.is.informatik.uni-wuerzburg.de/aktuelles/meldungen/single/news/workshop-on-logic-programming-wlp-2020/

The WLP workshop provides a forum for exchanging ideas on declarative logic programming, non-monotonic reasoning, and knowledge representation, and facilitates interactions between research on theoretical foundations and research on the design and implementation of logic-based programming systems. Contributions are welcome on all theoretical, experimental, and application aspects of logic and constraint logic programming.

  • Michael Hanus (University of Kiel, Germany)

  • Sibylle Schwarz (HTWK Leipzig, Germany)

  • Dietmar Seipel (University of Würzburg, Germany)

Deadline for Submission: June 1, 2020

W5: dependable artificial intelligence

https://sme.uni-bamberg.de/ki-labor-ws20/

Advances in AI and the increasing application-relevance of AI techniques create the need to establish means for designing and developing dependable AI software. We call a system dependable if it meets functional safety requirements and if its performance is intuitive throughout all situations. We expect that the dependability of AI systems will be crucial to the development of trust in AI and to the acceptability of its use in important socio-technical applications.

In particular, methods based on deep neural networks often lack dependability, possibly caused by unintended biases in the training data or by counter-intuitive generalisation leading to sudden failures. While these shortcomings are currently addressed in fields such as explainable machine learning and explainable AI, the problem of achieving dependability goes beyond the advancement of individual methods: we require techniques to compose complex systems out of interdependent components, which may act asynchronously and adapt their behaviour over time. With this workshop we wish to create a forum for researchers from AI and software engineering to discuss methods for specifying and achieving dependability.

  • Diedrich Wolter (University of Bamberg, Germany)

  • Martin Leucker (University of Lübeck, Germany)

Tutorials

T1: Nico Potyka: explainable and computationally efficient decision making with quantitative abstract argumentation frameworks

Abstract argumentation graphs model arguments and the relationships between them. Quantitative bipolar graphs are one popular instance with many recent applications. In this setting, the relationships are usually attacks and supports, and the credibility of arguments is evaluated by numerical values like probabilities or more general strength values. This structure allows modeling decision problems very naturally: a decision can be based on pro and contra arguments, which, in turn, may attack or support each other. In many cases, the final decision can be explained very intuitively from the argumentation graph by going backwards through attackers and supporters and their final strength values. In this tutorial, we will focus on two approaches to quantitative abstract argumentation: epistemic probabilistic argumentation, as proposed by the workgroups of Hunter and Thimm, and gradual argumentation, as proposed by the workgroups of Baroni and Toni. In both frameworks, many interesting reasoning problems can be solved in polynomial time and the results are easily interpretable and explainable from the graph structure.
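
To give a flavour of gradual argumentation, the following minimal Python sketch iteratively evaluates a small quantitative bipolar argumentation graph. The example graph, the base scores, and the particular saturating influence function are illustrative assumptions, not necessarily the exact semantics covered in the tutorial.

```python
# Minimal sketch of gradual argumentation on a quantitative bipolar graph.
# Graph, base scores, and influence function are illustrative assumptions.

def influence(base, energy):
    """Move the base score up (net support) or down (net attack) by a bounded amount."""
    h = energy**2 / (1 + energy**2)  # saturating impact in [0, 1)
    return base + (1 - base) * h if energy >= 0 else base - base * h

def evaluate(base, attackers, supporters, iterations=100):
    """Iteratively update strengths until (approximate) convergence."""
    strength = dict(base)
    for _ in range(iterations):
        new = {}
        for a in base:
            energy = (sum(strength[s] for s in supporters.get(a, []))
                      - sum(strength[x] for x in attackers.get(a, [])))
            new[a] = influence(base[a], energy)
        strength = new
    return strength

# Decision "d" has one pro argument "p" and one contra argument "c";
# "c" is itself attacked by "u" (undercutting the contra argument).
base = {"d": 0.5, "p": 0.8, "c": 0.7, "u": 0.9}
attackers = {"d": ["c"], "c": ["u"]}
supporters = {"d": ["p"]}
print(evaluate(base, attackers, supporters))
```

Running the sketch shows how the attack by u weakens c, which in turn lets the supported decision d end up above its base score; tracing these strengths backwards through the graph is exactly the kind of explanation described above.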

T2: Julien Siebert and Christof Schroth: detecting changes in time series data: an introduction to changepoint analysis

Changepoints are abrupt changes in the statistical properties of a signal. Changepoint detection algorithms can be used to automatically segment a signal (or several signals, in the multidimensional case). These segments can later be used as input for other analysis methods, such as anomaly detection, clustering, or pattern recognition. Changepoint detection is performed in many domains, for example medical diagnosis, engineering systems, speech recognition or sensor data monitoring, to mention just a few. The starting point of changepoint detection techniques is usually placed with the work of Page in the 1950s. Over time, changepoint techniques have adapted methods not only from statistics and signal processing but also from artificial intelligence. Understanding the mathematical principles of changepoint detection algorithms provides useful tools for data analysis and artificial intelligence. This tutorial is divided into two main parts. In the first part, we present the principles of changepoint detection. In the second part, we focus on current implementations of changepoint detection available in R and Python.
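
As a taste of such implementations, the following minimal sketch uses the Python package ruptures; the package choice, the PELT algorithm and the penalty value are our illustrative assumptions, not necessarily what the tutorial will cover.

```python
# Minimal changepoint-detection sketch with the "ruptures" Python package
# (illustrative assumption; the tutorial may cover other implementations).
import ruptures as rpt

# Simulate a 1-D piecewise-constant signal with 3 changepoints.
signal, true_bkps = rpt.pw_constant(n_samples=500, n_features=1,
                                    n_bkps=3, noise_std=1.0)

# PELT searches for an optimal segmentation under a per-changepoint penalty.
algo = rpt.Pelt(model="rbf").fit(signal)
detected = algo.predict(pen=10)

print("true:", true_bkps, "detected:", detected)
```

The returned indices mark the end of each detected segment (the last index is always the signal length), so they can be fed directly into downstream analyses such as anomaly detection or clustering.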

T3: Julien Siebert and Adam Trendowicz: hands on data preparation: missing values analysis and outliers detection

Within this hands-on tutorial, participants will learn, based on concrete examples, how to kick-start any data analysis project (i.e., data integration and data preparation). We will show, based on experience from many projects, where the pitfalls lie and how to safely navigate around them. The tutorial consists of a theoretical part giving an overview of relevant data preparation tasks, as well as a hands-on part, which focuses on the issues of missing values and outliers that are frequent in practice. Based on different examples implemented in Jupyter notebooks (both in Python and R), we will show how to detect and handle data quality deficits. The goal of the hands-on examples is to illustrate common data preparation methods presented in the theoretical part of the tutorial and to learn which software packages and libraries are currently used and helpful for data preparation.
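
As a minimal illustration of these two preparation steps, the following sketch uses pandas; the toy data, the median imputation and the IQR outlier rule are illustrative assumptions, not the tutorial's material.

```python
# Minimal sketch of missing-values analysis and outlier detection in pandas
# (toy data and method choices are illustrative assumptions).
import numpy as np
import pandas as pd

df = pd.DataFrame({"temp": [21.3, 22.1, np.nan, 21.8, 95.0, 22.4],
                   "humidity": [40, 42, 41, np.nan, 43, 39]})

# Missing-values analysis: how many values are absent per column?
print(df.isna().sum())

# Simple imputation: fill gaps with the column median.
df_filled = df.fillna(df.median(numeric_only=True))

# Outlier detection via the IQR rule: flag values outside 1.5*IQR of the quartiles.
q1, q3 = df_filled.quantile(0.25), df_filled.quantile(0.75)
iqr = q3 - q1
mask = (df_filled < q1 - 1.5 * iqr) | (df_filled > q3 + 1.5 * iqr)
print(df_filled[mask.any(axis=1)])  # the implausible 95.0 reading is flagged
```

The IQR rule is chosen here because, unlike a plain z-score, it stays robust when the outlier itself inflates the sample statistics.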

T4: Stefan Ellmauthaler and Konstantin Schekotihin: tutorial on multi-context stream reasoning

The fields of artificial intelligence and knowledge representation (KR) have produced a wide variety of formalisms, notions, languages, and formats over the past decades. Each approach has been motivated and designed with specific applications in mind. Nowadays, in the era of Industry 4.0, the Internet of Things, and smart devices, we are interested in ways to connect the various approaches and allow them to distribute and exchange their knowledge and beliefs in a uniform way. This leads to the problem that these sophisticated knowledge representation approaches cannot understand each other's points of view, and their positions on semantics are not necessarily compatible either. Multi-Context Systems provide methods to transfer information under a strong and generalised notion of semantics. Recent advances in the representation of streams have made it possible to utilise the ideas of Multi-Context Systems and expand them to provide reasoning based on streams. Modern languages, such as LARS, extend logic programming formalisms like ASP with sliding windows and temporal modalities that allow one to encode monitoring, configuration, control, and many other problems occurring in the domains listed above. The goal of this tutorial is to provide a sophisticated and formally sound overview of the last decade of advances in the field of multi-context stream reasoning.

T5: Simon Razniewski: extracting and consolidating commonsense knowledge

Machine-readable commonsense knowledge (CSK) is fundamental for automated reasoning about the general world, and relevant for downstream applications such as question answering and dialogue. In this tutorial, we focus on the construction and consolidation of large repositories of commonsense knowledge. After briefly surveying crowdsourcing approaches to commonsense knowledge compilation, in the main parts of this tutorial we investigate (i) automated text extraction of CSK and the relevant choices of extraction methodology and corpora, and (ii) knowledge consolidation techniques that aim to canonicalize, clean, or enrich initial extraction results. We end the tutorial with an outlook on application scenarios and the promises of deep pretrained language models.
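
As a minimal flavour of pattern-based text extraction of CSK, the following Python sketch applies simple lexico-syntactic patterns to toy sentences; the patterns, relation names and corpus are illustrative assumptions, not the extraction methodology discussed in the tutorial.

```python
# Toy pattern-based CSK extraction sketch (patterns and corpus are
# illustrative assumptions; real pipelines use far richer methods).
import re

corpus = [
    "Elephants are capable of swimming long distances.",
    "A violin is used for playing classical music.",
]

# Simple lexico-syntactic patterns, each mapped to a CSK relation name.
patterns = {
    "CapableOf": re.compile(r"(\w+) are capable of (\w+ing)"),
    "UsedFor": re.compile(r"[Aa] (\w+) is used for (\w+ing)"),
}

for sentence in corpus:
    for relation, pattern in patterns.items():
        for subj, obj in pattern.findall(sentence):
            print(f"({subj}, {relation}, {obj})")
```

Consolidation then takes over where such raw triples end: canonicalizing surface forms, cleaning noisy matches, and enriching the resulting repository.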

T6: Dominik Seuß, Andreas Foltyn and Ines Rieger: Bayesian deep learning tutorial focused on computer vision (BayDel)

Bayesian methods can estimate model uncertainty as well as uncertainty regarding the input of neural networks, and thereby make them more robust and precise. Deep neural networks produce state-of-the-art results in various fields like natural language and image processing, solving tasks such as speech recognition, object detection or object recognition. In contrast to classic neural networks, the model parameters of Bayesian neural networks (BNNs) are not defined by point estimates but by probability distributions. Therefore, BNNs are well suited to tackle the problem of outlier detection, with which classic neural networks struggle. Thus, they can detect misclassified out-of-distribution input examples and counteract adversarial attacks. This is especially important for safety-critical applications in fields like medicine or autonomous driving. This tutorial aims to give an introduction to and motivation for neural networks and uncertainty measurements, and then dives deeper into comparing Bayesian deep learning approaches.
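
To make the idea concrete, the following minimal sketch estimates predictive uncertainty with Monte Carlo dropout, one widely used approximation to Bayesian deep learning; the architecture, input and number of samples are illustrative assumptions, not necessarily among the approaches compared in the tutorial.

```python
# Minimal predictive-uncertainty sketch via Monte Carlo dropout
# (architecture, data and sample count are illustrative assumptions).
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(10, 64), nn.ReLU(),
                      nn.Dropout(p=0.5),
                      nn.Linear(64, 3))  # toy 3-class classifier

x = torch.randn(1, 10)  # a single input example

model.train()  # keep dropout active at prediction time
with torch.no_grad():
    # Each stochastic forward pass samples one dropout mask,
    # approximating a sample from the posterior over weights.
    probs = torch.stack([torch.softmax(model(x), dim=-1) for _ in range(100)])

mean = probs.mean(dim=0)  # predictive distribution
std = probs.std(dim=0)    # disagreement across samples ~ model uncertainty
print(mean, std)
```

A high standard deviation across the sampled predictions signals inputs the model is unsure about, which is precisely the information needed to flag out-of-distribution examples in safety-critical settings.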

T7: Philipp Cimiano, Henning Wachsmuth and Benno Stein: argumentation technology for artificial intelligence

We argue that artificial intelligence systems need to be endowed with a higher level of intelligence, which we call “argumentative intelligence”, that allows systems to reason beyond facts by taking into account more complex semantic relationships and to verbalize those relationships to a user in order to enhance the transparency, explainability and controllability of AI systems. Arguments play an important role in creating transparency and explaining suggestions, and thus support human decision making in interaction with “white-box” systems. In this tutorial we outline the importance of argumentation for artificial intelligence and consider three important fields within computational argumentation: argumentation mining, argumentation retrieval and argumentation synthesis. Argumentation mining is concerned with understanding arguments expressed in text. Argument retrieval is concerned with supporting humans in retrieving the most relevant arguments for a given topic. Argument synthesis is concerned with supporting human decision making through machine-generated arguments.