Recent advances in the field of Artificial Intelligence (AI) are enormous and astonishing. Almost every month, new reports announce breakthroughs in various areas of the technology.

As an organization focused on research and development, we can look back on a growing number of publications and awards.

Our goal is to push the state of the art for problems such as Automatic Text Recognition (ATR), language modeling, Named Entity Recognition (NER), Visual Question Answering (VQA), and image segmentation, even beyond human performance.

Our team of experienced AI researchers works with, and improves upon, techniques such as:

  • Fully Convolutional Neural Networks (FCNs)
  • Graph Neural Networks (GNNs)
  • both attention-based recurrence-free models and combinations with recurrent models
  • neural memory techniques
  • unsupervised and self-supervised pre-training strategies
  • improved learning strategies

In order to apply Optical Character Recognition (OCR) to historical printings of Latin script fully automatically, we report on our efforts to construct a widely applicable polyfont recognition model yielding text with a Character Error Rate (CER) around 2% when applied out-of-the-box. Moreover, we show how this model can be further fine-tuned to specific classes of printings with little manual and computational effort. The mixed or polyfont model is trained on a wide variety of materials, in terms of age (from the 15th to the 19th century), typography (various types of Fraktur and Antiqua), and languages (among others, German, Latin, and French). To optimize the results we combined established techniques of OCR training like pretraining, data augmentation, and voting. In addition, we used various preprocessing methods to enrich the training data and obtain more robust models. We also implemented a two-stage approach which first trains on all available, considerably unbalanced data and then refines the output by training on a selected more balanced subset. Evaluations on 29 previously unseen books resulted in a CER of 1.73%, outperforming a widely used standard model with a CER of 2.84% by almost 40%. Training a more specialized model for some unseen Early Modern Latin books starting from our mixed model led to a CER of 1.47%, an improvement of up to 50% compared to training from scratch and up to 30% compared to training from the aforementioned standard model. Our new mixed model is made openly available to the community.
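The two-stage idea — train on all (unbalanced) material first, then refine on a balanced selection — hinges on drawing a capped number of training lines per class. A minimal sketch of such a selection step (illustrative only, not the authors' implementation; the `key` function is a hypothetical mapping from a training sample to its class, e.g. its book or font group):

```python
import random
from collections import defaultdict

def balanced_subset(samples, key, per_class, seed=0):
    """Draw at most `per_class` samples per class for the refinement stage."""
    rng = random.Random(seed)
    by_class = defaultdict(list)
    for s in samples:
        by_class[key(s)].append(s)
    subset = []
    for items in by_class.values():
        rng.shuffle(items)                 # avoid always picking the same lines
        subset.extend(items[:per_class])   # cap each class at `per_class`
    return subset
```

The first stage would train on `samples` as a whole; the refinement stage continues training on the returned subset, so that over-represented classes no longer dominate the gradient updates.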

Authors: Christian Reul (Universität Würzburg), Christoph Wick (PLANET AI GmbH), Maximilian Nöth, Andreas Büttner, Maximilian Wehner (all Universität Würzburg), Uwe Springmann (LMU München)

Series: ICDAR 2021

Pages: in press

Read the article

Most recently, Transformers – which are recurrent-free neural network architectures – achieved tremendous performance on various Natural Language Processing (NLP) tasks. Since Transformers represent a traditional Sequence-to-Sequence (S2S) approach, they can be used for several different tasks such as Handwritten Text Recognition (HTR). In this paper, we propose a bidirectional Transformer architecture for line-based HTR that is composed of a Convolutional Neural Network (CNN) for feature extraction and a Transformer-based encoder/decoder, whereby the decoding is performed in reading-order direction and reversed. A voter combines the two predicted sequences to obtain a single result. Our network performed worse compared to a traditional Connectionist Temporal Classification (CTC) approach on the IAM-dataset but reduced the state-of-the-art of Transformer-based approaches by about 25% without using additional data. On a significantly larger dataset, the proposed Transformer significantly outperformed our reference model by about 26%. In an error analysis, we show that the Transformer is able to learn a strong language model which explains why a larger training dataset is required to outperform traditional approaches and discuss why Transformers should be used with caution for HTR due to several shortcomings such as repetitions in the text.
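The voting step — combining the reading-order and reversed decodings into one result — might look roughly like the following sketch (hypothetical names and a simplified selection rule; the paper's actual voter works on the Transformer's predicted sequences and confidences):

```python
def vote(forward, backward):
    """Choose between two candidate decodings of the same text line.

    Each candidate is a list of (character, confidence) pairs; the
    backward candidate is assumed to already be re-reversed into
    reading order. The candidate with the higher mean character
    confidence wins.
    """
    def mean_conf(seq):
        return sum(c for _, c in seq) / max(len(seq), 1)

    return forward if mean_conf(forward) >= mean_conf(backward) else backward
```

A whole-sequence confidence comparison is the simplest possible voter; finer-grained schemes could align the two sequences and vote per character instead.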

Authors: Christoph Wick (PLANET AI GmbH), Jochen Zöllner (PLANET AI GmbH, Universität Rostock), Tobias Grüning (PLANET AI GmbH)

Series: ICDAR 2021

Pages: 112–126

DOI: 10.1007/978-3-030-86334-0_8

Read the article

In this paper, we propose a novel method for Automatic Text Recognition (ATR) on early printed books. Our approach significantly reduces the Character Error Rates (CERs) for book-specific training when only a few lines of Ground Truth (GT) are available and considerably outperforms previous methods. An ensemble of models is trained simultaneously by optimising each one independently but also with respect to a fused output obtained by averaging the individual confidence matrices. Various experiments on five early printed books show that this approach already outperforms the current state-of-the-art by up to 20% and 10% on average. Replacing the averaging of the confidence matrices during prediction with a confidence-based voting boosts our results by an additional 8% leading to a total average improvement of about 17%.
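The fusion step described above — averaging the ensemble members' confidence matrices before decoding — can be illustrated with a minimal NumPy sketch (the function names and the greedy CTC decoder are illustrative assumptions, not the authors' implementation):

```python
import numpy as np

def greedy_ctc_decode(conf, blank=0):
    """Collapse repeats and drop blanks along the argmax path of a
    (time steps x character classes) confidence matrix."""
    path = conf.argmax(axis=1)
    out, prev = [], None
    for p in path:
        if p != prev and p != blank:
            out.append(int(p))
        prev = p
    return out

def fused_decode(conf_matrices, blank=0):
    """Average the per-model confidence matrices, then decode once."""
    fused = np.mean(np.stack(conf_matrices), axis=0)
    return greedy_ctc_decode(fused, blank=blank)
```

Averaging happens per time step and per character class, so a character that only one model is (wrongly) confident about gets diluted by the rest of the ensemble before the decoder ever sees it.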

Authors: Christoph Wick (PLANET AI GmbH), Christian Reul (Universität Würzburg)

Series: ICDAR 2021

Pages: 385–399

DOI: 10.1007/978-3-030-86549-8_25

Read the article

tfaip is a Python-based research framework for developing, structuring, and deploying Deep Learning projects powered by TensorFlow (Abadi et al., 2015) and is intended for scientists at universities or in organizations who research, develop, and optionally deploy Deep Learning models. tfaip enables both simple and complex implementation scenarios, such as image classification, object detection, text recognition, natural language processing, or speech recognition. Each scenario is highly configurable by parameters that can be modified directly via the command line or the API.

Authors: Christoph Wick, Benjamin Kühn, Gundram Leifert (all PLANET AI GmbH), Konrad Sperfeld (CITlab, Universität Rostock), Jochen Zöllner (PLANET AI GmbH, Universität Rostock), Tobias Grüning (PLANET AI GmbH)

Journal: The Journal of Open Source Software (JOSS)

DOI: 10.21105/joss.03297

Read the article