Research

Take a look at our latest publications and awards

Recent progress in the field of Artificial Intelligence (AI) is remarkable. Almost every month, new breakthroughs in different technological aspects of AI are reported.

As an organization focusing on research and development, we can look back on a growing number of publications and awards.

Publications

We aim to push the state of the art for problems such as automatic text recognition (ATR), language modeling (LM), named entity recognition (NER), visual question answering (VQA), and image segmentation (IS) even beyond human performance.

Our team of experienced AI researchers is working with and improving techniques such as:

  • fully convolutional neural networks
  • attention-based, recurrence-free models, both on their own and in combination with recurrent models
  • graph neural networks
  • neural memory techniques
  • unsupervised and self-supervised pre-training strategies
  • improved learning strategies

Most recently, Transformers, which are recurrence-free neural network architectures, have achieved tremendous performance on various Natural Language Processing (NLP) tasks. Since Transformers follow the traditional Sequence-To-Sequence (S2S) approach, they can be applied to several tasks such as Handwritten Text Recognition (HTR). In this paper, we propose a bidirectional Transformer architecture for line-based HTR that is composed of a Convolutional Neural Network (CNN) for feature extraction and a Transformer-based encoder/decoder, whereby the decoding is performed both in reading-order direction and reversed. A voter combines the two predicted sequences to obtain a single result. Our network performed worse than a traditional Connectionist Temporal Classification (CTC) approach on the IAM dataset, but improved on the state of the art of Transformer-based approaches by about 25% without using additional data. On a significantly larger dataset, the proposed Transformer significantly outperformed our reference model by about 26%. In an error analysis, we show that the Transformer is able to learn a strong language model, which explains why a larger training dataset is required to outperform traditional approaches. We also discuss why Transformers should be used with caution for HTR due to several shortcomings such as repetitions in the text.

Authors: Christoph Wick (Planet AI GmbH), Jochen Zöllner (Planet AI GmbH, University of Rostock), Tobias Grüning (Planet AI GmbH)

Series: ICDAR 2021

Pages: to appear
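
The core idea of the bidirectional decoding described above can be illustrated in a few lines of Python. This is a minimal sketch only: the greedy decoder, the toy charset, and the mean-confidence voting rule are simplifying assumptions for illustration and do not reproduce the voter used in the paper.

    import numpy as np

    CHARSET = list("abcdefghijklmnopqrstuvwxyz ")

    def greedy_decode(char_probs):
        """Pick the most probable character per time step; return text and mean confidence."""
        ids = char_probs.argmax(axis=-1)
        confidences = char_probs[np.arange(len(ids)), ids]
        return "".join(CHARSET[i] for i in ids), float(confidences.mean())

    def vote(forward_probs, reversed_probs):
        """Flip the reversed hypothesis back into reading order and keep the candidate
        with the higher mean character confidence (simplified voting rule)."""
        forward = greedy_decode(forward_probs)
        backward = greedy_decode(reversed_probs[::-1])  # restore reading order
        return max(forward, backward, key=lambda candidate: candidate[1])

    if __name__ == "__main__":
        rng = np.random.default_rng(0)
        steps, classes = 12, len(CHARSET)
        fwd = rng.dirichlet(np.ones(classes), size=steps)  # stand-ins for decoder outputs
        rev = rng.dirichlet(np.ones(classes), size=steps)
        text, confidence = vote(fwd, rev)
        print(text, round(confidence, 3))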

In this paper, we propose a novel method for Automatic Text Recognition (ATR) on early printed books. Our approach significantly reduces the Character Error Rates (CERs) for book-specific training when only a few lines of Ground Truth (GT) are available and considerably outperforms previous methods. An ensemble of models is trained simultaneously by optimising each one independently, but also with respect to a fused output obtained by averaging the individual confidence matrices. Various experiments on five early printed books show that this approach already outperforms the current state of the art by up to 20%, and by 10% on average. Replacing the averaging of the confidence matrices during prediction with a confidence-based voting boosts our results by an additional 8%, leading to a total average improvement of about 17%.

Authors: Christoph Wick (Planet AI GmbH), Christian Reul (University of Würzburg)

Series: ICDAR 2021

Pages: to appear
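
The two fusion steps mentioned above, averaging the individual confidence matrices and a confidence-based vote, can be sketched with numpy. Note that the per-time-step winner-takes-all rule shown here is a deliberate simplification and not necessarily the exact voting scheme of the paper.

    import numpy as np

    def fuse_by_averaging(confidence_matrices):
        """Average the per-model confidence matrices (each of shape T x C) into one fused matrix."""
        return np.mean(confidence_matrices, axis=0)

    def fuse_by_confidence_voting(confidence_matrices):
        """Simplified confidence-based voting: at each time step keep the distribution
        of the model that is most certain about its top character."""
        stacked = np.stack(confidence_matrices)   # (models, T, C)
        peak = stacked.max(axis=-1)               # (models, T) peak confidence per model and step
        winner = peak.argmax(axis=0)              # (T,) most confident model per step
        return stacked[winner, np.arange(stacked.shape[1])]

    if __name__ == "__main__":
        rng = np.random.default_rng(1)
        ensemble = [rng.dirichlet(np.ones(10), size=6) for _ in range(5)]  # 5 models, 6 steps, 10 classes
        print(fuse_by_averaging(ensemble).shape)          # (6, 10)
        print(fuse_by_confidence_voting(ensemble).shape)  # (6, 10)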

tfaip is a Python-based research framework for developing, structuring, and deploying Deep Learning projects powered by TensorFlow (Abadi et al., 2015). It is intended for scientists at universities or organizations who research, develop, and optionally deploy Deep Learning models. tfaip enables both simple and complex implementation scenarios, such as image classification, object detection, text recognition, natural language processing, or speech recognition. Each scenario is highly configurable via parameters that can be modified directly from the command line or through the API.

Authors: Christoph Wick, Benjamin Kühn, Gundram Leifert (all Planet AI GmbH), Konrad Sperfeld (CITlab, University of Rostock), Jochen Zöllner (Planet AI GmbH, University of Rostock), Tobias Grüning (Planet AI GmbH)

Journal: The Journal of Open Source Software (JOSS)

DOI: 10.21105/joss.03297

Read the article

Automated text recognition is a fundamental problem in Document Image Analysis. Optical models are used for modeling characters, while language models are used for composing sentences. Since scripts and linguistic contexts differ widely, it is mandatory to specialize the models by training on task-dependent ground truth. However, to create a sufficient amount of ground truth, at least for historical handwritten scripts, well-qualified persons have to mark and transcribe text lines, which is very time-consuming. On the other hand, in many cases unassigned transcripts are already available at page level from another process chain, or at least transcripts from a similar linguistic context are available. In this work we present two approaches that make use of such transcripts: the first creates training data by automatically assigning page-dependent transcripts to text lines, while the second uses a task-specific language model to generate highly confident training data. Both approaches are successfully applied to a very challenging …
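
A toy sketch of the first approach, assigning an unsegmented page-level transcript to recognised text lines, is shown below. The word-window heuristic and the acceptance threshold are illustrative assumptions, not the alignment method from the paper.

    from difflib import SequenceMatcher

    def assign_transcript_to_lines(page_transcript, line_hypotheses, min_ratio=0.6):
        """Greedily cut the page transcript into word windows, one per recognised line,
        and accept a window as training ground truth only if it matches the line
        hypothesis well enough (illustrative threshold)."""
        words = page_transcript.split()
        assignments, pos = [], 0
        for hypothesis in line_hypotheses:
            n = max(1, len(hypothesis.split()))
            candidate = " ".join(words[pos:pos + n])
            ratio = SequenceMatcher(None, hypothesis.lower(), candidate.lower()).ratio()
            assignments.append(candidate if ratio >= min_ratio else None)
            pos += n
        return assignments

    if __name__ == "__main__":
        page = "in the year of our lord one thousand seven hundred"
        lines = ["in the year of onr lord", "one thousand seven hundred"]
        print(assign_transcript_to_lines(page, lines))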