Recent progress in Artificial Intelligence (AI) and Machine Learning (ML) has been tremendous: almost every month brings reports of breakthroughs in different areas of AI.

As an organization focusing on research and development, we can look back on a growing number of publications and awards.

Publications

We aim to push the state of the art for problems such as automatic text recognition (ATR), language modeling (LM), named entity recognition (NER), visual question answering (VQA), and image segmentation (IS) even beyond human performance.

Our team of experienced AI researchers is working with and improving techniques such as:

  • fully convolutional neural networks
  • attention-based recurrence-free models, both standalone and in combination with recurrent models
  • graph neural networks
  • neural memory techniques
  • unsupervised and self-supervised pre-training strategies
  • improved learning strategies

In contrast to Connectionist Temporal Classification (CTC) approaches, Sequence-to-Sequence (S2S) models for Handwritten Text Recognition (HTR) suffer from errors such as skipped or repeated words, which often occur at the end of a sequence. In this paper, to combine the best of both approaches, we propose to use the CTC-Prefix-Score during S2S decoding: during beam search, paths that are invalid according to the CTC confidence matrix are penalised. Our network architecture is composed of a Convolutional Neural Network (CNN) as visual backbone, bidirectional Long Short-Term Memory (LSTM) cells as encoder, and a Transformer decoder with inserted mutual attention layers. The CTC confidences are computed on the encoder output, while the Transformer is only used for character-wise S2S decoding. We evaluate this setup on three HTR data sets: IAM, Rimes, and StAZH. On IAM, we achieve a competitive Character Error Rate (CER) of 2.95% when pretraining our model on synthetic data and including a character-based language model for contemporary English. Compared to other state-of-the-art approaches, our model requires about 10–20 times fewer parameters. Our implementation is shared on GitHub.
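To make the decoding scheme concrete, here is a minimal Python sketch of a CTC prefix score combined with S2S beam search. It works in probability space for readability (a real implementation would use log-space arithmetic), and the interface is assumed for illustration: a shared vocabulary with the blank at index 0, an `s2s_step` callable standing in for the Transformer decoder step, and a fixed blending weight `ctc_weight`. This is an illustrative sketch of the general technique, not the paper's implementation.

```python
import numpy as np

BLANK = 0  # assumed index of the CTC blank label in a shared vocabulary

def ctc_init(T, probs):
    """DP state for the empty prefix: the mass of emitting only blanks."""
    pb = np.zeros(T + 1)   # paths collapsing to the prefix, ending in blank
    pnb = np.zeros(T + 1)  # ... ending in the prefix's last character
    pb[0] = 1.0
    for t in range(1, T + 1):
        pb[t] = pb[t - 1] * probs[t - 1, BLANK]
    return pb, pnb

def ctc_extend(probs, last, c, pb_g, pnb_g):
    """Extend prefix g by character c.

    Returns log Psi(g+c) = log P(CTC output starts with g+c), plus the
    DP vectors needed to extend g+c further."""
    T = probs.shape[0]
    pb_h, pnb_h = np.zeros(T + 1), np.zeros(T + 1)
    psi = 0.0
    for t in range(1, T + 1):
        # Paths that finish g by frame t-1 and may start c at frame t;
        # repeating the previous character requires an intervening blank.
        phi = pb_g[t - 1] + (pnb_g[t - 1] if c != last else 0.0)
        pnb_h[t] = probs[t - 1, c] * (phi + pnb_h[t - 1])
        pb_h[t] = probs[t - 1, BLANK] * (pb_h[t - 1] + pnb_h[t - 1])
        psi += probs[t - 1, c] * phi  # c first emitted at frame t
    return (np.log(psi) if psi > 0 else -np.inf), (pb_h, pnb_h)

def hybrid_beam_search(ctc_probs, s2s_step, vocab_size, eos,
                       beam_width=8, ctc_weight=0.5, max_len=128):
    """Beam search scored by (1-w)*S2S log-prob + w*CTC prefix score.

    ctc_probs: (T, V) per-frame posteriors from the encoder's CTC head.
    s2s_step:  assumed callable mapping a prefix to log p(c | prefix, image)
               for all characters c (one Transformer decoder step)."""
    T = ctc_probs.shape[0]
    beams = [((), 0.0, 0.0, ctc_init(T, ctc_probs))]
    finished = []
    for _ in range(max_len):
        cands = []
        for prefix, s2s_lp, _, state in beams:
            step_lps = s2s_step(prefix)
            last = prefix[-1] if prefix else None
            for c in range(vocab_size):
                if c == BLANK:
                    continue
                if c == eos:
                    # Finished hypothesis: score the *complete* CTC
                    # probability of the prefix, not the prefix score.
                    full = state[0][T] + state[1][T]
                    lp_ctc = np.log(full) if full > 0 else -np.inf
                    score = ((1 - ctc_weight) * (s2s_lp + step_lps[c])
                             + ctc_weight * lp_ctc)
                    finished.append((score, prefix))
                    continue
                lp_ctc, new_state = ctc_extend(ctc_probs, last, c, *state)
                cands.append((prefix + (c,), s2s_lp + step_lps[c],
                              lp_ctc, new_state))
        if not cands:
            break
        rank = lambda h: (1 - ctc_weight) * h[1] + ctc_weight * h[2]
        beams = sorted(cands, key=rank, reverse=True)[:beam_width]
    return max(finished, key=lambda f: f[0])[1] if finished else beams[0][0]
```

Hypotheses to which the CTC confidence matrix assigns (near-)zero prefix probability receive a score of minus infinity and drop out of the beam, which is exactly how decoding paths with skipped or repeated words at the end of a sequence are suppressed.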

Authors: Christoph Wick (PLANET AI GmbH), Jochen Zöllner (PLANET AI GmbH, University of Rostock), Tobias Grüning (PLANET AI GmbH)

Series: DAS 2022 – 15th IAPR International Workshop on Document Analysis Systems

DOI: 10.1007/978-3-031-06555-2_18
