ICT with Industry 2020

Over the course of five days, a group of about 50 researchers in IT and Computer Science from a wide range of universities (within the Netherlands and across Europe) work together intensively on challenging problems proposed by companies.

The ICT with Industry workshop brings together scientists, in particular (junior) research staff and PhD students, and professionals from industry and government. The workshop revolves around a number of case studies, which are subject to an intense week of analysing, discussing, and modelling solutions.

ICT with Industry 2020 

ICT with Industry 2020 was held from 20 to 24 January at the Lorentz Center in Leiden, in cooperation with NWO.

Workshop objectives

The main aim of the ICT with Industry workshop is to increase the collaboration between science and industry. The workshop strives for direct and rapid interaction between ICT researchers and industrial partners. It has the following objectives:

  • To stimulate synergies between ICT research and industrial R&D, by analysing challenging problems in multidisciplinary teams.
  • To obtain creative solutions for problems from practice.
  • To give insight into the possibilities ICT research offers, and thereby enable accelerated innovation.
  • To enrich the PhD students’ and postdocs’ experience in collaborating with industry.

Organisation and contacts

Scientific Chairs for 2020

Steering Committee

  • Dr. Paola Grosso (UvA – ASCI) – chair
  • Dr. Suzan Verberne (UL – SIKS)
  • Dr. Michel Reniers (TU/e – IPA)
  • Dr. Wouter Leibbrandt (TNO ESI)
  • Prof. Dr. Patricia Lago (VU – IPN)

If you have any questions about ICT with Industry, please contact us at ictwi2020@easychair.org.


The code of conduct of the ICT with Industry series is available here.

Sound and Vision

Scaping Generous Interfaces for Audiovisual Heritage Collections

Erwin Verbruggen, Johan Oomen, Roeland Ordelman

In collaboration with Roeland J.F. Ordelman, University of Twente

Sound and Vision is interested in developing innovative models for making audiovisual cultural heritage accessible. To this end, we operate platforms for various user groups and play an active role in initiatives such as EUscreen, CLARIAH and Wikipedia.

We would like to challenge workshop participants to develop use cases and a working prototype around presenting ways of creating “wisdom” from loosely organized, available data. Sound and Vision is in talks with partner organizations to create a cross-border access point to public data and heritage collections. International examples such as the Internet Archive allow the exploration of large sets of TV news. The Media Suite of CLARIAH, the Dutch digital humanities access point, allows humanities researchers to analyze the media collections of the Netherlands. As recent high-profile cases like the Panama Papers and the Paradise Papers show, journalists work together to analyze specific sets of documents in detail. Now that more public information sources are being made available (parliamentary video notes, city archives, media collections, historical archives, public radio collections and oral histories, …), our mandate as media collection holders could change from providers of clips to providers of answers. When collections and data sources are connected, what kinds of fact checking, and which answers sought after by journalists, can these collections support?

Sound and Vision is building infrastructure to support access to its data sources, but a sandbox-type environment for developing against this infrastructure is not yet available. Meanwhile, we would like to expand this infrastructure and facilitate cross-border analysis of public data and collections, building on the European Open Science Cloud project SSHOC. We are looking for use cases for a kind of “Wolfram Alpha for the social sciences”, in which cross-border collections would be available, and we seek examples of search methodologies that provide researchers and journalists with practical methods to sift, connect and analyze various data and media collections in search of factual responses to broad societal research questions.
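
A minimal sketch of such a cross-collection search, assuming documents from different (hypothetical) collections have already been retrieved as plain text: it ranks them against a single query with TF-IDF and cosine similarity. The collection names and document texts are placeholders, not actual data sources.

```python
# Sketch: rank documents from several collections against one query.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.metrics.pairwise import cosine_similarity

# Hypothetical documents; in practice these would come from collection APIs.
documents = [
    ("parliament", "Debate on municipal housing policy and rent regulation."),
    ("city_archive", "Council meeting minutes discussing social housing projects."),
    ("tv_news", "News item covering protests about rising rents in Amsterdam."),
]

texts = [text for _, text in documents]
vectorizer = TfidfVectorizer(stop_words="english")
tfidf = vectorizer.fit_transform(texts)

def search(query, top_k=3):
    """Rank documents from all collections against a single query."""
    scores = cosine_similarity(vectorizer.transform([query]), tfidf).ravel()
    ranked = sorted(zip(documents, scores), key=lambda pair: pair[1], reverse=True)
    return [(doc[0], round(float(score), 3)) for doc, score in ranked[:top_k]]

print(search("housing policy"))
```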

HOMIE

Wash program prediction

Jorrit de Vries

Circular Economy and the Internet of Things are two areas that have been steadily gaining momentum in research, business and politics in recent years. The potential of these concepts lies in delivering sustainability benefits across product life cycles, from raw material extraction through to the end-of-life phase of products. Utilizing the interconnectedness of modern technologies and moving away from the classic ownership model to new, more sustainable business models lies at the very heart of what Homie does. Homie was founded as a spin-out from Delft University of Technology by Dr Nancy Bocken, Hidde-Jan Lemstra and Colin Bom.

The challenge is shaped around Homie’s need to monitor the use of its appliances, so it can charge its customers accordingly. We currently provide only one type of washing machine, but in order to offer multiple brands and different product types, we are looking for a way to easily track different types of appliances by analyzing their energy consumption patterns. Based on the power consumption of the washing machine, we want to be able to determine the washing program that a user ran to wash their clothes: its temperature, duration, spin cycles, etc. The approach can be based on (advanced) time series analysis or on machine learning. Of course, you are free to be creative in your approach beyond this.
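
A minimal sketch of the machine-learning route, assuming labelled power traces from previous runs are available: each trace is reduced to a few hand-crafted features (duration, peak power, total energy, time spent near peak) and fed to a standard classifier. The features, program labels and random traces below are illustrative placeholders, not Homie’s actual data.

```python
# Sketch: classify the wash program from a per-minute power-consumption trace.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def extract_features(power: np.ndarray) -> np.ndarray:
    """Turn one power trace (watts per minute) into a small feature vector."""
    return np.array([
        len(power),                        # duration in minutes
        power.max(),                       # peak power (heating element)
        power.sum() / 60.0,                # total energy in watt-hours
        (power > 0.8 * power.max()).sum()  # minutes near peak, a rough proxy for temperature
    ])

# Hypothetical labelled traces: (power trace, program label).
rng = np.random.default_rng(0)
traces = [(rng.uniform(100, 2200, size=n), label)
          for n, label in [(60, "cotton_40"), (90, "cotton_60"), (30, "quick_30")] * 10]

X = np.vstack([extract_features(p) for p, _ in traces])
y = [label for _, label in traces]

clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, y)
print(clf.predict([extract_features(rng.uniform(100, 2200, size=60))]))
```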

There is also a challenge around privacy concerns. How can we ensure GDPR compliance while optimizing the value we create for the user and the information we offer to operators or business stakeholders?

A dashboarding solution is often a very attractive way to present the data to end users. Can you implement a dashboard that achieves this goal?
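
A minimal dashboard sketch, assuming per-minute power readings for a single wash cycle and an illustrative electricity tariff; a production dashboard would more likely be built with a web framework, but the basic views (power over time, cumulative cost) would be similar.

```python
# Sketch: two-panel dashboard for one wash cycle using matplotlib.
import numpy as np
import matplotlib.pyplot as plt

minutes = np.arange(90)
# Synthetic trace: ~2000 W while heating, ~300 W afterwards, plus noise.
power = np.where(minutes < 20, 2000, 300) + np.random.default_rng(1).normal(0, 50, 90)
energy_kwh = np.cumsum(power) / 60 / 1000   # cumulative energy in kWh
cost = energy_kwh * 0.22                    # assumed tariff of 0.22 EUR per kWh

fig, (ax1, ax2) = plt.subplots(2, 1, sharex=True, figsize=(8, 6))
ax1.plot(minutes, power)
ax1.set_ylabel("Power (W)")
ax1.set_title("Wash cycle power consumption")
ax2.plot(minutes, cost)
ax2.set_ylabel("Cumulative cost (EUR)")
ax2.set_xlabel("Minutes since start")
fig.tight_layout()
plt.show()
```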

KB (National Library of the Netherlands)

Improving Access to Early Modern Gothic Texts with NLP and Machine Learning (ImAGiN)

Lotte Wilms, Rutger van Koert, Rosemarie van der Veen-Oei, Arno Bosse

In collaboration with Lambert Schomaker, University of Groningen

The KB has digitised over 100 million pages of Dutch books, newspapers, magazines and other texts, many of which were printed before 1800. Approximately 420,000 books in this collection were digitised by the KB in partnership with Google and are now freely available for research.

Unfortunately, the computer-readable text resulting from this digitisation is often of poor quality. This is due in large part to the use of the so-called ‘gothic’ or blackletter typefaces widely employed by printers in the Netherlands between the 15th and 18th centuries (see Table 1), as well as the physical deterioration of the paper itself. Current OCR (optical character recognition) software, such as ABBYY, which is used to transform bitmapped images of letters into machine-readable glyphs, does not achieve the 98-99% accuracy levels required for digital scholarship on historical texts (Holley, 2009; Cordell, 2018). The task is made more difficult by the large variety of Dutch gothic typefaces in use in the early modern period.

But recent advances in computer vision techniques now allow us to apply machine learning methods used to recognize and transcribe handwritten texts (e.g. Transkribus) to printed scripts as well. Other research (e.g. Kettunen and Koistinen, 2019) has shown that OCR engines (such as Tesseract) trained on specific texts using machine learning algorithms should be directly applicable to gothic texts as well.
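
As an illustration, the sketch below runs Tesseract on a scanned page with its stock Fraktur model via pytesseract. The file name is hypothetical, and for early modern Dutch gothic type a custom-trained model would likely be needed, but the call would look the same.

```python
# Sketch: OCR a scanned page with Tesseract's Fraktur (blackletter) model.
from PIL import Image
import pytesseract

image = Image.open("scan_page_001.png")                 # hypothetical scanned page
text = pytesseract.image_to_string(image, lang="frk")   # "frk" = stock Fraktur model; a custom model would be swapped in here
print(text)
```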

From the perspective of OCR, a book would ideally be printed entirely in the same typeface. However, this is not always the case. Regularly, when emphasizing a certain word or name, a printed text switches from gothic to roman or italic letters. When this happens, the non-dominant type is not recognised by the OCR software, even though this change encodes crucial information. This problem can sometimes be addressed through post-correction, for example using deep learning techniques (van der Zwaan, 2019). In other instances, more traditional NLP techniques can usefully be applied as well. Alternatively, instead of post-processing, another promising option is to avoid the initial error completely by re-training the OCR engine.
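
A minimal sketch of lexicon-based post-correction, assuming a word list of historical Dutch forms is available; the simple fuzzy lookup below merely stands in for the deep-learning or NLP correctors mentioned above.

```python
# Sketch: replace OCR tokens by their closest lexicon entry, if close enough.
import difflib

lexicon = {"tegenwoordigh", "coninck", "stadt", "heere"}  # hypothetical historical word list

def correct_token(token: str) -> str:
    """Return the closest lexicon entry for a token, or the token itself."""
    if token.lower() in lexicon:
        return token
    matches = difflib.get_close_matches(token.lower(), lexicon, n=1, cutoff=0.8)
    return matches[0] if matches else token

ocr_output = "de conlnck van de stadr"
print(" ".join(correct_token(t) for t in ocr_output.split()))
```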

Our use case is centered on the improvement of OCR (post-processing) techniques for early modern texts which were printed (either fully or partially) in a variety of gothic scripts. Next to the type-related issues described above, we must also address complex layouts, bleed-through of text from the opposite page which results in noisy images, multiple and historic languages, and crooked or warped text (lines) caused by the digitisation process. We would like to invite researchers from fields such as computer vision, pattern recognition, image recognition and natural language processing to join us in finding new ways of opening up these rich, historical text resources to researchers.

RTL Nederlands

Multimodal Emotion Recognition

Daan Odijk, Hendrik Vincent Koops

In collaboration with Albert Ali Salah, University of Utrecht

Every day, RTL produces many hours of video content that aims to touch the hearts and minds of viewers. Understanding the emotional content of this material is a critical part of how we tell stories. A better understanding of emotion in video content will allow us to produce better content, improve storytelling and unlock new use cases, thereby benefiting both RTL and its viewers.

In particular, we see a clear opportunity to improve video production by facilitating the creation of content that better matches viewers. For example, we know from directors that using emotionally salient material is an integral part of the production of promotional content, as emotion compels attention and elicits arousal. A better understanding of emotional content would, on the one hand, help directors find emotionally salient material and, on the other hand, unlock an important aspect of the automatic generation of promotional video content.

In first pilots, we have found that current cloud solutions such as Google Cloud Video Intelligence and Microsoft Video Indexer can be leveraged to provide emotional metadata based on audiovisual content. However, these reach neither the level of detail nor the emotional depth that our use cases call for. Many existing models either limit themselves to analyzing a single modality or only a particular dimension of emotion (such as positive versus negative). The MediaEval 2016–2018 benchmarking initiative included an Emotional Impact of Movies task, for which a dataset of 160 professionally made and amateur movies was annotated for fear, valence and arousal. To illustrate the difficulty of this task: the fear subtask in MediaEval was considered unsuccessful, as fear is both rare and very difficult to model properly.
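
A minimal sketch of the underlying modelling task, assuming precomputed audiovisual feature vectors and valence/arousal annotations per clip, in the spirit of the MediaEval task; the random placeholder data below would be replaced by features and labels extracted from annotated movie material.

```python
# Sketch: predict [valence, arousal] per clip from precomputed audiovisual features.
import numpy as np
from sklearn.ensemble import RandomForestRegressor
from sklearn.multioutput import MultiOutputRegressor
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(42)
X = rng.normal(size=(500, 128))         # hypothetical feature vectors, one per clip
y = rng.uniform(-1, 1, size=(500, 2))   # [valence, arousal] annotations in [-1, 1]

X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.2, random_state=0)
model = MultiOutputRegressor(RandomForestRegressor(n_estimators=50, random_state=0))
model.fit(X_train, y_train)
print(model.predict(X_test[:3]))        # predicted [valence, arousal] for three clips
```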

We are seeking to close the gap in vocabulary between our use cases and automatically generated metadata. To achieve this, we need models that understand content on a deeper semantic level, including emotional expressions across modalities.

TNO

Create Multi-Purpose Digital Twins for Industry that are a factor of 1000 cheaper than current approaches

Jeroen Broekhuijsen, Jacques Verriet

In collaboration with Bayu Jayawardhana, University of Groningen

Industry is currently becoming “smart industry” or “Industry 4.0”. Manufacturing companies are starting to embrace new technologies made possible by ICT innovation, such as Big Data, AI, IoT and Data Science. Adopting and embracing these technologies is tough. This year, an NWO Perspectief programme on DIGITAL TWIN has started, in which industrial partners such as Airborne, Tata Steel and Philips (and others) want to reach out to PhD students and develop a digital twinning framework for smartly combining data and first-principles models to optimize their processes.

We invite you to join this journey and meet Airborne and Tata Steel up close and personal in a site visit before the workshop week, which will already provide more insight into how these companies operate. During the week you will get the opportunity to access their data and link it to the physical processes inside their production lines.

Case Airborne: Robot cell for composite manufacturing. Airborne uses a robot arm with a custom-built tape layering machine to build 3D shapes from composite tape materials. Once “baked” in the oven, such a shape becomes a sturdy 3D part suited to customer needs. Customers can custom-order the 3D shapes they want to fabricate.
Your challenge: Can you help Airborne guarantee product quality based on the data they’ve logged from previous runs?
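
A minimal sketch of one possible approach, assuming each production run is summarised by a few logged process features together with a pass/fail outcome from inspection; the feature names and the toy labelling rule are illustrative, not Airborne’s actual data model.

```python
# Sketch: estimate the probability that a production run passes inspection.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.preprocessing import StandardScaler
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(7)
# Hypothetical per-run features: [oven temperature (C), tape tension (N), layering speed (mm/s)]
X = rng.normal(loc=[180.0, 50.0, 120.0], scale=[5.0, 3.0, 10.0], size=(200, 3))
y = (X[:, 0] > 178) & (X[:, 1] > 48)        # toy rule standing in for real inspection outcomes

model = make_pipeline(StandardScaler(), LogisticRegression())
model.fit(X, y)

new_run = [[176.0, 52.0, 115.0]]
print(model.predict_proba(new_run)[0, 1])   # estimated probability the run passes inspection
```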

Case Tata Steel: The HIsarna plant from Tata Steel is a revolutionary new way of producing high-quality steel that saves 50% in CO2 output. The process, however, is much more complex than traditional methods and poses challenges for real-time control, given the variability in the materials and the process environment.
Your challenge: We want you to propose new data analysis techniques to optimize the flow of steel through the HIsarna plant.
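
A minimal sketch of one classical building block for such analysis: an exponentially weighted moving average (EWMA) control chart that flags drift in a noisy process variable. The variable, target and limits below are placeholders, not actual HIsarna parameters.

```python
# Sketch: EWMA monitoring of a noisy process variable with simulated drift.
import numpy as np

rng = np.random.default_rng(3)
melt_temperature = 1450 + rng.normal(0, 8, size=300)   # hypothetical sensor readings (C)
melt_temperature[200:] += 15                           # simulated process drift

lam, target, sigma = 0.2, 1450.0, 8.0
ewma = target
limit = 3 * sigma * np.sqrt(lam / (2 - lam))           # steady-state 3-sigma control limit

for t, x in enumerate(melt_temperature):
    ewma = lam * x + (1 - lam) * ewma
    if abs(ewma - target) > limit:
        print(f"t={t}: EWMA {ewma:.1f} C outside control limits, adjust process")
        break
```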

While you work on these two digital twin cases, we are interested both in how you solve them and in which difficulties you run into along the way. TNO’s goal is to collect these insights and address them at a national level, for a wider audience, in order to achieve a factor-of-1000 saving in the development of digital twins.

Do you want to work on cases for Airborne and Tata Steel? Then we’re interested in meeting you.