Session 1 (morning, 9:00-12:00 CET)
- 09:00-09:30 Introduction [slides]
- 09:30-10:05 Keynote #1: Daniel Hienert (GESIS) [slides]
- 10:05-10:20 Break
- 10:20-10:40 Experience paper #1: Reusing the Model and Components of an IIR Study for Perceived Effects of OCR Quality Change (Kimmo Kettunen et al.) [slides, pdf]
- 10:40-11:00 Experience paper #2: Interactive Social Book Search Data as Reusable Resource (Hall & Koolen) [slides]
- 11:00-11:10 Short break
- 11:10-11:50 Panel discussion with all presenters
- 11:50-12:00 Closing
Session 2 (afternoon, 16:00-19:00 CET)
- 16:00-16:30 Introduction [slides]
- 16:30-17:05 Keynote #2: Georg Buscher (Microsoft Research) [slides]
- 17:05-17:20 Break
- 17:20-17:40 Experience paper #1: Reusing the Model and Components of an IIR Study for Perceived Effects of OCR Quality Change (Kimmo Kettunen et al.) [slides]
- 17:40-18:00 Experience paper #2: Interactive Social Book Search Data as Reusable Resource (Hall & Koolen) [slides]
- 18:00-18:10 Short break
- 18:10-18:50 Panel discussion with all presenters
- 18:50-19:00 Closing
Accepted papers
- Reusing the Model and Components of an IIR Study for Perceived Effects of OCR Quality Change. Kimmo Kettunen, Heikki Keskustalo, Birger Larsen, Tuula Pääkkönen and Juha Rautiainen.
- Interactive Social Book Search Data as Reusable Resource. Mark Michael Hall and Marijn Koolen.
Keynote #1: Daniel Hienert (GESIS)
Information gathering, holding, and re-use in the Social Sciences – Lessons for IIR?
This talk presents how information is gathered, held, and re-used in the social sciences. The natural follow-up question is whether we can adopt some of these patterns, lessons, or insights for data management and re-use in the field of interactive information retrieval.
This talk is divided into two parts:
- First, I will address the classical approach to gathering, holding, and re-using information in the empirical social sciences, which is based on survey data. I will sketch how data is collected, what it looks like and how it is organized, how it is held in data archives, how it is provided to researchers and end-users for re-use, and which tools are needed.
- The second part will address the relatively new information management approach for digital behavioral data, which comes from sources such as social media, sensors, smartphone usage, and web browsing. The information types are new – yet the challenges and questions are the same: how is the data collected, what does it look like, how can it be stored, is there a metadata standard, how can it be made accessible to researchers and end-users, and which tools are needed?
Daniel Hienert is a postdoctoral researcher in the GESIS department Knowledge Technologies for the Social Sciences. He joined GESIS in 2007 after graduating in computer science from the University of Koblenz, following further studies in Italian and business at the Humboldt University Berlin. He has worked on a number of projects at GESIS, including vascoda-TB5, IREON, Sowiport, and GESIS-Search. In 2013 he completed his PhD on interactive visualizations for information search and linking. From 10/2013 to 03/2014 he was acting lead of the GESIS-Architecture team, and from 04/2014 to 03/2017 he held a postdoc position in information retrieval. Since 04/2017 he has been a postdoctoral research associate with interests in Interactive Information Retrieval, Information Visualization, and Information Systems.
Keynote #2: Georg Buscher (Microsoft Research)
Running IR Experiments with Real Users – Common Practices and Challenges
I will give a short overview of how A/B experiments with real users are typically evaluated in industry in the IR/search domain. I will particularly emphasize which kinds of metrics and processes are shared across experiments to yield comparable outcomes, and which common challenges arise.
Georg is a partner general manager at Microsoft Bing, where he leads the Bing Analytics and Metrics team. He has worked for over 12 years in the search and experimentation space in industry. During this time, he has also been head of Facebook's Experimentation Platform product team and engineering manager for Facebook's Search Quality Engineering Metrics team. Apart from establishing experimentation tooling and processes, he has designed search evaluation metrics based on human raters, online behavioral logs, surveys, and large-scale automated tests.