
Agenda

Registration and welcome refreshments
SESSION 1: MARKET TRENDS & CMOS SENSOR UPDATES
Navigating market dynamics and emerging trends in mobile and automotive CIS
The sensor market is expected to grow by more than 11% year over year in 2025, with image sensors playing a key role in driving this growth. The expansion of high-end CMOS image sensor (CIS) products in mobile and automotive applications will contribute over 70% of the CIS market's revenue in 2025. In automotive, CIS content continues to grow as sensors enable ADAS and expand viewing applications. The adoption of high-resolution smartphone image sensors is soaring, registering more than a 30% share of the market, and demand for large-format CIS with higher ASPs will anchor image sensor revenue growth. The presentation will focus on the market landscape for mobile and automotive CIS and define the key trends driving the image sensor market.
Jeffrey Mathews | Senior Analyst, TechInsights
How image sensors are making the world smarter
In the rapidly evolving landscape of technology, sensors have emerged as a pivotal force driving innovation and enhancing intelligence across various sectors. This presentation explores how optical sensors are making the world smarter by enabling more efficient and connected systems.
From smart cities and healthcare to industrial automation, and from environmental monitoring and aerospace observation to autonomous driving, image sensors are the backbone of the technological advances shaping a smarter, more connected world.
Heming Wei | Technical Marketing Manager, X-FAB
Is the perfect CMOS image sensor available today?
Looking at the performance of today’s CMOS image sensors (quantum efficiency over 80%, dynamic range over 80 dB, temporal noise below 1 electron, fixed-pattern noise below visibility, parasitic light sensitivity below -100 dB, etc.), one might conclude that perfection has been reached.
  • Does that mean that few new developments can be expected?
  • Or, is the perfect image sensor much further away than we think?
In this presentation certain performance parameters will be analysed in more detail and suggestions will be made for further improvement. 
The good news: there are still a great many challenges ahead for imaging engineers!
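As a rough back-of-the-envelope illustration of how two of the figures quoted above relate (assuming the standard image-sensor definition of dynamic range as the ratio of full-well capacity N_FWC to rms temporal read noise σ_read; the 10,000 e- full well is an illustrative value, not a figure from the abstract):

\[
\mathrm{DR} = 20\log_{10}\!\left(\frac{N_{\mathrm{FWC}}}{\sigma_{\mathrm{read}}}\right),
\qquad
20\log_{10}\!\left(\frac{10{,}000\,e^-}{1\,e^-}\right) = 80\ \mathrm{dB}.
\]

In other words, 80 dB is already within reach of a modest 10,000 e- full well at 1 e- read noise; pushing towards 100 dB and beyond implies much larger effective wells, lower noise, or multi-exposure and dual-gain schemes.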
Albert Theuwissen | Founder, Harvest Imaging
Networking Break
qCMOS camera ORCA-Quest enabling 2D photon number resolving and its applications
ORCA-Quest is the world’s first quantitative CMOS camera which realises high resolution, high sensitivity, high frame rate and ultra-low readout noise at the same time.
Thanks in particular to the superior design and fabrication technologies of both the sensor and the camera, it achieves 0.3 e- rms readout noise, which enables the world’s first 2D photon number resolving capability, meaning precise quantification of photoelectrons.
Due to these high-performance features, ORCA-Quest is used in a wide range of applications such as quantum technologies, astronomy, materials science and the life sciences.
In this presentation we introduce the 2D photon number resolving capability and some use cases in quantum computing applications.
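As a rough sketch of why 0.3 e- rms read noise enables photon number resolving (assuming purely Gaussian read noise of rms σ_r and rounding each pixel value to the nearest integer electron count; the error-rate expression is a standard approximation, not a figure from the talk):

\[
P_{\mathrm{err}} \approx 2\,Q\!\left(\frac{0.5\,e^-}{\sigma_r}\right)
= \operatorname{erfc}\!\left(\frac{0.5\,e^-}{\sqrt{2}\,\sigma_r}\right)
\approx 0.1 \quad \text{at } \sigma_r = 0.3\,e^-,
\]

where Q is the Gaussian tail probability. At a more typical 1 e- read noise the same expression gives roughly 0.6, i.e. adjacent photon-number peaks overlap almost completely, whereas at 0.3 e- they stay clearly separated, which is what allows individual photoelectrons to be counted.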
David Castrillo | Director, Hamamatsu Ventures Europe
A bold new chapter for Forza: the development of a standardised advanced CMOS image sensor portfolio
Creating versatile, advanced CMOS sensors that meet the needs of both tailored and standardised markets.
Introducing Forza’s first standard products:
  • Ultra-fast 8 MP BSI low-noise sensor:
    • ideal for applications demanding high speed, low-light performance and precision
  • Complementary product: 2 MP sensor in the same family:
    • maintains the ultra-fast, low-noise architecture
    • provides cost-effective performance while ensuring precision imaging for a wide variety of markets
A glimpse into the future:
  • Continuous innovation: addressing the rapidly evolving needs of advanced industries.
  • Collaboration & partnerships: encouraging partnerships to leverage custom CMOS development while offering solutions that scale across multiple industries.

Abhinav Agarwal | Manager Design Engineering, Forza Silicon (Ametek Inc.)
Recent advancement in charge domain transfer in CMOS image sensors for line scan
Gpixel has recently been developing advanced TDI line scan sensors, making significant progress in fast charge-transfer functionality using so-called CCD-in-CMOS technology, with line rates increased beyond 2M lines/s. The same technology is also being used to develop a new series of line scan sensors with rectangular pixels up to 1000 µm long for laser displacement, spectroscopy and OCT applications.
Assaf Lahav | Vice President of Pixel and Advanced Technology, Gpixel
Networking Lunch
SESSION 2: AUTOMOTIVE APPLICATIONS
Automotive imaging market insights and technological evolution
The automotive CMOS image sensor industry is experiencing significant growth and is expected to reach $3.2 billion in 2029. This growth comes in parallel with lowered expectations for the mobile and consumer markets, pushing leading CIS companies to explore new growth opportunities. These companies have led the technological advances, with resolution, dynamic range and LED flicker mitigation evolving to meet the industry's demands. The future outlook indicates widespread adoption of ADAS cameras and in-cabin monitoring systems to meet regulatory expectations. The industry is also seeing the rise of cost-efficient hybrid lens sets from a growing Chinese ecosystem, which is now even developing its own camera modules and competing with the traditional tier-ones.
Anas Chalak | Market and Technology Analyst, Yole Group
Advanced imaging and sensing technologies for ADAS and AD
Advanced Driver Assistance Systems (ADAS) are increasingly using sensing technologies to improve vehicle safety. Current ADAS solutions primarily rely on CMOS image sensors (CIS) for object detection, but future systems will integrate more advanced sensors, such as depth sensing, to enhance accuracy. By 2030, multi-sensor fusion is expected to provide higher levels of perception.
This presentation will explore the key characteristics of image sensors that ensure robust ADAS performance, focusing on HDR, LFM, and motion artifact reduction for 2D CIS cameras. It will also highlight the role of LiDAR technology in complementing camera systems, particularly in poor weather conditions, and demonstrate how SPAD technology is driving the widespread adoption of LiDAR for next-generation ADAS and autonomous vehicles.
Jens Landgraf | Senior Technical Program Manager, Sony Semiconductor Solutions, Europe Design Center (EUDC)
Successful adoption of CMOS image sensors for driver and occupancy monitoring solutions
With new regulations coming into force in the automotive industry, innovative lines of products have been introduced to meet evolving market demands. CMOS image sensors designed for driver and occupancy monitoring feature both rolling shutter and global shutter capabilities, boasting a compact 2.25 µm pixel pitch and delivering world-class performance. The technology approach to the colour matrix allows for both high-quality images and 3D sensing capability. Additionally, the small format is achieved through cutting-edge 3D stacked technology, enabling high performance within a compact sensor design.
Cyrille Trouilleau | Senior Technical Marketing Manager, STMicroelectronics
Pierre Malinge | Image Sensor Design, STMicroelectronics
Networking break
SESSION 3: SWIR
Next-generation short-wave infrared sensors enabled by Pb-free quantum dots
The short-wave infrared (SWIR) wavelength range provides information beyond human vision, and beyond the capabilities of silicon-based devices. For decades, imaging applications in this spectrum have been limited to high-end use cases (security, space, scientific) due to the very high manufacturing costs of hybrid sensors using III-V detectors. Recently, imagers based on colloidal quantum dots have been making inroads into the machine vision industry. Monolithic integration of thin-film photodiode stacks allows for wafer-level fabrication and enables new form factors (high resolution, high pixel density, miniaturisation). In this presentation we report recent updates on InAs QD devices with image sensor proofs of concept. Pb-free stacks are one of the enablers of wider deployment of QD SWIR imaging technology and represent a critical improvement in manufacturability for further upscaling of quantum dot image sensors.
Paweł E. Malinowski | Program Manager “Pixel Innovations”, imec
Organic photoconductive film short wave infrared image sensor
Panasonic has developed an organic-photoconductive-film (OPF) CMOS image sensor that performs photoelectric conversion with an organic thin film.
The OPF image sensor has a unique pixel structure in which the organic thin films are stacked over the CMOS circuit, and we have previously reported visible-light OPF image sensors with remarkable features such as wide dynamic range, a global shutter driving mode, and low colour mixture.
In this presentation we will introduce short-wavelength infrared (SWIR) OPF image sensors whose organic photoelectric conversion films are sensitive to SWIR wavelengths. The basic sensor characteristics and images captured under light sources at wavelengths of 1300 nm and 1450 nm will be reported.
SWIR OPF image sensors are expected to be environmentally friendly (lead-free) and easily integrated with CMOS sensor fabrication processes.
Takahiro Koyanagi | Chief Research Engineer, Technology Sector, Panasonic Holdings Corporation
Panel discussion
  • What are the implications of the polarising world order on the target markets and supply chain of image sensors - Consumer Electronics, Automotive, Industrial?
  • What are the next big mass markets of imaging and vision? Smart home? Service robots? Others?
  • What performance characteristics and breakthroughs are on the horizon?
Speakers to be confirmed
Chair’s closing remarks and end of day one
Networking drinks reception
Reception tbc
18:00 - 20:15
Registration and morning refreshments
SESSION 4: SPAD
High-definition 3D-BSI time-gated SPAD image sensor for machine vision applications
We present a 5 μm-pitch, 3D-BSI, 1 Mpixel time-gated SPAD image sensor for machine vision applications. The SPAD image sensor operates at 1,310 fps for global shutter 2D imaging, and performs event vision sensing with 0.76 ms temporal resolution under 0.02 lux. Range-gated imaging results demonstrate the feasibility of robust imaging in harsh environments. 3D time-of-flight sensing under 50 klux ambient light can be achieved by a newly proposed gating network architecture for robust background suppression.
Ayman Abdelghafar | SPAD Pixel Engineering Lead, Canon, Inc
Photon-counting SPAD X-ray sensors: new possibilities for CMOS digital radiography
Medical X-ray detectors for modalities such as computed tomography, radiography and dental imaging are widely used in diagnostics. The X-ray detector market has grown in recent years and the forecast is very promising. The detector must match the size of the target simply because X-rays cannot be focused by a lens. Wafer-scale CMOS detectors can be realised using the wafer-level stitching technique. Because scintillators can be combined with CMOS pixels at volume scale, CMOS X-ray detectors are not only cost-effective but also suitable for large formats, and therefore dominate the X-ray imaging market. A digital SPAD pixel achieves a higher SNR at a lower dose, and a wider dynamic range, than traditional CMOS X-ray sensors. However, SPAD X-ray sensors face many challenges in wafer-level integration. This presentation will look at how these challenges are being addressed in recent advances in wafer-scale SPAD X-ray sensors.
Youngcheol Chae | Professor, Electrical and Electronic Engineering, Yonsei University, South Korea
GeSi technology for imaging and beyond
In this talk we will review the development of Ge-based photodetection, starting from its role in the Si photonics platform for high-speed communication and moving towards new sensing & imaging applications. The talk will conclude with a brief look at photonic quantum computing opportunities using GeSi SPADs.
Erik Chen | Co-Founder and Chief Executive Officer, Artilux
Networking break
SESSION 5: DEPTH SENSING
Latest technology advances in optical sensors
In our presentation we will showcase some of our latest innovations in ALS solutions, medical imaging detectors, and time-of-flight (ToF) sensing technology, with groundbreaking improvements from TDC performance to system operation, and, last but not least, our sensors spanning visible to SWIR.
André Srowig | Senior Principal Engineer Research and Development, ams OSRAM
Pseudo-direct LiDAR based on spatio-temporal coded exposure with multi-tap charge modulators
A virtually direct LiDAR system based on an indirect ToF image sensor and charge-domain temporal compressive sensing combined with deep learning is demonstrated. This scheme has high spatio-temporal sampling efficiency and offers advantages such as high pixel count, high photon-rate tolerance, immunity to multipath interference, constant power consumption regardless of incident photon rate, and freedom from motion artifacts. Simulations suggest the importance of increasing the number of taps of the charge modulator.
Keiichiro Kagawa | Professor, PhD, Shizuoka University
Depth sensing technologies, cameras and sensors for VR and AR applications
Introduction to AR/VR use cases that require RGB, mono and depth cameras/sensors.
Camera & image sensor requirements for:
  • stereo
  • structured light
  • iToF
SPAD sensors as the imaging sensor for the future.
Depth technologies - example applications.
Harish Venkataraman | Principal Engineer, Camera & Depth Systems, Reality Labs, Meta Inc.
Networking Lunch
SESSION 6: CONSUMER & SCIENTIFIC APPLICATIONS
Enhancing imaging, expanding vision - hybrid vision sensing technology and application
Innovative HVS® (Hybrid Vision Sensing) technology integrates image and event sensing at the pixel, data processing and algorithm layers. This coincides with the launch of ALPIX®, an HVS® sensor based on the new architecture, which supports parallel or independent output of image and event signals.
We will talk about the challenges of combining high-quality APS images with event images from the same sensor, in relation to enhancement tasks for mobile phones and consumer electronics such as deblurring, slow motion, super resolution, and image stitching. Thanks to new hybrid spatial-temporal data registration and processing methods, HVS® has made substantial progress and is being prepared for production.
Roger Bostock | Senior Director, AlpsenTek GmbH
CERN’s activities in pixel detectors: advances in monolithic and hybrid pixel detectors
Introducing the MOSAIX chip, a second-generation wafer-scale monolithic stitched sensor developed for the ALICE ITS3 detector using a 65 nm CMOS imaging technology. This full-size prototype, with a die size of 26.6 × 1.96 cm², features over 94% active area and 144 sensor tiles (around 10 million pixels total), each capable of individual powering to manage defects.
In the hybrid domain we will present Timepix4 and LAPicopix, two state-of-the-art developments:
  • Timepix4, designed in a 65 nm technology, is a 24.7 × 30.0 mm² hybrid pixel detector readout ASIC that supports tiling on four sides using TSV technology. With 448 × 512 pixels (55 µm pitch), it operates in data-driven mode with sub-200 ps time binning and can handle hit rates of up to 3.6 MHz/mm²; in photon counting mode it can manage rates of up to 5 GHz/mm². Timepix4 also offers data output flexibility, with 2 to 16 serializers supporting a total readout bandwidth of up to 164 Gbps.
  • LAPicopix, currently being developed in a 28 nm technology, is a next-generation chip aimed at achieving a time resolution of ~30 ps rms with on-chip clustering and output data sorting. It features a 256 × 256 pixel array (<50 µm pixel size) and a maximum hit rate of 78 kHz per pixel (equivalent to 20 MHz/mm²), pushing the boundaries of high-precision particle tracking.

Xavi Llopart Cudie | Microelectronic Engineer, CERN European Organization for Nuclear Research
A new-space approach for in-situ tracking of space debris
Millions of pieces of space debris smaller than 10 cm orbit the Earth, causing dozens of dangerous situations every day. Many efforts have been made to catalogue this large amount of space debris, most of which track debris from the ground using optical telescopes and/or active radar technologies. However, these approaches track debris from very large distances and require not only a large initial investment but also considerable computational resources. This presentation introduces a novel approach to space debris tracking and cataloguing: a black-and-white CMOS detector on a 3U CubeSat flying in the most populated LEO orbits. A first stage of image and data processing will be performed on board, with the remainder performed at the ground station. Due to the high cost of launching the satellite, the missions envisaged in this project will follow the "new space" approach in terms of simplicity and efficiency: high processing power will be required both on board and at the ground station to automate the system as much as possible. The result will be a scalable in-situ solution for space traffic monitoring and cataloguing. Hosted payloads will be added to the mission to increase efficiency in terms of payload used per kilogram launched.
Gerard Vives | Systems Engineer, Beyond Debris Sàrl
SESSION 7: IMAGE PROCESSING
Unlocking embedded vision innovation: the Kamaros API for cross-platform camera interoperability
The embedded vision market has seen explosive growth, driven by cameras in diverse systems such as drones, smart devices, and autonomous machines. However, the lack of interoperable camera API standards increases application development time and maintenance costs while reducing portability and opportunity for code reuse, resulting in unnecessarily high integration costs for camera technologies. Embedded vision applications on these integrated systems lack a pervasively available API to portably generate sensor streams for local accelerated processing. The Khronos Group’s Kamaros™ API aims to address this challenge by providing a new open standard that will enable cross-platform software portability and decouple hardware and software development. Designed specifically for embedded camera systems, Kamaros allows for streamlined hardware access without exposing proprietary details. This presentation will explore the evolving roadmap and technical direction for Kamaros and discuss how its development will benefit both hardware vendors and developers, fostering broader adoption and efficiency in embedded vision applications.
Laurent Pinchart | CEO, Ideas on Board
Performance and cost advantages of lossless compression for high-value imaging applications
Compression is often misunderstood and, as a result, undervalued and under-deployed as a tool for vision applications. This presentation will discuss the various approaches to deploying compression, and a novel approach that enables low-latency, high-reliability lossless compression for vision applications. When designed and deployed properly, compression can significantly boost the bandwidth capabilities of existing infrastructure to increase performance while maintaining or even lowering costs. The session will include real-world application examples where compression has been deployed in high-value imaging systems for security, defence and medical devices.
James Falconer | Product Manager, Pleora Technologies
Chair's closing remarks and end of conference