Agenda

Session 1: Market Dynamics: Landscape and Industry Overview
CIS Market Overview: Growth, Competition, and Technological Advancements
Image sensors are now everywhere: in smartphones, laptops, and smart home devices, and increasingly in automotive, medical, industrial, security, and aerospace applications. In 2024, the CMOS image sensor (CIS) market bounced back from a challenging 2023, growing more than 6% year-over-year to exceed $23 billion in revenue. The recovery was driven primarily by a rebound in smartphone production and the resulting increase in mobile CIS demand, even as the machine vision and medical segments continued to face challenges. Automotive continued its steady growth, driven by rising camera penetration and evolving safety regulations. On the competitive side, Chinese players posted double-digit growth supported by local demand, while Sony moved closer to 50% market share. Alongside this evolution, technological progress is accelerating, with advanced stacking architectures and the emergence of metasurfaces and neuromorphic, event-based sensors, while HDR and LED flicker mitigation (LFM) remain essential for automotive applications. The presentation will give an overview of the image sensor market, highlighting the key trends of the imaging industry at both the market and technology levels. It will also examine the dynamics of the ecosystem and the main players in this industry.

Bullets:
  • CMOS image sensor industry analysis and 2020-2030 forecasts
  • CIS market trends
  • Imaging ecosystem dynamics
  • CMOS image sensor technology trends and emerging technologies
Anas Chalak | Market & Technology Analyst – Imaging, Yole Group
KEYNOTE SPEECH – Dive into the Strategy and Technology Roadmap from a Leading Foundry (planned)
Coming soon...
Networking Coffee Break
Session 2: SPAD and ToF Sensors
SPAD-based All-Solid-State LiDAR – Higher Reliability and Stability, Accuracy, and Lower Power Consumption and Heat Output
Chao Zhang, CTO, Adaps Photonics
CMOS Image Sensors and SPAD Direct ToF Sensors for Automotive Applications
  • Introduction
  • CMOS Image Sensors
  • Functional Safety and Cyber Security
  • SPAD Direct ToF Sensors
  • 2-in-1: Fusion of Camera and LiDAR

Naoki Kawazu | Senior Analog Design Manager of Automotive Development Department, Automotive Business Division, Sony Semiconductor Solutions Corporation
TBC
Networking Lunch Break
Session 3: SWIR
Quantum Dots for Scalable, Affordable SWIR Imaging
Short-wave infrared (SWIR) image sensors play a critical role in diverse applications including defense and security, industrial inspection, precision agriculture, autonomous systems, medical diagnostics and environmental monitoring. However, the commercialization of SWIR technology has been hampered by high costs, large form factors and poor resolution of available SWIR sensors. These challenges hinder broader adoption and integration into compact, low-power devices, particularly in cost-sensitive markets such as consumer electronics and portable diagnostics. In this talk, we present a low-cost SWIR sensor based on quantum dots (QDs), which serve as tunable photon absorbers directly deposited onto CMOS readout chips using a solution-based process. This approach enables wafer-level manufacturing, high pixel uniformity and spectral flexibility through controlled QD size, resulting in compact, scalable and efficient SWIR imaging devices suitable for volume production. We will present performance results from our 512×512 demonstrator camera, operating in the 400 to 1700 nm range. The imager achieves external quantum efficiency around 20%, with peak values reaching 50–60% after spectral tuning. Real-world use cases in industrial inspection, environmental monitoring and security will be shown to demonstrate the versatility and commercial potential of this QD-based SWIR platform.

Bullets:
  • SWIR image sensors are essential for applications like defense, security, industrial inspection, precision agriculture, autonomous systems, medical diagnostics, and environmental monitoring.
  • Commercialization of SWIR technology faces challenges due to high costs, large form factors, and poor resolution, which limit adoption in compact, low-power, and cost-sensitive markets such as consumer electronics and portable diagnostics.
  • The talk presents a low-cost SWIR sensor based on quantum dots (QDs) as tunable photon absorbers.
  • QDs are directly deposited onto CMOS readout chips using a solution-based process, enabling wafer-level manufacturing, high pixel uniformity, and spectral flexibility.
Dr. Artem Shulga | CEO, QDI
Solutions from Novel Semiconductor Materials - Infrared QD Technology Breakthrough for High-performance, Scalable SWIR Imaging and Sensing (planned), or Germanium on Silicon Image Sensors
Speaker TBC
Networking Coffee Break
Session 4: Image Sensors Application Updates (Part 1)
Topic TBC
Coming soon...
Automotive Sensor Fusion Threats
  • Sensor spoofing/jamming (e.g., GPS spoofing, LiDAR interference)
  • Adversarial inputs to mislead AI-based fusion models
  • Sensor faults or drift causing incorrect fusion results
  • Asynchronous or corrupted sensor data
  • Cyberattacks via connected interfaces (V2X, CAN)
  • Blind trust in fused outputs without validation
  • Lack of redundancy or anomaly detection mechanisms
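To make the last two points concrete, here is a minimal, hypothetical illustration (the function name, sensor set, and tolerance are assumptions for this sketch, not Volvo's method): a cross-sensor plausibility check that flags a fused range estimate instead of trusting it blindly.

```python
# Hypothetical sketch: flag fused output when independent sensors disagree,
# a minimal form of the redundancy / anomaly detection the bullets call for.

def validate_fused_range(camera_m, lidar_m, radar_m, tolerance_m=2.0):
    """Flag the fused range if any pair of independent sensors disagrees
    by more than the tolerance (possible spoofing, fault, or drift)."""
    readings = {"camera": camera_m, "lidar": lidar_m, "radar": radar_m}
    names = list(readings)
    for i, a in enumerate(names):
        for b in names[i + 1:]:
            if abs(readings[a] - readings[b]) > tolerance_m:
                return False, f"{a}/{b} disagree"
    return True, "consistent"

print(validate_fused_range(30.1, 29.8, 30.4))  # (True, 'consistent')
print(validate_fused_range(30.1, 12.0, 30.4))  # flagged: lidar outlier
```

A real system would add temporal consistency checks and sensor health monitoring, but even this pairwise vote removes the "blind trust" failure mode listed above.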

Mahabir Gupta | Solutions & Products Consultant - IoT, Mobility & Data Security, Volvo
Scientific Imaging Updates
Event-based Sensing for XR and Smart Glasses
Event-based sensing enables ultra-fast, low-power, and robust tracking for XR and smart glasses, even in challenging light conditions. By capturing only dynamic changes, these sensors deliver smooth gesture recognition, precise hand and eye tracking, and efficient environment mapping. This technology overcomes limitations of traditional cameras, supporting immersive, always-on XR experiences. The session explores recent breakthroughs and applications driving the next generation of wearable devices.
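The "capturing only dynamic changes" principle can be sketched in a few lines (an illustrative model with an assumed contrast threshold, not Prophesee's actual pipeline): a pixel emits an event only when its log-intensity changes by more than a fixed contrast step, so static scenes produce no data at all.

```python
# Minimal model of event generation: compare two frames pixel-by-pixel in
# log-intensity and emit (x, y, polarity) events where the change exceeds
# a contrast threshold. Static pixels generate nothing.
import math

def events_between(prev_frame, next_frame, contrast=0.2):
    out = []
    for y, (row_a, row_b) in enumerate(zip(prev_frame, next_frame)):
        for x, (a, b) in enumerate(zip(row_a, row_b)):
            d = math.log(b + 1) - math.log(a + 1)  # +1 avoids log(0)
            if abs(d) > contrast:
                out.append((x, y, +1 if d > 0 else -1))
    return out

prev = [[100, 100], [100, 100]]
nxt = [[100, 200], [100, 40]]
print(events_between(prev, nxt))  # [(1, 0, 1), (1, 1, -1)]
```

The sparse, change-only output is what makes the always-on tracking described above feasible within a wearable power budget.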
Luca Verre | Co-founder and CEO, Prophesee
End of Day One Conference
Session 4: Image Sensors Application Updates (Part 2)
Industrial Sensor Updates
Speaker TBC
Design, Fabrication, and Production of Large-Format Image Sensors for Cinematography: Challenges and Solutions
Large-format, next-generation immersive displays impose challenging requirements on video capture: the combination of display size and resolution that necessitates highly detailed imagery also clearly exposes any deficiencies in the image. This requires a sensor that produces very-high-resolution imagery while maintaining image quality, low noise, high dynamic range, and minimal shutter/image artifacts. We present a case study of a 316 MP, 2D-stitched large-format imager designed by Forza Silicon. The presentation will highlight the multidisciplinary challenges of realizing such a sensor, which required innovations on multiple fronts, from sensor design to foundry fabrication issues that push the limits of imager technology. We will also cover packaging and assembly roadblocks and, finally, low-volume production challenges involving sensor screening, pixel defect inspection, and yield optimization. We will conclude with future directions and a roadmap.

Bullets:
  1. Motivation
  2. Large Format Image Sensors: Requirements and Challenges
  3. Case Study: Forza-designed 316 MP, 120 FPS, large format 2D stitched image sensor
     a) Design challenges and solutions
     b) Fabrication challenges in the STMicroelectronics fab
     c) Packaging and assembly challenges and solutions
     d) In-house production challenges: pixel defect inspection, yield optimization
  4. Future Technology Directions
Abhinav Agarwal | Manager Design Engineering, Forza Silicon (Ametek Inc.)
High-performance CMOS Image Sensors: Driving Imaging Innovation
Networking Coffee Break
Panel Discussion: How the Current Geopolitics Influence the Image Sensors Market and What that Means for the Industry
  • Trade restrictions and the supply chain
  • Regional market shifts and demand-side impacts
  • Implications for the industry
Networking Lunch Break
Session 5: Foundry Updates: Technology Breakthroughs and Product Roadmaps
Revolutionizing the Image Sensor Industry – Foundry Innovation Updates
Speaker TBC
Up-to-date Foundry Solutions for Future CMOS Image Sensors
Speaker TBC
Up-to-date Foundry Solutions for Automotive Image Sensors - Their Trends, Requirements, and Challenges
Session 6: CMOS/Processing Updates
Hybrid CNN-Classical Algorithm for On-Sensor Demosaic
Demosaicing is a critical step in image sensors, reconstructing full-color images from the partial color samples of a Color Filter Array (CFA). Convolutional Neural Networks (CNNs) deliver excellent quality but are resource-intensive and often exceed sensor hardware limits. We present a novel hybrid architecture that makes CNN-based demosaicing practical for on-sensor hardware. A compact CNN handles complex patches, while classical interpolation processes the rest. Focusing the CNN on challenging regions reduces model size (~4.5 K parameters) and power. The CNN generates adaptive filters aligned with the CFA; patch complexity is detected by a hardware-friendly decision tree using simple features such as DCT coefficients. The CNN and decision tree are trained jointly. On proprietary 4 × 4 Bayer data, our hybrid model outperforms classical methods while staying far below the cost of full-CNN approaches—offering a practical on-sensor ISP solution.

Bullets:
  • On-Sensor ISP CNN Motivation: making image signal processing CNNs run inside the sensor; we target demosaicing as a first high-impact block.
  • Green-Channel Demosaicing: demosaicing reconstructs missing CFA colors; the luminance-rich green plane is estimated first and then guides U-V chroma estimation.
  • Hybrid Strategy: a decision tree routes simple patches to classical filters and complex patches to a compact CNN, combining low cost with deep-learning quality.
  • Dataset: proprietary 4 × 4 Bayer RAW frames with U-Net-generated ground truth that preserves sensor noise and optics.
  • Decision-Tree Classification: a 10-level decision tree uses luminance, edge strength, and low-frequency DCT features to label each patch as simple or difficult.
  • CNN Architecture: a CFA-aligned network that fits hardware constraints, shares features across subsequent activations, and generates per-pixel weights to blend learned filters.
  • Iterative CNN-Classifier Training: repeatedly refine the decision classification and retrain the CNN so it runs on only 20-25% of patches, focusing where it matters.
  • Results: sharper symbols, edges, and corners; dot and color artifacts removed; 8% lower L1 error than classical methods while fitting in ≈1.6 M gates.
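The routing idea at the core of the hybrid strategy can be sketched as follows. This is an illustrative toy, not the authors' implementation: the complexity feature here is a plain gradient sum standing in for their luminance/edge/DCT features, and the threshold is arbitrary.

```python
# Illustrative sketch of hybrid routing: a cheap complexity measure decides
# whether a patch goes to classical interpolation or to the compact CNN.

def patch_complexity(patch):
    """Sum of absolute horizontal and vertical gradients (a stand-in for
    the edge-strength / DCT features used by the hardware decision tree)."""
    h, w = len(patch), len(patch[0])
    g = 0
    for y in range(h):
        for x in range(w - 1):
            g += abs(patch[y][x + 1] - patch[y][x])
    for y in range(h - 1):
        for x in range(w):
            g += abs(patch[y + 1][x] - patch[y][x])
    return g

def route_patch(patch, threshold=64):
    """Simple patches go to classical filters; complex ones to the CNN."""
    return "cnn" if patch_complexity(patch) > threshold else "classical"

flat = [[128] * 4 for _ in range(4)]   # uniform region: easy
edge = [[0, 0, 255, 255]] * 4          # hard vertical edge: difficult
print(route_patch(flat), route_patch(edge))  # classical cnn
```

Because flat regions dominate natural images, this kind of gate is what lets the CNN run on only a small fraction of patches while still handling the cases where classical interpolation fails.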
Scientific CMOS Image Sensors for Visible Light Detection and Beyond
We have developed a scientific CMOS image sensor with low noise and high dynamic range, the first of a family of image sensors based on the same architecture. The sensor has an 8 µm 6T pixel with a 24.0 mm × 30.9 mm focal plane, corresponding to 3000 × 3864 pixels, for a total of 11.6 megapixels. It features 1.5 e- rms noise and 93.6 dB dynamic range. It is back-side illuminated (BSI) and was made with two different anti-reflective coatings (ARCs), one optimized for visible-light detection and the other for ultraviolet detection. It can also be coated with a phosphor such as P43 to extend its performance over a wide range of wavelengths, including low-energy X-rays. The sensor features column-parallel programmable gain amplifiers (PGAs) and ADCs. The high dynamic range is achieved with a LOFIC (lateral overflow integration capacitor) structure. The PGA gain can be set differently for the two pixel gains, optimizing performance for different applications. It works in rolling and global shutter and also has a special mode with global reset and rolling readout. CDS is performed on-chip for rolling shutter, while digital CDS is performed off-chip for global shutter. The sensor integrates all controls and can be programmed through a serial interface for ease of use. Originally designed for a medical application, the sensor also finds use in scientific, space, and industrial applications. It is stitched, so different formats can readily be made from the same mask set; we have already planned a medium-format sensor with 9000 × 7228 pixels (about 65 megapixels) and a 72.0 mm × 61.8 mm focal plane. Because of the application requirements, the sensor is read out on its short side, which limits speed; a semi-custom design can change the readout direction, turning it into a high-speed sensor. The design was presented at the International Image Sensor Workshop in 2023, but experimental results have not been presented yet.
This talk will fill that gap and also discuss possible future developments based on the same architecture.
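As a hedged worked example of how the quoted noise and dynamic range figures relate (the function names are ours, and the implied full-well number is derived arithmetic, not a figure from the talk): dynamic range in dB is 20·log10(full well / read noise), so 1.5 e- rms noise and 93.6 dB together imply a combined LOFIC full well of roughly 72,000 e-.

```python
# DR(dB) = 20 * log10(full_well / read_noise); invert it to see what full
# well the quoted 1.5 e- noise and 93.6 dB figures imply.
import math

def dynamic_range_db(full_well_e, read_noise_e):
    return 20 * math.log10(full_well_e / read_noise_e)

def implied_full_well(dr_db, read_noise_e):
    return read_noise_e * 10 ** (dr_db / 20)

fw = implied_full_well(93.6, 1.5)
print(round(fw))  # ≈ 72,000 e- combined full well
print(round(dynamic_range_db(fw, 1.5), 1))  # recovers 93.6 dB
```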
Breaking the Color Barrier: How Color Splitting Nanophotonics Are Transforming Image Sensors Beyond Bayer Filters
For over five decades, the core mechanism behind color photography has remained largely unchanged. At the heart of nearly every CMOS image sensor lies the Bayer color filter array, a grid of red, green, and blue filters that allows silicon-based sensors to interpret color. In this mosaic, each pixel is covered by a red, green, or blue filter, so only about one-third of the incoming light contributes to any given pixel's signal. As a result, roughly 70% of photons are discarded before ever reaching the photodetector. This inefficiency lowers the signal-to-noise ratio (SNR), a perpetual Achilles' heel for smartphone and compact camera users. Now a new frontier in imaging is emerging: nanophotonic color-splitting technology that guides, rather than filters, light into sub-diffraction-limited waveguides. This approach eliminates the inefficiencies of Bayer-based systems and unlocks a new era of ultra-compact, high-resolution, light-efficient cameras across smartphones, XR, industrial inspection, and medical diagnostics. The session will discuss this innovation, which replaces the Bayer filter entirely with a nanophotonic waveguide layer that splits light by wavelength and directs it to the appropriate pixel. Rather than absorbing unwanted wavelengths, the system uses vertical waveguides designed to separate colors, guiding photons with minimal loss and maximal resolution.
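The photon-budget claim above reduces to simple arithmetic; this back-of-the-envelope sketch (idealized: it treats the Bayer filter as passing exactly one of three bands and the splitter as lossless, which real devices are not) also shows the shot-noise-limited SNR headroom.

```python
# Idealized photon budget: a Bayer filter passes ~1 of 3 color bands per
# pixel; an ideal splitter routes all photons instead of absorbing them.

def bayer_captured_fraction(bands_passed=1, total_bands=3):
    return bands_passed / total_bands

def splitter_captured_fraction():
    return 1.0  # ideal case: photons are redirected, not filtered out

lost = 1 - bayer_captured_fraction()
print(f"Bayer discards ~{lost:.0%} of photons")  # ~67%

# In the shot-noise limit SNR scales with sqrt(photons), so an ideal
# splitter buys roughly sqrt(3) ~ 1.7x SNR at the same exposure.
snr_gain = (splitter_captured_fraction() / bayer_captured_fraction()) ** 0.5
print(round(snr_gain, 2))  # 1.73
```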
Jeroen Hoet | CEO & Co-founder, eyeo
CMOS Image Sensors Pixels: from Basics to Today's Implementations
The pixel in a CMOS image sensor (CIS) is the part of the device that converts the incoming information (light) into a signal that can be measured (a voltage or a digital number) and further processed in the digital domain. As can be expected, the CIS pixel plays a crucial role in the capabilities of the CIS and in the final signal quality delivered by the image sensor. In this tutorial, several pixel architectures will be reviewed, starting with the general workhorses: the 3-transistor and 4-transistor alternatives. Over the last decade, however, a few completely new devices have made inroads, such as the single-photon avalanche diode (SPAD) and the event-based vision sensor (EVS). Although also based on silicon, SPAD and EVS pixels rely on a different working principle than their classical predecessors.

Modern CIS devices are often used in applications outside the visible spectrum, such as near-infrared applications and the SWIR band, but also in the UV part of the spectrum. The question is whether, in those cases, silicon is still the best base material for those pixels. The tutorial will indicate what alternatives to silicon exist (such as InGaAs, quantum dots, Ge on Si, …), but every new material comes with its pros and cons.

The tutorial will conclude with a glance into the future of CIS pixels: what will “stacking” bring to the table as far as pixel performance is concerned?
Albert Theuwissen | Founder, Harvest Imaging