Agenda

Session 1: Market Dynamics & CMOS Sensors Updates
CIS Market Overview: Growth, Competition, and Technological Advancements
Image sensors are now everywhere: in smartphones, laptops, and smart home devices, and increasingly in automotive, medical, industrial, security, and aerospace applications. In 2024, the CMOS image sensor (CIS) market bounced back from a challenging 2023, growing over 6% year-on-year to exceed $23 billion in revenue. The recovery was driven primarily by a rebound in smartphone production and the resulting increase in mobile CIS demand, even as the machine vision and medical segments continued to face challenges. Automotive continued its steady growth, driven by rising camera penetration and evolving safety regulations. On the competitive side, Chinese players posted double-digit growth supported by local demand, while Sony moved closer to a 50% market share. Alongside this evolution, technological progress is accelerating, with advanced stacking architectures and the emergence of metasurfaces and neuromorphic, event-based sensors, while HDR and LED flicker mitigation (LFM) remain essential for automotive applications. The presentation will give an overview of the image sensor market, highlighting the key market and technology trends of the imaging industry, with a focus on ecosystem dynamics and the industry's main players.

Bullets:
  • CMOS image sensor industry analysis and 2020-2030 forecasts
  • CIS market trends
  • Imaging ecosystem dynamics
  • CMOS image sensor technology trends and emerging technologies
 
Anas Chalak | Market & Technology Analyst – Imaging, Yole Group
Global-Shutter CMOS Image Sensors: Combining High Performance, Advanced Features, and Application Versatility
This talk presents high-performance, feature-rich global-shutter (GS) CMOS image sensors (CISs) that can be applied to a wide range of applications. In particular, we introduce two distinctly different types of GS CIS developments. The first is a hybrid-shutter (HS) CIS that integrates both conventional rolling-shutter (RS) and GS functions into a single sensor, targeting mobile applications. It supports switchable operation between a 50-megapixel (Mp) RS mode and a 12.5 Mp GS mode, enabling both high-resolution and motion-artifact-free imaging. The second is a high-performance digital pixel sensor (DPS) that features a pixel-parallel analog-to-digital converter (ADC) architecture for automotive, industrial, and consumer applications. Implemented with the world's smallest 3 µm digital pixel at 3 Mp, the sensor demonstrates excellent characteristics, including 1.2 e- rms random noise and over 110 dB dynamic range.
 
Dr. Min-Woong Seo | Principal Engineer & Head of Advanced Sensor Design Group, Samsung Electronics
Scientific CMOS Image Sensors for Visible Light Detection and Beyond
We have developed a low-noise, high-dynamic-range scientific CMOS image sensor, the first of a family of image sensors based on the same architecture. The sensor has an 8 µm 6T pixel and a 24.0 mm × 30.9 mm focal plane of 3000 × 3864 pixels, for a total of 11.6 Megapixel. It features 1.5 e- rms noise and 93.6 dB dynamic range. It is back-side illuminated (BSI) and was made with two different anti-reflective coatings (ARCs), one optimized for visible light detection and the other for ultraviolet detection. It can also be coated with a phosphor such as P43 to extend its performance over a wide range of wavelengths, including low-energy X-rays.

The sensor features column-parallel programmable gain amplifiers (PGAs) and ADCs. The high dynamic range is achieved with a LOFIC structure, and the PGA gain can be set differently for the two pixel gains, optimizing performance for different applications. The sensor works in rolling and global shutter and also has a special mode with global reset and rolling readout. CDS is performed on chip for rolling shutter, while digital CDS is performed off chip for global shutter. The sensor integrates all controls and can be programmed through a serial interface for ease of use.

The sensor was originally designed for a medical application, but it also finds use in scientific, space, and industrial applications. It is stitched, so different formats can readily be made from the same mask set. We already plan to produce a medium-format sensor with 9000 × 7228 pixels, or 65 Megapixel, and a focal plane of 72.0 mm × 61.8 mm. Because of the application requirements, the sensor is read out on its short side, which limits the speed; a semi-custom design can change the readout direction, turning it into a high-speed sensor.

The design was presented at the International Image Sensor Workshop in 2023, but experimental results have not been presented yet. This talk will fill that gap and also discuss possible future developments based on the same architecture.
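The noise and dynamic range figures above are related by the standard definition DR(dB) = 20·log10(full well / read noise). As a minimal sketch, assuming only that definition (the function names are illustrative, not from the talk), the full-well capacity implied by the quoted numbers can be computed:

```python
import math

def dynamic_range_db(full_well_e, read_noise_e):
    """Dynamic range: ratio of the largest non-saturating signal
    (full-well capacity) to the noise floor, in decibels."""
    return 20 * math.log10(full_well_e / read_noise_e)

def implied_full_well(dr_db, read_noise_e):
    """Invert the definition to estimate full-well capacity from a
    quoted dynamic range and read noise."""
    return read_noise_e * 10 ** (dr_db / 20)

# Figures quoted in the abstract: 1.5 e- rms noise, 93.6 dB dynamic range.
fw = implied_full_well(93.6, 1.5)
print(f"implied full-well capacity: {fw:,.0f} e-")  # ~72,000 e-
```

A single pixel alone rarely reaches such a full well, which is why the abstract's LOFIC structure (an overflow capacitor per pixel) is needed to extend the top end of the range.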
 
Renato Turchetta | CEO, IMASENIC
Networking Coffee Break
Session 2: SWIR
Quantum Dots for Scalable, Affordable SWIR Imaging
Short-wave infrared (SWIR) image sensors play a critical role in diverse applications including defense and security, industrial inspection, precision agriculture, autonomous systems, medical diagnostics and environmental monitoring. However, the commercialization of SWIR technology has been hampered by high costs, large form factors and poor resolution of available SWIR sensors. These challenges hinder broader adoption and integration into compact, low-power devices, particularly in cost-sensitive markets such as consumer electronics and portable diagnostics. In this talk, we present a low-cost SWIR sensor based on quantum dots (QDs), which serve as tunable photon absorbers directly deposited onto CMOS readout chips using a solution-based process. This approach enables wafer-level manufacturing, high pixel uniformity and spectral flexibility through controlled QD size, resulting in compact, scalable and efficient SWIR imaging devices suitable for volume production. We will present performance results from our 512×512 demonstrator camera, operating in the 400 to 1700 nm range. The imager achieves external quantum efficiency around 20%, with peak values reaching 50–60% after spectral tuning. Real-world use cases in industrial inspection, environmental monitoring and security will be shown to demonstrate the versatility and commercial potential of this QD-based SWIR platform.

Bullets:
  • SWIR image sensors are essential for applications like defense, security, industrial inspection, precision agriculture, autonomous systems, medical diagnostics, and environmental monitoring.
  • Commercialization of SWIR technology faces challenges due to high costs, large form factors, and poor resolution, limiting adoption in compact, low-power, and cost-sensitive markets such as consumer electronics and portable diagnostics.
  • The talk presents a low-cost SWIR sensor based on quantum dots (QDs) as tunable photon absorbers.
  • QDs are deposited directly onto CMOS readout chips using a solution-based process, enabling wafer-level manufacturing with high pixel uniformity and spectral flexibility through controlled QD size.
  • The resulting SWIR imaging devices are compact, scalable, efficient, and suitable for volume production.
  • Performance results from a 512×512-pixel demonstrator camera operating from 400 to 1700 nm will be shared.
  • The imager achieves external quantum efficiency of ~20%, with peak values of 50-60% after spectral tuning.
  • Real-world use cases include industrial inspection, environmental monitoring, and security, showcasing the platform's versatility and commercial potential.
 
Dr. Artem Shulga | CEO, QDI
Emerging Trends in SWIR Image Sensors Using Monolithic Integration of QDPD and OPD Technologies
Short-Wave Infrared (SWIR) imaging, typically spanning wavelengths from 1000 nm to 2000 nm, is gaining increasing importance in a wide range of applications such as medical imaging, surveillance, automotive sensing, and industrial inspection. These applications benefit from the superior sensitivity of SWIR sensors compared to conventional silicon photodiodes, enabling clearer images and more accurate data in challenging conditions such as low visibility, material differentiation, or temperature variations. However, conventional SWIR image sensors, which often rely on InGaAs-based photodiodes connected to silicon readout circuits via wafer-to-wafer bonding, face significant challenges in terms of cost, scalability, and large-area integration. To address these challenges, imec is developing alternative SWIR sensing technologies based on monolithic integration of thin-film photodiodes (TFPDs), including quantum dot photodiodes (QDPDs) and organic photodiodes (OPDs). This approach allows direct fabrication of the photodiode layer onto the CMOS readout wafer, eliminating the need for bonding and significantly simplifying the manufacturing flow. In this presentation, we introduce imec's research progress on various monolithically integrated photodetector structures and prototype imagers for SWIR applications. We will highlight the current trends in QDPD/OPD technologies and discuss the remaining technical challenges and potential directions for enabling next-generation, low-cost, high-performance SWIR image sensors.
 
Myonglae Chu | Principal Member of Technical Staff, Scientific Lead for Image Sensors, IMEC (Interuniversity Microelectronics Centre)
Networking Lunch Break
Session 3: 3D Sensing Technologies
CMOS Image Sensors and SPAD Direct ToF Sensors for Automotive Applications
  • Introduction
  • CMOS Image Sensors
  • Functional Safety and Cyber Security
  • SPAD Direct ToF Sensors
  • 2in1, Fusion of Camera and LiDAR

Naoki Kawazu | Senior Analog Design Manager of Automotive Development Department, Automotive Business Division, Sony Semiconductor Solutions Corporation
Topic TBC
Speaker from LIPS
SPAD-Based LiDAR Chip Design and Commercialization Progress
As a critical sensor for autonomous driving, LiDAR has consistently attracted widespread attention from both industry and academia. In 2024, the sales of LiDAR-equipped vehicles in China exceeded 1.5 million units. Market analysis predicts that this number will reach 3 million units in 2025, reflecting a 100% year-on-year growth rate. Among all LiDAR technologies, those utilizing SPAD (Single-Photon Avalanche Diode) as the core receiver chip dominate the market. In this talk, I will introduce the chip design and commercialization progress of SPAD-based LiDAR, covering technologies such as SiPM (Silicon Photomultipliers), solid-state LiDAR, and semi-solid-state LiDAR. Additionally, I will discuss the key challenges and future directions for SPAD-based LiDAR development.
 
Andrew Lee | Vice President, Adaps Photonics Ltd.
Session 4: Image Sensors Application Updates (Part 1)
TheiaCel™ Technology for Single Exposure HDR Video Imaging
TheiaCel is OMNIVISION's single-exposure HDR technology, originally developed to address challenging issues in the automotive field such as LED flicker and HDR ghosting. The introduction of TheiaCel has set a new benchmark for LED flicker mitigation (LFM) in this field, targeting Advanced Driver Assistance Systems (ADAS) and Autonomous Driving (AD).

Recognizing the growing demand for single-exposure HDR in mobile devices, OMNIVISION has extended this proven technology into the mobile domain. We recently announced the world’s first image sensor featuring TheiaCel technology for mobile, bringing automotive-grade HDR performance to smartphones.

In this talk, we will first give an overview of TheiaCel technology. Then we will introduce the key features of the new mobile sensor with TheiaCel and explore its potential applications in mobile imaging. Finally, we will compare TheiaCel with conventional HDR technologies to highlight its advantages in image quality.
 
Keiji Yamaguchi | Senior Characterization Manager, OmniVision
Networking Coffee Break
Recent Image Sensor Technologies for Mobile Phones
The digital camera market has been shrinking for the past decade: the market in 2024 was about one-fifth of its size ten years earlier, in 2014. The market for interchangeable-lens digital cameras such as DSLRs and mirrorless cameras is about half its former size, while the market for integrated-lens digital cameras is less than one-tenth. One reason for this shrinkage is thought to be the widespread use of smartphones and the improvement in their camera performance and image quality.

We believe that technical improvements in image sensors and signal processing, together with larger optical systems, have contributed to the improved image quality of smartphone cameras. I will explain the technical trends and the individual image sensor technologies for mobile.
 
Atsushi Kobayashi | Technology Director, Xiaomi Technology
Automotive Sensor Fusion Threats
  • Sensor spoofing/jamming (e.g., GPS spoofing, LiDAR interference)
  • Adversarial inputs to mislead AI-based fusion models
  • Sensor faults or drift causing incorrect fusion results
  • Asynchronous or corrupted sensor data
  • Cyberattacks via connected interfaces (V2X, CAN)
  • Blind trust in fused outputs without validation
  • Lack of redundancy or anomaly detection mechanisms

Mahabir Gupta | Solutions & Products Consultant - IoT, Mobility & Data Security, Volvo
The Development of a Low-Noise, High-Speed CMOS TDI Sensor for Scientific and Industrial Applications
Panel Discussion: How the Current Geopolitics Influence the Image Sensors Market and What that Means for the Industry
  • Trade restrictions and the supply chain
  • Regional market shifts and demand-side impacts
  • Implications for the industry
End of Day One Conference
Session 5: Foundry Updates: Technology Breakthroughs and Product Roadmaps
Foundry Processes Required for Future Image Sensors: Logic and Pixel Processes and Beyond
Demand for semiconductor process development to improve image quality continues, and its scope keeps growing to handle extreme imaging environments and video quality. In addition, vision sensors require process development in a different direction from image sensors. These demands call not only for improvements to the logic process for the ISP, but also for improvement and development of the pixel process. Among products using logic processes, the image sensor is a special case: it uses wafer-to-wafer bonding and also applies backside and color filter processes that admit light. The areas where development is required are therefore very diverse. This presentation will focus on the developments required in the logic process for the ISP, the front-side pixel process, wafer-to-wafer bonding, the backside process, the color filter process, and other processes.
 
Dr. Eung-Kyu Lee | Distinguished Engineer & Head of Group in Sensor PA team, Samsung Electronics
Enabling the Future of Automotive Vision: Foundry Insights on Image Sensor Trends
As the automotive industry accelerates toward higher levels of autonomy and safety, image sensors are becoming increasingly central to vehicle perception systems. This presentation explores the evolving requirements for automotive CMOS image sensors (CIS)—from high dynamic range (HDR) and LED flicker mitigation to functional safety and cybersecurity compliance. Drawing on recent industry insights and regulatory trends, we will examine how foundry platforms are adapting to meet these demands.
 
Yuichi Motohashi | Senior Manager and Deputy Director of End Markets, GlobalFoundries
Session 6: Image Sensors Application Updates (Part 2)
High-resolution, high-image quality, high-speed CMOS image sensor for multiple applications
The need for high-resolution, high-quality image sensors has been booming with the arrival of higher processing capabilities in end applications. We have developed a CMOS image sensor, part of a product family with two-dimensional design-stitching capability, enabling its resolution or format to be scaled up or down with the same mask set. It offers 46 Megapixel (MP) in a 24 mm × 36 mm format with a 4.4 µm BSI rolling-shutter pixel, high speed (150 fps in full frame, 200 fps in 8K), low noise (1.5 e- rms), and high dynamic range up to 90 dB thanks to in-pixel true HDR, and is available in both monochrome and RGB versions. The product family was designed to support up to 220 Megapixel resolution (73 mm × 58 mm). It targets multiple applications such as aerial and space (new space) imaging, astronomy, professional video, high-resolution inspection, and others.
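The quoted resolution and frame rates imply substantial readout bandwidth. A quick illustrative calculation, assuming a 12-bit output depth (an assumption for the sketch, not a Pyxalis specification):

```python
# Illustrative throughput arithmetic for the figures quoted above.
pixels = 46_000_000          # 46 MP full frame
fps_full_frame = 150         # quoted full-frame rate
bits_per_pixel = 12          # assumed ADC/output bit depth

pixel_rate = pixels * fps_full_frame             # pixels per second
data_rate_gbps = pixel_rate * bits_per_pixel / 1e9

print(f"pixel rate: {pixel_rate / 1e9:.1f} Gpix/s")
print(f"raw data rate: {data_rate_gbps:.1f} Gbit/s")
```

At roughly 7 Gpix/s, such a sensor needs many parallel outputs or high-speed serial lanes, which is one reason stitched large-format designs are often read out over multiple channels.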

Speaker from Pyxalis
Networking coffee break
High-Performance CMOS Image Sensors: Driving Innovation in Imaging Technology
This presentation will focus on SmartSens' advanced imaging technologies. It will highlight the SuperPixGain HDR™ high dynamic range technology, providing an in-depth analysis of how this technology significantly enhances imaging performance. Additionally, it will introduce low-light imaging technologies including SFCPixel® and PixGain™ Dual Pixel Conversion Gain, demonstrating how they optimize and enhance imaging quality in security, automotive, and other fields.
 
Jessie Zhao | Marketing Manager, SmartSens Technology (Shanghai) Co., Ltd.
Event-based Sensing for XR and Smart Glasses
Event-based sensing enables ultra-fast, low-power, and robust tracking for XR and smart glasses, even in challenging light conditions. By capturing only dynamic changes, these sensors deliver smooth gesture recognition, precise hand and eye tracking, and efficient environment mapping. This technology overcomes limitations of traditional cameras, supporting immersive, always-on XR experiences. The session explores recent breakthroughs and applications driving the next generation of wearable devices.
 
Luca Verre | Co-founder and CEO, Prophesee
Design, Fabrication, and Production of Large-Format Image Sensors for Cinematography: Challenges and Solutions
Large-format, next-generation immersive displays impose challenging requirements on video capture: the combination of display size and resolution that necessitates detailed image capture also clearly exposes any deficiencies in the image. This requires a sensor that creates very-high-resolution imagery while maintaining image quality, low noise, high dynamic range, and minimal shutter and image artifacts. We present a case study of a 316MP, 2D-stitched large-format imager designed by Forza Silicon. In the presentation we will highlight the multi-disciplinary challenges faced in realizing such a sensor, which required innovations on multiple fronts, from sensor design to foundry fabrication challenges that push the limits of imager technology. We will also illustrate the packaging and assembly roadblocks and, finally, the low-volume production challenges involving sensor screening, pixel defect inspection, and yield optimization. We will conclude the talk with future directions and the roadmap.

Bullets:
  1. Motivation
  2. Large Format Image Sensors: Requirements and Challenges
  3. Case Study: Forza-designed 316MP, 120FPS, large-format 2D stitched image sensor
     a) Design Challenges and Solutions
     b) Fabrication Challenges in the STMicroelectronics Fab
     c) Packaging and Assembly Challenges and Solutions
     d) In-house Production Challenges: Pixel Defect Inspection, Yield Optimization
  4. Future Technology Directions
 
Abhinav Agarwal | Manager, Design Engineering, Forza Silicon (Ametek Inc.)
Networking Lunch Break
Session 7: Processing Updates
Advanced Flip-Chip Bonding for Infrared Sensors and Cameras
Infrared (IR) image sensors demand extremely precise and reliable hybridization techniques. Among these, indium bump bonding has become a key process for achieving the required electrical and mechanical interconnections between the photodetector array and the read-out integrated circuit (ROIC). This presentation explores the critical role of bonding in IR sensor performance, highlighting the specific challenges of indium bump hybridization, including material sensitivity, sub-micron alignment, parallelism, and thermal management. We will present SET's high-precision flip-chip bonding solutions, designed to meet the stringent requirements of IR sensor manufacturers. Using real case studies, we will demonstrate how controlled Z-axis motion, low-force bonding, and advanced alignment technologies contribute to yield, reliability, and long-term stability in IR sensor assemblies.
 
François Chabrerie | Sales Manager Asia, SET Corporation
Hybrid CNN-Classical Algorithm for On-Sensor Demosaic
Demosaicing is a critical step in image sensors, reconstructing full-color images from the partial color samples of a Color Filter Array (CFA). Convolutional Neural Networks (CNNs) deliver excellent quality but are resource-intensive and often exceed sensor hardware limits. We present a novel hybrid architecture that makes CNN-based demosaicing practical for on-sensor hardware. A compact CNN handles complex patches, while classical interpolation processes the rest. Focusing the CNN on challenging regions reduces model size (~4.5 K parameters) and power. The CNN generates adaptive filters aligned with the CFA; patch complexity is detected by a hardware-friendly decision tree using simple features such as DCT coefficients. The CNN and decision tree are trained jointly. On proprietary 4 × 4 Bayer data, our hybrid model outperforms classical methods while staying far below the cost of full-CNN approaches—offering a practical on-sensor ISP solution.

Bullets:
  • On-Sensor ISP CNN Motivation: making image signal processing CNNs run inside the sensor; we target demosaicing as a first high-impact block.
  • Green-Channel Demosaicing: demosaicing reconstructs missing CFA colors; the luminance-rich green plane is estimated first and then guides U-V chroma estimation.
  • Hybrid Strategy: a decision tree routes simple patches to classical filters and complex patches to a compact CNN, combining low cost with deep-learning quality.
  • Dataset: proprietary 4 × 4 Bayer RAW frames with U-Net-generated ground truth that preserves sensor noise and optics.
  • Decision-Tree Classification: a 10-level decision tree uses luminance, edge strength, and low-frequency DCT features to label each patch as simple or difficult.
  • CNN Architecture: a CFA-aligned network that fits hardware constraints, shares features across subsequent activations, and generates per-pixel weights to blend learned filters.
  • Iterative CNN-Classifier Training: repeatedly refine the classification and retrain the CNN so it runs on only 20-25% of patches, focusing where it matters.
  • Results: sharper symbols, edges, and corners; dot and color artifacts removed; 8% lower L1 error than classical methods while fitting in ≈1.6 M gates.
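As an illustration only, the routing idea described in the abstract can be sketched as follows; the complexity measure, threshold, and placeholder demosaic functions are hypothetical stand-ins for the jointly trained decision tree and compact CNN, not Samsung's implementation:

```python
import numpy as np

def patch_complexity(patch):
    """Cheap, hardware-friendly complexity score: high-frequency energy,
    approximated here by mean gradient magnitude (a stand-in for the
    DCT-based features described in the talk)."""
    gy, gx = np.gradient(patch.astype(float))
    return float(np.mean(np.abs(gx) + np.abs(gy)))

def classical_demosaic(patch):
    """Placeholder for the cheap classical interpolation path."""
    return patch  # a real version would interpolate the missing CFA samples

def cnn_demosaic(patch):
    """Placeholder for the compact CNN path (~4.5k parameters in the talk)."""
    return patch

def hybrid_demosaic(patch, threshold=8.0):
    """Route a CFA patch: simple patches take the classical path, complex
    ones the CNN path. The fixed threshold is illustrative; the talk uses
    a 10-level decision tree trained jointly with the CNN instead."""
    if patch_complexity(patch) < threshold:
        return classical_demosaic(patch), "classical"
    return cnn_demosaic(patch), "cnn"

flat = np.full((8, 8), 100.0)            # uniform patch -> routed classical
edge = np.tile([0.0, 255.0], (8, 4))     # strong vertical edges -> routed to CNN
print(hybrid_demosaic(flat)[1], hybrid_demosaic(edge)[1])
```

The design point is that the expensive network touches only the minority of patches (20-25% per the abstract) where classical interpolation visibly fails, which is what keeps the gate count and power within on-sensor limits.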
 
Oren Girshkin | Algorithms and Computer Vision Engineer, Samsung Electronics
Smart RGB-IR Sensor: Achieving Natural Color and Intelligent IR Noise Control
RGB-IR sensors have long been studied, but due to image quality issues such as limited color accuracy, edge artifacts, and difficulties in HDR scene processing, they have not been able to replace existing RGB sensors in human-viewed applications. These technical obstacles, in particular noise due to color distortion and infrared interference, have long limited their use in high-fidelity imaging.

However, the purpose of imaging is rapidly evolving. Today, images are not just viewed by the human eye, but are increasingly processed by machines such as AI systems, robots, and smart devices. In this new paradigm, IR bands are no longer a nuisance to filter out, but rather a valuable source of information. RGB-IR sensors can provide sufficient resolution for sensing tasks for machines, even with limited pixel allocation, while also providing new possibilities for intelligent scene analysis.

In this presentation, we introduce two key technologies that can unlock the potential of RGB-IR sensors. The first is the Inverse Color Response Transform (ICRT), an advanced color calibration algorithm that restores natural RGB tones by precisely calibrating infrared signal interference. The second is Color Noise Adaptation (CNA) technology, which dynamically adjusts the color gain and noise ratio according to the signal balance between RGB and IR. This allows the sensor to enhance RGB color restoration and preserve meaningful IR information while effectively suppressing noise.

By simultaneously addressing the needs of human viewing and machine recognition, the RGB-IR technologies presented here are applied in our company's products and have been adopted in applications such as in-cabin monitoring systems (OMS), security, face recognition, and smart home appliances such as vacuum cleaners.
 
Young Woong Kim | CTO, PIXELPLUS
Breaking the Color Barrier: How Color Splitting Nanophotonics Are Transforming Image Sensors Beyond Bayer Filters
For over five decades, the core mechanism behind color photography has remained largely unchanged. At the heart of nearly every CMOS image sensor lies the Bayer color filter array, a grid of red, green, and blue filters that allows silicon-based sensors to interpret color. In this mosaic, each pixel is covered by a red, green, or blue filter, so only about one-third of the incoming light contributes to any given pixel's signal; roughly 70% of photons are discarded before ever reaching the photodetector. This inefficiency leads to lower signal-to-noise ratios (SNR), a perpetual Achilles' heel for smartphone and compact camera users. Now a new frontier in imaging is emerging: nanophotonic color-splitting technology that guides, rather than filters, light into sub-diffraction-limited waveguides. This approach eliminates the inefficiencies of Bayer-based systems and unlocks a new era of ultra-compact, high-resolution, highly light-sensitive cameras across smartphones, XR, industrial inspection, and medical diagnostics. This session will discuss an innovation that replaces the Bayer filter entirely with a nanophotonic waveguide layer that splits light by wavelength and directs it to the appropriate pixel. Rather than absorbing unwanted wavelengths, the system uses vertical waveguides designed to separate colors, guiding photons with minimal loss and maximal resolution.
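The photon-loss argument above can be checked with back-of-envelope arithmetic. A short sketch, assuming an idealized filter that transmits one-third of the incident spectrum and shot-noise-limited imaging (both simplifications for illustration, not eyeo's figures):

```python
import math

# Idealized Bayer trade-off: each color filter transmits roughly 1/3 of
# the incident spectrum (real filter curves overlap and also absorb some
# in-band light, which is why ~70% loss is often quoted).
transmitted = 1 / 3
discarded = 1 - transmitted
print(f"photons discarded: ~{discarded:.0%}")

# In the shot-noise limit, SNR scales with sqrt(collected photons), so the
# SNR penalty of filtering versus a lossless color splitter is:
penalty = math.sqrt(1 / transmitted)
print(f"SNR penalty: {penalty:.2f}x ({20 * math.log10(penalty):.1f} dB)")
```

Recovering that factor of roughly 1.7× in SNR (about 4.8 dB) without enlarging the pixel is the core promise of color splitting over color filtering.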
 
Jeroen Hoet | CEO & Co-founder, eyeo
Is the Perfect CMOS Image Sensor Available Today?
Looking at the performance of today's CMOS image sensors (quantum efficiency over 80%, dynamic range over 80 dB, temporal noise below 1 electron, fixed-pattern noise below visibility, parasitic light sensitivity below -100 dB, etc.), one might conclude that perfection is within reach. Does that mean no new developments can be expected? This talk will explain that, for some characteristics, the ideal image sensor is indeed almost in our hands, but that this is certainly not true for all performance parameters, not even for all of those mentioned above.

In some cases we are even fooling ourselves, and the perfect image sensor is much further away than we think. For instance, quantum efficiency: 80% can only be reached for black-and-white devices. In this talk a few performance parameters will be analysed in more detail and suggestions will be made for further improvements. The good news: there are still great challenges ahead for imaging engineers, so we are not going to lose our jobs just yet!
 
Albert Theuwissen | Founder, Harvest Imaging
End of conference