Ahead of Image Sensors Asia 2025 we spoke to Luke Liu about his forthcoming presentation: "From Perception to Decision: How LIPS Robotics Vision Platform and Edge AI Are Reshaping Intelligent Robotics and the Sensing Industry"
Luke Liu | CEO at LIPS Corporation
Luke Liu is the founder and CEO of LIPS Corporation. From 2008 to 2010, while working at Foxconn, he conducted research at MIT in speech recognition and signal processing, where he envisioned the transformative role of 3D vision and machine intelligence in automation and AI. He later founded LIPS, focusing on 3D depth cameras and AI software to power industrial automation, logistics, and robotics. Under his leadership, LIPS has secured multiple patents and built a global presence through partnerships with technology leaders such as Ambarella, Intel, NVIDIA, and Cadence. Today, LIPS continues to innovate in 3D vision and edge AI, driving smarter automation as a Robotics Vision Platform & Edge AI Solutions Provider.
Q1: As the inaugural Asian edition of this renowned conference, what aspects of Image Sensors Asia 2025 are you most excited to participate in?
As the first Asian edition of Image Sensors, I’m most excited about three things:
• Regional Innovation
Asia’s diverse markets—from Japan to Southeast Asia—offer unique challenges and opportunities for image sensing. I look forward to engaging with local innovators and understanding how sensing platforms are being adapted across industries.
• Cross-Industry Collaboration
This conference brings together leaders from automotive, robotics, retail, and healthcare. It’s a rare chance to explore how modular sensing and edge AI can scale across verticals.
• Product Showcase
We’ll be unveiling LIPS’ latest Stereo and Time-of-Flight modules—designed for high-precision depth sensing and real-time perception. It’s a great platform to share our vision for scalable, intelligent robotics.
Image Sensors Asia isn’t just a conference—it’s a launchpad for the next wave of sensing innovation, and we’re proud to be part of it.
Q2: Could you give some background on the business of your company?
LIPS is a global leader in modular AI sensing platforms. We design plug-and-play vision modules that combine depth sensing, semantic recognition, and edge AI for real-world deployment. Our technology powers applications across autonomous mobility, industrial automation, and smart retail—helping machines see, understand, and act intelligently.
What sets us apart is our modular architecture: it’s scalable, multilingual, and ready for cross-industry integration. Whether it’s VSLAM for autonomous forklifts or gesture recognition in kiosks, our solutions are built for rapid commercialization.
We work closely with OEMs and system integrators to co-develop sensing systems that are not only technically robust but also market-ready. At LIPS, we don’t just build sensors—we enable smarter ecosystems.
Q3: Could you share some key themes from your upcoming presentation?
In my presentation titled “How LIPS Robotics Vision Platform and Edge AI Are Reshaping Intelligent Robotics and the Sensing Industry,” I’ll focus on four key themes:
1. Modular Vision Architecture
We’ll explore how LIPS’ plug-and-play sensing modules enable scalable deployment across robotics platforms—from autonomous forklifts to collaborative arms—accelerating integration and customization.
2. Edge AI for Real-Time Decisions
I’ll highlight how our edge AI capabilities deliver low-latency, on-device intelligence, allowing robots to perceive and act locally, securely, and efficiently.
3. Cross-Vertical Adaptability
Our platform supports multilingual recognition and context-aware sensing, making it ideal for smart logistics, retail automation, and industrial robotics across global markets.
4. Showcasing Next-Gen Stereo and ToF Modules
We’ll unveil our latest Stereo and Time-of-Flight products, engineered for high-precision depth sensing and robust environmental adaptability—setting new benchmarks for intelligent perception.
This talk is about turning sensing into strategy—making robotics smarter, faster, and ready to scale.
Q4: From your perspective, which emerging technology trends and application breakthroughs this year and next will most impact the image sensors sector?
In 2025 and 2026, I see four key trends reshaping the image sensor sector:
• Backside-Illuminated CMOS
These sensors offer superior low-light performance, critical for robotics in warehouses and outdoor environments.
• Edge AI Integration
Sensors are evolving into intelligent nodes, performing real-time recognition and decision-making directly on-device—reducing latency and boosting autonomy.
• Sensor Fusion
Combining Stereo, ToF, LiDAR, and IMUs enables robust multi-modal perception. LIPS is actively developing hybrid modules to support this shift.
• Next-Gen Modules
We’re showcasing our latest Stereo and Time-of-Flight products, designed for high-precision depth sensing and scalable deployment across industries.
These breakthroughs are turning image sensors into strategic enablers for smarter, faster, and more adaptive systems.
Q5: Asia is a fast-growing market — what do you see for it in the next 5-10 years?
Over the next 5 to 10 years, Asia will lead the global transformation in intelligent sensing and robotics. I see four major drivers:
• Smart Manufacturing
China, Japan, and South Korea are accelerating industrial automation with vision-guided robotics for precision and efficiency.
• Autonomous Mobility
From forklifts to delivery bots, Asian cities are adopting real-time perception systems powered by Stereo and ToF sensors.
• AI-Powered Consumer Devices
Smartphones, wearables, and home devices will increasingly rely on compact image sensors for gesture, face, and environment recognition.
• Localization & Scale
Asia’s diversity demands multilingual, modular sensing platforms. With rapid innovation and massive market scale, the region is not just adopting sensing—it’s defining its future.
LIPS is deeply invested in this momentum, building platforms ready to scale across Asia’s dynamic landscape.