Friday, September 30, 2016

ADAS Market Report

Woodside Capital Partners has published a report on the ADAS/autonomous sensing industry. Few slides from the report:

Videantis Takeaways from AutoSens 2016

Videantis has posted its review of the AutoSens 2016 conference. Five things learned from the conference:
  1. Self-driving cars are hard. After the Tesla Autopilot accidents, people are more conservative about self-driving prospects. One analyst mentioned 2040 as a possibility for driverless cars to become available, beyond the planning horizon of most technology companies.
  2. Deep learning is hard. The nets need to be trained with huge amounts of image data, which needs to be properly annotated by hand, with some companies having hundreds of people on staff to perform this task.
  3. Image quality is hard. Image quality will remain a key topic for quite some time.
  4. We need more sensors. OEMs are designing 12 cameras into their cars, and this number continues to go up.
  5. Surround view is replacing rear view. These multi-camera systems are quickly making the standalone rear-view camera a thing of yesteryear.

Thursday, September 29, 2016

First Visual Innovation Award

Arizona State University: The new Visual Innovation Award has been announced at the IEEE International Conference on Image Processing.

Ren Ng, founder of light-field camera company Lytro, has been named the first recipient of the award. Other finalists related to the image sensing field were Achin Bhowmik, Intel VP and lead for the development and deployment of Intel RealSense camera technology; Brendan Iribe, co-founder and CEO of Oculus VR; and Alex Kipman, inventor of the Microsoft Kinect.

Nvidia on Key Issues in Automotive Imaging

Image Sensors Auto US publishes an interview with Nvidia imaging architect Joshua Wise. Few points from the interview:

Q: What are the key standards issues that need to be addressed as components get more complex and diverse?

Joshua Wise: There are two that are top on my list right now. The first that comes to mind is safety and compliance: an important issue in the automotive environment is the ability to self-diagnose issues. In short, the system must “know when it doesn’t know”. As more components enter the ecosystem, there is more opportunity for data to be damaged in transit — and, similarly, more components result in more health data that must be aggregated and transmitted.

The second on my list is the need for a standard in transmitting data with a high dynamic range. There are as many implementations of sending pixels with greater than 12 bits of data as there are vendors right now — perhaps even more! We’ve been working with vendors to come up with solutions that work with both modern and legacy components, but we see an opportunity to unify and standardize here.
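
To give an idea of what such vendor-specific schemes typically look like, here is a minimal sketch of piecewise-linear (PWL) companding of a 20-bit HDR pixel into a 12-bit code word. The function names and knee points below are arbitrary illustrations chosen for this post, not Nvidia's or any particular vendor's actual mapping:

    # Illustrative piecewise-linear (PWL) companding of a 20-bit HDR pixel
    # into a 12-bit code, the general style of scheme used to push
    # high-dynamic-range data through legacy 12-bit links.
    # Knee points are arbitrary examples, not any vendor's specification.

    # Each entry: (input_knee, output_knee, right_shift for this segment)
    PWL_SEGMENTS = [
        (0,       0,    0),   # 0..1023        kept at full resolution
        (1024,    1024, 4),   # 1024..17407    compressed 16:1
        (17408,   2048, 8),   # 17408..279551  compressed 256:1
        (279552,  3072, 10),  # remainder      compressed 1024:1
    ]

    def compress_20_to_12(x: int) -> int:
        """Compand a 20-bit linear pixel value into a 12-bit code."""
        for in_knee, out_knee, shift in reversed(PWL_SEGMENTS):
            if x >= in_knee:
                return min(out_knee + ((x - in_knee) >> shift), 4095)
        raise ValueError("negative pixel value")

    def expand_12_to_20(code: int) -> int:
        """Approximate inverse: expand a 12-bit code back to 20-bit linear."""
        for in_knee, out_knee, shift in reversed(PWL_SEGMENTS):
            if code >= out_knee:
                return in_knee + ((code - out_knee) << shift)
        raise ValueError("negative code")

    # Dark pixels keep full precision; bright pixels are coarsely quantized
    assert compress_20_to_12(1000) == 1000
    assert compress_20_to_12(17408) == 2048
    print(compress_20_to_12((1 << 20) - 1))   # brightest 20-bit value -> 3822

The point of standardization would be to agree on where the knees sit and how they are signaled, so that a receiver can expand the data without per-vendor tuning.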

Wednesday, September 28, 2016

Keynote on Fast Image Sensors

Prof. Wilfried Uhring of the University of Strasbourg and CNRS presented his keynote "High Speed Image Sensors" at the SIGNAL 2016 conference on June 27, 2016 in Lisbon, Portugal. Few slides out of 33:


Talking about the I/O speed, the 25GPixel/s limitation is somewhat obsolete by now. For example, the PCIe 4.0 standard defines up to 32 lanes of 16Gbps each, for an aggregate bandwidth of 512Gbps. Assuming 10b per pixel, one can get 51.2GPixel/s of I/O throughput just by buying PCIe 4.0-compliant IP. And PCIe 4.0 is not the fastest interface these days.
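
For readers who want the arithmetic spelled out, here is the back-of-the-envelope calculation behind that figure, using raw lane rates and ignoring 128b/130b encoding and protocol overhead, which would shave a few percent off:

    # Back-of-the-envelope PCIe 4.0 pixel throughput estimate.
    # Raw lane rates are used; 128b/130b encoding and packet overhead
    # would reduce the usable figure somewhat.

    lanes = 32                  # widest link width defined by the PCIe spec
    lane_rate_gbps = 16         # PCIe 4.0 per-lane signaling rate
    bits_per_pixel = 10         # assumed pixel depth

    aggregate_gbps = lanes * lane_rate_gbps            # 512 Gbps
    pixel_rate_gpix = aggregate_gbps / bits_per_pixel  # 51.2 GPixel/s

    print(f"aggregate bandwidth: {aggregate_gbps} Gbps")
    print(f"throughput at {bits_per_pixel} bpp: {pixel_rate_gpix} GPixel/s")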

Challenges in Time Correlated Single Photon Counting Imagers

C. Bruschini and E. Charbon (EPFL and Delft TU) presented "(Challenges in) Time Correlated Single Photon Counting Imagers" at the SIGNAL 2016 conference held on June 26-30, 2016 in Lisbon, Portugal. Few slides out of 55:
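
For readers new to the topic, here is a minimal simulation of the TCSPC principle: each detected photon is time-stamped relative to the laser pulse, arrival times are accumulated into a histogram over many cycles, and the histogram peak gives the time of flight. All numbers are made up for illustration and do not come from the talk:

    # Minimal sketch of the time-correlated single photon counting (TCSPC)
    # principle. Simulated numbers only; not the imagers from the talk.

    import numpy as np

    rng = np.random.default_rng(0)

    BIN_PS = 50          # histogram bin width, picoseconds
    N_BINS = 200         # covers a 10 ns measurement window
    TRUE_TOF_PS = 3000   # simulated photon time of flight (~45 cm target)

    # Simulate photon timestamps: signal returns plus uniform background
    signal = rng.normal(TRUE_TOF_PS, 100, size=2000)          # returned photons
    background = rng.uniform(0, N_BINS * BIN_PS, size=5000)   # ambient/dark counts
    timestamps = np.concatenate([signal, background])

    # Accumulate the arrival-time histogram over many laser cycles
    hist, edges = np.histogram(timestamps, bins=N_BINS, range=(0, N_BINS * BIN_PS))

    # The histogram peak estimates the time of flight
    tof_estimate_ps = edges[np.argmax(hist)] + BIN_PS / 2
    print(f"estimated time of flight: {tof_estimate_ps:.0f} ps")

The challenges discussed in the talk revolve around doing this per pixel, at array scale, with the associated timing electronics and data rates.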

CEVA Presents its XM6 Embedded Vision Platform

CEVA introduces a new DSP-based platform bringing deep learning and AI capabilities to low-power embedded systems. The new IP platform is centered around the new XM6 imaging and vision DSP:


Tuesday, September 27, 2016

Rambus LSS Platform

Rambus launches the Partners in Open Development 2.0 (POD 2.0) evaluation platform for its lensless image sensors:



Update: Rambus also publishes an eye-tracking use case video:

DJI Mavic Drone Features (Somewhat) Autonomous Flying

SiliconRepublic: The DJI Mavic drone uses 5 vision sensors, a Movidius Myriad 2 vision processor, and 24 processing cores to offer limited flight autonomy:

Low-Power Event-Driven Image Sensor Architectures

"Low-Power Event-driven Image Sensor Architectures" presentation by Laurent Fesquet, Amani Darwish, and Gilles Sicard of University Grenoble Alpes and CEA-LETI reviews different enent-driven approaches. The presentation was prepared for First International Conference on Advances in Signal, Image and Video Processing held on June 26 - 30, 2016 - Lisbon, Portugal. Few slides from the presentation: