The EyeTap principle was developed by Professor Steve Mann at the University of Toronto. As the figure below suggests, what is special about this principle is that the camera's view is perfectly aligned with your eye's view. Let's see in detail how that works.
From the left, the camera receives light rays reflected by the two-sided mirror (beam splitter) that would otherwise enter your eye for you to "see". In other words, the camera sees exactly what your eye sees. The image from the camera is processed by a computer (a Raspberry Pi Zero): video or stills can be saved, run through image-processing filters, overlaid with AR objects, and so on. The result is projected through the micro display, the box on the right in the figure. The projected image reflects off the beam splitter again and into your eye. This lets part of your real vision be replaced with a processed image in real time.
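To make the capture → process → display loop concrete, here is a minimal sketch of that pipeline in Python with OpenCV. The camera index, the overlay text, and the desktop window standing in for the display are all assumptions for illustration; on the actual EyeTap the processed frame would be routed to the micro display instead of a window.

```python
import cv2

# Open the camera (index 0 is an assumption; on the EyeTap this would be
# the spy camera aligned with the beam splitter).
cap = cv2.VideoCapture(0)
if not cap.isOpened():
    raise RuntimeError("Could not open camera")

while True:
    ok, frame = cap.read()  # this frame is what the eye would have seen
    if not ok:
        break

    # Example processing step: overlay a simple AR annotation.
    # Real filters / AR compositing would go here.
    cv2.putText(frame, "EyeTap overlay", (10, 30),
                cv2.FONT_HERSHEY_SIMPLEX, 1.0, (0, 255, 0), 2)

    # Send the processed frame to the display. Here it goes to a desktop
    # window; on the EyeTap it would drive the micro display, whose image
    # bounces off the beam splitter back into the eye.
    cv2.imshow("EyeTap view", frame)
    if cv2.waitKey(1) & 0xFF == ord("q"):
        break

cap.release()
cv2.destroyAllWindows()
```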
As you may have noticed, our initial design does not have the camera looking into the beam splitter for this smart effect. This is because we are using a spy camera with a hard flex cable that cannot be bent in certain directions, so it cannot be mounted at the EyeTap camera location.
How to solve this problem?
- Use a USB camera, which needs no flex cable, just bendable wires. However, with a USB camera the frame rate drops from 70 fps to 10~15 fps, which causes a lot of lag (see the measurement sketch after this list). Check this post: https://www.openeyetap.com/forum-1/hardware/advantages-of-choosing-the-flex-camera-over-the-usb-camera
- Use an NTSC camera. However, image processing would be hard on the R-Pi. We are exploring this option (designing custom PCB boards to allow the R-Pi to accept this camera input).
- A neater mechanical design that allows the flex spy camera to be mounted at the EyeTap camera location.
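To check how much frame rate a given camera actually delivers, here is a small sketch that measures the capture rate with OpenCV. The camera index and sample count are assumptions for illustration; running it against a flex camera and a USB camera should make the 70 fps vs. 10~15 fps difference visible.

```python
import time
import cv2

def measure_fps(camera_index=0, num_frames=120):
    """Grab num_frames frames and report the average capture rate."""
    cap = cv2.VideoCapture(camera_index)
    if not cap.isOpened():
        raise RuntimeError(f"Could not open camera {camera_index}")

    # Warm up: the first few frames are often slow while the driver settles.
    for _ in range(10):
        cap.read()

    start = time.time()
    captured = 0
    for _ in range(num_frames):
        ok, _ = cap.read()
        if ok:
            captured += 1
    elapsed = time.time() - start

    cap.release()
    return captured / elapsed

if __name__ == "__main__":
    # Compare cameras by running this once per device (e.g. index 0 vs. 1).
    print(f"Measured {measure_fps(0):.1f} fps")
```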