© 2018 Open EyeTap Inc.

OpenEyeTap Team
Mar 1, 2018

What does EyeTap mean?


Edited: Mar 7, 2018

The EyeTap principle was developed by Professor Steve Mann at the University of Toronto. As the figure below shows, what is special about this principle is that the camera's vision is perfectly aligned with your own. Let's look in detail at how that works.


From the left, the camera receives light rays reflected by the two-sided mirror (beam splitter) that would otherwise have entered your eye: the camera "sees" exactly what your eye sees. The image from the camera is processed by a computer (a Raspberry Pi Zero); video or still images can be saved, filtered, overlaid with AR objects, and so on. The result is projected through the micro display, the box on the right in the figure. The projected image reflects off the beam splitter again and into your eye. This lets part of your real vision be replaced with a processed image in real time.
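To make that capture → process → display loop concrete, here is a minimal sketch in pure NumPy. The frame, overlay, and brightness factor are made-up stand-ins for illustration, not the actual OpenEyeTap code:

```python
import numpy as np

def process_frame(frame, overlay, brightness=1.2):
    """One pass of the EyeTap loop: take the camera frame (what the eye
    would have seen), adjust it, and composite an AR overlay before the
    result is sent back out through the micro display."""
    out = np.clip(frame.astype(np.float32) * brightness, 0, 255)
    alpha = overlay[..., 3:].astype(np.float32) / 255.0   # overlay opacity
    out = out * (1 - alpha) + overlay[..., :3].astype(np.float32) * alpha
    return out.astype(np.uint8)

# Made-up stand-ins for a camera frame (mid-gray) and an RGBA AR overlay.
frame = np.full((480, 640, 3), 100, dtype=np.uint8)
overlay = np.zeros((480, 640, 4), dtype=np.uint8)
overlay[200:280, 260:380] = (0, 255, 0, 255)   # opaque green AR box
result = process_frame(frame, overlay)
```

On the real device this function would sit inside the camera's capture loop, with the output pushed to the micro display each frame.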


As you may have noticed, our initial design does not have the EyeTap camera looking into the beam splitter to achieve this effect. This is because we are using a spy camera with a stiff flex cable that cannot be bent in certain directions, so it cannot be mounted at the EyeTap camera location.


How to solve this problem?

- Use a USB camera, which needs no flex cable, only bendable wires. However, with a USB camera the frame rate drops from ~70 fps to 10-15 fps, which causes a lot of lag. Check this post: https://www.openeyetap.com/forum-1/hardware/advantages-of-choosing-the-flex-camera-over-the-usb-camera


- Use an NTSC camera. However, image processing would be hard on the Raspberry Pi. We are in the process of exploring this option (designing custom PCBs to let the R-Pi accept this camera input).


- A clever mechanical design that allows the flex spy camera to be mounted at the EyeTap camera location.
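For the first option, the frame-rate drop translates directly into extra per-frame delay, even before any processing happens. A quick back-of-the-envelope sketch:

```python
# Rough per-frame delay implied by each camera's frame rate
# (capture interval only, ignoring processing and display time).
def frame_interval_ms(fps):
    return 1000.0 / fps

flex_ms = frame_interval_ms(70)   # flex spy camera, ~70 fps
usb_ms = frame_interval_ms(15)    # USB camera, ~10-15 fps

print(f"flex: {flex_ms:.1f} ms/frame, USB: {usb_ms:.1f} ms/frame")
# flex: 14.3 ms/frame, USB: 66.7 ms/frame
```

At 15 fps each frame is already up to ~67 ms stale before processing even starts, which is why the USB option feels laggy.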


Mar 2, 2018 (Edited: Mar 2, 2018)

Bummer. What made the eyetap so compelling was the camera capturing and augmenting "Point of Eye" images. I'd advocate for decreased resolution in order to preserve what really is the heart of the eyetap idea. I'm quoting here:


"There are 3 fundamental principles that an augmented reality glass needs to uphold:

  1. Space: the visual content needs to be able to be spatially aligned. This is done by satisfying the collinearity criterion;

  2. Time: the visual content needs to be able to be temporally aligned; feedback delayed is feedback denied;

  3. Tonality: the visual content needs to be tonally aligned (photoquantigraphic alignment). This is what led to the invention of HDR as a way of helping people see. [Quantigraphic camera provides HDR eyesight from Father of AR, Chris Davies, Slashgear, 2012sep12].

The EyeTap is based on a need to satisfy these 3 principles.

For example, the camera should capture PoE ("Point of Eye") images."
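Point 3 (tonal alignment) can be illustrated with a toy exposure-fusion sketch. This is a simplified Mertens-style "well-exposedness" weighting in plain NumPy, not Mann's photoquantigraphic method, and the frames are synthetic:

```python
import numpy as np

def fuse_exposures(frames, sigma=0.2):
    """Blend bracketed exposures, weighting each pixel by how close it
    is to mid-gray (the 'well-exposedness' term of Mertens-style fusion)."""
    stack = np.stack([f.astype(np.float32) / 255.0 for f in frames])
    weights = np.exp(-((stack - 0.5) ** 2) / (2 * sigma ** 2))
    weights /= weights.sum(axis=0, keepdims=True)
    return ((weights * stack).sum(axis=0) * 255).astype(np.uint8)

# Synthetic under-, mid-, and over-exposed frames of the same flat scene.
dark, mid, bright = (np.full((4, 4), v, dtype=np.uint8) for v in (20, 128, 240))
hdr = fuse_exposures([dark, mid, bright])   # values pulled toward mid-gray
```

The well-exposed frame dominates the blend, so blown-out and crushed regions borrow detail from the other exposures.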


This has been touted by Steve Mann as the eyetap's main advantage over other AR approaches. If we lose it, it's not really an eyetap.

OpenEyeTap Team
Mar 5, 2018

This is exactly what we are working on at the moment; it takes more time to get working than a spy camera with built-in Raspberry Pi support. Our ultimate goal is definitely to implement the EyeTap principle, using optics and computer vision to capture PoE images (thank you for the additional quote and explanations)! We are also experimenting with NTSC cameras for near-zero latency. Most of the challenge is finding the right balance: powerful enough to handle the computer vision, yet sleek and affordable for the public.


Meanwhile, we still wanted to share the "maker" version, which focuses on making the whole process of building a smart glass useful to our own lives "easily". What good is it if we keep the good stuff to ourselves? However, you are right: we should push our research forward and release the real EyeTap as soon as possible. We hope the community and makers like yourself can help too!

Cindy - OpenEyeTap
Mar 7, 2018


Hi again!


The spy camera actually has enough length and does not need an extension. I saw your reply and quickly hot-glued the spy camera in the EyeTap camera position. As you can see, the flex must be bent 90 degrees in the third plane, which puts a lot of stress on the bent section and significantly shortens the hardware's life. However, I think I can try designing a housing that guides and molds the flex into a gentle curvature to reduce the stress while still achieving the EyeTap camera angle. It may be bulky, but it might work! This is the best solution for now, other than our ongoing long-term solution, because the other quick fix, the USB camera, avoids the flex issue but has significant lag (dropping from ~70 fps to ~15 fps).


To answer your question: yes, I do believe that you can bring meaningful improvements to the EyeTap, and to AR technology in the bigger picture. Discussing openly here is a big step; I already got the idea for my next design from discussing with you! I cannot guess your skills and interests, but some makers who are interested in mechanical design could well make much better designs than mine. Perhaps a hardware guru can suggest an entirely new solution.

Mar 7, 2018

Yesssss! I made a difference! I'm excited to see what you come up with. I don't mind bulk if I get a true eyetap.


My background lies more in software. I'm no guru coder, but I am a User Experience (UX) expert. Maybe I can lend a hand in that regard.


I'm making AR glasses of my own design (inspired by other makers) but I'll probably put it on hold and focus on the eyetap once I get my hands on it. Looking forward to being part of a vibrant community :?)

Mar 9, 2018

The mechanical arrangement of a camera opposite the beamsplitter is one thing, but my understanding of the eyetap principle is that once video is captured by the camera from the point of view of the eye, then the processed (and possibly tampered-with) video is displayed back to the eye, overlaying what the eye sees 1:1.


That setup is what allows all the really interesting and impressive EyeTap stuff to happen -- like helping people see better by cranking up brightness or contrast or fiddling with colors, etc. It also seems to me that getting things lined up 1:1 is the harder part of it all.
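A toy sketch of that lining-up problem: model the display → eye mapping as a 2D affine transform and fit it by least squares from a few matched calibration points. This is pure NumPy; the point pairs and the pretend misalignment below are made up, not measured from any EyeTap:

```python
import numpy as np

def fit_affine(src, dst):
    """Least-squares 2D affine transform taking src points to dst points.
    src, dst: (N, 2) arrays of matched calibration points."""
    A = np.hstack([src, np.ones((len(src), 1))])   # rows [x, y, 1]
    M, *_ = np.linalg.lstsq(A, dst, rcond=None)    # (3, 2) coefficients
    return M

def map_point(p, M):
    """Apply the fitted transform to one (x, y) point."""
    return np.array([p[0], p[1], 1.0]) @ M

# Made-up calibration pairs: where four display corners appear to land
# in the eye's view (here a pretend 0.9x scale plus a small shift).
display_pts = np.array([[0, 0], [640, 0], [0, 480], [640, 480]], float)
eye_pts = display_pts * 0.9 + np.array([30.0, 20.0])
M = fit_affine(display_pts, eye_pts)
center = map_point([320, 240], M)   # lands at the scaled/shifted center
```

A real system would need at least a full homography (and per-user calibration), but the least-squares fitting idea is the same.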


Has OpenEyetap got anything working in that area? (I know true 1:1 isn't quite possible with a normal camera and display.)

Cindy - OpenEyeTap
Mar 11, 2018

Yes, we have tried a few image-processing applications so far. The most impressive ones are a thermal-camera vision overlay and facial detection + emotion detection using machine-learning algorithms. Neither is fully ready, and both are far too expensive and complicated to replicate for now, but they will be introduced here eventually; they are research projects within the lab. HDR can be implemented, but we have not tried it yet. The Raspberry Pi Zero also holds us back in processing power when we want to run heavier algorithms such as facial recognition + emotion detection. This research is exploring the potential to help people who have difficulty recognizing other people's emotions and feelings.

Mar 12, 2018

Cindy, you're right, the Pi Zero doesn't have the processing power we need for facial recognition. If we moved the Pi down to the hip/pocket via a tether, we could upgrade it to a Pi 3 and benefit from the increased processing power. We're tethered to a battery anyway, so we might as well be tethered to the Pi too.


I suggested this change (and a few more) in this thread: https://www.openeyetap.com/forum-1/mechanical-design/rearranged-eyetap

New Posts
  • cknauth119
    Aug 30

    The kit has been out of stock for several months now. I am wondering whether anyone is still working on the project? If not, I would really appreciate a full parts list with part numbers so that I can find the exact parts to buy.
  • m66m_11
    Apr 2

    Welcome everyone, and congratulations to everyone on this beautiful work. My question is: where can I find beam splitter glass? Can you tell me which sites I can buy from? Greetings
  • DrakonFPV
    Feb 26

    Hey, first post on here after finding the whole project a few days ago. I fly racing quadcopters via FPV (first-person view), i.e. a live feed from a camera on the quad to FPV goggles, so not too different, and I thought I might tell you about the hardware we use, namely micro FPV cameras: super low latency (3-20 ms), connected with plain wires so you can fit them anywhere, and smallish at 19x19 mm, which is pretty tiny for a high-quality feed like these.

    As for optics, we mostly use goggles with two sets of lenses, but there are some cheap ones you can take apart (although please don't use these as goggles, they are cheap for a reason), namely the Eachine EV100s: they have two screens, but the flex goes to a main board that is vaguely face-shaped, so you can DIY something from that. Also, what might be fun is adding a 5.8 GHz video receiver to a tap, so you can pick up feeds from around the place, and as it's analog anyone can tune in: truly open source! Thanks for reading, and I hope to make my own tap soon after thinking about it for the last, I don't know, 5 years? I never knew there were others! Just ask if this needs any clarification or if anyone is interested in the racing part.

    Edit: I found a cheap beam splitter on Banggood: https://www.banggood.com/1Pcs-30301_1mm-50R50T-Optical-Beam-Splitter-Plate-Optical-Laser-Lens-p-1379199.html?rmmds=search&cur_warehouse=CN#jsReviewsWrap Banggood are usually quite good and I've ordered much more expensive stuff from them, so this should be fine. I just thought I'd put this here since I saw you had an issue with expensive beam splitters (it might have been prisms, but I don't know enough about this yet to tell the difference).