
Cayden Pierce
Nov 6, 2018

Progress on Virtual Memory Assistant


This is a continuation of my "Virtual Memory Assistant" post in the "Materials and Design" forum.

From my experience and my studies in psychology, it seems evident to me that a central theme in the human mind is other people. That is, while you may not remember the time, date, weather, or your mood when an event took place, you will undoubtedly remember the people you were with.

Thus, to create a virtual memory assistant, I have begun with people. Using the Python face_recognition library (https://github.com/ageitgey/face_recognition), I have begun developing the capability to store a database of human faces that can be recognized during use. I do not have access to public face datasets (something I'm sure will very soon become widely available), and data protection laws prevent me from taking profile pictures from social media. Therefore, I am implementing the ability for the EyeTap to learn who you know as it is used. This works by saving a still image of everyone you communicate with during the day. Later, the user tags each new image with the name of the friend or coworker they were interacting with. Thereafter, the EyeTap will always recognize that individual immediately. Everything I have described here is already developed. It is not much, but I have not been working on this very long. I will upload the code to GitHub and post the repo for this project soon.
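To make the tag-then-recognize loop concrete, here is a minimal sketch using the face_recognition API. The on-disk database format and file names are assumptions for illustration; the actual code I post may differ.

```python
import pickle
import face_recognition

# Tagged database built up from the user's daily tagging sessions:
# a dict mapping each person's name to a saved face encoding.
# (The pickle format and file names here are illustrative.)
with open("known_faces.pkl", "rb") as f:
    known = pickle.load(f)  # e.g. {"Alice": encoding, "Bob": encoding}

names = list(known.keys())
encodings = list(known.values())

# A still frame captured by the EyeTap camera during a conversation.
frame = face_recognition.load_image_file("frame.jpg")

# Encode every face found in the frame and compare against the database.
for encoding in face_recognition.face_encodings(frame):
    matches = face_recognition.compare_faces(encodings, encoding, tolerance=0.6)
    if True in matches:
        print("Recognized:", names[matches.index(True)])
    else:
        # Unknown face: save the frame so the user can tag it later.
        print("New face -- queued for end-of-day tagging")
```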

 

Once human beings can be recognized, the EyeTap has a framework for memory classification. Imagine accessing a database of every single conversation you've ever had with a person (I am looking into conversation transcription using Google's speech APIs). Imagine, at the end of each day, being presented with an automatically summarized description of the day's events and the important points in each conversation. Imagine the EyeTap briefly reminding you of small facts and conversation tidbits your biological brain has forgotten, WHILST in conversation. This is truly a development that would be an expansion of the human mind, exactly as Raymond Kurzweil, and many others, describe as the future of human symbiosis with technology.
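For the transcription piece, below is a minimal sketch of what a call to Google's Cloud Speech-to-Text Python client might look like. I have not built this yet, so the client version, audio format, and parameters are all assumptions.

```python
from google.cloud import speech

client = speech.SpeechClient()

# A short clip recorded by the EyeTap microphone during a conversation.
with open("conversation.wav", "rb") as f:
    audio = speech.RecognitionAudio(content=f.read())

config = speech.RecognitionConfig(
    encoding=speech.RecognitionConfig.AudioEncoding.LINEAR16,
    sample_rate_hertz=16000,
    language_code="en-US",
)

# Synchronous recognition is fine for short clips; long conversations
# would need the streaming or long-running variants instead.
response = client.recognize(config=config, audio=audio)
for result in response.results:
    print(result.alternatives[0].transcript)
```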

New Posts
  • Cayden Pierce
    Mar 30

    Hello everyone! I decided to take a small break from the VMP project (see other posts) to hack something fun for the OpenEyeTap! I like to run and bike, so I have built a utility with a live speedometer. Also in this library is the ability to track your location as you run, so you can later view your route (soon I will overlay this on top of a map, maybe with the Google Maps API?), your speed along the route, and the total distance you traveled. Here's the GitHub repo: https://github.com/CaydenPierce/OpenEyeTapJogWear Give it a pull and try it out. This is great for beginners, as it is simple and was made in an hour or two. Future features to add would be live directions (Google Maps API, again) and a live counter of distance traveled. This is a simple feature, but imagine the possibilities when a community of users starts building features of all kinds; then we will have functionality for all types of activities. A sketch of the distance and speed arithmetic appears at the end of this thread.
  • Cayden Pierce
    Nov 7

    Steve Mann told me that a great use of the OpenEyeTap would be a device to help those afflicted with prosopagnosia (an inability to recognize faces) recognize the people around them. This is a first step in the direction of the "Virtual Memory Assistant" that I have discussed previously. Here it is! https://github.com/CaydenPierce/OpenEyeTapVirtualMemory Follow the install instructions to get things working. Let me know if there are any issues. I will put up an SD image shortly to make things easier (building the dlib C++ lib takes a full day, haha).
  • TwoBit
    2 days ago

    Hello, I would like to start by saying: I'm very excited to see this project getting off the ground, and with an excellent start in both hardware and software. While I lack the experience needed for tasks such as improving the layout of hardware or refining user experience, I hope I can at least contribute to the software end of things with my experience as a compsci student (currently involved in machine learning research).

    I may be mistaken, but one problem I've noticed with the current design is its dependence on a small number of hardware buttons to interface with software running on the OpenEyeTap. This seems unfortunately limiting, as it not only restricts the ease and functionality of many applications, but also makes it near impossible to create a way to switch applications, restricting the device to a single purpose at a time. These concerns were addressed in the UX thread with the proposal of an IMU-based reticle and a Bluetooth microphone. While I think those proposals are excellent and would make for a solid final means of interfacing with the device, they don't work with the current default hardware configuration, are dependent on environmental variables that could hinder use (particularly sound and the potential for user instructions to be drowned out), and could introduce overhead that spreads thin the already limited processing power of the Raspberry Pi Zero, though I'm admittedly unsure of how bad this impact might be.

    Instead, for the current default hardware, has there been any consideration of being able to interface with the EyeTap from another device such as a smartphone? The Pi Zero W is notable for its Bluetooth support, so it would be relatively easy to build a sort of mobile interface to send instructions to start or stop certain applications, or to ease tasks such as connecting the device to WiFi networks. This Bluetooth connectivity could also enable applications running on the EyeTap to use it for their own interfacing purposes, such as software alternatives to hardware buttons. Of course, other methods of connection are also possible, but I want to ensure my solution is wireless and doesn't prevent the device from connecting online, which some existing software depends on.

    These are just some thoughts I had on how to expand the usability of the current design. While it's far from my specialty, I would be personally interested in taking a shot at making an interface like this happen once I've gotten my hands on the hardware. For now, I'd love to hear other people's thoughts on the idea.
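To make the smartphone-control idea above concrete, here is a minimal sketch of an RFCOMM command server the Pi Zero W could run, using the PyBluez library. Nothing like this exists in the project yet; the service name and the newline-delimited command protocol are made up for illustration.

```python
import bluetooth

# Listen on any free RFCOMM channel and advertise the service over SDP
# so a phone app can discover it.
server_sock = bluetooth.BluetoothSocket(bluetooth.RFCOMM)
server_sock.bind(("", bluetooth.PORT_ANY))
server_sock.listen(1)

bluetooth.advertise_service(
    server_sock,
    "EyeTapControl",  # service name is hypothetical
    service_classes=[bluetooth.SERIAL_PORT_CLASS],
    profiles=[bluetooth.SERIAL_PORT_PROFILE],
)

client_sock, address = server_sock.accept()
print("Phone connected:", address)

# Read newline-delimited commands from the phone, e.g. "start jogwear".
try:
    buffer = b""
    while True:
        data = client_sock.recv(1024)
        if not data:
            break
        buffer += data
        while b"\n" in buffer:
            line, buffer = buffer.split(b"\n", 1)
            command = line.decode().strip()
            print("Command received:", command)  # dispatch to an app launcher here
finally:
    client_sock.close()
    server_sock.close()
```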
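And, as promised above, here is a minimal sketch of the distance and speed arithmetic behind the JogWear utility from the Mar 30 post: haversine distance between successive GPS fixes. The hard-coded fixes are placeholders standing in for live GPS readings; the repo is the authoritative implementation.

```python
import math

EARTH_RADIUS_M = 6371000  # mean Earth radius in metres

def haversine(lat1, lon1, lat2, lon2):
    """Great-circle distance in metres between two GPS fixes."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlmb / 2) ** 2
    return 2 * EARTH_RADIUS_M * math.asin(math.sqrt(a))

# (lat, lon, unix_time) fixes; placeholder values for illustration.
fixes = [
    (43.6629, -79.3957, 0.0),
    (43.6633, -79.3951, 10.0),
    (43.6640, -79.3946, 20.0),
]

total = 0.0
for (la1, lo1, t1), (la2, lo2, t2) in zip(fixes, fixes[1:]):
    d = haversine(la1, lo1, la2, lo2)
    total += d
    print(f"speed: {d / (t2 - t1):.2f} m/s")  # instantaneous speed over this segment

print(f"total distance: {total:.1f} m")
```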