Optic nerve signal interception, interpretation, manipulation and reintroduction for human/computer interface for use in augmented reality
I have been doing a lot of thinking lately about human/computer interfaces with respect to augmented reality. There seems to be, at least to me, a lack of serious research into true human/computer interfaces.
Sure, we’ve been able to control simple systems by training ourselves to modify our brainwaves or flexing muscles to control an artificial limb. But this is more us adapting to the interface than us truly being one with the machine.
I think the main problem has been the lack of very detailed research on how our nervous system actually works. More specifically, for this post, the optic nerve. There is book after book describing the anatomy of the retina, optic nerve, optic chiasm, optic tracts, etc. But nowhere have I been able to find a book or paper on how precisely the optic nerve transfers the signal from the retina to the visual cortex.
It’s estimated that there are 1.2 million nerve fibers that make up the optic nerve.
- What are these fibers?
- Does each fiber behave like a wire transferring electrical impulses?
- Do they work together as one large wire?
- Do they work in groups of wires transmitting different information?
- Are the fibers redundant groups transferring the same data in case one is damaged?
- Do we have to tap each fiber?
- Is the data transmission bidirectional?
- Does the eye receive any information from the optic nerve?
- What is the nature of the signal?
- Is it straight bit data?
- Is it a complex modulated signal? Etc.
I can’t seem to find answers to any of these questions. Perhaps I’m not looking hard enough, or perhaps there really hasn’t been any research into it. If I had the resources required to answer these questions, I would very much like to work towards the following goal or thesis.
Is it possible to splice into the optic nerve (interception), feed the signal (for lack of a better term) into a computer, interpret those impulses into raw retinal image data that can be displayed on a computer screen (interpretation), manipulate that raw retinal image data, e.g. superimpose text (manipulation), and inject the altered signal back into the optic nerve for traditional processing by the visual cortex (reintroduction)?
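Purely to make the four stages concrete, here is a toy sketch of that pipeline in Python. Everything in it is invented for illustration: nobody today knows how to decode optic-nerve impulses into an image, so the "one fiber = one pixel" assumption and all function names are hypothetical.

```python
# Hypothetical sketch of the intercept -> interpret -> manipulate ->
# reintroduce pipeline. The "one fiber carries one pixel" model is an
# illustrative assumption, not known neuroscience.

def interpret(fiber_signals, width):
    """Interpretation: pretend each fiber carries one pixel's intensity
    and reshape the flat per-fiber signal into a 2D retinal frame."""
    return [fiber_signals[i:i + width]
            for i in range(0, len(fiber_signals), width)]

def manipulate(frame, overlay_pixels, value=255):
    """Manipulation: superimpose data (e.g. rendered text) by
    overwriting selected pixels in the raw retinal frame."""
    out = [row[:] for row in frame]
    for r, c in overlay_pixels:
        out[r][c] = value
    return out

def reintroduce(frame):
    """Reintroduction: flatten the altered frame back into per-fiber
    signals for injection into the nerve (purely notional)."""
    return [px for row in frame for px in row]

# Toy 4x4 "retinal image" carried on 16 fibers.
signals = list(range(16))
frame = interpret(signals, width=4)
altered = manipulate(frame, overlay_pixels=[(0, 0), (1, 1)])
out = reintroduce(altered)
print(out[0], out[5])  # the two overlaid pixels now read 255
```

The hard parts, of course, are the two steps this sketch waves away entirely: physically tapping the fibers, and knowing what the signals on them actually mean.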
If anyone would like to discuss this stuff further drop me a line.
Here is a project that is on the leading edge of this type of research. However, they are interfacing with receptor cells in the retina via a prosthesis, not going straight to the signal, so a functioning or partially functioning eye is still required. Still great science!
Random additions brought about through discussions of this subject
“As to the actual task of intercepting the optic nerve signal, it seems to my blissful ignorance to be a rather straightforward thing. I’m not sure if anyone has actually been able to accomplish it. If you had to tap all 1.2 million optic nerve fibers I could see it being difficult ;).”
“I have a feeling that many of those nerve fibers are redundant, much like the rest of the brain, and we may only need to intercept a relatively small percentage of them to get the desired result, or at least a close approximation of it.
The technology that I think we should really watch for is nanotechnology. In theory the tiny things could independently search out the nerve fibers and attach themselves to them, requiring a simple injection rather than risky surgery. Some form of radio wave or microwave could then be used to communicate with them, i.e. receiving and sending a video feed. Frankly, this kind of thing could be extrapolated to any part of the brain. For example, you could create a “movie” where the viewer feels everything the character feels, including all the senses, emotion, even thoughts.”
There is also the great work that is being done at the Human Connectome Project.
More specific to this post, here is a dataset from the University of Utah. The Retinal Connectome Mosaics.
Here is another advancement (April 5, 2010):
Researchers in Australia have developed a “wide-view neurostimulator” to help give sight back to the blind. By implanting electrodes in the eye, they’ll allow those with degenerative vision loss to see a pixelated version of the world around them.