Chapter 362: Contradictions, Controversies, Worries
Whether in strong light or in a dark environment, the transparency of the screen must not be compromised in a way that obstructs the wearer's field of vision. This requires the transparent screen to adjust the display intensity of its image according to the surroundings.
Boosting the display inevitably reduces the screen's transparency and thus the wearer's field of vision, while lowering the display intensity degrades picture quality and the viewing experience.
This is an inherently contradictory problem, and solving it means adapting to circumstances: deciding, for each usage scenario, when to boost the display and when to dial it down. That cannot depend on manual control alone; the system must adjust intelligently and automatically according to the usage and wearing environment.
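The automatic adjustment described above amounts to a mapping from ambient light to display brightness. A minimal sketch in Python follows; the lux thresholds, the nits range, and the logarithmic curve are all illustrative assumptions, not real product values.

```python
import math

def display_intensity(ambient_lux, min_nits=80, max_nits=1200):
    """Map ambient illuminance (lux) to display brightness (nits).

    Brighter surroundings need a brighter overlay to stay legible;
    dim surroundings need a dimmer one to preserve transparency.
    All constants here are illustrative, not calibrated values.
    """
    dark, bright = 10.0, 10000.0  # assumed indoor-dark / direct-sun bounds
    if ambient_lux <= dark:
        return min_nits
    if ambient_lux >= bright:
        return max_nits
    # log-scale interpolation: brightness perception is roughly logarithmic
    frac = (math.log10(ambient_lux) - math.log10(dark)) / (
        math.log10(bright) - math.log10(dark))
    return min_nits + frac * (max_nits - min_nits)
```

In a real device this curve would be tuned per display panel and smoothed over time so the overlay does not flicker as the wearer moves between light and shadow.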
Beyond the display, there is the problem of information and data processing, which again divides into hardware and software.
First, on the hardware side, AR glasses differ from VR headsets. Because they are used in different environments and scenarios, AR glasses must be worn for long stretches and adapt to a variety of surroundings, so their volume and weight must be kept as low as possible.
The ideal is an ordinary pair of glasses, or something not much bigger or heavier; anything too big and too heavy spoils the wearing experience.
Here lies another paradox: packing a large amount of hardware into something as light and small as possible places extremely high demands on the integration of the whole device.
The common approach at present is to build these components into the frame and the temples on both sides of the glasses, but even so the result is still bulky and inconvenient to wear.
Because of the size and weight limits, the hardware cannot be very powerful, which in turn greatly restricts the system's computing and processing capability. How to improve the system's ability to handle information and data is another hard problem the R&D team must solve.
With the spread of 5G technology, high-speed transmission of information and data is no longer an obstacle. But receiving and processing that mass of information in a timely manner is still a very tricky problem.
In a simple environment this is manageable; in a complex environment it becomes a real problem.
Suppose you are walking along a bustling intersection where all the surrounding buildings, billboards, and even street facilities carry AR annotation features. Your AR glasses must then receive a large amount of AR data at once and display it on the screen simultaneously, which places great demands on the processor and the system.
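One common mitigation for this flood of data is simple triage: render only the nearest, highest-priority annotations each frame. A minimal Python sketch, with a hypothetical tuple format and illustrative limits:

```python
def select_annotations(annotations, max_render=8, max_distance=50.0):
    """Pick which AR annotations to actually render this frame.

    annotations: list of (label, distance_m, priority) tuples (hypothetical
    format). Keep only nearby items, prefer higher priority then proximity,
    and cap the count so the display is never flooded on a busy street.
    """
    nearby = [a for a in annotations if a[1] <= max_distance]
    nearby.sort(key=lambda a: (-a[2], a[1]))  # priority desc, distance asc
    return nearby[:max_render]
```

A real system would also cull against the wearer's field of view and hand off heavy processing to a paired phone or the cloud, but the budget-capping idea is the same.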
The last challenge lies in the interaction system. VR can be controlled with wearable glove sensors or handheld controllers.
AR cannot: because it must adapt to so many environments and scenarios, it needs a simpler and more direct set of methods.
At present there are three conceivable approaches. The first is eye-tracking control technology.
An eye-capture sensor tracks the eyeball's rotation, blinks, and point of focus in real time for interactive control. The technology already exists and is well used on many devices.
In general it is paired with head-motion sensors. When you look up, the screen scrolls up; when you look down, it scrolls down; when you look left or right, it scrolls accordingly.
Blinking performs operations such as confirmation: one blink for OK, two blinks for undo, and so on, roughly equivalent to the left and right mouse buttons.
The eye's point of focus, meanwhile, corresponds to the mouse cursor: wherever you look is where the pointer sits, as flexible as a cursor you slide around.
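Taken together, the eye-tracking rules above amount to a small event-to-command mapping. A minimal Python sketch, with illustrative thresholds:

```python
def eye_event_to_command(gaze_dx, gaze_dy, blink_count):
    """Translate one frame of eye-tracker data into a UI command.

    gaze_dx, gaze_dy: gaze displacement from screen centre, in degrees.
    blink_count: deliberate blinks detected in the gesture window.
    All thresholds and command names are illustrative.
    """
    if blink_count == 1:
        return "confirm"      # one blink acts as the left mouse button
    if blink_count == 2:
        return "undo"         # two blinks act as the right mouse button
    DEADZONE = 5.0            # ignore tiny fixational eye movements
    if abs(gaze_dy) >= abs(gaze_dx):
        if gaze_dy > DEADZONE:
            return "scroll_up"
        if gaze_dy < -DEADZONE:
            return "scroll_down"
    else:
        if gaze_dx > DEADZONE:
            return "scroll_right"
        if gaze_dx < -DEADZONE:
            return "scroll_left"
    return "move_cursor"      # the focus point just moves the pointer
```

The deadzone matters in practice: human eyes never hold perfectly still, so small jitters must be filtered out before they become scroll commands.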
The second approach is gesture control, using sensors to capture the movement and changes of hand gestures for interactive control.
For example, swipe your hand up or down and the screen scrolls up or down, and likewise left and right. Dragging with a finger can move the on-screen position or zoom the view in and out; a finger tap confirms, a wave of the hand undoes, and so on.
Gesture-recognition control is also developing rapidly, but recognizing gestures in fast motion remains difficult. The sensors must capture and identify gestures accurately, and the processor must convert them quickly and reliably into the corresponding operation instructions.
Another issue is that everyone's gestures differ, and even the same person's gestures vary from one occasion to the next. Even a single gesture will change somewhat across different times, environments, and scenarios.
This complicates capture and recognition, so the system needs good fault tolerance.
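Such fault tolerance is often achieved by template matching with a distance threshold: the gesture path is resampled and normalized so that size and small variations cancel out, then compared against stored templates. A minimal Python sketch (the point format, resampling count, and tolerance value are all illustrative):

```python
import math

def _resample(path, n=16):
    """Resample a point path to n points evenly spaced by arc length."""
    d = [0.0]
    for (x0, y0), (x1, y1) in zip(path, path[1:]):
        d.append(d[-1] + math.hypot(x1 - x0, y1 - y0))
    total = d[-1] or 1.0
    out, j = [], 0
    for i in range(n):
        t = total * i / (n - 1)
        while j < len(d) - 2 and d[j + 1] < t:
            j += 1
        seg = (d[j + 1] - d[j]) or 1.0
        f = (t - d[j]) / seg
        out.append((path[j][0] + f * (path[j + 1][0] - path[j][0]),
                    path[j][1] + f * (path[j + 1][1] - path[j][1])))
    return out

def _normalize(path, n=16):
    """Centre the path and scale it to a unit box, so the same gesture
    drawn larger, smaller, or slightly differently compares closely."""
    pts = _resample(path, n)
    cx = sum(p[0] for p in pts) / n
    cy = sum(p[1] for p in pts) / n
    scale = max(max(abs(p[0] - cx), abs(p[1] - cy)) for p in pts) or 1.0
    return [((x - cx) / scale, (y - cy) / scale) for x, y in pts]

def classify(path, templates, tolerance=0.35):
    """Return the best-matching template name, or None if nothing is close
    enough. `tolerance` is the fault-tolerance knob: looser accepts
    sloppier gestures, tighter rejects more."""
    probe = _normalize(path)
    best, best_d = None, float("inf")
    for name, tpl in templates.items():
        ref = _normalize(tpl)
        dist = sum(math.hypot(px - rx, py - ry)
                   for (px, py), (rx, ry) in zip(probe, ref)) / len(probe)
        if dist < best_d:
            best, best_d = name, dist
    return best if best_d <= tolerance else None
```

The normalization step is what absorbs person-to-person and day-to-day variation; the threshold then decides how much remaining sloppiness the system forgives.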
The third mode of interaction looks more like science fiction: the brain-computer control technology that has recently become popular. Put simply, it means controlling operations through thought and imagination.
When we imagine an event, a picture, or an object, the brain waves we emit differ. Brain-computer control uses these distinct brain waves to control and interact with a device.
For example, when your brain imagines the idea of moving forward, it releases a corresponding brain wave; the brain-computer system recognizes that wave and converts it into the electrical-signal instruction that makes the device move forward.
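That recognize-and-convert step can be caricatured as follows: estimate the signal power in a frequency band and threshold it into a command. A real BCI uses trained classifiers over many electrode channels; this Python sketch, with a made-up band and threshold, only illustrates the "brain wave in, instruction out" pipeline.

```python
import math

def band_power(signal, fs, lo, hi):
    """Crude power estimate of `signal` in the [lo, hi] Hz band via a DFT.

    Illustrative only: real systems use proper spectral estimation.
    """
    n = len(signal)
    power = 0.0
    for k in range(1, n // 2):
        freq = k * fs / n
        if lo <= freq <= hi:
            re = sum(signal[t] * math.cos(2 * math.pi * k * t / n)
                     for t in range(n))
            im = sum(signal[t] * math.sin(2 * math.pi * k * t / n)
                     for t in range(n))
            power += (re * re + im * im) / n
    return power

def decode_command(signal, fs=128):
    """Map one EEG window to a command: strong 8-12 Hz activity is read as
    'stop', its absence as 'forward'. Band and threshold are illustrative."""
    return "stop" if band_power(signal, fs, 8, 12) > 50.0 else "forward"
```

In practice the mapping from brain-wave features to commands is learned per user during a calibration session, since no two brains emit quite the same patterns.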
The technology is already applied in a number of fields, including brain-controlled wheelchairs for patients with high paraplegia, who can start and stop the wheelchair with their thoughts.
It has also been used for text input; the input speed is said to reach 70 words per minute, which is remarkably fast.
Although the technology is advancing rapidly and is a hot research area for technology giants around the world, the controversy surrounding it has never stopped, and has only grown more intense.
The core question in the debate is simple: is this technology safe? First, is it safe to use? Will wearing a brain-wave-capturing sensor for long periods damage the brain, affect intelligence or the nervous system, or harm one's health?
Second, since a brain-computer device can read brain waves, it follows that brain waves could also be written in. Internet security is already a grave concern; if hackers mastered the relevant techniques and used brain-computer technology to invade a human brain, couldn't they steal the information and secrets inside it?
Or, more seriously, what if hackers used this channel to plant a virus in someone's brain? Would the brain need a reboot, or an outright format? Or should you install antivirus software in your head and set up a firewall?