Technology Bigshot Chapter 647: When AI is smarter than humans…


"The most common answer is technology, which is indeed true. Technology is the great achievement accumulated by our human history."

"The current technological development is very fast. This is the direct reason. This is why we humans are so productive now, but we want to explore the ultimate reason in the future."

"We were 250,000 generations apart from our ancestors, during which time we went from picking up rocks on the ground as weapons to being able to use atomic energy to create devastating super bombs, and now we know that such a complex mechanism takes a long time It has evolved over time, but these huge changes depend on small changes in the human brain, the brain of a chimpanzee is not much different from that of a human, but the human has won, we are outside and they are in the zoo!”

"It is therefore concluded that in the future, any significant change in the thinking matrix can make a huge difference in the outcome."

Ren Hong took a sip of water, paused and continued:

"Some of my colleagues believe that we or humans are about to invent technology that will revolutionize human thinking, that is, super artificial intelligence, or super AI or super intelligence."

"The artificial intelligence that we humans now master, vividly speaking, is to input a certain instruction into a box. In this process, we need programmers to convert knowledge into runnable programs. For this purpose, we will establish a set of professional system, php, c and other computer languages.”

"They're blunt, you can't extend their functionality, you basically get what you put in, that's all."

"Although our artificial intelligence technology is developing rapidly and becoming more and more mature, it still has not achieved the same powerful cross-domain compound and comprehensive learning ability as human beings."

"So we are now faced with the question: How long will it take for humans to make artificial intelligence this powerful?"

"Matrix Technology also conducted a questionnaire survey of the world's top artificial intelligence experts to collect their opinions. One of the questions was: In which year do you think humans will create artificial intelligence that reaches human level?"

"We define the AI ​​in this question as having the ability to perform any task as well as an adult. An adult would be good at different jobs, etc., so the ability of the AI ​​would be No longer limited to a single field."

"The middle number of the answer to this question is now, the mid-21st century interval. It seems that it will take some time now, and no one knows the exact time, but I think it should be soon."

"We know that neurons transmit signals in axons at a maximum speed of 100 meters per second, but in computers, signals travel at the speed of light. In addition, there are size constraints, the human brain is only a skull So big, you can't expand it twice, but the computer can be expanded multiple times, it can be as big as a box, a room, or even the volume of a building, which can never be ignored."

"So the super AI may be lurking in it, just as atomic energy lurked in history until it was awakened in 1945."

"And in this century, human beings may awaken the wisdom of super AI, and we will see a big explosion of wisdom. When people are thinking about what is smart and what is stupid, especially when we talk about power and power time."

"For example, chimpanzees are strong, and the same size is equivalent to two healthy males, but the key between the two depends more on what humans can do than what chimpanzees can do."

"So, when super AI appears, the fate of mankind may depend on what this super intelligent body wants to do."

"Imagine that superintelligence may be the last invention that humans need to create. Superintelligence is smarter than humans and better at creating than us, and it will do so in a very short period of time, which means that it will be a A shortened future."

"Imagine all the crazy technologies we have ever fantasized about, maybe humans can complete and realize it within a certain period of time, such as ending aging, immortality, colonizing the big universe"

"Such elements that seem to exist only in the sci-fi world but conform to the laws of physics at the same time, super-intelligence has a way to develop these things faster and more efficiently than humans, we humans need 1000 years to complete an invention, Super AI may only take 1 hour, or even less, this is the shortened future."

"If there is a super-intelligent body with such mature technology, its power will be unimaginable for human beings. Usually, it can get whatever it wants. The future of our mankind will be dominated by this super AI is dominated by the preferences.”

"Then the question is, what are its preferences?"

"This problem is very difficult and serious, and to make progress in this field, for example, one way of thinking that we must avoid the personification of super AI, blocking or sparse, has a taste of opinion."

"The question is ironic because every news story about the future of artificial intelligence or a topic related to it, including our ongoing topic, is likely to feature a poster of the Hollywood sci-fi movie Terminator in tomorrow's news As a label, bots against humans (shrugs, giggles off the court)."

"So, I personally think we should express this issue in a more abstract way, rather than the narrative of Hollywood movies where robots stand up against humans, wars, etc. This is too one-sided."

"We should think of super AI abstraction as an optimization process, like a programmer's optimization of a program, such a process."

"Super AI or superintelligence is a very powerful optimization process, it is very good at using resources to achieve the ultimate goal, which means that there is no certainty between having high intelligence and having a goal that is useful to humans contact.”

"If it's not easy to understand, here are a few examples: If the task we assign to artificial intelligence is to make people laugh, robots such as our current home robot assistants may perform hilarious acts to make people laugh. , which is typical of weak AI behavior."

"And when the artificial intelligence assigned to the task is a super-intelligence, super-ai, it will realize that there is a better way to achieve this effect or complete the task: it may control the world, and in All humans have electrodes inserted into their facial muscles to keep humans laughing."

"For example, the task of this super AI is to protect the safety of the master, then it will choose a better way to deal with it. It will imprison the master at home and not let him go out to better protect the safety of the master. At home, it may be It is still dangerous, it will also take into account various factors that may threaten and lead to the failure of the mission, and erase them one by one, eliminating all factors that are malicious to the master, and even control the world, all of which are for the sake of The mission does not fail, it will make the most extreme optimization choices and put them into action to achieve the purpose of mission completion.”

"For another example, suppose we give this super AI the task of solving an extremely difficult mathematical problem, it will realize that there is a more effective way to complete the task, that is to put the whole world, the whole earth and even more The exaggerated scale becomes a super-large computer, so that its computing power is more powerful, and it is easier to complete the task. And it will realize that this method will not be approved by us, humans will stop it, and humans are in this The potential threat in the model, for which it will solve all obstacles for the ultimate goal, including human beings, any affairs, such as planning some subordinate plans to eliminate human beings and so on."

"Of course, these are exaggerated descriptions, and we can't go so far as to encounter this kind of thing, but the point represented by the above three exaggerated examples is very important, namely: if you create a very For a powerful optimizer to achieve the maximization goal, you have to make sure that you mean the goal and include everything you care about to be precise. If you create a powerful optimization process and give it a false or imprecise goals, the consequences may be like the example above.”

"Someone might say that if a 'computer' starts putting electrodes on people's faces, we can turn off the computer. Actually, it's definitely not an easy thing to do, if we're very dependent on the system, like The Internet we rely on, do you know where the Internet switch is?"

"So there must be a reason, we humans are smart enough to meet threats and try to avoid them, and the same goes for a super AI that's smarter than us, it's just going to do better than us."

"On this issue, we should not be completely confident that we are in control."

"Then a simplistic expression of this problem, such as we put artificial intelligence into a small box to create a safe software environment, such as a virtual reality simulator from which it cannot escape."

"But are we really confident and confident that it can't possibly find a loophole, a loophole that would allow him to escape?"

"Even we human hackers find network vulnerabilities practically every minute."

"I might say, I'm not very confident in making sure that the super AI will find the loophole and get away. So we decided to disconnect the internet to create a gap insulation, but I have to reiterate that a human hacker can do it once The next step is social engineering to bridge this gap.”

"Like now, I'm sure an employee here at some point asked him to hand over his account details, either to someone in the computer information department or something else. If you It's this artificial intelligence, conceivably using electrodes wound intricately around your body to create a radio wave to communicate."

" Or you can pretend something went wrong. At this point, the programmers will open you up to see what went wrong, they figure out the source code, and you can take control in the process. Or You can come up with a very tempting technological blueprint, and when we implement it, there will be some secret side effects of artificial intelligence that you have planned to use to achieve your obscure purposes, etc. There are countless examples.”

"So, any attempt to control a super-ai is extremely ridiculous, we can't be overly confident that we can control a super-intelligence forever, it will break free one day, and then Well, would it be a benevolent god?"

"I personally think it's an inevitable problem for artificial intelligence to become personified, so I think we need to understand that if, we create super AI, even if it is not constrained by us. It should still be It's harmless to us, it should be on our side, and it should have the same values ​​as ours."

"So are you optimistic that this problem can be solved effectively?"

"We don't need to write down all the things we care about for the super AI, or even turn these things into computer language, because this is a task that will never be done. Rather, the artificial intelligence we create uses its own The wisdom to learn our values ​​can motivate it to pursue our values, or to do things we would approve of, to solve valuable problems.”

"It's not impossible, it's possible, and the results can benefit humanity a lot, but it won't happen automatically, its values ​​need to be guided."

"The initial conditions of the Big Bang of wisdom need to be correctly established from the most primitive stage."

"If we want nothing to be deviated from our expectations, AI's values ​​and ours complement each other not only in familiar situations, such as when we can easily check its behavior, but also in In the unprecedented situation that all artificial intelligence may encounter, in a future without boundaries, and our values ​​are still complementary, there are also many esoteric problems to be solved: such as how it makes decisions, how to solve logical uncertainty and many similar questions, etc."

"The task may seem difficult, but it's not as difficult as creating a superintelligence, isn't it?"

"It's still quite difficult indeed (the laughter spreads throughout the audience again)!"

"What we're worried about, if creating a super AI is really a big challenge, creating a safe super AI is an even bigger challenge, the risk is that if you solve the first puzzle, you won't be able to solve the second Two security issues, so I think we should come up with solutions up front that don't deviate from our values, so we can use it when we need it."

"Right now, maybe we can't address the second security issue, because there are factors that you need to understand, details that you need to apply to that actual architecture to implement effectively."

"If we can solve this problem, when we enter the era of real superintelligence, it will be more smooth, which is very worthwhile for us."

"And I can imagine that if all goes well, hundreds, thousands, or millions of years from now, when our descendants head our century, they'll probably be the most important ancestors, what our generation did. It is the best decision to make.”

"Thank you!"

(To be continued.)
