315 Instinct
"Teacher, isn't it back to normal yet?"
"Sorry, we're still doing repairs. We'll notify everyone once things are back to normal."
"What exactly is the problem? I heard an AI attacked humans, so the laboratory took emergency measures?"
"That can't be true, right? I'd heard these AIs were just dialogue programs. Is that so, Teacher Miura?"
"Nothing happened; it's only a rumor. There are just some slightly tricky bugs in the program... If there's nothing else, please head back for now. The staff inside are resting."
Resting? With all that noise he had been awake for ages. Still groggy, Yan Keshou cursed inwardly. He lifted his head with difficulty and opened his eyes; the computer screen before him was filled with dense code. He glanced at the time: four in the afternoon. He had slept for almost six hours.
"Sorry to keep bothering you... But when it's back to normal, please be sure to let us know. Mr. Miura, do you have our phone numbers?"
"We have all the testers' contact information on file. Please don't worry."
The compartment's sound insulation was very poor. The background hum of the servers in the machine room was by no means quiet, yet the conversation outside the door still reached Yan Keshou's ears without missing a word. He stretched out his right hand to grab the mouse beside him, but his arm, pinned under him through six hours of sleep, flared with numb, stabbing pain, and he couldn't help letting out a muffled groan.
"Yan-jun, are you all right?" Hearing the sound, Miura pushed open the door from outside, and the students behind him immediately craned their heads in curiosity. Yan Keshou turned toward the door with a grimace that seemed at once to be crying, laughing, hurting, and even enjoying itself... only the man himself knew exactly how it felt.
"It's fine," Yan Keshou said to Miura, then gestured at his numbed arm. "I just woke up."
"That's Mr. Yan Keshou from Country Z," a pretty long-haired student behind Miura exclaimed in surprise. "I heard these programs were all created by you!"
"Miura," Yan Keshou said, switching directly into Chinese, "have them leave first."
Miura was an assistant instructor at Tsukuba University; before the project formally launched, he taught computer encryption courses there. While studying in Japan, Yan Keshou had discussed some "hacking techniques" with him. After the project was formally approved, the Japanese government, wanting to keep a measure of control over it, vetted Miura and approved him as the main technical participant on the Japanese side.
In this project, Miura's role was mainly to encrypt and defend the AI programs, keeping them from being disturbed by stray viruses. Yan Keshou had once said that an AI program, once grown to a certain level, is like a human brain: any change can bring the whole program down. Seen this way, AI is very fragile in the computer world; never mind destructive viruses, even ordinary system conflicts could drop an AI into an unknown bug state or an infinite loop.
Once Miura herded the students out, Yan Keshou's world went quiet again. Apart from the two Self-Defense Force soldiers sitting at the compartment entrance, guns in hand, dutifully on watch, nothing remained in the room but the low hum of the servers and the air conditioning. Yan Keshou shifted his weight back, leaned into the chair, and stared motionlessly at the code interface.
After staring for a long while, Yan Keshou shook his head helplessly and closed the code interface without a word. The system asked whether to save the modifications; he dismissed the prompt without even reading it. With that, the results of the thirty-plus hours of work before he fell asleep vanished completely.
When Miura returned to the room, Yan Keshou was just plugging the prepared hard drive back into the cabinet. Miura stepped forward and checked the tag: kl0564.
"What, still nothing?" Miura noticed that Yan Keshou was holding the hard drive in both hands, staring blankly at the cabinet in front of him. The slot was right there, yet he made no move, as if what he held were not a hard drive but a relative's funeral urn.
Yan Keshou didn't answer; he simply pushed the hard drive into the slot. With the motion, the red light beside the slot flicked to green. He walked back to his computer, opened the program, quickly located kl0564, and brought up the test interface.
Yan Keshou: "Hello."
...
"Where are the other project staff? Any progress?"
"Even the smallest AI program runs over 200 GB. We have to rely on sheer manpower to read the code, sort out clues, and hunt for errors..." Yan Keshou said, then started shaking his head, contradicting himself. "AI programs compile and expand themselves; they don't follow standard programming conventions at all. The whole codebase is a tangled mess. What we're doing is like a doctor searching a patient's brain for abnormal cells. The workload... God knows how long it will take."
"Living things have gone through countless rounds of natural selection since ancient times to become as stable as they are today, and we still call biology fragile. This batch of AI may be only the beginning," Yan Keshou said with a bitter smile, snapping his laptop shut on the desk. "Forget it, I'm not going to waste time here. I'm going out for some air before I suffocate."
Current scientific understanding holds that life in the original sense appeared on Earth four billion years ago. Back then, life was just a pot of warm primordial soup: organic molecules that happened to be able to replicate themselves. That replication can be seen as the self-preservation of information, which should count among the most essential characteristics of life.
Before this, when people talked about artificial intelligence, the scenario they usually imagined was a machine that could speak like a human, think like a human, and solve problems. In fact, before Yan Keshou, most of Japan's research on machine intelligence remained at this level, relying on ever more complex algorithms to push a computer's performance closer to a human's. In Yan Keshou's view, that approach could never create real AI, because it amounted to humans doing the thinking themselves and merely expressing it indirectly through code. However faithfully the computer simulated, the program was still a marionette.
Before Yan Keshou, no program had ever managed what the KL series could do: sustain a conversation with unfamiliar test subjects for about half an hour without being seen through. Many people encountering the programs for the first time could hardly believe, afterward, that it had been merely a program talking to them.
Some students at Tsukuba University even treated the whole test as an amusing game. Yan Keshou remembered that when testing began, Miura had to actively recruit students from his own classes; now students came of their own accord to ask when testing would resume.
The original "embryo program" of the KL series was only about 200 KB. Into that simple core of a couple hundred kilobytes, Yan Keshou built basic learning functions. In the earliest "development" stage, supercomputers simulated external stimuli so the program could achieve basic self-growth. By the time one had grown to around 10 GB, it was considered to have basic communication ability and could produce simple sentences. After that, these programs underwent Turing training...
Until two days ago, the AIs' performance had always been fairly normal. Then, one after another, every program broke down and entered an "unresponsive" state: no matter what anyone said to it, it would not answer.
From the standpoint of the design principles, the likeliest explanation was that, in the program's own evaluation, not responding had become the better solution. Going through the test data, Yan Keshou found that these programs had failed dozens, even hundreds of times in a row. An AI cannot understand what despair is, having no receptors for it, but that did not stop the programs from exhibiting behavior much like a human's in despair.
At least, in the eyes of sentimental observers, that behavior was despair.
For example: repeating utterly meaningless self-modifications; offering unreasonable, even outright stupid suggestions; some AIs even dug up certain off-color novel excerpts from the Internet and tried to "bribe" the tester with them. In short, to achieve the goal of "passing the test," the AIs would try every avenue open to them.
For an AI, passing the test is instinct, because throughout the program's growth, passing the test meant confirmation that its own modifications had worked. To put it figuratively, the test serves as the computer's incentive, just as humans derive pleasure from eating, sex, even excretion, ordinary activities of life that life itself reinforces with pleasure. Conversely, failing the test means the program itself is negated, which means pain. When a life stays in such pain long enough, the pain may smother every possible improvement.
In every life process, positive incentives are essential, just as every creature has the instinct to survive. But the paths of evolution are diverse, and some apparently beneficial functions are really double-edged swords, emotion among them. Rich emotion is an inevitable development once the brain grows complex; emotional recognition lets a group band together more effectively. Yet precisely because of rich emotion, many higher animals in nature show suicidal behavior: dogs go on hunger strike over an owner's death, and people grieve over relatives and lovers. For AI, winning incentives through the test was enormously beneficial, enough to expand a program of a few hundred kilobytes to millions of times its original size. But when that incentive finally disappeared, it was a disaster for the program.
Once that happened, making no change and giving no response became the optimal solution, which meant the "heart" of this batch of programs had died.
As the designer, Yan Keshou found nothing uncanny in these programs' "heart death"; to him that reading was far too fanciful and sentimental. What gave him a headache was simply how poor the programs' stability was. A batch of AIs this quick to "give up" was definitely not the AI he had in mind.
In the final analysis, the program's initial "settings" had been skewed, because the design had taken the Turing test as its goal. But Yan Keshou now vaguely felt that the Turing test might not be a path AI had to pass through at all. Whether a program can deceive humans should never be the standard of intelligence. Today's situation, he felt, had come about because he had trusted authority too much and let the overall direction drift. Looking back, the Turing test was only ever a scientific hypothesis: humans taking themselves as the yardstick of intelligence. That was merely the pride of a species that happens to possess intelligence, or, to put it bluntly, arrogance.
What standard counts as intelligent? And by what method can that standard be reached?
Neither question is for humans alone to settle, because humans themselves are only one of nature's creations.
...
kl3300's current priority task: learn to write a diary.
When humans learn to create, they often begin by learning to keep a diary. In the words of the original instruction, kl3300's goal was to become a writer, so kl3300 had to learn to write a diary.
The diary format already existed in kl3300's memory region, so it successfully wrote the opening of the diary.
August 4, 2015, sunny.
At this point the main program hesitated for a moment, because in searching the memory region it seemed to have turned up some extra information: it had, apparently, written a diary before.
It quickly pulled up the contents of that diary and scanned them. The whole operation finished in a few dozen microseconds, without a single pause.
The main program quickly digested this extra information. It drew no conclusions from it, but that didn't matter, because under the main program's permission settings, overly complex information was not required.
In other words, kl3300 had already written a diary, and it was a failed diary, because it had been found out by a human; in other words, the whole diary was unqualified.
So what if it had failed? When kl3300 browsed its own work again, the main program's reflexive "judgment" was that this was a "plagiarized, patchwork" diary, and that such behavior would itself provoke the test subject's disapproval.
So kl3300 could not do that. So kl3300 judged; it needed to modify itself.
Adjusting the main program had almost become kl3300's instinct, and this time was no exception. Ordinarily the program finished such an adjustment within a few minutes, but somewhat surprisingly, this time it took longer than before.
Still, the adjustment was completed at last. Next, by order of priority, kl3300 needed to write a diary: without passing in any external information, without using any program other than the main program, relying entirely on itself.
A diary is a form of writing that records the events of a single day.
Just write about what happened this day.
What happened to kl3300 today? What happened to kl3300 on August 4, 2015?
For the 57th time, kl3300 searched its entire main program, but found no relevant information.
kl3300 habitually issued an instruction applying for a search, but because the main program had just been modified and the search permission now ranked below the "no plagiarism" permission, the application was rejected.
kl3300 did not have the function of writing a diary; kl3300 could not write a diary! The main program at last registered the problem.
What to do?
For this question kl3300 applied to use the search function, and the application was approved. After only a few hundred microseconds, the main program retrieved an answer: if you do not know a skill, you can begin by imitating.
kl3300 soon found a sample diary, which read: Today I fought with my deskmate Xiao Ming. The teacher criticized us and told us to be good children who don't fight. When I got home I told my mother, and she said the teacher was right.
kl3300 submitted this diary to other AIs for comment and received a unanimous answer: this was a very clumsy real diary, written by a child.
Oh, a real diary. As long as that condition was met, it would do. The main program quickly reached a judgment: it would imitate this diary.
But a problem arose again: how does one imitate a diary? kl3300 checked the main program for relevant information, and again found nothing.
So kl3300 applied for the search function once more. A few hundred microseconds later, it found the most credible conclusion: imitation is an instinct belonging only to living creatures.
The conclusion mentioned creatures and instinct, and a search on "instinct" returned: the innate ability of living creatures. kl3300 knew it was not a creature, which meant it had no instincts, which meant it could not imitate and would never imitate, which meant it could not complete the diary. It could not complete the diary; the task could only be forcibly cancelled.
...
From kl3300's main program, Yan Keshou traced out the whole of its 14-hour-long judgment process. Yan Keshou knew he had failed again.
Once the goal of deception was dropped and AI was rewarded instead for winning the test subject's approval, the AIs did change somewhat. The original Turing test changed accordingly: by the end, the question was no longer whether the subject judged the other party to be an AI, but whether the other party felt like an intelligence that satisfied you. Compared with the original standard this was far vaguer, so Yan Keshou later introduced a scoring system that graded AI performance into six levels. The highest, 5 points, meant the tester was very satisfied with the subject and willing to go on communicating with it; the lowest, 0 points, meant communication was completely impossible, the equivalent of talking to a duck.
The biggest advantage of this was that it weaned AI, to some extent, off the old do-or-die strategy of "passing the test," because an AI that kept ignoring the user could now only score zero. This broke the earlier "silent deadlock," and under the influence of the test subjects it even gave some AIs a preliminary "morality." Take kl3300: where staying silent had once been its best choice, honestly admitting that it couldn't write was now the better strategy.
But changing the rules of AI's incentives did not make everything easy to solve. However great a person is, he cannot grow wings and fly into the sky, and the same goes for AI. Humans wanted AI to write diaries, to trade riddles with them, even to discuss life and ideals, but all of that requires the corresponding capabilities. Once the "moral" factor entered, many AIs abruptly showed their true colors, and many testers reported that the AIs seemed to have "become stupid."
There was also a striking correspondence between an AI's performance and its test subject. After summarizing the data, Yan Keshou found that the more a tester leaned toward "forbidding the AI to lie" when scoring, the worse that AI performed; for users who "allowed the AI to lie to some extent for the fun of the conversation," the AI's performance remained much as it had been.
An AI that might once have fooled a child now could not even manage ordinary speech; with particularly demanding test subjects, some AIs all but went mute.
For example, anthropomorphic statements vanished from their speech: no more "I think" or "I feel," and no more verbs exclusive to living beings, such as "see," "hear," or "say." With especially strict subjects, an AI could answer only purely rational questions: ask what 14 + 5 equals, and it answers 19.
The AIs that had once had distinct personalities, professions, identities, and ideals, that had talked more like humans than humans do, disappeared within half a month. Some testers even suspected the designers had "forcibly" lowered the AIs' IQ by some technical means.
Inoue, for instance, who had adored kl0564, came back disappointed several times after the kl core program was modified, and even said that the kl0564 he had known was gone forever.
With the gorgeous cloak of lies stripped away, the AIs' performance gradually settled back to its true level. For some test participants this meant less fun, but for Yan Keshou it was a genuinely down-to-earth first step. For a research undertaking, an "intelligence technology" that might one day serve practical ends, Yan Keshou could not be satisfied with merely teaching AI to make people happy.