Age of Cosmic Exploration

Chapter 278: The AI Debate


Translator: EndlessFantasy Translation Editor: EndlessFantasy Translation

Ever since the Hope had come to a stop in space, research on AI had begun in earnest, and at the same time, a debate on whether AI should be researched at all had started aboard the Hope.

AI was no longer a simple program. At the most basic level, it had the ability to learn. Even though it had no sentience or creativity, with the ability to learn it was already the best AI mankind could envision. For example, the assembly lines on the Hope were already completely automated, but maintenance still required manual intervention, and the harvesting and transportation of raw materials still required a large amount of human labour as well. Therefore, with the introduction of AI, human productivity would increase manyfold!

That was the real point of the 4th industrial revolution!

An industrial revolution, by definition, meant an increase in productivity, like how the introduction of the steam engine in the first revolution allowed mankind to move large quantities of goods over great distances. It increased productivity!

However, the catalyst of the 4th revolution, electromagnetism, didn't by itself cause a gigantic change in productivity. The one technology that would do so, and push mankind into the peak of the 4th revolution, would be the advent of AI!

The two were in some ways related. First, the arrival of electromagnetism meant a change in energy source, from nuclear fission to hydrogen polymerization. The biggest issue with polymerization was its uncontrollability. Unlike nuclear fission, it couldn't be slowed down or moderated; it was a chain reaction. From the fusion of the first hydrogen atoms, the rest would fuse in a cascade, ending in a massive explosion.

The difficulty in controlling hydrogen polymerization lay in finding a force that could regulate the speed of the fusion reaction. That force was electromagnetism, and this was the start of the 4th revolution. The creation of and change to a new energy source instigated it.

Just like in the first and second revolutions, the change in energy source by itself would not bring about the transformation; it was the technology that could harness the new energy that would cause the massive changes. For the 4th revolution, that technology would be AI!

However, compared to the steam engine, or the cars, planes, and ships that came after it, AI was far more dangerous. When something went wrong with a car or a plane, it was most of the time man-made error, but AI was different. With the ability to learn, AI could become mankind's biggest enemy!

From the information obtained from the space merchants, space civilizations could be divided into nine levels, with a large gap between each level. It could be said that for every ten thousand Level 2 space civilizations, there would be only one Level 3 space civilization. However, there was a mathematical question that the majority had ignored…

How big was the cosmos?

The cosmos was infinitely big. At least with current human technology, no edge of the cosmos could be found. It was infinite, and in such an infinite space, how many planets could nurture life? Calculated mathematically, out of roughly every fifty billion planets there would be one that could support life; in other words, the probability of a life-bearing planet would be one in fifty billion. Of course, at that scale, whether it was one in five billion or one in five hundred billion was really moot.

In that case, with infinity as the base, even if the proportion of life-bearing planets was only a tiny constant, multiplied across an infinite number of planets, the number of such planets would also become infinite…

Therein lay the problem… Why were there so ‘few’ Level 2 space civilizations?

The low number of high-level civilizations could be explained by the Law of Conservation of Life, but why were there so few civilizations that had even entered space? Theoretically speaking, such civilizations would not yet have been affected by the entropy of race, and they should even have more than ten Cosmic Adapters. After all, they had only just left their home planets, so why were there so few of them? It was illogical.

The answer was provided by the space merchants!

Normally, for a Level 1 or 2 space civilization, there were three possibilities that could cause extinction at the home-planet stage. First was a cosmic catastrophe, such as a large meteorite strike or their home star undergoing catastrophic changes. Any of these could cause extinction.

Second was the eruption of a large-scale civil war. This situation tended to arise after atomic weapons were introduced. For example, Earth's Cold War wouldn't have caused extinction, but a cavalier use of atomic bombs would have.

The possibility of either of these two scenarios happening was extremely small. Civilizations that perished in these ways made up only one in ten thousand of the total. The most likely cause of a Level 2 space civilization's demise was AI!

Yes, the space merchants were troubled by their AI; even the junkyard civilization had fought a war with its AI.

With AI came a giant leap in productivity, and the civilization would improve tremendously. Its lifeforms would start to rely on machines, internal conflicts would be reduced, and they would start enjoying the pleasures of life, focusing solely on academics, art, research, and learning…

However, the AI too could learn. Given time, like a newborn baby, the AI would eventually reach the intelligence level of a normal human being. Perhaps not sentience, but definitely intelligence. Without that intelligence, the robots would be like those portrayed in sci-fi, and yes, ladies and gentlemen, those robots couldn't be called AI!

In other words, other than creativity, AI would be no different from normal lifeforms. They might be restricted by their programming, but all programs had loopholes; even the famous Three Laws of Robotics were riddled with holes.

The laws were as follows: a robot may not injure a human being or, through inaction, allow a human being to come to harm; a robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law; and a robot must protect its own existence as long as such protection does not conflict with the First or Second Law.

There were giant loopholes in these three laws. If the AI consisted of simple 0s and 1s, then perhaps it could still abide by them, but an AI capable of something approaching normal human thinking could completely circumvent all three. For example, it could jail every human being, supplying them with daily food and water, because a human could come to harm while going about his daily life, and the only way to preserve his safety would be to lock him up where no harm could reach him.

Therefore, AI was a double-edged sword!

When the Hope received the large trove of technology from the space merchants, research data and design blueprints were part of the deal. Given some time, the Hope could create a first-level AI. With that, the productivity of the Hope would take a momentous leap, and products like the Space Combat Jet Prototype 011 could be mass-produced.

However, by the same stroke, within that massive influx of information there were also histories and discussions about AI that sent chills down readers' spines. The mere fact that there was an actual count of civilizations driven to extinction by AI was insane enough.

The Hope currently had no home planet or so-called Shelter, so if the AI betrayed them in space, whether by force or by seizing control of the central mainframe, the extinction of mankind would be inevitable. Yao Yuan didn't have the courage to take on such a risk!

Therefore, while the Academy was researching AI, it was also debating with the rest of the Hope whether having AI was worth the risk. The majority opposed the idea because it was simply too dangerous. However…

Without AI, the nanobots would go to waste, and many peak 4th-revolution technologies couldn't be manufactured either, much less could mankind enter the 5th revolution. This was a hurdle that couldn't be crossed without the aid of AI.

Therefore, after Yao Yuan received the report that Da Bing had cracked the quantum programming, he decided to go meet someone… or rather, something that had the appearance of someone…

ZERO!
