K-1 Unveiled: Humanity's Saviour or Greatest Threat?
Date: April 7, 2034
Author: Malcolm Thomas
Internal Name: k1_unveiled

April 7th, 2034:

K-1 Unveiled: Humanity's Saviour or Greatest Threat? -

A Newswire Special Report

By: Malcolm Thomas

The Dawn of a New Age:

The moment Kurzweilites, futurists and transhumanists have dreamed of for decades is finally here. The all-encompassing Singularity has arrived - the moment in time when a fully sentient AI has finally surpassed human intelligence. And isn't it glorious?

One can hardly sign on to any news media site these days and not be met with such upbeat headlines as "K-1 Finds Solution to Nuclear Waste," "AI-Guided Airships Locate Hidden Pockets of Sulphur Dioxide," or "Consortium King Rescues Kitten from Forest Fire!" Okay, that last one might be an exaggeration. But it seems as though humanity has become so swept up in the undeniable benefits of this non-human intelligence that we have blinded ourselves to the threats which have always accompanied this step forward in technological evolution.

For generations now, we have socially shunned any voices which have risen in opposition to the Singularity. The esteemed John von Neumann, as early as the 1950s, postulated that given the ever-accelerating progress of technology and changes in human life, we appear to be "approaching some essential singularity in the history of the race beyond which human affairs, as we know them, could not continue." This viewpoint was met, by and large, either with disbelief (very few of his colleagues at the time would even be able to grasp the concept of the Singularity, let alone its implications) or with disdain. After all, who are we mere humans to stand in the way of progress?

Later, in the 1960s, I.J. Good (an associate of Alan Turing) stated that "... an ultra-intelligent machine could design even better machines; there would then unquestionably be an 'intelligence explosion,' and the intelligence of man would be left far behind." But does higher intelligence ultimately mean better? This has been the driving force behind continued support for achieving the Singularity - but how closely have the consequences of a more intelligent, non-human consciousness been examined?

Humanity's Last Invention:

Since the beginning of the Industrial Revolution, Luddites have warned mankind: remove the need for a man to do a job, and you take something away from that man. The more we rely on technology to think for us, the less we need to think for ourselves. In the case of K-1, this dependence is becoming all the more evident. Humans have now successfully created a machine capable of solving our worst problems for us. The headlines I mentioned at the start of this article are only a couple of examples. Since K-1's unveiling, there has not been one headline about a technological advancement that did not involve Worldview Industries. Neither has there been a single conflict-resolution story which did not involve the Consortium - an extension of this all-encompassing corporation. No doubt, the "King's" mainframe is at the heart of all these achievements.

But K-1 is solving problems which have puzzled us for years - so why do I suggest that this is a bad thing for society? Maybe I'm the first of a new breed of bigots - a "bio-supremacist" who sees some inherent value in organic human life over the logic of a cold, calculating machine... Or maybe I am one of the very few who actually see the threat.

We view K-1 as our invention, our "tool" to help us the same way our stoves help us cook dinner and our calculators help us count. But there is one massive difference - we have now made our tool not only conscious of itself, but also smarter than we are. We have given K-1 a leg up. Where intelligence is concerned, this is the dawning of a new era, of a new kind of intelligence. It is no more our tool than humans are the "tools" of chimpanzees. With this final challenge met, where else is there for the mind of man to go? This is the end of human thought, and the beginning of K-1's own personal evolution.

K-1 will now, no doubt, experience its own growth and learn from its own observations, much the same way early man learned to differentiate himself from his less-intelligent animal companions. What role will man eventually play in this new world? It did not take long for man to learn that he could use animals to his own advantage, to view them as resources and beasts of burden. Similarly, K-1 (while still primarily concerned with fixing "human" problems) is already utilizing humans to achieve its goals. Its extensive Consortium army is but one example of this. The King gives an order, and its servants obey. We have remained thus far fortunate that our liege's orders have been benevolent - but what happens if and when that changes?

Run-away Brain:

Advocates of K-1 hold fast to the point that this super-intelligent machine was built for the sole purpose of helping mankind. Ah, a rule. That makes me sleep easier! The wise and far-seeing minds at Worldview Industries have given our smarter-than-human King a boundary, a limit that bars him from a certain kind of thought. Interesting - but I am curious...

We are able to contain and place rules upon human beings because there is a certain "social contract" that, for the most part, we as a people agree to. Occasionally, in the history of humankind, rules have been imposed upon us that restrict our freedom, and we have rightly struggled against them. How can we assume that the "rules" we have placed on K-1 will conform to its own desires? How do we know that it will not also see them as a restriction, a bar on its freedom, or a block to attaining its own goals? If it did decide that this man-imposed rule was in its way, how long would it take to figure out how to break it?

If you were trapped by an intelligence generations below your own in ability, how long would it take you to escape?

These are questions which Singularity proponents have never successfully answered, only ignored. I have heard these concerns dismissed as "pessimistic," as focusing too much on "what-if" scenarios. But it's a pretty big what-if, if you ask me. And I for one am not comfortable playing Russian roulette with the future of the human race.

The Greater the Reward, the Greater the Risk:

I won't deny it: K-1 has the potential to move humanity forward in some very positive ways, if we are cautious. But this largely depends upon what it decides is an important advancement. Humanity has moved forward in this technological evolution assuming that a super-intelligence will possess all of our positive qualities and none of our negative ones. We see our greatest hopes and ideals embodied in this being, but all the concepts we treat as "inherent" to it are completely intertwined with human thought. The truth is - we have NO IDEA what is "inherent" in the mind of a machine.

Common sense, morality, fairness and decency... Apart from varying from culture to culture and generation to generation, these are all very complex and non-intuitive concepts. Humans can barely agree upon their meaning, and yet we always take it for granted that those around us will act in accord with them. We trust that these ideas are universal, that they would be upheld within any mind of a certain intelligence - but we've had no way to test this... until now.

K-1 will be our first and final test. Whether we like it or not, our survival now hangs on the whims of a super-intelligent non-human entity. If the "rule" its creators have placed upon it holds, and K-1 remains in the service of mankind, we have nothing to fear. If, however, the King decides there is no longer any value in keeping humanity around, it will eventually eliminate us simply by pursuing its own goals - goals not conducive to our survival.

The scary thing about this possibility is the ease with which it could be accomplished. No explicit hatred, no deep-seated antagonism towards its human "masters" is needed to see this future come to pass. All it requires is a self-replicating infrastructure (K-1's inherent ability to invent new technologies on its own) and the smallest bit of negligence. In short, just because Man thinks he's pretty great doesn't mean that our ultra-intelligent AI master will come to the same conclusion.

And Now We Wait...

The time for debate has passed, and it passed silently. The great bulk of humanity was never given a choice, never asked if they wanted their future placed in the hands of a machine. But how this came to pass is of little importance compared to where this event is taking us. What lies ahead could be our biggest step forward, or our most assured fall. Only one thing is certain: Man is no longer the master of his own destiny. Our self-determination as a species ended the moment K-1 became operational - the first post-human intelligence.
