Definition of APABONIGONAGC:

A Post About Benevolent (Or Not) Immortality-Giving (Or Not) ASI Gods and Cryonics

So you probably know what AI is by now. Lots of important people have been talking about it.

We Are in the AI Revolution

If you don’t know what AI is, let me explain it in the simplest manner possible: It’s synthetic intelligence/technology made to do stuff.

That might sound confusing because it’s vague. It ranges from the stuff running your crappy calculator to our favorite AI, Watson.


In his free time, Watson enjoys showing humanity a glimpse of the superior race.

The Three Types of AI

The media is pretty misinforming. They can shit on Elon Musk all they want. They can interview murderers to represent a minority. But they also mess up the difference between ANI, AGI, and ASI all the fucking time, and it really pisses me off that they’ll smash three different types of AI together and talk about them as if they’re the exact same thing.

Ranting aside, here’s the list:

Weak AI – ANI (Artificial Narrow Intelligence) focuses on a narrow task. It’s not sentient. Watson is just a good system of ANIs, which means your Dollar Tree calculator and Watson are the same type of AI. (Except Watson is a group of really good Dollar Tree calculators.)

Strong AI – AGI (Artificial General Intelligence) is a hypothetical AI capable of doing anything a human can. The creation of AGI would be a singularity in all fields of study. It’s also sorta capable of a global catastrophe, but it isn’t really something to be that afraid of: an AGI’s “brainpower” would only be equivalent to a human’s. (By brainpower I don’t mean just processing power; more on that down below.)

Superintelligence – ASI (Artificial Superintelligence) is another hypothetical AI, but one that surpasses human intelligence by a lot. It’s expected to form soon after AGI hits the ground running. This is the stuff philosophers and smart people like to freak out about. And it’s the stuff that’s actually capable of that global catastrophe I mentioned above.


I’m super excited for whatever dystopian future is gonna happen. I’m hoping really really hard that it’ll be a cyberpunk-esque world. That’d be really cool.

This talk about AI has not been without controversy. Some dismiss it as something for the movies. But that’s ANI; we’re talking about the stuff that beats humans at being the apex species. AGI and ASI are going to change how we perceive sentience and the value of life, but more importantly, they’ll do our shit for us, and that’s cool.

Our Era

We’ve mastered ANI. It’s not sentient, but an ANI can be super good at the narrow task it’s designed for. And by working in conjunction with other ANIs, it can do some impressive things, such as running a Google search or your phone.

The next step is making an AGI. Creating an AGI might be a simple matter of slapping every ANI together into some crazy jumbled mess to see if it becomes sentient. In fact, we can break parts of our own brains down into smaller specialized areas that look a lot like ANIs; we would essentially just be a bunch of ANIs working together in a complex manner to form an AGI. We don’t know much about our brains yet, but the closer we get to understanding them, the more this seems true. And if we can understand all of the ANI-like parts of our brain well enough to replicate them in a computer, we could theoretically slap together a very crappy human brain simulation.

In fact, if we continued to build upon Watson, we might end up with the very first AGI!


ANI in More Complex Aspects of Everyday Life (Not Just Your Calculator)

In finance and military operations, ANI runs algorithmic trading, flags fraud, and guides missiles and drones.
In business and other operations, ANI powers recommendation engines, spam filters, and logistics planning.

All-in-all, ANI is everywhere, and we’ve created a really complex thicket of ANI systems running our world. But a failure of an ANI system can result in some nasty stuff. Like when for a day people could access other people’s Chase bank accounts. Yeah, that sucked.

Some Other Possibilities of When Shit Hits The Fan:
> Stock market crashes.
> Power grid failures.
> Failure of an ANI system in an airport makes luggage clump together, and everyone gets pissed at the long delays.

Just keep in mind that this isn’t the stuff people are attributing the extinction of humanity to. It’s AGI and ASI. But since we rely so much on ANI, a failure of an important ANI could be catastrophic (remember the 2010 Flash Crash?)

Computational Power

We already have the CPU power available, but it’s in China (no surprise there). China’s Sunway TaihuLight beat Tianhe-2’s (also China-made) world record of 34 quadrillion calculations per second. It uses 15.3 megawatts of power (the brain runs on just 20 watts).

This bad boy is more than 5 times faster than the US’s “Titan” supercomputer.
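Those power figures make for a fun back-of-envelope comparison. This is just a sketch: the 15.3 MW and 20 W numbers are from above, while the ~93 petaflop Linpack score for TaihuLight is an outside figure from the public TOP500 rankings, not something stated in this post.

```python
# Back-of-envelope efficiency check. The 15.3 MW and 20 W figures are from
# the post; the ~93 petaflop Linpack score for TaihuLight is an outside
# figure from the public TOP500 rankings.

taihulight_flops = 93e15    # ~93 quadrillion calculations per second
taihulight_watts = 15.3e6   # 15.3 megawatts
brain_watts = 20.0          # the brain's entire power budget

flops_per_watt = taihulight_flops / taihulight_watts
power_ratio = taihulight_watts / brain_watts

print(f"TaihuLight: {flops_per_watt:.2e} flops per watt")
print(f"and it burns {power_ratio:,.0f}x the brain's power budget")
```

Whatever the brain’s “flops” actually are, it does its thing on roughly a light bulb’s worth of power while TaihuLight burns through three quarters of a million times that.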

This entire thing isn’t just about CPU, though; it’s about making another/better version of the most complicated thing in the whole universe (that we know of). We still have to do the hard part (creating humanity’s demise) before we can celebrate. Luckily, I’m pretty sure the Chinese are gonna come up with the first AGI with their amazing tech. (Which is scaring a lot of FBI agents, politicians, and various other people scared of the Chinese mass-producing bomber planes.)

Ignoring the politics behind the Chinese and their clashes with the US, we should take the time to applaud their astounding advancements, because they are the “real” superpower. The only thing they lack is military power.


We barely know anything about the brain, and there’s still a long way to go. So let’s just wait for the people in China to figure this one out.

Computers are Computers

We’re trying to get past a great barrier: the fact that computers aren’t sentient. Creating life is one thing; we have a bunch of people trying to bridge the gap between living and non-living matter so that “life” stops being an anomaly. Remove the unknown variable that is “life,” and the path to creating sentient AI opens up even more.


Google has invested billions into making ANIs that try to tell the difference between a dog and a carrot.

So how do we measure if a computer is sentient?

Well, I’ll cover that in its own post (consciousness, life, and sentience deserve one), but the summary is that we’re working on it.

That aside, there are two ways I see AGI taking root.

#1 Emulate Evolution + Self-Built Architecture to Make AGI?

Genetic algorithms, which are very robust trial and error algorithms that copy the idea of natural selection, will do all of the work for us.

Genetic Algorithms

Genetic algorithms take a bunch of “mutated” versions of something and see which ones do better at their job. The best ones are “bred” together to make a stronger “species.” The only problem is that an AGI is supposed to have ALL human intelligence, and “general” is very different from “narrow.”

A good thing about genetic algorithms is that we can control the parameters of what we want. Unlike “real” evolution, we can limit the amount of “bad” traits we create and form “good” ones. Still, we haven’t figured out how to do this… yet. But it’s certainly a super-promising idea.
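The mutate-score-breed loop described above fits in a few lines of Python. Everything here (the target phrase, mutation rate, population size) is an illustrative toy choice, nothing close to how you’d evolve an AGI:

```python
import random

# Toy genetic algorithm: evolve random strings toward a target phrase.
# All parameters here are made-up demonstration values.

TARGET = "ARTIFICIAL GENERAL INTELLIGENCE"
ALPHABET = "ABCDEFGHIJKLMNOPQRSTUVWXYZ "

def fitness(candidate):
    # How good is this candidate "at its job"? Count matching characters.
    return sum(a == b for a, b in zip(candidate, TARGET))

def mutate(candidate, rate=0.05):
    # Randomly "mutate" characters, like the post describes.
    return "".join(random.choice(ALPHABET) if random.random() < rate else c
                   for c in candidate)

def crossover(mom, dad):
    # "Breed" two parents: each position inherits from one or the other.
    return "".join(random.choice(pair) for pair in zip(mom, dad))

def evolve(pop_size=200, generations=500):
    pop = ["".join(random.choice(ALPHABET) for _ in TARGET)
           for _ in range(pop_size)]
    for gen in range(generations):
        pop.sort(key=fitness, reverse=True)
        if pop[0] == TARGET:
            return gen, pop[0]
        elite = pop[: pop_size // 5]  # the best fifth survives
        pop = elite + [mutate(crossover(random.choice(elite),
                                        random.choice(elite)))
                       for _ in range(pop_size - len(elite))]
    pop.sort(key=fitness, reverse=True)
    return generations, pop[0]

generation, best = evolve()
print(f"generation {generation}: {best!r}")
```

This is also the “narrow vs. general” caveat in action: the loop is great at climbing toward one fixed scoring function, and that scoring function is exactly what nobody knows how to write for general intelligence (or sentience).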

The missing piece is figuring out if the evolved AI are sentient. Until we can do this, we’ll just be creating random AI without knowing if they are actually sentient.

#2 Or Just Copy a Real Brain

(Parts of this one already work.)

The sciencey term is “whole brain emulation.” It’s where a complete map of a brain is taken and emulated on a computer.

The next step would be to reverse-engineer the brain and figure out how to create “consciousness.” Once that’s all done, you can let the genetic algorithms breed multiple versions of the poor victim until you end up with a bloodthirsty vengeance-seeking AGI hell-bent on world domination and eradication of the entire weak human race.

A crude version of the mind-uploading part has already been attempted. But it was done with a roundworm (C. elegans), which has 302 neurons. We humans have around 86 billion. Still a long way to go.
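The basic idea of emulating a mapped nervous system can be sketched in a few lines. Real projects like OpenWorm model the worm’s actual 302-neuron wiring with far richer neuron dynamics; this made-up 3-neuron chain only shows the shape of the idea: a fixed “connectome” plus an update rule, with a signal traveling from a sensory neuron to a motor neuron.

```python
# Toy "connectome emulation" with a made-up 3-neuron chain:
# 0 (sensory) -> 1 (interneuron) -> 2 (motor).
# W[i][j] = synapse strength from neuron j into neuron i.
W = [
    [0.0, 0.0, 0.0],   # neuron 0 is driven only by outside stimulus
    [1.0, 0.0, 0.0],   # neuron 1 listens to neuron 0
    [0.0, 1.0, 0.0],   # neuron 2 listens to neuron 1
]
THRESHOLD = 0.5

def step(state, external):
    # A neuron fires (1.0) if its summed input crosses the threshold.
    return [1.0 if sum(w * s for w, s in zip(row, state)) + ext > THRESHOLD
            else 0.0
            for row, ext in zip(W, external)]

state = [0.0, 0.0, 0.0]
poke = [1.0, 0.0, 0.0]    # stimulate the sensory neuron once
quiet = [0.0, 0.0, 0.0]
for t in range(3):
    state = step(state, poke if t == 0 else quiet)
    print(t, state)       # the spike travels down the chain to the motor neuron
```

Scale the wiring matrix up to 302 neurons with realistic synapse models and you have the flatworm-tier experiment; scale it to 86 billion and you have the part we can’t do yet.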

Also, we aren’t sure how the more recently evolved parts of the brain work. With a much bigger and more complex brain, different parts may behave in different ways.

Being re-born into an emulated world sounds completely possible. It isn’t doable with current technology, but it could be right around the corner!



While this entire theory of popping our brains into computers has serious religious complications, the part some people are worried about is… are we simulations? And according to Elon Musk, we probably are. But that’s a whole tangent to cover in its own post. And if we’re gonna be cheating death, who really cares what religion we’re breaking. (I know this is blissful ignorance, but I’m too lazy to make any more detours on this post, so you’ll just need to deal with it.)

Whatever utopia we create, I just want to make sure that everyone out there knows that I want to have my own server rack dedicated to hosting a dystopian cyberpunk futurey society similar to Glitch City in Va-11 Hall-A. And then, if anyone wants, I also want to have another rack dedicated to a world like Konosuba.


When We Make the First AGI, Then What?

Remember the part where an AGI is supposed to have human-equivalent intellect?


The AGI will have increased speed, *almost* infinite storage (yes, humans have storage limits; estimates put the brain’s capacity at a few petabytes), reliability (near-immortality with easy backups), easy editing (optimization, which isn’t as easy on humans), and the possibility to overpower humanity.

Things won’t be equal. We’ll be inferior. That humanity-ending stuff I mentioned at the start of this post might happen.

Shattering Our Distorted Reality

All species die. But ASIs will be immortal. If we don’t hop on that train soon, humanity could be over in just 100 years.

The possible growth of a species ends due to death.


But if we beat our mortality, our possible growth becomes infinite! However, this isn’t really “true” immortality, because we can always still die. You know, from stuff like our sun dying or a big meteor. But we’ll be increasing our possible lifespan from 100 years to theoretically infinity.

Being near-immortal means these ASIs will have a pretty good chance of bringing forth a burst of creation.

Maybe ASI gods will descend from their thrones to help humanity. Or maybe some even cooler things will happen. ASI could potentially discover new elements or remove the need for human workers in a lot of tedious office jobs.

And Don’t Worry If You’re Dead Before We Discover Immortality

Alright, so the solution here is to take your dead body, dip it into a giant thermos (a “dewar,” if you want the fancy word), and wait until the technology to revive you is available.

Step one is to sign up with a company like Alcor or the Cryonics Institute. Visiting both sites, you don’t see corpses dangling in big green tubes. Instead, they show pictures of happy non-dead people and inspiring words to convince you to eventually join them in the metal cylinder.

The Cryonics Institute is much cheaper. However, you’ll be ditching the possibility of neuropreservation (just storing your brain), which is offered by Alcor. Most scientists doubt you’ll actually need your body for revival anyway.

The biggest reason I’d recommend CI to you is that they have a patient care trust. So if they get stuck in financial difficulties, their stash of frozen people doesn’t melt.

In cryonics’ early decades, several of these companies went bankrupt and the people being stored were simply thawed out, which is a big reason lots of people are wary of cryonics.

Also, if you are absolutely certain you’re gonna die (like a deathbed scenario), putting you in the big ol’ can of liquid nitrogen while you’re still alive is technically homicide, so you need to die before the team can start prepping your body for the long haul.

Being on your deathbed means your cryonics company might be about ready to switch you into “killing” mode. (They just disconnect you from life support and you die legally and safely.) However, some hospital facilities aren’t cryonics-friendly, so they might force you to die a slow and agonizing death lasting many weeks. And bam-wham, you’re officially screwed over if they refuse to let your cryonics company get your body before info death happens. (That’s when your brain becomes permanently damaged and cannot possibly be restored.)

And if the law says, “let’s do an autopsy,” you’d better be sure that you filed some “no autopsy, for religious reasons” papers while you were alive. If you didn’t, you’re still screwed.

Now, let’s say you die unexpectedly. This is common; we don’t usually plan on getting in car crashes and dying, aside from maybe getting life insurance. After you are legally and clinically dead (legal death is when a doctor declares it; clinical death is when your heart and breathing stop), you’ll be in a time rush to have your body preserved before info death occurs. And since your hopes of preservation rest mostly in your brain, you really don’t want to damage that.

Ok so I just died, what are the cryonics people gonna do now?

Transporting your body takes time, and info death really sucks for you. So they do something called CPS (cardiopulmonary support). It’s like CPR, but you only want to sustain the corpse’s tissues; the goal isn’t to resuscitate (duh).

The first thing that happens is you get dipped in icy water and hooked up to a mechanical heart-lung device (meant for CPR). Then you’re pumped with drugs to stop rot and blood clots and prevent as much deterioration as possible. Then your circulation and blood oxygenation are taken care of by sticking tubes into the major blood vessels and letting a machine do your breathing for you.

Once that’s all done you’ll be shipped off to a cryonics facility to be processed and stored.

Now what?

Freezing a person in water isn’t very smart. You damage the entire body by doing that. (You know, because of expanding ice crystals.)


Instead, you’ll get vitrified. A “special” antifreeze (a cryoprotectant) is perfused through your blood vessels, and a careful cooling process turns your body into essentially a really big block of glass, all but stopping the movement of molecules in your body. In this state of suspended animation you won’t rot.

Fun Fact: A rabbit’s brain was vitrified and later shown to be preserved in near-perfect structural condition.

Being vitrified is actually super-duper-cool. There’s a bunch of cool science behind it, but if I tried to go over it I’d probably end up on a really long rant about amorphous solids, and nobody wants that. The simple explanation is that glass doesn’t have a specific crystalline structure; it’s a “shapeless” solid. So what the cryonics people are doing is turning you into a glass-like solid and dipping you into a vat of liquid nitrogen.

The Great Big Revival

Even after being safely stored, there are still risks.

Firstly, the technology might come in the far-far future. In fact, you might even “wake up” in the year 2998.

Second, people will have to choose to revive you. There’s no guarantee that anyone is gonna revive 100-year-old corpses just for shits and giggles. It’d be better for them to just revive people who died in their own era, so they don’t have to deal with a language barrier formed by a century of semantic drift or a horrible outbreak of a 150-year-old pathogen.

Thirdly, your body might end up being trashed somewhere along the path to revival. Some family member might decide to stop the expensive upkeep for a freaking corpse before revival is possible. Put simply, the more time you spend in the thermos, the higher the chance of something happening to your corpse.

My point is that your chances of being revived aren’t very good, which is why becoming full-blown cyborgs sounds like a great alternative to uncertain futuristic necromancy: you avoid the big gamble. Besides, even if a theoretical “you but as a cyborg” were to die, you could probably still be vitrified (as long as your brain is retrievable).


There are two things humanity can do.

1 – Eventually go extinct and let all of our work perish.

2 – Become immortal and grow indefinitely.

There is a huge chance of humanity dying out if we don’t become near-immortal. And that means all of our cultures, research, beliefs, and everything goes with us. And that sucks.

But if we do chase immortality, we might go extinct from whatever we create along the way. Still, a chance at not going extinct gives the future more hope than certain extinction, so the risks might be worth it.

I started writing this post long before I wrote references at the bottom of my posts, so there isn’t a big list of hyperlinks for this one. Sorry. I tried to insert as many other hyperlinks into the post for “extra reading” to try to make up for my goof-up.