
Computer that lasts for centuries?

Worldbuilding Asked by NinDjak on November 7, 2021

So, somewhere around one hundred years from now, a terrible – and unexpected – disaster strikes Earth, most of humanity dies off, yadda yadda yadda.

Luckily, secret-lab-dwelling scientists had stashed tons of knowledge about a lot of probably important stuff (test data! everybody loves tests!) inside a super-computer.
Unluckily for them though, they all die, and the secret lab’s power supply shuts down.

Four hundred years later, a bunch of explorers stumble upon our secret lab, and manage to turn the power back on. As the lights illuminate the lab, they hear a “Hello, World!” resounding throughout the facility. It seems our computer is still alive!

Now my question is, if we assume that our computer is safe from any environmental harm, and take only the main parts (cpu, etc…) into account, is there any existing or theorised way to build a computer that would still function after centuries ?

I’ve read in this answer that even with extreme luck we couldn’t really expect much from hard drives past a few decades, and from what (little, admittedly) I know, computers can’t really run off an optical drive (then again, I may be wrong; I didn’t manage to find anything about that), even though some of them could theoretically store data for a millennium.

So is this theoretically possible, or will I have to handwave my way through with a magnificent “Future, yo! Look, there! Hoverboards!”?


EDIT: Hello again everyone! Thanks a lot for all your answers and ideas!

A little update on what I decided I’d go with so far, based on your answers and my own research.

  1. How to store the data?

This is pretty much covered: either DNA Data Storage or neronix17’s Data Crystals with the corresponding reading/writing devices, potentially accompanied by more conventional drives for quicker access once the data has been restored.

  2. How to preserve usable boot and restoration programs?

If the above-mentioned techniques can’t fill this specific role (and I’m not even sure we can determine this yet, given the early stage these technologies are at), Jim2B’s answer provides extensive information about all this, so I’d most likely go for magnetic core memory.

  3. How to deal with component decay?

This is where I’m kinda stuck. Ville Niemi mentioned that some of the computer’s components will degrade as time passes, while Monty Wild affirms the opposite in the comments. Now I’m expecting that the components would suffer at least some kind of degradation over 400 years, but would it really be all that catastrophic if they are kept unused and in an optimal environment?

By the way, I’m kinda new to this site so please do tell if I need to mark the current question as answered and/or ask this in a separate question, I’m kinda confused ._.

13 Answers

Easily (mostly)

If we build for that purpose only and not to maximize cost efficiency, speed, size, or any of the "normal" criteria.

Instead of making each semiconductor junction a few dozen atoms wide, make them 100,000 atoms wide. Think microprocessors built using today's technology, but at a 1970s scale of miniaturization.
With each transistor being 1000^3 = 1 billion times as large by volume, your computer will be 1/1000th the speed. SO WHAT! It will also be about a billion times more resistant to damage from radioactive events, crystal deformation, and oxidizing factors.
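As a back-of-the-envelope check on those numbers (a sketch only: the ~100-atom baseline and the speed-scales-inversely-with-feature-size model are illustrative assumptions, not measurements):

```python
# Rough scaling sketch for the oversized-transistor idea above.
small_junction_atoms = 100       # "a few dozen" atoms wide, rounded up (assumed)
big_junction_atoms = 100_000     # the proposed oversized junction

linear_factor = big_junction_atoms / small_junction_atoms  # ~1000x linear
volume_factor = linear_factor ** 3                         # ~1e9, "a billion times as large"
speed_factor = 1 / linear_factor                           # crude model: ~1/1000th the speed

print(f"linear {linear_factor:.0f}x, volume {volume_factor:.0e}x, speed {speed_factor:.0e}x")
```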

Build the hard drive with stonking huge overpowered motors, and with physically separate magnetic domains, not the tiny deviations in a smooth magnetic surface we normally use. Build your drive array with RAID-plaid. (That's RAID mirroring taken past the ludicrous level.)

Build every device, every data channel with quadruple or better redundancy.
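As a toy illustration of what that kind of redundancy buys at the data level, here is a minimal sketch of per-byte majority voting across several mirrored copies (an illustration of the idea, not any real RAID implementation):

```python
from collections import Counter

def majority_vote(copies):
    """Reconstruct a block from several redundant copies by per-byte majority vote."""
    assert copies and len({len(c) for c in copies}) == 1, "copies must be the same length"
    out = bytearray()
    for i in range(len(copies[0])):
        votes = Counter(c[i] for c in copies)
        out.append(votes.most_common(1)[0][0])   # the most popular byte value wins
    return bytes(out)

# Five mirrored copies, one badly corrupted: the healthy ones outvote it.
original = b"Hello, World!"
copies = [original] * 4 + [b"He##o, W#rld!"]
assert majority_vote(copies) == original
```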

Get rid of all components that have a built-in lifespan limitation of less than a millennium. This especially means getting rid of all electrolytic capacitors. Yes, this is very inconvenient and will cause the engineers to have fits. But it can be done.

Pay attention to what materials you build it from. For example, do NOT use aluminium-and-gold junctions, because over time they rot. For that matter, do not use aluminium in its construction at all. That stuff just loves to oxidize. Ditto for copper. And don't even think about using steel as a construction material; that stuff is useless!

All of this we can do. Easily.
Not cheaply!
And good grief the resultant computer will be SLOW, and will consume a mountain of power compared to the task it performs. But we can do it.

And, as long as it is protected from physical damage, and shielded from extremes of environment, it will remain functional for a long, long, looong time.

Unsolved problems: I'm not sure what to do about the display. Neither CRT screens nor LCDs are suitable for multi-century storage, and they cannot easily be made so. Can a CRT be made without a vacuum inside, but with an inert gas? I don't think so. Nor can I visualize any way to make a vacuum component retain its vacuum over centuries. Even glass many centimeters thick leaks air eventually. Glass is (slightly) porous, you know.
You might even have to revert to glowing-filament-in-inert-gas tubes for display?

Almost any mechanical switch or relay will become untrustworthy after a few centuries. Possibly they could survive if made out of an inert material such as gold and kept in a neutral gas to prevent surface deposits?

Answered by PcMan on November 7, 2021

One approach that hasn't been mentioned yet is that the computers could repair themselves. That is how DNA data survives for millions of years: the DNA is carried in organisms which reproduce, replacing any dying ones. Similarly, a computer equipped with a suitable 3D printer and robotic appendages could replace any failing part of itself. This can't be done in real life yet because some components, especially the chips, cannot yet be 3D printed at the scale at which they are made. But there is no reason in principle why this couldn't be done. Chips are made by chip-making machines, and your computer could be equipped with a few.

Answered by JanKanis on November 7, 2021

Blueprint, Large Parts, Redundancy

Why not split the problem into three easier ones:

Firstly, prototyping a simple computer that can be maintained by the crudest crafting. It would only need the simplest operations, such as displaying raw text files with a series of large 26-position alphabet spinners (36 if you want numbers too) rather than screens, and taking input from a large keyboard. Everything about it should be massive, so that parts are easier to assemble and replace. It should rely on analog, moving parts made of ordinary materials as much as possible and, ideally, use no electrical parts at all (like Charles Babbage's Difference Engine).

Secondly, a storage medium that works with the simple computer, created with materials designed to last for thousands of years.

Thirdly, the blueprints for the simple computer can be placed with the storage medium, etched onto some material that can last for hundreds of years (mineral or corrosion-resistant metal will probably work, judging by tombstones). The computer's parts will likewise be made of components that can conceivably last many years without corrosion, so that whoever finds the blueprints can reassemble them into the simple computer.

A deactivated but working computer, built from the same blueprint, can also be placed in the room. This way, should it break down, the finder will have not only the blueprints but also a way to maintain, repair, and recreate said computer, to access the storage media that will last a long, long time. This lets your computer be used for as long as the storage medium lasts, since we do not rely on the computer functioning immediately after countless years, just on it being reliable enough to be easily repaired after countless years.

Who knows, if it's found long enough afterwards, the incredible size of the computer and its all-knowing nature might have the finders worship it as a deity!

Answered by Enthu5ed on November 7, 2021

Here is my description of the technology; it was created specifically for computing systems that require durability above all.

Palantir (not spherical, but cubic). When you first look at it, you cannot help but think that the developers of this device were clearly big fans of Tolkien, which you can already tell from the name of the device itself. Imagine a block of solid diamond, a cube exactly twenty-five centimeters on a side. And this is a computer. A single-crystal computer, a monoblock in the most literal sense of the word. Any of the six faces can serve as a monitor, contact keyboard, scanner, or solar panel. All the computer circuits, the power supply, the storage device are embedded directly in the crystal, almost all of it made from different forms of carbon with minor additions of other substances. No moving parts. No cavities. Each module is duplicated three times. Fullerene inserts let it take quite strong shocks, so while an ordinary diamond can be broken with a hammer, this one is hard as hell to crack. And on top of everything else, this shit is also able to self-repair, growing new transistors and memory cells to replace those damaged by friction and wear, as well as by ultraviolet rays and other types of ionizing radiation.

The process of creating memory cells can be described as follows: under the influence of very short laser pulses, the necessary multilayer self-organizing nanostructure is created in the glass. These are called femtosecond pulses, their duration being on the order of one quadrillionth (one millionth of one billionth) of a second.

Information is recorded using three layers of voxels (volume pixels) located at a distance of 5 micrometers (five millionths of a meter) from each other. These points change the polarization of light passing through the disc, which allows you to read the state of the structure using a microscope and a polarizer, similar to the one used in Polaroid sunglasses.

The developers call this technology 5D memory, because each unit of information (bit) has five different characteristics: the three spatial coordinates of a point in the nanostructure, plus its size and orientation, for a total of five parameters. Thanks to this, the technology provides a huge recording density compared to conventional CD-ROMs based on 2D memory: 360 terabytes of data can be written to a quartz-glass disc a few centimeters in diameter. For comparison, recording that amount of information would take about seven thousand modern 50-gigabyte double-layer Blu-Ray discs. Since glass is used as the material, data can be stored at temperatures of up to 1000°C, and the durability of such a storage device will be, according to the scientists, 13.8 billion years at an operating temperature of 190°C.
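A quick sketch of the numbers quoted above, plus the five parameters recorded per voxel (the field names here are illustrative placeholders, not any real 5D-memory format):

```python
from dataclasses import dataclass

@dataclass
class Voxel:
    x_um: float             # three spatial coordinates; layers ~5 micrometers apart
    y_um: float
    z_um: float
    size: float             # fourth parameter: feature size
    orientation_deg: float  # fifth parameter: orientation, read through a polarizer

# Capacity comparison from the answer: 360 TB per glass disc vs. 50 GB Blu-Rays.
disc_capacity_tb = 360
bluray_gb = 50
equivalent_blurays = disc_capacity_tb * 1000 / bluray_gb
print(f"~{equivalent_blurays:.0f} double-layer Blu-Ray discs")  # ~7200, "about seven thousand"
```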

So this "cube" is the most durable computer and data storage ever created.

The only problem is that the price of this incredible durability is computing power. For this reason, the average "Palantir" is comparable in computing power to a conventional PC of the 2010s.

Answered by user73251 on November 7, 2021

Perhaps long-lived computers will be created for use in space probes. Just as Galileo operated for years in transit and then in the harsh radiation environment near Jupiter, an interstellar probe would need to be built to last.

Even if only meant for (say) 50 years, if left in a calm environment rather than the radiation of space, it may last much longer.

So, it is conceivable that such a computer will be built and available for use in such a lab.

Answered by JDługosz on November 7, 2021

One possible solution to data decay would be a mirrored array of data storage devices (possibly hard drives or flash memory). You would probably need an array of four or eight drives (possibly with some mirrored parity drives) to maintain data for centuries if not millennia. Flash memory would probably be best, since it's non-magnetic and the main limitation on its lifespan is read/write cycles.

Even if there is significant data corruption on all the drives, you should be able to restore most if not all the data. At that point, you're dealing with physical breakdown of the device, not decay of the data itself. An optimal climate controlled environment would probably protect the data storage devices themselves for 400 years.
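As a minimal sketch of the parity idea behind such arrays (the XOR trick that RAID-style layouts use to rebuild one lost member; an illustration only, not real array firmware):

```python
def xor_parity(blocks):
    """Bytewise XOR of equal-length data blocks gives a parity block."""
    parity = bytearray(len(blocks[0]))
    for block in blocks:
        for i, byte in enumerate(block):
            parity[i] ^= byte
    return bytes(parity)

def rebuild_missing(surviving_blocks, parity):
    """XOR of the survivors plus parity reconstructs the single missing block."""
    return xor_parity(list(surviving_blocks) + [parity])

data = [b"ABCD", b"1234", b"wxyz"]
parity = xor_parity(data)
# Pretend the middle drive is unreadable after four centuries:
assert rebuild_missing([data[0], data[2]], parity) == data[1]
```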

If you're looking for longer-term storage and not concerned about cost or capacity, you can use gold leaf punchcards - which will last pretty much indefinitely.

The computer itself probably wouldn't be in great shape after 400 years. Modern electronics simply aren't designed to last that long - a few decades at most. Careful consideration and design - as well as an ideal environment - would probably let you work around those problems. A sealed inert-gas environment might do the trick there.

Answered by zagdrob on November 7, 2021

There are new technologies being developed and tested nowadays, for example the 5-dimensional 'memory crystal': according to the scientists, the information, encoded with lasers, has a thermal stability of up to 1000°C and a practically unlimited shelf life.

There are a couple of links below with news about it.

Like this

And this

Answered by Gabriel on November 7, 2021

Try a wooden computer powered by mechanical energy. Really, computers are just a very fast abacus, and its state can last for centuries!

Really, I think a purely mechanical computer is possible. It would be useless as long as we have electricity, but it could be powered by fuel.

Answered by Kii on November 7, 2021

I think computer components should decay with a half-life, just like radioactive materials. Assuming a half-life of 5 years, after 400 years (80 halvings) you would have left... not that much of the original components. Having a secure location and being in deep-sleep mode should help significantly. Even so, I think you could realistically expect the system to still have only between one billionth and one trillionth of its original capacity.
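For what it's worth, the surviving fraction under that model is just 0.5^(t / half-life), and the result is brutally sensitive to the half-life you assume (the alternative half-lives below are arbitrary, for illustration only):

```python
# Surviving fraction of components after 400 years under a simple half-life model.
for half_life_years in (5, 25, 50, 100):
    fraction = 0.5 ** (400 / half_life_years)
    print(f"half-life {half_life_years:>3} yr -> {fraction:.2e} of components survive")
```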

So I don't think we can assume any current or projected technology to cover this. But you say this would happen one hundred years in the future. They might have developed such technology. They certainly would have if they had expected the end to be near.

The simplest rubber-science solution (the best that can be offered in the "computers a century from now" category) would be for the computing systems to have a self-repair capability. If the system also had a high level of redundancy and a secure supply of power, both of which are reasonable possibilities, it could feasibly survive for a long time. It would still have much less capacity than it used to have, but chatting up some explorers should not really require that large a portion of its capacity to have survived, if the system was originally designed to support cutting-edge research.

Unfortunately the only way to build a self-repairing computer with massive redundancy we can currently theorize is bio-mimicry. That is, to create an artificial organism that supports a very large brain modelled after the human brain. Probably it would be more like an entire colony of interconnected brains. Possibly suspended in a container of nutrient fluid with an entire artificial ecosystem. I guess it would be like a large aquarium with "brain-coral" in it.

Radio-thermal or geothermal power could support the ecosystem for a few centuries despite what happens on the surface. And you might be able to justify that by the desire not to leave an energy trail that could be used to locate the lab from the surface.

Note that with a persistent power source and the continuous activity needed for self-repair and memory refresh, the computer would have been awake and aware for the entire 400 years. It might be confused by visitors after such a long time and suffer severe culture shock. So: full capability, but difficulty communicating? To the explorers it might appear to suffer from mental issues and general weirdness.

Answered by Ville Niemi on November 7, 2021

I'm a little surprised nobody brought up the quartz glass storage device that was talked about over the past couple of years. I couldn't find whether anything came of it, but there are a few articles about it here, here and here.

As the last one points out, it sounds like something out of Superman or some other sci-fi show/movie, but it makes sense. I mean, we have had the technology to etch crystals with drawings and 3D 'sculptures' inside them, so it seems perfectly reasonable to think we could do the same and have a computer read it as information. The length of time these could last appears to vary quite a bit, but I'm sure millions of years is considerably longer than you need.

Answered by neronix17 on November 7, 2021

You may want to check out http://longnow.org/essays/written-wind/ for issues with retrieving archives over long periods of time.

To keep a digital artifact perpetually accessible, record the current version of it on a physically permanent medium, such as silicon disks microetched by Norsam Technologies in New Mexico, then go ahead and let users, robot or human, migrate the artifact through generations of versions and platforms, pausing from time to time to record the new manifestation on a Norsam disk. One path is slow, periodic and conservative; the other, fast, constant and adaptive. When the chain of use is eventually broken, it leaves a permanent record of the chain until then, so the artifact can be revived to begin the chain anew.

The Norsam disk is supposed to be good for a minimum of 1000 years (cite: https://en.wikipedia.org/wiki/HD-Rosetta). If you're looking into a computer system, you'd want a bootstrapped system where it could build itself.

Answered by bryanjonker on November 7, 2021

I am in two minds on this. Either you go all solid state with flash drives and as few moving parts as possible OR you go steampunk and have a completely mechanical computer similar to Babbage's Difference Engine. The level of technology required to maintain it is very low.

You could even have an order of monks who maintain it over the centuries without knowledge of what it actually does.

Answered by Burgi on November 7, 2021

We don't really know

The recent development of our current digital environment (home computing and the commercial use of computer networks only date back to roughly the 1980s) means that we haven't really had an opportunity to test its long-term viability: essentially we can't even test digital data standards and storage methods for more than about 35 years, because they simply haven't been around longer than that.

But currently all of the standard storage mechanisms we use today are only expected to remain viable for anywhere from a few years to a few decades (this includes so-called archival media like optical discs and data tapes).

So far we've never encountered a need for extremely long duration archiving of data, so no one has ever bothered to design a system to work for that situation. If the scientists and engineers in your story had a few years of warning, they could probably develop something that would work.

I do not know how it would look but, based upon experience with various methods, I can guess.

But maybe we can guess

The F-15 originally was built with "primitive" magnetic core memory. This type of memory is non-volatile and highly resistant to EMP and other things (like cosmic rays) that can damage the data stored in modern memory. However, it is much slower and bulkier than modern memory.

Magnetic Core Memory

Magnetic Core Memory Durability

Core memory is non-volatile storage—it can retain its contents indefinitely without power. It is also relatively unaffected by EMP and radiation. These were important advantages for some applications like first-generation industrial programmable controllers, military installations and vehicles like fighter aircraft, as well as spacecraft, and led to core being used for a number of years after availability of semiconductor MOS memory (see also MOSFET). For example, the Space Shuttle flight computers initially used core memory, which preserved the contents of memory even through the Challenger's disintegration and subsequent plunge into the sea in 1986.

POST

I imagine your computer's bootstrap would be composed of similar bulky but reliable and non-volatile memory. This basic bootstrap functionality, perhaps similar to your computer's POST (power on self-test), would ensure important portions of the computer still worked and would then (slowly) load the actual operating system for the device.

Flag failed components/use good ones

Because I would expect many bits of the computer to have degraded enough to be unusable, the overall system would probably provide massive redundancy for each critical component. As the POST operations encountered failing components, it'd automatically switch to testing the next redundant component in line. Since the POST operations would likely be fairly elementary, the system would flag each "failed" component for re-evaluation by the full-up OS once the boot cycle completed. A more thorough mapping of the essential components (e.g. the CPUs) might reveal that only certain portions of a chip had failed and that the component was otherwise OK. The OS would use this map of its redundant components to keep operating as long as a complete set of essential functions remained operational.
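A minimal sketch of that fallback-and-flag behaviour, with hypothetical component names and a made-up self_test() hook (this is not any real firmware API, just the shape of the idea):

```python
from dataclasses import dataclass

@dataclass
class Component:
    name: str
    healthy: bool

    def self_test(self) -> bool:
        """Stand-in for an elementary POST check of this unit."""
        return self.healthy

def post(component_banks):
    """Pick one working unit per component type; flag failures for the OS to re-check."""
    selected, flagged = {}, []
    for kind, spares in component_banks.items():
        for unit in spares:
            if unit.self_test():
                selected[kind] = unit       # first unit that passes is used for boot
                break
            flagged.append(unit)            # re-evaluated by the full OS after boot
        else:
            raise RuntimeError(f"no working {kind} left; cannot boot")
    return selected, flagged

banks = {
    "cpu":         [Component("cpu-0", False), Component("cpu-1", True)],
    "core-memory": [Component("core-0", True), Component("core-1", True)],
}
working, suspect = post(banks)
print(working["cpu"].name, [c.name for c in suspect])  # cpu-1 ['cpu-0']
```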

After boot up cycle

This computer system would probably fall back on a bank of relatively modern memory chips for actual operations after the initial bootstrap. It'd be up to the original POST operations to initially determine which banks of modern memory were still viable; then (as with the CPU) a more sophisticated utility in the OS would perform a more thorough mapping of the memory to see how much of it remained usable.

Data recovery

After the basic OS and self-check programming began operating, the computer would begin to activate its many RAID-like (redundant array of independent disks) data storage systems. The "drives" in the system would be special low-density (and probably solid-state) memory drives. The RAID system would verify the bit states across multiple drives and slowly reconstitute any damaged data in the storage systems.

Slow and reliable (tortoise) performance

In your scenario, the primary goal of the hardware would be reliability and data redundancy so the storage arrays for your data would be quite large and probably not all that fast. A set of fast "working" hard drive storage might be provided for daily operations.

The time it took for the RAID-like systems to perform the data validation checks and/or rebuild damaged sections could be quite lengthy (days, weeks, or substantially longer, depending upon the speed of the devices and the amount of data we're discussing). From a dramatic perspective this might allow the author to perform a variety of reveals through the course of the book as different sections of the data storage are flagged as "ready for use", loaded into the faster systems, and made available to the characters in the story.
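To get a feel for those timescales, here is a throwaway estimate; every number in it (archive size, number of mirrored copies, scrub rate) is an assumption chosen purely for illustration:

```python
archive_bytes = 100e12      # 100 TB of archived data (assumed)
copies_verified = 4         # each block read from four mirrors (assumed)
scrub_rate_bps = 100e6      # 100 MB/s effective verification rate (assumed)

seconds = archive_bytes * copies_verified / scrub_rate_bps
print(f"one full verification pass: ~{seconds / 86400:.0f} days")  # roughly six to seven weeks
```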

If the data reconstruction was imperfect it might allow the computer to provide false information too...

All good things come to an end

All hardware eventually fails.

Meaning that even if your computer boots perfectly when power is applied, mechanical hard drives fail, solid-state drives fail, memory fails, etc. Your computer that survived the centuries would eventually wear out and stop working. It should make that point to the inheritors of the system as soon as possible.

And another thing

In his lecture "There's Plenty of Room at the Bottom", Richard Feynman offered prizes for writing data in conventional analog form at extreme density - for instance, printing the Encyclopedia Britannica on the head of a pin. The only thing you'd need to read the data is a really good microscope. This sort of data's shelf life is potentially MUCH higher than that of digitally stored data, and you wouldn't have to worry about computer interoperability and changes in encoding standards as a condition of data retrieval!

"There's Plenty of Room at the Bottom" was a lecture given by physicist Richard Feynman at an American Physical Society meeting at Caltech on December 29, 1959. Feynman considered the possibility of direct manipulation of individual atoms as a more powerful form of synthetic chemistry than those used at the time. The talk went unnoticed and it didn't inspire the conceptual beginnings of the field. In the 1990s it was rediscovered and publicised as a seminal event in the field, probably to boost the history of nanotechnology with Feynman's reputation.

...

At the meeting, Feynman concluded his talk with two challenges, and he offered a prize of $1000 for the first individuals to solve each one. The first challenge involved the construction of a tiny motor, which, to Feynman's surprise, was achieved by November 1960 by William McLellan, a meticulous craftsman, using conventional tools. The motor met the conditions, but did not advance the art. The second challenge involved the possibility of scaling down letters small enough so as to be able to fit the entire Encyclopædia Britannica on the head of a pin, by writing the information from a book page on a surface 1/25,000 smaller in linear scale. In 1985, Tom Newman, a Stanford graduate student, successfully reduced the first paragraph of A Tale of Two Cities by 1/25,000, and collected the second Feynman prize.

Answered by Jim2B on November 7, 2021
