If you think having a backup is too expensive, try not having one

  • Thread starter: nsaspook
AI Thread Summary
A fire at a South Korean government data center may have permanently destroyed 858TB of government data because the affected G-Drive storage was not backed up. A senior officer overseeing the recovery effort died, underscoring the human cost of the incident. The Ministry of Personnel Management is hit particularly hard, having lost eight years of work materials that staff were instructed to keep on G-Drive. The discussion stresses that a backup strategy is worthless without a tested restore strategy: automated backups do not guarantee that data can actually be recovered, and recovery procedures need regular testing.
nsaspook
Science Advisor
https://www.datacenterdynamics.com/...-for-good-after-south-korea-data-center-fire/

858TB of government data may be lost for good after South Korea data center fire
Destroyed drive wasn't backed up, officials say
Meanwhile, a government worker overseeing efforts to restore the data center has died after jumping from a building.
As reported by local media outlet The Dong-A Ilbo, the 56-year-old man was found in cardiac arrest near the central building at the government complex in Sejong City at 10.50am on Friday, October 3. He was taken to hospital, and died shortly afterwards.
The man was a senior officer in the Digital Government Innovation Office and had been overseeing work on the data center network. His mobile phone was found in the smoking area on the 15th floor of the government building.

https://www.chosun.com/english/national-en/2025/10/02/FPWGFSXMLNCFPIEGWKZF3BOQ3M/

The Ministry of Personnel Management, where all affiliated officials use the G-Drive, is particularly affected. A source from the Ministry of Personnel Management said, “It’s daunting as eight years’ worth of work materials have completely disappeared.”
...
It provided 30GB per public official. At the time, the Ministry of the Interior and Safety also issued guidelines to each ministry stating, “All work materials should not be stored on office PCs but should be stored on the G-Drive.”

G drive as in GONE drive.
 
  • Sad
  • Like
  • Wow
Likes bhobba, Greg Bernhardt, Rive and 1 other person
Long ago, before backup utilities were so widely available, I worked in a computer lab that did backups to tape every night. Disk capacities were not big then. The administrators used a program they had written, which looped over every file, copying it to tape, and finally rewound the tape for them to dismount. Every night they started many backups on many computers and later dismounted all the tapes. They did that for a long time.
One day, an administrator noticed the tapes seemed to rewind many times in a single run. They saw that the program had the rewind inside the loop after every file copy. So only the last file remained on the backup tape and all others had been overwritten every night.

The moral of the story is to check that archived files can actually be retrieved.
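A minimal sketch of that bug, written in Python against a simulated tape (not the original utility): putting the rewind inside the copy loop overwrites everything but the last file, while rewinding once after the loop keeps them all.

```python
# Minimal sketch (not the original program) of the rewind-inside-the-loop bug.
# A "tape" here is just a list with a position pointer; rewinding resets the position,
# so anything written afterwards overwrites what came before.

class Tape:
    def __init__(self):
        self.blocks = []
        self.pos = 0

    def write(self, data):
        # Writing at the current position truncates anything beyond it.
        self.blocks[self.pos:] = [data]
        self.pos += 1

    def rewind(self):
        self.pos = 0

def buggy_backup(files, tape):
    for name, data in files.items():
        tape.write((name, data))
        tape.rewind()          # BUG: rewinding after every file overwrites the previous copy

def correct_backup(files, tape):
    for name, data in files.items():
        tape.write((name, data))
    tape.rewind()              # rewind once, after all files have been written

files = {"a.txt": "alpha", "b.txt": "beta", "c.txt": "gamma"}

t1, t2 = Tape(), Tape()
buggy_backup(files, t1)
correct_backup(files, t2)
print(t1.blocks)   # [('c.txt', 'gamma')]  -- only the last file survives
print(t2.blocks)   # all three files
```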
 
Last edited:
  • Like
  • Haha
Likes bhobba, russ_watters, jack action and 2 others
Years ago my wife and I arrived at Heathrow to find our airline processing check-ins on paper after a massive IT failure. The story we heard was that a data center fire had destroyed the primary system, and although they had a backup, it had been kept in the next room.
 
  • Like
  • Wow
Likes sophiecentaur and FactChecker
In the latter part of my career, it became increasingly obvious that many data centre operations were dependent on the reliability of modern disk technology. They had a "backup strategy", of course, but little in the way of a "restore strategy". The first example I remember was from about 2005, where the tape storage facility was next to the disk storage units and there was no way to manually extract tapes from it. A fire in that case could have destroyed everything, including all backups.

Restoring from tape became increasingly rare, with clever disk-based solutions like snapshots meaning that tape had become a last line of defence. Increasingly, the backups were completely automated, to the point where no one seemed to think very much about testing the restoration of data. It was assumed, more or less, that if the backups were running, then the restoration of data from them would be possible.

There was always a difficulty in organising a full restore from backup. If the test went wrong, then in principle the data was gone! Often a full restore was tested before a system went live - but once the system was live, it could take quite a bit of ingenuity to test a restore properly. In the old days, when a system required a single Unix server with a tape backup every night, it was relatively easy to restore onto a similar server somewhere else and test the restored system. It might take one techie less than a day to do the whole off-site restoration, using tapes that were stored off-site, where nothing from the live site was needed. In the 1990s we did this sort of thing regularly - although we always kept our fingers crossed when a restore from tape was involved.

But, as systems became increasingly complex and interdependent - with an environment of perhaps several hundred virtual servers - creating a test installation was a project in itself. In one of the last projects I worked on, it took about six weeks to configure the server infrastructure. If an additional user-test environment was required, then it took about six weeks for the various Windows, Linux, Oracle and other techies to do their thing. Then, all the application software had to be loaded. (I think it cost about £1 million per environment!) By contrast, in the old days, creating a new test environment could take a couple of hours - copy the executables, copy an existing database, do some configuration, update the backup schedule(!) and that was it. And it came at no additional cost - as long as the server had enough memory for another Oracle database. In one case, we had a single Unix server with nine test environments on it.

I worked on a project in about 2012 where we did a major upgrade by going live on what had been the "disaster recovery" (DR) site and swapping the roles of live and DR data centres. That all went well, but it was a controlled project over several months and not a DR test as such.

In general, there seemed to be a strong reluctance to do a DR test, even where the facilities were available and an annual test was part of the contract. People always seemed to have better things to do! Also, there was a new generation of system support techies who had a different outlook on things. I was telling anyone who would listen that we were essentially dependent on the reliability of the technology. By 2014, when I retired, I strongly believed that a fire at a data centre would have been a real disaster, with little chance of full systems and data recovery.
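
A small sketch of the kind of automated restore check this argues for: unpack the newest archive into a scratch directory and compare checksums against the live tree. The paths, the tar-based format, and the directory layout are assumptions for illustration, not anything from the systems described above.

```python
# Hypothetical restore-verification sketch: extract the newest backup archive into a
# scratch directory and compare file checksums against the source tree.
# Paths and the tar format are assumptions; it presumes the archive was created as
# "tar -C <parent-of-source> ... <source-dir-name>".

import hashlib
import tarfile
import tempfile
from pathlib import Path

def sha256(path: Path) -> str:
    h = hashlib.sha256()
    with open(path, "rb") as f:
        for chunk in iter(lambda: f.read(1 << 20), b""):
            h.update(chunk)
    return h.hexdigest()

def verify_restore(archive: Path, source_root: Path) -> bool:
    """Restore `archive` into a temp dir and check every source file came back intact."""
    with tempfile.TemporaryDirectory() as scratch:
        with tarfile.open(archive) as tar:
            tar.extractall(scratch)                      # the actual "restore" step
        ok = True
        for src in source_root.rglob("*"):
            if src.is_file():
                restored = Path(scratch) / src.relative_to(source_root.parent)
                if not restored.exists() or sha256(restored) != sha256(src):
                    print(f"MISMATCH: {src}")
                    ok = False
        return ok

# Example (assumed paths):
# verify_restore(Path("/backups/gdrive-latest.tar.gz"), Path("/data/gdrive"))
```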
 
Last edited:
  • Like
  • Informative
Likes bhobba, berkeman, Ibix and 1 other person
I've also set up systems from scratch where my immediate boss seemed to think that just RAIDing the database server was enough. My argument that the eggs were still all in one basket in case of fire or other force majeure went unheeded until I went over his head. That didn't make me popular, ironically.
 
  • Like
  • Sad
Likes nsaspook and FactChecker
At my first job in 1973, on a multi-computer, multi-tasking database system using HP 2100s, we had the advantage that the system was offline from 3 am to 7 am each day, allowing for offline backups. During that window, every disk pack was backed up to another disk pack (there were 10 drives, so 5 backups ran in parallel), and the backup code included a check to make sure the copy was being made in the correct direction. I don't recall how often a set of backup drives was exchanged with an offsite location.
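A hypothetical sketch of such a direction check: before copying pack to pack, compare a generation counter in each pack's header and refuse to copy an older image over a newer one. The header layout here is invented for illustration; the 1973 system surely did it differently.

```python
# Hypothetical "backup direction" check: each pack image starts with a generation
# counter; copying is refused if it would overwrite a newer pack with an older one.
# The 4-byte header layout is invented for illustration only.

import struct

HEADER_FMT = ">I"  # 4-byte big-endian generation counter at the start of each pack image

def generation(pack_path: str) -> int:
    with open(pack_path, "rb") as f:
        (gen,) = struct.unpack(HEADER_FMT, f.read(4))
    return gen

def copy_pack(source: str, destination: str) -> None:
    if generation(source) < generation(destination):
        raise RuntimeError(
            f"Refusing to copy: {source} is older than {destination} "
            "(backup would run in the wrong direction)"
        )
    with open(source, "rb") as src, open(destination, "wb") as dst:
        while chunk := src.read(1 << 20):
            dst.write(chunk)
```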

About once a month, all of our source files were punched into Mylar paper tape (lifespan something like 100 years), put in shoe boxes and stored in safety deposit boxes at banks.

Since the 1960s, mainframe sites using tape backups have routinely cycled backup tapes offsite.

I assume most server sites now store backups at physical offsite locations or at external cloud storage servers.

Image of one side of the computer room from that 1973 job:

[Attached image: octal.webp]
 
Last edited:
  • Like
Likes Rive, sbrothy and FactChecker
Now, that's a computer! Not this modern cell phone or laptop stuff! :kiss:
 
  • Haha
Likes Rive and FactChecker
sbrothy said:
Now, that's a computer! Not this modern cell phone or laptop stuff! :kiss:
(sigh) Nostalgia. The good-old days of "big iron". And a smart-phone today has a lot more capability than a room of such computers!
 
FactChecker said:
(sigh) Nostalgia. The good-old days of "big iron". And a smart-phone today has a lot more capability than a room of such computers!
Yes. 8-inch floppy disks. Magnetic tape. And this was light-years beyond punch cards! Dial-up modems. Pre-internet BBSs. Measuring your speed in baud!

As a Japanese WW2 survivor says in Archer (adult cartoon):

“There cannot be an electronic brain in this room [Archer is referring to his cell-phone]. It’s simply not big enough! By the gods, it’s only 8x8 meters!”

:smile:

EDIT: or something to that effect.
 
  • Haha
Likes FactChecker
  • #10
Ibix said:
Years ago my wife and I arrived at Heathrow to find our airline was processing check ins on paper after a massive IT failure. The story we heard was it was due to a data center fire that had destroyed the primary system, and although they had had a backup it had been in the next room.
So many stories like this one. The trouble is that it's a catch-22 when it comes to building a good system. Senior staff and government ministers tend not to have a clue about the nuts and bolts; their priorities are costs and timescales, and they are hopeless about things like security and their own passwords. Designers know the essential details of a project but ignore costs and how 'dumb' the rest of the system can be; left to themselves, they design for technical perfection. It's a recipe for disaster, and we keep getting disasters.
 
  • #11
And, whilst on the subject: how many versions are there of my valuable (?) files which Apple keeps on the cloud?
 
  • #12
sophiecentaur said:
And, whilst on the subject: how many versions are there of my valuable (?) files which Apple keeps on the cloud?
I'd like to think they're spread out over several colocated computer centers, but who knows?
 
  • #13
sbrothy said:
I'd like to think they're spread out over several colocated computer centers, but who knows?
And heh, valuable? You mean your memes and your cat photos? :woot:
 
  • Haha
Likes sophiecentaur
  • #14
sbrothy said:
I'd like to think they're spread out over several colocated computer centers, but who knows?
I read that as chocolate centres. :wink::wink::wink:

My passwords are very important, which would be problematical. The important ones are, of course, all different and non-memorable.
 
  • Like
  • Agree
Likes sbrothy and FactChecker
  • #15
sophiecentaur said:
I read that as chocolate centres. :wink::wink::wink:

My passwords are very important, which would be problematical. The important ones are, of course, all different and non-memorable.
Absolutely! My iPhone has passwords, contact information, messages, map locations, etc. It would be a real problem if they were all gone at once. A lot of photos are easy for me to back up to my computer, but not the rest of it.
Can the data on Android phones be backed up to a PC?
 
  • #16
sophiecentaur said:
I read that as chocolate centres. :wink::wink::wink:

My passwords are very important, which would be problematical. The important ones are, of course, all different and non-memorable.

[Attached image: xkcd strip on passwords]
 
  • Like
Likes sophiecentaur and Baluncore
  • #17
There are nice memory tricks for passwords that are good -- until you have a dozen to remember and you are occasionally forced to change some of them.
 
  • #18
FactChecker said:
There are nice memory tricks for passwords that are good -- until you have a dozen to remember and you are occasionally forced to change some of them.
Mnemonics.

"Mum very easily made a jam sandwich using no peanutbutter."

=

Mercury, Venus, Earth, Mars, The Asteroid Belt, Jupiter, Saturn, Uranus, Neptune, Pluto. :smile:
 
  • Like
Likes FactChecker
  • #19
sbrothy said:
Mnemonics.

"Mum very easily made a jam sandwich using no peanutbutter."

=

Mercury, Venus, Earth, Mars, The Asteroid Belt, Jupiter, Saturn, Uranus, Neptune, Pluto. :smile:
That is sort of what I do. I keep all mine in PasswordSafe. Most that I very rarely use can be any random thing and I can look them up when I need them.
The ones that I use often and need to memorize are tied to lyrics of songs I know. Each one would have the first letter of each syllable in its lyric line and some special characters/numbers scattered in a pattern.
It surprises me how long the passwords get and I can still remember them.
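
A hypothetical sketch of that scheme: take the first letter of each syllable of a lyric line and splice extra characters in at fixed positions. The syllable split and the insertion pattern below are made-up examples, not anyone's actual rule.

```python
# Hypothetical sketch of deriving a password from a lyric line you already know by heart:
# first letter of each syllable, plus a few digits/special characters at fixed positions.

def lyric_password(syllables, inserts):
    """syllables: list of syllables from a memorised lyric line.
    inserts: mapping of position -> extra character to splice in."""
    letters = [s[0] for s in syllables]          # first letter of each syllable
    for pos in sorted(inserts):
        letters.insert(pos, inserts[pos])        # scatter extra characters at fixed spots
    return "".join(letters)

# "Twin-kle twin-kle lit-tle star" split into syllables (made-up example line)
syllables = ["Twin", "kle", "twin", "kle", "lit", "tle", "star"]
print(lyric_password(syllables, {2: "7", 5: "!"}))   # -> "Tk7tk!lts"
```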
 
  • #20
  • #21
FactChecker said:
There are nice memory tricks for passwords that are good -- until you have a dozen to remember and you are occasionally forced to change some of them.
I use basically the same pattern for all my passwords. Except for my e-mail because from there I can reset them all. So guess my password to PF and you may have a couple to other sites (if you can guess which ones), but you’re not getting into my e-mail.

Even though I don’t even follow the logic laid out in the XKCD strip above. :smile:
 
  • #22
sbrothy said:
So guess my password to PF and you may have a couple to other sites :smile:
PF? That's how I do it and still make them all different. ;-)
 
  • #23
PeroK said:
People always seemed to have better things to do!

How true it is.

At one time, I was the computer security officer for the Australian Federal Police (by law, every government agency had to have one, and I was it, until the head of IT decided, correctly IMHO, it was really his responsibility, and I reported to him on security matters while still doing the work).

Anyway, we religiously kept backups, as good computer security practice dictates (as well as DBA practice - I basically just verified that the DBA had done it).

One day, I was asked to conduct a risk evaluation of the backups stored on-site at the computer centre and determine whether they needed to be relocated.

I did the evaluation, and as was pretty obvious, concluded that they should be in a different location.

The decision was made (I conducted the evaluation; upper management makes the decisions) that they were to be kept at the police training college. That seems ok, right? The issue was that the computer centre was at the police training college. I was flabbergasted, but upper management makes those decisions, not me.
Relevant to the point about people always having better things to do: nothing was done. The story I heard was that someone had checked that the computer centre was already at the college - problem solved.

That was over 30 years ago now, so hopefully someone has since realised a better solution was needed.

Thanks
Bill
 
  • #24
FactChecker said:
o0) PF? That's how I do it and still make them all different. ;-)
Well, I cycle between a couple of variations on patterns. Something I learned working alongside a statistician as a developer. He was so scatterbrained (or rather so buried in his work) that he would forget to pick up his children from kindergarten. He didn’t hear his phone either (even though it was right beside him and I’d turned it up to full volume). When it rang out, mine would ring, and it would be his wife asking me to tell him to get going. Sometimes I had to actually go over and touch him to get him to snap out of it. I’ve never seen anyone so concentrated. I dunno what it says about me that they’d usually give the jobs he left to me. I guess I should be flattered, because no one else dared touch them. But that was perhaps just because they were boring.

This didn’t have much to do with passwords. So it goes.
 
  • #25
bhobba said:
How true it is.

At one time, I was the computer security officer for the Australian Federal Police (by law, every government agency had to have one, and I was it, until the head of IT decided, correctly IMHO, it was really his responsibility, and I reported to him on security matters while still doing the work).

Anyway, we religiously kept backups, as good computer security practice dictates (as well as DBA practice - I basically just verified that the DBA had done it).

One day, I was asked to conduct a risk evaluation of the backups stored on-site at the computer centre and determine whether they needed to be relocated.

I did the evaluation, and as was pretty obvious, concluded that they should be in a different location.

The decision was made (I conducted the evaluation; upper management makes the decisions) that they were to be kept at the police training college. That seems ok, right? The issue was that the computer centre was at the police training college. I was flabbergasted, but upper management makes those decisions, not me.
Relevant to the point about people always having better things to do: nothing was done. The story I heard was that someone had checked that the computer centre was already at the college - problem solved.

That was over 30 years ago now, so hopefully someone has since realised a better solution was needed.

Thanks
Bill
Yeah. That sounds like real life.
 
