Crime Databases IT

Angry IT Admin Wipes Employer's Databases, Gets 7 Years In Prison (bleepingcomputer.com) 83

Han Bing, a former database administrator for Lianjia, a Chinese real-estate brokerage giant, has been sentenced to 7 years in prison for logging into corporate systems and deleting the company's data. BleepingComputer reports: Bing allegedly performed the act in June 2018, when he used his administrative privileges and "root" account to access the company's financial system and delete all stored data from two database servers and two application servers. This immediately crippled large portions of Lianjia's operations, leaving tens of thousands of its employees without salaries for an extended period and forcing a data restoration effort that cost roughly $30,000. The indirect damage from the disruption of the firm's business, though, was far greater, as Lianjia operates thousands of offices, employs over 120,000 brokers, owns 51 subsidiaries, and has an estimated market value of $6 billion.

  • Backups? (Score:3, Insightful)

    by King_TJ ( 85913 ) on Tuesday May 17, 2022 @05:15PM (#62544282) Journal

    This scenario plays out every so often, and you have to ask: where are the database backups? I mean, it's understandable that this would disrupt business temporarily and involve a cost in restoring everything. But leaving employees "without salaries for an extended period" sounds like regular backups weren't being done here.

    • Re:Backups? (Score:5, Insightful)

      by MightyMartian ( 840721 ) on Tuesday May 17, 2022 @05:20PM (#62544304) Journal

      What I've found time and time again is that when people say they have backups, no one is ever actually testing the backups. I've seen backup systems that had stopped working months earlier, and it was only discovered because someone deleted a directory, and when the IT team went to restore the folder from backup, lo and behold, the most recent backup was two months old. When I was directly running an IT department, weekly testing of the backups was required, because a status box on a screen that shows green often doesn't mean a damned thing.
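
      A minimal sketch of what such a weekly restore drill might look like, assuming a PostgreSQL dump directory and a throwaway scratch database; the paths, table name, and sanity query are hypothetical, not anything from the story:

      # Hypothetical weekly restore drill: restore the newest dump into a scratch
      # database and run a basic sanity check, instead of trusting a green status box.
      import glob
      import subprocess
      import sys

      DUMPS = sorted(glob.glob("/backups/payroll/*.dump"))   # hypothetical backup path
      SCRATCH_DB = "restore_test"                             # throwaway database

      def run(cmd):
          print("+", " ".join(cmd))
          subprocess.run(cmd, check=True)

      if not DUMPS:
          sys.exit("No dumps found at all -- the backup job itself is broken.")

      latest = DUMPS[-1]
      run(["dropdb", "--if-exists", SCRATCH_DB])
      run(["createdb", SCRATCH_DB])
      run(["pg_restore", "--no-owner", "-d", SCRATCH_DB, latest])

      # Sanity check: the restored copy must contain recent rows, not a stale snapshot.
      recent = subprocess.check_output(
          ["psql", "-At", "-d", SCRATCH_DB, "-c",
           "SELECT count(*) FROM payroll WHERE pay_date > now() - interval '35 days'"])
      if int(recent) == 0:
          sys.exit("Restore 'worked' but the data is stale -- treat the backup as broken.")
      print("Restore drill passed using", latest)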

      • I agree 100%. I tell people that about 90% of the time, an untested backup is worthless.

        Let's just assume for a moment that you are lucky enough that your backup actually worked and that you can get back the data that you saved. Was the whole computer backed up, or was it just the data in the database? Does anyone know how to build the computer from scratch and configure everything that is needed? Do you even know what is needed?

        The only surefire way to know that your backup is working is to s
        • by ls671 ( 1122017 )

          I simply use RAID 10 as backup strategy, no backup testing needed, much simpler this way /s

          • by GoTeam ( 5042081 )

            I simply use RAID 10 as backup strategy, no backup testing needed, much simpler this way /s

            lol, my heart rate increased as I read that. Thank goodness for the /s!

          • I use RAID 0 for this. /s

            • by ls671 ( 1122017 )

              But, but, but... RAID 0 means no (0) backups! RAID 10 has 2 levels of backups so you are guaranteed to never lose data in case of a ransomware attack! /s

          • So basically, you have ZERO backups and are subject to corruption, catastrophic failure, and malicious actions. The only thing you are protected from is non-corrupting one- or two-drive failures.
            • And yes, I just realised I missed your /s. Sadly, though, what you said in jest is something I regularly hear in IT departments.
      • Re: Backups? (Score:5, Interesting)

        by AcidFnTonic ( 791034 ) on Tuesday May 17, 2022 @06:53PM (#62544526) Homepage

        I wanted to do that, but I was far too busy, because today was the day I had to change my furnace filters; my car also hit exactly the long-term service interval mileage, so I've been busy inspecting all of my rubber grommets like the good book said.

        This also was the once a month I had to go around pressing the test button on all my smoke alarms, and I had to change that little battery inside my furnace backup board because the book keenly warned me to do this exactly ten years after purchase. Same day I had to go around and test all the HVAC vents for balanced air flow, prior to checking that all rotating fans were turning in the most efficient direction for this season. It also is exactly 3 years since my last chimney sweeping, so I had to get that done, but not before taking the grease gun to lubricate all of the grease points on my lawn mower as it had asked for in its manual. But then I was too busy replacing seat belts because they were older than 25 years, which technically means they can expire. But then I realized my tire rubber was older than 5 years, which also means it technically isn't good either, so it too had to go...

        But I couldn't do that because I was balancing my checkbook, before performing a bunch of income calculations to make sure this year's dependent filings are still correct-o-mondo. But the city council meetings I was urged to attend are happening, so you know.

        • by Kokuyo ( 549451 )

          I see your point, I think, and you are right. For a private entity.

          But this is a six-billion-dollar corporation. They should absolutely be able to pay someone to see this task as his primary raison d'être.

          I am a storage, virtualization and compute engineer. Call it cloud if you want. I checked compliance almost daily and did restore tests once a month.

          But granted, it was random testing, not all of it. Because that would have made my brain ooze out of my ears.

          Believe it or not, everybody talks about testing backups

          • I see your point, I think, and you are right. For a private entity.

            But this is a six-billion-dollar corporation. They should absolutely be able to pay someone to see this task as his primary raison d'être.

            What if that person is the one who decides to delete everything?

            Setting things up to deal with an accidental issue is one thing, though far fewer places clear that bar than they should. Making sure that the people dealing with that can't also break things is another level of work, and non-technical folks are going to have a very hard time determining whether it is being done correctly. Security is hard.

            • by vadim_t ( 324782 )

              Backups should be shipped off-site and be on offline media. That way even if the sysadmin goes nuts one day, there's still an archive they can't touch easily.
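
              A minimal sketch of that idea using S3 Object Lock in compliance mode, so even the credentials that upload a backup cannot delete it or shorten its retention before the date passes. The bucket name, key, local path, and retention period are hypothetical, and the bucket has to have been created with Object Lock enabled:

              # Hypothetical off-site, write-once backup upload.
              import boto3
              from datetime import datetime, timedelta, timezone

              BUCKET = "example-offsite-backups"           # hypothetical bucket with Object Lock on
              KEY = "finance-db/2018-06-04.dump.gpg"       # hypothetical key for an encrypted dump
              RETENTION_DAYS = 90

              s3 = boto3.client("s3")
              with open("/backups/finance/2018-06-04.dump.gpg", "rb") as dump:
                  s3.put_object(
                      Bucket=BUCKET,
                      Key=KEY,
                      Body=dump,
                      # COMPLIANCE mode: nobody, including the uploader, can remove the
                      # object or shorten the lock until the retention date passes.
                      ObjectLockMode="COMPLIANCE",
                      ObjectLockRetainUntilDate=datetime.now(timezone.utc)
                                                + timedelta(days=RETENTION_DAYS),
                  )
              print(f"Uploaded {KEY} with a {RETENTION_DAYS}-day compliance lock")

              Offline tape rotated to another site gets you the same property without any cloud dependency.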

        • That's a cry for help if I've ever heard one. Have you considered outsourcing?

          I know you are going for funny, but that's precisely the reason a service industry exists in the world. Inspecting the car? Never done it, pay someone to do it yearly. Furnace filters, gas fittings, and testing? Never done it, some guy comes and does it every 2 years.

          In both cases I don't even need to keep track. They send reminders. I also don't grow my own food, sew my own clothes, or teach my own kids.

          The service industry exists

        • Your post reminds me of reading Martha Stewart's monthly calendar in this fun book [amazon.com].
      • Also, companies don't always do what they're supposed to. As long as the profits keep coming in, they're not too worried about the fire insurance that they forgot to pay for. Sure, you may think it is unthinkable, but it happens. It's also somewhat invisible; everyone assumes someone else does it.

        I used to do backups back in the 80s for a major defense contractor. I did not get a lot of instructions; there were no written procedures. You just emulated what someone else did. There's a label on the

      • It's easy to look at your backup scheduling program every day and see if the backups ran the previous night, but it's much harder to tell if the backups are any good. Often, the backup program runs, and "something" gets stored to tape, but when you need to use that tape to restore data, too many times there is just nothing to restore and the restore program just shrugs. Why is this still a thing in the 21st century? Why do backups *LOOK* like they run, but actually don't?
      • by quetwo ( 1203948 )

        I did some work for a school district in the mid-90's. Their IT group put out an RFP for a backup solution, and bought the absolute cheapest one they could find. Things were going fine until one day they did get a virus on the network that wiped out all their common storage (pretty much their grading system, attendance records, etc.) They later found out that while they were backing data up each night and rotating tapes each night offsite, their RFP never covered the restore function of the software. Th

    • Re:Backups? (Score:5, Interesting)

      by bloodhawk ( 813939 ) on Tuesday May 17, 2022 @05:22PM (#62544314)
      It's not exactly clear whether they restored from backup or not. But given that the restoration cost was only $30k, I think they did have backups; the $30k was probably mostly spent recovering data between the last backup and the time of deletion. So to me it sounds like backups were being done.
      • > It's not exactly clear whether they restored from backup or not. But given that the restoration cost was only $30k, I think they did have backups; the $30k was probably mostly spent recovering data between the last backup and the time of deletion. So to me it sounds like backups were being done.

        That's probably true, but a recent document dump from NAIDA showed that Shezheng Li is making $23K a year, and she's the top virologist in the country. $30K probably covered a lot of labor. I'd gu

    • Given enough access, the disgruntled employee would have deleted those backups as well.

      This then goes into a deeper level of distributed and restricted access, and offsite backups.
      • by gweihir ( 88907 )

        They should not be able to. In a well-run IT organization of this size, no single employee should be able to erase live systems and backups. Backups should either be write-once with access by a different team only, or offline, again with access only by a different team. But yes, doing IT right costs money, and many moron managers still do not realize that doing it wrong costs far more money in the long run.

    • by gweihir ( 88907 )

      I think they had backups. That "extended period" does sound like bad negligence though, I agree. They probably never ran any restore tests, and that is just wrong. The reason you do these tests is that unless you do them regularly (e.g. every 2 or 3 years), you are basically assured of running into some serious, unexpected problems.

    • by BigZee ( 769371 )
      Even short periods of downtime can be very costly.

      Interestingly, it's starting to become common for staff to be put on gardening leave immediately after they are made redundant or resign.

    • This scenario plays out every so often, and you have to ask; Where are the database backups?

      Remember, this is a database administrator, deleting the databases he administered. Even if he didn't make them himself, he likely knows exactly where the backups are and how to delete or corrupt them if it can be done remotely.

    • This is a head admin with root access. Of course he deleted any backups as well.

  • by MightyMartian ( 840721 ) on Tuesday May 17, 2022 @05:18PM (#62544298) Journal

    Needless to say, that's why you have a rigorous and frequently tested backup system. Such an extraordinary internal attack is almost certainly going to create downtime and chaos, but you can at least partially mitigate that. At the same time, with such a large firm, one has to ask why one person has sufficient privileges to do that much damage. Reality is that if someone has that level of access, right up to root access on servers, they probably have the ability to muck up backups as well. So lots of things have gone wrong here. This isn't a defense of the asshole who did it, who sounds like he got what he deserved, but that he was able to do it and cause such widespread damage indicates some seriously sloppy security on his employer's part. This isn't the first time we've seen a disgruntled IT worker smash systems to pieces.

    • Re:Backups (Score:4, Interesting)

      by jabuzz ( 182671 ) on Tuesday May 17, 2022 @05:40PM (#62544366) Homepage

      As you say, if you know what you are doing, the backups would be toast too. And if you are not stupid, in 2022 you make it look like a ransomware attack coming from outside any machine you have legitimate control of, and you are in the clear.

      Look, if you are going to do bad things, first hack a computer elsewhere, then hack your target proxying through that machine. When the hack is over, nuke your proxy machine. For even greater security, make your launch machine, say, a Raspberry Pi, and after the hack physically destroy the microSD card.

      None of this is fricking rocket science.

      • by gweihir ( 88907 )

        If that IT organization is competent, then in 2022 there are write-once backups and/or offline backups (and also off-site) and these are managed by a different team, specifically because of ransomware and insider threats. Those people did it on the cheap and it became hugely expensive.

    • Disasters serve the purpose of motivating a company to pay more attention to how things are done. Hindsight is common, foresight is rare.

  • by freedom_surfer ( 203272 ) on Tuesday May 17, 2022 @05:18PM (#62544300) Homepage

    Was there prison time for the bloke who said there were backups and they had been tested? :P

    • by Anonymous Coward
      Yes, if that person happened to be Han Bing :)
    • Don't know, but before you ask that question, shouldn't you ask if the backups existed and worked? Nowhere does it say they didn't. Just that some guy trashed a server and was sent to prison for it.

  • Stupid move (Score:4, Insightful)

    by 93 Escort Wagon ( 326346 ) on Tuesday May 17, 2022 @05:20PM (#62544306)

    It's okay to hate your job, or your supervisor, or your coworkers. But if that's the case, even if they're all dicks... you should act like a professional and just quit. Don't throw a tantrum, and don't destroy your own future - even if you think it'll feel good in the very short term.

    Be the adult in the room.

  • And as long as it wasn't premeditated, I'd do less time. It does not pay to fuck with the ruling class. This is a reminder to anyone and everyone who might want to strike back in anger to stay in their place and suck it down.
  • This is why you should integrate your production backups into your QA environment by baking in and automating the obfuscation/munging operation, so you're constantly testing your production backups... even beyond the once a year when the auditors force you to do it and show them a screenshot...
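
    A minimal sketch of that obfuscation/munging pass, assuming the production dump has already been restored into a QA PostgreSQL instance; the tables, columns, and connection string are hypothetical:

    # Hypothetical post-restore masking pass for the QA copy of production data.
    # Running it on every QA refresh doubles as a continuous restore test.
    import psycopg2

    MASKING_STATEMENTS = [
        # Replace personal data with deterministic fakes keyed on the row id,
        # so joins and bug reproductions still work after masking.
        "UPDATE customers SET email = 'user' || id || '@example.invalid'",
        "UPDATE customers SET phone = '+10000000000'",
        "UPDATE payroll SET bank_account = 'MASKED-' || employee_id",
    ]

    conn = psycopg2.connect("dbname=qa_copy host=qa-db.internal")  # hypothetical DSN
    with conn, conn.cursor() as cur:
        for statement in MASKING_STATEMENTS:
            cur.execute(statement)
            print(f"{cur.rowcount:>8} rows: {statement}")
    conn.close()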

  • by suso ( 153703 ) * on Tuesday May 17, 2022 @05:39PM (#62544364) Journal

    Have better offboarding protocols.

    • by Guru2Newbie ( 536637 ) on Tuesday May 17, 2022 @06:39PM (#62544498) Homepage
      Better offboarding, as in, "walk the plank"?
      • by suso ( 153703 ) *

        Better offboarding, as in, "walk the plank"?

        No, better offboarding of employees when they leave the organization (i.e., disabling their accounts and changing passwords they had access to before they walk out the door). It sounds like this guy left and they didn't change the root password to the database server. That was dumb, but then again there are many companies that have this problem because they are too lazy to fix it.
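
        A minimal sketch of the account side of that offboarding step, assuming Linux logins plus a shared PostgreSQL superuser whose password gets rotated on departure; the username, role list, and vault step are hypothetical:

        # Hypothetical offboarding script: run it the moment someone hands in their badge.
        import secrets
        import subprocess

        DEPARTING_USER = "jdoe"            # hypothetical login
        SHARED_DB_ROLES = ["postgres"]     # shared/root-style roles the person knew

        def run(cmd):
            print("+", " ".join(cmd))
            subprocess.run(cmd, check=True)

        # 1. Lock the personal account and expire it immediately.
        run(["usermod", "--lock", "--expiredate", "1970-01-02", DEPARTING_USER])

        # 2. Remove their SSH keys so stale authorized_keys entries stop working.
        run(["rm", "-f", f"/home/{DEPARTING_USER}/.ssh/authorized_keys"])

        # 3. Rotate every shared credential the person ever had access to.
        for role in SHARED_DB_ROLES:
            new_password = secrets.token_urlsafe(24)
            run(["psql", "-d", "postgres", "-c",
                 f"ALTER ROLE {role} PASSWORD '{new_password}'"])
            # Store new_password in the team's vault here (left out of this sketch).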

  • by Osgeld ( 1900440 ) on Tuesday May 17, 2022 @05:47PM (#62544370)

    Anyone bitching about backups obviously did not look at the scale of the company.

    It's going to take some time and money to rebuild from backup media, which is usually large but slow.

    • by gweihir ( 88907 )

      Actually, that is why a company this size should run regular recovery tests. I guess they are not regulated, or that would be a requirement. Because you always find something that does not work when you try to restore for the first time, or after not having done a real test for a while. My guess is that they had to replay transaction logs as well; $30k is more than just getting a DB dump back in. And they probably had never done that and it went wrong in the first few attempts.

      But yes, companies are sloppy. Some ti

    • by jaa101 ( 627731 )

      Backups that take more than, say, a week to restore are not fit for purpose. If you have a very large database then your backup system needs to have the bandwidth to restore it completely in a reasonable amount of time. The definition of "reasonable" doesn't change with the size of the database. Disaster recovery plans should consider this issue.
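
      A back-of-the-envelope version of that check; the database size, sustained restore throughput, and recovery time objective below are made-up numbers:

      # Hypothetical restore-time estimate: if this exceeds the recovery time
      # objective, the backup system is not fit for purpose, however green it looks.
      DB_SIZE_TB = 40                   # made-up database size
      RESTORE_THROUGHPUT_MB_S = 300     # made-up sustained restore speed
      RTO_HOURS = 24                    # made-up recovery time objective

      restore_hours = DB_SIZE_TB * 1024 * 1024 / RESTORE_THROUGHPUT_MB_S / 3600
      verdict = "OK" if restore_hours <= RTO_HOURS else "NOT fit for purpose"
      print(f"Full restore: ~{restore_hours:.1f} h against an RTO of {RTO_HOURS} h -> {verdict}")

      With those numbers the full restore takes roughly 39 hours, so either the restore path needs more bandwidth or the database needs to be split into independently restorable pieces.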

  • I think there is some level of prevention, but an IT admin who goes on a power trip like this is very dangerous; even backups are not the end-game solution.
    • by gweihir ( 88907 )

      The right backup is. Offline or write-once, off-site, with multiple copies. Managed by a team responsible only for that, with a four-eyes principle for access. Yes, that is more expensive. But it is not rocket science.

  • If they had immutable cloud backups taken hourly, then even though this person had administrative rights, the backups could have been quickly and easily restored. Not quite certain where the $30K cost came from.
    • The $30k cost is from the remaining IT people telling the desperate C-suite people that they need $30k worth of new hardware to restore the data ASAP.
    • by gweihir ( 88907 )

      Indeed. My guess is that the $30k was because they had to replay transaction logs, because the backups they could get were older. And they may have never done that and needed assistance.

  • Idiots abound! (Score:5, Insightful)

    by gillbates ( 106458 ) on Tuesday May 17, 2022 @07:27PM (#62544578) Homepage Journal

    The real mystery isn't how a single rogue employee crippled the company, but how a 6 billion dollar company ended up with a ten cent disaster recovery plan.

    It seems that even had this guy not been malicious, a clumsy employee with a cup of coffee could have accidentally done the same thing.

    • by AmiMoJo ( 196126 )

      If this only cost them $30k to restore, they probably had a pretty good backup plan in place.

      It's rarely just a case of restoring a database snapshot. They need to figure out which is the most recent good snapshot and what data between then and when he started deleting stuff has been lost. Often drives fail when restoring data; it's a stressful process for them, and ZFS/RAID rebuilds take time.

      They might not have wanted to overwrite the production servers either, to preserve evidence or perhaps do data recov
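
      A minimal sketch of that decision in the PostgreSQL point-in-time-recovery style: pick the newest base backup taken before the deletion, then replay archived WAL up to just before the destructive commands ran. The incident time and backup catalog below are hypothetical:

      # Hypothetical point-in-time recovery planning around a malicious deletion.
      from datetime import datetime, timedelta

      DELETION_STARTED = datetime(2018, 6, 4, 14, 0)   # hypothetical incident time
      BASE_BACKUPS = [                                  # hypothetical backup catalog
          datetime(2018, 6, 1, 2, 0),
          datetime(2018, 6, 2, 2, 0),
          datetime(2018, 6, 3, 2, 0),
          datetime(2018, 6, 4, 2, 0),
      ]

      usable = [b for b in BASE_BACKUPS if b < DELETION_STARTED]
      if not usable:
          raise SystemExit("No base backup predates the incident.")

      base = max(usable)
      recovery_target = DELETION_STARTED - timedelta(seconds=1)

      print(f"Restore the base backup from {base}")
      print(f"Set recovery_target_time = '{recovery_target}' and replay archived WAL")
      print(f"Anything written after {recovery_target} has to be re-entered or recovered by hand")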

    • What's wrong with the plan? The news here is that someone went to prison. A company mistake costing $30k directly to recover for a 6 billion dollar company isn't news. It's actually evidence that a disaster recovery plan is in place.

    • "It seems that even had this guy not been malicious, a clumsy employee with a cup of coffee could have accidentally done the same thing."

      I reject the premise of your use case. Nobody builds things in a way where shorting out a single console leads to catastrophic failure.

      Well... okay. If you're in Star Wars you engineer a platform built on lava flows so that nudging a panel causes complete structural failure... but nobody else does that!

  • There have been serious crimes in China where the results aren't so nice. [theatlantic.com]

    Still, he can have years to perfect his skills in making Christmas lights [independent.co.uk] to keep us all happy in the coming years.

  • by freeze128 ( 544774 ) on Tuesday May 17, 2022 @07:33PM (#62544590)
    How much did Microsoft pay BleepingComputer to *NOT* make the headline Bing Deletes Database?
    • Reading this, I heard Dana Gould saying an adapted version of his classic Black Dahlia joke punchline.

      And to this day… police still don't know… what it was… the company did to deserve that.

    • Who would believe Bing had such capabilities when it can't even return a simple search result?

      Or did they Bing "DELETE FROM Employees WHERE Name='Al';"

      and Bing helpfully replied:

      Showing results for "DELETE FROM Employees WHERE Name=All;"

    • Bing deletes database...and nothing of value was lost.
  • I love the feeling of accomplishment. I have built networks all over the world, and it is my legacy that the networks still exist today. Yes, I get mad at the company, but I'd rather just leave than destroy what I have done. Also, why are people still using DAT? I used DLT 20 years ago.
  • by Anonymous Coward

    I've worked with quite a few in the last decade. "We can do that, no problem!" does not include ever restoring a production system from scratch. It includes a lot of "I pushed the button on the GUI!!!"

  • ...when you have fucking root access.

    Root accounts for organizations really need to require multiple authorizations to login, and every action/program/script that is run should require the other authorizers to confirm, so that there is always supervision.
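
    Two-person root login isn't something stock systems ship with, but a crude sketch of the idea is a wrapper that refuses to run a privileged command until a second admin supplies a one-time code computed over that exact command. Everything below, including the shared-secret handling, is hypothetical and heavily simplified:

    # Hypothetical "four-eyes" wrapper: the privileged command only runs after a
    # second admin confirms it with an HMAC code over the exact command line.
    import getpass
    import hashlib
    import hmac
    import subprocess
    import sys

    APPROVER_SECRET = b"kept-on-the-second-admins-token"   # hypothetical shared secret

    def approval_code(command: str) -> str:
        return hmac.new(APPROVER_SECRET, command.encode(), hashlib.sha256).hexdigest()[:8]

    command = " ".join(sys.argv[1:])
    print(f"Requesting approval to run as root: {command}")
    print("Second admin computes the code for this exact command and reads it out.")

    supplied = getpass.getpass("Approval code: ")
    if not hmac.compare_digest(supplied, approval_code(command)):
        sys.exit("Approval code mismatch -- command not executed.")

    subprocess.run(["sudo", *sys.argv[1:]], check=True)
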
    • by v1 ( 525388 )

      Root accounts for organizations really need to require multiple authorizations to login,

      It sounds like he was basically the only one at the top of IT there? And his complaints and suggestions were being ignored. Many of his complaints were specifically regarding security problems that management refused to take action on.

      And it may exist somewhere somehow, but I've never seen two-person root login anywhere before.

      It sounds like the fairly typical case of overworked, understaffed IT getting sick of

  • The secret to this kind of thing is how you set it up. I won't delete anything at work. However, if I die or am fired I am quite sure the system will fall apart on its own. I'm the only one that does maintenance.

  • Have quality processes/plans in place AND ensure employees are not resentful. Neither is easy, but both are necessary.
  • Because if they don't and someone breaks in using those accounts somehow, they'll be looking for me.

    I started at one place where the previous admin had left, so there was no overlap. The one before that had been fired a year or two prior. The fired employee's login and SSH keys were used to access all the customer-facing systems in the cloud.

    It was hardcoded into all kinds of things, and I had to follow the spaghetti strands to change it. It took a while, but it was better protected against any ex-employee and
