Perhaps you should be more specific about the exact setting those drives were in (RAID or otherwise), and what type of enclosure held them.

Regarding the damage assessment: as far as I know (I'm no pro), it could range from nothing at all, to logical bad sectors, to physical bad sectors, to... a lot worse. In my (limited) experience, shutting down a HDD during a write operation is likely to cause “logical bad sectors”, i.e. sectors which are still physically operational but are left in such an inconsistent state that they can no longer be read, and which can confuse the system to the point of making programs freeze when they try to access them. This is normally solved by overwriting the problematic sectors, but of course their contents are lost in the process. See this for instance (the HGST drive).

I would first check the SMART status of that drive, plugged in alone (outside of the enclosure) on a Linux system, not Windows, as Windows will mount it automatically and might write to it, which you do not want.
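On Linux, the smartmontools package can read the drive's SMART data without touching its contents. A minimal sketch; the device name `/dev/sdX` is a placeholder you'd have to replace after checking `lsblk`:

```shell
# /dev/sdX is a placeholder -- identify the real device with `lsblk`
# first, and make sure the drive is NOT mounted.
sudo smartctl -a /dev/sdX            # full SMART report
# Attributes worth a close look:
#   5   Reallocated_Sector_Ct   - physical bad sectors already remapped
#   197 Current_Pending_Sector  - sectors waiting to be remapped (often
#                                 the "logical" kind after a bad shutdown)
#   198 Offline_Uncorrectable   - sectors that failed offline reads
sudo smartctl -t short /dev/sdX      # optionally run a short self-test
sudo smartctl -l selftest /dev/sdX   # then read the self-test results
```

A non-zero pending-sector count after an interrupted write is consistent with the "logical bad sectors" scenario described above.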
Then, if there are physical or logical bad sectors in limited number (I would say up to 10 to stay on the safe side; beyond that, attempting anything on your own with limited tools, knowledge and experience would be very risky with such a large volume of data {*}), the wise course of action would be to clone that drive entirely. Of course, that means having a spare $500, 14TB drive, but if you can afford to hoard that much data, you should be able to afford to back up that much data, or at least what's really important. (A side question would be: is it worth hoarding data which is not really important? A question I ask myself regularly about my own hoarding habits.) Because, as you may be about to learn the hard way,
RAID is no backup solution.
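For the cloning step, GNU ddrescue is the usual tool, since it skips unreadable areas on the first pass and keeps a mapfile so the copy can be stopped and resumed. A sketch only; both device names are placeholders, and ddrescue will happily overwrite the wrong disk, so triple-check with `lsblk`:

```shell
# /dev/sdX = failing source, /dev/sdY = the spare destination drive
# (both placeholders). The mapfile records progress, so the clone
# can be interrupted and resumed at any point.
sudo ddrescue -f -n /dev/sdX /dev/sdY rescue.map   # first pass: copy the easy parts, skip scraping
sudo ddrescue -f -r3 /dev/sdX /dev/sdY rescue.map  # second run: retry the bad areas up to 3 times
```

Copying the good areas first and only then hammering the bad ones minimizes the stress on an already-damaged drive.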
Then, whether or not there is a problem with that particular drive, if all the drives in that enclosure were in a RAID setup, the whole array may be in an inconsistent state. Since another drive was about to be swapped and you removed this one instead, with a little help from Murphy's Law there may not be enough redundancy left among the remaining drives to rebuild what's missing. (I'm not sure about this, as I've only ever used RAID 1 myself, but I think that when a drive in a RAID 5 setup or similar is detected as “bad”, even if just a few clusters are missing or inconsistent, the controller will want to rebuild it entirely.)
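If it happens to be a Linux software (mdadm) array, an assumption on my part, since the enclosure may be doing hardware RAID instead, the array's state can be inspected without modifying anything:

```shell
# Read-only checks on a Linux mdadm array; /dev/md0 and /dev/sdX1
# are placeholders for the actual array and member devices.
cat /proc/mdstat                 # quick overview: a pattern like [UUU_]
                                 # means one member is missing/failed
sudo mdadm --detail /dev/md0     # array state, e.g. "clean, degraded",
                                 # and which member dropped out
sudo mdadm --examine /dev/sdX1   # per-member metadata; mismatched event
                                 # counts indicate an out-of-sync member
```

With hardware RAID, the equivalent information would live in the controller's own management tool instead.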
So, in a nutshell, you may be in for a lot more trouble than just that one file which was being written when that fateful pull happened.
{*} Recently I recovered data from a 4TB HDD, which is a lot less than 14TB but already a lot of data (it takes about 8 hours to copy the whole contents of a 4TB HDD in perfect condition), and one head failed midway through; as I don't have the training or equipment to replace a head stack assembly, all I could do was continue the cloning to the end with 7 working heads out of 8 and then assess the damage. Luckily there was only about 550GB of data on it, and it wasn't in a RAID setup, so the final recovery rate was quite good. But with a 14TB drive having even minor media damage and/or head damage from the sudden shutdown, and which is part of a RAID array, that would be akin to playing Russian roulette with a machine gun.