I am a computer forensics expert with over a decade of study in the area. Physically damaging a hard drive platter will not, by itself, stop data recovery: the fragments can be pieced together like a jigsaw puzzle and read via spin-stand microscopy. Encrypting a drive after the fact is also unreliable, because encryption oftentimes only happens as data is written. If you had a Windows installation and then installed Linux with FDE (Full Disk Encryption) in an attempt to destroy the old data, there is a high probability you would only destroy the data that actually gets overwritten: FDE typically does not encrypt the drive's existing contents, only what you write to it going forward.
The most secure thing you can do, short of complete physical destruction (think melting it down into a liquid, or at least heating the platters past the Curie point of the magnetic substrate, which is sufficient, rather than breaking it up with a hammer) or otherwise ruining the drive by exposing it to an extremely powerful magnetic field (degaussing), is a three-part process.
1) ATA Secure Erase
This is a firmware-implemented technique for securely erasing data from hard drives and solid state drives; the implementation differs somewhat between SSDs and HDDs. Because it runs in firmware, it has more ability than anything that does not: it can overwrite reallocated (bad) sectors, whereas host-side tools like dm-crypt cannot even attempt to write to sectors the drive has internally marked as bad. Typically ATA Secure Erase will either issue a command telling the NAND memory cells to reset to 1 (their erased state), or write one or more passes of data over the magnetic platters of a hard drive.
ATA Secure Erase has the potential to be the most secure erasure technique that preserves drive functionality, because it can wipe the entire drive in an information-theoretic sense (meaning it becomes physically impossible to recover the data). However, it relies on a correct firmware implementation, and that has been hit or miss depending on the specific drive studied: researchers have shown that it works correctly on some drives but fails silently on others.
ata.wiki.kernel.org
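As a sketch, the usual command sequence for this step uses hdparm. The device path /dev/sdX and the throwaway password Eins below are placeholder assumptions; the function refuses to touch the placeholder, so it only prints the sequence unless you deliberately set DEV to a real drive:

```shell
DEV=${DEV:-/dev/sdX}   # hypothetical placeholder; export DEV=/dev/sdb (etc.) for real use

secure_erase() {
    if [ "$DEV" = "/dev/sdX" ]; then
        # Safety: DEV is still the placeholder, so only print the sequence.
        echo "hdparm -I $DEV    # verify 'not frozen' appears under the Security: section"
        echo "hdparm --user-master u --security-set-pass Eins $DEV"
        echo "hdparm --user-master u --security-erase Eins $DEV"
        return 0
    fi
    # 1. Check that the drive supports Secure Erase and is not 'frozen'
    #    (many BIOSes freeze the security state at boot; suspend/resume often unfreezes it).
    hdparm -I "$DEV"
    # 2. Set a temporary user password; the spec requires one before an erase can be issued.
    hdparm --user-master u --security-set-pass Eins "$DEV"
    # 3. Issue the erase; this blocks until the firmware reports completion.
    hdparm --user-master u --security-erase Eins "$DEV"
}

secure_erase
```

Afterward you can run `hdparm -I` again and confirm the security state shows "not enabled"; a successful erase clears the password along with the data.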
2) ATA Secure Erase a second time with the enhanced flag
This is a second, distinct secure erase protocol; exactly how it is implemented depends on the drive. Many modern drives are SEDs (Self-Encrypting Drives), meaning everything written to them is transparently encrypted during write and decrypted during read. There is usually the ability to set a (typically weak) password, but nobody would really use this for encryption per se anyway. Its primary use is that you can issue an enhanced secure erase command, and the drive merely needs to rotate (i.e., overwrite) its onboard encryption key. That renders all of the encrypted data on the drive computationally securely inaccessible: it cannot be read without breaking the strong encryption used.
On some hard drives this may additionally do an "off-track" pass, positioning the magnetic head slightly off track center to guard against the theoretical attack of recovering forensic trace evidence from magnetic remnants on track edges (the feasibility of this attack is contested, but some enhanced secure erase implementations aim to protect against the possibility).
This technique has the potential to render a hard drive entirely securely erased: even data in bad sectors was encrypted under the old key, which is now inaccessible because it was overwritten by the new one. However, as with plain ATA Secure Erase, you are counting on the firmware implementation being correct, and that has been hit or miss across drives.
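A sketch of the enhanced variant with hdparm, using the same kind of placeholder device /dev/sdX and throwaway password Eins (both assumptions; the function only prints the commands unless DEV is set to a real drive). It is worth checking first whether the firmware actually advertises an enhanced erase mode:

```shell
DEV=${DEV:-/dev/sdX}   # hypothetical placeholder; set DEV to your real drive for actual use

enhanced_erase() {
    if [ "$DEV" = "/dev/sdX" ]; then
        # Safety: placeholder device, so only print the sequence.
        echo "hdparm -I $DEV | grep -i 'erase unit'   # normal vs enhanced support and time estimates"
        echo "hdparm --user-master u --security-set-pass Eins $DEV"
        echo "hdparm --user-master u --security-erase-enhanced Eins $DEV"
        return 0
    fi
    # The 'ERASE UNIT' / 'ENHANCED ERASE UNIT' lines report support and estimated duration.
    # On an SED, the enhanced erase is often reported as taking only a couple of minutes,
    # which is a hint that it rotates the key rather than overwriting the media.
    hdparm -I "$DEV" | grep -i 'erase unit'
    hdparm --user-master u --security-set-pass Eins "$DEV"
    hdparm --user-master u --security-erase-enhanced Eins "$DEV"
}

enhanced_erase
```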
3) Three passes of random data writes with OpenSSL or similar (DBAN used to be good for this, but it is no longer maintained and had broken on modern hardware last I knew)
If you are doing this on an SSD, do this step first: filling the drive with randomness may make its performance fall substantially, but the subsequent ATA Secure Erase will restore the performance lost in this step. On a magnetic platter device, the order does not matter.
This technique does not rely on firmware; it uses open source software that can be audited. It probably has the highest probability of succeeding of the three techniques, but even when it succeeds it cannot erase data as thoroughly as the ATA Secure Erase commands can, since it cannot overwrite bad sectors and other areas the drive hides from the host.
The approach is a one-pass overwrite of /dev/sd* with randomness, using the CPU's AES-NI implementation to generate pseudorandom data very quickly.
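A minimal sketch of that one-pass overwrite: encrypting an endless stream of zeros with AES-256-CTR under a throwaway key from /dev/urandom yields a fast pseudorandom stream (hardware-accelerated where AES-NI is available). To keep the sketch harmless to run, it targets a temporary 16 MiB file; for real use you would point it at the block device instead (e.g. /dev/sdX), and the file path here is purely illustrative:

```shell
# Demo target: a temporary file, so nothing is destroyed. For a real wipe,
# replace this with your block device (e.g. TARGET=/dev/sdX) and drop the size cap.
TARGET=$(mktemp)
SIZE=16M

# AES-CTR keystream over /dev/zero == fast pseudorandom data; a fresh random
# passphrase means the keystream is unpredictable to anyone examining the drive.
openssl enc -aes-256-ctr -nosalt \
    -pass pass:"$(head -c 32 /dev/urandom | base64)" \
    < /dev/zero 2>/dev/null | head -c "$SIZE" > "$TARGET"

ls -l "$TARGET"
```

When the target is a real device, the pipeline simply runs until the device is full and the write fails with "no space left", which is the expected end condition.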
There is some controversy regarding how many passes are sufficient. Some people say fewer passes are enough on a hard drive, but to play it safe I suggest three passes of randomness. On an SSD you really do need at least two and preferably three passes, because of wear leveling, over-provisioning, and other such things. If data destruction is of the utmost importance, such that not even a trace can be recovered, use three passes of randomness.
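A three-pass random overwrite with openssl is just the keystream write in a loop, with a fresh throwaway key per pass so the three streams are independent. Again sketched against a temporary file (an assumption for safety; for an actual wipe TARGET would be the block device):

```shell
TARGET=$(mktemp)   # demo file; for real use set TARGET=/dev/sdX (destructive!)
SIZE=8M

for PASS in 1 2 3; do
    # Fresh random key each pass; each pass fully overwrites the previous one.
    openssl enc -aes-256-ctr -nosalt \
        -pass pass:"$(head -c 32 /dev/urandom | base64)" \
        < /dev/zero 2>/dev/null | head -c "$SIZE" > "$TARGET"
    echo "pass $PASS complete"
done
```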
You want to use all three techniques. You use the two ATA Secure Erase techniques because they have the greatest ability to render data on the drive inaccessible when they succeed, but either or both could be buggy and silently fail. You use the OpenSSL wiping step (with three passes) as a fail-safe: it has the highest probability of living up to its potential, even though that potential falls short of the more failure-prone ATA Secure Erase techniques.
You could additionally do a pass using /dev/urandom as the randomness source, so as not to rely on the CPU's AES-NI implementation, or use some other software PRNG. But this will probably be very slow: AES-NI can produce pseudorandom data at over a gigabyte per second, while /dev/urandom implementations have oftentimes only managed on the order of 13 MB/s.
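You can compare the two generators' throughput harmlessly by timing writes to /dev/null; dd's final status line reports the rate. Exact numbers will vary with the machine and kernel version (newer kernels' /dev/urandom is considerably faster than the older figures suggest), so this is a measurement sketch, not a benchmark claim:

```shell
# /dev/urandom throughput over 64 MiB (dd prints its stats on stderr):
URANDOM_STATS=$(dd if=/dev/urandom of=/dev/null bs=1M count=64 2>&1 | tail -n 1)

# AES-CTR keystream throughput over the same amount:
AESNI_STATS=$(openssl enc -aes-256-ctr -nosalt \
    -pass pass:"$(head -c 32 /dev/urandom | base64)" \
    < /dev/zero 2>/dev/null | head -c 64M | dd of=/dev/null bs=1M 2>&1 | tail -n 1)

echo "urandom: $URANDOM_STATS"
echo "aes-ctr: $AESNI_STATS"
```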
It is important to use random data for the overwrite if possible, because fixed patterns like all zeros can be weaker against various advanced forensic attacks.