Epstein Files Jan 30, 2026
Data hoarders on reddit have been hard at work archiving the latest Epstein Files release from the U.S. Department of Justice. Below is a compilation of their work with download links.
Please seed all torrent files to distribute and preserve this data.
Epstein Files Data Sets 1-8: INTERNET ARCHIVE LINK
Epstein Files Data Set 1 (2.47 GB): TORRENT MAGNET LINK
Epstein Files Data Set 2 (631.6 MB): TORRENT MAGNET LINK
Epstein Files Data Set 3 (599.4 MB): TORRENT MAGNET LINK
Epstein Files Data Set 4 (358.4 MB): TORRENT MAGNET LINK
Epstein Files Data Set 5 (61.5 MB): TORRENT MAGNET LINK
Epstein Files Data Set 6 (53.0 MB): TORRENT MAGNET LINK
Epstein Files Data Set 7 (98.2 MB): TORRENT MAGNET LINK
Epstein Files Data Set 8 (10.67 GB): TORRENT MAGNET LINK
Epstein Files Data Set 9 (Incomplete). Only contains 49 GB of 180 GB. Multiple reports of cutoff from DOJ server at offset 48995762176.
ORIGINAL JUSTICE DEPARTMENT LINK
- TORRENT MAGNET LINK (removed due to reports of CSAM)
/u/susadmin’s More Complete Data Set 9 (96.25 GB)
De-duplicated merge of the 45.63 GB and 86.74 GB versions
- TORRENT MAGNET LINK (removed due to reports of CSAM)
Epstein Files Data Set 10 (78.64 GB)
ORIGINAL JUSTICE DEPARTMENT LINK
- TORRENT MAGNET LINK (removed due to reports of CSAM)
- INTERNET ARCHIVE FOLDER (removed due to reports of CSAM)
- INTERNET ARCHIVE DIRECT LINK (removed due to reports of CSAM)
Epstein Files Data Set 11 (25.55 GB)
ORIGINAL JUSTICE DEPARTMENT LINK
SHA1: 574950c0f86765e897268834ac6ef38b370cad2a
Epstein Files Data Set 12 (114.1 MB)
ORIGINAL JUSTICE DEPARTMENT LINK
SHA1: 20f804ab55687c957fd249cd0d417d5fe7438281
MD5: b1206186332bb1af021e86d68468f9fe
SHA256: b5314b7efca98e25d8b35e4b7fac3ebb3ca2e6cfd0937aa2300ca8b71543bbe2
This list will be edited as more data becomes available, particularly with regard to Data Set 9 (EDIT: NOT ANYMORE)
EDIT [2026-02-02]: After being made aware of potential CSAM in the original Data Set 9 releases and seeing confirmation in the New York Times, I will no longer support any effort to maintain links to archives of it. There is suspicion of CSAM in Data Set 10 as well. I am removing links to both archives.
Some in this thread may be upset by this action. It is right to be distrustful of a government that has not shown signs of integrity. However, I do trust journalists who hold the government accountable.
I am abandoning this project and removing any links to content that commenters here and on reddit have suggested may contain CSAM.
Ref 1: https://www.nytimes.com/2026/02/01/us/nude-photos-epstein-files.html
Ref 2: https://www.404media.co/doj-released-unredacted-nude-images-in-epstein-files

I’m in the process of downloading both Data Set 9 torrents (45.63 GB + 86.74 GB). I will then compare the filenames in both versions (the 45.63 GB version alone has 201,358 files), note any duplicates, and merge all unique files into one folder. I’ll upload that as a single torrent once it’s done so we can get closer to a complete Data Set 9.
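The merge step described above could be sketched roughly like this. This is a minimal illustration, not the poster's actual script: it indexes both downloads by relative path, then copies every file into one merged tree, preferring the first set's copy when both contain the same name. The function names and directory layout are assumptions.

```python
#!/usr/bin/env python3
"""Sketch: merge two partial dataset downloads by relative filename."""
import shutil
from pathlib import Path

def index_files(root: Path) -> dict:
    """Map relative path -> absolute path for every file under root."""
    return {p.relative_to(root): p for p in root.rglob("*") if p.is_file()}

def merge(v1_dir: Path, v2_dir: Path, out_dir: Path) -> tuple:
    """Copy the union of both trees into out_dir; return (shared, total) counts."""
    v1, v2 = index_files(v1_dir), index_files(v2_dir)
    shared = v1.keys() & v2.keys()        # same relative name in both sets
    for rel in v1.keys() | v2.keys():
        src = v1.get(rel) or v2.get(rel)  # prefer the v1 copy for shared names
        dest = out_dir / rel
        dest.parent.mkdir(parents=True, exist_ok=True)
        shutil.copy2(src, dest)           # copy2 preserves timestamps
    return len(shared), len(v1.keys() | v2.keys())
```

Note this only compares names, not contents; as discussed below in the thread, name collisions with differing contents still need a hash check.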
Here are the file contents with SHA-256 hashes: https://files.catbox.moe/cw09tv.txt
The original post on reddit was deleted after this was shared: https://old.reddit.com/r/DataHoarder/comments/1qsfv3j/epstein_9_10_11_12_reddit_keeps_nuking_thread_we/o2vqgoc/
Does anyone have the original 186 GB magnet link from that thread? Someone said reddit keeps nuking it because it implicates reddit admins like spez.
Thank you so much for re-archiving it in a better format
I’m downloading 8-11 and seeding 1-7 plus 12 now. I’ve tried checking up on reddit, but every other time I check in the post has been nuked. My home server never goes down and I’m outside the USA. I’m working on the 100 GB+ #9 right now and I’ll seed whatever you can get up here too.
When merging versions of Data Set 9, is there any risk of loss with simply using rsync --checksum to dump all files into one directory and merge the sets?

rsync --checksum is better than my filename + file size comparison, since it compares files by the checksum of their contents rather than by name and size. For example, if there is a file called data1.pdf with size 1024 bytes in dataset9-v1, and another file called data1.pdf with size 1024 bytes in dataset9-v2, but their contents differ, my method will still (incorrectly) detect them as identical files. I’m going to modify my script to calculate and compare the hashes of all files that I previously determined to be duplicates. If the hashes of the duplicates in dataset9 (45 GB torrent) match the hashes of the duplicates in dataset9 (86 GB torrent), then they are in fact duplicates between the two datasets.
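The hash check described above could look something like this. A minimal sketch, not the poster's script: name + size matches are only confirmed as true duplicates if their SHA-256 digests also agree. The function names are illustrative.

```python
#!/usr/bin/env python3
"""Sketch: confirm name+size duplicate candidates by content hash."""
import hashlib
from pathlib import Path

def sha256_of(path: Path, chunk: int = 1 << 20) -> str:
    """Stream the file in 1 MiB chunks so multi-GB files fit in memory."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        while block := f.read(chunk):
            h.update(block)
    return h.hexdigest()

def confirm_duplicates(pairs):
    """pairs: iterable of (path_in_v1, path_in_v2) name+size matches.
    Returns only the pairs whose contents are actually identical."""
    return [(a, b) for a, b in pairs if sha256_of(a) == sha256_of(b)]
```

Any pair that fails the hash check is the data1.pdf case from the comment above: same name and size, different content, so both copies should be kept.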
Amazing, thank you. That was my thought: check hashes while merging so we keep any copies that might have been modified by the DOJ, and discard duplicates even when they have different metadata (e.g., timestamps).
looking forward to your torrent, will seed.
I have several incomplete sets of files from Data Set 9 that I downloaded with a scraped set of URLs. Should I try to get them to you to compare as well?
Yes! I’m not sure of the best way to do that. Maybe upload them to MEGA and message me a download link?
Maybe archive.org? That way they can be torrented if others want to attempt their own merging techniques. Either way it will be a long upload; my speed is not especially good. I’m still churning through one set of URLs that is 1.2M lines. Most are failing, but I have 65k files from that batch so far.
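Churning through a mostly-dead URL list like that amounts to: try each one, save the successes, count the failures, and never let one bad link stop the run. A hedged sketch using only the standard library; the list and output paths are placeholders, not anyone's actual scraper.

```python
#!/usr/bin/env python3
"""Sketch: fetch a scraped URL list, tolerating a high failure rate."""
import urllib.request
from pathlib import Path

def fetch_all(urls, out_dir: Path, timeout: float = 30.0):
    """Download each URL into out_dir; return (saved, failed) counts."""
    out_dir.mkdir(parents=True, exist_ok=True)
    saved = failed = 0
    for url in urls:
        name = url.rstrip("/").rsplit("/", 1)[-1] or "index"
        try:
            with urllib.request.urlopen(url, timeout=timeout) as resp:
                (out_dir / name).write_bytes(resp.read())
            saved += 1
        except OSError:   # dead link, timeout, or server cutoff mid-list
            failed += 1
    return saved, failed
```

For a 1.2M-line list you would also want resume support (skip names already on disk) and a polite delay between requests, omitted here for brevity.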
archive.org is a great idea. Post the link here when you can!
I’ll get the first set (42k files in 31 GB) uploading as soon as I get it zipped up. It’s the one least likely to have any new files in it, since I started at the beginning like others did, but it’s worth a shot.
Superb, I have 1-8 and 11-12.
Only 10 remains to complete (downloading it from archive.org now).
Dataset 9 is the biggest. I ended up writing a parser to go through every page on justice.gov and make an index list.
Current estimate of files list is:
Your merge of the 45 GB + 86 GB torrents (~500K-700K files) would be a huge help. Happy to cross-reference it with my scraped URL list to find any gaps.
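An index-building parser like the one mentioned above could be as simple as extracting document links from each listing page's HTML. This is a guess at the approach, not the poster's code: the real justice.gov page structure is unknown here, so the extension filter and class name are assumptions.

```python
#!/usr/bin/env python3
"""Sketch: collect document links from a listing page to build an index."""
from html.parser import HTMLParser

class LinkIndexer(HTMLParser):
    """Collect href values that look like downloadable documents."""
    EXTS = (".pdf", ".zip", ".jpg", ".mp4")   # assumed document types

    def __init__(self):
        super().__init__()
        self.links = []

    def handle_starttag(self, tag, attrs):
        if tag == "a":
            href = dict(attrs).get("href", "")
            if href.lower().endswith(self.EXTS):
                self.links.append(href)

def index_page(html: str) -> list:
    """Return every document-looking link found in one page of HTML."""
    parser = LinkIndexer()
    parser.feed(html)
    return parser.links
```

Run over every paginated listing page and deduplicate, and the result is a URL index that partial downloads can be checked against for gaps.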
Thank you so much for keeping us updated!!
Have a good night. I’ll be waiting to download it, seed it, make hardcopies and redistribute it.
Please check back in with us