Epstein Files Jan 30, 2026

Data hoarders on reddit have been hard at work archiving the latest Epstein Files release from the U.S. Department of Justice. Below is a compilation of their work with download links.

Please seed all torrent files to distribute and preserve this data.

Ref: https://old.reddit.com/r/DataHoarder/comments/1qrk3qk/epstein_files_datasets_9_10_11_300_gb_lets_keep/

Epstein Files Data Sets 1-8: INTERNET ARCHIVE LINK

Epstein Files Data Set 1 (2.47 GB): TORRENT MAGNET LINK
Epstein Files Data Set 2 (631.6 MB): TORRENT MAGNET LINK
Epstein Files Data Set 3 (599.4 MB): TORRENT MAGNET LINK
Epstein Files Data Set 4 (358.4 MB): TORRENT MAGNET LINK
Epstein Files Data Set 5 (61.5 MB): TORRENT MAGNET LINK
Epstein Files Data Set 6 (53.0 MB): TORRENT MAGNET LINK
Epstein Files Data Set 7 (98.2 MB): TORRENT MAGNET LINK
Epstein Files Data Set 8 (10.67 GB): TORRENT MAGNET LINK


Epstein Files Data Set 9 (incomplete): contains only 49 GB of 180 GB. Multiple reports of the DOJ server cutting off downloads at byte offset 48995762176.

ORIGINAL JUSTICE DEPARTMENT LINK

  • TORRENT MAGNET LINK (removed due to reports of CSAM)

/u/susadmin’s More Complete Data Set 9 (96.25 GB)
De-duplicated merger of (45.63 GB + 86.74 GB) versions

  • TORRENT MAGNET LINK (removed due to reports of CSAM)

Epstein Files Data Set 10 (78.64GB)

ORIGINAL JUSTICE DEPARTMENT LINK

  • TORRENT MAGNET LINK (removed due to reports of CSAM)
  • INTERNET ARCHIVE FOLDER (removed due to reports of CSAM)
  • INTERNET ARCHIVE DIRECT LINK (removed due to reports of CSAM)

Epstein Files Data Set 11 (25.55GB)

ORIGINAL JUSTICE DEPARTMENT LINK

SHA1: 574950c0f86765e897268834ac6ef38b370cad2a


Epstein Files Data Set 12 (114.1 MB)

ORIGINAL JUSTICE DEPARTMENT LINK

SHA1: 20f804ab55687c957fd249cd0d417d5fe7438281
MD5: b1206186332bb1af021e86d68468f9fe
SHA256: b5314b7efca98e25d8b35e4b7fac3ebb3ca2e6cfd0937aa2300ca8b71543bbe2


This list will be edited as more data becomes available, particularly with regard to Data Set 9 (EDIT: NOT ANYMORE)


EDIT [2026-02-02]: After being made aware of potential CSAM in the original Data Set 9 releases and seeing confirmation in the New York Times, I will no longer support any effort to maintain links to archives of it. There is suspicion of CSAM in Data Set 10 as well. I am removing links to both archives.

Some in this thread may be upset by this action. It is right to be distrustful of a government that has not shown signs of integrity. However, I do trust journalists who hold the government accountable.

I am abandoning this project and removing any links to content that commenters here and on reddit have suggested may contain CSAM.

Ref 1: https://www.nytimes.com/2026/02/01/us/nude-photos-epstein-files.html
Ref 2: https://www.404media.co/doj-released-unredacted-nude-images-in-epstein-files

    • WhatCD@lemmy.world · 24 days ago

      What happens when you go to https://www.justice.gov/epstein/files/DataSet%209.zip in your browser?

        • WhatCD@lemmy.world · 24 days ago

          Yeah, when I run into this, switching browsers has helped. Switching IP addresses has also helped.

        • WorldlyBasis9838@lemmy.world · 24 days ago

          Can also confirm, receiving more chunks again.

          EDIT: Someone should play around with the retry and backoff settings to see whether a certain configuration can avoid being blocked for longer. Rotating IPs is too much trouble.
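A minimal sketch of what such retry/backoff settings could look like: exponential backoff with random jitter, so retries spread out instead of hammering the server in lockstep. The base delay, cap, and jitter fraction here are made-up starting points, not values tested against the DOJ server:

```python
import random

def backoff_delay(attempt, base=5.0, cap=60.0, jitter=0.5):
    """Delay before retry N: base * 2^attempt, capped at `cap`,
    plus a random jitter of up to `jitter` * delay seconds."""
    delay = min(cap, base * (2 ** attempt))
    return delay + random.uniform(0, jitter * delay)

# Show the deterministic part of the schedule for the first few retries.
for attempt in range(5):
    print(f"retry {attempt}: ~{min(60.0, 5.0 * 2 ** attempt):.0f}s (+ jitter)")
```

A downloader would sleep for `backoff_delay(attempt)` after each failed chunk before retrying; the jitter matters most when several people are hitting the same endpoint at once.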

          • WhatCD@lemmy.world · 24 days ago

            Updated the script to display information better: https://pastebin.com/S4gvw9q1

            It has one library dependency so you’ll have to do:

            pip install rich
            

            I haven’t been getting blocked with this:

            python script.py 'https://www.justice.gov/epstein/files/DataSet%209.zip' -o 'DataSet 9.zip' --cookies cookie.txt --retries 2 --referer 'https://www.justice.gov/age-verify?destination=%2Fepstein%2Ffiles%2FDataSet+9.zip' --ua '<set-this>' --timeout 90 -t 16 -c auto
            

            The new script can auto set threads and chunks, I updated the main comment with more info about those.

            I’m setting the --ua option, which lets you override the User-Agent header. I’m making sure it matches the browser I use to request the cookie.

              • xodoh74984@lemmy.world (OP) · 24 days ago

                I’ve been trying to achieve the same thing using aria2 with 8 concurrent download threads + cookies exported from my browser, the user agent from my browser, and a random retry interval between 5-30 seconds after each download failure.
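For reference, that setup corresponds to roughly this aria2 invocation (the cookie file, output name, and UA string are placeholders; note that aria2's --retry-wait is a fixed interval, so the random 5–30 second wait would need an outer wrapper script):

```shell
aria2c 'https://www.justice.gov/epstein/files/DataSet%209.zip' \
  --out='DataSet 9.zip' \
  --load-cookies=cookies.txt \
  --user-agent='<same UA as the browser that exported the cookies>' \
  --max-connection-per-server=8 --split=8 \
  --max-tries=0 --retry-wait=15 \
  --continue=true
```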

                But I think I’ve been blocked by the server.

                My download attempts started to fail before I began using my browser’s user agent, so it’s difficult for me to know what exactly caused me to get blocked. The download was incredibly fast before things started breaking and could’ve finished within 30 minutes.

                Does anyone know how long the apparent IP ban lasts?

                • WhatCD@lemmy.world · 24 days ago

                  I don’t know exactly, but it seems to be about an hour or two if you get a 401 Unauthorized.

                  Would you be interested in joining our effort here? I’m hoping to crowdsource these chunks and then combine our efforts.

                  • xodoh74984@lemmy.world (OP) · 24 days ago

                    Absolutely! By the way, I hadn’t thanked you yet for your massive effort here. Thank you very much for putting this all together. Also, love your username.

                    Do you think we could modify the script to use HTTP Range headers and download from the end of the file to the beginning? Or, perhaps we could work together and target different byte ranges?

                    You seem much better versed in this than I am to know what’s possible.
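The byte-range idea could be sketched like this (a hypothetical helper, not part of the existing script): split the file into contiguous ranges, hand each to a different worker or person, and optionally walk them back-to-front. HTTP Range headers use inclusive end offsets, which the helper accounts for:

```python
def byte_ranges(total_size, num_chunks, reverse=False):
    """Split [0, total_size) into contiguous (start, end) pairs for
    HTTP Range requests; `end` is inclusive, as Range headers expect."""
    chunk = total_size // num_chunks
    ranges = []
    for i in range(num_chunks):
        start = i * chunk
        # Last chunk absorbs the remainder so nothing is dropped.
        end = total_size - 1 if i == num_chunks - 1 else start + chunk - 1
        ranges.append((start, end))
    if reverse:
        ranges.reverse()
    return ranges

# Each worker would then send a header like: Range: bytes=<start>-<end>
for start, end in byte_ranges(1000, 4, reverse=True):
    print(f"Range: bytes={start}-{end}")
```

This only works if the server honors Range requests (responds 206 Partial Content); if it ignores them and returns 200, every worker would get the whole file from byte 0.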

                    • WorldlyBasis9838@lemmy.world · 24 days ago

                    My IP appears to have been completely blocked by the domain. Multiple browsers and devices confirm it.

                    If anyone has any suggestions for other options, I’m listening.