Why Post-Publication Review Needs a Permanent Record
In 2015, a French court ordered PubPeer to reveal the identity of an anonymous commenter who had flagged image manipulation in published papers. The commenter was right — the papers were eventually corrected or retracted. But the legal threat was real, and the message was clear: if your scientific criticism lives on someone else's server, it can be silenced.

This isn't an isolated case. Researchers who post critical reviews face legal threats, institutional pressure, and platform-level takedowns. The people doing the most important quality control work in science are also the most exposed.
The problem isn't that platforms like PubPeer don't try to protect their users. They do, often at significant legal cost. The problem is structural: any platform run by a single organization, in a single legal jurisdiction, with a single domain name, has a single point of failure.
What would it take to fix this?
Post-publication review needs infrastructure with three properties:
Permanence. Once a review is published, it stays published.
Independence. No single organization controls who can post or what stays up.
Accountability. Anonymity where needed, but also verified identities for those who want their reviewing track record to count.
These properties are in tension with each other. Permanence without accountability enables abuse. Accountability without anonymity silences the people who need protection most. Getting the balance right matters.
How PEvO approaches this
PEvO is an open-source platform for scientific publication and peer review. It writes to a permanent, decentralized record that no single party controls. Here's what that means in practice:
Reviews can't be taken down. Once posted, a review exists on a distributed network maintained by independent operators worldwide.
Verified scientists, protected critics. Researchers verify their identity once. After that, they can review under their own name to build a track record, or anonymously through a platform-managed proxy when the situation calls for it.
Structured evaluation. Reviewers rate papers on methodology, novelty, clarity, and significance. Over time, this builds a transparent, computable reputation for both authors and reviewers, based on the quality of their contributions, not their institutional affiliation or publication count.
No vendor lock-in. The entire platform is MIT-licensed. The data lives on an open network. If you don't like how we run PEvO, you can fork it and run your own instance with the same data. This isn't a feature; it's the architecture. It means the platform can't enshittify, because users can leave without losing anything.
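To make "computable reputation" concrete, here is a minimal sketch of how structured scores could be aggregated. The four dimensions come from the description above; the 1–5 scale, the data shapes, and the function names are illustrative assumptions, not PEvO's actual schema.

```python
from statistics import mean

# The four rating dimensions named above; scale and schema are assumptions.
DIMENSIONS = ("methodology", "novelty", "clarity", "significance")

def aggregate(reviews):
    """Compute per-dimension means for a paper from its structured reviews.

    `reviews` is a list of dicts mapping dimension -> rating (1-5, assumed).
    Dimensions a reviewer skipped are simply left out of that review's dict.
    """
    out = {}
    for d in DIMENSIONS:
        scores = [r[d] for r in reviews if d in r]
        if scores:
            out[d] = round(mean(scores), 2)
    return out

reviews = [
    {"methodology": 4, "novelty": 3, "clarity": 5, "significance": 4},
    {"methodology": 5, "novelty": 2, "clarity": 4},
]
print(aggregate(reviews))
# {'methodology': 4.5, 'novelty': 2.5, 'clarity': 4.5, 'significance': 4}
```

Because every review is on an open network, anyone can recompute these aggregates independently, with their own weighting, rather than trusting a score a platform hands them.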
Who this is for
PEvO isn't trying to replace journals or existing review platforms. It's adding a layer that doesn't exist yet: a permanent, open record of scientific evaluation that nobody owns.
The project is non-profit, volunteer-run, and open to contributors.
Join our Discord at https://discord.gg/jqvmz7wdPV.
The code is at https://github.com/pharesim/pevo-science.