Judging and peer-review evidence

Peer review evidence for EB1A: what USCIS wants to see beyond a dashboard screenshot

A lot of candidates have reviewer dashboards, manuscript counts, and automated thank-you emails. The stronger question is not whether you reviewed papers. It is whether the file clearly shows that recognized organizations trusted you to evaluate the work of others because of your expertise.

Published Mar 24, 2026 · Educational only, not legal advice

Short version: peer review evidence gets much stronger when it is packaged as trusted evaluation of others' work, not just review volume. Dashboard counts can prove activity. They often do not prove why the activity reflects peer recognition.

A lot of EB1A candidates have some peer review history and assume that should be enough.

They have screenshots from a reviewer dashboard. They have a count of completed reviews. They may even have invitations from journals or conferences.

Then they still get nervous about whether it actually helps.

That instinct is usually right.

Peer review can be useful evidence. But USCIS often responds much better when you package it as "you were trusted to evaluate the work of others" rather than just "you reviewed a lot of papers."

That framing difference matters.

The common mistake: proving activity instead of recognition

Many candidates submit peer review evidence like this:

  • screenshot of a reviewer portal,
  • list of manuscript counts,
  • automated thank-you emails,
  • maybe one sentence saying they reviewed for a journal.

That can prove activity.

What it often does not prove clearly enough is why that activity reflects peer recognition, subject-matter trust, or selective professional standing.

If the packet reads like “I clicked through a platform and completed tasks,” the officer can treat it as administrative volume.

If the packet reads like “recognized organizations selected me to judge the work of other professionals because of my expertise,” it becomes much more persuasive.

The stronger framing

For the judging criterion, the useful question is usually not:

How many papers did I review?

It is:

What evidence shows credible organizations trusted me to evaluate other professionals' work because of my expertise?

That is the story your exhibits should support.

The strongest peer review bundle usually has four pieces

1. Invitation or selection proof

Start with the clearest evidence that you were invited or selected.

Useful examples include invitation emails from editors, editorial-board invitations, program-committee invitations, reviewer account screens showing selection status, and conference reviewer appointment messages.

This matters because it shows you were not just self-registering into a system with no meaningful screening.

2. Credibility and selectivity proof

This is the piece many candidates skip.

If the officer does not know the venue, your packet has to explain why the invitation means something.

  • journal or conference reputation,
  • publisher information,
  • impact factor or ranking if appropriate,
  • acceptance rate if available,
  • editorial explanation of how reviewers are chosen,
  • evidence that reviewers are selected for subject-matter expertise rather than through open signup.

The goal is to make the selection logic legible. You are not just showing a venue name. You are showing that recognized institutions trusted you in a gatekeeping role.

3. Completion proof

Do not stop at the invitation.

Show that the reviews were actually completed, using screenshots of completed review history, confirmation emails, reviewer records from the platform, certificates of review, or editorial thank-you messages tied to specific completed reviews.

This closes the obvious gap between “invited” and “performed the work.”

4. Scale and stature proof

Counts help, but counts alone are weak.

A better packet shows the number of manuscripts reviewed, the names of journals or conferences, whether they are international or field-recognized venues, the time span of the review activity, and any repeat invitations suggesting ongoing trust.

The point is not to inflate volume. It is to show a pattern of recognized organizations repeatedly trusting your judgment.

Where weak RFEs usually come from

A weak peer review section often looks like this:

  • dashboard screenshot with counts,
  • no clear invitation proof,
  • no explanation of how reviewers are selected,
  • no context on the venue,
  • no narrative connecting the evidence to peer recognition.

That package can prove you did something. It does not necessarily prove why the thing matters.

This is why people get frustrated. They know the work was real. But the filing never made the significance easy for the officer to see.

The exhibit note most people are missing

If you already have the review history, the missing piece is often not more raw evidence.

It is a short exhibit note that, for each venue, spells out its stature in the field, the selection logic, your expertise fit, and the completed review proof.

Instead of forcing the officer to infer meaning from screenshots, you make the significance explicit: who trusted you, why they trusted you, what you actually reviewed, and why that trust reflects standing in the field.

A practical packaging example

A stronger peer review section might read like this:

  • Exhibit A: invitation from editor of Journal X asking you to review manuscript Y
  • Exhibit B: short venue note showing Journal X is a recognized journal or publisher in your field and that reviewers are selected for subject expertise
  • Exhibit C: screenshot or confirmation showing the review was completed
  • Exhibit D: table listing additional completed reviews across other recognized venues
  • Exhibit E: short summary noting repeat invitations, total reviews, international scope, and how this reflects external trust in your judgment

That is much easier to adjudicate than a pile of screenshots.

What peer review does and does not do well

Peer review can be strong evidence when it is external to your employer, tied to recognized venues, clearly selective, supported by completion proof, and framed as trusted evaluation of others' work.

Peer review is weaker when it is only a count with no context, from venues the officer cannot evaluate, missing invitation or selection logic, unsupported by completion records, or presented as generic professional activity.

A quick self-check before you file

  1. Do I have proof I was invited or selected, not just that I accessed a platform?
  2. Can I explain why the journal or conference is credible in my field?
  3. Can I show that reviewers are chosen for expertise rather than through open participation?
  4. Do I have proof that the reviews were completed?
  5. Does my packet show a pattern of trusted evaluation rather than isolated activity?
  6. If someone unfamiliar with my field saw these exhibits, would they understand why this reflects peer recognition?

Bottom line

Peer reviews can absolutely help an EB1A case.

But USCIS usually responds better when the evidence shows that recognized organizations trusted you to judge the work of others, not just that you accumulated review counts.

That means the strongest bundle is usually invitation or selection proof, venue credibility and selectivity proof, completion proof, and scale and stature proof.

The goal is not to dump screenshots. The goal is to make peer recognition easy to see.

If you want to inspect the structure before buying, use the sample preview first. If you have real evidence but it is packaged thinly, Starter is the right next step. If you are not even sure the case is close enough, take the free fit check.