87% Credible

Post by @WomenReadWomen

87% credible (92% factual, 78% presentation). The claim of a 400% increase in AI-generated child sexual abuse material is supported by the Internet Watch Foundation's 2025 data. The presentation score is lower, however, because the post omits the specific timeframe and methodological context behind the figure.

Factual claims accuracy: 92%
Presentation quality: 78%
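
The report does not state how the overall 87% is combined from these two sub-scores. One reading consistent with the numbers, offered purely as an assumption rather than the tool's actual formula, is a weighted average that counts factual accuracy roughly twice as heavily as presentation:

% assumed weights; the scoring formula is not disclosed anywhere in this report
\[
0.64 \times 92\% + 0.36 \times 78\% \approx 87\%
\]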

Analysis Summary

The post claims a 400% increase in AI-generated child sexual abuse material (CSAM) over one year, supported by recent reports from the Internet Watch Foundation (IWF). This statistic aligns with IWF's 2025 data showing a significant rise in such content. The image depicts IWF representatives, reinforcing the credibility of the source amid growing concerns over AI's role in exacerbating online child exploitation.

Original Content

AI-generated child sexual abuse imagery has increased by 400% in one year - that we know of. The true figure is likely much higher.

The Facts

The claim is supported by credible reports from the Internet Watch Foundation, which documented a 400% surge in confirmed AI-generated CSAM webpages in the first half of 2025 compared with the same period in 2024. While the post speculates that the true figure is higher, this is a reasonable inference given underreporting challenges. Verdict: Accurate.
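
As a quick arithmetic check, the 400% figure is consistent with the confirmed-webpage counts cited later in this analysis (17 in the first half of 2024 versus 85 in the first half of 2025):

\[
\frac{85 - 17}{17} \times 100\% = \frac{68}{17} \times 100\% = 400\%
\]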

Benefit of the Doubt

The author writes from an advocacy perspective on women's and children's rights, highlighting the dangers of AI-generated CSAM to raise alarm about technological misuse in exploitation. The post emphasizes the dramatic percentage increase and likely underreporting to evoke urgency, while omitting the timeframe, methodology, and geographic scope of the IWF data that would provide fuller context. This selective framing focuses on the crisis without acknowledging ongoing mitigation efforts by tech companies and law enforcement, potentially amplifying fear at the expense of balanced discourse.

Visual Content Analysis

Images included in the original content

VISUAL DESCRIPTION

Two individuals—a man in a dark suit and white shirt standing with hands clasped, and a woman in a gray sweater with red sleeves and a lanyard—pose smiling side-by-side in front of a green wall displaying the Internet Watch Foundation (IWF) logo, globe icon, and slogan 'Working together to stop child sexual abuse online.' The setting appears to be an office interior.

TEXT IN IMAGE

IWF Internet Watch Foundation Working together to stop child sexual abuse online

MANIPULATION

Not Detected

No signs of editing, deepfakes, or artifacts; the image shows natural lighting, consistent shadows, and authentic branding without inconsistencies.

TEMPORAL ACCURACY

Current

The image aligns with recent IWF activities and 2025 reports, with no outdated visual cues, such as old logos or clothing styles, indicating a mismatch with the post's timely claim.

LOCATION ACCURACY

Matches claim

The background clearly features the IWF's official branding and office wall, directly tying the image to the organization's UK-based headquarters, where such photos are commonly taken for promotional or reporting purposes.

FACT-CHECK

The image accurately depicts IWF staff or representatives in their office environment, supporting the post's reference to IWF data on AI-generated CSAM; reverse image search context confirms that similar photos appear in IWF's 2025 reports and announcements.

How Is This Framed?

Biases, omissions, and misleading presentation techniques detected

Medium | Omission: missing context

Fails to provide details on the timeframe (first half of 2025 vs. the same period in 2024), the methodology, or the scope (e.g., webpages confirmed by the IWF), allowing readers to infer a broader, more comprehensive increase

Problematic phrases:

"in one year"

What's actually there:

First half of 2025 vs. the same period in 2024, focused on confirmed webpages

What's implied:

Full-year global surge in all AI-generated CSAM

Impact: Leads readers to overestimate the scale and immediacy of the increase, perceiving it as a more explosive, unchecked trend

Medium | Omission: unreported counter evidence

Omits ongoing mitigation efforts by tech companies, AI developers, and law enforcement, presenting the issue as escalating without intervention

Problematic phrases:

"has increased by 400% in one year - that we know of"

What's actually there:

IWF reports note industry responses and takedown efforts

What's implied:

Unabated rise with no countermeasures

Impact: Heightens perception of a crisis without hope, fostering undue alarm and potentially biased views on technology's role

Low | Urgency: artificial urgency

Speculates on underreporting to imply a hidden, larger crisis, creating unnecessary immediacy beyond the verified statistic

Problematic phrases:

"that we know of. The true figure is likely much higher"

What's actually there:

Verified 400% increase in detected webpages

What's implied:

Dramatically larger undetected epidemic

Impact: Prompts rapid emotional reactions and calls to action, overshadowing deliberate evaluation of the reported data

Low | Scale: denominator neglect

Emphasizes percentage increase without noting the base rate (e.g., from a low absolute number of webpages), potentially exaggerating perceived magnitude

Problematic phrases:

"increased by 400%"

What's actually there:

From 17 confirmed webpages in H1 2024 to 85 in H1 2025

What's implied:

Massive absolute explosion in CSAM volume

Impact: Misleads on the overall scope, making the problem appear far larger in absolute volume than the relatively small number of confirmed instances suggests
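
To make the denominator point concrete, here is a minimal sketch using the webpage counts cited above (17 in H1 2024, 85 in H1 2025), contrasting the relative framing used in the post with the absolute change:

# Minimal sketch: the same IWF figures framed two ways.
# Counts are the confirmed-webpage numbers cited in this analysis, not a fuller dataset.
h1_2024 = 17  # confirmed AI-generated CSAM webpages, first half of 2024
h1_2025 = 85  # confirmed AI-generated CSAM webpages, first half of 2025

relative_increase = (h1_2025 - h1_2024) / h1_2024 * 100  # 400.0 (percent)
absolute_increase = h1_2025 - h1_2024                    # 68 (webpages)

print(f"Relative framing: +{relative_increase:.0f}%")
print(f"Absolute framing: +{absolute_increase} webpages")

Both framings are true; the issue flagged here is that the percentage alone invites readers to picture a far larger absolute volume than 68 additional confirmed webpages.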

Sources & References

External sources consulted for this analysis

1. https://www.iwf.org.uk/about-us/why-we-exist/our-research/how-ai-is-being-abused-to-create-child-sexual-abuse-imagery/
2. https://cyber.fsi.stanford.edu/news/investigation-finds-ai-image-generation-models-trained-child-abuse
3. https://www.nytimes.com/2025/07/10/technology/ai-csam-child-sexual-abuse.html
4. https://www.theguardian.com/technology/2023/oct/25/ai-created-child-sexual-abuse-images-threaten-overwhelm-internet
5. https://journals.sagepub.com/doi/10.1177/09731342251334293
6. https://www.aic.gov.au/sites/default/files/2025-01/ti711_artificial_intelligence_and_child_sexual_abuse.pdf
7. https://www.wired.com/story/generative-ai-images-child-sexual-abuse/
8. https://www.bloomberg.com/news/articles/2025-07-10/ai-generated-child-abuse-webpages-surge-400-alarming-watchdog
9. https://christian.org.uk/news/ai-child-abuse-content-soars-over-1300-per-cent-across-the-globe
10. https://www.theatlantic.com/technology/archive/2024/09/ai-generated-csam-crisis/680034/
11. https://thehill.com/policy/technology/4274456-watchdog-warns-of-ai-generated-child-sexual-abuse/
12. https://www.axios.com/2023/12/20/ai-training-data-child-abuse-images-stanford
13. https://www.newindianexpress.com/world/2024/Oct/18/ai-generated-child-sexual-abuse-content-increasingly-being-found-on-internet-says-watchdog
14. https://www.theguardian.com/technology/2025/apr/23/ai-images-of-child-sexual-abuse-getting-significantly-more-realistic-says-watchdog
15. https://x.com/WomenReadWomen/status/1719024290371346818
16. https://x.com/WomenReadWomen/status/1772172181453320245
17. https://x.com/WomenReadWomen/status/1805175529513070667
18. https://x.com/WomenReadWomen/status/1704384123320902011
19. https://x.com/WomenReadWomen/status/1871099398840779230
20. https://x.com/WomenReadWomen/status/1792383057036791965
21. https://www.iwf.org.uk/news-media/news/professionals-working-with-children-given-vital-guidance-to-tackle-threat-of-ai-generated-child-sexual-abuse-material/
22. https://www.theguardian.com/technology/2025/jul/10/ai-generated-child-sexual-abuse-videos-surging-online-iwf
23. https://manilatimes.net/2025/10/31/tmt-newswire/globenewswire/protect-us-kids-report-reveals-alarming-surge-in-ai-driven-child-exploitation-across-the-us/2212511
24. https://tucson.com/news/nation-world/business/article_b5845007-b6c3-5543-ab54-2ef15f88977a.html
25. https://theconversation.com/millions-of-children-face-sexual-violence-as-ai-deepfakes-drive-surge-in-new-cases-latest-global-data-266171
26. https://1news.co.nz/2025/10/17/surge-in-ai-generated-child-abuse-images-raises-alarm-over-nz-laws
27. https://x.com/WomenReadWomen/status/1721122530780754326
28. https://x.com/WomenReadWomen/status/1921453954803642501


Content Breakdown

Facts: 1
Opinions: 1
Emotive: 0
Predictions: 0