Facebook has said that its moderators removed 8.7 million images of child nudity in just three months.
The social network said that it had developed new software to automatically flag possible sexualised images of children.
The software was put into service last year but has only now been made public.
Another program can also detect possible instances of child grooming related to sexual exploitation, Facebook said.
Of the 8.7 million images removed, 99% were taken down before any Facebook user had reported them, the social network said.
Last year, Facebook was heavily criticised by the chairman of the Commons media committee, Damian Collins, over the prevalence of child sexual abuse material on the platform.
This followed a BBC investigation in 2016, which found evidence that paedophiles were sharing obscene images of children via secret Facebook groups.
Now, Facebook’s global head of safety Antigone Davis has said that Facebook is considering rolling out systems for spotting child nudity and grooming to Instagram as well.
A separate system is used to block child sexual abuse imagery that has previously been reported to the authorities.
“Recently, our engineers have been focused on classifiers to actually prevent unknown images, new images,” Ms Davis said in an online video about the technology.
Such newly discovered material is reported by Facebook to the National Center for Missing and Exploited Children (NCMEC).