Facebook is responding to whistleblower Frances Haugen’s testimony and a recent Wall Street Journal leak by attempting to shift the narrative on hate speech. Integrity VP Guy Rosen has posted a defense of the social network’s anti-hate measures in which he argued that the declining visibility of hate speech matters more than the mere existence of that content. The “prevalence” (that is, visibility) of hate on Facebook has dropped nearly 50 percent in the past three quarters to 0.05 percent of content viewed, Rosen said, or about five views out of every 10,000.

The executive contended it was “wrong” to focus on content removals as the only metric. There were other ways to counter hate, Rosen said, and Facebook had to be “confident” before it removed any material. That meant erring on the side of caution to avoid mistakenly removing content, and limiting the reach of people, groups and pages that are likely to violate policies.

There is a degree of truth here. Facebook has occasionally run into trouble for mistakenly flagging content as hate speech, and an aggressive removal system might lead to further accidents. Likewise, hate will only have limited impact if few people ever see a given post.

However, there’s little doubt Facebook is engaged in some spin. Haugen in her testimony asserted that Facebook can only catch a “very tiny minority” of offending material — that’s still an issue if true, even if only a small fraction of users ever see the material. The Journal’s leaked documents, meanwhile, indicated that Facebook only removed a “low-single-digit” percentage of content and had trouble consistently spotting first-person shooting videos or racist tirades.

Rosen’s response also didn’t touch on Haugen’s allegations that Facebook resisted implementing safer algorithms and other efforts to minimize hateful and divisive interactions. Facebook may be making significant strides in limiting hate, but that’s not the point made by Haugen or other critics — it’s that the social media firm isn’t doing enough.