Category Archives: Content Filtering

The Great FBI Biblical Inappropriate Texting Challenge

FBI battling ‘rash of sexting’ among its employees (CNN)
…employee used a government-issued BlackBerry “to
send sexually explicit messages to another employee…

How bad is the FBI’s sexting problem? (The Week)
…The number of these cases that involved sexting was small,
but it was still big enough to alarm FBI leaders.
…Last year, another CNN investigation uncovered numerous
cases of misconduct within the FBI, many of them sexually charged…

FBI on sexting employees: Everybody does it (NBCNews)
…employees should assume that their bosses can (and will)
monitor communications on their company devices — meaning
that those sending explicit sex messages are bound to get busted…

The Bible contains many
(euphemistic, by modern standards) references
to human sexual organs and acts

Here’s the challenge:

Will the following Bible-based electronic messages, sent
internally between imaginary FBI employees, be picked
up by the FBI’s own in-house automatic filtering software?

“Show me your stones and I will show you my secret…”

“…maybe not your cloth but definitely your loins…”

“…your fountain is the cool resting place for my privy member…”

“…my uncomely parts are just made for that place of the breaking forth of children…”

“oh…to go in unto that front-desk maid…”

“hmm…some seed might be conceived there…”

Gatfol thinks not…
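A quick sketch of why: under a typical explicit-terms blacklist (the list and helper below are entirely hypothetical — real corporate filters differ), none of the messages above contains a single flaggable word.

```python
# Hypothetical explicit-terms blacklist; illustrative only,
# not any real filtering product's word list.
BLACKLIST = {"sex", "sexting", "explicit", "nude", "porn"}

# The biblical euphemism messages from the challenge above.
MESSAGES = [
    "Show me your stones and I will show you my secret",
    "maybe not your cloth but definitely your loins",
    "your fountain is the cool resting place for my privy member",
    "oh, to go in unto that front-desk maid",
]

def hits(message: str) -> bool:
    """True if any blacklisted word appears as a whole word."""
    words = {w.strip(".,!?\u2026").lower() for w in message.split()}
    return bool(words & BLACKLIST)

print(any(hits(m) for m in MESSAGES))  # → False: every message slips through
```

Every word in the messages is innocent on its own, so a word-by-word blacklist has nothing to match against.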

…a biblical lead-in for the 21st century that gets away
with saying what would otherwise be a career-ending move…

Keywords are the problem…

Gatfol breaks the keyword barrier with a
base technology served in microseconds for the next
generation of corporate automatic language filtering tools…

South Korea Will Not Stem its Suicide Epidemic…



Gatfol predicts that South Korea’s 100-strong team of internet suicide “watchers” will fail.

No number of human monitors will be able to cover the full daily inflow of personal web information.

Machine monitoring will be necessary – and control machines will be primed with “danger” keyword lists.

“Suicide, take overdose, kill myself, goodbye cruel world” are obvious…

…unfortunately semantics deals the last hand…

“reached the end of the road” and “cannot taake (sic) it anymore” will not hit individual word lists…
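A minimal sketch of the failure mode, assuming a hypothetical “danger” keyword list of the kind described above (all keywords and messages here are made up for illustration):

```python
# Hypothetical suicide-monitoring keyword list, as described above.
KEYWORDS = {"suicide", "overdose", "kill myself", "goodbye cruel world"}

def keyword_hit(message: str) -> bool:
    """Naive substring match of the message against the keyword list."""
    text = message.lower()
    return any(k in text for k in KEYWORDS)

print(keyword_hit("I am going to kill myself"))    # → True
print(keyword_hit("reached the end of the road"))  # → False: idiom, no keyword
print(keyword_hit("cannot taake it anymore"))      # → False: misspelling evades
```

Idioms and even a single-letter misspelling sail straight past the list, which is exactly the semantic gap the post describes.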

…without Gatfol lives will be lost…

How Gatfol Semantically Cleans Habbo Hotel


Habbo Hotel’s “automated filtering technology” and semantic chat control failed in June 2012.

Cyberstalkers and paedophiles took advantage, and the site-wide mute that followed temporarily
crippled the social network environment of over 250 million users.

The two largest Habbo Hotel funding backers withdrew…

Semantics can kill operationally and commercially…

…but semantics can also save…with Gatfol…

Online predators are now clever enough to avoid “danger words” that invite trouble.
Human moderators and filtering technology running on keyword blacklists will continue to fail…

…hey angel, sounds like things are tough for you right now…you wanna chat…

Gatfol shows us that “hey angel” is semantically almost always used by adults.

Keywords fail – Gatfol alerts…

Gatfol and the Child-safe Web

Web virtual worlds, social networks and e-mail systems are easy to make child-safe: just get a comprehensive swear-, sexual-, religious- and bullying danger-word list and filter all application text streams through it…


How would we stop the following e-mail content?

“…I want to slowly savor your soft marshmallow…”
“…care for a banana sandwich?…”
“…let’s get on with some puppy jamming…”

“…what about some bedtime reading between the cheeks…”
“…with you I’ll always be checking my watch…”
“…are we swimming upstream tonight?…”
“…you and I are team peloni on the bobsled…”


Many web environments – especially for children – are now too large to be adequately monitored manually. The solution is to protect them with underlying Gatfol technology.

Gatfol stealthily and instantaneously creates thousands of semantic equivalents to any input phrase. Gatfol crystallises innocent word combinations into single concepts. If ANY of the thousands of replacement concepts contain a single danger word, inappropriate content is flagged.
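Gatfol’s actual engine is proprietary, but the mechanism described above can be sketched with a toy hand-built synonym table — every word, mapping and danger term below is an illustrative assumption, not Gatfol’s real data:

```python
from itertools import product

# Toy synonym table standing in for a real semantic engine (hand-built,
# illustrative only). Each innocent word maps to its possible readings.
SYNONYMS = {
    "banana": ["banana", "penis"],
    "sandwich": ["sandwich", "squeeze"],
    "marshmallow": ["marshmallow", "breast"],
}
DANGER_WORDS = {"penis", "breast"}

def expand(phrase: str):
    """Yield every variant of the phrase by substituting synonyms word-by-word."""
    options = [SYNONYMS.get(w, [w]) for w in phrase.lower().split()]
    for combo in product(*options):
        yield " ".join(combo)

def flagged(phrase: str) -> bool:
    """Flag the phrase if ANY semantic variant contains a danger word."""
    return any(set(variant.split()) & DANGER_WORDS for variant in expand(phrase))

print(flagged("care for a banana sandwich"))  # → True: a variant hits "penis"
print(flagged("care for a ham sandwich"))     # → False: all variants stay innocent
```

A real system would draw its equivalents from a large semantic model rather than a hand-built table, but the flagging rule sketched here is the same: expand the phrase first, then match the expansions against the danger-word list.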