This is our district's 30-second survey for the week:

I'll blog the results at the end of the week.
Such a check is long overdue. The last formal study I did on the efficacy of our Internet filter was in 2002 - about a year and a half after first installing it in our district. Here is the report (yes, I keep a lot of junk):
An experiment in evaluating a filter
I took some time over the school’s winter break in December of 2002 (15-18 hours) to do a quick breakdown of the report from our district filtering system (Webblocker/Cyberpatrol set to block only full nudity and sexual acts within our WatchGuard firewall). The study is of blocked sites from 6 am to 6 pm on Dec 19, 2002 - a regular school day.
A total of 617,000 requests were made from our district of 6800 students and 800 staff members using about 2500 networked computers on Dec 19th. Of these, 592 requests were blocked - roughly 0.1%. By eliminating duplicate requests (the same URL within a 5-minute span from the same IP address), the number of unique blocks was reduced to 262. I checked each blocked URL by copying it from the log into my browser (also checking the root IP address if a specific page was blocked), then categorized what I found as follows:
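For anyone who wants to try a similar breakdown, here is a minimal sketch of the de-duplication rule and the blocking-rate arithmetic in Python. The log layout is an assumption made for illustration - a CSV named blocked_requests.csv with timestamp, client_ip, and url columns, sorted by time. A real WatchGuard/WebBlocker export will look different and would need its own parsing; the function name unique_blocks is mine, not anything from the filter's tools.

```python
import csv
from datetime import datetime, timedelta

WINDOW = timedelta(minutes=5)

def unique_blocks(path):
    """Drop repeat blocks: same URL from the same IP within five minutes."""
    last_seen = {}      # (client_ip, url) -> time of the last counted block
    unique = []
    with open(path, newline="") as f:
        for row in csv.DictReader(f):       # assumes rows are in time order
            when = datetime.fromisoformat(row["timestamp"])
            key = (row["client_ip"], row["url"])
            if key in last_seen and when - last_seen[key] <= WINDOW:
                continue                    # duplicate inside the 5-minute window
            last_seen[key] = when
            unique.append(row)
    return unique

if __name__ == "__main__":
    blocks = unique_blocks("blocked_requests.csv")   # hypothetical export file
    print(len(blocks), "unique blocked requests")
    print("share of all requests blocked:", 592 / 617_000)   # ~0.00096, about 0.1%
```

Tracking the last counted time per (IP, URL) pair means a long run of automatic retries only counts once every five minutes, which matches the rule described above.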
SEX = Strong sexual content. 100 attempts blocked. Of these, 72 came from a single machine within a 2.5-hour period - we believe this was the result of the CODE RED virus.
T = Tasteless. 9 attempts blocked.
NOC* = No offensive content. 78 attempts blocked.
PNF = Page or server not found. 72 attempts blocked.
STU = Site temporarily unavailable. 3 attempts blocked.
*NOC details: 14 AOL blocks, 11 banner ads, 6 commercial music sites, 3 Italian pages, 12 search engine sites, 1 medical site (questionable - a non-prurient personal page on female ejaculation), 3 plugin download sites, 3 webhosting sites, 3 gambling sites, 22 miscellaneous.
Since we use DHCP to assign IP numbers to computers throughout the district, we cannot tell precisely where each of these requests originated. But because of our VPN IP assignment scheme, I can tell this much:
107 requests came from a technician's machine (the one with the virus), 3 requests (none of them sexual) came from staff computers, 155 came from lab/student machines, and 1 came from a statically addressed machine somewhere in a high school (exact location unknown).
By building type:
- Elementary buildings: 26
- High school: 44
- Middle school: 10
- Combined HS/JH: 61
- Area alternative high school/community ed/early childhood building: 121 (107 virus-related)
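As a side note for anyone scripting this kind of by-building tally: once the blocked requests are de-duplicated, grouping them by the IP ranges assigned to each building is only a few lines. The subnets and the names BUILDING_SUBNETS, building_for, and tally_by_building below are invented purely for illustration - the real ranges would come from our VPN/DHCP scope assignments, which are not reproduced here.

```python
from collections import Counter
from ipaddress import ip_address, ip_network

# Invented example subnets -- the real ranges would come from the district's
# VPN/DHCP scope assignments.
BUILDING_SUBNETS = {
    "Elementary": ip_network("10.10.0.0/16"),
    "Middle school": ip_network("10.20.0.0/16"),
    "High school": ip_network("10.30.0.0/16"),
}

def building_for(client_ip):
    """Return the building whose subnet contains this address, if any."""
    addr = ip_address(client_ip)
    for name, net in BUILDING_SUBNETS.items():
        if addr in net:
            return name
    return "Unknown"

def tally_by_building(blocks):
    """Count unique blocks per building; `blocks` is the list from unique_blocks()."""
    return Counter(building_for(row["client_ip"]) for row in blocks)
```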
Questions that still remain for me:
- Why are so many of the blocked URLs ones that return File Not Found errors when I try them in a browser?
- How can I determine "leakage" of the filter - whether users are getting to inappropriate sites it does not catch? And why, after 18 months of having a filter, are users still trying to reach sites with "sexual acts" if they do not think they have at least some hope of success?
- How does one determine the "intentionality" of attempts to reach inappropriate sites? How many of these requests were to porn-napped sites? How many were made by kids in error? I have no way of knowing whether users were purposely looking for sites our AUP does not permit (with the exception of a couple of instances where a single machine made multiple requests for such material over a short period of time).
- How successful was I in convincing my wife that I was doing "official school business" after seeing a few of the images on my computer screen?
Conclusions:
- At least in this "snapshot," it does not appear that our filter is grossly overblocking sites. With a blocking rate of less than 0.1% of all requests and only 78 requests for non-offensive pages denied, I see this as tolerable.
- To the extent that I am able to determine, the filter is blocking many inappropriate sites. Most of what I checked, I believe, even the most ardent civil libertarian would agree does not belong in schools. Heck, I had to wash MY eyes out with soap after viewing some of this stuff. I would say that about half the blocked requests for sex sites were for "doorway" pages - non-explicit pages with a strict warning not to enter if under 18.
- This is a very time-consuming process, most of it spent checking and categorizing the blocked URLs. I am not sure how it could be simplified without putting a good deal of control into the hands of the filtering manufacturers, though one partial shortcut is sketched below.
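That partial shortcut, sketched here under the assumption that it runs from a connection outside the filter (otherwise the filter would block the very pages being checked): fetch each blocked URL and bucket it by HTTP status. This would at least separate the "page or server not found" and "temporarily unavailable" cases automatically, leaving only the reachable pages for a human to categorize. The names status_bucket and triage are mine, for illustration only.

```python
from collections import Counter
from urllib.error import HTTPError
from urllib.request import urlopen

def status_bucket(url, timeout=10):
    """Bucket a blocked URL by what a plain fetch returns."""
    try:
        with urlopen(url, timeout=timeout) as resp:
            return f"reachable ({resp.status}) - needs a human look"
    except HTTPError as e:                  # server answered with an error code
        return "PNF" if e.code == 404 else f"HTTP error {e.code}"
    except OSError:                         # DNS failure, refused connection, timeout
        return "PNF/STU (no response)"

def triage(urls):
    """Count blocked URLs by bucket, e.g. triage(row["url"] for row in blocks)."""
    return Counter(status_bucket(u) for u in urls)
```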
Will the 30-second survey do as good a job (or better) in helping the department determine whether our filter is over- or under-blocking?
How does your district determine if its filter is working the way it ought to?