Concern about children’s safety and privacy online has led to a number of initiatives and programs — by schools, by private companies, and by government entities. These efforts are all aimed at protecting children and teens from what are perceived to be the big dangers on the Internet (sexual predators, advertisers, and bullies, for example), but they’re also aimed at protecting children and teens from themselves.
A newly proposed piece of legislation in California (SB242) aims to mandate new privacy policies and practices for social networking sites. Much of the bill’s language was initially framed in terms of protecting those under age 18. That age restriction has since been removed from the draft, which now requires a number of changes to how social networks handle all their users’ privacy.
Facebook still does not allow users under 13 to register for an account, and the legislation won’t change existing age restrictions. But all social networks would have to establish default settings that prevent public or private display of anything other than a user’s name and city without their consent. New users would have to establish their privacy settings during the registration process. Privacy options would need to be written in “plain language” and displayed in an “easy-to-use format.” Sites would have to remove personally identifying information, including photos, within 48 hours of a request from a user – or from a minor user’s parents. And companies could be fined up to $10,000 any time they fail to do any of this.
Not surprisingly, many notable Internet companies, including Facebook, Zynga, Twitter, Google, and Skype, are expressing their opposition to the bill, saying that not only is it unnecessary, but it violates the First Amendment and would damage California’s technology sector.
Nonetheless, the bill raises a number of interesting questions about how we think privacy and security work online, and for whom. Is there a difference between making the Internet safe for children, safe for teens, and safe for anyone? Is it an easy slide from laws that address the online security of children under age 13 (as in COPPA) to laws covering users under 18, and then all users?
That line between who needs such protection is also at stake as federal legislators look to update COPPA with the “Do Not Track Kids Online Act.” There was some concern that this new COPPA would also raise the age limit on privacy protection measures from 13 to 18, but the draft introduced by Representatives Joe Barton (R-Texas) and Edward Markey (D-Mass.) has left the age limit the same while beefing up and modernizing the language. (COPPA was first passed in 1998, in a pre-Facebook, and even pre-Google, world.)
It isn’t just the age of a child who may or may not need better privacy protection online that has some onlookers concerned; it’s also the role of the parent. Although it may reassure some parents to know that a law could enable them to demand that data about their child be pulled offline within 48 hours, some have interpreted the bill to mean that parents would also have a backdoor to their children’s social media accounts. Teen researcher danah boyd is among many who have balked at this idea, asking, “Why do well-intentioned politicians assume that parent-child dynamics are always healthy?”
How will these parental requests work? How will companies verify parenthood? What about divorced parents? Emancipated minors? When does parental access get revoked?
If parents need to have some sort of system for monitoring their children’s online activities, what should this look like? Should this be legislated? Should technology be used to negotiate children’s online activities, or should parents and children work that out together? That last option is ideal, perhaps, but is it realistic?