This was published by the Digital Policy Alliance in March 2015 and used to help brief politicians on why robust Age Checking processes had the support of a critical mass of on-line publishers and retailers who would also help deliver any political pledges with regard to child protection. It puts the current consultation on age checking to help protect children from pornography into context.
Contact, Conduct, Compliance, Content, Commerce + Choice, Consent, Co‐operation and Confidence
Robust age verification for both children and young people wishing to access a range of products and services online, including via mobile devices, is regarded by many as essential to improving children’s safety online. Most children now spend more time online, unsupervised, using smart phones and tablets than in the playground or park. Child protection is the main driver for social networks and content providers to check the ages of those with whom they are dealing online. It is not the only one. The requirements in the “real” world for age checks, including on travel passes and other concessions for young and old, can be achieved simply by the presentation of a photocard from a reputable organisation, such as a Citizencard (for the adolescent) or a Freedom Pass (for the mature). Why should an online equivalent be so difficult?
Debate on practicalities masks that on commercial, moral and political priorities
Five years ago the age verification processes available were not fit for purpose (see What’s Changed). Today the UK is on the cusp of defining standards to underpin the roll‐out of scalable, viable, cost‐effective age verification solutions, built on the principle of ‘verify once, use many times’. The next step is to turn these into internationally‐recognised interoperable standards, supported by certification programmes that define the liability models. It is time for social networking platforms, data aggregators and advertisers to re‐visit their approaches to age verification, not just to ensure that they comply with legislation designed to protect children online but also to facilitate the confident use of online services by all age groups.
Who is working to find realistic ways forward?
• Those providing online gambling services are required to ensure all customers are over 18. As many 18‐24 year olds are not on the electoral register and do not have a credit history, gambling operators have had to develop innovative ways of meeting this requirement.
• Those providing access to adult entertainment are required by ATVOD (Authority for Television on Demand – replaced 1st January 2016 by Ofcom) to check the age (but not necessarily the identity) of those seeking access.
• Those running online sales operations and payment services: who may be liable for not undertaking checks before selling age‐restricted products (e.g. alcohol, knives, tobacco, vaping products, etc.).
• Those running educational services and linked social networks for children: they need user friendly but robust age verification checks to facilitate social inclusion while protecting against abuse.
Solutions here can also be applied in other situations, e.g., to prove eligibility for education grants or to crowdsource business start‐ups [those aged under 18 not allowed to use most processes].
The Digital Policy Alliance has therefore brought together a range of industry players who need effective solutions for marketing and moral, as well as regulatory, reasons to work towards national and internationally recognised standards that reflect common needs. These should allow information already on file across central and local government (including DWP and the NHS) and/or the private sector to be used by service providers to reliably check the age of almost any online user, including those who wish to remain anonymous, providing the relevant regulations permit this.
Delivering results at affordable cost (operational as well as up front) requires consensus on nine Cs: Contact, Conduct, Compliance, Content, Commerce, Choice, Consent, Co‐operation and Confidence.
One of the most feared risks in the online world is that of bullying and abuse as a result of inappropriate and unwanted contact between users, particularly (but not only) those who are not telling the truth about their age, let alone identity. This includes contacts in which one user may exploit another, such as an adult with a sexual interest in children.
Robust age verification has real potential to reduce the risks. It also has the positive effect of making it easier and safer to use educational and social platforms to, for example, enable school children learning foreign languages to network with those of the same age, in different countries. The benefits of reliable age verification, including the choice (see below) of whether to link identity and age, increase with adolescence and transition to the age of work.
Youngsters become increasingly vulnerable as they explore socially or try to do business online (perhaps below the age at which credit or conventional funding are available). They may, or may not, wish to disclose their age lest they be discriminated against. Similar issues apply to older age groups.
Many risks relate to how people behave online. Those we seek to protect may also initiate or participate in antisocial or criminal activities. These can include bullying or victimisation (spreading rumours and gossip, excluding peers from one’s social group and withdrawing friendship or acceptance) and potentially risky behaviours (which may include divulging personal information, posting sexually provocative photographs, impersonation, lying about age or arranging to meet face‐to‐face with people only previously met online). Those who have not made the necessary checks are also placed at risk when children try to gain access to products and services which it is illegal to supply to them. Age verification mechanisms will not address behavioural issues but can be an important addition to the safety measures used by companies, parents, guardians and educational institutions. They also make it easier for those providing services to fulfil their moral and legal obligations.
The approach is to address the generic problem of age verification, using privacy preserving, data minimization routines to check different ages against authoritative sources while minimising the opportunities for abuse or fraud. Success will entail enabling low cost processes for account registration and transaction authorisation, which operate according to credible, effective and well‐publicised standards (including for anonymity processes where required) and enable low integration costs for less burdensome worldwide working. It also needs to be easier and more attractive (marketing and profitability) for reputable businesses to comply with processes which meet or better the mandatory standards, rather than work around them. That will entail educating staff accordingly.
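The ‘privacy preserving, data minimisation’ principle above can be sketched in a few lines of code: the relying service learns only whether the user clears an age threshold, never the date of birth itself. Everything here (the class name, the in-memory record store) is illustrative, not any real provider’s API.

```python
from datetime import date

# Minimal sketch of a data-minimising age check. The relying service
# asks one question -- "is this user over N?" -- and receives only a
# yes/no answer; the date of birth never leaves the trusted provider.

class AgeAttributeProvider:
    """Holds dates of birth sourced from an authoritative record."""

    def __init__(self, records: dict[str, date]):
        self._records = records  # user_id -> date of birth

    def is_over(self, user_id: str, threshold_years: int, on: date) -> bool:
        """Answer only whether the user is at least threshold_years old."""
        dob = self._records[user_id]
        # Whole years elapsed as of the given date, accounting for
        # whether the birthday has occurred yet this year.
        age = on.year - dob.year - ((on.month, on.day) < (dob.month, dob.day))
        return age >= threshold_years

provider = AgeAttributeProvider({"user-42": date(2000, 6, 15)})
# A retailer learns the answer, and nothing else about the user:
print(provider.is_over("user-42", 18, on=date(2015, 3, 1)))  # False: aged 14
```

The same yes/no interface supports checks against different thresholds (13, 16, 18, 25) without ever disclosing the underlying record.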
In the offline world, regulators require businesses to put mechanisms in place to limit the risks of children and young people being exposed to age‐inappropriate, harmful and illegal content. For example, the British Board of Film Classification (BBFC) applies age ratings to films that will be shown at cinemas. These can be enforced during the sale of tickets or at the point of entry to view a film. It is relatively easy to assess the age of a young person in a face‐to‐face setting, albeit “proof” of age is increasingly being required. It is much more difficult to do so in an online setting.
New and emerging online age verification mechanisms seek to replicate the safety mechanisms used in real world settings. The efforts to deploy online age verification mechanisms designed to protect children from accessing age inappropriate content online have implications beyond the adult sector: for example, platforms such as Netflix are attempting to replicate the access restrictions applied in film theatres by enabling parents to set up a ‘Kids’ profile, with an assumption that children will only seek to access age appropriate films. Effective online age verification mechanisms would afford parents a higher level of confidence that their children are indeed watching age appropriate content. This would not, however, remove the onus on a parent to engage with children in “negotiation” as to what is appropriate for them.
In the offline world, there are clear rules around the age restrictions that retailers must apply to the sale of, for instance, alcohol. Many retailers are engaged in initiatives designed to reduce the sale of alcohol to those who are underage, for example by implementing ‘Challenge 25’, the message of which is that if you are over 18 but look under 25, you should expect to be asked for some form of proof of age. Since 1st October 2007 it has been illegal to sell tobacco products to anyone under 18 – this includes cigarettes, cigars, roll‐your‐own and pipe tobacco as well as cigarette rolling papers. Selling any of these products to someone under 18 can result in a fine of £2,500.
Representatives of the tobacco industry and the Electronic Cigarettes Industry are working toward deploying online age verification mechanisms designed to restrict children and young people below the age of 18 from purchasing their respective products online.
When Internet users are purchasing products online and using online payment mechanisms, a number of checks are conducted to ensure that, for example, there are sufficient funds in your bank account and that the postal address associated with the payment mechanism matches that held by the bank etc. The inclusion of an age check carried out before a payment is authorised may be part of enabling frictionless, low cost, viable and scalable age verification solutions. The Digital Policy Alliance’s Age Verification Working Group is working with a range of Identity and attribute providers (age is an attribute of your identity) to enable access to relevant age attributes from a variety of sources, some more authoritative than others. One of the issues will be to obtain access to those in the public sector which could be used to improve confidence and reduce the risk of fraud, impersonation and error in, for example, the Government Verify programme, particularly with regard to those aged under 18. Examples include the school and medical records already used to help control the issue of some age cards.
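As a rough illustration of where such a check could sit, the sketch below adds an over-18 test to a toy payment-authorisation pipeline. The check functions are hypothetical stand-ins for calls to a bank, an address-verification service and a third-party age-attribute provider; none reflect a real payment API.

```python
# Sketch of a payment-authorisation pipeline in which an age check runs
# alongside the usual funds and address checks, before the payment is
# approved. All function names and the order structure are assumptions.

def authorise_purchase(order, has_funds, address_matches, is_over) -> bool:
    """Run the usual payment checks plus an age check for restricted goods."""
    if not has_funds(order["amount"]):
        return False  # insufficient funds at the bank
    if not address_matches(order["postcode"]):
        return False  # address mismatch with the card issuer's records
    if order["min_age"] and not is_over(order["min_age"]):
        return False  # age-restricted item; age attribute not confirmed
    return True

order = {"amount": 25.00, "postcode": "SW1A 1AA", "min_age": 18}
ok = authorise_purchase(
    order,
    has_funds=lambda amount: True,
    address_matches=lambda postcode: True,
    is_over=lambda age: False,  # the age provider could not confirm 18+
)
print(ok)  # False: the sale is declined before payment is taken
```

Because the age check shares the same request/response pattern as the existing checks, it adds little friction for the merchant once an attribute provider is available.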
Many users, of all ages, complain that, in order to obtain service online, they are required to provide information that they see as irrelevant (date of birth is one of the most common). The excuse is commonly “to improve service”. The reality is more usually to enable the supplier or its advertisers to collate information from a variety of sources and “refine” it into the new “oil” of information. The backlash can be seen in enquiries for age‐restricted products and services dropping away as more questions are asked. Many service operators would benefit greatly from being able to ask only that which will enable a trusted (by regulators as well as by the public) age verification provider to “confirm” that the enquirer is above or below the relevant age. Of course no verification can be 100% reliable, whether in the online world or the real world, and the degree of confidence required (see below) will vary according to context and risk (legal, regulatory, moral and commercial).
It is rare that online users, of any age, are given a choice of what information to provide in order to use a product or service. That needs to change. Splitting age verification services from those for identity checking and transactions authorisation, giving choice to both suppliers and consumers, young and old, would be a powerful first step.
Internet users and regulators are concerned about the privacy implications of websites, apps and third party data aggregators tracking internet users’ online activities. There is unease about the privacy implications of data aggregation, online behavioural advertising and the lack of transparency and accountability over what data is collected, how it is used and how long it is retained. There is growing opposition to the assumption that, by virtue of using a platform like Facebook, users have consented to their personal data being collected, collated and sold to unknown third parties. Such anxieties are significantly heightened with regard to young children who may not understand the safety and privacy issues created by the online collection of personal information and are particularly vulnerable to overreaching by marketers. A recent study recorded the levels of tracking on 40 of the top websites visited by children and discovered 1,110 third party trackers on these websites from 644 different tracking organisations. There was an average of 24 third party trackers on pre‐school websites, 25 on education sites, 29 on gaming sites and 34 on entertainment sites. The level of tracking varied significantly, ranging from less than 10 to over 180 third party trackers on the 40 sites analysed.
The Children’s Online Privacy Protection Act (COPPA) requires the Federal Trade Commission to issue and enforce regulations concerning children’s online privacy. These apply to operators of commercial websites and online services (including mobile apps) directed to children under 13, requiring them to obtain verifiable parental consent to the website or app collecting, using, or disclosing personal information from children. In practice, when a child indicates that they are below 13 years of age and wishes to register to access a website, they are asked to supply a parent’s email address, to which an automated email containing a link is sent. If the link is clicked, it is assumed that parental consent to process that child’s data has been obtained. This system has inherent flaws: for example, a child may supply an email address other than their parent’s, one to which they themselves have access. It is, however, deemed by the FTC to be sufficient.
Meanwhile a recent survey indicated that 45% of UK parents whose children had a profile on Facebook didn’t know that the supposed minimum age for this was 13 years of age. More recently, the European Commission (EC) has proposed legislative measures that will define children as data subjects for the first time and require stronger legal protection of children’s personal data in the online environment. In Article 8 of the proposal for the General Data Protection Regulation, the EC introduces verifiable parental (or custodian) consent that would serve as a means of legitimising the processing of personal data of children under the age of 13 on the Internet.
The electronic and mobile identities used in a number of EU member states, but not the UK, enable regulators to require local businesses to include both parental consent and online age verification mechanisms when children register to use online services. The Digital Policy Alliance Age Verification Working Group is exploring the scope to enable parents to have more effective control over how their children’s personal data is processed online. Once these have been identified the intention is to work with the DPA Digital Single Market Group and others to help inform the reviews of Data Protection, Identity and Information Security Regulation, including by the European and UK Parliaments.
The following groups have agreed to work together under the umbrella of the Digital Policy Alliance to agree common requirements: [Statements on DPA website]
• Online Gambling
• Adult Entertainment: ATVOD requires UK based adult content providers to verify that people accessing adult content are over 18.
• Tobacco Industry
• Alcohol Industry
• Online Dating Industry
• Educational Network Operators
• Child Protection organisations
• Silver Surfer support organisations
• Social Inclusion organisations
• Crime Prevention organisations
No identity or verification process, from physical passports and ID cards to electronic signatures, is 100% foolproof. Attempts to achieve perfection tend to result in complex systems which open up as many vulnerabilities as they close, often using sources that are claimed to be “authoritative” but are already seriously compromised or relatively easy to spoof. The need is therefore to agree standards that support processes, products and services that not only meet current and foreseeable regulatory requirements but exceed them, and enable any weaknesses to be addressed as they emerge and affect consumer confidence, without awaiting government action. Performance measures should include the level of fraud and abuse (expected and actual) compared to that with supposedly authoritative online sources, such as the new Government Verify programme.