Kids Online Safety Act
S.1748 – Kids Online Safety Act: Rules for platforms minors use and transparency on algorithms
119th Congress
S.1748 sets national rules for how large online platforms, games, and streaming services must protect users under 17. It requires safety and privacy safeguards, clearer notices, outside audits, and options to see content without personalized algorithms. It is introduced legislation; if passed, most of its provisions would take effect 18 months after enactment.
- Bill Number: S.1748
- Chamber: Senate
What This Bill Does
Title I creates a legal “duty of care” for covered platforms to design and run their services in ways that help prevent certain harms to minors, like compulsive use, serious bullying, sexual exploitation, and promotion of drugs, gambling, or alcohol. Platforms must provide strong default safeguards for users they know are under 17, such as limits on who can contact them, controls on sharing personal data and location, default limits on features that encourage endless use (like autoplay and infinite scroll), and clear options to turn off or narrow personalized recommendations. Platforms must also offer tools for parents to manage young children’s accounts, see and limit time on the platform, and restrict purchases, with these tools turned on by default for children under 13 unless a parent has already opted out.

The bill requires platforms to give minors and parents clear notices about these safeguards, how recommendation systems use their data, and which content is advertising. Large social platforms must undergo yearly independent audits and publish public transparency reports on how many minors use their services, how long they use them, what safeguards are in place, and how user reports of harm are handled. Title I also bars market and product-focused research on known children, requires parental consent for such research on minors, directs federal agencies to study device-level age verification methods, and requires the Federal Trade Commission (FTC) to issue guidance on identifying risky design features and on how it interprets the “knowledge fairly implied” standard for determining whether a platform knows a user is a minor.

The bill gives the FTC primary enforcement power, treating violations as unfair or deceptive practices, and lets state attorneys general sue platforms in federal or state court for violating the safeguards, disclosure, and transparency sections. It creates a temporary Kids Online Safety Council of experts, parents, youth, educators, industry, and state officials to advise Congress and recommend best practices on online safety and transparency standards. The bill states it does not change COPPA, FERPA, or Section 230, and it does not require platforms to start collecting age data or to use age-gating or age-verification systems.

Title II focuses on “filter bubble” transparency for online platforms that use opaque algorithms. These platforms must clearly tell users when an algorithm uses user-specific data to rank or recommend content and must explain the key features, inputs, and data categories it uses. They also must let users easily switch to an “input-transparent” option that shows content based only on data the user directly provides, like search terms, follows, or subscriptions, and they cannot charge more or reduce service if a user chooses that option.

Title III sets the relationship to state law. It allows state laws that give stronger protections to minors but preempts state provisions that directly conflict with this act. If any provision of the act is ruled invalid, the rest of the act stays in effect.
Why It Matters
This bill would set nationwide baseline rules for how social media, online games, and some streaming services treat minors, instead of leaving these issues mainly to company policies or a patchwork of state laws. It would directly affect how young people interact with platforms by changing default settings, limiting certain design features, and giving both minors and parents more built-in tools to manage time, privacy, and interactions. For platforms and app developers, the bill would create clear legal duties tied to specific harms, with enforcement by the FTC and state attorneys general. This could lead to changes in product design, data practices, and how algorithms are used for younger users, potentially increasing compliance and auditing costs.

The filter-bubble rules would also affect adults, giving all users more information about ranking algorithms and the option to see content ordered without relying on hidden user profiling, though the exact practical impact on user experience would depend on how platforms implement those options.

The bill leaves important details—such as how companies identify minors without collecting more data, how “reasonable care” is judged, and how audits are conducted—to future FTC guidance, audits, and court decisions. As a result, while the main goals and requirements are clear in the text, some real-world effects, like changes in teen mental health outcomes or the amount of content users see, are uncertain and would become clearer only after implementation and enforcement.
Arguments
Arguments in support
- Creates clear, nationwide standards for platforms to build in safety and privacy protections for minors, instead of relying on voluntary company policies or a patchwork of state rules.
- Targets specific, named harms (such as compulsive use, severe harassment, and sexual exploitation) and ties them to design features, which may push companies to redesign products with young users’ well-being in mind.
- Strengthens parents’ ability to manage their children’s online use through default-on tools for younger children and clearer information about safeguards, time use, and purchases.
- Increases transparency by requiring independent audits and public reports on how platforms are used by minors and how well safeguards work, which can help researchers, policymakers, and families understand risks and responses.
- Gives all users more control and insight into algorithmic rankings by requiring platforms that use opaque algorithms to offer a non-profiled, input-transparent view and to explain in plain language how ranking systems use user data.
- Preserves existing federal privacy and liability frameworks and explicitly avoids mandating new age-gating or age-verification systems, which may reduce pressure for platforms to collect more sensitive identity data.
- Involves a range of stakeholders, including youth, parents, educators, disadvantaged communities, and industry, through the Kids Online Safety Council to inform future improvements and technical standards.
Arguments against
- The “duty of care” standard and broad list of harms could be seen as vague, making it hard for platforms to know exactly what designs are legally acceptable and potentially leading to over-removal of content or over-restriction of features for minors.
- Requirements to identify and treat minors differently, combined with forthcoming FTC guidance on the “knowledge fairly implied” standard, may push companies to expand age detection or profiling, raising privacy concerns even though the bill does not formally require age verification.
- The new reporting, audit, and compliance obligations for large platforms, along with the added parental tools and safeguards, could impose significant costs and technical burdens that smaller or newer services may struggle to meet as they grow toward the coverage thresholds.
- Limits on certain design features and personalized recommendations for minors may reduce personalization and convenience for older teens who may prefer or rely on these features for social connection, information discovery, or creative expression.
- Giving both the FTC and state attorneys general enforcement powers may lead to overlapping investigations or differing interpretations, creating legal uncertainty and incentives for cautious behavior that could affect product design and content policies.
- The requirement for an easy switch to an input-transparent algorithm could complicate user interfaces and may not substantially change outcomes if most users stay with personalized feeds, raising questions about whether its effectiveness justifies the added complexity.
Key Facts
- Applies to “covered platforms,” including online platforms, online video games, messaging apps, and video streaming services that are used or likely to be used by minors, with specific carve-outs (e.g., email, basic SMS/MMS, most schools, libraries, some news/sports sites, VPNs, government .gov sites).
- Defines “child” as under 13 and “minor” as under 17; requirements are stronger and more parent‑controlled for children than for older minors.
- Establishes a duty of care requiring platforms to exercise reasonable care in design features to prevent and mitigate specified harms (e.g., compulsive use, certain mental health conditions tied to usage, severe harassment, sexual exploitation, and promotion of narcotics, tobacco, cannabis, gambling, or alcohol).
- Requires default high‑protection settings for minors on communication limits, data sharing, geolocation sharing, and design features that encourage prolonged use, unless changed by a parent.
- Mandates easy controls for minors to limit time on the platform and for parents to view and restrict time and purchases, with parental tools enabled by default for known children.
- Requires platforms to maintain a reporting system for harms to minors, including an electronic contact point and set response deadlines (10 or 21 days depending on platform size, with faster responses for imminent threats).
- Bans advertising of narcotic drugs, cannabis products, tobacco, gambling, or alcohol to users the platform knows are minors.
- Requires clear notices before minors register or purchase, parental consent and information for known children, and clear labels showing when content is advertising and what product or brand it promotes.
- Requires large social platforms (over 10 million U.S. monthly users and that primarily host user-generated content) to obtain annual independent audits and publish public transparency reports on minors’ usage, safeguards, parental tools, and handling of harm reports, with data de-identified and aggregated.
- Prohibits market or product-focused research on known children and requires verifiable parental consent for such research on minors.
- Directs a federal study on device- or operating-system-level age verification methods and requires a report to Congress within one year of enactment.
- Directs the FTC to issue guidance within 18 months on identifying harmful design features, protecting against misuse of parental tools, using age inferences, and interpreting the “knowledge fairly implied” standard.
- Treats violations as unfair or deceptive practices under the FTC Act; gives state attorneys general authority to bring civil actions for violations of the safeguards, disclosure, and transparency sections, but not directly for the duty-of-care section.
- Creates an 11-member Kids Online Safety Council drawn from government, experts, parents, youth, educators, industry, and disadvantaged communities to provide interim (1-year) and final (3-year) reports and recommendations before the council sunsets.
- States that the act does not change or preempt COPPA, FERPA, or Section 230, and does not require platforms to newly collect age data or implement age-gating or age verification.
- Requires platforms using “opaque algorithms” to clearly disclose that they use user-specific data for ranking, explain key features and data inputs, and allow users to easily switch to an “input-transparent” option without paying more or receiving fewer services.
- Makes violations of the opaque-algorithm transparency and opt-out requirements enforceable by the FTC as unfair or deceptive practices.
- Preempts state laws only where they conflict with this act, while allowing states to provide greater protections for minors; includes a severability clause to keep remaining sections in force if part is struck down.
Gotchas
- State attorneys general are barred from using the duty-of-care section (Section 102) as a direct basis for liability under state law, even though they can enforce other sections; this may focus state cases more on failures in safeguards, notices, and reporting than on alleged harm-causing designs themselves.
- The bill repeatedly addresses First Amendment concerns by stating that enforcement cannot be based on the viewpoints expressed in user speech and by clarifying that it does not alter Section 230, which can temper expectations that platforms will remove certain lawful but harmful content.
- Video streaming services that mainly show provider-selected programming (rather than user-generated content) can be deemed compliant through a separate, narrower set of capabilities, meaning many major streaming platforms might face a lighter set of changes than social media sites.
- The bill does not cover some services often used by youth, such as basic email, standard SMS/MMS messaging, and many school systems, unless they also function as covered platforms.
- The Kids Online Safety Council is exempt from the usual Federal Advisory Committee Act rules, which affects transparency and procedural requirements that normally apply to federal advisory bodies.
- For opaque algorithms, the bill protects trade secrets and confidential business information, so the detailed workings of ranking systems do not have to be fully revealed, even though general features, inputs, and data categories must be described.
- The bill allows platforms to partner with third-party tools or use operating system or console-level controls to meet safeguard requirements, which could shift some implementation to device makers or specialized safety providers rather than each platform building all tools alone.
