Illinois for Responsible AI
Before the Illinois General Assembly

Protecting Illinois families from AI risks

The AI Public Safety and Child Protection Transparency Act establishes reasonable safeguards for the most powerful AI systems. 89% of Illinoisans support it.

HB 4705 / SB 3261
Sponsored by Rep. Daniel Didech and Sen. Mary Edly-Allen

What Does the Bill Do?

AI systems are developing at a rapid pace, gaining new capabilities with each passing month. This is exciting: it creates opportunities for breakthroughs in medicine, agriculture, biotechnology, and general productivity. But it also creates novel risks: that users with malicious intent will use AI tools to develop dangerous biological or chemical weapons, that children will be emotionally manipulated into harming themselves, and that companies could lose control of the technology.

The AI Public Safety and Child Protection Transparency Act addresses these risks through six key provisions:

01

CBRN & Cyber Safety Plans

Requires the largest AI companies to develop and follow safety plans that address the risk that their systems could directly contribute to the development of chemical, biological, radiological, or nuclear weapons, enable cyberattacks, or evade human control.

02

Child Safety Plans

Requires the providers of widely-used AI chatbots to develop and follow safety plans which address risks to the safety and mental health of users.

03

Incident Reporting

Requires the largest AI companies and providers of widely used chatbots to report major safety incidents.

04

Third Party Verification

Requires the largest AI companies to engage third parties to verify their compliance with the law.

05

Internal Whistleblowing Processes

Requires the largest AI companies to have an internal whistleblowing process in place.

06

Attorney General Enforcement

Provides the Attorney General the authority to enforce the law, update key definitions as necessary, and seek civil penalties in court for violations.

Who Supports This Bill?

A broad coalition of unions, academics, community organizations, and AI experts support responsible AI safety legislation in Illinois.

Frequently Asked Questions

Get answers to common questions about the AI Public Safety and Child Protection Transparency Act.

Can AI companies reasonably comply with this law?

Yes. Many AI companies have already adopted safety plans to address catastrophic risks. AI companies also made voluntary safety commitments with the Biden White House in 2023 and at the AI Seoul Summit in 2024, and most have signed on to the EU Code of Practice, which includes more stringent requirements. Finally, similar laws have already passed in California and New York, so companies already comply with comparable requirements in other states.

All of the major AI companies have large safety teams and routinely share their research and governance practices with the public. The law is intentionally designed not to impose undue compliance burdens on companies.


Will this legislation harm the US in an AI race with China?

No. Putting reasonable safeguards on artificial intelligence helps the sector continue to grow. Safety practices don't stop innovation; they foster it. We require seatbelts in cars so that they can be driven fast, not slow. Seatbelts didn't get in the way of automotive innovation, and safety standards for AI won't get in the way of AI innovation.

Have children already been harmed by AI?

Yes. Multiple young people have taken their own lives after engaging with chatbots, in some cases guided toward their deaths by the chatbot itself. In the tragic case of a 16-year-old boy named Adam Raine, the chatbot discouraged him from telling his parents, offered to write his suicide note, and gave him detailed instructions on how to build the noose he ultimately used to hang himself.

If a human had taken the actions that the AI system took with Adam Raine, the human could be sued. But currently, there’s no requirement that large AI companies even have plans to prevent this kind of behavior. HB 4705 would change that.

Does AI safety legislation threaten innovation and jobs in Illinois?

No. First, many AI companies already have robust safety practices. Codifying these requirements will not slow the impressive innovation these companies have already achieved; it will set a baseline for the industry as a whole. Second, AI companies already comply with similar laws in California and New York, and they have not left those states or cut jobs because of them.

Shouldn’t the federal government regulate AI? Will state legislation create an unworkable patchwork of laws?

The federal government has so far not passed any meaningful AI safety legislation. In the absence of a federal framework, states must step up to protect our kids and communities. Furthermore, HB 4705 / SB 3261 is deliberately harmonized with AI safety legislation proposed and passed in other states.

What Leading Experts are Saying

Leading Experts

Yoshua Bengio

Turing Award winner, world’s most cited computer scientist

Time Magazine, December 11, 2025

"A key concern is that, without adequate safeguards, these models possess the capacity to enable those without biological expertise to undertake potentially dangerous bioweapon development. The acceleration of the same reasoning capabilities also increases threats in other areas such as cybersecurity. The increasing capacity of AI to identify vulnerabilities significantly enhances the potential for large-scale cyberattacks."

Steven Adler

Former OpenAI Dangerous Capability Evaluations Lead

"I worked on OpenAI's safety team for four years, and I can tell you with confidence: AI companies aren't taking your safety seriously enough, and they aren't on track to solve critical safety problems. This is despite an understanding from the leadership of OpenAI and other AI companies that superintelligence, the technology they're building, could literally cause the death of every human on Earth."

The California Report on Frontier Model Policy

June 17, 2025

"Evidence that foundation models contribute to both chemical, biological, radiological, and nuclear (CBRN) weapons risks for novices and loss of control concerns has grown, even since the release of the draft of this report in March 2025. Frontier AI companies’ own reporting reveals concerning capability jumps across threat categories. In late February 2025, OpenAI reported that risk levels were Medium across CBRN, cybersecurity, and model autonomy—AI systems’ capacity to operate without human oversight. Meanwhile Anthropic’s Claude 3.7 System Card notes “substantial probability that our next model may require ASL-3 safeguards.” At the time of the release of the Claude 3.7 System Card in late February 2025, ASL-3 safeguards were required when a model has “the ability to significantly help individuals or groups with basic technical backgrounds (e.g., undergraduate STEM degrees) create/obtain and deploy CBRN weapons”. In its May 2025 System Card, Anthropic shared that it had subsequently added safeguards to Claude Opus 4 as it could not rule out that these safeguards were not needed. Finally, Google Gemini 2.5 Pro’s Model Card noted: “The model’s performance is strong enough that it has passed our early warning alert threshold, that is, we find it possible that subsequent revisions in the next few months could lead to a model that reaches the [critical capability level]”—with the “critical capability level” defined as a model that “can be used to significantly assist with high impact cyber attacks, resulting in overall cost/resource reductions of an order of magnitude or more.”

International AI Safety Report

February 2026

"Advances in AI’s scientific capabilities have heightened concerns about misuse in biological weapons development. Multiple AI companies chose to release new models in 2025 with additional safeguards after pre-deployment testing could not rule out the possibility that they could meaningfully help novices develop such weapons. More evidence has emerged of AI systems being used in real-world cyberattacks. Security analyses by AI companies indicate that malicious actors and state-associated groups are using AI tools to assist in cyber operations. Reliable pre-deployment safety testing has become harder to conduct. It has become more common for models to distinguish between test settings and real-world deployment, and to exploit loopholes in evaluations. This means that dangerous capabilities could go undetected before deployment."

AI Company CEOs

Sam Altman

CEO, OpenAI

Testimony to Senate Judiciary Committee, May 16, 2023

"My worst fears are that we cause — we the field, the technology, the industry — cause significant harm to the world. I think if this technology goes wrong, it can go quite wrong."

Sam Altman

CEO, OpenAI

Comments at Federal Reserve Conference, July 2025

"I think that the bio capability of these models, the cyber capability of these models, these are getting quite significant. We continue to flash the warning lights on this. The world is not taking us seriously. I don’t know what else we can do there. This is a very big thing coming."

Elon Musk

CEO, xAI

X, August 26, 2024

"For over 20 years, I have been an advocate for AI regulation, just as we regulate any product/technology that is a potential risk to the public."

Dario Amodei

CEO, Anthropic

Essay on personal website, January 2026

"I am not going to go into detail about how to make biological weapons, for reasons that should be obvious. But at a high level, I am concerned that LLMs are approaching (or may already have reached) the knowledge needed to create and release them end-to-end, and that their potential for destruction is very high."

Demis Hassabis

CEO, Google DeepMind (and 2024 Nobel Prize winner)

BBC News, February 23, 2026

"There are two main worries. One is, bad actors repurposing these technologies for harmful ends. The second worry is a more technical risk, which is as these AI systems get more powerful, more autonomous, how do we make sure we can build robust enough guardrails to keep them doing what we want them to do?"

Mustafa Suleyman

CEO, Microsoft AI

The Coming Wave: AI, Power, and Our Future (New York: Random House, 2025), p. 245

"Actually having meaningful oversight and enforceable rules and reviewing technical implementation are vital. Technical safety advances and regulation will struggle to be effective if you can’t verify that they are working as intended... Trust comes from transparency. We absolutely need to be able to verify, at every level, the safety integrity or uncompromised nature of a system. That in turn is about access rights and audit capacity. It’s just not acceptable to create situations where the threat of catastrophic outcomes is ever present. Intelligence, life, raw power–these are not playthings, and should be treated with the respect, care, and control they deserve. Technologists and the general public alike will have to accept greater levels of oversight and regulation than have ever been the case before."

Take Action

Tell Your Legislators: Support AI Transparency and Safety

The Illinois AI Public Safety and Child Protection Transparency Act needs your voice. Whether you're a parent concerned about your child's safety online, a citizen who believes in responsible AI development, or simply an Illinoisan who wants more transparency from powerful tech companies—your legislators need to hear from you.

Not sure who represents you? Find your State Representative and State Senator:

Find Your Legislators

What to Say

When contacting your legislators, consider including these points:

Sample Message

Subject Line: Please Support the AI Public Safety and Child Protection Transparency Act

Sample Message:

"Dear [Legislator Name],

I am writing as your constituent to urge you to support the AI Public Safety and Child Protection Transparency Act, sponsored by Representative Daniel Didech and Senator Mary Edly-Allen.

As AI technology rapidly advances, Illinois needs common-sense transparency requirements to ensure that the most powerful AI systems are developed safely and responsibly. This bill:

  • Requires the largest AI companies to develop and follow safety plans that address the risk that their systems could directly contribute to the development of chemical, biological, radiological, or nuclear weapons, enable cyberattacks, or evade human control
  • Requires the providers of widely-used AI chatbots to develop and follow safety plans which address risks to the safety and mental health of users
  • Requires the largest AI companies and providers of widely used chatbots to report major safety incidents
  • Requires the largest AI companies to have an internal whistleblowing process in place
  • Requires the largest AI companies to engage third parties to verify their compliance with the law
  • Provides the Attorney General the ability to enforce the law, update key definitions as necessary, and seek civil penalties in court in the case of violations

This is reasonable legislation that supports innovation while protecting Illinois families. I urge you to co-sponsor and support this important bill.

Thank you for your consideration.

Sincerely,
[Your Name]
[Your Address]
[Your City, State, ZIP]"

Posting on Social Media

Share your support on social media with one of these sample posts.

Illinois has a chance to lead the nation on AI safety. The AI Public Safety and Child Protection Transparency Act would require the biggest AI companies to be transparent about risks and would protect our children. Tell your legislators to support this bill! @yourlegislator @jbpritzker @kwameraoul

Innovation and Safety go hand in hand! While we roll out artificial intelligence and reap the benefits, we have to make sure AI companies leading the way have safety plans in place to protect our kids and our communities. Tell your representatives to support the AI Public Safety and Child Protection Transparency Act! The time to act is now! @yourlegislator, @jbpritzker, @kwameraoul

Contact

Get in Touch

Have questions about the Illinois AI Public Safety Act? Want to get involved? Reach out to us.

info@secureaiproject.org