Our manifesto

We believe that safety is not a feature, it’s a foundation.

In a world reshaped by artificial intelligence, we design for humanity first. Not convenience. Not scale. Not novelty. Humanity.

Designing for safety means designing for the people who will be most impacted, not just the loudest voices in the room.

Too often, protocols are built by a few, imposed on many. We reject top-down safety. Instead, we ask: What would it look like to learn the collective values of everyone who uses these systems? To treat every interaction not just as data, but as a vote. To build protocols that are participatory, not paternalistic. To ask everyone using AI what safety means to them and to listen.

We don’t automate away the human; we honour them.

While others race toward machine speed, we slow down to pay attention. We study humans, not just the model. We value messiness, emotion, contradiction – the parts AI can’t replicate. Because what makes us human is not an error to correct. It’s the point.

Safer.design is a commitment to asking different questions.

Instead of: How do we get users to trust AI? We ask: How can AI earn our trust?

Instead of: How do we design AI to feel human? We ask: How do we protect what’s human?

Safety is not static. It’s a continuous, collective process.

So we build tools and frameworks that evolve with our communities. We co-create ethical guardrails, visible feedback loops, and space for dissent. We design with the expectation that what’s safe today may not be safe tomorrow.

This is not about control. It’s about care.

Care for the vulnerable. Care for the overlooked. Care for the future.

We are Safer.design. And we’re building AI-integrated design approaches that put people, not progress, at the centre.

Because the future shouldn’t just be smarter. It should be safer.

Join the Movement for Human-Centred AI

Be the first to access our tools, frameworks, and early workshops.