AI Use Policy Announcement

Building Trust Through Transparency: Skills for Change's AI Use Policy 

When you work with vulnerable populations (newcomers navigating complex immigration systems, refugees rebuilding their lives, seniors, women and youth, families seeking settlement services), trust isn't optional. It's everything.

That's why this month, Skills for Change became one of the first settlement agencies in Canada to deploy a comprehensive AI Use Policy. All 46 pages of it! On this AI in Work Day, I want to share why we did it, what we learned, and invite other nonprofits to join the conversation. 

Why We Developed This Policy 

The catalyst was twofold. First, our staff were already using AI tools. The question wasn't whether to use AI; it was how to use it responsibly.

Second, earlier this year Skills for Change was selected as one of only four organizations across Canada to receive funding from Google's $13 Million AI Opportunity Fund. We're developing AI skills programs to train individuals from communities facing high unemployment and underemployment. 

If we're going to teach AI literacy to our clients, we need to model responsible AI use ourselves. You can't credibly train others on something you haven't fully figured out internally.

Our sector handles incredibly sensitive information every day. Immigration status. Personal histories. Assessment results. Employment barriers. The stakes of getting AI wrong aren't just operational; they're human.

To establish effective AI policies, we need to ensure we are:

  • Protecting client privacy and dignity 

  • Maintaining compliance with IRCC requirements and funding agreements 

  • Empowering staff to leverage AI benefits without compromising our values 

Our Core Principles 

The policy we built rests on six pillars: 

1. Human Oversight, Always. AI assists, humans decide. Every output gets reviewed. Critical decisions affecting clients never rely solely on AI recommendations.

2. Privacy First. Clear data classification system. Client information never touches AI tools. Period.

3. Bias Awareness. Active recognition that AI systems may not reflect our diverse client base. Staff trained to spot cultural, language, and socioeconomic bias.

4. Transparency. When AI contributes to client-facing content, we disclose it. Clients have the right to know and the right to request human-only alternatives.

5. Learning-Focused Enforcement. We implemented a 90-day amnesty period. The goal isn't punishment; it's building competency and confidence.

6. Iterative by Design. AI is moving incredibly fast, so our policy will be kept up to date to reflect changes that could impact our clients and staff.

Early Learnings (Not Wins Yet) 

We're only a few weeks in, but here's what's emerging: 

The completion itself matters. Getting this comprehensive policy approved and deployed (from CEO sign-off to board review) signals organizational commitment to responsible innovation. 

Staff want guidance, not gatekeeping. The response hasn't been resistance; it's been relief. People want to use these tools well.

The "red light/yellow light/green light" framework works. Simple data classification helps staff make decisions in the moment. 

Supervisor training is critical. Middle management needs as much support as frontline staff, maybe more. 

Questions reveal gaps. Every "how do I..." query is helping us refine implementation. 

What's Next 

We're building: 

  • An AI Champions network for peer support 

  • Scenario-based training using real settlement service situations 

  • Quarterly risk assessments and tool version reviews 

  • Comprehensive audit logging for significant AI-assisted decisions 

An Invitation 

If you're in the nonprofit sector (especially immigration, settlement, employment services, workforce development, youth programs, or climate action) and are working towards effective use of AI, let's learn together. 

We don't have all the answers. We're figuring this out as we go. But we believe transparency about our approach can help the sector move forward collectively. 

What challenges are you facing with AI governance? What approaches are working? What keeps you up at night? 

The technology is moving fast. Our responsibility to the communities we serve isn't changing. Let's navigate this together. 

 

Skills for Change has served newcomers and underserved communities since 1982. Our AI Use Policy reflects our commitment to innovation that protects dignity, privacy, and trust.

Skills for Change's AI Use Policy will be shared on this website on October 20.

 

For more information, get in touch with us:

416.658.3101 x 237