AI Readiness in Social Impact Organizations: It Is a Capability and Culture Problem

Yes, another post on AI. I know. Nobody has the full picture, including me. But I am going to try anyway to make some sense of a very blurry one.

Every week there is a new term to learn. Agentic AI, country of geniuses, multimodal reasoning, RL, agent swarm, and the list goes on. I watch leaders drop these phrases in meetings with great confidence, and I wonder sometimes how many of them could explain what any of it actually means, or more importantly, how it connects to what their organization is trying to do. To be fair, I have been guilty of this myself at one point or another.

But this is not really a post about jargon. It is about something more fundamental: what it actually takes for organizations working in social impact and development to use AI in a meaningful way. And the more I observe, the more I wonder if we are having the right conversation at all.

We talk endlessly about models, tools, and data pipelines. We debate whether one large language model is better than another for a particular task, or whether our data infrastructure is mature enough, or whether we should simply tighten the knowledge sources. These are real questions, no doubt. But they are not the hard questions. The hard question, in my view, is this: why do so many organizations in the social sector invest in AI and still struggle to make it actually land?

In my experience, the answer is not the technology. It is the people and the culture that surrounds the use of this technology.

The Maturity Gap That Is Not Talked About Enough

There is a pattern I have seen play out many times. An organization becomes excited about AI. A small team, usually a combination of data scientists and enthusiastic early adopters, runs a pilot. It works. The results look promising. Leadership is impressed. And then, slowly and quietly, it dies. It gets absorbed back into the organization without ever scaling or changing the way anyone actually works.

So what happened? The organization had the technology but did not have the institutional readiness to absorb it. The staff did not trust it. Work practices were not redesigned to accommodate it. Leadership did not know how to champion it beyond the initial presentation. The pilot was a success. The adoption was a failure.

I think of this as the maturity gap. It is the distance between an organization's technical capability and its cultural and operational readiness to actually embed AI into how it works. In the social impact sector, this gap tends to be particularly wide, and for understandable reasons. These organizations are mission-first. They are often resource-constrained. And they have very legitimate concerns about bias, accountability, and the ethics of deploying AI in contexts that directly affect vulnerable populations. Those concerns are valid and important. But they can also, if we are honest, become a convenient reason to never move past the pilot stage.

And then there is the productivity pressure problem. Every AI conversation in these organizations eventually circles back to the same question: how much time are we saving, and how quickly can we show it? Leadership wants to see dramatic improvements in the shortest possible time, usually because they are responding to what someone read in a consulting firm’s report or heard at a summit or conference. So organizations end up chasing efficiency metrics before they have even figured out whether the AI is producing trustworthy outputs. It is putting the cart very far ahead of the horse!

Fragmentation Is Making Things Worse

The maturity gap gets compounded by fragmentation. In most development-focused organizations, AI work happens in pockets. A data team here, a technology unit there, an innovation lab somewhere else. Each group is doing interesting things, but they are not talking to each other. There is no shared vocabulary, no common framework for evaluating ideas, and no institutional memory of what has been tried, what has worked, and what has not.

The result is that the organization keeps reinventing the wheel, while at the same time having no wheel at all in most of its operations. It is a strange situation: lots of AI activity, very little AI progress.

That model of scattered pockets and labs is not new, by the way. We saw versions of it with data, with digital transformation, and with innovation labs over the past two decades. It had its time and place, and in some contexts it served a purpose. The results were mixed, and honestly, I think that chapter may have run its course. It is particularly ill-suited to AI, because unlike those earlier waves, AI capabilities are genuinely accessible across an organization in a way that earlier technologies were not. You cannot centralize something that is already in everyone's hands. What is needed is something more like a connective tissue across the organization: shared standards, shared platforms, and communities of practice where distributed teams can learn from each other and build on a common foundation.

Culture Eats Strategy, and It Will Definitely Eat Your AI Roadmap

I want to spend a moment on something that rarely makes it into AI strategy documents: the human dimension of AI adoption. 

The fear of replacement is real and it is everywhere. Staff in these organizations are watching the headlines, seeing the layoff announcements at tech companies, and quietly wondering if their job is next. So they do what anxious, smart people do. They go sign up for every AI course they can find. They stack up certifications from Coursera, LinkedIn Learning, and a dozen other platforms. They post about it on LinkedIn. There is a kind of FOMO-driven credentialing happening that feels productive but rarely translates into the specific, contextual AI skills that the work in the social sector actually requires. Knowing how to prompt a chatbot is not the same as knowing how to use AI to meaningfully assess whether a workforce training program is actually moving someone who is formerly incarcerated into a quality job, or whether it is just producing another credential that the labor market does not value. It is also not the same as knowing how to use AI to map the real gap between the green skills that workers currently have and the green jobs that are actually emerging in their region, so that training investments go to the right places rather than the obvious ones.

At the same time, the over-correction is just as real. Once people do start using AI, some organizations respond to every mistake or hallucination by demanding human review for everything the AI touches. Human in the loop for this, human in the loop for that. Which sounds responsible, and in some cases genuinely is. But if you require a human to review every single AI output, you have not really adopted AI at all. You have just added a step to your existing process.

In the social impact sector, there is of course a very legitimate and admirable commitment to equity and human dignity that makes people appropriately skeptical of algorithmic systems. That skepticism is not wrong. But it needs to be calibrated, not blanket. You have to build genuine AI confidence, the kind that comes from hands-on experience, honest conversations about where AI fails and why, and a culture where it is safe to experiment without every experiment being treated as a career-defining risk.

This means investing in learning not as a checkbox exercise where you can say "we ran a training" and move on. It means creating genuine space for the questions that staff are actually sitting with but rarely feel safe asking out loud. Questions like: how is AI changing what my job requires of me? What should I be focusing on now that I am handing off certain tasks to AI? How do I transition into whatever this role is becoming? No certification course is going to answer those. But an organization that takes them seriously will come out of this transition with its people intact.

What Actually Works

So what does good look like? A few things stand out among the organizations that navigate this reasonably well.

They start with concrete problems, not with technology. They do not ask "how can we use AI?" They ask "where are we most bottlenecked, where are we most dependent on scarce expertise, or where are we most limited in our ability to scale?" and only then ask whether AI can help address that.

They invest in the middle layer. Not just the data scientists at one end and the beneficiaries at the other, but the program staff, the analysts, the subject specialists, the managers who need to understand AI well enough to use it responsibly and ask good questions about it. And by this I do not mean sending them off to collect more certifications. I mean situated, contextual learning tied to the actual problems they are trying to solve. This is where most training investments tend to fall short.

They build for replication, not just for novelty. The goal is not to have a cool AI solution. The goal is to have something that can be adopted, adapted, and scaled, ideally across geographies and different contexts.

And perhaps most importantly, they treat AI adoption as an organizational change problem, not a technology deployment problem. That one shift in framing, from "we are implementing a tool" to "we are changing how this organization thinks and works," changes almost everything about how you plan for it, resource it, and lead through it.

None of this is easy or fast. But the organizations that tend to get it right are the ones that resist the temptation to lead with technology. They lead instead with questions, with humility, and with a genuine long-term commitment to building the institutional muscle that makes AI actually useful. That path is slower and messier and far less exciting than announcing a new AI initiative. But in my experience, it is the work that actually matters.

The ideas, musings, and opinions in this post are entirely my own. I used generative AI to help pull them together into a coherent draft.
