Podcast episode transcript
Josh:
Artificial intelligence is transforming industries, but what does it mean for the nonprofit sector?
From streamlining processes to improving donor engagement, AI for nonprofits has the potential to revolutionize how organizations operate.
So, how can nonprofits get started with AI, and what steps can they take to approach this technology practically and ethically while ensuring it supports their mission without losing the personal touch?
I'm Josh with Anedot and welcome to Nonprofit Pulse, where we explore trends, insights, and resources that help nonprofits accomplish their mission.
On this episode, we are joined by Albert Chen on the topic of how AI can empower nonprofits to achieve their mission more effectively.
Albert is the Co-founder and CEO of Anago, which partners with resource-constrained teams to get more done, efficiently.
He has over 18 years of startup, corporate, and nonprofit experience in the US and Latin America.
Albert helps executives design and implement their AI roadmaps and provides AI training for both non-technical and technical teams. Hey Albert, thanks for joining us on Nonprofit Pulse.
Albert:
Nice to be here. Thanks for having me.
Josh:
Yeah, excited for our topic today. We're finally discussing AI for nonprofits.
It's been in the works on our end for a few months and maybe a few weeks on booking you and talking about AI and just excited for the conversation.
There's a lot of content out there around AI right now, but I think we're going to have a really helpful perspective for folks in the audience.
Josh:
So maybe just starting out, let's talk about what do we mean by AI in the context of nonprofit work and in nonprofit operations?
Albert:
Yeah. Well, before we jump into what it means for nonprofits, just a quick recap for your audience to level set on what AI is.
Because AI is a lot of things to a lot of different people.
And generally speaking, you can kind of put them in two buckets.
There's what we'll call traditional AI.
That's the more predictive analytics. Think of how Spotify knows what song you want to listen to next, Netflix knows what movie you want next, and Instagram has the product you didn't know you wanted. That's the predictive AI that has already existed for some time now.
Generative AI is what a lot of buzz is about these days, and that's the idea of generating content and generating words, pictures, images, sounds, all that.
Most of that new technology has been unlocked over just the past two years or so at this point.
So most of what I tend to focus on is the new wave of generative AI and the impact that has had on the nonprofit sector.
How AI can make an immediate impact in your nonprofit

Josh:
Yeah, I'm thinking about where can nonprofits kind of immediately plug in to AI when they're starting kind of their adoption journey?
What are some areas, as you, coach and consult with nonprofits around AI adoption?
What are some of those areas that AI can have an immediate impact with probably the least resistance?
Albert:
The way I would say AI can have the most impact on nonprofits is to break it down into three different categories.
There's the external, and that's what impacts your stakeholders, your target population, whoever you're trying to serve, right? That's the external use.
There's the internal use, which is what your team internally is utilizing tools to make certain workflows more efficient.
And then there's the personal side. And that's when you are just utilizing AI to help with different processes like brainstorming, feedback, things like that.
I would say where you'll find the most immediate impact is thinking about the more personal use cases first.
That requires much less coordinating, communicating, programming, all that stuff. It's learning how to personally utilize AI in your own day-to-day work.
And then secondary would be internal, right?
Working among your team to kind of figure things out: what are the workflows, what are the specific key decision points that can be AI-ified? And then starting there.
Then later on, once you feel comfortable with that, then thinking about, okay, how can we then utilize AI to reach out to those that we're serving?
Another very common use case that I think would help most nonprofits is figuring out things that they don't know, right?
Before, it was just you and Google, and you had to search endlessly, webpage after webpage, looking for an answer.
AI can now help you get to the answer much quicker.
There are tools out there like You.com or Perplexity that ingest dozens of websites at one time, right? And synthesize them into a single report for you.
It's things that would probably take like an analyst maybe a day or two to do. You can essentially get it done in 30 seconds.
And all of this, again, you've got to double check.
It's about getting you the information quickly. So that way you can start to sift through that much easier.
The other part that is vastly underutilized, now we hear about AI helping you produce documents, right? Drafts, things like that.
Never, never turn it in just like that.
If there's anything you leave with, it's this: always be skeptical of the output. You don't have to be skeptical of what it's capable of, but you've got to be skeptical of what the output is.
So, have it generate a good draft of things. Then you've got to take it to the finish line and say, okay, this is good enough.
But the part that is vastly underutilized, I think, is the critiquing of your own work.
Getting that feedback, the feedback that we wish we had but never actually heard from a real human coworker, the feedback we know is actually true.
That is something that I love to throw at AI and say, hey, here's a report, here's the draft, here's whatever this proposal is, a grant proposal.
Review it from the perspective of whatever stakeholder will actually be reviewing it. And sometimes you kind of have to ask it to be brutally honest, right?
These algorithms, these AIs, are trained to be a little bit nice to you.
And so if you want the honest feedback, sometimes you just gotta tell it.
Give it to me straight, give it to me with radical candor. And I hear some people put in: rip it to shreds.
Whatever works for you, it will give you that good feedback from various different perspectives.
Then at the very end of the day, you can ask it to then take the feedback and make revisions and suggestions on that document.
But remember, at the very, very end, the human always has to come in and check the work before you turn anything in and send anything out, because it's your reputation at the end of the day that's on the line, and you don't want to delegate your reputation to AI.
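The stakeholder-review workflow Albert describes can be sketched as a small reusable prompt template. Everything here, the function name, the wording, the stakeholder label, is illustrative, not a prescribed API:

```python
def build_critique_prompt(document: str, reviewer: str) -> str:
    """Assemble a prompt asking an AI assistant to critique a draft
    from the perspective of a specific stakeholder, with an explicit
    request for blunt feedback."""
    return (
        f"Review the draft below from the perspective of {reviewer}.\n"
        "Be brutally honest: call out weaknesses, gaps, and unclear "
        "claims before suggesting any revisions.\n\n"
        f"--- DRAFT ---\n{document}"
    )

# Example: a grant proposal reviewed as a program officer would read it.
prompt = build_critique_prompt(
    "Our after-school program reached 120 students last year...",
    "a foundation program officer",
)
```

A human still reads the critique and makes the final call, per the point above about not delegating your reputation to AI.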
Josh:
Yeah. So important. And, you know, I've had that suspicion ever since I began using these tools.
And it's blown my mind how many folks you can see on Reddit and different boards turning in stuff that has not been reviewed.
It just blows my mind, because hallucinations do happen. They are common.
I don't get as many AI tool hallucinations as a lot of my friends and folks I talk to. But they do happen.
Especially when dealing with stakeholders like donors or board members.
There may be a lot of grace inside your organization, but when it comes to the public or really VIP stakeholders, it's super important.
Albert:
I think something that will be helpful to explain, too, is that at the end of the day, generative AI as it currently exists right now is just really, really smart autocomplete, right?
And it's autocompleting based upon the patterns and trends that it's been trained on before.
And because we've given it tons and tons of data, books, Wikipedia, the internet, that is what it was trained on.
That is what it's learned to essentially replicate or regurgitate. And so that's good and bad, right?
So it's really good at picking up patterns that are very obvious. But for things that are very novel or new, it's not going to know.
It actually doesn't know things. It just knows patterns.
And so when you're dealing with very specific situations where you say, hey, come up with a grant proposal for this project.
If you don't tell it what the project is, it doesn't know that. It's going to just make something up.
The nice thing is, the current technology is built in a way that if you give it ground truth and source data, it will build off of that.
So it's always good to go from more data and synthesize, not the reverse. You don't want to give it a little bit and have it come up with stuff.
Unless you're doing brainstorming, right? And that's one way where we say, okay, we're trying to come up with new original ideas or ideas that kind of stretch our thinking.
Then sure, you can have it go in areas that don't exist.
If you're wanting to reduce hallucination, which is when AI makes things up, it's about giving it more information than it needs and having it synthesize that down into smaller bits of information.
How AI can help nonprofits identify donor prospects and enhance engagement

Josh:
So transitioning to fundraising and thinking through that area for nonprofit leaders, Albert, how do you see AI helping nonprofits in that area, especially around maybe identifying new prospects or optimizing their engagement with existing donors?
Albert:
Yeah. So I would say the current limitation of AI is that it's good at responding, it's good at words. It's not as good at doing things yet.
There are many tools that are being built for that. And that's still being developed.
So currently with the current limitations and current capabilities, I would say research is probably a good use for like finding new donors, right?
Having these tools go out and do quick, massive searches for you in different areas, looking at different potential funders, helping you explore different donor bases.
And then once you have that information, then the other area is hyper personalization, right?
You can essentially create a matrix of your programs, whatever your organization does and then specifically what this donor is interested in, and find the exact match of those two things and personalize whatever communications that connect the dots there.
That's something that would have taken a really, really long time to do on an individual basis.
Now you can do this at scale utilizing AI. The other thing that helps in donors, for fundraising, is grant proposals.
And so I work with both funders and grant writers and realizing that at the end of the day, it's about finding a good fit and a good match.
And so one of the things that you can do as a grant writer is to say, what are the requirements?
Take the requirements of a foundation, or grant proposal and match it with what you have in your own proposal with your programs and say, come up with a proposal that integrates all of those requirements and the funding priorities, all that together.
Now again, make sure that it is actually true.
What you don't want is AI coming up with a brand new program that sounds amazing to a funder, all to realize you actually can't pull it off.
That is definitely what you don't want to do. But it is helpful to mix and match what you're already doing with what this funder might actually care about.
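As a toy illustration of that requirements-to-programs matching: in practice an AI assistant would do this over free text, but the idea is to ground the proposal in programs you actually run. The program names and the keyword-overlap scoring below are invented for the example:

```python
def best_fit_program(funder_priorities: set, programs: dict) -> str:
    """Return the name of the existing program whose focus areas
    overlap most with the funder's stated priorities -- keeping the
    proposal grounded in what you actually do, rather than letting
    AI invent a program you can't deliver."""
    return max(programs, key=lambda name: len(programs[name] & funder_priorities))

programs = {
    "Youth Literacy": {"education", "children", "literacy"},
    "Community Food Pantry": {"hunger", "food security", "families"},
}
match = best_fit_program({"education", "literacy", "equity"}, programs)
print(match)  # → Youth Literacy
```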
Josh:
I love that, I love that, and so much of the grant process is laborious. I mean, it's so many requirements.
And then you have to write the copy, you need to approve the copy, you need to have the copy approved by various stakeholders, make sure you're saying the right things, and you can make those commitments.
And using AI in the grant process is really just a game changer.
I mean, I can't imagine the hours saved by being able to put in those requirements from the party that is giving the grant in an application, but also you can put in requirements from your stakeholders too, right?
So you're going to be there already 90% of the way before you get into that review cycle.
Albert:
Yeah. And the same applies for reviewing your grants.
So rather than just producing these documents, if your goal is to produce the best version of this document, have AI review it from the perspective of the foundation, from the perspective of the donor, the grant maker and the grantees.
Any perspective you can have, have it reviewed, have it critiqued, and continue to improve and improve on it.
How nonprofits can use AI to forecast donation trends and understand donor behavior better

Josh:
So earlier you mentioned predictive analytics. And I want to kind of dive in there a bit.
There's been a lot of discussion around predictive analytics and kind of in every industry, and in the nonprofit space, it's super popular right now because people want to have that insight of, okay, here's my data, what's going on?
What can I project and what can I speak to my board and our supporters about where we think we're headed?
And for nonprofit leaders, they typically don't have a background in that world.
But AI is really helping us kind of democratize that work, where nonprofit directors, executive directors can pick up that tool and get to work and really have a nice output.
So, thinking through that, how can nonprofits use predictive analytics when it comes to analysis from these tools to forecast donation trends or maybe even understand donor behavior better, when they have the data from their CRM or, even from manual entry?
Albert:
Well, the reality is with the current capabilities of large language models, this is still an area that needs to be developed a little bit further.
I wouldn't say it's quite there. There are some models that can now start to do data analysis for you if your data is clean.
So again, if your data is not ready to be used and processed, you're not going to get good results out of that.
So having it give you suggestions on what to do with the data is helpful. This kind of goes along with the broader data question.
But unless you have good data, you're not going to have good analytics.
Now, there is one significant benefit of large language models and generative AI that was not possible before. And that's being able to process qualitative data.
Before you had to have things in nice rows and columns and easy to understand buckets.
Now you can throw in huge blocks of text, and AI can analyze that and come up with themes and trends, all within just narrative form.
So that's something that's new. And I think they're still trying to figure out how do we leverage that and turn that into data that we can then use for predictive analytics.
But some of the more intense machine learning engines that we see out there, I would say that's kind of hard for smaller nonprofits to be able to leverage, unless you actually had a data scientist or data engineer who can navigate that.
Because what you don't want is to have it come up with a wrong conclusion. Then you'll be leading other people astray too.
Josh:
Yeah. And maybe even just thinking some base level analysis of how many of your donors, what percentage of your donors are turning into recurring givers?
What percentage of your donors are giving more year over year, or increasing their gift, thinking of kind of that value ladder in business.
But in nonprofits that applies as well right?
Thinking through charting revenue to program size and impact. So, okay, we launched three new programs and we had 300 new volunteers.
And our revenue was X before, and it's Y now, those types of things. And again, that's very flat and basic.
But if your columns and rows are clean, and we're about to have a question around data management and how to really keep your data clean, that's easy work for AI tools, giving you an output to include in a quarterly report or in a briefing for the board.
Albert:
Yeah, yeah. And as we go into that question, to me it all goes together.
I mean, one, I think of leveraging AI, if your organization does not have somebody who can handle that data for you, you can ask AI, what do I need to know?
Pretend you are that data scientist. What are the questions that you would be asking?
I would recommend folks to use AI to identify what data should be captured, how to capture that data, how to process that data, how to interpret the data, and then ultimately how to present the data.
Now, we're at the stage where AI is not going to do that for you, but it can tell you what should be done. And hopefully, if you are Excel savvy enough, right, you can kind of do this with AI.
Now again, I would still recommend having somebody who actually knows their way around it.
But the nice thing is AI can help guide you and boost your skills in areas that you haven't fully developed.
And the check on top of that is always have AI then review your work.
So imagine it's like having a coach walk alongside you as you're trying to do this.
Because if you have the data, then that's great. Then you can actually accomplish a lot of these things.
AI tools and platforms to help nonprofits get started

Josh:
So thinking about AI tools and really the list is endless.
We can go to all of our favorite AI directories online and see. But you're just mentioning Excel. It just brought to mind Microsoft Copilot now is within Excel. But there's so many tools.
I just want to pick your brain, Albert, on when you're interacting with nonprofit leaders, what tools do you recommend for those who are just kind of getting started and trying to get their feet underneath them?
Albert:
Yeah, I would say the two easiest ones to use right now are ChatGPT and Claude.
So OpenAI is the company that made ChatGPT and they have the latest models, they have the latest intelligence and latest features too.
And then Claude is the other main competitor right now and it's made by a company called Anthropic.
They both have been neck and neck as far as having like the smartest models. It all comes down to the use case.
And both of them have free options, so you should be able to use them. If you don't want to use those two and you're on the Microsoft tech stack, go ahead and use Copilot.
But I think that does cost money to turn on. And if you're on Google you can use Gemini. And some folks can use that for free.
But there's also a premium version of that. The other option out there is also Meta.
And Meta came out with a model called Llama. But if you just go to Meta.ai you can leverage that too.
So all of them right now, I would say, are smart enough to make a huge dent in the way that we work. Just to kind of give you a sense of how quickly this is advancing.
The state-of-the-art model from a year ago has now been replicated by an open-source Meta model that I can essentially run off of my computer.
And so that's how fast this all advanced in about a year, where the state of the art of last year now runs locally on my computer today. So it's crazy.
Other AI products that I would highly recommend. I use Fathom to do meeting note-taking.
I find that to work really, really well, if you feel comfortable with an AI bot entering into the room and sitting there.
If you don't feel as comfortable with that, there's another product called Granola, and that one sits on your computer.
You always have to ask permission if you're going to record, but that one transcribes a meeting immediately and just turns it into words and doesn't actually record anything.
But the whole idea is that you can be much more present during a meeting and then have the notes processed into takeaways, meeting notes, to-dos, etc.
Another product that I utilize every single day is called Shortwave. And Shortwave is AI on my email. And the idea is that this takes into context threads that I have with certain people.
So if I'm responding to somebody, it's not just making stuff up.
It's taking into the context all the other conversations I've had with this person and can get to a first draft pretty quickly, and pretty well.
Without that context, AI has no idea what it's talking about.
So the idea is that these tools have made it easier to give AI the context that it needs to give a good response and good draft.
And again, always, always check the final output. Don't send anything out without actually double checking it.
Keeping the human element in nonprofit work while using AI

Josh:
So thinking about some of the objections or concerns that nonprofit leaders and nonprofit team members have around AI and using AI in nonprofit work, one that comes to mind is this idea of kind of losing your soul, right?
Losing the heart of who we are by automating repetitive tasks or kind of boiling it down to this common denominator language, whether it's marketing deliverables or emails.
There's just a lot of concern around there because the nature of nonprofit work is a very people oriented work and impact.
And so how do we navigate that with AI? And maybe even talk about some opportunities around custom GPTs, around like leveraging who we are, how we speak, things we value.
But just first kind of big picture, is there really a risk here for nonprofits to kind of lose their voice and lose their human warm touch in using these tools?
Albert:
I think there is a risk. And I think nonprofits need to be very clear headed when they think about what they delegate to AI.
My recommendation is that anything that is more relational, whether if it's with your donors or the people that you're serving, keep that relational, keep that human connection. Use AI for the more administrative tasks that aren't as life giving. And focus on those things that require that human touch. Don't delegate that to AI. Nobody wants to talk to your AI bot. They want to talk to you.
And that's not even mentioning all the dangers that come with that part.
But as far as the actual work, I mean, there's no replacement for human relationship and human connection.
And so, yeah, I think nonprofits need to be very careful not to replace that.
It can be tempting to replace certain things because we feel like we can be much more efficient and scale it.
But you have to understand both from the perspective of your stakeholders, donors or target populations. What are they looking for? What do they expect?
Don't undermine the trust that you've worked so hard to build with them, because you can undermine it very easily if they feel like you are being disingenuous with them.
Josh:
Yeah. I want to talk a bit about custom GPTs because I think it's an interesting tool to really leverage who you are, leverage how you speak, leverage the things you value.
But in a tool that can automate some of those tasks you have, whether it's writing copy for a new page, it's in the voice of your organization or in the voice of your executive director.
There's just a lot of opportunity there around custom GPTs. Can you speak to that?
Albert:
Yeah. So custom GPTs is one of the products built by OpenAI.
And what it really does is allow you to put preprogrammed instructions into a chatbot. They made it a lot easier for folks to create these.
You actually don't even need to know how to program to make one.
You can just chat with the builder, and it will help you build a bot with those instructions baked in. So if there's a very repeatable task, this is a great use case for that.
So if I needed to, for instance, translate stories from the field from Hindi to English and then turn it into a summary, you can turn that into a custom GPT and have that as a repeatable task.
You can also imbue it with context, so that way you don't have to tell it everything every single time, or even copy and paste so much every single time.
You can just put in the parts that it needs to process this work. So it's a very quick and easy way to dabble into workflow automation.
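The "preprogrammed instructions" idea can be sketched with the widely used chat-message format: fixed instructions written once, with only the per-run input changing. The instruction text and function name below are made up for illustration:

```python
# Fixed instructions, written once -- the custom GPT equivalent of
# programming the bot by telling it what its repeatable job is.
INSTRUCTIONS = (
    "Translate the field story below from Hindi to English, then add "
    "a three-sentence summary in our organization's warm, plain voice."
)

def build_messages(field_story: str) -> list:
    """Combine the standing instructions with one run's input."""
    return [
        {"role": "system", "content": INSTRUCTIONS},
        {"role": "user", "content": field_story},
    ]

messages = build_messages("<paste this week's field story here>")
```

Each run only supplies the new story; the context and task definition ride along automatically, which is the workflow-automation win Albert describes.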
Josh:
Yeah. And one of the big key benefits is that you can train it, you can fine-tune it to where it has the context of your organization.
So if you're looking for, like I said, creating copy for a website or an event, or an email, you can feed it all of your previous copy that you have.
It will read it, it will understand how you talk about certain things, certain topics, the words you used.
And then when you prompt it, it will give you copy that is really in your own voice, which, again, is 90% there.
It's not totally there. You need to review it. You need to say, actually, I would say this a little bit differently here.
But it's a huge help, because it brings a lot of the best practices around copywriting, especially around calls to action and increasing conversions, things your nonprofit maybe doesn't have a lot of in-house expertise in, whether it's for a donation page, an event page, or a volunteer page.
That's a really cool benefit of custom GPT. And it grows with you, right?
So over weeks, over months, that you use it and you feed it more context. It's kind of dialing in to having that voice of your organization, knowing the particularities around your programs, around the things that you value and your donors value.
And it's a really cool opportunity.
Albert:
Now, I'll actually say that you don't want to feed it too much information, right?
Otherwise it starts to get confused, because it doesn't inherently know which of the content it's been given is better than the rest, right?
And so there's a whole process called retrieval-augmented generation, where it tries to process all the information you've given it and figure out what is best to leverage in what it's about to produce for you.
And so both Anthropic’s Claude and OpenAI's ChatGPT and the custom GPTs, they all follow a very similar process where they're just looking at documents that you've produced and you give it instructions to say, write it similar to what I've given you.
Now, the better the quality of the information you give it, think of it as refining it down to the best writing examples you have, the better the output it will produce. Versus if I gave it everything: it can still parse through that, but you're actually making its job harder.
And it kind of follows just general, garbage in, garbage out when it comes to data as well. Same applies for these large language models.
They're very capable. But the cleaner examples you can give it, actually the better it'll produce for you.
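A toy sketch of the retrieval idea Albert mentions: score your stored writing examples against the task at hand and keep only the best few, instead of dumping everything in. Real RAG systems use embeddings rather than word overlap, and the example texts below are invented:

```python
def top_examples(task: str, examples: list, k: int = 2) -> list:
    """Keep the k examples whose wording overlaps most with the task,
    so the model gets the cleanest, most relevant context."""
    task_words = set(task.lower().split())
    overlap = lambda text: len(task_words & set(text.lower().split()))
    return sorted(examples, key=overlap, reverse=True)[:k]

examples = [
    "Gala invite: join us for our annual fundraising gala dinner",
    "Volunteer handbook: shift schedules and safety procedures",
    "Donor email: thank you for supporting our fundraising goals",
]
picked = top_examples("draft a fundraising gala invite", examples)
```

Curating down to `picked` before prompting is the "best writing examples, not everything" discipline from the conversation above.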
Josh:
Yes. And when you're training them, the cleaner you can prompt it, giving it context on what each thing is, right?
"This is the invitation for our Christmas fundraising gala from 2023, which did X, Y, and Z."
That way it understands. Without getting into a long, nerdy RAG discussion here, there's a lot of opportunity there.
Affordable and accessible AI solutions for nonprofits

Josh:
So cool, Albert thinking about the cost, right? So with any technology adoption in the nonprofit space, one of the immediate concerns is costs and can we afford this?
You kind of alluded to this, and to pricing, earlier around Claude, ChatGPT, and others. But let's camp out here for a second.
What do you say to nonprofits who are concerned that, well, we shouldn't take a next step because we don't have it in our budget?
What tools would you point to and say, actually, there are many free tools, you should start using these?
And maybe even, what's the next step after exploring free tools, thinking about budgeting and how much nonprofits should be spending or allocating for next year?
Albert:
Yeah, I would say currently there's enough free tools out there for you to start to dabble and learn how they work and start to identify whether or not you can find it useful for your organization or not.
Like I said, ChatGPT, Claude, they all have free versions that are amazingly powerful already. All of them have pro versions that give you all the bells and whistles.
But you don't necessarily need them to get started. And for those who really don't have a budget for this, I mean, you'll get by plenty with the free versions.
There's also Gemini with Google, and Meta has Llama, and those are free, right?
Again, they all have kind of pro versions that you can get. The other tools I would say are free and are tremendously powerful are Perplexity and You.com.
Both of those are search-enabled AI chatbots. What they allow you to do is search lots of information and synthesize it very rapidly.
Both have pro versions: Perplexity Pro, and You.com's Research Mode.
You get three queries a day free and one query could be searching 140 sources for you and synthesizing it into a single report.
You probably don't need all three a day. I only use it a couple times a week. But those are great tools that are free right off the bat. Yeah.
So I would say what you might want to budget in the future is there might be some specialized tools for nonprofits or for specific departments that would make a tremendous dent in efficiency that could come in handy if your team knew how to leverage it effectively.
And you want to budget for that; I would say it's usually around $20 to $25 a month.
For many of these, you don't need licenses for everyone. You just need them for those who are going to be using it.
But start to think through these monthly SaaS subscriptions that can really help some folks.
Ethical considerations for nonprofits adopting AI for the first time

Josh:
Thinking about nonprofits that are just starting to adopt AI tools and are getting their feet wet, what are some of the ethical considerations that they should really bring in front of the team?
And especially around data privacy, it's an area we haven't talked about yet, in our conversation. But what are your thoughts there?
And what's your counsel to nonprofits as you work with them?
Albert:
Yeah. So when it comes to data privacy, every organization has their own risk tolerance on this.
If you are much more risk averse, I would recommend sticking to the tech stacks that you're already on.
So if you're using Microsoft then use Copilot. If you're on Google then use Gemini. That way you're not introducing another vendor, another product into the mix.
Your data should not be trained on, because that's not quite how it works anymore. All of these companies got a lot of backlash when it came to potentially training on your information.
The way they'll improve their models for the future is the thumbs up and thumbs down that a lot of these models use to ask you how the response was.
If you don't respond to those, it doesn't really know. It's just hoping that it did a good job.
Otherwise, if you were to use new products outside of that tech stack, make sure you consult your head of IT to make sure that everything is safe as far as the terms of service and privacy policy.
Because the last thing you want is to give your data up to some startup that then sells it to another person.
And so be very careful of that.
So I would tend to stick with the bigger, more established tech companies, because at least there are a lot more eyes on them.
Now, the other thing that you really need to be careful of is algorithmic bias. And like I said before, all these AI models were trained off of the internet.
And so the values that it picks up are also found on the internet.
One interesting anecdote that really demonstrates this: one piece of advice I give people when it comes to prompt engineering is to be nice to the AI when you're giving it instructions or asking it for something.
And it's not because it has feelings and it's not because it's sentient or any of that stuff, but it's because it's modeling and it's mirroring what it learned on the internet.
And on the internet, those who were jerks to other people don't get as nice a response. And those who are earnest and honestly just looking for help and nice about it tend to get better responses from other people on the internet.
And so there's been studies that show that the AI models kind of model this behavior. And so if you just ask nicely, you tend to get better responses.
Josh:
I've seen that research. It's very interesting, and I joke with friends about, you know, being nice to your AI overlords.
But I actually do practice this with my own personal ChatGPT account on my phone.
And the way it mirrors you, it's like I'm training it to be a very kind and warm helper on the other end.
And it's been interesting to watch.
Albert:
Yeah, yeah. The other thing to be aware of is when it comes to making recommendations or decisions, right? It will have a bias in it.
And so what you can do is actually steer and give it instructions on how you want it to think through certain decisions or how you want it to evaluate certain recommendations.
That way it's not just defaulting to what it thinks is a good job or what it thinks would be a good candidate for X, Y, Z.
Tell it how you would make that decision so it aligns to that. It's pretty good at following that if you're very explicit with your directions.
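The steering Albert describes can be sketched in code. This is a minimal, hypothetical example of making the evaluation rubric explicit in a system prompt rather than letting the model default to its own notion of a "good candidate." The criteria, the application text, and the `build_review_messages` helper are all illustrative; the resulting messages would be passed to whatever chat API your stack provides.

```python
# Sketch: spell out exactly how a decision should be made, so the model
# follows your criteria instead of its own defaults.

def build_review_messages(application_text, criteria):
    """Compose chat messages that make the evaluation rubric explicit."""
    rubric = "\n".join(f"{i + 1}. {c}" for i, c in enumerate(criteria))
    system_prompt = (
        "You are helping a nonprofit review volunteer applications.\n"
        "Evaluate ONLY against these criteria, in order of priority:\n"
        f"{rubric}\n"
        "Do not use any other factors. For each criterion, quote the "
        "evidence from the application that supports your assessment."
    )
    return [
        {"role": "system", "content": system_prompt},
        {"role": "user", "content": application_text},
    ]

# Hypothetical usage with made-up criteria:
messages = build_review_messages(
    "I have five years of experience coordinating food drives...",
    [
        "Relevant hands-on experience with community programs",
        "Availability for at least 4 hours per week",
        "Alignment with our mission statement",
    ],
)
print(messages[0]["content"])
```

The point is the pattern, not the wording: an explicit, numbered rubric plus an instruction to ignore other factors is what keeps the model from quietly substituting its own biases.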
But the counter to that is, if you're not sure what biases might be there, ask it: how might I be biased?
How might the AI be biased when it comes to making these decisions or recommendations?
And then the other layer on top of that is to have the AI call you out on your own biases.
You may not even be aware that you have a bias when it comes to evaluating something or making a decision.
And so you can say, hey, here's the situation, here's the decision that I'm coming up with.
Is there anything that could be problematic about this? And it'll tell you.
Josh:
Yeah, I love that. And there have been some viral prompts recently around asking your personal AI tool, "Knowing what you know about me, what is something about me that I may not know about myself?"
And there's all sorts of variations of this, right?
The longer history you have with that LLM (I use ChatGPT), the more it's going to be in tune with who you are and understand your concerns, the things you ask it, the things you tell it.
But it's very interesting just how your AI tool can grow with you.
And I think something that's not mentioned a lot is that the more you interact with it, the more you tell it who you are, what your team is, and what your challenges are as a leader, or with the market or area your nonprofit is in, the more context it maintains to bring to bear on the questions you ask it a week from now or a month from now.
And weaving back to your bias point, that's a great opportunity to ask it, where might I have bias in my thinking here? Because it knows you at a deeper level than it did the first day you logged in.
Albert:
Yeah. And just to be very clear, AI in and of itself knows nothing about you, unless you happen to be a famous person.
But each conversation that you produce is starting from scratch again, unless you are using a product like ChatGPT with memory turned on.
So this is a very specific feature that will try to remember tidbits about your preferences or things about you in order to give itself context for future interactions.
At any point in time, you can have a new interaction, new conversation where it knows nothing about you or anything of the past.
Or you can just turn that feature off completely and it'll be like talking to a brand new chat bot every single time.
And so it all depends on, do you want convenience or do you want personalization?
The future of AI in nonprofits

Josh:
Thinking of trends, Albert, where do you see AI headed in the future?
Where do you see the next year or two years? Are we going to see AGI soon? What's going to happen?
Albert:
Well, I think immediately what we're going to see is a lot of domain-specific products being built. Startups were getting a lot of flak for just being "GPT wrappers," as they were called,
in that they were just products utilizing OpenAI's technology. What we found is that it's actually very useful.
And even though somebody like me could probably go and do all this prompt engineering on my own to get the same results, it's so much easier to just press a button to have it done for you.
To have somebody who is a domain expert figure that out and build it and just say, hey, just press this button or just throw in your document here,
I've already figured out all the prompting that needs to be done. So that's the first step. We're going to get lots of products that are being built for specific domains.
Next around the corner is AI agents. An AI agent is the step from a chatbot to an AI that can actually do things.
And there's a difference between just responding to you versus taking action.
And so AI search tools like Perplexity and You.com are a great example of this: instead of just giving you a response from memory, they go and look things up on the internet for you, then report back and synthesize it into a report. That is multiple steps that the AI is taking.
So we're going to start to see more and more of those types of tools where it will take multiple steps, take actions, maybe even program things for you. And then in the future we'll start to see whole teams of these AI agents working together to accomplish a goal.
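The agent pattern described above can be sketched as a toy loop: rather than answering from memory in one shot, the system takes multiple steps, gathering material and then synthesizing it. The `search_web` and `summarize` functions below are stand-ins (a real agent would call an actual search API and an LLM); only the multi-step structure is the point.

```python
# Toy sketch of an "agent": take actions (search), then synthesize a report.

def search_web(query):
    # Stand-in for a real web-search tool call.
    return [f"Result about {query} #1", f"Result about {query} #2"]

def summarize(snippets):
    # Stand-in for an LLM call that synthesizes the gathered material.
    return "Synthesis of: " + "; ".join(snippets)

def research_agent(question):
    """Run a fixed two-step plan: gather sources, then write a report."""
    steps = []
    snippets = search_web(question)   # step 1: take an action in the world
    steps.append(("search", snippets))
    report = summarize(snippets)      # step 2: synthesize what was found
    steps.append(("summarize", report))
    return report, steps

report, steps = research_agent("grant deadlines for arts nonprofits")
print(report)
```

The "teams of agents" Albert mentions are essentially many such loops coordinating, with one agent's report becoming another agent's input, which is why the orchestration is harder to build than it sounds.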
That's still a little bit down the line and there's a lot of things that need to be worked out before we get to that.
But folks who are building can already see the capabilities there and the potential, but it's a lot harder to build than it sounds.
Closing thoughts

Josh:
Thinking of resources for our audience and those who are just getting started or want to learn more, Albert, what would you recommend for nonprofit leaders listening?
Albert:
I would say the best way to get started is really to just try it out, and to talk to other folks that you know who have tried it out, who are using it, who have implemented it.
I think the research shows that more people are using AI than they care to admit. And so talk to your friends, talk to your coworkers and see who's actually been using it.
Find use cases that you care about, and start testing it with that.
And what actually works well is to pick a task where you already know what the best answer, or a really good answer, would be, and test it with that too, because then you start to push it to its limits and see where it's not so good.
There are a lot of AI influencers, as they're called, on LinkedIn that you can follow for tips and tricks on the latest prompting techniques.
But by and large, prompting itself is getting easier and easier as these models are getting smarter and smarter, so you don't have to worry about that too much.
But if you want to build anything serious, especially if it's external facing, then you do want to bring in folks who really know the ins and outs of how these models work.
Josh:
Last question for us, Albert.
If you are standing on stage in front of a thousand nonprofit leaders and could share one thing, one sentence around AI for nonprofits, what would you say?
Albert:
I'll leave you with this. AI is a tool, and a tool can be used for many things.
We need to make sure that we don't worship the tool because it's a tool, right?
It's not the same as the God who created this earth. We need to make sure that we use the tool for good and not for evil.
So how we use the tool, the tools that we choose to use all matter. And so be mindful of that. And always use it with humility and use it with curiosity.
Josh:
Love that. Albert. This has been so helpful. I hope it's helpful for our audience.
And there's so much more we could talk about with AI and nonprofits. But thanks so much for coming on the podcast.
And, maybe we can reconnect towards the end of next year and kind of touch base on what's happened in AI over the past few months.
Albert:
Sounds good. Thanks for having me.
Josh:
Hey, thanks for listening.
If you enjoyed this conversation, please share or leave us a rating and review wherever you listen to podcasts.
Also, head on over to Nonprofitpulse.com to sign up for our monthly newsletter, as well as check out all the links and resources in the show notes. We’ll see you next time.
