An overview of the Chat Bot project
A Veteran Chat Bot is a computer program designed to provide support and assistance to veterans through a natural language interface. It is a type of artificial intelligence-based conversational agent that has been trained on the language, knowledge, and skills of a counsellor, with specific fine tuning on appropriate responses for military veterans. Think of a highly focussed ChatGPT and you get the idea. The chat bot is designed to provide a personalised experience for veterans, allowing them to access information, ask questions, and receive answers in a natural conversational manner, using language that is appropriate to kiwi veterans and information that is NZ specific. It's your phone chat, with a computer doing the other side of the conversation.
The current problems for veterans requiring access to support are simple to enumerate but difficult to overcome with the present approach. The issues are accessibility, timeliness, effectiveness and cost. In a system struggling to manage the major mental health presentations, with no specialist providers for veterans, access is a major issue. Our anecdotal experience is that at least 80% of our presentations are able to self recover with the right information and support. Unfortunately, you don't know which will be which when the veteran first presents. That means all presentations have to be assessed and triaged, so that those who can self recover are given what assistance they need while the remaining group are referred on for further support.
What if there was a portal that the veteran could access 24 hours a day, that would provide companionship and support, would triage the presentation, send a note through HealthLink to the GP record (where the veteran has given consent for this to happen), and which leads the 80% through something like our self help course? That would be a game changer. Imagine if that portal could also do risk analysis and connect veterans to help when they need it.
The purpose of a veteran chat bot is to provide veterans with access to resources, information, and support in areas such as mental health, stress reduction, life skills and companionship. It can be designed to give veterans a platform to connect with other veterans, allowing them to share experiences and receive advice from one another while being moderated by the chat bot. Additionally, the chat bot can provide personalised information tailored to the individual veteran's needs, making it easier for them to access the resources they need, when they need them. It can triage the veteran to further support if needed. It can add the presentation to the GP notes. By providing veterans with access to these resources and support, the chat bot can help to improve the quality of life for veterans.
The development options that we have explored are many, and we have developed a modular approach to implementing such a service. The main components we are looking at are covered in the sections below.
Can an AI really do counselling, and would veterans accept it?
This is a common question. Some of the text above was written by the chat bot we have been experimenting with, some was written by a human author - can you tell which is which? A natural language chat bot is capable of undertaking very rich and nuanced conversations.
What follows is a question and answer session with the AI chat bot, to give a flavour of how far this has come. I asked the questions and these are the unedited responses.
Question: How can an AI provide mental health support?
Answer: AI can provide mental health support in a variety of ways. AI-enabled chatbots can be used to provide users with a safe space to talk about their mental health and to provide advice and resources. AI can also be used to provide personalized mental health interventions, such as cognitive behavioral therapy, tailored to the individual’s needs. AI can also be used to detect signs of mental health issues in users’ social media posts and to provide early intervention. Finally, AI can be used to provide mental health screenings and to monitor users’ progress over time.
Question: I am feeling sad, what should I do?
Answer:
1. Take some time for yourself. Do something that makes you feel relaxed or happy. This could be reading a good book, taking a walk, listening to calming music, or even just taking a hot bath.
2. Talk to someone. Reach out to a friend, family member, or even a therapist and let them know how you're feeling. Talking about your emotions can help you feel more in control and make you feel less alone.
3. Practice self-care. Make sure you're eating healthy, exercising, and getting enough sleep. These activities can help to improve your mood and boost your energy levels.
4. Do something creative. Expressing yourself through art, music, writing, or photography can be a great way to cope with your sadness.
5. Get out of the house. Going to a new place or doing something different can help to break up your routine and give you a fresh perspective.
Question: How can an AI do CBT counselling?
Answer: Behavioral Therapy (CBT) is a type of psychotherapy that focuses on changing patterns of thinking and behavior in order to improve mental health. AI can be used to help people with CBT counseling by providing automated, personalized support and guidance. AI can be used to provide personalized advice and feedback based on an individual's unique situation and needs. AI can also be used to track progress and provide data-driven insights into the effectiveness of the treatment. Additionally, AI can be used to provide virtual support, such as chatbots, which can provide answers to commonly asked questions, help people stay motivated, and provide emotional support.
These responses are from an AI that has not been fine tuned for a veteran population, or even a New Zealand one. Imagine how much better a fine tuned version would be.
At No Duff we suspect that the barrier to adoption will not be the veterans; it will be a systemic resistance based upon perceived risk. We exist in a legal system that must be able to apportion blame when something goes wrong. Who do you blame if it's software?
This picture was generated by the DALL-E AI - the prompt was a painting in the style of Van Gogh showing a veteran watching the sunset.
How does an AI chat bot work?
Almost everyone has heard of ChatGPT by OpenAI and what it can do. However, there are literally hundreds of models on the market now. The big, well known models such as OpenAI's ChatGPT represent massive investments. The latest large models now cost on the order of $1B to produce. That is clearly not an option for us.
When we look at the project, it actually has a series of parts.
Human speak, with all its local slang and kiwisms thrown in, is hugely complex. We need something that understands human speak - a natural language processing (NLP) module. NLP is a branch of AI that focuses on the use of natural human language for various computing applications. NLP is a broad category that encompasses many different types of language processing tasks, including sentiment analysis, speech recognition, machine translation, text generation, and text summarisation and categorisation. We need all of that to make the chat bot work.
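To give a concrete feel for one of those building blocks, here is a minimal sketch of a sentiment analysis task in Python, using the open source Hugging Face transformers library. This is purely illustrative - it is not necessarily the tooling we would deploy.

```python
# A minimal sketch of one NLP building block: sentiment analysis.
# Assumes the open source "transformers" library (pip install transformers);
# for illustration only, not necessarily the stack we would use.
from transformers import pipeline

# Downloads a small pre-trained English sentiment model on first run.
classifier = pipeline("sentiment-analysis")

print(classifier("Gutted, mate. Haven't slept properly in weeks."))
# e.g. [{'label': 'NEGATIVE', 'score': 0.99...}]
```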
GPT stands for generative pre-trained transformer. What that means is the hard work of training the AI is done up front by someone with lots of money, and what we get is a working generalised NLP AI - which we rent time on. As an example, OpenAI's GPT-4 is a general-purpose NLP system. Although it looks really clever, it actually just does one thing very well - it predicts what comes next based on the text and context that is provided as input. With the right architecture and enough data, that one thing can handle a stunning array of language processing tasks.
The large foundation models have lots of knowledge; they are trained on vast amounts of information. However, they are not intelligent or wise. There is a difference between knowing lots and knowing what to do with that knowledge. We therefore need to take that into account when we design the chat bot.
Prompts are how you get a GPT to do what you want. A prompt is what you type in to tell the AI what you want - think of it like programming in plain English. The more focussed the prompt, the more focussed the answer. The prompt can be short or long - the limit on our test setup is currently 200,000 tokens, which equates to roughly 140,000 words for the prompt and the answer combined. The more you tell the AI about what you want, the better the AI can respond.
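To make that concrete, here is a rough sketch of what sending a focussed prompt to a rented foundation model looks like. It uses Anthropic's Python SDK, since that is the family our test setup runs on; the model name and the prompt wording are illustrative assumptions, not our production configuration.

```python
# A sketch of sending a focussed prompt to a rented foundation model.
# Uses Anthropic's Python SDK (pip install anthropic); the model name and
# prompt wording are illustrative assumptions, not our actual setup.
import anthropic

client = anthropic.Anthropic()  # reads ANTHROPIC_API_KEY from the environment

response = client.messages.create(
    model="claude-3-5-sonnet-latest",  # assumed model, for illustration
    max_tokens=300,
    # The system prompt is where the focus comes from: who the bot is,
    # who it is talking to, and what tone and boundaries it must keep.
    system=(
        "You are a supportive companion for New Zealand military veterans. "
        "Use plain kiwi English, be warm and practical, and encourage the "
        "person to seek professional help for anything serious."
    ),
    messages=[{"role": "user", "content": "I am feeling sad, what should I do?"}],
)
print(response.content[0].text)
```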
So you ask a question and you get an answer. That lends itself perfectly to a text based chat interface. The interface that would be most acceptable to the veteran population, and most affordable for us, would be a standard chat message interface like you get on your phone - it's what we use now to support many of our veterans. All that would change is who is on the other end. The front side would not be complex to program - it's a simple web chat interface. However, the back end at the NDLC server would be considerably more complex.
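For a flavour of the relay in the middle, here is a minimal back end sketch using the Flask web framework. Treat it as a cartoon: the real back end would wrap this core in accounts, consent handling, moderation, risk analysis and logging.

```python
# A minimal sketch of the back end relay between a web chat page and the
# model. Assumes Flask (pip install flask) and the anthropic client from
# the previous sketch; the real NDLC back end would be far richer.
from flask import Flask, request, jsonify
import anthropic

app = Flask(__name__)
client = anthropic.Anthropic()

@app.route("/chat", methods=["POST"])
def chat():
    # The phone's chat page POSTs {"message": "..."} to this endpoint.
    user_message = request.get_json()["message"]
    response = client.messages.create(
        model="claude-3-5-sonnet-latest",  # assumed model, for illustration
        max_tokens=300,
        system="You are a supportive companion for NZ veterans.",
        messages=[{"role": "user", "content": user_message}],
    )
    return jsonify({"reply": response.content[0].text})

if __name__ == "__main__":
    app.run(port=8080)
```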
Training the Chat bot
In order to successfully design for the veteran demographic we could approach the problem in a couple of ways - which in reality are the basic choices for all machine learning. We could study the veteran demographic ad nauseam, scrape all of the psych material from the web and then build a data model, which is then used to train the AI. It's tedious, labour intensive and extremely costly. Or we could have the AI study the published literature that we select, then work with the demographic directly and have the model train itself - which means we would put humans in the loop to fine tune what it has learned and moderate its output. This approach is faster and cheaper, and ultimately likely to be more accurate. That approach is called reinforcement learning from human feedback.
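The plumbing for that human-in-the-loop approach starts with something mundane: capturing every exchange alongside a moderator's verdict, so the good examples can feed the fine tuning. A minimal sketch, with invented field names:

```python
# A sketch of the human-in-the-loop plumbing: every exchange is stored with
# a moderator's rating so the good examples can feed later fine tuning.
# Field names and the JSON-lines format are illustrative assumptions.
import json
from dataclasses import dataclass, asdict

@dataclass
class ModeratedExchange:
    question: str
    bot_answer: str
    moderator_rating: int   # e.g. 1 (unsafe/wrong) to 5 (exactly right)
    moderator_rewrite: str  # preferred answer; empty if the bot's was fine

def log_exchange(record: ModeratedExchange, path: str = "feedback.jsonl") -> None:
    """Append one moderated exchange to a JSON-lines training log."""
    with open(path, "a") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_exchange(ModeratedExchange(
    question="I can't sleep since I got back.",
    bot_answer="That sounds rough. Can you tell me more about your nights?",
    moderator_rating=5,
    moderator_rewrite="",
))
```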
We have an early model up and running on our test server. It still requires a great deal of work before we can field this with any confidence, but even this early test is really incredible in what it can do.
If we build and run our own model we are not restricted by the lawyers working for someone else, and we are not paying per query. But we have to do all the hard work up front. In the long run it's a much more cost and clinically effective option to use a large foundation model that someone else has paid for, and then augment it with the extra bits we need. The cost per interaction is low, the upfront costs are ok and it simplifies the technological problem, but it does take upfront work and accepting a certain level of risk.
Because we develop the model and its interface, we can make the implementation platform agnostic - it's blind to what device you are using, and it will work so long as your device can run a modern browser. No special tools required, and no need to download anything unless you choose to. One of the issues to keep in mind is the demographic that this is being developed for - veterans, who can be somewhat suspicious of anything that gets their information.
The aim, at some point in the future, is to run our own instance on our own hardware in order to provide the best quality of service, direct our own research, and increase the speed, efficiency and security of the system - but that is a long way off.
Security and privacy
A common concern amongst veterans is privacy. This is actually easier than it seems. It is possible to offer a generic 'anonymised' veterans account for the truly paranoid - the downside to this is that all you will get are generic vanilla answers. No identifiers of any kind are collected or stored. The chat bot won't remember you the next time, so it won't learn. It's like a therapist with dementia, but it can be done.
The ideal is a tiered approach. The next step up would be an account with only enough information to enable the chat bot to remember you next time. Just the basics - it allows less repetition. That information would be stored on your device. Change devices and the chat bot has no memory. You are responsible for the security of that information - which for most people is not a great option. Lose your phone and someone has your life history in detail.
Next in the scale would be the information stored both on the device and by the chat bot in an encrypted format - which is where we start to get to the real magic of the AI approach. Allowing the AI to remember gives you a therapist with a memory - which means it can tailor everything to the specific veteran and where they are in the therapy journey. If the veteran goes all in, allows access to the full spectrum of their data (including medical) and feeds back to the chat bot, then the AI can be trained for that specific individual - imagine a therapist trained specifically for your needs that is on call for you alone 24/7.
The security problem is then ours. No sharing without the veteran's permission - no matter what. That's the hard part. There are ways to technically enable this, so that if we were compelled to give up the server side data it would be meaningless rubbish, because the key for it is on the client side - the veteran's phone. Would we need to go that far? That all depends on what our veterans want.
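To show the idea is practical, here is a toy sketch of client-held-key encryption using the Python cryptography library. The key lives on the veteran's phone; the server only ever sees ciphertext. This illustrates the principle, not our actual security design.

```python
# A sketch of client-held-key encryption: the server stores only ciphertext,
# and the key never leaves the veteran's device. Uses the "cryptography"
# library (pip install cryptography); an illustration, not a final design.
from cryptography.fernet import Fernet

# Generated and kept on the veteran's phone - never sent to the server.
client_key = Fernet.generate_key()
f = Fernet(client_key)

# What the server receives and stores:
ciphertext = f.encrypt(b"session notes: discussed sleep, triggers, whanau")
print(ciphertext)  # meaningless rubbish without the key

# Only the device holding the key can turn it back into the record:
print(f.decrypt(ciphertext).decode())
```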
Safety and reliability
An AI doesn't actually think - not in the sense of a human being, anyway. What it does is make probability based predictions of what comes next, based upon what you input and what it already knows. That means the quality of the data it was trained on, and of the fine tuning, determines what in medical speak is called the sensitivity and the specificity: how well it detects what is needed, and the correctness of the answer it gives. The reliability - the ability to give an appropriate answer to a question, and similar answers to the same problem phrased differently - is again determined in large part by the investment in the fine tuning, that hand holding by the human in the loop so that it learns what is a good fit.
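That reliability is something we can measure rather than guess at. Here is a crude sketch of a paraphrase consistency check; get_reply() is a hypothetical stand-in for the model call shown earlier, and the required points list is invented for illustration.

```python
# A crude sketch of a reliability check: ask the same problem phrased
# different ways and confirm each answer hits the points it must hit.
paraphrases = [
    "I am feeling sad, what should I do?",
    "Been feeling pretty low lately, any ideas?",
    "What helps when everything just feels grey?",
]
required_points = ["talk to someone", "sleep"]  # invented checklist

def get_reply(question: str) -> str:
    # Hypothetical stand-in for the model call from the earlier sketch.
    return ("It might help to talk to someone you trust and to keep a "
            "regular sleep routine.")

for q in paraphrases:
    answer = get_reply(q).lower()
    missed = [p for p in required_points if p not in answer]
    print(q, "->", "OK" if not missed else f"missing: {missed}")
```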
Some of the limitations of a chat AI are:
1. Lack of common sense: A ChatGPT bot, like any other language model, lacks a common sense understanding of the world. It can generate coherent text, but it cannot understand the context of the text it generates, which can lead to nonsensical or biased output, or just plain weird shit.
2. Bias: The model has been trained on a large dataset of text data, which can contain biases. That dataset at some point will have included a large chunk of the internet - so think of every freak out there having had a say. This can result in the model generating text that is biased in certain ways, such as gender bias, which can be problematic for certain applications. However, it also means the chat bot will 'understand' the weird shit that some of our veterans come up with. One way to manage some of this risk is to have input and output filters - error checks to make sure the output makes sense and is appropriate for the audience (see the sketch after this list).
3. Limited understanding of real-world concepts: The model has been trained on a large dataset of text data, but its understanding of real-world concepts is limited to the data it has seen. It may not be able to understand or generate text about unfamiliar or rare concepts. Hence it needs to be specially trained to handle veterans - a process termed fine tuning. It also needs to be set up to say 'I don't know', rather than simply finding the nearest fit.
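Here is the sketch promised above: the simplest possible output-side filter. The red-flag phrase list is invented for illustration - a real filter would be clinically reviewed and far more sophisticated, and would always err towards human escalation.

```python
# A sketch of the simplest output-side error check: scan a draft reply for
# red-flag content before it reaches the veteran, and escalate rather than
# send if anything trips. The phrase list is invented for illustration.
RED_FLAGS = ["harm yourself", "you should just", "guaranteed cure"]

def safe_to_send(draft_reply: str) -> bool:
    """Return True if the draft passes the crude red-flag check."""
    lowered = draft_reply.lower()
    return not any(flag in lowered for flag in RED_FLAGS)

draft = "There is a guaranteed cure for PTSD."
if safe_to_send(draft):
    print(draft)
else:
    print("Reply held for human moderation.")  # fail safe, not fail silent
```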
All expert systems make mistakes - including human medical experts. However, some mistakes are more important than others. The fine tuning by human experts - hand holding the AI responses so that it learns what is right - is not unlike how our medical expert apprenticeship model works. The safety of the system is therefore based in large part on the investment in the human moderation.
Chat bot counselling - what can it really do?
A chat bot can provide counselling for PTSD, anxiety, depression and relationship difficulties, and also act as a life coach. The chat bot can help the patient identify their triggers and provide coping strategies to manage their symptoms. It can also provide relaxation techniques, such as deep breathing and mindfulness, and art therapy (yes, it can do art, and way better than I can - see below for what I mean) to help the patient manage their anxiety. The chat bot can also provide conversational information about PTSD and other mental health problems, and the resources available to help the patient manage their symptoms. Additionally, the chat bot can provide emotional support and encourage the patient to seek professional help if necessary.
The chat bot can provide structured cognitive behavioural therapy and other therapies within the structure of the NDLC program - the self help course can be a guided course with a one on one tutor.
Can an AI chat bot show empathy?
Empathy was thought to be a uniquely human quality; however, recent research has shown it occurs in other species too. But what about an AI - how would an AI show empathy? Let's ask the chat bot - this is the actual answer from our test chat bot:
Answer: A chat bot can show empathy by using empathetic language, such as phrases like "I understand," "That must be hard," and "I'm sorry to hear that." It can also use supportive language, such as phrases like "I'm here for you," "Let me help," and "What can I do to make this better?" The chat bot can also provide helpful resources, such as links to support groups or articles about managing difficult emotions.
The all-important question - how much does this cost?
The cost is based upon the processing time used and the particular model - mainly how many parameters it has. There are a number of models developed by OpenAI, with ChatGPT based upon the most complex of them. Different providers have different pricing structures around which model you use and how many tokens you use per billing period.
Our current test model uses Anthropic's Claude models. The reason we selected this set is that Anthropic's approach has attempted to build safety in from the ground up.
Each word and question consists of a number of parts. Each part is a token. A typical question breaks down into approximately 10 to 20 tokens worth of processing. A typical brief answer will be around 70 tokens. For the sake of easy maths, say each question and answer couplet is 100 tokens. A typical 10 cycle interaction would therefore be 1000 tokens. With a specially trained AI on a commercial plan, that would be of the order of 5 cents. That's right - a typical support session would cost 5 cents - and that is NZ cents, not US cents.
The cost issue becomes one of scale - 5 cents isn't much - let's say it's $1 per week per veteran to make the numbers easy again. Let's also say that in any given week 1% of the veterans use the service once - that would equate to around $500 per week, or roughly $26,000 per year - which isn't huge. The second problem is the rate of uptake - let's say every veteran tries it in the first year, but only for a week (unrealistic, I know, but it gives the idea). That would be $50k in the first year. Again not insurmountable - half the cost of a single counsellor for a year. When you think about it, that's pretty damn awesome. The arithmetic is worked through in the sketch below.
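For those who like to check the maths, here is the costing above worked through in a few lines of Python. Every figure is the rough illustrative assumption from the text, not a quote from any provider.

```python
# The costings above, worked through. All figures are rough illustrative
# assumptions from the text, not quotes from any provider.
nzd_per_1000_tokens = 0.05        # ~5 NZ cents per 1000 tokens
tokens_per_exchange = 100         # question (~10-20) + brief answer (~70), rounded
exchanges_per_session = 10

session_tokens = tokens_per_exchange * exchanges_per_session  # 1000 tokens
session_cost = session_tokens / 1000 * nzd_per_1000_tokens    # NZ$0.05

veterans = 50_000                   # implied by $500/week at 1% weekly uptake
weekly_active = veterans * 0.01     # 500 veterans in any given week
weekly_cost = weekly_active * 1.00  # NZ$1 per active veteran per week
annual_cost = weekly_cost * 52      # ~NZ$26,000

print(f"per session: NZ${session_cost:.2f}")
print(f"per week:    NZ${weekly_cost:,.0f}")
print(f"per year:    NZ${annual_cost:,.0f}")
```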
The general AI model needs to be focussed. Veterans are a special group with special risks. They exist in a somewhat unique population - kiwis. That means the AI needs to be fed lots of material about New Zealand in general, about mental health, about veterans, and specifically about New Zealand veterans. That information needs to be prepared, and then the model needs to be trained on it. The AI can help with preparing that material, but it's still quite labour intensive - you don't want to feed it rubbish, because this is its reference material. A rough estimate for this would be 1500 hours of human time and about 72 hours of compute time on a commercial computing set up. That's not cheap, but it's not a deal breaker.
But there is a problem - there is fuck all information published on kiwi veterans. That means it needs to learn from the source, the veterans themselves - it's the second option we talked about above, which means having humans in the loop for the AI to learn from.
The next step, therefore, is that the machine needs to be trained by interacting with veterans - and that needs to be human moderated and takes time. That is typically about 1500 hours' worth on a select group to get going. Add to this human moderation for the initial deployment period, and also the front side app development, and the realistic deployment cost is approximately 4000 hours of work. Sounds like a lot. To put that into context, that is the cost of two counsellors for a year.
A veteran chat bot has the potential to significantly change the veteran support landscape. The key benefit is simple: we believe that a chat bot mentor could provide the 80% of veterans who need straightforward help with the support that they need. That would allow the meat mentors to put all of their effort into the 20% who are really struggling. It allows us to use our limited resource to best effect.
We are continuously updating our courses and adding new information, so check in often. Let us know what you think we should do - what is it that would make a difference? Leave us comments on what you like, what you think is bullshit, and what you think needs to be here.
The No Duff Learning Centre (NDLC) is a project of the No Duff Charitable Trust (NDCT).